Information is ubiquitous. It is one of the most fundamental properties of the universe. Throughout our lives we come across randomness that we are unable to quantify or make sense of. What is the reason for the existence of randomness in the model of the universe? Does this randomness even exist, or is it the product of our incomplete set of information? Why should ambiguity exist in a model? Is it the failure to include all the variables in the model, or is it our limited understanding of the nature of the variables and the model itself? If and when perfect information becomes available to the model, can the model be deterministically predictive, or will it still retain a certain amount of ambiguity and uncertainty in its determination? Should we come to know the nature of the most fundamental particles of the universe, can we discover a model that fits all our purposes and all our theories, or will we, even after knowing the nature of this particle or these particles, still be denied the comforts of certainty?

Chaos theory proposes that since the flap of a butterfly’s wing can cause a typhoon in Mexico City, however long that takes to happen, we will never be able to predict the future of certain models. The system of the universe is inherently deterministically chaotic. There are so many variables to consider in the specification of the model that we may, consciously or unconsciously, abstain from including some of them due to technical limitations. Observed over long periods of time, these omissions fundamentally change the trajectory of the model without our realising it, because they are not reflected in the specified model. We have therefore been led to believe that this randomness and unpredictability is in the general nature of existence.
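The standard illustration of this sensitivity is a nonlinear map iterated from two nearly identical starting points. A minimal sketch, using the logistic map in its chaotic regime; the starting values and step counts are my own illustrative choices:

```python
# A minimal sketch of sensitive dependence on initial conditions,
# using the logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
# Two trajectories start a mere 1e-10 apart, yet soon disagree entirely.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10  # almost identical starting points
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.2e}")
```

By roughly step 40 the gap is of order one: the tiny initial discrepancy has consumed all the predictive power of the model.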

In the binary representation of information, a change in a single 0 or 1 can change the nature of the information being held. This, in a way, is the implication of chaos theory. What we fail to understand is that this ambiguity arises from our lack of complete knowledge of the order of the 0’s and 1’s. If that order were known from the beginning, the outcomes of the model and the changes in it would have been certain. Chaos theory in a way tries to generalise the ambiguities in our models through recurrent classification of new information, using these classifications to produce good-enough results. In this it resembles fuzzy logic, which assigns an object a number between 0 and 1 for its membership in a classification. This means that in classifying purely subjective information one can assign these numbers in order to produce, through heuristics, results that are satisfying if not optimal.
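Both points can be made concrete in a few lines. The sketch below flips a single bit in the machine representation of a number, then grades a subjective classification on a 0-to-1 scale; the membership function and its thresholds are an illustrative invention, not a standard definition.

```python
# A minimal sketch of both ideas: a single flipped bit changes the
# information a number carries beyond recognition, while fuzzy logic
# softens a subjective classification into a grade between 0 and 1.
import struct

def flip_bit(x: float, position: int) -> float:
    """Flip one bit in the IEEE 754 representation of x."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    (flipped,) = struct.unpack(">d", struct.pack(">Q", bits ^ (1 << position)))
    return flipped

def tall(height_cm: float) -> float:
    """Toy fuzzy membership in the class 'tall' (illustrative thresholds)."""
    return min(1.0, max(0.0, (height_cm - 160.0) / 30.0))

print(flip_bit(1.0, 62))   # one exponent bit flipped: 1.0 becomes inf
print(tall(175.0))         # 0.5 -- partially 'tall' rather than yes/no
```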

But the problem with these theories is that they are at best approximations of reality. What they require is a massive aggregation of information to produce models that are supposed to fit reality. But when these massive approximations deviate from the actual information at any point of the model’s calculation, they produce equally massive deviations in its results and predictions. This is where chaos theory becomes prominent. Even the slightest deviation in information at the beginning of model specification leads to massive deviations at the end, when the model predicts. To overcome these problems in nonlinear dynamic systems, where slight errors in calculation compound into unrecognisable deviations, several ingenious approaches are applied. All of these approaches are, at their core, probabilistic in nature. To overcome the uncertainties in the model, an uncertain science must come to the rescue. What these dynamic and stochastic models achieve is a trade-off between efficiency and robustness.
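One common probabilistic remedy of this kind is ensemble forecasting: run the same model many times from slightly perturbed initial conditions and report the distribution of outcomes rather than a single trajectory. A minimal sketch, reusing the logistic map from above with illustrative parameters:

```python
# A minimal sketch of an ensemble forecast: instead of trusting one
# trajectory of a chaotic model, perturb the initial condition many
# times and summarise the spread of outcomes statistically.
import random
import statistics

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

def forecast(x0: float, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = logistic(x)
    return x

random.seed(0)
outcomes = [forecast(0.2 + random.gauss(0.0, 1e-9)) for _ in range(1000)]
print("single forecast:", forecast(0.2))
print("ensemble mean:", statistics.mean(outcomes),
      "spread:", statistics.pstdev(outcomes))
```

The single forecast is essentially worthless at this horizon, but the ensemble’s statistics remain meaningful: exactly the trade between pointwise efficiency and statistical robustness described above.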

We, as observers and keepers of the knowledge of the universe, are limited by our own perceptual limits. The tools we have are largely inadequate for deconstructing the true nature of our universe. A few hundred years ago we were nowhere near where we are now, and a few hundred years from now we will have traversed a path that is inconceivable today. But rest assured, the progress is inevitable. The question that remains is: what are the final frontiers of progress? Even when we have acquired the final tools the universe has to offer for its observation, will it unfold its mysterious nature then, or will it still retain some of its ambiguity? The answer lies in a theory I proposed during my undergraduate years, which I call, for lack of a better term, ‘limits and limiters to human knowledge’. I quote it from an article below.

“Man is bound by the limits of nature itself. To know the universe from outside, man would have to be outside the universe. But nothing natural or physical can exist outside the universe. Hence knowledge of the universe from without is not possible. Likewise, to know the internal workings of the universe or any part of it, man’s frame of reference should be within, or smaller than, the system of his observation. But to observe the internal workings of the smallest indivisible substance, man’s frame of reference would have to be smaller than the smallest substance possible, which is itself a contradiction. Hence man can know the true world neither from within nor from without. This means that man can only know all the possibilities within the two ultimate ways of perceiving the truth, but not the truth itself. Yet we deceive ourselves into believing that we are only a little away from the transcendental reality, which appears to be distant no matter how close we get to it.

But there may be a different way of thinking about this demarcation. Man can know the whole universe from within and the smallest particle from without. By changing the approach, we find that by being equal to or greater than the smallest particle possible we can observe its outer workings, and by being equal to or lesser than the universe, that is, the biggest natural existence, we can understand it from within.

But to know a substance completely we need to know it from within as well as from without, which does not seem possible once we reach the lower infinity and the upper infinity. Hence man will always be trapped between these infinities. He may know everything, from within and without, between these infinities, but will always be left one step behind in his understanding of true reality.

It is like integration and differentiation. In calculus we always tend towards 0 or 1 but never actually reach them; the real answers are always like 0.00000000000000000001 or 0.999999999999999. This leads me to believe that these limits to knowledge put limits on mathematics itself. Mathematics then cannot justify itself as an absolute and independent source of knowledge. It becomes only a highly advanced way of almost reaching the fundamental truths without ever reaching them. How scary and insane is that? It is in such moments that I come to believe that only human belief and conviction in something higher than his understanding of the universe can truly save him from pure skepticism.”
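The intuition in the quoted passage can be put in standard notation as a worked limit: every finite truncation of 0.999… falls strictly short of 1, and 1 is recovered only as the limit of the process, never at any finite stage.

$$
0.\underbrace{99\ldots9}_{n\ \text{digits}} \;=\; 1 - 10^{-n} \;<\; 1 \quad \text{for every finite } n,
\qquad
\lim_{n \to \infty} \bigl(1 - 10^{-n}\bigr) \;=\; 1.
$$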

This means that no matter how advanced we become technologically, we will never be able to construct a ‘theory of everything’ with perfect predictability, although in time we will surely arrive at a theory with almost perfect predictability. Another idea I want to introduce in this article, which I have not published before, is what I call ‘philosophical calculus’. My readings across different domains of study have led me to discover a very clear pattern in the design of the algorithm that powers our conscious thought process. This algorithm is not absolute but dynamic, in the sense that it has a gradient of possibilities it can assign to a single piece of information. The algorithm defines a spectrum on which information is stored based on previous information. Previous information exerts a certain epistemic gravity, so that some regions of the spectrum hold information more densely than others. This classification and arrangement of information on a spectrum, over wide ranges of problems, gives birth to the ideological nature of the self. Information regarding subjective matters gains some objectivity, and information regarding objective matters gains some subjectivity. It is this interaction between the objective nature of the universe and the subjective nature of the algorithm powering our consciousness that leads to the development of different kinds of theories trying to explain the same phenomenon. If each individual had exactly the same information regarding the universe, the deviation between theories would be minimal. But as there are limits to our present capacities for obtaining and processing information, there are deviations in the way we process it.
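A toy sketch of this description, assuming the spectrum is an array of densities and ‘epistemic gravity’ is a pull toward already-dense regions; every name and number here is an illustrative invention, not a worked-out formalism.

```python
# A toy sketch of 'philosophical calculus': a belief spectrum as an array
# of densities, where each new piece of information is pulled toward the
# region that previous information has already made dense.

BINS = 11  # positions 0..10 on the spectrum

def place(spectrum: list, position: int, gravity: float = 0.5) -> int:
    """Store a new item, shifted toward the densest bin by 'gravity'."""
    densest = max(range(BINS), key=lambda i: spectrum[i])
    pulled = round(position + gravity * (densest - position))
    spectrum[pulled] += 1.0
    return pulled

spectrum = [0.0] * BINS
spectrum[8] = 3.0  # prior information already clustered near one end
for raw in [2, 5, 9, 3]:
    print(f"raw position {raw} stored at {place(spectrum, raw)}")
```

However the gravity is tuned, the effect is the same: new information lands nearer to where old information already sits, which is the mechanism by which, in this picture, an ideological outlook accretes.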

As each individual experiences life differently, with different frames of reference, different regions of the philosophical spectrum get activated in each of them. Over a period of time this leads to the formation of neural networks in the brain which become responsible for a certain ideological outlook.

to be continued...