Randomness and Mathematical Proof
Scientific American 232, No. 5 (May 1975), pp. 47-52
By Gregory J. Chaitin
Although randomness can be precisely defined and can even be measured, a given number cannot be proved to be random. This enigma establishes a limit to what is possible in mathematics.
Almost everyone has an intuitive notion of what a random number is. For example, consider these two series of binary digits:
01010101010101010101
01101100110111100010
The first is obviously constructed according to a simple rule; it consists of the number 01 repeated ten times. If one were asked to speculate on how the series might continue, one could predict with considerable confidence that the next two digits would be 0 and 1. Inspection of the second series of digits yields no such comprehensive pattern. There is no obvious rule governing the formation of the number, and there is no rational way to guess the succeeding digits. The arrangement seems haphazard; in other words, the sequence appears to be a random assortment of 0's and 1's.
The second series of binary digits was generated by flipping a coin 20 times and writing a 1 if the outcome was heads and a 0 if it was tails. Tossing a coin is a classical procedure for producing a random number, and one might think at first that the provenance of the series alone would certify that it is random. This is not so. Tossing a coin 20 times can produce any one of more than a million binary series, and each of them has exactly the same probability. Thus it should be no more surprising to obtain the series with an obvious pattern than to obtain the one that seems to be random; each represents an event with a probability of 2^-20. If origin in a probabilistic event were made the sole criterion of randomness, then both series would have to be considered random, and indeed so would all others, since the same mechanism can generate all the possible series. The conclusion is singularly unhelpful in distinguishing the random from the orderly.
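The arithmetic is easy to check; a two-line Python sketch (a verification of the figures quoted above, not part of the original article):

```python
# 20 coin tosses yield 2^20 possible series, each with probability 2^-20.
print(2 ** 20)   # 1048576: the "more than a million" binary series
print(2 ** -20)  # 9.5367431640625e-07: the probability of any one of them
```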
Clearly a more sensible definition of randomness is required, one that does not contradict the intuitive concept of a "patternless" number. Such a definition has been devised only in the past 10 years. It does not consider the origin of a number but depends entirely on the characteristics of the sequence of digits. The new definition enables us to describe the properties of a random number more precisely than was formerly possible, and it establishes a hierarchy of degrees of randomness. Of perhaps even greater interest than the capabilities of the definition, however, are its limitations. In particular the definition cannot help to determine, except in very special cases, whether or not a given series of digits, such as the second one above, is in fact random or only seems to be random. This limitation is not a flaw in the definition; it is a consequence of a subtle but fundamental anomaly in the foundation of mathematics. It is closely related to a famous theorem devised and proved in 1931 by Kurt Gödel, which has come to be known as Gödel's incompleteness theorem. Both the theorem and the recent discoveries concerning the nature of randomness help to define the boundaries that constrain certain mathematical methods.
Algorithmic Definition

The new definition of randomness has its heritage in information theory, the science, developed mainly since World War II, that studies the transmission of messages. Suppose you have a friend who is visiting a planet in another galaxy, and that sending him telegrams is very expensive. He forgot to take along his tables of trigonometric functions, and he has asked you to supply them. You could simply translate the numbers into an appropriate code (such as the binary numbers) and transmit them directly, but even the most modest tables of the six functions have a few thousand digits, so that the cost would be high. A much cheaper way to convey the same information would be to transmit instructions for calculating the tables from the underlying trigonometric formulas, such as Euler's equation e^(ix) = cos x + i sin x. Such a message could be relatively brief, yet inherent in it would be all the information contained in even the largest tables.
Suppose, on the other hand, your friend is interested not in trigonometry but in baseball. He would like to know the scores of all the major-league games played since he left the earth some thousands of years before. In this case it is most unlikely that a formula could be found for compressing the information into a short message; in such a series of numbers each digit is essentially an independent item of information, and it cannot be predicted from its neighbors or from some underlying rule. There is no alternative to transmitting the entire list of scores.
In this pair of whimsical messages is the germ of a new definition of randomness. It is based on the observation that the information embodied in a random series of numbers cannot be "compressed," or reduced to a more compact form. In formulating the actual definition it is preferable to consider communication not with a distant friend but with a digital computer. The friend might have the wit to make inferences about numbers or to construct a series from partial information or from vague instructions. The computer does not have that capacity, and for our purposes that deficiency is an advantage. Instructions given the computer must be complete and explicit, and they must enable it to proceed step by step without requiring that it comprehend the result of any part of the operations it performs. Such a program of instructions is an algorithm. It can demand any finite number of mechanical manipulations.

The definition also requires that we be able to measure the information content of a message in some more precise way than by the cost of sending it as a telegram. The fundamental unit of information is the "bit," defined as the smallest item of information capable of indicating a choice between two equally likely things. In binary notation one bit is equivalent to one digit, either a 0 or a 1.
We are now able to describe more precisely the differences between the two series of digits presented at the beginning of this article:
01010101010101010101
01101100110111100010
The first could be specified to a computer by a very simple algorithm, such as "Print 01 ten times." If the series were extended according to the same rule, the algorithm would have to be only slightly larger; it might be made to read, for example, "Print 01 a million times." The number of bits in such an algorithm is a small fraction of the number of bits in the series it specifies, and as the series grows larger the size of the program increases at a much slower rate.

For the second series of digits there is no corresponding shortcut. The most economical way to express the series is to write it out in full, and the shortest algorithm for introducing the series into a computer would be "Print 01101100110111100010." If the series were much larger (but still apparently patternless), the algorithm would have to be expanded to the corresponding size. This "incompressibility" is a property of all random numbers; indeed, we can proceed directly to define randomness in terms of incompressibility: a series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself.
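A general-purpose compressor is only a crude stand-in for the minimal algorithm, but it makes the contrast vivid. In the hedged Python sketch below, a patterned million-digit series shrinks to a tiny fraction of its size, while simulated coin flips barely compress (each '0' or '1' character carries one bit of information in an eight-bit byte, so even ideal compression leaves about an eighth):

```python
import random
import zlib

random.seed(1)  # reproducible simulated coin flips

patterned = ("01" * 500_000).encode()                    # a simple rule
haphazard = "".join(random.choice("01")                  # simulated tosses
                    for _ in range(1_000_000)).encode()

for name, data in (("patterned", patterned), ("haphazard", haphazard)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.1%} of its original size")
```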
This definition was independently proposed about 1965 by A. N. Kolmogorov of the Academy of Science of the U.S.S.R. and by me, when I was an undergraduate at the City College of the City University of New York. Both Kolmogorov and I were then unaware of related proposals made in 1960 by Ray J. Solomonoff of the Zator Company in an endeavor to measure the simplicity of scientific theories. During the past decade we and others have continued to explore the meaning of randomness. The original formulations have been improved and the feasibility of the approach has been amply confirmed.

Model of Inductive Method

The algorithmic definition of randomness provides a new foundation for the theory of probability. By no means does it supersede classical probability theory, which is based on an ensemble of possibilities, each of which is assigned a probability. Rather, the algorithmic approach complements the ensemble method by giving precise meaning to concepts that had been intuitively appealing but that could not be formally adopted.
The ensemble theory of probability, which originated in the 17th century, remains today of great practical importance. It is the foundation of statistics, and it is applied to a wide range of problems in science and engineering. The algorithmic theory also has important implications, but they are primarily theoretical. The area of broadest interest is its amplification of Gödel's incompleteness theorem. Another application (which actually preceded the formulation of the theory itself) is in Solomonoff's model of scientific induction.
Solomonoff represented a scientist's observations as a series of binary digits. The scientist seeks to explain these observations through a theory, which can be regarded as an algorithm capable of generating the series and extending it, that is, predicting future observations. For any given series of observations there are always several competing theories, and the scientist must choose among them. The model demands that the smallest algorithm, the one consisting of the fewest bits, be selected. Stated another way, this rule is the familiar formulation of Occam's razor: Given differing theories of apparently equal merit, the simplest is to be preferred.

Thus in the Solomonoff model a theory that enables one to understand a series of observations is seen as a small computer program that reproduces the observations and makes predictions about possible future observations. The smaller the program, the more comprehensive the theory and the greater the degree of understanding. Observations that are random cannot be reproduced by a small program and therefore cannot be explained by a theory. In addition the future behavior of a random system cannot be predicted. For random data the most compact way for the scientist to communicate his observations is for him to publish them in their entirety.
Defining randomness or the simplicity of theories through the capabilities of the digital computer would seem to introduce a spurious element into these essentially abstract notions: the peculiarities of the particular computing machine employed. Different machines communicate through different computer languages, and a set of instructions expressed in one of those languages might require more or fewer bits when the instructions are translated into another language. Actually, however, the choice of computer matters very little. The problem can be avoided entirely simply by insisting that the randomness of all numbers be tested on the same machine. Even when different machines are employed, the idiosyncrasies of various languages can readily be compensated for. Suppose, for example, someone has a program written in English and wishes to utilize it with a computer that reads only French. Instead of translating the algorithm itself he could preface the program with a complete English course written in French. Another mathematician with a French program and an English machine would follow the opposite procedure. In this way only a fixed number of bits need be added to the program, and that number grows less significant as the size of the series specified by the program increases.
Since the choice of a particular machine is largely irrelevant, we can choose for our calculations an ideal computer. It is assumed to have unlimited storage capacity and unlimited time to complete its calculations. Input to and output from the machine are both in the form of binary digits. The machine begins to operate as soon as the program is given it, and it continues until it has finished printing the binary series that is the result. The machine then halts. Unless an error is made in the program, the computer will produce exactly one output for any given program.

Minimal Programs and Complexity

Any specified series of numbers can be generated by an infinite number of algorithms. Consider, for example, the three-digit decimal series 123. It could be produced by an algorithm such as "Subtract 1 from 124 and print the result," or "Subtract 2 from 125 and print the result," or an infinity of other programs formed on the same model. The programs of greatest interest, however, are the smallest ones that will yield a given numerical series. The smallest programs are called minimal programs; for a given series there may be only one minimal program or there may be many.
Any minimal program is necessarily random, whether or not the series it generates is random. This conclusion is a direct result of the way we have defined randomness. Consider the program P, which is a minimal program for the series of digits S. If we assume that P is not random, then by definition there must be another program, P', substantially smaller than P, that will generate it. We can then produce S by the following algorithm: "From P' calculate P, then from P calculate S." This program is only a few bits longer than P', and thus it must be substantially shorter than P. P is therefore not a minimal program.

The minimal program is closely related to another fundamental concept in the algorithmic theory of randomness: the concept of complexity. The complexity of a series of digits is the number of bits that must be put into a computing machine in order to obtain the original series as output. The complexity is therefore equal to the size in bits of the minimal programs of the series. Having introduced this concept, we can now restate our definition of randomness in more rigorous terms: a random series of digits is one whose complexity is approximately equal to its size in bits.
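The idea of a minimal program can be imitated in miniature. The Python sketch below searches a toy language containing only the hypothetical instruction "Print <block> <k> times"; real minimal programs range over all possible programs, and finding them is uncomputable in general, so this illustrates the concept rather than computing true complexity:

```python
def shortest_repeat_program(series: str) -> str:
    """Shortest description of `series` in a toy one-instruction language.
    Falls back to spelling the series out in full, the analogue of the
    incompressible case discussed in the text."""
    best = f"Print {series} 1 times"          # fallback: the series verbatim
    for block_len in range(1, len(series)):
        if len(series) % block_len == 0:
            block = series[:block_len]
            repeats = len(series) // block_len
            if block * repeats == series:
                candidate = f"Print {block} {repeats} times"
                if len(candidate) < len(best):
                    best = candidate
    return best

print(shortest_repeat_program("01010101010101010101"))  # Print 01 10 times
print(shortest_repeat_program("01101100110111100010"))  # spelled out in full
```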
The notion of complexity serves not only to define randomness but also to measure it. Given several series of numbers each having n digits, it is theoretically possible to identify all those of complexity n-1, n-10, n-100 and so forth and thereby to rank the series in decreasing order of randomness. The exact value of complexity below which a series is no longer considered random remains somewhat arbitrary. The value ought to be set low enough for numbers with obviously random properties not to be excluded and high enough for numbers with a conspicuous pattern to be disqualified, but to set a particular numerical value is to judge what degree of randomness constitutes actual randomness. It is this uncertainty that is reflected in the qualified statement that the complexity of a random series is approximately equal to the size of the series.

The algorithmic theory of probability can illuminate many of the properties of both random and nonrandom numbers. The frequency distribution of digits in a series, for example, can be shown to have an important influence on the randomness of the series. Simple inspection suggests that a series consisting entirely of either 0's or 1's is far from random, and the algorithmic approach confirms that conclusion. If such a series is n digits long, its complexity is approximately equal to the logarithm to the base 2 of n. (The exact value depends on the machine language employed.) The series can be produced by a simple algorithm such as "Print 0 n times," in which virtually all the information needed is contained in the binary numeral for n. The size of this number is about log2 n bits. Since for even a moderately long series the logarithm of n is much smaller than n itself, such numbers are of low complexity; their intuitively perceived pattern is mathematically confirmed.
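As a quick check of the size gap the passage relies on (a sketch, not from the article):

```python
import math

# Length of the binary numeral for n (about log2 n bits) versus the
# n digits of the series "0" * n itself.
for n in (20, 1_000, 1_000_000):
    numeral_bits = math.ceil(math.log2(n + 1))
    print(f"series of {n} digits: numeral for n needs {numeral_bits} bits")
```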
Another binary series that can be profitably analyzed in this way is one where 0's and 1's are present with relative frequencies of three-fourths and one-fourth. If the series is of size n, it can be demonstrated that its complexity is no greater than four-fifths n; that is, a program that will produce the series can be written in 4n/5 bits. This maximum applies regardless of the sequence of the digits, so that no series with such a frequency distribution can be considered very random. In fact, it can be proved that in any long binary series that is random the relative frequencies of 0's and 1's must be very close to one-half. (In a random decimal series the relative frequency of each digit is, of course, one-tenth.)
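The standard counting argument behind such a bound can be sketched: a series with exactly n/4 1's can be specified by its index among all series of that composition, which takes about log2 of a binomial coefficient. The figure climbs toward roughly 0.81n for large n, in line with the article's rough four-fifths bound (this sketch ignores the few extra bits needed to describe n and the scheme itself):

```python
import math

# Bits to name one series among all length-n series with exactly n/4 1's.
for n in (20, 100, 1000):
    bits = math.log2(math.comb(n, n // 4))
    print(f"n = {n}: about {bits:.1f} bits, i.e. {bits / n:.2f} n")
```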
Numbers having a nonrandom frequency distribution are exceptional. Of all the possible n-digit binary numbers there is only one, for example, that consists entirely of 0's and only one that is all 1's. All the rest are less orderly, and the great majority must, by any reasonable standard, be called random. To choose an arbitrary limit, we can calculate the fraction of all n-digit binary numbers that have a complexity of less than n-10. There are 2^1 programs one digit long that might generate an n-digit series; there are 2^2 programs two digits long that could yield such a series, 2^3 programs three digits long and so forth, up to the longest programs permitted within the allowed complexity; of these there are 2^(n-11). The sum of this series (2^1 + 2^2 + ... + 2^(n-11)) is equal to 2^(n-10) - 2. Hence there are fewer than 2^(n-10) programs of size less than n-10, and since each of these programs can specify no more than one series of digits, fewer than 2^(n-10) of the 2^n numbers have a complexity less than n-10. Since 2^(n-10)/2^n = 1/1,024, it follows that of all the n-digit binary numbers fewer than one in 1,000 have a complexity less than n-10. In other words, only about one series in 1,000 can be compressed into a computer program more than 10 digits smaller than itself.
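The counting in this paragraph can be verified mechanically (a sketch for one value of n):

```python
n = 50

# Programs shorter than n-10 bits: 2 of length 1, 4 of length 2, ...,
# 2^(n-11) of length n-11; the geometric sum is 2^(n-10) - 2.
total = sum(2 ** k for k in range(1, n - 10))
assert total == 2 ** (n - 10) - 2

# Fraction of the 2^n series that these programs could possibly specify:
print(total / 2 ** n)  # just under 2^-10 = 1/1024, about one in 1,000
```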
A necessary corollary of this calculation is that more than 999 of every 1,000 n-digit binary numbers have a complexity equal to or greater than n-10. If that degree of complexity can be taken as an appropriate test of randomness, then almost all n-digit numbers are in fact random. If a fair coin is tossed n times, the probability is greater than .999 that the result will be random to this extent. It would therefore seem easy to exhibit a specimen of a long series of random digits; actually it is impossible to do so.

Formal Systems

It can readily be shown that a specific series of digits is not random; it is sufficient to find a program that will generate the series and that is substantially smaller than the series itself. The program need not be a minimal program for the series; it need only be a small one. To demonstrate that a particular series of digits is random, on the other hand, one must prove that no small program for calculating it exists.
It is in the realm of mathematical proof that Gödel's incompleteness theorem is such a conspicuous landmark; my version of the theorem predicts that the required proof of randomness cannot be found. The consequences of this fact are just as interesting for what they reveal about Gödel's theorem as they are for what they indicate about the nature of random numbers.
Gödel's theorem represents the resolution of a controversy that preoccupied mathematicians during the early years of the 20th century. The question at issue was: "What constitutes a valid proof in mathematics and how is such a proof to be recognized?" David Hilbert had attempted to resolve the controversy by devising an artificial language in which valid proofs could be found mechanically, without any need for human insight or judgement. Gödel showed that there is no such perfect language.

Hilbert established a finite alphabet of symbols, an unambiguous grammar specifying how a meaningful statement could be formed, a finite list of axioms, or initial assumptions, and a finite list of rules of inference for deducing theorems from the axioms or from other theorems. Such a language, with its rules, is called a formal system.
A formal system is defined so precisely that a proof can be evaluated by a recursive procedure involving only simple logical and arithmetical manipulations. In other words, in the formal system there is an algorithm for testing the validity of proofs. Today, although not in Hilbert's time, the algorithm could be executed on a digital computer and the machine could be asked to "judge" the merits of mathematical proofs.
Because of Hilbert's requirement that a formal system have a proof-checking algorithm, it is possible in theory to list one by one all the theorems that can be proved in a particular system. One first lists in alphabetical order all sequences of symbols one character long and applies the proof-testing algorithm to each of them, thereby finding all theorems (if any) whose proofs consist of a single character. One then tests all the two-character sequences of symbols, and so on. In this way all potential proofs can be checked, and eventually all theorems can be discovered in order of the size of their proofs, as sketched below. (The method is, of course, only a theoretical one; the procedure is too lengthy to be practical.)

Unprovable Statements

Gödel showed in his 1931 proof that Hilbert's plan for a completely systematic mathematics cannot be fulfilled. He did this by constructing an assertion about the positive integers in the language of the formal system that is true but that cannot be proved in the system. The formal system, no matter how large or how carefully constructed it is, cannot encompass all true theorems and is therefore incomplete. Gödel's technique can be applied to virtually any formal system, and it therefore demands the surprising and, for many, discomforting conclusion that there can be no definitive answer to the question "What is a valid proof?"
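The theorem-listing procedure described above is easy to sketch in Python. Here `is_valid_proof` and `conclusion` are hypothetical hooks into some concrete formal system; only the enumeration itself is spelled out:

```python
from itertools import count, product

def enumerate_theorems(alphabet, is_valid_proof, conclusion):
    """Yield theorems in order of the size of their proofs: generate every
    string over the system's alphabet, shortest first (and, for an ordered
    alphabet, alphabetically within each length), and keep the conclusions
    of those strings that check out as valid proofs."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if is_valid_proof(candidate):
                yield conclusion(candidate)
```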
Gödel's proof of the incompleteness theorem is based on the paradox of Epimenides the Cretan, who is said to have averred, "All Cretans are liars" [see "Paradox," by W. V. Quine; Scientific American, April, 1962]. The paradox can be rephrased in more general terms as "This statement is false," an assertion that is true if and only if it is false and that is therefore neither true nor false. Gödel replaced the concept of truth with that of provability and thereby constructed the sentence "This statement is unprovable," an assertion that, in a specific formal system, is provable if and only if it is false. Thus either a falsehood is provable, which is forbidden, or a true statement is unprovable, and hence the formal system is incomplete. Gödel then applied a technique that uniquely numbers all statements and proofs in the formal system and thereby converted the sentence "This statement is unprovable" into an assertion about the properties of the positive integers. Because this transformation is possible, the incompleteness theorem applies with equal cogency to all formal systems in which it is possible to deal with the positive integers [see "Gödel's Proof," by Ernest Nagel and James R. Newman; Scientific American, June, 1956].
The intimate association between Gödel's proof and the theory of random numbers can be made plain through another paradox, similar in form to the paradox of Epimenides. It is a variant of the Berry paradox, first published in 1908 by Bertrand Russell. It reads: "Find the smallest positive integer which to be specified requires more characters than there are in this sentence." The sentence has 114 characters (counting spaces between words and the period but not the quotation marks), yet it supposedly specifies an integer that, by definition, requires more than 114 characters to be specified.

As before, in order to apply the paradox to the incompleteness theorem it is necessary to remove it from the realm of truth to the realm of provability. The phrase "which requires" must be replaced by "which can be proved to require," it being understood that all statements will be expressed in a particular formal system. In addition the vague notion of "the number of characters required to specify" an integer can be replaced by the precise concept of complexity.
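The character count is easy to confirm (a check, not part of the article):

```python
sentence = ("Find the smallest positive integer which to be specified "
            "requires more characters than there are in this sentence.")
# Spaces and the final period are counted; the quotation marks are not.
print(len(sentence))  # 114
```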
The result of these transformations is the following computer program: "Find a series of binary digits that can be proved to be of a complexity greater than the number of bits in this program." The program tests all possible proofs in the formal system in order of their size until it encounters the first one proving that a specific binary sequence is of a complexity greater than the number of bits in the program. Then it prints the series it has found and halts. Of course, the paradox in the statement from which the program was derived has not been eliminated. The program supposedly calculates a number that no program its size should be able to calculate. In fact, the program finds the first number that it can be proved incapable of finding.

The absurdity of this conclusion merely demonstrates that the program will never find the number it is designed to look for. In a formal system one cannot prove that a particular series of digits is of a complexity greater than the number of bits in the program employed to specify the series.
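The structure of that program can be sketched in Python. The proof checker below is a stub, and deliberately so: by the theorem just stated, once the bound exceeds the system's complexity no proof can ever check out, so calling `berry_program` simply loops forever; that non-termination is the content of the argument (proofs are encoded here as binary strings, an assumption of the sketch):

```python
from itertools import count, product

def proves_complexity_exceeds(proof: str, bound: int):
    """Hypothetical checker: if `proof` establishes in some fixed formal
    system that a specific binary series has complexity > `bound`, return
    that series; otherwise return None. For bounds beyond the system's own
    complexity, returning None is the only thing it can ever do."""
    return None  # stub standing in for the system's proof-testing algorithm

def berry_program(own_size_in_bits: int) -> str:
    """Test all candidate proofs in order of size; return the first series
    proved to have complexity greater than this program's own size."""
    for length in count(1):
        for chars in product("01", repeat=length):
            series = proves_complexity_exceeds("".join(chars), own_size_in_bits)
            if series is not None:
                return series  # by the paradox, never reached
```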
A further generalization can be made about this paradox. It is not the number of bits in the program itself that is the limiting factor but the number of bits in the formal system as a whole. Hidden in the program are the axioms and rules of inference that determine the behavior of the system and provide the algorithm for testing proofs. The information content of these axioms and rules can be measured and can be designated the complexity of the formal system. The size of the entire program therefore exceeds the complexity of the formal system by a fixed number of bits c. (The actual value of c depends on the machine language employed.) The theorem proved by the paradox can therefore be stated as follows: In a formal system of complexity n it is impossible to prove that a particular series of binary digits is of complexity greater than n + c, where c is a constant that is independent of the particular system employed.

Limits of Formal Systems

Since complexity has been defined as a measure of randomness, this theorem implies that in a formal system no number can be proved to be random unless the complexity of the number is less than that of the system itself. Because all minimal programs are random, the theorem also implies that a system of greater complexity is required in order to prove that a program is a minimal one.
The complexity of the formal system has such an important bearing on the proof of randomness because it is a measure of the amount of information the system contains, and hence of the amount of information that can be derived from it. The formal system rests on axioms: fundamental statements that are irreducible in the same sense that a minimal program is. (If an axiom could be expressed more compactly, then the briefer statement would become a new axiom and the old one would become a derived theorem.) The information embodied in the axioms is thus itself random, and it can be employed to test the randomness of other data. The randomness of some numbers can therefore be proved, but only if they are smaller than the formal system. Moreover, any formal system is of necessity finite, whereas any series of digits can be made arbitrarily large. Hence there will always be numbers whose randomness cannot be proved.

The endeavor to define and measure randomness has greatly clarified the significance and the implications of Gödel's incompleteness theorem. That theorem can now be seen not as an isolated paradox but as a natural consequence of the constraints imposed by information theory. In 1946 Hermann Weyl said that the doubt induced by such discoveries as Gödel's theorem had been "a constant drain on the enthusiasm and determination with which I pursued my research work." From the point of view of information theory, however, Gödel's theorem does not appear to give cause for depression. Instead it seems simply to suggest that in order to progress, mathematicians, like investigators in other sciences, must search for new axioms.
Illustrations
(a) 10100 → Computer → 11111111111111111111
(b) 01101100110111100010 → Computer → 01101100110111100010

Algorithmic definition of randomness relies on the capabilities and limitations of the digital computer. In order to produce a particular output, such as a series of binary digits, the computer must be given a set of explicit instructions that can be followed without making intellectual judgments. Such a program of instructions is an algorithm. If the desired output is highly ordered (a), a relatively small algorithm will suffice; a series of twenty 1's, for example, might be generated by some hypothetical computer from the program 10100, which is the binary notation for the decimal number 20. For a random series of digits (b) the most concise program possible consists of the series itself. The smallest programs capable of generating a particular series are called the minimal programs of the series; the size of these programs, measured in bits, or binary digits, is the complexity of the series. A series of digits is defined as random if its complexity approaches its size in bits.
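The hypothetical computer of panel (a) is easy to imitate; in this sketch the five-bit program is read as a binary numeral and the output is that many 1's, four times the program's size:

```python
program = "10100"   # five bits: binary notation for the decimal number 20
n = int(program, 2)
print("1" * n)      # 11111111111111111111, a series of twenty 1's
```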
Alphabet, Grammar, Axioms, Rules of Inference
↓
Computer
↓
Theorem 1, Theorem 2, Theorem 3, Theorem 4, Theorem 5, ...
Formal systems devised by David Hilbert contain an algorithm that mechanically checks the validity of all proofs that can be formulated in the system. The formal system consists of an alphabet of symbols in which all statements can be written; a grammar that specifies how the symbols are to be combined; a set of axioms, or principles accepted without proof; and rules of inference for deriving theorems from the axioms. Theorems are found by writing all the possible grammatical statements in the system and testing them to determine which ones are in accord with the rules of inference and are therefore valid proofs. Since this operation can be performed by an algorithm it could be done by a digital computer. In 1931 Kurt Gödel demonstrated that virtually all formal systems are incomplete: in each of them there is at least one statement that is true but that cannot be proved.

Observations: 0101010101
Theory: Ten repetitions of 01. Predictions: 01010101010101010101. Size of theory: 21 characters.
Theory: Five repetitions of 01 followed by ten 0's. Predictions: 01010101010000000000. Size of theory: 42 characters.
Inductive reasoning as it is employed in science was analyzed mathematically by Ray J. Solomonoff. He represented a scientist's observations as a series of binary digits; the observations are to be explained and new ones are to be predicted by theories, which are regarded as algorithms instructing a computer to reproduce the observations. (The programs would not be English sentences but binary series, and their size would be measured not in characters but in bits.) Here two competing theories explain the existing data; Occam's razor demands that the simpler, or smaller, theory be preferred. The task of the scientist is to search for minimal programs. If the data are random, the minimal programs are no more concise than the observations and no theory can be formulated.

Illustration: a graph of the number of n-digit sequences as a function of their complexity. The curve grows exponentially from approximately 0 to approximately 2^n as the complexity goes from 0 to n.
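The sizes quoted in the Solomonoff illustration above check out directly; in this sketch theories are English statements measured in characters, rather than the binary programs measured in bits that the model actually uses:

```python
observations = "0101010101"

theories = {
    "Ten repetitions of 01": "01" * 10,
    "Five repetitions of 01 followed by ten 0's": "01" * 5 + "0" * 10,
}

# Both theories reproduce the observations; Occam's razor, as formalized
# in the Solomonoff model, prefers the smaller one.
for statement, prediction in theories.items():
    assert prediction.startswith(observations)
    print(f"{len(statement)} characters: {statement}")
```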
Random sequences of binary digits make up the majority of all such sequences. Of the 2^n series of n digits, most are of a complexity that is within a few bits of n. As complexity decreases, the number of series diminishes in a roughly exponential manner. Orderly series are rare; there is only one series, for example, that consists of n 1's.
Russell Paradox: Consider the set of all sets that are not members of themselves. Is this set a member of itself?
Epimenides Paradox: Consider this statement: "This statement is false." Is this statement true?
Berry Paradox: Consider this sentence: "Find the smallest positive integer which to be specified requires more characters than there are in this sentence." Does this sentence specify a positive integer?

Three paradoxes delimit what can be proved. The first, devised by Bertrand Russell, indicated that informal reasoning in mathematics can yield contradictions, and it led to the creation of formal systems. The second, attributed to Epimenides, was adapted by Gödel to show that even within a formal system there are true statements that are unprovable. The third leads to the demonstration that a specific number cannot be proved random.
(a) This statement is unprovable. (b) The complexity of 01101100110111100010 is greater than 15 bits. (c) The series of digits 01101100110111100010 is random. (d) 10100 is a minimal program for the series 11111111111111111111.
Unprovable statements can be shown to be false, if they are false, but they cannot be shown to be true. A proof that "This statement is unprovable" (a) reveals a self-contradiction in a formal system. The assignment of a numerical value to the complexity of a particular number (b) requires a proof that no smaller algorithm for generating the number exists; the proof could be supplied only if the formal system itself were more complex than the number. Statements labeled (c) and (d) are subject to the same limitation, since the identification of a random number or a minimal program requires the determination of complexity.

Further Reading
A Profile of Mathematical Logic. Howard DeLong. Addison-Wesley, 1970.
Theories of Probability: An Examination of Foundations. Terrence L. Fine. Academic Press, 1973.
Universal Gambling Schemes and the Complexity Measures of Kolmogorov and Chaitin. Thomas M. Cover. Technical Report No. 12, Statistics Department, Stanford University, 1974.
"Information-Theoretic Limitations of Formal Systems." Gregory J. Chaitin in Journal of the Association for Computing Machinery, Vol. 21, pages 403-424; July 1974.