Kolmogorov complexity.
September 25, 2006, 8:22 PM · computer science
Here are some very ill-thought-out ideas on Kolmogorov complexity.
We define a metric on the space of bit-strings $\Sigma^\star$. For a universal Turing machine $T$, let $d_T(x,y)$ be the "length" of the shortest program that outputs $y$ on input $x$, or outputs $x$ on input $y$. This should measure how difficult it is to "relate" $x$ and $y$.
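None of this is computable, of course, but there's a standard computable proxy in the same spirit: Cilibrasi and Vitányi's normalized compression distance, which swaps shortest-program length for compressed length. A toy sketch in Python, with zlib standing in (quite arbitrarily) for a compressor:

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length, a crude computable stand-in for program length."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when x and y are easy
    to 'relate', close to 1 when knowing one string doesn't help
    describe the other."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

x = b"abab" * 100
y = b"abab" * 99 + b"baba"  # a small perturbation of x
z = bytes(range(256)) * 2   # unrelated to x

print(ncd(x, y))  # small: the strings are easy to relate
print(ncd(x, z))  # close to 1: compressing one doesn't help with the other
```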
The ends of the metric space $(\Sigma^\star, d_T)$ should correspond to infinite random bitstrings, and because choosing a different universal Turing machine just replaces this metric space with one quasi-isometric to it, the ends should be preserved, so there will still be a lot of infinite random bitstrings.
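(The quasi-isometry should come from the usual simulation argument: if $T'$ is another universal machine, prepending the fixed program that simulates $T$ on $T'$ gives
$$d_{T'}(x,y) \le d_T(x,y) + c_{T,T'}$$
and symmetrically, so the identity map on $\Sigma^\star$ is a quasi-isometry, in fact a coarse isometry, between $(\Sigma^\star, d_T)$ and $(\Sigma^\star, d_{T'})$.)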
But obviously I haven't thought about any of this very carefully: for instance, the triangle inequality probably only holds coarsely, because it depends on being able to concatenate programs.
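To make "coarsely" a little more precise: if $p$ relates $x$ and $y$ and $q$ relates $y$ and $z$, then a self-delimiting encoding of $p$ followed by $q$ relates $x$ and $z$, at least when the directions line up (another thing the "or" in the definition leaves unresolved), so one expects only
$$d_T(x,z) \le d_T(x,y) + d_T(y,z) + O(\log d_T(x,y)).$$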
Here's a similar question. Usually, we start with a partial function $f:\Sigma^\star\to\Sigma^\star$ which tells how to translate descriptions into objects; Kolmogorov complexity is then defined as $C_f(x)=\min_{f(y)=x}|y|$. Any universal Turing machine gives a measure of complexity with the same asymptotics, i.e., $C_g(x)$ and $C_f(x)$ differ by a constant that depends only on $f$ and $g$. Suppose I have another function $h$ so that $C_h$ has the same asymptotics: what more can be said about $h$?
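The constant here is the same simulation trick as above: since $g$ is universal, there is a fixed string $p_f$ with $g(p_f\,y) = f(y)$ for every $y$, hence
$$C_g(x) \le C_f(x) + |p_f|$$
for all $x$, and symmetrically when $f$ is universal.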
There's a stupid rigidity for computable functions (a computable function is still computable if its value is changed at finitely many places), and maybe these sorts of questions could lead to a rigidity theorem for computability, a local Church-Turing thesis.
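Concretely, the stupid rigidity is just the observation that a finite exception table can be hard-coded into a program; a throwaway Python sketch (all names here are mine, purely for illustration):

```python
def patch(f, exceptions):
    """Override f at finitely many points. If f is computable, so is
    the result: the finite dict `exceptions` is hard-coded data."""
    def g(x):
        if x in exceptions:   # finitely many special cases
            return exceptions[x]
        return f(x)           # otherwise defer to f
    return g

# The identity on bitstrings, changed at two inputs, is still computable.
ident = lambda s: s
g = patch(ident, {"0": "111", "10": ""})
assert g("0") == "111"
assert g("110") == "110"
```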
And having written this, I'm terrified at how similar I sound to Archimedes Plutonium. Now I'll go learn more about localization of spaces in the algebraic topology proseminar.