I meant I couldn’t find a version that I could freely look at …

Your attempt with infinite graphs might be informed by Attila Máté's paper "Nondeterministic polynomial-time computations and models of arithmetic". It cites earlier papers that also examine what happens to formalized polynomial-time computations when they are extended to nonstandard integers (or other nonstandard structures), which might give more control than what you sketch above.

http://www.springerlink.com/content/661291h0w622rr76/

http://doi.acm.org/10.1145/800061.808733

Mike Sipser once told me that indeed, thinking about Borel sets and descriptive complexity was what led him to the work in the Furst-Saxe-Sipser paper (the precursor to Håstad's eventual optimal results).

He said that his original random restriction arguments were for “infinite depth-2 circuits”, which actually made the analysis easier. He then managed to convert these to a finite analogue, with “finite vs. infinite fan-in” turning into “bounded vs. unbounded finite fan-in”.

(1) To simulate randomness ("Adleman's trick"): The class BPP of problems solvable in probabilistic polynomial time is contained in polynomial time with a polynomial amount of advice, but eliminating the advice is one of the major open questions of complexity theory (derandomization). Indeed, eliminating the advice would actually imply circuit lower bounds, by results of Kabanets and Impagliazzo.
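The idea behind Adleman's trick: amplify a randomized algorithm so its per-input error probability is below 2^-n; then, by a union bound, a single random string exists that is correct on all 2^n inputs of length n, and that string can be hardwired as advice. Here is a toy sketch of this argument in miniature; the language (even parity), the decider `noisy_parity`, and the error parameter `K` are all my own illustrative choices, not anything from the source:

```python
import itertools
import hashlib

n = 4           # input length; there are 2**n inputs of this length
K = 3 * 2**n    # per-input error probability ~ 1/K, which is < 2**-n

def noisy_parity(x: str, r: str) -> bool:
    """Toy randomized decider for 'x has even parity'.
    For each input x it errs on roughly a 1/K fraction of random strings r."""
    h = int(hashlib.sha256((x + ":" + r).encode()).hexdigest(), 16)
    error = (h % K == 0)                       # rare, r-dependent mistake
    return (x.count("1") % 2 == 0) ^ error

inputs = ["".join(b) for b in itertools.product("01", repeat=n)]

# Union bound: a random r fails on < 2**n * (1/K) < 1 inputs in expectation,
# so some single r is correct on *all* length-n inputs at once.
# Brute-force search for it; that r is the (nonuniform) advice string.
def find_advice() -> str:
    for i in itertools.count():
        r = format(i, "b")
        if all(noisy_parity(x, r) == (x.count("1") % 2 == 0) for x in inputs):
            return r

advice = find_advice()
# With the advice hardwired, the decider is deterministic and always correct:
assert all(noisy_parity(x, advice) == (x.count("1") % 2 == 0) for x in inputs)
```

Of course, the brute-force search for the advice is exponential; the point of the trick is only that a good advice string exists, not that a uniform algorithm can find it — which is exactly why eliminating the advice is the hard part.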

(2) The census trick: Perhaps the most interesting application of this is to the NE (nondeterministic time 2^O(n)) versus coNE (complement of NE) question, which is a "lifted" version of NP vs. coNP: NE != coNE would imply NP != coNP, but not the other way around. Of course we don't know whether NE = coNE, but we do know that NE is contained in coNE with n+1 bits of advice (here n is the input length). The trick is to encode in the advice the number of length-n strings belonging to your language L. Then in coNE you can guess all the length-n strings that are in L, use the advice to check that you have indeed guessed all of them, and accept exactly those strings that were not guessed.
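The census trick can be illustrated with a small simulation. In this sketch, nondeterministic guessing is simulated by exhaustive search over candidate sets, and the toy verifier `verify` and language are my own inventions; the key point is that any guessed set of exactly `census` verified members of L must be all of L at that length, so rejecting by non-membership in the guessed set is sound:

```python
from itertools import combinations, product

n = 3
# A toy NP-style language given by a verifier: x is in L iff some witness w
# of length n satisfies a simple check (an arbitrary illustrative condition).
def verify(x: str, w: str) -> bool:
    return w == x[::-1] and x.count("1") >= 2

strings = ["".join(b) for b in product("01", repeat=n)]
witnesses = strings

def in_L(x: str) -> bool:  # ground truth, used only for comparison
    return any(verify(x, w) for w in witnesses)

census = sum(in_L(x) for x in strings)   # the advice: |L ∩ {0,1}^n|

def co_nondet_decide(x: str, advice: int) -> bool:
    """Accept iff x is NOT in L, using only existential guessing plus the
    census. A branch guesses a set S of exactly `advice` strings together
    with a witness for each; any consistent guess must be exactly the
    length-n slice of L, so every accepting branch gives the same answer."""
    for S in combinations(strings, advice):      # guess the whole slice of L
        if all(any(verify(y, w) for w in witnesses) for y in S):
            # all `advice` guessed strings verifiably lie in L, so S = L ∩ {0,1}^n
            return x not in S
    return False  # no consistent guess (cannot happen with correct advice)

assert all(co_nondet_decide(x, census) == (not in_L(x)) for x in strings)
```

The exponential loops stand in for nondeterministic branches; in the real NE/coNE argument each guess is a single nondeterministic step, and only the census value (at most n+1 bits) is nonuniform.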

(3) To encode a promise condition: Sometimes advice can be used in dealing with so-called "semantic" classes, such as probabilistic polynomial time, where the acceptance and rejection criteria are mutually exclusive but not exhaustive. It is not known, for example, whether probabilistic quadratic time is more powerful than probabilistic linear time, but this is known once each class is given just 1 bit of advice. Another example here is the result that MA with 1 bit of advice (where MA is NP but with probabilistic verification) does not have Boolean circuits of size n^k for any fixed k. Note that the upper bound here uses just 1 bit of non-uniformity, while the lower bound holds against algorithms with a fixed polynomial number of bits of non-uniformity.

Having said all this, it’s true that we do not know how to take advantage of uniformity in our lower bounds. Indeed, most known ways to do this run up against the relativization barrier.
