How is the Tsirelson space defined? Via some kind of inductive process involving the natural numbers.
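For concreteness, that inductive process can be sketched in code. Below is a toy Python sketch (hypothetical function name; assuming the Figiel–Johnson presentation of the norm, in which $\|x\| = \max(\|x\|_\infty, \tfrac12 \sup\{\sum_{i\le k}\|E_i x\| : k \le E_1 < \dots < E_k\})$ over successive finite sets $E_i$): on finitely supported vectors the implicit equation can be unwound by recursion on the support, which is exactly the induction over the natural numbers being alluded to.

```python
from functools import lru_cache
from itertools import combinations

def tsirelson_norm(x):
    """Toy evaluation of the Tsirelson norm (Figiel-Johnson presentation)
    on a finitely supported vector x (1-based coordinates):
        ||x|| = max( ||x||_inf,
                     (1/2) sup { sum_i ||E_i x|| : k <= E_1 < ... < E_k } ),
    where the E_i are successive finite sets and k <= min(E_1)
    ("admissibility").  Exponential-time recursion; toy sizes only."""
    coords = {i: v for i, v in enumerate(x, start=1) if v != 0}

    @lru_cache(maxsize=None)
    def norm(support):          # support: sorted tuple of positions
        if not support:
            return 0.0
        best = max(abs(coords[i]) for i in support)
        m = len(support)
        # By 1-unconditionality it suffices to consider families that
        # partition a tail of the support into k >= 2 successive blocks.
        for s in range(m):                       # first position used
            tail = support[s:]
            for k in range(2, min(len(tail), support[s]) + 1):
                # choose k-1 cut points splitting the tail into k blocks
                for cuts in combinations(range(1, len(tail)), k - 1):
                    bounds = (0,) + cuts + (len(tail),)
                    blocks = [tail[bounds[j]:bounds[j + 1]] for j in range(k)]
                    best = max(best, 0.5 * sum(norm(b) for b in blocks))
        return best

    return norm(tuple(sorted(coords)))
```

The point of the sketch is that the norm is generated by an inductive closure process over admissible families of sets of natural numbers, not by a closed formula.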

I think the difficulty is that we have been placing notions of definability on a hierarchy relative to the natural numbers N. (I would think that model theory and Morley rank extend this hierarchy beyond the finite ordinals, but it still begins with $\omega$ and is still relative to N.) In other words, these notions of definability have to do with recursive processes that are arithmetic, and as such foreign to the real numbers, which are more geometric, or at least analytic.

Gowers defined combinatorial spaces, a definition he himself didn’t like, describing it as not capturing the notion of “definable Banach space”. This notion of combinatorial spaces does not even include the $\ell_p$ spaces.

Chow and Dorais have considered the amount of recursion needed to define the norm, or to describe the functions that the norm can give rise to. It has not been easy to separate the Tsirelson space, or a variant of it, from such weak versions of recursion.

Combinatorics and weak versions of primitive recursion are opposite ends of a spectrum, but both ends are relative to the natural numbers N.

I’m arguing that we should have some kind of R-combinatorial notion. For instance, the supremum operation, which takes a bounded subset of R and returns its supremum, is a basic operation in analysis. In fact, it is basic to the axiomatic definition of the completeness of the reals. How about a language that allows the supremum operation on top of the basic ring operations?

Let me try to formalize what it means for a subspace of a function space to be a “definable Banach space”. Here X is an abstract measure space. (The purpose of using an abstract measure space instead of N with counting measure is to make clear that I don’t want to rely on N.)

This language will have two types: one for functions and the other for real scalars. The two-typed language consists of the ring operations on scalars and the ring operations on functions. The key feature is the supremum operation.

The idea is somehow to be able to express integration. If we can express integration, then a “definable norm” could be one whose integrand is a definable formula in this two-typed language. For instance, to define the integrand for the $L_p$ norm, we first notice that it can be built from iterated multiplication and suprema: integer powers are just iterated multiplication, rational powers amount to solving a polynomial equation, and taking arbitrary $p$-th powers can be expressed as a supremum of $L_{p_n}$-norms for a rational sequence $p_n$ increasing to $p$.
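As a toy illustration of this reduction (hypothetical function names, not any standard library): an integer power is iterated multiplication, a rational power $x^{m/n}$ is the solution of $y^n = x^m$ (located below by bisection, which uses only ring operations and comparisons), and a real power of $x \ge 1$ is a supremum over rational powers increasing to $p$.

```python
def rational_power(x, m, n, tol=1e-12):
    """x^(m/n) for x >= 0: the unique y >= 0 with y**n == x**m,
    located by bisection using only ring operations and comparisons."""
    target = x ** m                  # integer power = iterated multiplication
    lo, hi = 0.0, max(1.0, target)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** n < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def real_power(x, p, steps=32):
    """For x >= 1: approximate x^p as the supremum of x^(m/n)
    over rationals m/n <= p with small denominators."""
    return max(rational_power(x, int(p * n), n) for n in range(1, steps + 1))
```

The supremum over an increasing rational sequence is exactly the completeness axiom being invoked as a computational primitive.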

What I suggest is not complete and will likely contain many unrigorous parts. What I’m pushing for is a notion of “explicitly definable Banach space” that matches the manner in which we usually define such spaces in class: take the basic operations, like integration and supremum, and find languages to describe these operations. This is the basic process of going from examples to axiomatization.

What I’m cautious about is the attempt to reduce this manner of definition to other kinds of definability/computability hierarchies, like those in recursion theory or model theory. Recursion theory is an attempt to define the complexity of functions from N to N.

We should look at definability of Banach spaces per se, on its own account, and not try to reduce it to another system. We are already aware that the ordinal hierarchy leads to undecidability when dealing with geometry and the real numbers. Let’s try to work with the real numbers as a primitive concept, and not in terms of the natural numbers.

This is my appeal.

Our comments began with proposals for formalizing “explicit definability” via the complexity of the norm (combinatorial) and via asymptotic notions of complexity (analytic). Towards the end, as we focused on how model theory could help in this formalization, the proposals involved arithmetic versions of definability.

Since Banach spaces use the real numbers as their base field, the real numbers should be treated as a primitive, atomic object. When recursion theory is used, the real numbers are coded, resulting in arithmetic versions of complexity that are foreign to analytic or combinatorial presentations of Banach spaces.

Banach spaces have a geometry, and having $c_0$ or $\ell_p$ as a subspace can be seen as a regularity property of that geometry. In general, geometry involves the continuum, and arithmetic is not well suited to describing geometry. My suggestion is to focus on formulations of “definability” that are native to the real numbers, and not to employ foreign arithmetic systems to judge the complexity of a definition.

What might tip the balance in favor of Explanation 2 would be something like this. One comes up with a Banach space whose norm is weird-looking but which most people agree is “explicitly defined.” One then proves that it doesn’t contain $c_0$ or $\ell_p$. This “new example,” however, turns out not to be so new after all, because it’s still based on the same idea underlying the Tsirelson space; by some clever tinkering one has merely managed to shift all the funny induction out of the definition and into the proof. This would be evidence that “explicitly definable norms” won’t save the day. At the same time, it would still suggest that a strong induction principle is needed to prove the existence of pathological examples.

On the other hand, if it’s provable in RCA_0 that Schlumprecht’s space is a counterexample, then that would be evidence that it really is the definition of the norm that’s the culprit, rather than the logical strength of “there exists a separable Banach space not containing $c_0$ or $\ell_p$.”

Thanks for spotting the typo in Question 2: I did mean equivalence of bases, not of norms.

Thanks also for pointing to Christian Rosendal’s result. One of the relevant papers [Cofinal families of Borel equivalence relations and quasiorders, J. Symbolic Logic 70 (2005)] is not on his website, but the other one is here.

Do you remember a reference for the result on universal HI spaces?

Anyhow, this kind of problem amuses me because of its closeness to the Paris–Harrington theorem. But whereas the Paris–Harrington theorem can be proved by applying the infinite Ramsey theorem, there is no hope of doing that here, since Tsirelson’s space doesn’t contain $c_0$ or $\ell_p$. So the problem could be very hard indeed (or if not this one, then another of a similar kind that I know of), since one would not expect a proof to be possible in PA, but one cannot use infinitary methods either.

To show that a space with norm $\|\cdot\|$ doesn’t contain a block sequence $C$-equivalent to the unit vector basis of $\ell_p$ is equivalent to showing that the tree of all finite normalized block sequences that generate subspaces $C$-equivalent to $\ell_p$ is well-founded. (The block sequences are ordered by end-extension; you may have to flip the tree upside down if your definition of well-founded goes the opposite way to the way your trees grow.) The ordinal ranks of these trees are the “ranks associated to $\ell_p$” that I alluded to in my comment after Question 1.
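As a small illustration of such a rank (hypothetical names, and only a finite toy tree: for the infinite trees in question the maximum becomes a supremum of countable ordinals):

```python
def tree_rank(children, node=()):
    """Rank of a well-founded tree: leaves have rank 0, and an internal
    node has rank one more than the maximum rank of its children.
    (For infinitely branching trees, 'max' becomes a supremum of ordinals.)"""
    kids = children.get(node, [])
    if not kids:
        return 0
    return 1 + max(tree_rank(children, kid) for kid in kids)

# Toy tree of "finite block sequences" ordered by end-extension:
# the root () extends to (1,) and (2,); (1,) extends further to (1, 2).
toy = {(): [(1,), (2,)], (1,): [(1, 2)]}
```

Showing that a space contains no such block sequence amounts to showing that every branch of the corresponding tree dies out, so that this recursion is well defined and yields a countable ordinal.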

The problem with generic and randomized constructions is that they don’t lend themselves easily to constructing well-founded trees, except perhaps “by accident.” Tsirelson’s space is an example where the associated trees are rather short; I don’t know what good would come of trying to construct a space whose trees were rather tall. It seems that a successful generic or randomized construction would have to cleverly avoid the well-foundedness issue altogether…

Remark: The final part of Iovino’s argument consists precisely in showing that one of the trees is not well-founded.

My descriptive set theory background is very sketchy, but by the relation of equivalence of norms on $BN$ I think you mean the equivalence of bases. The complexity of this relation was computed by Rosendal, and it is a complete $K_\sigma$ relation. The relation of isomorphism, though, is complete analytic on $SB$ (I see this notation more often), the standard Borel space of separable Banach spaces.

About Question 1. I am not sure whether the class of Banach spaces not containing $c_0$ or $\ell_p$ is a “nice” subset of $SB$, because otherwise you could possibly construct a universal space for this class that is not universal for all separable Banach spaces. But I think it is shown that any space universal for all HI spaces is universal for all separable spaces.

First, one proves that every subspace (which I’ll use as shorthand for “subspace generated by blocks,” because standard techniques show that we can restrict attention to those) contains copies of $\ell_1^n$ for all $n$. These copies aren’t quite isometric, but the distortion is uniformly bounded, and can in fact be made arbitrarily small. One can even say more: if you want a $(1+\epsilon)$-accurate copy of $\ell_1^n$, then you can find it in the subspace generated by any $n$ blocks. The proof is easy but I won’t give it here.

This rules out $c_0$ (again very easily) and all $\ell_p$ spaces, with the possible exception of $\ell_1$. It remains to prove that you don’t get $\ell_1$.

Now another result that’s easy to prove — ah, but perhaps this is interesting because the proof is infinitary — is that any space isomorphic to $\ell_1$ has a (block) subspace that is $(1+\epsilon)$-isometric to $\ell_1$. The proof is again quite easy: if you know that the norm dominates a fixed multiple of the $\ell_1$-norm, then you pick a block sequence of unit vectors, all the time making the norm of combinations as small as it can be compared with the $\ell_1$-norm. Then the ratios may increase, but you get close to the limit and then discard all the initial vectors that were far from the limit. From that point, the triangle inequality tells you that the ratio is never bigger, and the construction tells you that it is never smaller. (Sorry — I didn’t write that carefully and I’ve probably got my smallers and biggers muddled up. But I just wanted to give the logical strength of it — it’s similar in flavour to the proof of the result that a bounded sequence has an increasing or a decreasing subsequence.)

Anyhow, all that remains is to show that in any subspace you can find unit vectors $x_1, \dots, x_k$ such that $\|x_1 + \dots + x_k\|$ is much smaller than $k$. This is done as follows. First you choose vectors that *do* generate a copy of $\ell_1^n$. Let $x_1$ be the average of these. If $N_1$ is the maximum of the support of $x_1$, then you start beyond $N_1$ and construct $x_2$ in the same way. You keep going like this. The rough idea of the proof is that in order to do anything useful to $x_j$ you need a great many sets $E_i$, so they have to start after the support of $x_j$, by the admissibility condition. And this can be used to complete the proof.

As I say, my main point is not to make the argument properly comprehensible, but just to demonstrate the sort of induction used in the proof.

My question now to Tim is this. In your Explanation 2, are you imagining the *definition* of Tsirelson’s space as an important part of the proof that some separable space does not contain $c_0$ or $\ell_p$, or are you more interested in the proof, given the definition? My instinct is that the very first step — that there exists a norm that satisfies a certain equation — is crucial.

I just want to throw out another thought — a rather fanciful one. Suppose one tried to answer the main question by finding a genuinely new type of proof that there was a space that did not contain $c_0$ or $\ell_p$. What might such a proof conceivably look like? It seems somehow unlikely that it would consist in giving an unexpectedly clever “direct definition” (whatever that means), but a thought that I have sometimes entertained, and got absolutely nowhere with, is that it might be possible to prove the existence of Banach spaces with strange properties by coming up with some notion of “generic” and proving that a generic Banach space had those properties. I don’t mean anything probabilistic or measure-theoretic: the problem is too infinite-dimensional for that (though there have been some interesting results of a rather different kind proved by pasting together random finite-dimensional spaces). But I have occasionally attempted to do something more Baire-category-ish, with complete lack of success so far. I mention this because it is relevant to Tim’s message: if there were a strange indirect proof of existence, then it would affect the formulation of any result one might try to prove, since one could no longer say that any proof had to go via constructing a Tsirelson-like space, though it might be possible to say that any norm you could define had to use some sort of Tsirelson-like induction.

In the interest of involving as many fields as possible, here are two (possibly easy) questions for descriptive set theorists. The two questions are related to the space of all normalized norms on $c_{00}$ (the finitely supported sequences), i.e., the space of continuous functions that satisfy the usual norm inequalities and that map each standard basis vector to $1$. This is a Polish space with the natural topology.

Question 1. The set of all elements of $BN$ that don’t contain block sequences equivalent to $c_0$ or $\ell_p$ is clearly $\mathbf{\Pi}^1_1$. Is it complete (with respect to Borel reductions)?

Comment: Each $\ell_p$ (and $c_0$) has an associated rank that looks well behaved. Barring any unusual interactions between these, I don’t see why it wouldn’t be complete. However, my intuition about such things has been wrong before.

Question 2. The equivalence (of norms) relation on $BN$ is a Borel equivalence relation. Where does it sit in the classification of Borel equivalence relations?

Comment: I haven’t thought about this question much. The answer may be very easy, very hard, or even well known (to everyone but me).

The motivation behind these questions is as follows. Whatever “explicitly definable” means, it likely corresponds to a Borel subset of $BN$. If the answer to Question 1 is positive, then this Borel set can only capture a “small part” of the norms that contain no $c_0$ or $\ell_p$. Therefore, it is unlikely that such a set would stand out as the obvious notion of “explicitly definable” (i.e., one would always be able to find a reasonable-looking norm that is not “explicitly definable”). Also, the answer to Question 2 may partly explain why the Main Question is hard.

PS: Does $BN$ already have a name? I gave it the initials BN, for Banach Norm, but I would rather use existing notation, if any.

Explanation 1: There is some as-yet-unspecified concept of “explicitly definable Banach space” such that every explicitly definable Banach space contains $c_0$ or $\ell_p$.

Explanation 2: There is in fact no reasonable definition of “explicitly definable Banach space,” but any *proof* that some separable Banach space does not contain $c_0$ or $\ell_p$ has to use some (mildly) exotic induction axiom.

While it’s true that the “Tsirelson space” in the nonstandard model of RCA_0 doesn’t tell you anything directly about the Tsirelson space in the real world, and thus does not help you with Explanation 1, it *would* be relevant to Explanation 2 (if Explanation 2 is correct).

Although the intuitions of everyone else here seem to be in favor of Explanation 1, I want to pose Explanation 2 as a possibility. Even if Explanation 2 is wrong, I for one would like to see an explicit articulation of the reasons for believing that the explanation for the observed phenomenon lies in the *definability of the norm* rather than in the *provability of the theorem*.

The key fact is that to say that a separable Banach space doesn’t contain $c_0$ or $\ell_p$ is a $\Pi^1_1$ statement. Therefore, since Tsirelson’s space already exists in the minimal $\omega$-model of RCA_0, the statement “there is a separable Banach space that doesn’t contain $c_0$ or $\ell_p$” is true in every $\omega$-model of RCA_0.

So any model of RCA_0 + “every separable Banach space contains $c_0$ or $\ell_p$” will necessarily be nonstandard. Because of this, one can’t draw any kind of reasonable conclusion from it about what happens in the real world. For example, we know that there is a nonstandard model of RCA_0 where Tsirelson’s space (as defined in that model) contains a copy of one of these spaces. That doesn’t say anything about the real Tsirelson’s space; it only says that this particular model of RCA_0 is very strange.

Heuristically, since ordinals are omnipresent in constructions related to Tsirelson-like spaces, the various properties of these spaces are likely each equivalent to the well-orderedness of various ordinals. In other words, we know exactly what type of reversals one can expect from these spaces. Something different from this would be very interesting. In this vein, let me restate a question that I asked earlier: are there interesting properties of Tsirelson’s space whose proof uses the Nash-Williams Barrier Partition Theorem?

It’s true that even if one succeeds in proving something like “it is unprovable in RCA_0 that not every (separable) Banach space contains $c_0$ or $\ell_p$,” one does not thereby obtain a precise concept of “explicitly definable Banach space.” However, it would at least partially answer the question of why the counterexamples necessarily involve some kind of intricate induction.

Besides extracting a fast-growing function from Schlumprecht’s space, there are some other ways of demonstrating unprovability in RCA_0. For example, one could try to deduce weak König’s lemma from it. That would be reverse mathematics in the true sense: proving an axiom from a theorem. However, it might not even be true that the axiom follows from the theorem.

Another way would be to come up with a model of RCA_0 in which every separable Banach space *does* contain $c_0$ or $\ell_p$.

Since this is intended to be a brainstorming session, it would be helpful if you could share some of your insight into what seem to be promising or dead-end approaches to “explicit definability.” Vague ideas and indefinite perceptions are acceptable; no one is expecting definite answers at this stage.

FD’s description of the main ideas behind my papers on type definability and stability is quite accurate. I couldn’t have given a better explanation.

I actually wrote those papers motivated by Gowers’ question. However, I reached a dead end because, aside from the fact that the existence of enough definable types guarantees the existence of $\ell_p$ subspaces, I couldn’t see (and I still don’t see) a clear connection between the notion of type definability and the notion of “explicit definability” for norms.

Answer to BS: the results are both about $\ell_p$ occurring as a spreading model and about it occurring as a subspace; the connection is, as FD indicated, that when the spreading model is given by definable types, the information can be brought down from the spreading model to the base space.

Say that a set is “elementary open” if it can be built from the basic sets

and

by taking finitely many unions and intersections. If a function is uniformly continuous on bounded sets, then it is (positive quantifier-free) definable iff, for every interval and every tolerance, its sublevel sets can be approximated by elementary open sets to within that tolerance.

Thus, definability can be seen as a very strong form of continuity.

Because of their finitary nature, elementary open sets are “preserved” in ultrapowers. This puts a lot of constraints on possible extensions of definable functions in ultrapowers (e.g., uniqueness of heirs).

I wonder whether definability is a fundamentally new idea from logic or whether it already existed in analysis (in some equivalent form). Similar (perhaps better) ideas could have appeared around the same time as Banach space ultrapowers gained popularity.

First, he shows that condition (1) implies the existence of stable types. These are types for which the convolution defined on their span extends to a separately continuous, commutative convolution on the closure of that span in the space of types.

Then, he further refines the stable type so that he can pick a $p \in [1,\infty]$ such that $\ell_p$ (where $\ell_\infty$ is code for $c_0$) is block finitely represented in every element of the closure of the span. (This is a variation on Krivine’s Theorem.)

Finally, he builds a sequence equivalent to the unit vector basis of $\ell_p$ in countably many steps. To do this he has to push and pull finite pieces of the sequence in and out of the types and use the convolution to glue them back together in a coherent way. (This step is not an explicit construction; it is an existence proof of the type “there is a way to do this without ever shooting yourself in the foot.”)

Note that this argument is basically a “localized” version of the proof of the Krivine-Maurey theorem that Iovino gives in the Applications paper.

The error in 11.9 of the Semidefinability paper may be just a minor omission. Iovino assumes that the spreading model for case (1) comes from a definable type (as in the matching case of 11.7). In this case the spreading model appears to be unique (see 6.1 of the Definable Types paper).

Even that is not clear. In the follow-up paper the connection with spreading models is explicitly stated in Theorem 11.9, but the proof doesn’t seem to be correct. The proof claims that if a sequence generates a spreading model 1-equivalent to the unit vector basis of $\ell_1$, then so do all of its block bases. A counterexample: take a block basis $(x_n)$ which generates such a spreading model in the Schlumprecht space $S$ (existence is proved by Kutzarova and Lin). Since the space is block minimal, some block sequence of $(x_n)$ is equivalent to the unit vector basis of $S$, which is subsymmetric and generates $S$ as a spreading model.
