Let me briefly try to defend my choice of problem. I wanted to choose a genuine research problem in my own area of mathematics, rather than something with a completely elementary statement or, say, a recreational problem, just to show that I mean this as a serious attempt to do real mathematics and not just an amusing way of looking at things I don’t really care about. This means that in order to have a reasonable chance of making a substantial contribution, you probably have to be a fairly experienced combinatorialist. In particular, familiarity with Szemerédi’s regularity lemma is essential. So I’m not expecting a collaboration between thousands of people, but I can think of far more than three people who are suitably qualified in the above way.
Other criteria were that I didn’t want to choose a famous unsolved problem, or a problem where I had no idea whatsoever where to start. For a first attempt, it seemed a better idea to choose a problem that I’d love to solve, about which I already have some ideas, but in which I don’t (yet) have a significant emotional investment.
Does the problem split naturally into subtasks? That is, is it parallelizable? I’m actually not completely sure that that’s what I’m aiming for. A massively parallelized project would be something more like the classification of finite simple groups, where one or two people directed the project and parcelled out lots of different tasks to lots of different people, who went off and worked individually. But I’m interested in the question of whether it is possible for lots of people to solve one single problem rather than for lots of people to solve one problem each.
However, my contention would be that any reasonably complex solution to a problem is somewhat parallelizable and becomes increasingly so as one thinks about it: when one solves a problem, one doesn’t first try to guess what Lemma 1.1 might be, but instead one tries to think of relevant mathematical statements that one has a chance of proving. And often when one gets stuck on one of these, one isn’t stuck on the main problem, because there are still several unexplored avenues. If lots of people were working on a problem, then all these avenues could be explored at once — but they would have to be created first.
With this seemingly narrow project — to try to decide whether one particular approach can be made to work for one (interesting) special case of a theorem that has already been proved by different methods — there are already a few different things to think about. For example, there is the question of whether the graph has any useful properties that can be exploited, whether one can give it a useful and natural system of edge weights that makes it easier to handle, what one would actually want of a graph in order for a sparse regularity argument to be feasible, what sparse regularity statements there actually are out there, whether the averaging trick could somehow allow one to operate in a dense part of the graph, despite the objection in comment CC, whether there is some false statement that would need to be true for the approach to work, and so on.
I have two other ideas for projects of this kind if this one turns out not to work very well, but I’ll keep those to myself for the time being.