Can Polymath be scaled up?

As I have already commented, the outcome of the Polymath experiment differed in one important respect from what I had envisaged: though it was larger than most mathematical collaborations, it could not really be described as massive. However, I haven’t given up all hope of a much larger collaboration, and in this post I want to think about ways that that might be achieved.

First, let me say what I think is the main rather general reason for the failure of Polymath1 to be genuinely massive. I had hoped that it would be possible for many people to make small contributions, but what I had not properly thought through was the fact that even to make a small contribution one must understand the big picture. Or so it seems: that is a question I would like to consider here.

One thing that is undeniable is that it was necessary to have a good grasp of the big picture to contribute to Polymath1. But was that an essential aspect of any large mathematical collaboration, or was it just a consequence of the particular way that Polymath1 was organized? To make this question more precise, I would like to make a comparison with the production of open-source software (which was of course one of the inspirations for the Polymath idea). There, it seems, it is possible to have a large-scale collaboration in which many of the collaborators work on very small bits of code that get absorbed into a much larger piece of software. Now it has often struck me that producing an elaborate mathematical proof is rather like producing a complex piece of software (not that I have any experience of the latter): in both cases there is a clearly defined goal (in one case, to prove a theorem, and in the other, to produce a program that will perform a certain task); in both cases this is achieved by means of a sequence of strings written in a formal language (steps of the proof, or lines of code) that have to obey certain rules; in both cases the main product often splits into smaller parts (lemmas, subroutines) that can be treated as black boxes, and so on.

This makes me want to ask what it is that the producers of open software do that we did not manage to do. I may not have the right answer to this question, but I do have a suggestion. Again, I have to admit that there is a lot I do not know about how open software is produced — for example, is there some big-picture planning stage before people start actually writing code, and if so, how is it organized? I’d be interested to hear from anyone who can answer this kind of question, and my suggestions may well need to be refined in the light of the answers.

Here, though, is my preliminary diagnosis. What I think we did that made it hard for all but a few people to contribute was to work on something that was not the final document: instead, we used blog comments in order to produce a high-level plan, which gradually became more detailed and precise. The comments were more like a conversation, and once an idea was digested by the participants one could leave the relevant comment behind and move on. Of course, some of the ideas made it into the eventual proof, but by that time they had been discussed in several other comments and their outer form had often been substantially modified.

Now it might seem as though we could not have done otherwise: it looks like a pretty essential feature of solving an unsolved mathematical problem that one does not know in advance what the proof will look like; and it also seems as though the best way to find out is to start with high-level thoughts and lower the level only when a thought seems to be promising. I do not want to deny any of that. What I would like to suggest is that we change what we think of as the “final document”. The obvious notion of “final document” is a write-up of the argument that eventually works, but there is a different notion that might serve as a better model for Polymath projects. I’m not quite sure what the best name for it is, but a first attempt is “proof-discovery tree”.

I am not going to discuss what the best implementation would be, because I do not know enough about what wiki-like facilities are out there, but the basic idea would be to produce an online document that thoroughly investigated all reasonable approaches to the main problem and arranged them in a natural hierarchical way. If, say, this was done on a wiki, then the main page would have links to subsidiary pages that would discuss very general ideas. (In the case of DHJ(3), one of these would have been the idea that we started with, namely, to model a proof on the triangle-removal approach to the corners problem.) Each of these general pages would naturally throw up several questions, and there would be links from these questions to pages at the next level of the tree, one page devoted to each question. A big priority in all these lower-level pages would be to make them as self-contained as possible, so that one could treat the whole process recursively: in theory you could go to a lower-level page and treat it just as you would the main problem, proposing general approaches to it, asking questions connected with those approaches, and so on. Of course, some of these lower-level questions might not be very interesting in isolation, but if you wanted to understand the motivation for them then all you would have to do is go up a level or two to see how they arose.
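
To fix ideas, here is a minimal sketch of the underlying structure in Python (every name here is illustrative only, and not a feature of any actual wiki software):

    from dataclasses import dataclass, field

    @dataclass
    class Page:
        """One node of a proof-discovery tree: a self-contained wiki page."""
        title: str
        body: str = ""
        children: list["Page"] = field(default_factory=list)

        def add_question(self, title: str, body: str = "") -> "Page":
            """Open a lower-level page for a question this page throws up."""
            child = Page(title, body)
            self.children.append(child)
            return child

    # The DHJ(3) example, one level deep:
    root = Page("DHJ(3)", "Statement of the density Hales-Jewett theorem.")
    approach = root.add_question(
        "Approach 1: mimic the triangle-removal proof of the corners theorem")
    approach.add_question(
        "Is there a usable analogue of the regularity lemma for the"
        " disjointness graph?")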

What would determine when a branch of this tree of web pages ended? It would be when a question was definitively answered. This definitive answer could well propagate up a few levels in the tree: for example, it might be a counterexample to a conjecture one level up, which might itself be so obviously necessary for the success of an approach outlined one level further up that that approach could be definitively labelled as not working (in which case one would put a note to that effect, but leave the lower parts of the tree that explained why it didn’t work), and so on. A complete proof-discovery tree could then be defined as one where all its branches had definitive endings, though it seems highly unlikely that this would ever be achieved for a problem such as DHJ. A successful proof-discovery tree could perhaps be defined recursively as follows: at the top level, one would have a precise approach to the problem, with links to subproblems that would be sufficient, if solved, to solve the main problem, and each of these subproblems would have successful proof-discovery trees. The base case would simply be a question with a definitive answer.
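
This recursive definition translates almost directly into code. Here is a sketch, reusing the hypothetical Page class from the sketch above and assuming some record (here a simple set called answered) of which questions have received definitive answers:

    def successful(page: Page, answered: set[str]) -> bool:
        """Is `page` the root of a successful proof-discovery tree?

        Base case: a leaf question that has been definitively answered.
        Recursive case: every subproblem page hanging off this one is
        itself the root of a successful proof-discovery tree.
        """
        if not page.children:
            return page.title in answered
        return all(successful(child, answered) for child in page.children)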

In general a proof-discovery tree would have far more information than just a proof of a theorem: it would contain explorations of many other related ideas, and they would be organized in such a way that even if the tree was not a successful one, the document would make it easy to see what ideas had been considered and either rejected or temporarily abandoned. Such a document would be similar to the long sequence of comments that resulted from Polymath1, but with two differences, one major and one minor. The minor difference is that not everything in those comments would be worth including in a proof-discovery tree. The major difference is that the logical structure of the mathematical ideas would be far more apparent. This would be a huge advantage for anybody who wanted to contribute to the project: they could simply follow a branch that interested them until they got to the end, and at that point they would attempt to make a contribution. Or if they preferred they could jump straight to a fairly deep point in the tree and think about a subproblem in isolation.

How might this all work in practice? I think it could be done in a way that is not too different from the way Polymath1 was organized, but there would be a change of emphasis. Instead of the blog conversation being seen as primary and the wiki being an add-on, the more codified proof-discovery tree would be the main focus of attention. However, there would still be a blog conversation. A typical contribution to the collaboration might be the creation of a new page of the proof-discovery tree and a brief explanation on the blog of what one had done and why. But it might be a minor edit to the proof-discovery tree (perhaps to make some page easier to understand), in which case a blog comment would not always be necessary — though for more elaborate edits it probably would be.

Why bother with the blog comments at all? Well, the linear structure of Polymath1 had definite advantages as well as disadvantages. The main advantage was that it was easy to find out what had been done recently. It was also good to have personal contact with other participants and to keep track of who had said what. And it would almost certainly be useful to be able to make blog comments that did not obviously fit into a proof-discovery tree.

My hopes for Polymath1 before it started were that it would be possible to make contributions without much effort. My reason for believing in this possibility was that, as Michael Nielsen elegantly put it, the solution to a problem arises as a result of an aggregation of small insights. I hoped that it would be possible to break the process of discovery down into sufficiently small steps that each one was fairly easy.

To some extent, that is what happened, but a serious problem with the idea was that, as I have already mentioned, having a small insight often depends on a rather deep understanding of the problem at hand. (If nothing else, this understanding helps one recognise which ideas are likely to be helpful.) A second problem is that a small idea can often depend on some other rather large ideas. For instance, “Mimic the proof of Theorem X” could be a small idea in the sense of being an idea that one can think of quickly and express in just six words, but quite a big idea in another sense if the proof of Theorem X has many stages, some quite technically complicated.

If we were to organize a Polymath project in the way that I am suggesting, then these two difficulties could be alleviated to some extent. We would value very highly the formulation of precise questions with yes/no answers, because such questions can be considered in isolation. Somebody adding such a question to the proof-discovery tree would be expected to explain it very carefully, and to present it as though it were the main question. This would be quite a lot of work, but the payoff would be that others would find it much easier to contribute. And in any case that kind of work could also be done collaboratively. Also, if somebody proposed a high-level approach such as “Mimic the proof of Theorem X,” the expectation would be that they, or others, would add links to detailed explanations of what Theorem X was and how it was proved.

If Polymath1 had been organized this way, then my initial contribution would have been to add to the top-level page (the main content of which would be a description of the density Hales-Jewett theorem), “Approach 1: mimic the triangle-removal approach to the corners theorem.” This would have been a link to an article explaining that idea in more detail. On the more detailed page there would have been the definition I gave of the tripartite graph in which triangles corresponded to combinatorial lines. There would also have been a link to a page explaining the triangle-removal proof of the corners theorem (on which there would have been a link to a page explaining the statement and proof of Szemerédi’s regularity lemma). And there would have been an enumeration of the definitions and proof steps associated with the proof of the corners theorem for which analogues were needed. These would have been hyperlinked to pages discussing them in more detail. One of these pages would, for example, have been a link to a page with the following subproblem: is there a usable analogue of Szemerédi’s regularity lemma for subgraphs of the tripartite graph where you join two sets if they are disjoint? And so on.

I have slightly oversimplified matters, because the logical structure of a proof-discovery tree would not always be as clear-cut as the above account would suggest. For example, if somebody proposes a variant of a question on the grounds that it might illuminate that question, then how does such a proposal fit into a proof-discovery tree? Here, I would envisage sticking to the tree structure, but making clear that one was not talking about formal logical implication. For instance, if a node of the tree was concerned with trying to prove Statement A, then Approach 1 might be, “Prove Statement B first, and then modify the argument to give a proof of Statement A.” There would be no guarantee that this approach would succeed, even if one managed to prove Statement B, but one could still have a link to a page all about Statement B. If the Statement B node ended up as a successful one, then one would go back to the Approach 1 node and follow a different subtree that was devoted to the question of how to modify the (now known) proof of Statement B. That subtree would be connected to Statement A in a more precise logical way.
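
In terms of the earlier sketch, this would amount to nothing more than labelling each edge of the tree with the kind of relationship it represents, instead of treating every edge as logical dependence. Again, the names below are purely illustrative:

    from enum import Enum

    class EdgeKind(Enum):
        """How a child page relates to its parent."""
        IMPLIES = "solving the child is formally sufficient for the parent"
        MOTIVATES = "the child merely illuminates or motivates the parent"

    # Statement B hangs off the Statement A page as motivation only; the
    # later subtree on modifying the proof of B would carry an IMPLIES edge.
    edges = {("Statement A", "Statement B"): EdgeKind.MOTIVATES}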

My ultimate fantasy is that it might be possible to solve a problem without anybody taking a global view: everybody would concentrate on local questions, and at some point the proof-discovery tree would become a successful one. Success would be a purely formal property that one could verify automatically. How? Well, each time a node of the tree became the root of a successful subtree, you would go to its parent node and make a note to the effect that one of the ingredients needed for that node to be successful had now been supplied. If it was the last ingredient then you would iterate this process, and if the top node became successful then you would be done.
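
The bookkeeping just described might look as follows, where parent and done are hypothetical fields and where, for simplicity, I ignore the distinction made earlier between formally sufficient children and merely motivating ones:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Node:
        """A tree node with the bookkeeping the fantasy requires: a link
        to its parent and a flag recording whether the subtree below it
        is already successful."""
        title: str
        parent: Optional["Node"] = None
        children: list["Node"] = field(default_factory=list)
        done: bool = False

    def supply_ingredient(node: Node) -> None:
        """Note that `node` now heads a successful subtree.  If that was
        the last ingredient its parent needed, iterate upwards; if the
        top node becomes done, the problem is solved.  Every step is a
        purely formal, local check."""
        node.done = True
        parent = node.parent
        if parent is not None and not parent.done and all(
                c.done for c in parent.children):
            supply_ingredient(parent)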

I should explain further what I mean by “global view”. In one sense the initial, very general nodes of the tree would constitute global views of the problem. But that is not what I am talking about. These nodes would still be local when considered as part of the proof-discovery tree: it might be possible to understand that a certain general approach was worth trying but have only a very limited appreciation of how that idea could play out. So what I mean by “global view” here is a good understanding of a large part of the tree rather than merely an understanding of the vertices that happened to be near the root.

If some fantasy like this became a reality, so that in an essential way the problem was solved by a collective super-brain and not by the combined global understanding of a handful of individual brains, then the problems about credit would be even more interesting. An ultimately successful Polymath project would be one in which nobody had done anything very impressive (just as a neuron doesn’t have deep thoughts) and yet the collective achievement was a notable one. Why would anybody want to contribute to such a project? I’m not sure, but I’m also not sure why so many people are prepared to give so much of their time to the open software movement. Perhaps it might be for some strange reason like wanting to know the solution to an interesting mathematics problem.

43 Responses to “Can Polymath be scaled up?”

  1. Gil Says:

    Dear Tim,

    Actually the success of the project was beyond my expectation (and beyond any reasonable fantasies) both in terms of achieving the mathematical goal and in terms of the open collaboration. Your initial post contained a very detailed plan with 38 steps (A-Z+AA-LL) for an attack on the problem, and when I saw it I was quite pessimistic that this plan could lead to a large open collaboration. In the end it was a large open collaboration. To have a successful open collaboration even without a massive number of contributors is already significant.

    Here there was a fair number of collaborators and an even larger number of precious participants who observed the progress.

    Probably in order to make it larger you need to have less internal, often hectic, “competition” in the collaboration itself. On the other hand, the intense mode of the efforts by a few participants was the major reason for the mathematical success of this project.

    As you mentioned, problems that require much background, and that a few people have already thought a lot about, are probably worse in terms of “massive collaboration”. Your suggestion for polymath3 to openly discuss your ideas for Behrend-type upper bounds for Roth’s problem is a great idea for a next open collaboration; the stakes are orders of magnitude higher than DHJT, and it can be fruitful even if you are the main player, since it will force you to write these ideas in the open, and even if other people just try to shoot them down. But it could also lead to a larger form of open collaboration where the bounds are actually pushed, or pushed relative to some other plausible conjectures.

    I agree that the success of polymath1 depended on many people having some good prior understanding of the problem and related issues, and the open mode helped to aggregate these understandings. Good! This is already very significant.

    Is a large open collaboration a potentially good tool for gaining understanding collectively when we do not have it at all? We need more tries. Trying to attack a problem where little is known and little background is needed can be a good test.

    As a metameta comment: I think some of the metadiscussion is premature.

  2. name Says:

    “This makes me want to ask what it is that the producers of open software do that we did not manage to do. I may not have the right answer to this question, but I do have a suggestion. Again, I have to admit that there is a lot I do not know about how open software is produced — for example, is there some big-picture planning stage before people start actually writing code, and if so, how is it organized?”

    I believe that almost no open source project is a massive collaboration from the very start. Take for instance the Linux kernel. Before even releasing it to the public, Linus Torvalds had written a significant amount of code, a “skeleton” so to speak. Only after that did people start to contribute, adding things they needed that were missing, rewriting portions that they thought could be made better, and so on. The source code then grew by a factor of 1000 over the next 17 years (the cumulative code size being much larger). Other projects start out with a mid-sized group, like the one in Polymath1, but having something with hundreds of contributors from the start is very rare.

    A project then picks up contributors as it goes along: if your new hard drive doesn’t work with the system you might be compelled to patch the device driver so that it does, and so on. An individual contributor typically doesn’t, and need not, see the “big picture”, but for every part of the software there is someone or a group of people who maintains that piece and knows how things fit together. (In the “cathedral” model this is a chosen group; otherwise it’s someone whose track record makes people trust him or her.)

    Random comments on Polymath: (i) If you come across a fun/simple lemma, consider if you really should write down the formal proof yourself — it might be a perfect opportunity for an outsider to get into the project. (ii) Online reading seminars such as the one Tao organized are an awesome idea! (iii) Don’t make the platform overly complicated.

  3. Andrew Stacey Says:

    It’s a very interesting idea to compare mathematics with OSS. It’s something I’ve pondered a little recently. In fact, I’ve been setting up a blog/forum/wiki to explore these (and related) ideas!

    My thoughts are summarised in this:
    Open Source Mathematics

  4. Matt Leifer Says:

    One of the things that large-scale open source software products do is to offer a shallow learning curve, so that new contributors can start adding to the project straight away. Along with this, they also make it possible for individual developers to customize the software in their own way, without having to share the same “big picture” as the core developers.

    In web-based applications (where I have most of my experience) there is often a plugin architecture that achieves this. For example, in WordPress you can easily write a plugin that displays a widget on the sidebar, a new theme, or a new language translation. These things can be written after reading only a small portion of the documentation and without understanding how most of the main WordPress engine works. The vast majority of developers just stay at this level, maintaining their plugins as the core engine is updated. However, if a developer starts working on more and more sophisticated plugins, e.g. something that changes the way that the admin controls work, they gradually have to absorb more of the documentation and start looking at the source code to figure out how to do things. In this way, they begin to notice bugs and improvements in the source code and start to submit patches to the core engine. If they get even more interested, they may then become part of the team working on core development.

    It is also important that several different “big pictures” are allowed to coexist in the same project. For example, the core WordPress team want to make the best individual blogging platform on the planet. However, other groups see WordPress as a more general CMS and release distributions that come pre-installed with custom plugins for specific functionality. There is BuddyPress (a general social networking engine), WordPress MU (a multi-user, multi-blog version of WP) and something that turns a WordPress installation into a Twitter clone. Similar examples exist in other open-source projects, e.g. often people will write a version of something for the Mac that looks more Mac-native than the original – compare OpenOffice vs. NeoOffice, Firefox vs. Camino, etc.

    For most open-source projects, it is a relatively small group of developers that end up maintaining the core engine of the project. Most are working on plugins or customizations of the software for some specific purpose. Relatively few people care about the “big picture” and it is more usual to find a benevolent dictatorship rather than a democracy behind it all. In fact, these days it is fairly common to find a commercial company behind all the core development.

    The bottom line is that what you need is a “plugin architecture” for polymath projects, i.e. a way of contributing small things without having to know much about the core of the project.

  5. Chris Johnson Says:

    “Is there some big-picture planning stage before people start actually writing code, and if so, how is it organized?”
    For open-source projects, I think the model is often that the core of the program is built by one person, and this program becomes a successful open-source project if it is sufficiently interesting to, and extendable by, others. The big-picture planning (‘software architecture’) is done by the project initiator before he starts writing, and the practicalities of software development mean that once the core code has been written, it’s hard to change the structure. In commercial software, the big-picture planning will be done by a small team of expert programmers and project managers.

    This software-design stage is the part that is most like working out a mathematical proof, in the sense that it involves creative thought and lines of thought that lead one down blind alleys. In large commercial projects, there will be a formal specification of what the software is to achieve, and once the software architecture meets this specification, ‘the theorem is proved’. The actual coding stage is analogous to writing up the proof in LaTeX, using sufficiently explicit mathematical language that a computer could interpret it.

    It’s unusual for an open-source software project to have no-one who understands the overall picture, though common for some (perhaps the majority) of contributors to understand only a small part. The latter contributions are, in mathematical terms, simple corollaries to the main result or one of the main lemmas, often quite tangential to the main direction of the proof, which add a simple feature to the software to allow it to do something not anticipated or thought important by the original software designer.

  6. Daniel Says:

    Much has been correctly said about Open Source and its /modus operandi/, and I don’t mean to be pedantic or repetitive, but I’d like to add a few notches and maybe contextualize things a bit.

    The creation of UNIX is really the “critical point” in this context: at that point, before the formal creation of ‘computer science’, what was done was to apply the already galvanized principles of science (including math) to doing computing. In this sense, Free Software was born, and lo and behold, it’s a strong reflection of the principles by which we all abide when doing science: collaboration (massive or not), free exchange of information, modularization (the break-down of a problem into its constituent parts), etc. So, firstly, I believe the appropriate metaphor here would be “Free Software”, rather than the more pragmatic “open source”, but this is just a minor point.

    Having said that, I believe there’s one fundamental difference between the way to prove a theorem and its analogue in software development, namely the writing of a piece of code: when you’re developing a piece of software, it’s very possible to “algorithm-ize” each and every step of the way; and although this may be similar to the construction of a proof, a technique for proving a certain theorem may not be so “modularizable”, i.e., sometimes it’s not possible to break a theorem down to its “dumb-bits”, tiny little pieces that require very little “thinking” to be proved — which is something very common in the development of FLOSS (“Free-Libre-Open-Source-Software”). So, along this line, here’s my take on this problem:

    (1) Modularization: even though, sociologically and ecologically speaking, a FLOSS project is usually not born with all of its “management” ready (meaning, the formal breakdown of all the little pieces involved), it’s true that one of the core UNIX principles is that of maximum modularization: make one tool that does one job very well, and combine such tools to obtain a certain desired result. IMHO, this is only “perturbatively true” in math (or physics, for that matter) — meaning to say that, sometimes, it’s not possible to break down an action into its constituent bits… it’s very common for this to be possible only in hindsight (which is the very opposite of what’s done with FLOSS projects). Further, sometimes in math it’s not about “piping” the result of one tool into the input of another, but more like a “star-like” application, a multi-faceted combination of the use of these bits. And, in this sense, as Tim has already pointed out, one does need to have some sort of “global picture”, otherwise it’s all but impossible to know how to proceed.

    (2) Focus on *collaboration*: this may look like a marginal point, but the inner workings of a piece of software do model its outcome in a very clear way. For instance, wiki projects focus on a database-heavy approach, i.e., as far as wikis are concerned, it’s about accumulating data, filling the DB with information. However, from the point of view of software development, I believe that an approach like ‘git’ (i.e., a “version control”-like approach) is better suited: this approach focuses on the particular contributions /per se/, rather than on the “volume of data gathered”. While it’s true that wikis do have a very basic “version control” system, this is not their main fulcrum; for “version control” software, this is the very point — and this is the reason that, e.g., ‘git’ was chosen to handle the projects of the Kernel, Gnome, X, etc; this way, massive collaboration can be better handled.

    So, in summary, I’d say that if a person were able to break apart the proof of a theorem into completely dumb-proof bits, its massive collaboration would be optimized, since one could sit and blindly apply a few [highly modular] tools in order to ‘spit out’ the answer. However, for more intricate projects, the collaboration may need to be twofold: on the front of the proof itself, but also on the strategic front as well (which makes it a bit of a meta-collaboration).

    Cheers.

  7. Robert Says:

    Others have mentioned modularity and plug-in structure before. But what I would like to emphasize is the specification of “interfaces”: in order to contribute modules without having to understand the big picture, you need a reasonably clear specification of what the finer-grained pieces are supposed to deliver. Such a minimal interface allows you to contribute plug-ins whose only requirement is that they match the interface.
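
    In concrete software terms such an interface can be as thin as a single type signature that any contributed module must match. A minimal sketch, with all names invented purely for illustration:

        from typing import Protocol

        class Solver(Protocol):
            """The interface: the only thing a contributor must know."""
            def __call__(self, instance: list[int]) -> list[int]: ...

        def run(solver: Solver, instance: list[int]) -> list[int]:
            # The surrounding project sees only the interface, never the
            # internals of the contributed module.
            return solver(instance)

        # A contributed module: it matches the interface, knows nothing else.
        print(run(lambda instance: sorted(instance), [3, 1, 2]))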

    In the tree model of progress, there is of course also the danger of reinventing the wheel a million times: there should be a possibility of cross-connections for sub-problems that appear at different places but are of a similar nature, so that they might as well be handled all at once. To discover those, of course, a more global point of view is needed.

  8. Martin Schwarz Says:

    With regard to the similarities between OSS and mathematical proof development, I think there is one crucial difference to observe: a successful OSS project is usually released very early, when it gives only an idea of where it is about to go and what might be useful, but is already sufficiently useful to attract some expert users who both make use of it and, as they are experts, have the capability to change, extend, and generalize it to solve their higher-level goals even better. Only after the expert group has sufficiently generalized and completed the product will non-expert users be able to pick it up and use it as a black box.

    Translating to maths collaborations, I think this would mean proving important special cases first: cases that are usable by quite a number of experts, attract their attention, and induce them to generalize the special case to full generality. Or it might map to the development of good conjectures with supporting evidence, which would allow people to prove various conditional results in other contexts, and which would attract experts from those fields, motivated to remove the condition their higher-level proofs depend on.

  9. Jason Rute Says:

    Hi Tim,

    One comment I have to add is that your new suggestions on how to do collaborative math remind me of collaborative projects in formalized math, where the goal is to find a machine-checkable formal proof of a math theorem. The projects–whether done individually or in a group–often involve splitting a much bigger project into smaller subgoals, which in the case of a collaborative project can be handed out to individuals who don’t necessarily need to know the details of the larger proof.

    One example is Tom Hales’s Flyspeck project to give a formal proof of the Kepler conjecture using the theorem-proving system HOL Light. It’s hosted on Google Code under an open source license. It involves collaborators from across the world. And each can work on their part of the project without necessarily having to understand the other parts. (This is only one such project, but also possibly one of the most open ones. There are others using the Mizar, Coq, and Isabelle systems. See the December issue of the Notices for more details.)

    There are, however, some big differences between your approach and that of formal math. Formal math projects usually start from an existing informal proof (although it may still need to be modified in novel ways to make it easier to formalize). Also, it’s easier to hand out smaller projects, since doing so usually involves giving someone a proof of lemma X and saying “go formalize this.” Also, the time frame on these projects can be quite long because of the tediousness of formalizing even the simplest math.

    Yet there may still be some similarities that can be learned from. It seems in the Flyspeck case that there are two types of participants. The first are those who are “experts” in the field, who have helped Tom considerably in checking the correctness of the computer code (the reason his proof was so controversial in the first place). The others are the people newer to the scene (many undergrad or grad students) who formalize basic mathematical facts needed for the proof. To make the latter case go smoothly, Tom Hales spent (what I imagine was) considerable time making a clear outline of the facts needed for his proof and a sketch of how to prove them. This document is on the Google Code site and may have similarities to the wiki idea you have, although, as I already mentioned, the goals are a bit different.

  10. gowers Says:

    There are many interesting points here, of which one, which has been made in various ways, strikes me as the main one: that in order to make small contributions to a large software project you need to have something that’s already fairly well developed (or at least, if this isn’t an absolute necessity then it certainly makes things far easier).

    Just to make sure there is no misunderstanding, I’d like to spell out the analogy I am drawing, because I think I didn’t make it sufficiently clear. In the third paragraph I may have seemed to suggest that the analogy was between computer programs and mathematical proofs, with subroutines corresponding to lemmas, and so on. And indeed, I think there is a close analogy there. However, that is not the analogy I want to highlight. The important analogy from the Polymath point of view is between a big piece of software and a proof-discovery tree. The proposer of a Polymath project could get things started by writing down a number of thoughts and arranging them in a tree-like fashion (or perhaps some more general directed graph). That would be the analogue of an initial piece of software that did actually perform some interesting function.

    Note that the initial Polymath document would not necessarily need to prove anything: the analogue of “perform some interesting function” would be “improve understanding of the main problem”. Then the analogue of “discover that the existing software does not do what you want it to do” would be “find that the existing explanation is hard to understand”. The analogue of “rewrite part of the code so that it works on my computer” would be “rewrite part of the proof-discovery tree so that I find it more transparent”, and so on.

    In other words, the hope would be to get the project to take off by having as its main objective not so much the solving of the problem (though of course that is what one wants to be the result of the exercise) as achieving a very complete understanding of it and all its attendant difficulties.

    Another point I’d like to make very clear: if you have a tree-like structure then there is implicitly some relationship that is expected to hold between a node and its children. That relationship would not be logical dependence, but rather something like “is the motivation for”. Logical dependence from a child node to a parent node would be just one of many ways in which the parent node could be motivation for the child node.

    If somebody were to produce a proof-discovery tree that reached a certain critical mass and had lots of “open” leaves (that is, leaves with well-defined questions that were still not answered in any remotely definitive way), then I think that some of the barriers to participation would be removed.

  11. Gil Says:

    “First, let me say what I think is the main rather general reason for the failure of Polymath1 to be genuinely massive.”

    “Here, though, is my preliminary diagnosis. What I think we did that made it hard for all but a few people to contribute…”

    I do not know what masses are expected when overall the mathematical community is rather small. And I also do not fully understand what “few” means. But I think Tim’s statements are simply false.

    There were a large number of people contributing, and there was an even larger number of people following in a way which made them potential contributors if something came their way. When people said that they had followed the discussion and had some idea, but that somebody else presented it before they had a chance to, for me this is part of contributing. In fact, even people who followed it and thought about the issues and did not come up with something to say are contributors to this collective effort.

    So I think that overall the large collaboration we have witnessed is superior to the tree fantasy in the post, both in terms of its potential for solving math problems and in terms of having a large number of people be part of a collective effort.

    “My ultimate fantasy is that it might be possible to solve a problem without anybody taking a global view: everybody would concentrate on local questions, and at some point the proof-discovery tree would become a successful one.”

    What is so good about this fantasy? Having a global view is a nice part of doing mathematics, and the bottlenecks are often with technical matters and not with global views.

    “If some fantasy like this became a reality, so that in an essential way the problem was solved by a collective super-brain and not by the combined global understanding of a handful of individual brains, then the problems about credit would be even more interesting.”

    Maybe we should wait a little while with these super-brain ideas.

  12. Hunter Says:

    Re: the form of informal discussion
    (The original) Wikipedia now has a ‘discussion page’ attached to each content page… It seems to me that following this model would provide a good place for what were formerly blog comments. The discussion pages should have RSS feeds automatically generated, so collaborators could aggregate the feeds of the parts of the problem they’re interested in. If the project got large, perhaps someone would come forward to do a daily digest of a broad range of the discussions so the thing didn’t get too balkanized.

  13. gowers Says:

    Gil, the super-brain idea is perhaps a little far-fetched, and risks taking some of the joy out of doing mathematics. It would be good only if there were some problems that could be solved in that way that could not be solved with smaller collaborations, and as you say it is far from clear that that is the case.

    Nevertheless, I think that the change of emphasis I describe could be a good thing quite independently of whether it increased the number of participants by a factor of ten. If we took the wiki part of the process much more seriously, aiming to include, in a carefully organized fashion, all the promising thoughts that emerged in the blog conversation, then the result would be a document that ought to be much more useful to people who wanted to join in the collaboration. And it would also make things much easier for people who wanted to attempt the problem at a later date if it did not end up being solved by the Polymath collaboration.

  14. Gil Says:

    Dear Tim,

    Indeed you may be right. I have also tried to think about whether the idea of putting more emphasis on the wiki rather than on blog threads is good or bad, but besides my personal hunches and preferences I simply don’t know.

    Regarding polymath1, I’d love to see the discussion continue in parallel with having a wiki proof in place and letting the participants digest it. Talking about Shelahfication or Gowersfication of the approach towards better bounds, and discussing spin-off matters with the attention the project got, could be more useful (and more timely) than trying to tune the polymath mechanism itself, and I am sure many like me are eager to see what the simplest Szemerédi proof will look like, and perhaps to teach it soon.

    But the main point is this: polymath1 was a success in all respects. It should be completed (where, as usual, those who contributed the most will have the lion’s share of the writing and finishing and explaining job). And you should be proud and happy; it is not GRH, nor is it a computationally superior mode of doing mathematics (yet), but it is a very important problem, and a truly novel mode of collaboration, which attracts attention and enthusiasm.

  15. Kristal Cantwell Says:

    I think the idea of a proof-discovery tree sounds interesting. It is worth implementing to see what happens. I suspect that any growth between this and the next such project will be around a factor of two, and that a series of projects might be needed to get greater growth. Your ultimate fantasy reminds me of the Chinese room argument. It might come up as an attempt at converting a large computer proof into something that could be understood by humans.

  16. Joseph Myers Says:

    (Writing as a mathematician turned (mainly open source) software developer.)

    There is by now a substantial body of literature dealing with open source software development from a social sciences viewpoint. This might help answer some of the questions about how and why it works and what motivates people to take part. (I do not have any specific guide to or bibliography of this literature to recommend. Although Eric Raymond’s essays are certainly worth reading as one viewpoint on things.)

    Then again, the literature is likely to give several different answers to some questions; different open source projects can be very different in how their development communities operate, so you can’t necessarily draw general conclusions from a study of or experience in one community. And just as the communities differ, so too do the individual participants and how they participate. Some do make small local changes without a global picture, some make more wide-ranging changes or refactor code after subsequent changes have made it clearer how things should be structured; some projects have more scope for the local changes and others have greater need of the refactoring. Some discuss and agree on designs for more complicated changes before bringing them into effect. Some provide community leadership, official or otherwise. Some may focus on documentation, or on triage of bug reports. Some prefer to contribute pieces to larger longstanding projects and choose projects to contribute to accordingly; others prefer creating something new on their own and found new projects. (Sourceforge is littered with any number of duplicative and largely defunct projects from people who made their own instead of contributing to someone else’s project to do something similar.) It’s likely that massively collaborative mathematical research also has room for the different styles of projects and contributors.

    Studies have, for example, considered such areas as: the economics of open source software; how developers interact in open source development; the demographics of open source development (linking into a previous question regarding women contributing to Polymath, at least one study found that open source developers were 98-99% male compared to 70-80% in traditional software development, and there is a whole bibliography concerning that subject); motivations of open source developers; the extent to which the development is done by developers who are or are not paid to do it. (And where any such area is studied statistically, different results may and do arise depending on whether you count by number of developers, number of separate contributions or amount of code contributed.)

    One day social scientists may be looking at Polymath collaborations in similar ways.

  17. Michael Hudson Says:

    Two points:

    1) There is a growing belief in software development in general that the “top down” style of software/project organization, where requirements are understood before the architecture is laid out before the programming begins, is not a realistic or helpful model for how to get things done (keywords “agile”, “lean”, “scrum” etc, though it does seem to turn into a religion for some people). I have no idea what this means for collaborative mathematics — except that perhaps it’s worth thinking about not over-emphasizing the solving of a particular problem, and rather trying to develop tools that are generally useful.

    2) The way open source projects keep track of where they are and what needs to be done is often in an issue tracker, not a wiki. Have you considered using trac or Bugzilla or similar software?

    • Michael Nielsen Says:

      Two points about issue tracking software:

      (1) Issue trackers help a lot with modularizing problems (and localizing the scope of conversation), making it easier for more people to contribute in parallel, without being overwhelmed by the quantity of conversation.

      (2) In a fashion similar to Tim’s suggestion of a proof-tree, issue trackers can be used to track dependencies in a hierarchical way, even showing dependency graphs.

      (The link is to the Firefox “bugtracker”, but despite the name it’s used to handle issues other than bugs, including feature development – here’s an example.)

      I don’t think existing issue tracking software is suitable for Polymath-like collaborations, but as a successful existing mode of organization it’s a useful model, and addresses some of the problems raised in Tim’s post.

  18. Rajiv Das Says:

    You could explore a more collaborative UI, like a storyboard, that makes it easy to add/rearrange/tag ideas as well as to have a big-picture view of what’s happening.

  19. Boris Says:

    I do not understand: Polymath succeeded in solving a larger problem than it meant to solve, and it did so quickly. Why then is the “small” scale of Polymath a failure? Why is it a failure to solve a problem having expended the efforts of fewer humans rather than more? Doubling the number of people will not only double the number of thoughts produced, but also increase the number of duplicate thoughts, and increase the communication overhead regardless of the communication infrastructure used. Is it a wise expenditure of human effort? Isn’t it better if instead there are two (or more) completely independent projects, so that less effort is wasted overall?

    My personal feelings are strongly against doing mathematics without the “global picture”. Even if I concur that it is possible, I would not want to be the person at the bottom of this solution-tree, who labours on a subproblem whose relevance he hardly understands. The “ultimate fantasy [...] that it might be possible to solve a problem without anybody taking a global view” is my ultimate nightmare. The only reason I think about this or that piece of mathematics is that I want to understand what goes on. I do not care whether a theorem is true or false. If tomorrow there is a solution to some problem in which I am greatly interested, such as the diagonal Ramsey numbers or Roth in Z_3^n, but it is such that I have no realistic hope of understanding the solution, or the solution ‘cheats’ and provides no insight, it will make me sad rather than happy. I do not want to become a part of a “super-brain”; I want to understand the mathematics with my own feeble brain.

  20. Michael Nielsen » On scaling up the Polymath project Says:

    [...] Gowers has an interesting post on the problem of scaling up the Polymath project to involve more contributors. Here are a few [...]

  21. gowers Says:

    Boris, maybe it was a mistake to talk about the fantasy of lots of people rapidly solving a problem with nobody having a global picture. I would be fascinated if that were possible, but I would be no keener than you to spend my life making small contributions to lots of proofs that I didn’t really understand. However, I would like to repeat that one obtains a different and more palatable version of the fantasy if one interprets the word “local” differently: I think it could be mathematically rewarding to participate in a Polymath project and understand only a tiny part of the proof-discovery tree if that tiny part had the structure of a branch. Then one could be working on a small problem, but because one understood the entire path that led from the statement of the main problem to the small problem, one would have a clear motivation for the work, a clear sense of progress if one managed to solve it, a clear idea of where to go next, and so on. Indeed, this is very like usual research: one spends a lot of time thinking about local problems, and it is even possible to forget the big picture while one is doing so.

    I’d also add that it seems very likely that some problems are by their nature much more parallelizable than others, and this could have a big effect on the optimal size of a collaboration. I think this is the right way of looking at what happened with DHJ: I certainly don’t regard that as a failure, and the size of the collaboration felt as though it worked extremely well. Perhaps it was even more or less optimal for that problem.

    One last thing I wanted to say was to take up a point made by somebody in a comment on another post. (I apologize for not being able to remember exactly where the comment is and who made it. Gil, was it you?) They compared Polymath to what goes on in theoretical computer science, with posts corresponding to papers, which in that area are often short and made public very rapidly. There is a large subset of what goes on in theoretical computer science that could be regarded as a massive collaboration aimed at proving that P does not equal NP. One can have a rich and satisfying mathematical life contributing to this huge project, even without understanding every last detail of every promising approach to the problem. What I envisage is something like this but on a smaller scale and done online.

  22. Jason Dyer Says:

    Re: Boris’s comment.

    For massive collaboration, I think a “cloud” metaphor is better than a “tree”. We need to get away from the notion of one portion of the work being “more important” than another. Yes, there are problems that contribute more to the larger mathematical picture, ideas that are more brilliant, but side problems can be interesting in themselves (and form their own spin-off projects that have nothing to do with the original — and I think that should be allowed). For example, even if the hyper-optimist conjecture is proven false, I think Fujimura’s problem is interesting enough to be studied in itself.

    This ought to remove some of the depression of not being able to see all the parts of the cloud. This is how it is in normal mathematical development anyway.

    What is also appealing to me about a “cloud” is that there could be many different separate projects, yet everything could be united with the right technical tools. Imagine two projects having the exact same side question, and another group being formed to solve that side question, which “links” the other two projects.

  23. nicolaennio Says:

    The Polymath project is about finding a mathematical proof, but an important by-product is that collaborators gained knowledge and insights about combinatorics.

    What I mean is that this kind of collaboration is a format for teaching mathematics, in the same way that an open source project makes you learn about a piece of software or trains you to produce clearer code.

    Wouldn’t it be nice to look for “instructive theorems”? That is, theorems that should be solved by a group and that challenge people to “create” a common background that will be the real product of the collaboration?

  24. Gil Says:

    The tree-proof with nobody having a global view of what is going on is sort of a nice idea, even if a little far-fetched (not the idea itself but the position that it may lead to some sort of superior ability to prove theorems). We can try to experiment with it sometime. It seems related to automating mathematics, which is another intriguing idea.

    I wouldn’t mind spending time doing some little piece of work in a big project I do not understand. Actually, I took part in a sort of similar project: a large tree-like collaboration aimed at refereeing Doron Zeilberger’s proof of the alternating sign matrix conjecture (monotone triangles). At some point Doron regarded his new refereeing tree-process, where nobody has a global understanding but each referee only locally checks some subtree of the proof, as even more important than the proof itself. You can find the paper and the refereeing story in the Electronic Journal of Combinatorics.
    The entire proof was also checked single-handedly by David Bressoud.

    One lesson I took from that project is that a more involved directed graph is, for various reasons, better than a tree. Maybe that applies here too.

    (“There is a large subset of what goes on in theoretical computer science that could be regarded as a massive collaboration aimed at proving that P does not equal NP.”

    I think a lot of it is indeed around NP not being equal to P, but only a small part of it is really aimed at proving that NP is not P.)

  25. carnegie Says:

    Are problem solving projects really suitable for mega-collaboration? A focus on problem-solving tends to involve lots of deep thought rather than lots of broad thought. Usual caveats about no clear dichotomy etc. apply.

    Theory building projects would seem to me to be far easier to compartmentalize, to have a “benevolent dictator” (like Langlands) keeping track of the direction and management – and also for people doing “small” things to feel they are making genuine contributions whilst maintaining a sense of the bigger picture.

    In many ways Gorenstein already accomplished this, just not quickly and without the massive communication boosts the internet allows. Atiyah did too, with his “influence number theory using tools from QFT” project which has come to fruition with (e.g.) the proof of the fundamental lemma or the calculation of Tamagawa numbers using Yang-Mills theory.

  26. Polymath1: Success! « Combinatorics and more Says:

    [...] related posts: Tim Gowers raised in this post interesting questions regarding the possibility of projects where the actual number of provers [...]

  27. Timothy Chow Says:

    It surprises me a little that, in this discussion, there haven’t been more frequent mentions of the classification of finite simple groups as an example of a polymath-type effort where nobody had a “global” view in Tim’s sense. As I understand it, no single individual has a complete grasp of the entire proof. The lack of this global view didn’t mean that working on individual pieces of it was not intellectually satisfying.

    What strikes me as being a key feature shared by (a) large-scale software development, (b) Flyspeck-like formal theorem-proving, and (c) Zeilberger’s tree of referees is that there is a pretty clear idea at the outset of what needs to be done. To be sure, the details are not at all clear and can turn out to be quite different from what one might initially expect. However, one knows at the start that there are no fundamental obstacles to completing the project and that the difficulties are largely logistical. I suspect, then, that while Polymath might be extremely good at solving problems for which a lot of the necessary tools are already “out there,” it may not have any particular advantage when it comes to coming up with radically new ideas.

  28. Kevembuangga Says:

    Heh, heh, interesting…
    It seems carnegie and Timothy Chow hold contradictory opinions.
    Could either or both of you elaborate?

  29. Timothy Chow Says:

    carnegie and I seem to agree that Polymath is more naturally suited to “broad thought,” i.e., combinations of disparate but existing ideas, rather than “deep thought,” i.e., radically new ideas. Where we might disagree is whether “broad thought = theory building” and “deep thought = problem solving.” The dichotomies broad/deep and theory/problems seem orthogonal to me.

  30. gowers Says:

    Just to add my pennyworth, I think I do genuinely (if tentatively and conjecturally) disagree with you here Tim: my hope is that the broad thought that Polymath is suitable for can provide more quickly and efficiently a platform for the discovery of the radically new idea that solves the problem. For a long time I have tried to argue against the conception that unexpected new ideas come from a mysterious “flash of inspiration” or “stroke of genius”, and Polymath is (partly) an attempt to add to that argument. (I’m not trying to suggest that you have a simplistic attitude to this, and it may be that we don’t disagree after all, but your words sound on the surface as though they disagree with me to some extent.)

  31. Gil Says:

    Hmm, this is an interesting issue, and my heart goes with Tim G. on this one. An example I like is the “probabilistic method”. This is a deep conceptual idea that had a profound effect in various fields, but not at the same time. Suppose that there was a secretary in the math department whose job was to send memos like: “Probability was used successfully in number theory; folks, let’s try it for combinatorics, or for algorithms, or for groups, or for Banach spaces, or for topological manifolds, …” So, in principle, some flashes of inspiration could be replaced by routine efforts.

  32. carnegie Says:

    Timothy Chow: yes, on reflection that was not a valid dichotomy. But the point is that with ‘theory building’ you can achieve an awful lot by asking “what structures are involved” and then “what superstructures are these structures examples of”. If you find GL(n,C) playing a role in some theory, an “obvious next step” is to say “can we extend this to an arbitrary Lie group”.

    If you have a property which applies to smooth manifolds an “obvious next step” is to ask “does this extend to orbifolds”.

    “Obvious next steps” in physics include creating supersymmetric versions of a theory, or noncommutative versions of a theory.

    If a property holds over the complex numbers, a theory-building number theorist will immediately ask “what about finite fields and p-adics”.

    This attitude is far more amenable to breaking up tasks.

    Gil says he agrees with Tim Gowers, but the example he gives perfectly demonstrates the theory builder’s attitude. There is no “problem” to solve; the issue is instead to “use methods from field X to gain an understanding of field Y”.

    I don’t agree with Tim Gowers. I believe that “moments of inspiration” are often genuinely enlightening. Many mathematicians have vivid memories of such moments.

    Of course, the chances of such moments occurring increase when you are exposed to and moderately engaged with many different people talking about different things. But the actual process leading up to a flash of inspiration is opaque. In that sense, Polymath could prove incredibly valuable from the perspective of mathematical sociology, not because it is massively multiplayer, but because of the accompanying wiki-like documentation of the lead-up to the flash.

    If I kept notebooks of all my ideas, including the stupid ones, and, upon achieving some milestone, occasionally asked myself how I got there, what the roadblocks were, and how I could incorporate those lessons into my future thinking, I suspect I would become a far better mathematician.

  33. Timothy Chow Says:

    Let me clarify my view on “radically new ideas” or “flashes of insight” versus Polymath. I don’t mean to endorse the modernist ideal of the solo genius having a brilliant idea ex nihilo. I agree that ideas that seem to have come out of nowhere could (in principle at least, if we had a complete record) usually be traced back to “explainable” origins, and that the inputs from multiple people could expedite these flashes. In a recent interview, Ingrid Daubechies said that she often had recollections of “sudden flashes”; however, when she went back to her written notes (she keeps quite detailed notes of her ideas, including false starts), she found that her memory was faulty: in point of fact, the elements of those sudden flashes could be discerned in retrospect among the unsuccessful and abandoned attempts. Wiles’s account in his famous paper on Fermat’s Last Theorem, if you read it carefully, also shows that his sudden flash was not ex nihilo, but benefited from earlier abandoned ideas and the input of others.

    However, I think it is useful to distinguish between Polymath on the one hand and the entire mathematical community on the other. You could, perhaps, argue that the community of all mathematicians, past and present, comprises one giant Polymath. This point of view strikes me as not being very useful. For me, Polymath is an entity that works on a certain project for a certain amount of time. The exact boundaries may be fuzzy, but they stay within certain limits.

    The question, then, is whether major breakthroughs can be significantly expedited by a Polymath-type collaboration. It’s possible, but my instinct is that the nature of a major breakthrough is that its genesis can’t be “forced” just by having a much larger collaboration than usual. For the breakthroughs to occur, we of course need a healthily functioning mathematical research community that shares its results, builds on previous work, and so on. This creates a giant pot of simmering ideas, out of which we hope that good things come. Beyond this, though, I think it’s very hard to control when the perfect confluence of events will occur that generates a big leap. It might occur in Polymath’s mind or in the mind of an individual who happens not to be collaborating with anyone at that moment (though of course he or she will have learnt a great deal from others prior to that moment).

    Where Polymath has a definite advantage over the individual is in projects where there is already a tolerably decent map of the territory to be explored but the territory is too large or requires too many different kinds of talents to be covered by one individual. But for problems where we currently have no idea how to proceed, I think we just have to muddle ahead as an entire community and hope that we’ll eventually get close at some point.

  34. Klas Markström Says:

    One thing that Polymath seems to be efficient at is eliminating flawed approaches to solving a problem. When someone proposes an approach, there are many people with different skills who can construct counterexamples that the proposer might not have found so quickly.

    In a way this gives a kind of natural-selection effect, which both guides the efforts towards those approaches that have a chance of working and helps build an understanding of what a working approach must be able to cope with.

  35. gowers Says:

    Tim, I think we probably do disagree, but not all that much, and with much less than full certainty on both sides. My instinct tells me something like this. Obviously no method can “force” a breakthrough, for the trivial reason that there may simply not exist an argument that is remotely within today’s technology. So the question is whether, if Polymath works on a problem where a very unexpected argument does happen to exist, it is potentially a more efficient method for finding that argument.

    My instinct tells me that it is. One reason is the one that Klas Markström gives: Polymath can judge approaches quickly, which makes it more feasible to throw out slightly wild ideas in the hope that one of them will work. (As an individual, one could do the same, but sifting out the one good idea from the 99 bad ones would be a much slower process.) A second reason is that, as Polymath1 showed, the initial exploration of fairly standard ideas can be done very quickly, so one can arrive sooner at the point where it is much clearer what needs to be done and what the real gap is.

    I’m still not certain that I’m disagreeing with you, because you may perhaps be talking just about rather extreme cases of problems like the irrationality of \gamma, where nobody has the slightest idea even where to start. But for that kind of problem I think the history of maths shows that progress, if it occurs, tends to occur when other parts of the subject develop to the point where the problem changes from being completely out of reach to being within reach but difficult. One might cite Fermat’s last theorem as an example of this. What it doesn’t show is that we have to wait for a genius to have an unexpected idea in isolation. Of course, it often has taken a brilliant idea, and the connection between Fermat and Shimura-Taniyama-Weil was such an idea, but it’s not obvious that an appropriate Polymath couldn’t have had that insight more quickly. (E.g., someone might have just suggested completely unseriously that a counterexample to Fermat could lead to an interesting elliptic curve, someone else might have picked up on that idea, etc., which, in slow motion, is sort of what happened.)

  36. Timothy Chow Says:

    The irrationality of \gamma is certainly one problem of the kind I was thinking about. Consider the irrationality of \zeta(3) as another example. While it’s possible that Polymath might have beaten Apéry to the punch had it existed then, I think it’s far from clear. The problem with \zeta(3) wasn’t that it was “technologically” out of reach. I think it was just that almost everyone thought it was an unapproachable problem—or at least that it wasn’t worth working on.

    For Polymath to work on something, a certain number of people have to all be convinced that it’s worth their while to think about the problem. An unpopular topic, or an approach that everyone thinks is crazy, may actually have *less* chance of getting traction with Polymath than with an individual. There are plenty of instances where someone decides that he or she doesn’t give a hoot about fashion, and insists on working doggedly in some direction that everyone else thinks is hopelessly misguided, and is eventually vindicated.

    Big breakthroughs often have a strong psychological component to them. Looking at the idea after the fact, one might be able to argue that many other mathematicians could have made the same breakthrough, but just didn’t have the courage to believe that such a bizarre approach could possibly work. Groups and individuals have different psychological characteristics, and I believe it is a mistake to think that the group will always outperform the individual. The individual has the advantage of just needing persistence and faith and doesn’t have to worry about selling the idea to others before the final results are obtained.

  37. Jason Dyer Says:

    I think we’ll see a proof of the normality of \pi before a proof of the irrationality of \gamma.

    (For fun, look up Kaida Shi on arXiv, who has apparently proven not only that \gamma is irrational but also the Riemann Hypothesis, the Goldbach conjecture, and the twin prime conjecture.)

    I’m with the “flash of inspiration is a myth” crowd, but that’s a very good point about the social implications of crazy ideas. Dr. Gowers tried to alleviate this to an extent with his rules, but I did sense that people were less willing to go out on a limb in their posts than they were in their heads. Is there anything else we can do to encourage risk-taking ideas?

    (On a side note, could someone involved with the project read over this post of mine intended for a general audience? Feedback has been positive from outside visitors but I’d also like an opinion from someone who can tell based on the overall picture if anything should be changed.)

  38. Peter Boothe Says:

    Although there is a 1:1 correspondence between proofs and programs, there are important social differences between them that make your metaphor a little worrying. In particular, a program can be successful without being bug-free. Linux has, and has always had, many bugs. Indeed, it is quite likely that all programs of sufficient size and complexity have some bugs.

    A program with bugs can still be very useful, but it seems like a slightly incorrect proof is, at best, only a little better than just being wrong. If this is true, then the open-source “bazaar” model doesn’t really work without a consistent and well-defined global view of the problem among all participants.

    The other big problem with the metaphor is that most large programs start out as working small programs and grow from there. Most are not designed as big programs solving a big problem. Is it possible to “grow” a small (correct) proof into a larger (correct) proof of some larger truth? It seems like this is not how mathematics is usually practiced, but it would be the growth process that most closely mirrors the open source process.

  39. David Rutter Says:

    I like the way the “proof-discovery tree” very much resembles a software dependency tree. Since dependencies can be checked and dependents automatically compiled, it seems possible that this idea could easily be implemented in web software. In particular, it could be mechanically organized so that when a proof-discovery tree is complete, the software has already compiled the proof into a single page. All that would need to be done to make a paper would be to clean up the proof output (by, for instance, renaming duplicate variables and adding more coherent English transitions between proof steps) in ways that software would find difficult.
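
    To make Rutter’s suggestion concrete, here is a minimal sketch (in Python; ProofNode, compile_proof and every other name are invented for illustration, not an existing tool) of how a completed proof-discovery tree might be “compiled” into a single linear document by topologically sorting its dependency graph:

        # Hypothetical sketch: proof steps as nodes of a dependency DAG,
        # "compiled" into one linear write-up once every step is proved.
        from graphlib import TopologicalSorter  # standard library, Python 3.9+

        class ProofNode:
            def __init__(self, name, statement, depends_on=()):
                self.name = name                  # e.g. "lemma_2"
                self.statement = statement        # written-up text of this step
                self.depends_on = tuple(depends_on)
                self.complete = False             # True once the step is proved

        def compile_proof(nodes):
            """Concatenate completed steps so dependencies come first."""
            by_name = {n.name: n for n in nodes}
            gaps = [n.name for n in nodes if not n.complete]
            if gaps:
                raise ValueError("tree not yet complete; open steps: %s" % gaps)
            deps = {n.name: set(n.depends_on) for n in nodes}
            order = TopologicalSorter(deps).static_order()
            return "\n\n".join(by_name[name].statement for name in order)

        # Usage: two lemmas feeding a theorem.
        steps = [
            ProofNode("lemma_1", "Lemma 1. ..."),
            ProofNode("lemma_2", "Lemma 2. ...", ("lemma_1",)),
            ProofNode("theorem", "Theorem. ...", ("lemma_1", "lemma_2")),
        ]
        for step in steps:
            step.complete = True        # pretend every step has been proved
        print(compile_proof(steps))     # prints the lemmas before the theorem

    The human step that remains is exactly the one Rutter identifies: turning the mechanically ordered output into readable prose.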

  40. Henry Says:

    Hello!
    I think the problem is one of accessibility. When users have to read through an excess of beta material, they lose a great deal of time, which can create a sense of chaos. A solution is to index the sites and customise a search engine for each index. That is the motto of Math Harbour: small is beautiful.

    Until the nice Google Wave arrives, perhaps the custom search engines (CSEs) at Mathematics would be useful to you. Just click the right-hand corner for unsolved problems. Lectures, proofs and examples can be found in the middle.

    Happy solving.

  41. Dan Dutrow Says:

    I wonder if the collaboration activity could be viewed in multiple ways, catering to how the individual wants to digest the information.

    For example, integrating something like Google Wave into the mix could provide the threaded discussions necessary to follow down different paths. Meanwhile, the same information could be exported to a blog in serial/linear fashion so thoughts could be viewed chronologically.

    Furthermore, integrating chat or micro-blogging tools into the mix would allow for quicker, simpler contributions. (I noticed that the average length of posts increased beyond what was stated in the “rules.”) That can be done through Google Wave, XMPP (Jabber), or Twitter.

    Another idea would be to display all posts chronologically, but color them by thread, and draw colored lines through the discussion. That would allow the eye to filter through the information quickly, focusing on one thread but collecting inputs from other threads through osmosis.

    At some point, threads could split and join (analogous to branching and merging of software). The issue there, of course, is that threads won’t join cleanly. Instead, only some aspects of one thread will relate to the other. Thus, it may be of value to share posts between threads. I don’t know of any tools that allow you to do this, besides version control systems like CVS or SVN, but those aren’t really web-based.

    If you represent all posts as individual nodes, there must be some kind of network in which interconnections can be made. There is value not only in the nodes but also in the links, because the connection of independent thoughts provides much insight into the problem. However, it might be hard to digest such a diagram in traditional web formats.
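
    As a rough illustration of that last point, here is a hypothetical sketch (Python; DiscussionGraph and the thread names are invented for illustration) of posts as nodes joined by thread-labelled links, so that a single post can sit in several threads at once, which is one way the splitting and joining described above might be represented:

        # Hypothetical sketch: comments as nodes of a directed graph whose
        # edges carry a thread label, so threads can split, join and share posts.
        from collections import defaultdict

        class DiscussionGraph:
            def __init__(self):
                self.posts = {}                # post_id -> text of the comment
                self.links = defaultdict(set)  # thread -> {(parent_id, child_id)}

            def add_post(self, post_id, text):
                self.posts[post_id] = text

            def link(self, parent_id, child_id, thread):
                """Record that child builds on parent within the named thread."""
                self.links[thread].add((parent_id, child_id))

            def thread_view(self, thread):
                """Posts belonging to a thread, in chronological (id) order."""
                ids = {i for edge in self.links[thread] for i in edge}
                return [(i, self.posts[i]) for i in sorted(ids)]

        # Usage: post 3 is shared between two threads, the "join" described above.
        g = DiscussionGraph()
        for i, text in enumerate(["idea", "objection", "fix", "synthesis"], 1):
            g.add_post(i, text)
        g.link(1, 2, "thread A")
        g.link(2, 3, "thread A")
        g.link(1, 4, "thread B")
        g.link(4, 3, "thread B")  # post 3 now appears in both thread views
        print(g.thread_view("thread A"))  # [(1, 'idea'), (2, 'objection'), (3, 'fix')]
        print(g.thread_view("thread B"))  # [(1, 'idea'), (3, 'fix'), (4, 'synthesis')]

    The data model itself is simple; as noted above, the hard part is rendering such a graph in a form the eye can digest.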

  42. Massively Collaborative Mathematics: lessons from polymath1 « Hypios – Thinking Says:

    [...] maybe in collaborative mathematics, the final document should not be thought of as a proof, but a proof-discovery tree: “The basic idea would be to produce an online document that thoroughly investigated all [...]
