In my previous post I suggested a way in which an online system of submitting and commenting on papers might perhaps work better than our current system of journals, editors and anonymous referees. I am very grateful to all who commented, both positively and (more often) negatively. It has given me a lot to think about. One thing that I wasn’t expecting, but should have expected, was that a number of people just plain don’t like the idea of an online alternative, regardless of the rational arguments. I don’t mean that there aren’t arguments to back up the dislike — merely, that I think that there is a dislike there, which becomes an argument in itself, since if many people have an emotional reaction against a new system, then that makes it less likely that the system will be adopted by enough people to become as officially recognised as the journal system. To avoid misunderstanding, let me stress that I’ve got nothing against emotional reactions, as long as they are backed up with arguments; and in the comments on my previous post they have been. Indeed, the arguments against various aspects of what I suggested have caused me to realize that there are some disadvantages I didn’t think of and others that I underestimated.
In this post, I want to summarize the points made in the comments (for the benefit of anyone who is interested in what was said but doesn’t have time to read through them all), and then make a second suggestion, which I think deals with a number of objections to the first. As with the first, I don’t see the details as set in stone. I think it’s an improvement on the first, but doubtless it can itself be improved on. Whether it reaches the level where one should actually consider trying to implement it is of course quite another matter. But I do think that these issues should be discussed: if we were designing a system from scratch for disseminating and evaluating mathematical output, I don’t think we would come up with the current journal system, though of course that’s not the situation, and historical accidents often result in quite good ways of doing things.
Summary of the reaction to the previous post.
I’ll number the reactions and attribute them, with links to the comments where they were expressed (in more detail). This isn’t a comprehensive list of objections — more like a list of the objections that have had an influence on the new suggestion. (Even then it may not be complete — apologies to anyone that I accidentally miss out.)
1. Andrew Stacey. The incentive system I proposed (roughly speaking, Mathoverflow-type reputation points) will not be enough to make people contribute. Similar attempted sites have failed, and this may be because what people need as motivation is a direct and immediate benefit from contributing.
2. Andy P. Even if a new system is demonstrably better for mathematicians, it still needs to be taken seriously by people in other subjects who have power over mathematicians (e.g. when handing out money). Everyone understands how peer-reviewed journals work, but that won’t be the case for some new website.
3. Alexander Woo. We need a way of rapidly sifting out the vast majority of candidates for positions that may attract hundreds of applications. A website with detailed narrative descriptions of papers will make that an impossibly long process.
4. Henry Cohn. Mathoverflow reputation points work because we know that they’re just a game. If they actually mattered, then abuses of the system and all manner of unpleasantness would be much more likely.
5. Yla Tausczik. A useful aspect of the current system is that journals fix an official version of an article, which can then be the one that other articles refer to.
6. Scott Morrison. To have a realistic chance of success, any proposal should be incremental rather than revolutionary.
7. David Savitt. One of the most valuable aspects of the current system is the kind of nit-picking feedback that ends up improving the presentation of a paper. People would be unwilling to provide that except anonymously — if, that is, they could be bothered to provide it at all.
8. Noah Snyder. So much is published that, whatever incentive systems one tries to provide, most of it would simply not be looked at.
9. Super Mario. It is very hard to persuade a lot of people to use a website, and the success or failure of attempts is sometimes extremely sensitive to tiny details of how the site works.
10. Andy P. The journal system works just fine, so trying to devise a new system is pointless, and potentially damaging.
11. Shahab also makes a number of interesting points, too long to summarize here.
The new suggestion.
Imagine the following common situation. You’ve worked for some time on a problem, and finally you’ve proved something interesting enough to publish — or so it seems to you anyway. So you write a preprint. Are you happy with it? Yes, up to a point, but you have a few residual anxieties, such as whether you’ve really got all those technical lemmas correct, whether you’ve mentioned all the relevant previous work, whether your write-up is going to be comprehensible to anyone else, etc. etc. Wouldn’t it be nice to get some detailed feedback before you submit it for publication? But who is going to be prepared to put in the work it takes to check calculations, comment on presentation, and so on?
That’s where http://www.howsmypreprint.com comes in. (The suggestion for a name is of course not serious.) You put your preprint on the arXiv, create a page on Howsmypreprint, and then a few weeks later (or perhaps much sooner if the system really works) you get a list of typos, small errors, big errors if there are any, suggestions for how to improve the presentation, and so on.
But why would anybody be prepared to do that for you? Here’s where a suggestion of Andrew Stacey comes in: if you want your paper checked over in this way, then you have to pay for that service by doing the same for other people. In other words, you accrue points by working on other people’s papers, and you spend them when they work on yours.
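To make the earn-and-spend idea a little more concrete, here is a toy sketch of how the bookkeeping might work. (The class and method names, the starting credit and the point values are entirely made up for illustration; this is not a design, just the shape of the idea.)

```python
class PointsLedger:
    """Toy model of the earn/spend points scheme (all names hypothetical)."""

    def __init__(self, starting_credit=0):
        # Each user starts with a small credit so the system can get going.
        self.starting_credit = starting_credit
        self.balances = {}  # user -> current points

    def balance(self, user):
        return self.balances.setdefault(user, self.starting_credit)

    def earn(self, reviewer, points):
        """Reviewer earns points by reporting on someone else's preprint."""
        self.balances[reviewer] = self.balance(reviewer) + points

    def request_feedback(self, author, cost):
        """Author spends points to have their own preprint looked at."""
        if self.balance(author) < cost:
            raise ValueError("not enough points: review some papers first")
        self.balances[author] -= cost
```

The key property is simply that points flow in a closed loop: the only way to get feedback is to have provided it (or to use one’s starting credit).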
That’s the basic idea. There are many details to discuss, but first let me say why I think that in principle it can deal with almost all the objections above. The numbers here will correspond to the numbers up there.
1. There is now a genuine selfish incentive for contributing to the site. If it is seen to work, then people will be keen to use the service, and therefore keen to contribute.
2. The site is not intended to supplant the journal system. It is meant to provide a new service.
However, it could have a profound influence on the journal system. For instance, if I get detailed feedback on my preprint, I could then submit not just the paper but the feedback too. Then the work of the journal could be greatly reduced: all they need from the referee is an assessment of how interesting the paper is, and the difficult bit — reading it carefully and making lots of suggestions — has been done already. Some journals might start to insist that all their submissions must first have spent a certain period of time on the site.
3. Since journals still survive, we still have a rapid sifting mechanism.
4. The points on the site are no longer reputation points — they are “brownie points”. In case that’s just a UK expression, I mean that they are rewards for a service rather than an indication of how amazing you are. Also, since the purpose of the site is not to evaluate papers, there isn’t much reason to game the system. (The only one I can think of is trying to earn lots of points by giving rather rushed and incomplete feedback. I’ll discuss that potential problem later.)
5. Not a problem as journals still survive.
6. This system is incremental rather than revolutionary, since it is an addition to what we have now, which could gradually replace certain aspects of it (the main one being that the hard work done by referees would be done at a different stage of the process).
7. Not a problem — the feedback could be provided anonymously.
8. If the points system were properly calibrated (which might be a challenge) then something like Kirchhoff’s current law ought to apply: on average, if you contributed to the site, you would be rewarded for your contribution. To put it more crudely, all those authors writing uninteresting papers would be helping out with other uninteresting papers.
9. I can’t say that this proposal addresses the problem that it’s hard to predict what will work.
10. I think most of these objections don’t apply to this revised proposal.
So I’ve ended up saying that all the objections that a new proposal can reasonably be expected to deal with have been dealt with. However, that leaves another question: does this suggestion throw away so much that it ends up being pointless? If all that happens is that you sometimes get feedback on a paper before you submit it instead of getting it after you’ve submitted it, has anything significant changed?
I’ll discuss this at some length, but before I do, I’d remind you of Scott Morrison’s point, that change should be evolutionary. This is meant as a first step (which might be the only step we ever wanted to take), so part of the point of it is that it is not a big change. What I’d like to argue is that it’s a good small change, and that it could potentially lead to further evolutionary steps — by the gradual addition of extra features to the site.
But suppose that we just stick with the proposal above, which leaves the work of evaluating, certifying and archiving to journals but potentially takes from journals the more arduous task of reading carefully through submissions. Doesn’t that just leave everybody doing the same amount of work, with no further benefits?
I think not. One benefit of this system would be that the voluntary work we do by critically reading other people’s papers would be coupled more closely to the benefit we get from having our own papers critically read. At the moment, if we do a lazy job, or sit on a paper for a long time, or refuse to referee it because we are too busy, almost nobody will find out, so the negative consequences for us are close to zero. With the online system (which, remember, is first and foremost a supplement to the current system, which would only gradually come to replace certain aspects of it), we would be putting in the work in order to earn a reward. That would feel fair.
I also think that carefully reading a paper and making suggestions for improvements is a very different process from deciding whether it is good enough for a particular journal. This system could in principle decouple these two tasks. One person could do the careful reading and report via the website. Another, for the journal, could make a judgment on its suitability for publication. What’s more, the second person would be looking at a revised and improved paper, and would (if things work as I envisage them) have access to the report of the first person. So they would be making a judgment with more information available. I think this would make the job of refereeing papers for journals much less painful and much more streamlined. Something like it happens already when one is asked to give a quick judgment on whether it is worth refereeing a paper at all, but wouldn’t it be better to make those quick judgments after a paper has been through the “cleansing” process? And wouldn’t it be better for people who find it hard to get their results published if they could at least get some feedback?
Perhaps those would be fairly small gains — I’m not sure — but an online system would come with a lot of flexibility that the current journal system does not provide, which could potentially add considerably to those gains. For example, if one added back some of the features I suggested earlier, like the possibility of offering constructive comments on other people’s work, then the journal referee would have more information to go on, such as how other people in the area were reacting to the paper. Another gain that I’ve already mentioned is that it would be easy to allow different kinds of mathematical document to receive feedback, even if they were not intended for journal publication.
There is a potential problem with the points system, which is that it wouldn’t be right to reward people for giving just any feedback to papers: it has to be useful feedback, of a kind that demands quite a bit of time. It would be unfair if people were to receive detailed and helpful feedback in return for having offered feedback that merely mentioned one or two easily spotted typos here and there.
How can this problem be overcome? One idea is that when a report is offered on a paper, the author of that paper can say how satisfied they are with the report, on a scale from 1 to 5, say. So if your report says merely, “This looks OK to me, except that on page 2 line 5 you’ve written ‘the the’,” then you won’t get rewarded very much. I think it might be an idea to have a feature a bit like Mathoverflow’s “acceptance” of answers: if somebody does such a good job on your paper that it’s clear that there’s no need for anyone else to make a comprehensive list of detailed suggestions, then you “accept” their report, and they are duly rewarded. But the satisfaction mark could take into account how difficult you thought your paper was to work through in the first place.
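Purely by way of illustration, here is one hypothetical way a report’s reward might be computed from the satisfaction mark, a difficulty estimate and an acceptance bonus. (The formula, the weights and the function name are all invented; the point is only that the reward can be made to depend on how useful the author found the report.)

```python
def report_reward(base_points, satisfaction, difficulty=1.0, accepted=False):
    """Toy reward formula for a report on a preprint.

    satisfaction: the author's mark for the report, from 1 to 5
    difficulty:   the author's estimate of how hard the paper was to
                  work through (scales the reward up for hard papers)
    accepted:     True if the author 'accepts' the report as so thorough
                  that no further comprehensive report is needed
    """
    if not 1 <= satisfaction <= 5:
        raise ValueError("satisfaction must be between 1 and 5")
    reward = base_points * (satisfaction / 5.0) * difficulty
    if accepted:
        reward *= 2  # bonus for an accepted, comprehensive report
    return reward
```

So a report that merely spots a couple of typos earns little, while a comprehensive, accepted report on a difficult paper earns a lot, which is the incentive structure the scheme needs.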
Should these reports be public, and should they be anonymous? One possibility is that the writer of the report could decide whether he or she wanted to be named and was willing for the report to be made public. The author would also have a say in whether the report was public. If both referee and author were happy to have the report made public, then it would be. One could also have a private link to the report, which the author could make available to the journal to which he or she decides to submit the paper.
Another feature one might have is a sort of reverse acceptance, where the writer of a report would tick a box to confirm that the author’s new draft has dealt satisfactorily with the suggestions made. Again, this information could speed up the process of conventional publication considerably.
What if the author of the paper unfairly fails to recognise the hard work put in by somebody who writes a report? If the report is public, then the unfairness would be there for all to see; if it remains private, I don’t see an easy solution. However, I think that only in rather difficult, exceptional cases would there be any reason for authors to behave in this way. Perhaps some people would be a little ungenerous, but if the referee had put in a lot of work, then surely the vast majority of authors would be happy to reward it appropriately.
A very simple additional feature that could be helpful is “certification buttons” that you press to give some useful information to other people. One might be, “This is a serious mathematical paper.” It wouldn’t say anything about whether the paper was correct, but just that it wasn’t the work of a crank. If you pressed that button, you would get a very small addition to your points, and it would be a matter of public record that you had pressed it. (The same would go for all certification buttons, to help people judge the value of the certifications.)
Another might be, “I haven’t checked in detail, but I’m confident that this proof is essentially correct.” Yet another, for which more points would be on offer, could be, “I have checked carefully and am happy to confirm that the proof is essentially correct.” (That wouldn’t be a guarantee that every last detail was correct, but just that the certifier, who would be named, was very confident in the results.)
What happens if cranks start certifying each other’s papers? There are many possible answers to this. One I like is due to Noam Nisan (or at least, I got it from this comment of his), which is to set up “networks of trust”. If for any reason I decide that I trust the judgments of some reviewer, I click on a box that creates an edge between me and that reviewer (in a graph of which we are both vertices). Various algorithms can be used to derive information from the resulting graph about who to trust if you begin with some small group of people that you definitely trust. And official institutions could set up their own networks, possibly making them public.
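As a very crude illustration of the kind of algorithm one might run on such a graph, here is a toy sketch that simply treats everyone within a couple of trust-edges of your trusted seed group as trustworthy. (The function name and the two-hop cut-off are invented for illustration; real networks of trust would do something more sophisticated, such as attenuating trust with distance or combining many paths.)

```python
from collections import deque

def trusted_reviewers(trust_edges, seeds, max_hops=2):
    """Toy 'network of trust' computation.

    trust_edges: dict mapping each person to the people they trust directly
    seeds:       the small group of people you definitely trust
    Returns everyone reachable from the seeds within max_hops trust-edges.
    """
    trusted = set(seeds)
    frontier = deque((person, 0) for person in seeds)
    while frontier:
        person, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't extend trust beyond the cut-off
        for neighbour in trust_edges.get(person, ()):
            if neighbour not in trusted:
                trusted.add(neighbour)
                frontier.append((neighbour, hops + 1))
    return trusted
```

The point is that a clique of cranks certifying one another gains nothing, because no trust-edge connects their component of the graph to yours.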
As I see it, the main properties of this second suggestion are these.
(i) It is designed to be a useful supplement to the journal system rather than a replacement for it.
(ii) It could streamline the work we do for journals.
(iii) The incentive for working on this site would be that others would do the same for you. (That’s sort of true for the current system, except that if you don’t do your share of the work, others still do it for you.)
(iv) If somebody didn’t want to have anything to do with the new system, that would be fine.
(v) It would be easy to add further features to the system, which could allow both it and the journal system to evolve. Here are three examples. First, if a simple certification system could tell us that people we trusted had judged a paper to be serious and almost certainly correct, we might well have, for many papers, all the information we needed for metrics, sifting out of job applications and the like. We might find we could get by with far fewer journals. Second, if people wanted to experiment with ideas like virtual journals, it would be easy for them to do so. Third, one could make it possible to give feedback in the form of smallish comments that were different in style from the detailed reports that would be the main purpose of the site, but also useful.
Before I stop, let me mention one other feature that I’d like to see, which I forgot to mention earlier. It’s that everyone would start with a credit of, say, three papers (maybe more if they were PhD students). That’s partly so that the system can get started at all, and partly because beginning mathematicians probably need to get a few papers under their belts before they start refereeing the work of other people. (That said, many graduate students work through recently published papers of more senior people, and could in principle offer extremely useful feedback. That wouldn’t be ruled out at all.)
Another thing I forgot to mention is that since points would serve only to earn the right to receive feedback on your submissions, there would be no need to make them public, and so no unhealthy competition.
Yet another thing I forgot to mention is Andy P’s view that the journal system ain’t broke so we shouldn’t fix it. Rather than comment on this, I refer you to Noam Nisan’s elegantly written response (to which Andy P in turn responds).
Added later: I make a further suggestion in this comment below, which I think could significantly improve the chances of a site like this working.