Mathematics meets real life

I’ve been in two minds about whether to post this. On the one hand, I try to keep personal matters out of this blog — though there has been the occasional exception — but on the other hand I have a topic that fits quite nicely with some of what I’ve been writing about recently, since it concerns a fairly important medical decision that I have had to make based on what felt like inadequate information. Since that is quite an interesting situation from a mathematical point of view, and even a philosophical point of view, and since most people have to make similar decisions at some point in their lives, I have opted to write the post.

The background is that over the last fifteen years or so I have had occasional bouts of atrial fibrillation, a condition that causes the heart to beat irregularly and not as strongly as it should. It is quite a common condition: I’ve just read that 2.3% of people over the age of 40 have it, and 5.9% of people over 65. Some people have no symptoms. I myself have mild symptoms — I can feel a slightly strange, and instantly recognisable, feeling in my chest, and I experience a few seconds of dizziness almost every time I stand up from a relaxed seated position — otherwise known as orthostatic hypotension, which I often used to get anyway (as do many people).

I would gladly live with those symptoms, but unfortunately that’s not all there is to it. When a heart is in atrial fibrillation, it is not beating as efficiently as it should, and a little pool of blood can form that doesn’t get pumped away. And if that happens, it can form a clot. And if your heart then goes back into sinus rhythm (that is, it starts to beat normally again), that clot can get pumped out into your bloodstream and wreak havoc: in particular, it can lead to a stroke. I know this all too well, because my father had a severe stroke for exactly that reason in 2001.

Until 2004, my bouts of atrial fibrillation, of which I think there were two, were several years apart and lasted a few hours each. But then in early 2004 I went into atrial fibrillation and didn’t come out anything like so quickly. To decrease the risk of stroke, I had injections of a blood-thinning drug called Heparin (or at least, that’s what it’s called in the UK) into my stomach, once a day. But then I was put on to Warfarin, a drug that is used in rat poison. When a rat eats Warfarin, it goes off and has a haemorrhage and dies, but if you take just the right amount, you can thin your blood to the point where it is not dangerously thin, but atrial fibrillation is less likely to cause clots. The appropriate dose varies widely from person to person, so they have to increase it very gradually, testing your blood several times, until you reach the right INR (international normalized ratio), which roughly speaking means the ratio of the time your blood takes to clot to the time it would take to clot if you didn’t thin it. The recommended INR for people with atrial fibrillation is between 2 and 3.

When my atrial fibrillation ended in 2004, I was at an early stage of the process of getting the dose right. I asked my doctor whether that meant that I had in fact been more or less unprotected at the critical moment, and the answer was yes. So a completely standard procedure — coming off the Heparin before the correct Warfarin dose was established — had a very obvious defect. Fortunately, I didn’t have a stroke. (The probability was quite small, but even so.) That’s nothing compared with my father’s experience: he had his stroke after being advised by his cardiologist to come off Warfarin, a recommendation that other doctors told him later made no sense at all.

Let me try to fast forward to the present day. I have had quite long periods without AF, sometimes as long as two or three years, but I have also had two periods where I went into AF and it became pretty clear that I wasn’t going to come out of it again spontaneously. This, by the way, is very normal — once it starts, it gets gradually worse. Twice I had an electrical cardioversion: you go under a general anaesthetic and are given an electric shock that stops your heart, and when it starts again, if you are lucky it starts in sinus rhythm. When you have an electrical cardioversion, they try three times before giving up. I was lucky both times and went into sinus rhythm after just one shock.

The first cardioversion lasted me three years, apart from one 24-hour bout that ended of its own accord. The second cardioversion was in June, but in early September I went back into atrial fibrillation.

So what is the decision I have recently made? Well, back in 2004, when I first went to visit the person who is now my regular cardiologist, he told me about an operation called a catheter ablation. This is a procedure where the surgeon puts a wire into your leg and up a vein all the way to your heart. The tip of the wire then burns a bit of the surface of your heart, which causes it to form scar tissue that doesn’t conduct some faulty electrical signals that are responsible for the atrial fibrillation. At the time, my cardiologist said that it offered the prospect of a permanent cure for atrial fibrillation, but that it was probably best to wait, since the operation was relatively new and improving all the time.

We occasionally discussed the operation in the intervening years, and then in June he arranged an appointment with somebody who specializes in catheter ablations and performs them at Papworth Hospital, which is near Cambridge and is famous for being where the UK’s first successful heart transplant was carried out. This second specialist told me the following things (some of which I had read on the internet already).

1. I was probably progressing from “paroxysmal AF” (occasional bouts) to “permanent AF” (what it sounds like).

2. Catheter ablation is more effective against paroxysmal AF.

3. Now that more data is available, it has become clear that catheter ablation is not after all a permanent cure, but it might delay the progression of AF by five to ten years or something like that.

4. It is more effective the younger you are when you have it, with the decline in effectiveness quite high at my age, so in his opinion I should have it done sooner rather than later.

5. It has only a 60-70% chance of working at all.

6. It carries risks.

Most of that was pretty negative news, so I left the consultation not entirely sure what to do, though thinking I probably ought to have the operation. But the risks seemed pretty serious — about a 1% risk of a major complication — so I wanted to think a bit harder about them.

The two that bothered me most (and still do) are stroke and death. There are other serious things that can go wrong, but if their effects are temporary, then for me that puts them in a different league from a stroke, which could end my productive life, and death, which would end my life altogether.

The risk of death is put at one in a thousand, and this is where things get interesting. How worried should I be about a 0.1% risk? How do I even think about that question? Perhaps if my life expectancy from now on is around 30 years, I should think of this as an expected loss of 30/1000 years, or about 10 days. That doesn’t sound too bad — about as bad as having a particularly nasty attack of flu. But is it right to think about it in terms of expectations? I feel that the distribution is important: I would rather have a guaranteed loss of ten days than a 1/1000 chance of losing 30 years.

In the end, what convinced me that I shouldn’t worry too much about this risk was looking up what the risk of death is anyway over, say, the next year. I found on this site that the average risk of death in the UK for a man between 45 and 54 is 1/279, much higher than 1/1000. So if I am worried about a 1/1000 mortality rate from an operation, I should be about as worried that I will die from some other cause over the next four months or so. And yet I don’t lose any sleep over that possibility.
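The two back-of-envelope comparisons above are easy to check mechanically. Here is a minimal sketch, using the figures quoted in the text (the 30-year remaining life expectancy, the 1/1000 operative risk, and the 1/279 annual mortality rate); the arithmetic gives roughly eleven days and a little under three and a half months, the same ballpark as the figures above:

```python
# A quick check of the two back-of-envelope figures above.
p_op = 1 / 1000             # quoted mortality risk of the operation
years_left = 30             # rough remaining life expectancy used in the text
annual_mortality = 1 / 279  # quoted annual death risk for a UK man aged 45-54

# Expected loss of life from a 1/1000 chance of losing 30 years, in days.
expected_loss_days = p_op * years_left * 365.25
print(f"expected loss: {expected_loss_days:.0f} days")  # about 11 days

# How long ordinary background risk takes to accumulate a 1/1000 chance of death.
equivalent_months = (p_op / annual_mortality) * 12
print(f"background equivalent: {equivalent_months:.1f} months")  # close to the "four months or so" above
```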

But maybe the problem is that I am concentrating four months’ risk into a few hours. Doesn’t that change everything?

Yes, it would if I were planning to have lots of catheter ablations, but this is much more of a one-off event (though quite a few people have to have it done two or three times before it works). That makes a significant difference. For example, if aeroplane flights carried a 1/1000 mortality risk, that would be completely unacceptable, since some people take enough flights that all those risks would combine to create a near certainty of dying. So what matters, in addition to the risk of the operation, is the fact that I will have it at most a very small number of times. Maybe a rough rule of thumb is that I shouldn’t be too concerned, since on average my frequency of having this operation will be significantly less than once every four months.
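The way those risks "combine" is just the complement rule: if each exposure independently carries risk p, the chance of at least one disaster in n exposures is 1 − (1 − p)^n. A quick sketch with the 1/1000 figure (the exposure counts are illustrative):

```python
def cumulative_risk(p: float, n: int) -> float:
    """Chance of at least one bad outcome in n independent exposures of risk p."""
    return 1 - (1 - p) ** n

p = 1 / 1000
for n in (1, 100, 1000, 3000):
    print(f"{n:4d} exposures: {cumulative_risk(p, n):6.1%}")
```

One exposure barely registers, a hundred are already worrying (about 9.5%), and anyone facing thousands of such exposures is looking at a near certainty, which is why a risk that is tolerable once would be intolerable as a routine.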

Another possible counterargument is that for various reasons I am probably less likely than average for my age to die over the next year — by most people’s standards I am well off, I am in generally good health, I don’t smoke, and so on. However, I think that many of those factors also reduce my chances of major complications from a catheter ablation, so I’m inclined to guess that the validity of the rough calculation above is not hugely affected.

What about the risk of stroke? That brings me to something I haven’t yet mentioned. Even if a 1/1000 risk of death isn’t something to get too worked up about, one doesn’t want to take that risk unless there is some benefit from doing so. And because the benefit can be measured in terms of reducing risks, I am in the useful position of being able to compare like with like. In other words, it’s not like being asked whether I want to play Russian roulette for a million pounds, where I would have to weigh up a lot of money against a one in six chance of dying. It’s more like being asked to play Russian roulette (but with much better odds) once in order to avoid having to play it once a year for the next five to ten years, since the additional risk per year of having a stroke if you are in atrial fibrillation is comparable to the risk of having a stroke as a result of catheter ablation, even if one is taking Warfarin. (AF increases your annual stroke risk by a factor of about 5, but Warfarin divides that by about 3, or so I’ve read.) I can’t now find the figures I used. Again, the calculations were complicated by the fact that relative to many AF patients my risks of stroke are quite small, but again I think that applies to the risks as a result of the operation as well. As a precaution, it is standard practice to have weekly blood tests to make sure that one’s INR stays within the right range for a good long time before the operation, which mine has, so I have done what I can to minimize the stroke risk.
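To see how the two quoted multipliers interact, here is a crude sketch. The baseline stroke risk below is a purely hypothetical illustrative number, not a figure from the post; only the factor of 5 for AF and the divisor of 3 for Warfarin come from the text:

```python
baseline = 0.002       # hypothetical annual stroke risk without AF (illustrative only)
af_factor = 5          # AF multiplies the annual stroke risk by about 5 (from the text)
warfarin_divisor = 3   # Warfarin divides that risk by about 3 (from the text)

risk_with_af_on_warfarin = baseline * af_factor / warfarin_divisor
extra_per_year = risk_with_af_on_warfarin - baseline
print(f"extra annual stroke risk from AF, on Warfarin: {extra_per_year:.2%}")

# Over the five to ten years the ablation might buy, the avoided risk accumulates:
for years in (5, 10):
    print(f"over {years:2d} years: roughly {extra_per_year * years:.1%}")
```

Whatever the true baseline, the quantity to weigh against the one-off stroke risk of the procedure is this extra annual risk accumulated over the five to ten years the ablation might buy.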

One final complication is that Warfarin carries its own risk: because it thins your blood, it increases the chances that you will have a brain haemorrhage, which can have very serious consequences. I think the extra risk may be something like 1% a year. Unfortunately, it isn’t considered safe to stop taking Warfarin after a successful catheter ablation, so this risk isn’t going to go away. But from the point of view of balancing the risks and benefits of the operation, that means that this particular risk doesn’t have to be taken into account, as it will be there either way.

In summary then, I’ve looked online at various statistics, none of which tell me exactly what I want to know — since they refer to populations that are more general than me, and in particular usually somewhat older than me (which is good news, since my risks should therefore be lower than average) — and concluded that the risk of having the operation is probably comparable to the risk associated with not having it. What finally persuaded me of that was the fact that my latest bout of AF (the one I am in now) began only three months after an electrical cardioversion. And if I do have the operation and it works, then my quality of life will be improved, though not hugely, by my not being in AF. And it seems that when the doctors say that the risks of the operation are low, they are (in this instance) talking sense, since the risks are comparable to the background risk that everyone faces.

The operation is tomorrow. It takes a few hours and is done under a mixture of a local anaesthetic (in your leg where the wires go in) and light sedation. So I’ll be conscious. Assuming all goes well, it should be quite an interesting experience — I’ll report back on that when it’s over.

Two more small things. One is that AF itself is mathematically interesting: it seems that something causes the heart to go from a nice periodic rhythm into a more chaotic one, and it is not well understood why. The other is that I have tried to look things up in the medical literature in order to be able to assess the risks as well as I can. Last night I decided I wanted to look at a paper in the Lancet, that renowned medical journal published by Elsevier. Cambridge subscribes to Science Direct, Elsevier’s huge electronic package of all its science journals, so in theory I should have been able to read the paper. I won’t go into details, but suffice it to say that even though I was entitled to read it, the system, for some mysterious reason, wouldn’t let me and kept giving annoying error messages. It reminded me why I’m in favour of getting rid of the subscription model for academic journals, and also of why I don’t like being told by Elsevier how much they have invested in Science Direct.

I’ll end by saying that this post is very much not intended to be a plea for sympathy. I know many people who have had much worse decisions (of the same general kind, but with far less favourable probabilities) to face than this one. In fact, it’s supposed to be the opposite of a plea for sympathy: more like an explanation of why a risk of 1/1000, which initially seems quite scary, is in fact not that scary after all. If, very much contrary to expectations, something goes badly wrong tomorrow, that will be the moment for sympathy. But the chances of that are very much smaller even than the chances that Mitt Romney will win the presidential election, something that as an avid Nate Silver reader I find highly encouraging.

49 Responses to “Mathematics meets real life”

  1. Joel Says:

    Gosh, I hope it goes well!

  2. Daniel Rust Says:

    I’m sure you’re well aware, but just in case, you might like to read up on the ‘micromort’. The micromort is a unit of risk which helps to compare the risks of various ‘one-off’ events in a meaningful way. Of course, I wish you the best in your operation tomorrow.

  3. Onofrio de Bari Says:

    The post is absolutely interesting from both the mathematical and empirical point of view. It reminded me of two things: when I studied the heartbeat as a limit cycle, and when, some years ago, I went through similar reasoning to decide about my operation for gallstones (if I had kept them, complications could have included pancreatitis and eventually death). I was 27 at the time (now I’m 35). Everything turned out well.

    Best of luck.

  4. Felipe Pait (@pait) Says:

    All of us your readers take your post in the manner it was intended, and join in our deep and sincere wishes for a speedy recovery.

  5. Rogier Swierstra Says:

    All the best, sincerely.

    I recommend you have a look at Daniel Kahneman’s book “Thinking, Fast and Slow” about how humans deal with uncertainty. Particularly the second half. Mathematicians are good at beating back uncertainty, but in situations like these you can’t know everything, and even if you could, it would only give you the odds. Better for your peace of mind if you understand a little more of how your mind works.

    • Niall MacKay Says:

      Have some sympathy anyway!

      Yes, I read Kahneman’s book over the summer. It began (I felt) slowly (it reminded me a lot of Eysenck’s ‘Uses and Abuses of Psychology’), but proved to be a thoroughly good read for a mathematician, in that every couple of pages I would spend a while staring into space thinking through nice examples and generalizations which fit in perfectly with Tim’s real-world problems.

      For example, he talks about our willingness to accept a win-$200-lose-$100 even bet, but for me this has to be considered in the context of Kelly betting: it’s entirely rational, for long-term gain, that the decision depend on how much money you have in total, and elementary calculus allows you to make the calculation. (Basically, maximize the expected value of the log of the wealth multiplier).
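      That calculation can be made concrete. Here is a small sketch, under the assumptions in the comment above (even chances, win $200, stake $100): maximizing the expected log of the wealth multiplier shows the bet is worth taking exactly when your wealth exceeds $200.

```python
from math import log

def expected_log_growth(wealth: float, stake: float = 100.0, win: float = 200.0) -> float:
    """Expected change in log-wealth from one even-odds bet: win $200 or lose $100."""
    return 0.5 * log(1 + win / wealth) + 0.5 * log(1 - stake / wealth)

# The bet raises expected log-wealth exactly when wealth exceeds $200,
# since (1 + 200/W)(1 - 100/W) > 1 simplifies to W > 200.
for w in (150, 250, 1000):
    print(f"wealth ${w}: take the bet? {expected_log_growth(w) > 0}")
```

      Maximizing over the stake as a fraction of wealth gives the Kelly optimum f* = (2·0.5 − 0.5)/2 = 1/4, i.e. the $100 stake is exactly Kelly-optimal for a $400 bankroll.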

  6. Jim Hefferon Says:

    I had the operation at age 44. I am now 54. I have not had any AF since, whereas before I had perhaps five bouts a week. It was, for me, a tremendous increase in quality of life.

    Good luck.

  7. Robin Ashford Says:

    Best to you. My son had his first bout of AF at 17, had the procedure you’re having done at 24. He’s 34 now, 100% successful, no problems at all.

  8. Anonymous Says:

    All the best and recover fast!

  9. Christopher Gosnell Says:

    Your quote: “That makes a significant difference. For example, if aeroplane flights carried a 1/1000 mortality risk, that would be completely unacceptable, since some people take enough flights that all those risks would combine to create a near certainty of dying.”

    I don’t think that it is completely correct. Whether you take one flight or a million your risk will be 1/1000 for each flight. I put this in the ‘heads or tails’ coin flip category. Each coin flip has a 50/50 chance of heads or tails regardless of the last flip if the coin is not faulty. I wish you well with your procedure and pray for a full recovery.

    • gowers Says:

      You’d have to take quite a lot of flights to get to the point I’m talking about, but some people — e.g. people who work as cabin crew — do. If, for example, you took 1000 flights, then your chances of dying in one of them would be about 1-1/e, which is about 63%. If you took 3000 flights, then those chances would go up to about 95%.

      The point I was making was that a 1/1000 risk is one that is not too worrying as long as one doesn’t take it at all often. Taking it 100 times, say, would be pretty worrying, as then the chances of things going wrong on one of those times would be about 9.5%.

  10. Anonymous Says:

    Good luck! I hope you recover soon.

  11. jeffhsu3 Says:


    The probability of crashing at least once would be 1-(1-1/1000)^n, which is certainly greater than 1/1000 when the number of flights n > 1.

  12. Shecky R Says:

    Great (and interesting) piece… don’t dwell on the 1/1000 chance of something going wrong; think about the 999/1000 chance it will be safe and successful!

  13. Jeremy Henty Says:

    Well, it’s now tomorrow (as of 43 minutes ago) so I’m wishing you luck for your operation today!

  14. Anonymous Says:

    Awesome post; I especially liked the mortality link. Looking forward to your speedy recovery.

  15. Fredrik Meyer Says:

    Very well written – this is absolutely the kind of mathematical reasoning one should teach in school. Good luck!

  16. Colin Beveridge Says:

    Enlightening as always – I hope it goes well!

  17. Piotr Migdal Says:

    Quite incidentally, just before I had my electrophysiology study*), when I was lying on the table, I had a discussion with the doctor of how to apply probabilities (e.g. of complications) to a single instance.

    Especially as all meaningful statements on that matter involve many people — e.g. “on average 1% will have complications”. For a single run there is only one outcome, so not much room for taking any averages.

    Yet, patients and doctors alike need to take decisions one by one. I am not even asking “what does it really mean”, but what should I do, given probabilities of outcomes and their consequences?

    And the only answer I have in mind is an evolutionary one – on average, genes and memes promoting the optimal choice survive.

    *) AF too, no ablation though.

  18. Monday/Tuesday Highlights | Pseudo-Polymath Says:

    […] Health and a Fields Medal winner. […]

  19. Stones Cry Out - If they keep silent… » Things Heard: e245v1n2 Says:

    […] Health and a Fields Medal winner. […]

  20. Ghoussoub Says:

    All this just to weasel out of your BIRS reviews? Best wishes, Tim.

  21. Anonymous Says:

    wishing you quick recovery.

  22. Jeffrey Shallit Says:

    Sending my best wishes for a speedy recovery.

  23. Bill Johnson Says:

    Best wishes, Tim, for a perfect outcome.

  24. Simon Lyons Says:

    Best wishes professor Gowers.

    I think Darryl Holm of Imperial College was involved in some work on mathematical modelling of cardiac rhythms with applications to AF, though I haven’t read the work myself.

  26. Anonymous Says:

    Many thanks for this thought-provoking post — and please do recover quickly! You would be very sadly missed, and by many.

  27. Anonymous Says:

    Best of luck with the ablation procedure! You might be pleased to know that in some cases after a successful ablation warfarin can be discontinued. In my case it was discontinued after about four months, which gave enough time for my heart to heal after the procedure and for my physician (based on a heart monitor that I wore for about a month) to be confident that the ablation was successful. I was put on a daily aspirin instead of the warfarin.

    (I had that ablation in 2008, and was AF-free until this summer. It returned, a bit more assertively, and I had a repeat ablation about a month ago. I’m hoping it will be successful, and give a substantially longer respite from AF.)

    Another thing I learned: During the first few months after the ablation, irregular rhythms may occur due to the procedure itself, and these don’t necessarily mean that the procedure did not succeed.

    Best wishes for success, and thanks for your thought-provoking and informative posts!

  28. Catherine Says:

    I’m French, I was in London last week and read an article entitled “Maths gets real” (04.11.12, News Review). As I found it very interesting, I decided, back in France, to learn more about you: so I went on the internet, read everything I could about you, and wanted to find out more about the way maths can be applied to the real world. That’s how I came across this post, written the day before the ablation: I’m really impressed, because a few days ago I didn’t know anything about you.
    Best wishes for recovery and success. Looking forward to reading your next post soon.

  29. Eugene S Says:

    Dear Prof. Gowers, best wishes for your health and for sharing your deliberations with us!

  30. meditationatae Says:

    I think it’s a most worthwhile topic for a blog. I remember my father used to take Heparin as a blood thinner. He didn’t have AF, but he had blocked arteries necessitating at least one bypass surgery. Around 2010, he had a stroke, causing partial paralysis on the right side (he was right-handed). This points to a left-hemisphere stroke, because the left hemisphere controls the right side of the body. There was also increasing dementia, which could be from a stroke or Alzheimer’s. It seems to me that, in the interest of gathering better data on the sort of operation you had, it would be good to have a distribution function for the severity of strokes caused by catheter ablation, since strokes can cause greatly variable neurologic damage depending on which regions of the brain are affected.

    • gowers Says:

      That last point is an excellent one. I don’t have precise figures, but what I managed to find out about it (before the operation) seemed to work in my favour. I know from what people have said to my father that AF strokes have a tendency to be quite bad, as indeed my father’s was, whereas I read somewhere that strokes that result from catheter ablation are not too bad. I don’t know how reliable that second assertion is, but I trust the first one enough to regard an AF stroke as something that I want to do the best I can to avoid. I’m not quite sure how all that is affected by whether or not one is on Warfarin.

  31. Anonymous Says:

    There has been a study (sorry, I don’t have the reference to hand) asking two groups of people whether they would have an operation, based on a risk of harm (in this case, miscarriage). Group A were told the risk was 1%, and Group B were told the risk was 1 in 100. Most members of Group A chose to have the operation; most members of Group B chose not to. Why? Perhaps 1% seems like a small risk because it is a small percentage, whereas when presented with 1 in 100, people think of themselves as being the “1”, and so perceive a higher risk.

  32. Light Blue Touchpaper » Blog Archive » Will the Information Commissioner be consistent? Says:

    […] But this is trickier than you might think. For example, Tim Gowers just revealed on his excellent blog that he had an ablation procedure for atrial fibrillation a couple of weeks ago. So if our […]

  34. Vasilis Says:

    I hope everything went well with your operation. My best (and warmest) wishes.
    Vasilis, Athens, GR.

  35. Anonymous Says:

    I can’t help thinking that the correct risk estimate was 50-50 all along. Thankfully things went well for you. My best wishes to you, professor.

    I can’t imagine taking stats for granted when talking about individual cases; it is only 1:1000 for a sample of 1000, 1:1 for any specific case.

    I figure the probability any one is going to live or die at any given second would actually be 1:1. It makes sense since neither all relevant factors are known, nor controllable.

  36. smaug12345 Says:

    “I figure the probability any one is going to live or die at any given second would actually be 1:1. It makes sense since neither all relevant factors are known, nor controllable.”
    If this were true, then your life expectancy follows a geometric distribution with probability parameter 1/2; this has mean 2, and your average life expectancy would be two seconds. The same reasoning would lead you to believe that you have a 1/2 chance of winning the lottery on each ticket; in which case, why have you never won anything?
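    (That rebuttal is easy to check by simulation — a minimal sketch, added here, assuming an independent 1/2 chance of death each second:)

```python
import random

random.seed(42)

def seconds_until_death(p_die: float = 0.5) -> int:
    """Draw from a geometric distribution: each second carries an independent p_die chance of death."""
    t = 1
    while random.random() >= p_die:
        t += 1
    return t

trials = 100_000
mean_lifetime = sum(seconds_until_death() for _ in range(trials)) / trials
print(f"simulated mean lifetime: {mean_lifetime:.2f} seconds")  # close to the theoretical 1/p = 2
```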

  37. Anonymous Says:

    “If this were true, then your life expectancy follows a geometric distribution with probability parameter 1/2; this has mean 2, and your average life expectancy would be two seconds. The same reasoning would lead you to believe that you have a 1/2 chance of winning the lottery on each ticket; in which case, why have you never won anything?”

    Because the probability is in theory. In practice, tossing a coin, for example, is never a fair game. We just know quite for sure that we cannot repeat the exact same factors we have produced in the first run (force, angle, environment variables, etc), so we decide that if we had got first a head, it will CHANGE, so probability is just how certain we are that change is inevitable. If we could use a machine that is capable of rewinding the experiment exactly for how many times we please, it is 100% we win.

    It is meaningless, otherwise, because it is not reasonable nor scientific to anticipate specific effect without its proper cause. So 1:2 does properly mean we are in the dark as of which factors are more favorable.

    My point here is probability is descriptive not predictive.

    It is true that with one million tickets the “probability” of having one winner of the lottery is 1/1000000. Well, it is not really a probability, we know for sure that there is only one winning ticket. As for the chance of winning of any particular player though it is another story. What is CHANGE in this game? It is 1 of 2 scenarios: either winning or losing. This doesn’t change year after year. If I never win, and I am sure the randomizing mechanics is not flawed (i.e., it is truly generating random numbers and it is also not dependent on the number of participants) then I am perfectly sure I would have never won, even if I was competing against one person, not 1000000. Obviously this is true not because the probability is 1:1000000 or 1:2, it is just my luck (so to speak. It is just that I am in the dark as of which factors are more favorable for the random number generating algorithm.)

    This is a bit different from tossing coins, with coins it is 1:2 for each round and for all rounds, in lottery however it is 1:2 for each ticket and 1:1000000 for all tickets. This is true as long as “we are in the dark as of which factors are more favorable.” (since it depends completely on pure luck, aka uncontrolled conditions, so that cause-effect is not possibly applicable.)

    Say I was the lucky one, would that mean my chance was 100% all along? 1:1000000? the second probability is in utter error, I won. It doesn’t matter how many competitors I had, I don’t compete against people, I compete against losing. It is just 1:2 chance.

    I remember watching something about this on TED a while ago. It was mentioned that we psychologically would participate in a luck game with a chance 1:10 as long as we are competing against 9 other people. Once we know that the other 9 tickets were owned by one opponent we would no further go ahead (sorry but I seem to have lost the link.)

    When a doctor declares that in spite of the fact that we are confident of our procedures, we still fail once per 1000. Numbers is not what is important in his statement, again we have to find our CHANGE. This is analogous to lottery situation. Success or failure doesn’t depend on how many trials do we have. After all it is true that not all people die all of a sudden. But still some die all of a sudden. Those are not affected with descriptive numbers. Practically CHANGE is two possibilities. That is true as long as “we are in the dark as of which factors are more favorable.” And my guess was that at any given second, yes, we are in the dark. This will actually depend on whether you believe so or not. So it is not as shocking as it would look at first to realize that 1:2 is the probability that our lives depend on. It is meaningful nevertheless (philosophy fill in here.)

    What all this means is that sometimes we misuse probability and statistics: probability is descriptive not predictive.

  38. smaug12345 Says:

    But success and failure do depend on how many trials I run – if I enter the lottery fourteen million times with distinct tickets, I’m guaranteed to win, because I’ve bought every ticket.
    “Probability is just how certain we are that change is inevitable” – that’s an unusual definition, and I think you might be mixing up two things: firstly, probability is an inherent property of an event and a sample space; given a description of the event in sufficient detail, and a description of the sample space, I can give you the probability that the event will happen. “How certain we are” that change is inevitable is, by contrast, a property of us – it describes my estimate of the probability, not the probability itself. If I were omniscient (and assuming randomness is an inherent feature of the universe) then these two things would be the same – my estimate of the probability would exactly be the probability – but I am not omniscient, and so many things are unknown to me, and I must merely estimate.
    Secondly, if we have a machine that is capable of exactly repeating an experiment, then as soon as the first experiment is complete, I will indeed adjust my assessment of the probability of the experiment’s success on all subsequent runs to 1 (or very close to it; there is a chance the machine blows up or something). But the point here is that we’re no longer measuring “the same thing”:
    P(the coin comes down heads on the first toss) = 0.5
    P(the coin comes down heads on the second toss, given that under the first run of the experiment it came down tails) = 1, because I know that the experiment has the same outcome every time. The probabilities (0.5 and 1) aren’t the same, because we’re measuring different events – one depends on the result of the other. (If we didn’t know the result of the first experiment, of course, then P(the coin comes down heads in the second experiment) = 0.5, because the condition is no longer there.)
    It is certainly scientific to anticipate a specific effect without its proper cause, if I’ve understood you correctly – whenever someone is put under anaesthetic, for example, we administer a drug about which we know almost nothing – only that it works. How it works is a complete mystery; we have no “proper” reason to believe that it works, but what we do have is an enormous weight of evidence that it does. Almost the entire scientific method is about gathering evidence and then finding models under which the evidence would be produced; it can become a very probabilistic approach (cf. the existence of the Higgs boson; currently we have about five sigmas’ worth of evidence that it does exist.)
    I think your argument is predicated on the assumption that there is no randomness in the universe; then “Say I was the lucky one, would that mean my chance was 100% all along? 1:1000000? the second probability is in utter error, I won. It doesn’t matter how many competitors I had” makes more sense to me. An omniscient being would be able to give you a definitive “yes/no” answer, assuming no randomness in the universe, and hence would give a probability of 1 or 0. However, I would give an estimate of the probability as 1/(49 choose 6), because my knowledge is very limited. Every piece of knowledge I gain, about anything, should cause me to update that (even if only by a minuscule amount; knowing where one molecule is doesn’t restrict the possibilities for the state of the universe sufficiently for me to alter my assessment of that probability much); the more knowledge I gain, the more I update (up or down), until if I know everything, I have updated either to 1 or 0. Your statement is essentially “What is the probability that I won the lottery, given that I won the lottery?” – this is all the information I need to determine that that probability is 1. However, I have much less information to deal with the question “What is the probability that I will win the lottery?”. I think you’re conflating the two.
    A more prosaic-seeming argument is that saying all events have a 1/2 chance of happening tells you nothing at all about what will happen, and therefore can’t be used as a basis for making any meaningful judgements about the world. A system which tells you “I have a 1/14000000 chance of winning the lottery, with an expected return of about 1 million/14 million – roughly 7 pence back for every pound staked” tells you pretty strongly what the correct course of action is – not to play the lottery. A system which tells you “The chance is 1 in 2 that I win the lottery” causes you to lose large amounts of money.
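    The lottery arithmetic in that last paragraph is easy to check. A minimal sketch in Python, using the round figures from the comment (1-in-14,000,000 odds, a jackpot of about a million):

```python
# Expected return on a 1-pound lottery ticket, using the round figures
# from the discussion: 1-in-14,000,000 odds, a 1,000,000 jackpot.
p_win = 1 / 14_000_000
jackpot = 1_000_000

# Expected return per pound staked: about 7 pence.
expected_return = p_win * jackpot
print(f"Expected return per pound staked: {expected_return:.3f}")
```

    So a pound staked returns about 0.071 pounds on average – the “7 pence” of the comment – for an expected loss of roughly 93p per play.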

  39. Anonymous Says:

    I admit my lack of rigor regarding a follow-up to this discussion, and I may have expressed my point badly, but I would like to address my main concern here: I am afraid – and this reminds me of history – that people may be driven into superstition if they start deciding their course of action based on blind procedures.

    I myself may have misunderstood and wrongly translated what the probability of 1/1000 in the OP should mean. I think many people would take it to be the rate of success to failure of the operation (as I initially did, and thus was encouraged to comment on the subject.) It turns out that the way I should have read 1/1000 is as an indication of the skillfulness of the surgical team – something that is concrete evidence, not theoretical.

    We have to set one thing clear. The essence of science is its precision, which allows us to predict. Your philosophical standpoint seems to be that science builds upon probability. I think this is an oversimplification, because it misses the main point: the known vs the unknown. This opposition is what I was referring to in my last post.

    My view is that probability deals with what we are unable to make any educated guesses about. It turns out that we can get a very good idea of how good doctors are based on their rate of success, but we still have no idea whatsoever whether they are going to be as good in their next operation as they were before.

    The truth is that we are subconsciously aware that 1/1000 is not the rate of success to failure, and that is why we hesitate and think of it as scary enough to withdraw. By all means, 1/1000 is superb for a mortality risk; if we really do believe this means what it is said to mean, then we should be satisfied.

    I admit I am easily misunderstood. We should collect data and take decisions based on meaningful numbers. But this is not an estimation of the mortality risk of the operation (or of any moment of our lives, as previously asserted); rather, it is a good utilization of the known, beyond which we cannot be really sure.

    The probability is 1:1 (1 in 2) because we cannot decide otherwise – because we deal with the unknown. I disagree that the measurement of drugs is unknown (in principle, it is ‘almost’ known; things blow up sometimes.) The lottery is quite unknown. What happens in the operating room is mostly unknown (including a space for human errors.) And this generalizes to everything in life.

    We could discuss this without any inclusion of philosophy. I do have my bias of course but my argument is not really dependent on my stand on the cause-effect disputation, and it does not really require the adoption of any specific philosophy.

    I am concerned with such wording as this: “A more prosaic-seeming argument is that saying all events have a 1/2 chance of happening tells you nothing at all about what will happen, and therefore can’t be used as a basis for making any meaningful judgements about the world.” because it implies an ability to predict what we know nothing about.

    Probability is purely theoretical. It is an expression of things we cannot decide about. What you are implying is that science and guessing are the same thing, because – sort of – accumulating the unknown provides us with the known. If things were that simple, why would it even be called probability? Keeping a distinction between probability and science is a better representation. Again, philosophy aside, we might know better in the future, so that what is unknown now may become known then; but as of today, if there are no rules, there are no predictions.

    The problem with probability is that it messes with our understanding of how our world works. If we know things, there is no room for probability; if we do not, we should know that we do not. Any theory beyond that is a renewal of superstition IMO.

    If we do not know how random is random, 1/14000000 is meaningless. 1/2 is just a reflection of our state of knowledge. It could serve as a cover for such estimations as 1/14000000, but what it really says is that we do not know what is going to happen next.

    I do not need 14000000 reasons (as if they were) not to throw my money away when I have no guarantee I will get it back, 1 reason is as good as 2 which is just as good as the 14000000. Poetic but true.

  40. smaug12345 Says:

    “I do not need 14000000 reasons (as if they were) not to throw my money away when I have no guarantee I will get it back, 1 reason is as good as 2 which is just as good as the 14000000. Poetic but true.” – so how about a kind of “inverse lottery”, where you have only a 1 in 14000000 chance of losing your money, and otherwise you win lots? Your advice seems to be that if I want to win money, either lottery is just as good; and yet clearly if I pick the “inverse lottery”, I will win an enormous amount of money, and if I pick the standard lottery, I will lose a lot.
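    The asymmetry between the two lotteries can be made concrete with a quick expected-value calculation. In the sketch below, the 1-pound stake and 1,000,000 prize are illustrative round figures; only the 1-in-14,000,000 odds come from the discussion:

```python
# Comparing the standard lottery with the "inverse lottery" proposed above.
p = 1 / 14_000_000                 # the long odds from the discussion
stake, prize = 1.0, 1_000_000.0    # illustrative round figures

# Standard lottery: win the prize with probability p, lose the stake otherwise.
ev_standard = p * prize - (1 - p) * stake

# Inverse lottery: lose the stake with probability p, win the prize otherwise.
ev_inverse = (1 - p) * prize - p * stake

print(f"standard lottery EV per play: {ev_standard:+.2f}")
print(f"inverse lottery EV per play:  {ev_inverse:+,.2f}")
```

    The standard lottery loses about 93p per play on average, while the inverse lottery gains almost the whole prize – so a rule that assigns both a 1/2 chance cannot distinguish between them.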
    As I interpreted it, 1/1000 means “for every thousand operations performed, 999 will be successful; one will not”. We think of it as “scary” because we’re very bad at long-term thinking; blind evolution has primed us to fear risks which are very near to us, rather than risks which are more distant. As I read it, Prof Gowers’s post was about the process of discovering and overcoming the bias that that particular heuristic has endowed us with.
    I think I understand your point now; it seems more of a philosophical one, rather than a practical one, because science through experimentation is based on the tools of probability. The scientific method essentially consists of “Pick a range of hypotheses; generate data; compare data with hypotheses, ruling out those which are extremely unlikely; repeat.” For this reason, we very much need a way to tell which hypotheses are “extremely unlikely”; hence we formalise probability with an axiomatic approach, and derive all sorts of useful laws like Bayes’s Theorem that tell you about real outcomes. (I can predict, and I will be right, that you will not win the lottery next time you enter, for example.) You can’t really separate probability from science; if you do, you have no way of using evidence. You can gather evidence all you like, but without probability, it’s just data; it can’t be used to confirm or refute any hypotheses.
    For instance, I formulate the hypothesis “My chair is solid”. I have evidence to this effect: I have observed a solid chair. This causes me to update my estimate of the probability of “my chair is solid” up to about 99.99999999% (the number of 9s here is somewhat arbitrary, but I’m allowing for the possibility that I’m hallucinating in some strange way.) Without probability, I couldn’t make that update; observing a solid chair would tell me nothing about the probability that the chair is solid, because I wouldn’t have Bayes’s Law of Conditional Probability, which tells me that P(the chair is solid, given that I observed a solid chair) = P(I observe a solid chair given that the chair is solid) × P(the chair is solid)/P(I observe a solid chair), which is pretty much 1. I can use the laws of probability to correct my world-view based on evidence; without probability, I can’t do that consistently. (Or, at least, I know of no system which lets you.)
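    The chair example can be sketched numerically with Bayes’s theorem. The prior and the two likelihoods below are made-up illustrative values, not figures from the comment:

```python
# Bayes's theorem for a hypothesis H and a single observation.
def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return P(H | observation)."""
    p_obs = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / p_obs

# H = "the chair is solid"; the observation is seeing a solid chair.
# A neutral prior, a near-certain observation if H holds, and a tiny
# hallucination probability otherwise (all illustrative numbers).
posterior = bayes_update(prior=0.5, p_obs_given_h=0.999,
                         p_obs_given_not_h=1e-10)
print(posterior)  # very close to 1
```

    A single observation, fed through the conditional-probability law, pushes the posterior from 1/2 to essentially 1 – which is exactly the update the comment describes.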
    I suspect we’re talking at cross-purposes, but science is based on probability; for a better explanation than I could hope to provide, see . If you only care about things which you know absolutely for certain, then you can assign other things an arbitrary 1/2 probability, but as you point out, nothing in physics etc is certain. Yes, physics makes excellent predictions through its precision, but it could still be wrong (think of General Relativity, which overturned the “correct” classical mechanics, despite the excellent predictions made by classical mechanics), so according to that view, every prediction of physics should have probability 1/2, too.

  41. Anonymous Says:

    You have summarized it perfectly.

    I would just like to point out one last thing regarding the inverse lottery scenario, to serve as a moral side of the story.

    Let us make it a Russian roulette with a 14000000-sized chamber, in order to make it more concrete. My answer is that I personally would not take it for any reward you could come up with (not all the money in the world, not anything you could imagine.) For me gambling is inherently wrong because you cannot guarantee the consequences, and any numbers provided do not essentially change that fact.

    As for medical risk assessment, for example (or any risk assessment in general,) decisions should be taken on an “as-needed” basis. The details are relative to the situation.

    I would like to thank you for your engagement in this discussion, smaug12345.

  42. smaug12345 Says:

    Ah, I understand your point of view now (even if I would take that Russian roulette if I were paid all the money!) Thanks for the discussion.

    • gowers Says:

      I have sometimes taken aeroplane flights in order to give talks for which I get paid an honorarium. So effectively I’ve played the inverse lottery.

  43. When Probability Meets Real Life | Unhinged Group Says:

    […] wasn’t guaranteed to succeed, the eminent mathematician Timothy Gowers resorted to a detailed risk-benefit calculation. Fortunately, it turned out well for Gowers, who is also a co-founder of the Polymath project. […]

  44. Shaun Peck Says:

    I hope your ablation went well. One issue you did not mention in your piece is that there is a relationship between alcohol consumption and episodes of atrial fibrillation. It is well known to occur after episodes of binge drinking, mostly in otherwise fairly young men.

