Mathematics meets real life | Gowers's Weblog

I have sometimes taken aeroplane flights in order to give talks for which I get paid an honorarium. So effectively I’ve played the inverse lottery.

I would just like to point out one last thing regarding the inverse lottery scenario, to serve as the moral of the story.

Let us make it a Russian roulette with a 14,000,000-chamber cylinder, to make it more concrete. My answer is that I personally would not take it for any reward you could come up with (not all the money in the world, not anything you could imagine). For me gambling is inherently wrong because you cannot guarantee the consequences, and no numbers provided can essentially change that fact.

As for the medical risk assessment, for example (or any risk assessment in general), decisions should be taken on an “as-needed” basis. The details are relative to the situation.

I would like to thank you for your engagement in this discussion, smaug12345.

As I interpreted it, 1/1000 means “for every thousand operations performed, 999 will be successful; one will not”. We think of it as “scary” because we’re very bad at long-term thinking; blind evolution has primed us to fear risks which are very near to us, rather than risks which are more distant. As I read it, Prof Gowers’s post was about the process of discovering and overcoming the bias that that particular heuristic has endowed us with.

I think I understand your point now; it seems more of a philosophical one, rather than a practical one, because science through experimentation is based on the tools of probability. The scientific method essentially consists of “Pick a range of hypotheses; generate data; compare data with hypotheses, ruling out those which are extremely unlikely; repeat.” For this reason, we very much need a way to tell which hypotheses are “extremely unlikely”; hence we formalise probability with an axiomatic approach, and derive all sorts of useful laws like Bayes’s Theorem that tell you about real outcomes. (I can predict, and I will be right, that you will not win the lottery next time you enter, for example.) You can’t really separate probability from science; if you do, you have no way of using evidence. You can gather evidence all you like, but without probability, it’s just data; it can’t be used to confirm or refute any hypotheses.

For instance, I formulate the hypothesis “My chair is solid”. I have evidence to this effect: I have observed a solid chair. This causes me to update my estimate of the probability of “my chair is solid” up to about 99.99999999% (the number of 9s here is somewhat arbitrary, but I’m allowing for the possibility that I’m hallucinating in some strange way). Without probability, I couldn’t make that update; observing a solid chair would tell me nothing about the probability that the chair is solid, because I wouldn’t have Bayes’s Law of Conditional Probability, which tells me that P(the chair is solid, given that I observed a solid chair) = P(I observe a solid chair given that the chair is solid) * P(the chair is solid)/P(I observe a solid chair), which is pretty much 1. I can use the laws of probability to correct my world-view based on evidence; without probability, I can’t do that consistently. (Or, at least, I know of no system which lets you.)
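The chair update described above can be sketched numerically. The prior and the two likelihoods below are made-up illustrative numbers (the commenter gives only the rough posterior); the point is just that Bayes’s theorem mechanically turns an observation into an updated belief.

```python
# A minimal sketch of the Bayesian update in the paragraph above.
# All three input numbers are illustrative assumptions, not figures
# from the original discussion.

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return P(H | observation) via Bayes's theorem."""
    p_obs = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / p_obs

# Prior P(chair is solid) before looking: 0.5, i.e. pure ignorance.
# P(I observe a solid chair | it is solid): essentially 1.
# P(I observe a solid chair | it is NOT solid): tiny -- a strange hallucination.
posterior = bayes_update(prior=0.5, p_obs_given_h=1.0, p_obs_given_not_h=1e-10)
print(posterior)  # very close to 1
```

With these inputs the posterior is 1/(1 + 1e-10), i.e. the string of 9s the commenter describes; making the hallucination likelihood larger pulls the posterior down accordingly.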

I suspect we’re talking at cross-purposes, but science is based on probability; for a better explanation than I could hope to provide, see http://yudkowsky.net/rational/bayes . If you only care about things which you know absolutely for certain, then you can assign other things an arbitrary 1/2 probability, but as you point out, nothing in physics etc. is certain. Yes, physics makes excellent predictions through its precision, but it could still be wrong (think of General Relativity, which overturned the “correct” classical mechanics, despite the excellent predictions made by classical mechanics), so according to that view, every prediction of physics should have probability 1/2, too.

I myself may have misunderstood and wrongly translated what the probability of 1/1000 of the OP should mean. I think many people would think it is the rate of success to failure of the operation (as I initially did and thus was encouraged to comment on the subject.) It turns out the way I should have read 1/1000 is as an indication of skillfulness of the surgical team. Something that is concrete evidence, not theoretical.

We have to make one thing clear. The essence of science is its precision, which allows us to predict. Your philosophical standpoint seems to be that science builds upon probability. I think this is a bad simplification, because it misses the main point there: the known vs the unknown. This opposition is what I was referring to in my last post.

My view is that probability deals with what we are unable to make any educated guesses about. It turns out that we can get a very good idea of how good doctors are based on their rate of success, but we still have no idea whatsoever whether they are going to be as good as they were before in their next operation.

The truth is we are subconsciously aware 1/1000 is not the rate of success to failure, and that is why we hesitate and think of it as scary enough to withdraw. By all means 1/1000 is superb for a mortality risk, if we do really believe this means what it is said to mean, and we should then be satisfied.

I admit I am easily misunderstood. We should collect data and take decisions based on meaningful numbers. But this is not an estimation of the mortality risk of the operation (or of any moment of our lives, as previously asserted); rather it is a good utilization of the known, beyond which we cannot be really sure.

The probability is 1:1 (1 in 2) because we cannot decide otherwise. Because we deal with the unknown. I disagree that the measurement of drugs is unknown (in principle, it is ‘almost’ known; things blow up sometimes). Lottery is quite unknown. What happens in the operating room is mostly unknown (including a space for human errors). Generalizing to everything in life.

We could discuss this without any inclusion of philosophy. I do have my bias of course but my argument is not really dependent on my stand on the cause-effect disputation, and it does not really require the adoption of any specific philosophy.

I am concerned with such wording as this: “A more prosaic-seeming argument is that saying all events have a 1/2 chance of happening tells you nothing at all about what will happen, and therefore can’t be used as a basis for making any meaningful judgements about the world.” because it implies our ability to predict what we know nothing about.

Probability is purely theoretical. It is an expression of things we cannot decide about. What you are implying is that science and guessing are the same thing, because – sort of – accumulating the unknown provides us with the known. If things were that simple, why would it even be called probability? Keeping a distinction between probability and science is a better representation. Again, philosophy aside, we might know better in the future, so that what is unknown now may become known then, but as of today, if there are no rules, there are no predictions.

The problem with probability is that it messes with our understanding of how our world works. If we know things, there is no room for probability; if we do not, we should know that we do not. Any theory beyond that is a renewal of superstition, IMO.

If we do not know how random is random, 1/14000000 is meaningless. 1/2 is just a reflection of our state of knowledge. It could serve as a cover-up for such estimations as 1/14000000, but what it really says is that we do not know what is going to happen next.

I do not need 14000000 reasons (as if they were) not to throw my money away when I have no guarantee I will get it back; 1 reason is as good as 2, which is just as good as the 14000000. Poetic but true.

“Probability is just how certain we are that change is inevitable” – that’s an unusual definition, and I think you might be mixing up two things: firstly, probability is an inherent property of an event and a sample space; given a description of the event in sufficient detail, and a description of the sample space, I can give you the probability that the event will happen. “How certain we are” that change is inevitable is, by contrast, a property of us – it describes my estimate of the probability, not the probability itself. If I were omniscient (and assuming randomness is an inherent feature of the universe) then these two things would be the same – my estimate of the probability would exactly be the probability – but I am not omniscient, and so many things are unknown to me, and I must merely estimate.

Secondly, if we have a machine that is capable of exactly repeating an experiment, then as soon as the first experiment is complete, I will indeed adjust my assessment of the probability of the experiment’s success on all subsequent runs to 1 (or very close to; there is a chance the machine blows up or something). But the point here is that we’re no longer measuring “the same thing”:

P(the coin comes down heads on the first toss) = 0.5

P(the coin comes down heads on the second toss, given that under the first run of the experiment it came down heads) = 1, because I know that the experiment has the same outcome every time. The probabilities (0.5 and 1) aren’t the same, because we’re measuring different events – one depends on the result of the other. (If we didn’t know the result of the first experiment, of course, then P(the coin comes down heads in the second experiment) = 0.5, because the condition is no longer there.)

It is certainly scientific to anticipate a specific effect without its proper cause, if I’ve understood you correctly – whenever someone is put under anaesthetic, for example, we administer a drug about which we know almost nothing – only that it works. How it works is a complete mystery; we have no “proper” reason to believe that it works, but what we do have is an enormous weight of evidence that it does. Almost the entire scientific method is about gathering evidence and then finding models under which the evidence would be produced; it can become a very probabilistic approach (cf. the existence of the Higgs boson; currently we have about five sigmas’ worth of evidence that it does exist).
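The “five sigma” convention mentioned above can be translated into a tail probability, assuming (as particle physicists conventionally do) a one-sided tail of a standard normal distribution; the sketch below uses only the standard library’s complementary error function.

```python
import math

def one_sided_p_value(sigma):
    """One-sided tail probability of a standard normal beyond `sigma`
    standard deviations, computed via the complementary error function."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p5 = one_sided_p_value(5)
print(p5)  # roughly 2.9e-7, i.e. about 1 chance in 3.5 million
```

So “five sigma” is shorthand for “if there were no such particle, a fluctuation at least this large would occur with probability of order one in a few million” – a probabilistic statement through and through.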

I think your argument is predicated on the assumption that there is no randomness in the universe; then “Say I was the lucky one, would that mean my chance was 100% all along? 1:1000000? the second probability is in utter error, I won. It doesn’t matter how many competitors I had” makes more sense to me. An omniscient being would be able to give you a definitive “yes/no” answer, assuming no randomness in the universe, and hence would give a probability of 1 or 0. However, I would give an estimate of the probability as 1/(49 choose 6), because my knowledge is very limited. Every piece of knowledge I gain, about anything, should cause me to update that (even if only by a minuscule amount; knowing where one molecule is doesn’t restrict the possibilities for the state of the universe sufficiently for me to alter my assessment of that probability much); the more knowledge I gain, the more I update (up or down), until if I know everything, I have updated either to 1 or 0. Your statement is essentially “What is the probability that I won the lottery, given that I won the lottery?” – this is all the information I need to determine that that probability is 1. However, I have much less information to deal with the question “What is the probability that I will win the lottery?”. I think you’re conflating the two.

A more prosaic-seeming argument is that saying all events have a 1/2 chance of happening tells you nothing at all about what will happen, and therefore can’t be used as a basis for making any meaningful judgements about the world. A system which tells you “I have a 1/14000000 chance of winning the lottery, with an expected win of about 1 million/14 million which is 7 pence gained for every pound lost” tells you pretty strongly what the correct course of action is – not to play the lottery. A system which tells you “The chance is 1 in 2 that I win the lottery” causes you to lose large amounts of money.
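The expected-value arithmetic in that paragraph is a one-liner; the jackpot size is the illustrative 1,000,000 figure quoted above, on a 1-pound ticket.

```python
# Illustrative figures from the paragraph above: a 1-in-14,000,000 chance
# of a jackpot taken here to be 1,000,000 pounds, on a 1-pound ticket.
p_win = 1 / 14_000_000
jackpot = 1_000_000
expected_return = p_win * jackpot  # expected pounds returned per pound staked
print(round(expected_return, 4))  # about 0.0714: roughly 7 pence per pound
```

Under the all-events-are-1/2 system the same calculation would give an “expected return” of 500,000 pounds per ticket, which is exactly the reasoning that causes you to lose large amounts of money.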

Because the probability is in theory. In practice, tossing a coin, for example, is never a fair game. We just know for sure that we cannot repeat the exact same factors we produced in the first run (force, angle, environment variables, etc.), so we decide that if we first got a head, it will CHANGE; so probability is just how certain we are that change is inevitable. If we could use a machine that is capable of rewinding the experiment exactly as many times as we please, it is 100% certain we win.

It is meaningless, otherwise, because it is neither reasonable nor scientific to anticipate a specific effect without its proper cause. So 1:2 properly means we are in the dark as of which factors are more favorable.

My point here is probability is descriptive not predictive.

It is true that with one million tickets the “probability” of having one winner of the lottery is 1/1000000. Well, it is not really a probability; we know for sure that there is only one winning ticket. As for the chance of winning of any particular player, though, it is another story. What is CHANGE in this game? It is 1 of 2 scenarios: either winning or losing. This doesn’t change year after year. If I never win, and I am sure the randomizing mechanics are not flawed (i.e., they are truly generating random numbers and are also not dependent on the number of participants), then I am perfectly sure I would have never won, even if I was competing against one person, not 1000000. Obviously this is true not because the probability is 1:1000000 or 1:2; it is just my luck (so to speak; it is just that I am in the dark as of which factors are more favorable for the random number generating algorithm).

This is a bit different from tossing coins, with coins it is 1:2 for each round and for all rounds, in lottery however it is 1:2 for each ticket and 1:1000000 for all tickets. This is true as long as “we are in the dark as of which factors are more favorable.” (since it depends completely on pure luck, aka uncontrolled conditions, so that cause-effect is not possibly applicable.)

Say I was the lucky one, would that mean my chance was 100% all along? 1:1000000? the second probability is in utter error, I won. It doesn’t matter how many competitors I had, I don’t compete against people, I compete against losing. It is just 1:2 chance.

I remember watching something about this on TED a while ago. It was mentioned that we would psychologically participate in a luck game with a chance of 1:10 as long as we are competing against 9 other people. Once we know that the other 9 tickets were owned by one opponent, we would go no further (sorry, but I seem to have lost the link).

When a doctor declares that, in spite of the fact that we are confident of our procedures, we still fail once per 1000, numbers are not what is important in his statement; again we have to find our CHANGE. This is analogous to the lottery situation. Success or failure doesn’t depend on how many trials we have. After all, it is true that not all people die all of a sudden. But still some die all of a sudden. Those are not affected by descriptive numbers. Practically, CHANGE is two possibilities. That is true as long as “we are in the dark as of which factors are more favorable.” And my guess was that at any given second, yes, we are in the dark. This will actually depend on whether you believe so or not. So it is not as shocking as it would look at first to realize that 1:2 is the probability that our lives depend on. It is meaningful nevertheless (philosophy fills in here).

What all this means is that sometimes we misuse probability and statistics: probability is descriptive not predictive.

If this were true, then your life expectancy follows a geometric distribution with probability parameter 1/2; this has mean 2, and your average life expectancy would be two seconds. The same reasoning would lead you to believe that you have a 1/2 chance of winning the lottery on each ticket; in which case, why have you never won anything?
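The geometric-distribution reductio above is easy to check by simulation: if each second independently kills you with probability 1/2, the average number of seconds survived comes out at 1/p = 2.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def seconds_survived(p_death=0.5):
    """Simulate seconds lived when each second independently kills
    with probability p_death (the commenter's claimed 1:1 odds)."""
    t = 0
    while True:
        t += 1
        if random.random() < p_death:
            return t

trials = [seconds_survived() for _ in range(100_000)]
mean_lifetime = sum(trials) / len(trials)
print(mean_lifetime)  # close to the geometric mean 1/p = 2
```

A two-second average lifetime is flatly contradicted by everyone reading this, which is the point: the 1:1 assignment makes concrete predictions, and they are wrong.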

I can’t imagine taking stats for granted when talking about individual cases; it is only 1:1000 for a sample of 1000, 1:1 for any specific case.

I figure the probability that any one is going to live or die at any given second would actually be 1:1. It makes sense, since the relevant factors are neither all known nor controllable.

Vasilis, Athens, GR.

That last point is an excellent one. I don’t have precise figures, but what I managed to find out about it (before the operation) seemed to work in my favour. I know from what people have said to my father that AF strokes have a tendency to be quite bad, as indeed my father’s was, whereas I read somewhere that strokes that result from catheter ablation are not too bad. I don’t know how reliable that second assertion is, but I trust the first one enough to regard an AF stroke as something that I want to do the best I can to avoid. I’m not quite sure how all that is affected by whether or not one is on Warfarin.

Best wishes for recovery and success. Looking forward to reading the next post soon.

(I had that ablation in 2008, and was AF-free until this summer. It returned, a bit more assertively, and I had a repeat ablation about a month ago. I’m hoping it will be successful, and give a substantially longer respite from AF.)

Another thing I learned: During the first few months after the ablation, irregular rhythms may occur due to the procedure itself, and these don’t necessarily mean that the procedure did not succeed.

Best wishes for success, and thanks for your thought-provoking and informative posts!

Have some sympathy anyway!

Yes, I read Kahneman’s book over the summer. It began (I felt) slowly (it reminded me a lot of Eysenck’s ‘Uses and Abuses of Psychology’), but proved to be a thoroughly good read for a mathematician, in that every couple of pages I would spend a while staring into space thinking through nice examples and generalizations which fit in perfectly with Tim’s real-world problems.

For example, he talks about our willingness to accept a win-$200-lose-$100 even bet, but for me this has to be considered in the context of Kelly betting: it’s entirely rational, for long-term gain, that the decision depend on how much money you have in total, and elementary calculus allows you to make the calculation. (Basically, maximize the expected value of the log of the wealth multiplier).
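The Kelly calculation sketched above can be made concrete. For the win-$200/lose-$100 even bet, the stake returns twice itself on a win (net odds b = 2) with probability p = 1/2; the closed form below is the standard Kelly fraction, and the crude grid search is just an illustrative check that it really does maximize the expected log of the wealth multiplier.

```python
import math

def kelly_fraction(p, b):
    """Kelly-optimal fraction of wealth to stake when a win returns b times
    the stake with probability p, and the stake is lost otherwise."""
    return (b * p - (1 - p)) / b

def expected_log_growth(f, p, b):
    """E[log of the wealth multiplier] when staking fraction f."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# Win $200 / lose $100 on a fair coin: b = 2, p = 0.5.
f_star = kelly_fraction(p=0.5, b=2)
print(f_star)  # 0.25 -- stake a quarter of your wealth

# Illustrative check: a 1% grid search over stake fractions agrees.
best = max(range(1, 100), key=lambda k: expected_log_growth(k / 100, 0.5, 2))
print(best / 100)  # 0.25
```

This is exactly the point about total wealth: the bet is favourable, but the rational stake is a fixed fraction of what you have, so whether a given $100 bet is attractive depends on your bankroll.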

I think Darryl Holm of Imperial College was involved in some work on mathematical modelling of cardiac rhythms with applications to AF, though I haven’t read the work myself.
