I’m not sure I agree that that log-against-log plot is good news. It looks to me as though it is growing roughly linearly — I don’t see any sign of the gradient levelling off. It still looks like growth, but perhaps the picture will change when there’s more data.

By the way, the hand calculations I posted earlier were almost right, though there was one mistake (a prime unnecessarily assigned +1) towards the very end.

Here you are. Looks like potentially good news… I’ll set my computer calculating overnight and hope for a more conclusive plot tomorrow.

As usual I’m fascinated every time I see one of these plots. For the function to be an interesting one theoretically, we’d like the growth to be subexponential as a function of (so that it’s subpolynomial as a function of ). At the moment, it’s not terribly clear: it looks to me as though it could be exponential, but it could also be quadratic. It might be clearer if one plotted both axes on a logarithmic scale, since then the question would be whether the underlying curve is concave (good news) or not concave (bad news). I’m feeling a little pessimistic though: it looks as though a typical value of the partial sum up to is about .

Here’s a plot of what the algorithm generates up to 40,000 or so. (If you extend the plot to the right by a thousand or so, setting all primes to +1, then the sequence heads reassuringly upwards, so I don’t think there are lookahead issues.)

I see. I think that gives the same output as the less greedy algorithm I suggested. And it may well be that if only a limited amount of backtracking is needed, then the look-ahead I suggested would not save time.

I think the output of this algorithm is the first sequence in the lexicographical order (counting -1 as earlier in the alphabet than 1) with all its partial sums non-negative. So there is a certain greedy flavour to it (in the sense that each time a new prime p comes along, it will set f(p)=-1 unless that is provably not a valid continuation of the sequence so far).

The algorithm I was trying was:

1) Set f(p) = -1 unless this pushes the partial sum below zero;

2) Fill in composite numbers myopically (i.e., no look-ahead) until either

2a) you reach a prime (go to step 1); or

2b) the series goes negative, in which case backtrack: change the sign of the last prime p that was set to -1, making it +1 (then go to the beginning of step 2 again).

On the down side, there’s no obvious limit to the amount of backtracking that may be required. On the up side, I didn’t get close to getting in trouble when I was generating the first few elements of the sequence.
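In case anyone wants to experiment, here is a rough Python sketch of the procedure above. The function name and the restart-after-each-flip strategy are my own choices; it is meant as a sketch rather than an efficient implementation.

```python
def backtracking_sequence(limit):
    """Sketch of the backtracking algorithm described above: try f(p) = -1
    for each new prime p, fill in composites multiplicatively, and when a
    partial sum goes negative, flip the most recent -1 prime to +1.
    Restarting the whole scan after each flip is wasteful but simple."""

    def smallest_prime_factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    sign = {}                                # tentative value at each prime
    while True:
        f = [0] * (limit + 1)
        f[1] = 1
        total, last_minus, ok = 1, None, True
        for n in range(2, limit + 1):
            p = smallest_prime_factor(n)
            if p == n:                       # n is prime: step 1
                if n not in sign:
                    sign[n] = -1             # try -1 first
                f[n] = sign[n]
                if f[n] == -1:
                    last_minus = n
            else:                            # composite: value is forced
                f[n] = f[p] * f[n // p]
            total += f[n]
            if total < 0:                    # step 2b: backtrack
                sign[last_minus] = 1
                for q in list(sign):         # later primes get to retry -1
                    if q > last_minus:
                        del sign[q]
                ok = False
                break
        if ok:
            return f[1:]                     # f(1), f(2), ..., f(limit)
```

For what it’s worth, the first forty terms this produces agree with the sequence posted in the thread, and all its partial sums stay non-negative, which is at least consistent with the lexicographically-first description.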

I’m going to be busy now, so I probably won’t get the chance to do anything more systematic on this for a while; in particular I would have liked to check that the hand calculations below can be relied on…

A quick answer to Ian’s question. I was trying to define a greedy algorithm, where by that I mean an algorithm that doesn’t backtrack. But the informal idea I described *does* require one to backtrack. So I ended up not being quite sure what the algorithm even was.

I get the impression from Thomas’s plot that the genuinely greedy algorithm I then suggested gives, as expected, not a very good result. So perhaps it’s worth going back to the original idea, as Ian was doing above. Here is what I had envisaged. Instead of setting all as yet unspecified values to zero when evaluating future partial sums, one sets them to 1. The problem I have with this is that when I fix the value of , then I will definitely decrease some of the later partial sums (because however I do it I’ll change some 1s to -1s and I won’t change any -1s to 1s), so I don’t have any reason to suppose I can keep them all positive. But perhaps some limited backtracking would be enough to deal with this problem when it arises, as Ian suggests. I’m interested to see that the sequence Ian has produced below is better than (as far as it goes).
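For concreteness, here is a small Python sketch (helper names are mine) of the test being proposed: undecided positions are counted as +1 rather than 0 when deciding whether a new prime can be given the value -1.

```python
def optimistic_check(p, sign, limit):
    """Test whether setting f(p) = -1 keeps every partial sum up to
    `limit` non-negative when each undecided position is counted as +1.
    If it does, keep f(p) = -1; otherwise fall back to f(p) = +1.
    Fixing f(p) = -1 can only lower the optimistic sums (some assumed
    +1s become -1s), so this gives no guarantee for later primes."""

    def value(n, sign):
        # f(n) if every prime factor of n is assigned, else None
        v, d = 1, 2
        while d * d <= n:
            while n % d == 0:
                if d not in sign:
                    return None
                v *= sign[d]
                n //= d
            d += 1
        if n > 1:
            if n not in sign:
                return None
            v *= sign[n]
        return v

    sign[p] = -1
    total = 1                           # f(1) = 1
    for n in range(2, limit + 1):
        v = value(n, sign)
        total += 1 if v is None else v  # optimistic +1 fill
        if total < 0:
            sign[p] = 1
            return False
    return True
```

For example, starting from nothing assigned, `optimistic_check(2, {}, 100)` accepts f(2) = -1, while the follow-up check at 3 is forced to +1 because the partial sum up to 3 would be -1.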

Sorry, I posted my comment below in the wrong place. And I should clarify that my hand calculations were for the informal algorithm that Tim suggested here.

{1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1,

-1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, 1, -1, -1, 1,

1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, -1,

-1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, 1, 1, 1, -1, -1, -1,

-1, 1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1,

1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, -1,

1, 1, -1, 1, 1, -1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1,

-1, -1, 1, -1, 1, 1, -1, 1, -1, 1, -1, 1, -1, -1, 1, -1, 1, 1, 1, -1,

-1, -1, -1, 1, 1}

Thomas, if you put this as input into your program does it go negative even if all subsequent primes are set to 1?

Based on many tests with my programs, it seems that this algorithm, like the previous ones, depends quite sensitively on precisely which large value one chooses to sum up to. If I run this new one with it does set f(7) to -1, but with larger values like it now chooses f(7)=+1. Here’s a plot for which shows oscillations.

I had a quick try by hand and think I have managed to get up to 100 using a rather vague approach that I think corresponds to what you were suggesting. The obvious problem is that there is no control on the amount of backtracking that may be required; I proceeded on the assumption that there would be enough room to get out of trouble, and only ever needed one layer of backtracking (on maybe three or four occasions). Here’s what I got:

{1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1,

-1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, 1, -1, -1, 1,

1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1}

When you say that the rough idea you had doesn’t work, Tim, do you mean that you’ve convinced yourself that the backtracking is an issue, or just that you haven’t convinced yourself that it’s not?

Since the value at p influences 1/p of the later values, I still think 1/p weights are worth exploring in various forms.

The idea is that, as usual, you define on each prime in turn. Each time you do so, you work out all the consequences that follow just from multiplicity.

We can think of the process as follows. We start with the sequence 1 0 0 0 … At the next stage, we set f(2)=-1 and work out the consequences of that, which gives us the sequence 1 -1 0 1 0 0 0 -1 0 0 … . Note that the partial sums of this sequence are all non-negative. At the next stage we are obviously forced to choose , which gives us a sequence that starts 1 -1 1 1 0 -1 0 -1 1 0 0 1 0 … . At the stage after that, we have to choose f(5)=1, because otherwise the partial sum up to 8 would be negative. (Here I depart from my previous approach, where I could have set f(5)=-1 and compensated for it by setting f(7)=1. But I’m not quite sure whether I have an algorithm that does that kind of compensation systematically.) So now we have a sequence that begins 1 -1 1 1 1 -1 0 -1 1 -1 0 1 0 0 1. I don’t know without going off and doing some calculations, but I think it is probably safe at this point to let f(7) be -1. And so on.

At each stage, it is provably possible to continue the algorithm without backtracking. Indeed, if all the partial sums of the sequence so far are non-negative and you set f(p)=1, then the partial sums along the multiples of p are all non-negative, and all multiples of p previously took the value zero, so no partial sum has been decreased. So the idea is that you try out f(p)=-1. If it leads to a negative partial sum somewhere then you set f(p)=1 and continue.
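If I’ve understood the description correctly, a minimal Python sketch of this “positive constraining” procedure would be the following (helper names are mine; undecided positions count as zero, and trial division keeps it self-contained at the cost of speed):

```python
def positive_constraining(limit):
    """Sketch of the rule described above: assign each prime in turn,
    trying f(p) = -1 first; undecided positions count as 0 when checking
    that all partial sums up to `limit` stay non-negative.  As argued in
    the text, f(p) = +1 is always safe, so no backtracking is needed."""

    def value(n, sign):
        # f(n) if every prime factor of n is assigned, else 0
        v, d = 1, 2
        while d * d <= n:
            while n % d == 0:
                if d not in sign:
                    return 0
                v *= sign[d]
                n //= d
            d += 1
        if n > 1:
            if n not in sign:
                return 0
            v *= sign[n]
        return v

    def all_sums_nonnegative(sign):
        total = 1                          # f(1) = 1
        for n in range(2, limit + 1):
            total += value(n, sign)
            if total < 0:
                return False
        return True

    sign = {}
    for p in range(2, limit + 1):
        if any(p % d == 0 for d in range(2, p)):
            continue                       # composite: value is forced
        sign[p] = -1                       # try -1 first
        if not all_sums_nonnegative(sign):
            sign[p] = 1                    # fall back: provably safe
    return [value(n, sign) for n in range(1, limit + 1)]
```

Running this with a horizon of 100 does reproduce the forced choices worked out above (f(2)=-1, f(3)=1, f(5)=1), though as noted elsewhere in the thread the choices at later primes can depend on the horizon one sums up to.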

The unintelligent nature of this algorithm leads me to think that its partial sums may grow pretty fast, but I’m still interested to know, just in case they don’t.

According to my calculations the value at 17 should be -1. I think that is right, because the function would then agree with as far as 17, so there shouldn’t be any proof that positivity implies that f(17)=1.

I’ve launched a run up to but it looks like it’ll take a while to complete.

Your values agree with the ones I calculated. (So far I’ve got up to 40.) But I’m not quite sure we’re doing the same thing: the idea I wanted to pursue involves looking ahead infinitely far, so to speak. (In practice it would involve picking a large and at each stage choosing the smallest prime that’s bigger than the last prime you changed to -1 and that you can change to -1 without any of the partial sums up to going negative — after you have put in all the changes at multiples of .)

I have a question about this: the first values I find up to 12 are: sequence 1+, 2-, 3+, 4+, 5-, 6-, 7+, 8-, 9+, 10+, 11-, 12+;

partial sums: 1, 0, 1, 2, 1, 0, 1, 0, 1, 2, 1, 2. So one could be tempted to pick 13-, but by multiplicativity we would have 14-, 15-, which rules out 13-, or else the sum at 15 becomes negative. Could anyone confirm this? If so, I have a code ready which includes the look-ahead up to the next prime, but the resulting discrepancy growth is large.

I had a little try at the positive constraining algorithm, just to check that it didn’t give . I’m not 100% sure about this, but I think the first place where it differs from is at 37, where it seems to be possible to set this function to equal -1 instead of 1. So it’s just conceivable that this could give us *better* behaviour than , which would be very interesting given that its partial sums remain positive. However, the evidence so far is pretty flimsy, to put it mildly.

I found this interesting page about the summatory Liouville function:

http://demonstrations.wolfram.com/UsingZetaZerosToComputeASummatoryLiouvilleFunction/

The formula that determines its oscillations is

where are the zeta zeros on the critical line. This is apparently proved using Perron’s formula, which in turn relies on information about the Dirichlet series defined by the Liouville function. I don’t know if some approach like this could be useful to us.
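Independently of the zeta-zero formula, the summatory Liouville function itself is easy to compute directly; here is a short Python routine (my own, not taken from the demonstration page) that sieves λ(n) = (-1)^Ω(n) and returns the partial sums:

```python
def summatory_liouville(limit):
    """Partial sums L(n) = sum_{k <= n} lambda(k) of the Liouville
    function, lambda(k) = (-1)^Omega(k), via a smallest-prime-factor
    sieve."""
    spf = list(range(limit + 1))          # spf[n] = smallest prime factor
    for i in range(2, int(limit ** 0.5) + 1):
        if spf[i] == i:                   # i is prime
            for j in range(i * i, limit + 1, i):
                if spf[j] == j:
                    spf[j] = i
    lam = [0] * (limit + 1)
    lam[1] = 1
    for n in range(2, limit + 1):
        lam[n] = -lam[n // spf[n]]        # one more prime factor flips sign
    sums, running = [0] * (limit + 1), 0
    for n in range(1, limit + 1):
        running += lam[n]
        sums[n] = running
    return sums                            # sums[n] = L(n)
```

Plotting the output against n shows the slow oscillations mentioned above; L(n) is non-positive for most small n, e.g. L(8) = -2.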

Thanks. I didn’t know how to get the feed addresses (actually, I didn’t know that there *were* feed addresses) for wiki changes and for the Blog comments.

Regarding Alec’s question, a few simple observations. Wikipedia says that the Dirichlet series for the Liouville function is a ratio of zeta values, namely , so that indeed when is not a Riemann zero the sum stays finite, meaning that the number of integers with an odd number of prime factors is comparable to the number with an even number. And when is a nontrivial Riemann zero the sum diverges, so that one type of integer decomposition dominates, and the Liouville function then correlates well with something like . Now of course the Liouville function doesn’t give us enough information for EDP, since a number with an even total number of prime factors could still be given the value +1 or -1. But nevertheless such a case covers in particular numbers of the form , which we do know must be assigned +1. So the oscillation coming from nontrivial Riemann zeros in the Liouville partial sums seen on Wikipedia may still transfer to EDP. Does that make sense?

As for the numerical aspect, yes, I understood that many ideas should be tested; for instance I had looked at the forward idea but it didn’t work well. I’ll give your latest “positive constraining” one a go, try others too, and report the results.

]]>In the light of that I want to make doubly clear that the main point of my previous question was not whether that particular idea would give a suitable damping mechanism (though I did have my hopes) but rather whether there is *some* way of getting rid of the oscillations. Alec’s remark about the Liouville function is interesting in this respect, since I’m getting the impression that we need to have a better understanding of why the oscillations occur if we want to make intelligent guesses about how to get rid of them.

A general idea that could perhaps underlie an algorithm is that if the partial sums are on their way down anyway then there is no need to help them along. That to me suggests an algorithm that looks ahead a bit. For instance, if you’ve chosen the values up to , then perhaps you could calculate the partial sum up to 2p, ignoring the places where is not yet defined, and choose accordingly.

Here’s an idea that probably won’t work, but I can at least explain the motivation behind it. It’s a different sort of greedy algorithm that keeps the partial sums as small as possible *but insists that they are all positive*. The way it does this is as follows. It starts by setting for all primes . It then looks at 2 and sees whether it can change while keeping all partial sums positive. Finding that it can, it changes to -1. It then can’t change (or the sum up to 3 would be -1) so it doesn’t. It can change to -1 so it does that.

It seems possible that this could result in the function , in which case it works but doesn’t give anything interestingly new. If that happens, then one could make a few “wrong” choices to start with.

The reason for doing it is that if the algorithm knows that it mustn’t go below zero, then it will be very careful to slow things down if the partial sums seem to be decreasing to zero too fast. In other words, it might lead to a natural damping tendency.

I’ve just tried this superimposition idea, but unfortunately it doesn’t seem to behave nicely. I’ve only run it up to length 14000, but already got this plot (log scale).

hmm… I can’t even spell “RSS” :S

I’m not sure what you want to know. I subscribed to the feeds for the wiki’s recent changes, comments on Gowers’s Weblog, posts on Gowers’s Weblog, and articles on WordPress with the tag polymath5. I’m using Firefox with the addon NewsFox. (I’m not sure it is the best RSS reader; it makes my Firefox freeze from time to time.) This is a much easier way to keep up than just pressing F5 on this blog and on the wiki all the time, but I don’t know how you do it? Please ask again if I didn’t answer your question.
