In the process of refining my program to search for an odd weird number, I needed to determine an upper bound: the least integer p such that (1 + 1/p)^k <= 1 + 1/(r - 1), for given integer k >= 1 and rational r > 1.
Now analytically this is straightforward: rearrange to isolate p, and we get p >= 1/((1 + 1/(r - 1))^(1/k) - 1). But the program is written to use an arbitrary-precision integer library, and I can't do that in integers: I'd need to go to some high-precision reals, and that's going to get complex and (above all) slow.
So I wondered whether there was an expression I could calculate more easily that would get close: after all, I only need an integer, and to a real the gap between one integer and the next is huge.
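(For what it's worth, the test itself can be done in pure integers: writing r = a/b in lowest terms, 1 + 1/(r - 1) is just a/(a - b), so the condition becomes (p + 1)^k (a - b) <= p^k a. Here's a quick Python sketch of that idea; the function names are my own invention, not anything from the actual program, and the naive upward scan is just for illustration.)

```python
def satisfies(p: int, k: int, a: int, b: int) -> bool:
    """True iff (1 + 1/p)^k <= 1 + 1/(r - 1), where r = a/b > 1.

    Since 1 + 1/(r - 1) = r/(r - 1) = a/(a - b), clearing denominators
    turns the test into pure (arbitrary-precision) integer arithmetic.
    """
    return (p + 1) ** k * (a - b) <= p ** k * a

def least_p(k: int, a: int, b: int) -> int:
    """Least integer p passing the test, found by a naive upward scan."""
    p = 1
    while not satisfies(p, k, a, b):
        p += 1
    return p
```

For the numbers that turn up later in this post, least_p(3, 385, 1) comes out at 1153 and least_p(4, 385, 1) at 1538; the point of a good bound is to start the scan near the answer instead of at 1.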
In the simple case, when k = 1, we get p = r - 1. By observation with higher k it looks like p is a bit less than kr, so let's see what we can do with that. If we take p = kr, we need to show: (1 + 1/(kr))^k <= 1 + 1/(r - 1)
There's a standard expansion of (1 + x)^k, so the first step is easy - note that you can read "sum_{i=0}^{k}{ f(i) }" as "the sum of the values of f(i), when i successively takes each of the values from 0 to k": (1 + 1/(kr))^k
= sum_{i=0}^{k}{ (k!/(i!(k-i)!)) / (kr)^i }
Now k!/(k-i)! is a product of i numbers from k downwards, so it can't be more than k^i (and can be equal only when i is 0 or 1); and we're dividing by a factor of k^i, so that cancels out neatly: (1 + 1/(kr))^k
= sum_{i=0}^{k}{ (k!/(i!(k-i)!)) / (kr)^i }
<= sum_{i=0}^{k}{ 1 / (i! r^i) }
Also note that i! must be at least 1 within the relevant range, so 1/i! <= 1: (1 + 1/(kr))^k
= sum_{i=0}^{k}{ (k!/(i!(k-i)!)) / (kr)^i }
<= sum_{i=0}^{k}{ 1 / (i! r^i) }
<= sum_{i=0}^{k}{ 1 / r^i }
And it just happens that there's a well-known sum for the infinite series 1/r^i, which conveniently happens to be ... 1 + 1/(r - 1). And since all the terms in the series are positive, the sum of just a few of them must be less than that: (1 + 1/(kr))^k
= sum_{i=0}^{k}{ (k!/(i!(k-i)!)) / (kr)^i }
<= sum_{i=0}^{k}{ 1 / (i! r^i) }
<= sum_{i=0}^{k}{ 1 / r^i }
< 1 + 1/(r - 1)
QED.
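Since I don't entirely trust my own algebra, here's a quick sanity check of that inequality using Python's exact rational arithmetic (throwaway code of mine, nothing to do with the real program):

```python
from fractions import Fraction

def lhs(k: int, r: Fraction) -> Fraction:
    """(1 + 1/(kr))^k, evaluated exactly."""
    return (1 + 1 / (k * r)) ** k

def rhs(r: Fraction) -> Fraction:
    """1 + 1/(r - 1), evaluated exactly."""
    return 1 + 1 / (r - 1)

# The proof says LHS < RHS for every integer k >= 1 and rational r > 1;
# spot-check it on a small grid, with no floating-point error to worry about.
for k in range(1, 8):
    for r in (Fraction(11, 10), Fraction(3, 2), Fraction(385)):
        assert lhs(k, r) < rhs(r), (k, r)
```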
Well that all seems pretty painless: so what does it give us? Well, some typical numbers from early in the program would be r = 385, with k some small integer like 3 or 4. Let's see what the calculator says for the full-on calculation: p = 1/((1 + 1/(r - 1))^(1/k) - 1)
r = 385, k = 3 => p ~= 1152.999427
r = 385, k = 4 => p ~= 1537.499206
Now this gets interesting: taking p = kr gives us 1155 and 1540 respectively for those two calculations, while the calculated numbers seem strangely close to 3r - 2 and 4r - 2.5 respectively: other values for k show the pattern continuing, with the calculated p in each case being a tiny bit less than (kr - (k + 1)/2).
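(If you want to reproduce those calculator numbers, a few lines of Python will do; plain floating point is fine at this precision, and exact_p is just my name for the full-on formula.)

```python
def exact_p(k: int, r: float) -> float:
    """The real-valued bound p = 1/((1 + 1/(r - 1))^(1/k) - 1)."""
    return 1 / ((1 + 1 / (r - 1)) ** (1 / k) - 1)

# Compare the true bound against kr and against the observed kr - (k+1)/2.
r = 385
for k in (3, 4, 5, 6):
    print(k, exact_p(k, r), k * r, k * r - (k + 1) / 2)
```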
That's interesting also, because when k = 1 the inequality becomes equality: (1 + 1/(1.r - (1 + 1)/2))^1 simplifies to exactly 1 + 1/(r - 1). And matching our inequality exactly at k = 1, and to within about one part in a million for larger k, sounds pretty good - particularly because with the values so spookily close to exact I'd expect them to fall out of the algebra with just a little bit of fiddling. We just need to repeat our earlier trick with a new inequality: (1 + 1/(kr - (k + 1)/2))^k <= 1 + 1/(r - 1)
[Sound effects: a little bit of fiddling, then a bit more. A pencil snaps.]
Nope, can't see where to go from there: the first proof was very simple, but the steps that established inequalities were what simplified the initial expression, and between them they managed to use up all the (quite generous) slack in the system. This new one is way tighter, and there's no way we're going to get away with anything like that.
Next step, go talk to Simon: does he have some graphing software handy that'll show what some of these values look like? He does indeed, and we try one out. I don't think the picture will change much with small changes to r from 385, but since r can in principle be any rational number greater than 1, that calculation of 1/(r - 1) gets to move a lot more when r is close to 1, so let's try setting it to 11/10: the inequality then becomes: (1 + 1/(11k/10 - (k + 1)/2))^k <= 11
and the left-hand side (LHS) rearranges to (1 + 10/(6k - 5))^k. Here's what the graph looks like: [to be inserted when I get a screenshot]. The graph decreases uniformly from infinity at the singularity (where 6k - 5 = 0), and since our inequality gives an exact match at k = 1 we'd have proof enough if we could show the graph acts like that for every r > 1. Can we show that?
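In lieu of the screenshot, here's a quick numerical stand-in I knocked up: sample the r = 11/10 curve at quarter-integer steps of k and check it keeps falling. (This only samples points, of course; it proves nothing.)

```python
def f(k: float) -> float:
    """The LHS (1 + 10/(6k - 5))^k for r = 11/10; singular at k = 5/6."""
    return (1 + 10 / (6 * k - 5)) ** k

# Sample from k = 1 to k = 10 in steps of 1/4 and check the values
# decrease at every step.
ks = [1 + n / 4 for n in range(37)]
vals = [f(k) for k in ks]
assert vals[0] == 11.0  # exact match with 1 + 1/(r - 1) at k = 1
assert all(a > b for a, b in zip(vals, vals[1:]))
```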
Well, a function f(x) is decreasing if f(x + d) < f(x) for arbitrarily small positive d. Which is another inequality: (1 + 1/((k + d)r - ((k + d) + 1)/2))^(k + d) < (1 + 1/(kr - (k + 1)/2))^k
Ouch, that looks horrible, and I'm not going to touch it with a bargepole.
[Later] Silly me, it is at least a bit easier than that: a function f(x) is decreasing wherever its derivative f'(x) < 0. But the derivative looks just as nasty - (f(x)^x)' = f(x)^x (x.f'(x)/f(x) + ln(f(x))) - so I'm not going to bother.
Ok, time to step back and rethink. What I'd like to do is get something simpler than that (kr - (k + 1)/2) expression by factoring out the k, but that gives (k(r - 1/2) - 1/2) which is hardly better. But let's remember these are my own variables and the r is only used in one other place: what happens if I hide the 1/2 with a change of variable? Let's say s = r - 1/2, what would that give us?
The LHS is easy: we just substitute in the s, and just for kicks I'll multiply through by 2 to get rid of the other 1/2: (1 + 2/(2ks - 1))^k
For the RHS it's probably easier if we combine it into a single fraction: 1 + 1/(r - 1)
= r/(r - 1)
= (s + 1/2) / (s - 1/2)
= (2s + 1) / (2s - 1)
Ooh, pretty. :)
But now the 2s are bugging me, so let's redefine it slightly: let's say instead s = 2r - 1, that gives: (1 + 2/(ks - 1))^k <= (s + 1) / (s - 1)
That gives me an idea: what if we turn the LHS into a single fraction as well? 1 + 2/(ks - 1)
= (ks + 1) / (ks - 1)
Dang, now we're getting somewhere - put them together, check the range for which s is valid, and we now have, for k >= 1, s > 1: ((ks + 1) / (ks - 1))^k <= (s + 1) / (s - 1)
That's so pretty it just has to be true.
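And it's at least checkable: here's an exact Fraction-based test of it over a small grid of k and s (my own throwaway code again; exact arithmetic matters here, because the two sides can agree to within a few parts per billion).

```python
from fractions import Fraction

def holds(k: int, s: Fraction) -> bool:
    """Exact rational check of ((ks + 1)/(ks - 1))^k <= (s + 1)/(s - 1)."""
    return ((k * s + 1) / (k * s - 1)) ** k <= (s + 1) / (s - 1)

# Sweep a grid of k >= 1 and s > 1, from s barely above 1 up to s huge;
# any counterexample would be caught exactly, with no rounding to hide it.
for k in range(1, 8):
    for s in (Fraction(101, 100), Fraction(3, 2), Fraction(769), Fraction(10**6)):
        assert holds(k, s), (k, s)
```

The s = 769 column is the r = 385 case from earlier; at k = 3 the two sides differ by only about one part in a billion, which is exactly why the "little bit of fiddling" got nowhere.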
I still have no idea how to prove it, but at least it's now looking like something that might be a theorem someone's previously proven, or failing that one that a proper mathematician might be motivated to have a crack at: time to go ask around.
Hugo
