
Heart

Fascinating. :cake:

I was not sure how to proceed and finally dropped my attempt at it. Sometimes such things are not at all obvious to me.


Fascinating. :cake:

I was not sure how to proceed and finally dropped my attempt at it. Sometimes such things are not at all obvious to me.

I'm glad I'm not the only one! I'll keep puttering; challenges that I can't do are sometimes the funnest ;)

CosineTheCat

Somewhere along the line I missed welcoming you in Gender, and ended up missing a whole bunch of Math?!

(even though it's out of my reach right now [couple more years]) xD

nerdperson777

I keep trying to read the math but it's not happening.


That's ok. It's something to strive for ;)

And I think we should have one or two math riddles at high school or early undergrad level. Anyone know of one or two?


Actually most of the stuff above is just basic calculus with a tiny bit of complex analysis - for the most part it's accessible to a first-year undergrad. But I agree it's difficult to follow because it's a forum, and equations are hard to follow in text. I wish I knew an efficient way of inserting LaTeX equations into AVEN as Kelly apparently does.... (Or maybe she has more time to get it all right...)

Kelly's Rodd and Steve beer puzzle on the other hand is not so simple because it was first proven in the late 1800s. But none of the posts above, including my own, contain a proof... my post only gives a plausibility argument as to why the answer looks like it should be zero. But proving it is zero is much harder. Unless, Kelly, you have a new and simple proof in mind? :P

But OK, here is a little puzzle that I recently saw on another forum that doesn't involve any theory. I posted it elsewhere on AVEN. I'll copy it word for word so I don't make any errors.


Kolya and Vitya play the following game. There is a pile of 31 stones on the table. The boys take turns making moves and Kolya begins. In one turn a player divides every pile which has more than one stone into two lesser ones. The player who after his turn leaves all piles with only one stone in each wins. Can Kolya win no matter how Vitya plays?

In fact if anyone wants I'll play them at this game. :)


Kolya and Vitya play the following game. There is a pile of 31 stones on the table. The boys take turns making moves and Kolya begins. In one turn a player divides every pile which has more than one stone into two lesser ones. The player who after his turn leaves all piles with only one stone in each wins. Can Kolya win no matter how Vitya plays?

First thoughts:

As a first response, I find myself going through iterations for the lower (and simpler) numbers. Starting with a game of 2 stones, it is obvious that Kolya wins, on the very first turn. Then, with 3 stones, Kolya can divide it into either 2 and 1 or 1 and 2, both of which give the same result: Vitya has one pile of one and one pile of two. So Vitya divides the pile of two and wins the game. I went through and wrote out all the possible combinations (eliminating permutations of piles as I went) for all possible plays with 2, 3, 4, 5, and 6 stone games. It appears that Kolya always wins the even numbered games, and Vitya always wins the odd numbered games.

Thus, I predict that, in a game of 31 stones, Vitya must win. However, I shall now attempt to prove it... maybe by starting to list combinations of turns for 31 rocks; it wasn't nearly as hard as I thought it would be, given how many of them are just permutations of each other :P

*wanders off to find a pen and paper* Can't keep numbers that high in my head :P

Edit: no. I think I can actually do this in my head. Give me some time to compose my thoughts...

Second wave of thoughts on the answer:

I'm going to start thinking in terms of even and odd numbers. Any odd numbered pile of rocks can be split into an odd and an even pile, and that's the only option. Any even numbered pile though can be split into two odd numbered piles OR two even numbered piles. Ultimately, the winning move in any game will be to split a pile of 2 into two 1's, so an even number into two odds. The first move in the 31 stone game must be to split the piles into one odd and one even. Then, if the odd one is not just one stone or the even one just two stones, Vitya has three options: split the odd pile into an odd and an even, split the even pile into two odds, or split the even pile into two evens. If the odd pile is just one stone, then the first option is not allowed, and if the even pile is just two stones, then the last option is invalid.

Thinking in this way, we can prove that, for all odd numbered stone games, Vitya can win (though I have not yet thought of how to prove that he must); by simply taking one stone off the "big" pile each turn, we can easily see that, for a game of n stones, there are n-1 turns until they are all in piles of one stone. So, for any odd number, there are an even number of turns, which means Vitya's turn is the last one. The same reasoning applies to even numbered stone games. If there are an even number of stones n, then n-1 turns makes an odd number of turns, so Kolya wins if each turn is to simply split the pile into one pile of one and the other pile housing all the rest of the stones.

I shall think for another few minutes to see if this is generalisable to any permutation of turns, or if it only works if both players coordinate to only take one stone off the pile each turn, instead of, for example, splitting it in half.

I'm enjoying this, please pardon the stream of consciousness, but at least I have proven now that Vitya can win :) Next step is to prove that he must win.

Third wave of thoughts: oops, almost late for work. I'll think about it between tasks ;)

Though, on the bright side, I have read through and mostly understood the others by now :D Yay for a day off yesterday! I have to admit, I am a bit intimidated by infinite sums, and I haven't dealt with them nearly enough to be as comfortable as I'd like. Zeta functions too; those are foreign things to me mostly. If only I have infinite amounts of time to learn everything I want to learn... :P
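For anyone who wants to check small cases by machine: note that the puzzle as quoted says a player divides every pile which has more than one stone, so a single move can split several piles at once, which is a different game from splitting one pile per turn. Here is a minimal memoized brute force of the rule as stated (the function names are mine):

```python
from functools import lru_cache
from itertools import product


def splits(p):
    """All ways to divide a pile of p stones into two non-empty piles."""
    return [(i, p - i) for i in range(1, p // 2 + 1)]


@lru_cache(maxsize=None)
def first_player_wins(state):
    """state: sorted tuple of the pile sizes > 1 (piles of one stone are inert).
    True if the player about to move can force a win."""
    # One move = choose a split for EVERY pile larger than one stone.
    for choice in product(*(splits(p) for p in state)):
        new_state = tuple(sorted(q for pair in choice for q in pair if q > 1))
        if not new_state:  # every pile is a single stone: the mover just won
            return True
        if not first_player_wins(new_state):
            return True  # we can leave the opponent a losing position
    return False


def kolya_wins(n):
    """Does the first player win, starting from one pile of n stones?"""
    return n > 1 and first_player_wins((n,))


# Small cases worked by hand above check out, e.g.:
# kolya_wins(2) is True, kolya_wins(3) is False
```

Running `kolya_wins(31)` settles the 31-stone question directly, though of course that spoils the fun of proving it.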


I like puzzles like the stone pile one, Mic and Heart. :)

Once again, I am asking what is the difference in the beer served between Rodd and Steven as we approach infinity.

michaeld understood the hint and figured it out (I figured that he would :) ).

The difference in the sum total of beer that Rodd and Steven would pour is:

Zero. Despite the types of numbers being different (odd and even prime factors), they will sum to the same amount.

Here is a log plot of the running totals:

[Plot Trend1.jpg: log plot of the running totals for Rodd and Steven]

They were just about caught up at the tenth round.

How can this be?

I actually gave a hint when I came up with the idea of vector sums giving a -1 for numbers with an odd number of prime factors, 0 for those divisible by a square, and +1 for those with an even number of prime factors. Indeed, there is a function called the Möbius function μ(n), and it does the same thing. We have, for positive integers:

μ(n) = +1 if n is a square-free positive integer with an even number of prime factors

μ(n) = 0 if n has a squared prime factor

μ(n) = −1 if n is a square-free positive integer with an odd number of prime factors

The vector sums that I used work (and yes, we need 1 <= k <= N; I implied this by "integers k are the coprimes of N", meaning that k must be less than N), but finding the prime factors and then assigning a value as above works too.

Secondly, this Möbius function has a peculiar relationship with the Riemann Zeta function, ζ(s):

sum_(n=1..infinity) μ(n)/n^s = 1/ζ(s)

Now, when we have a term from a number that is divisible by a square, that term is zero, so we are really only adding terms from those integers with an odd and even number of factors.

If we let s = 1, then we have a sum that is actually the total poured by Steven minus the total poured by Rodd (the even-count terms minus the odd-count terms). That is:

sum_(n=1..infinity) μ(n)/n = 1/ζ(1)

And that will give us our series in question (the answer to the puzzle).

An easy way to do this might be to determine ζ(1) and then take the reciprocal, because the first equation presented in this post equates the Möbius sum to the reciprocal of the Zeta function, and thus our series in question is the reciprocal of ζ(1).

However, s = 1 is the only place on the complex plane where the Riemann Zeta function cannot be evaluated (approaching it on the real number line from above 1 gives a limit of positive infinity; approaching it from below 1 gives negative infinity). We learned from michaeld that proving that we can use the limit for s -> 1 in the Zeta/Möbius function equation is difficult (but it is proven in Chapter 5, section 6, A Postscript to De la Vallee Poussin's Proof, in the 1974 Riemann's Zeta Function by H. M. Edwards).

Nonetheless, as a physicist, I cheated. I realized that the Zeta/ Möbius function equation is only valid for real s > 1, so I took the limit from the positive side and got infinity.

The reciprocal of infinity is zero. So the difference between the pours done by Rodd and Steven is:

0 = 1 - 1/2 - 1/3 - 1/5 + 1/6 - 1/7 + 1/10 - 1/11 - 1/13 + 1/14 + 1/15 - ...

So the difference between the amount of beer served by Rodd and Steven is zero. They will pour the same amount.

The difference plotted, as a function of rounds looks like this:

[Plot difference1.jpg: running difference between the two totals versus round]

They catch up rather quickly and pass one another frequently, but only by a small and ever-shrinking amount. The absolute difference at 100,000 is about 0.0004872, and at a million, about 0.0002006.

The fact that the two sums equate at infinity might or might not be intuitive. It is obvious once the Zeta/Möbius function equation is utilized; or, one could follow the trends and draw a logical conclusion.
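A quick numeric sketch of all this, assuming a simple sieve for μ(n) (the implementation here is my own):

```python
def mobius_upto(limit):
    """mu[n] for 0 <= n <= limit via a sieve: flip the sign once for each
    prime factor, then zero out anything divisible by a square."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0
    return mu


limit = 100_000
mu = mobius_upto(limit)

# Partial sum of mu(n)/n: Steven's (+1) and Rodd's (-1) pours cancelling.
running = sum(mu[n] / n for n in range(1, limit + 1))
print(abs(running))  # small, and it keeps shrinking as the limit grows
```

The same `mu` array also classifies each round directly: μ = -1 is a pour for Rodd, +1 for Steven, and 0 for Pasquare.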

Bonus: A digression!

Let us look at the trends of the sums of beers poured by Rodd:

[Plot Odd_Sum.jpg: running total of Rodd's pours with logarithmic trend line]

Steve:

[Plot Even_Sum.jpg: running total of Steven's pours with logarithmic trend line]

And Pasquare:

[Plot Square_Sum.jpg: running total of Pasquare's pours with logarithmic trend line]

We see a few things. First, the trends are logarithmic and predictable. This also means that the proportion of numbers in each category is consistent: about 30.4% of integers have an odd number of prime factors (and this includes the primes themselves), another 30.4% have an even number of prime factors, and 39.2% are divisible by a square. These proportions hold from roughly ten onward.

Secondly, since we only have one keg, we would need to know when to change it for a new full one. To do that, we would want to know the formula for the running total for all three bartenders combined. This, in turn, would give the formula for the sum of all reciprocals of integers (1 + 1/2 + 1/3 + 1/4 ...).

We can actually find an approximation by adding up the trends posted on all of these plots. This gives:

0.30396Ln(N) + 0.52201 +

0.30396Ln(N) + 0.52201 +

0.39206Ln(N) - 0.46657

This gives 0.99998Ln(N) + 0.57745

If we obtained the trends further on, we would get a more accurate approximation, and from what we have so far, it looks like the factor that multiplies the natural logarithm may indeed be 1.000...

And to jump to the conclusion, it is correct.

This means that the approximation of the sum of the reciprocals of positive integers is given by:

1 + 1/2 + 1/3 + ... + 1/N ≈ Ln(N) + C

When N is large. It is the natural log of N plus a constant C. If we go to infinity, we could determine this constant. True, the natural logarithm of infinity is infinite, and the sum of the reciprocals of all positive integers is infinite; but the sum of reciprocals is still a larger infinity by the amount C.

Let us also realize that the integral of 1/x from 1 to N is Ln(N). That is:

integral_(1 to N) (1/x) dx = Ln(N)

So let us look at these:

[Figure Gamma_Ln.jpg: unit-wide rectangles of height 1/N overlaid on the curve y = 1/x]

Each gray rectangle has an area of 1/N (they are 1/N of a unit tall and 1 unit wide). The sum of all rectangles up to, say, 6, is 1 + 1/2 + 1/3 + 1/4 + 1/5 (which is about 2.2833; the rectangles' number is at the beginning of the rectangle, so the first rectangle rests between 1 and 2). And note that for up to 6, there are five rectangles, since we start with one.

The black line is 1/x. It is continuous, so at x = 1.5, its height is 1/1.5. The area under the curve from, say, 1 to 6 (in short, the integral of 1/x dx from 1 to 6) is Ln(6). And the natural logarithm of 6 is about 1.7918.

The difference between 1 + 1/2 + 1/3 + 1/4 + 1/5 = about 2.2833,

and Ln(6) = about 1.7918

is about 0.4915.

These represent the area of the grey rectangles that are above the curve. When we continue, we will eventually reach a constant. And that value is about:

0.577215665

And it is known as "gamma" or, the Euler-Mascheroni Constant. It shows up in other places, too.

To continue adding up the portions of rectangles above the curve, up to 100, we see:

[Plot App_Gamma.jpg: running sum of the rectangle portions above the curve, up to 100]

The first 100 of these areas sum to about 0.57226, which is not that far from the actual value.

You might complain that I am actually calculating:

gamma ≈ [1 + 1/2 + ... + 1/N] - Ln(N+1)

Since I used only five rectangles (representing the sum of reciprocals up to 5) and Ln(6). But that is just to show, geometrically, how gamma can be obtained from the tops of the rectangles. As N approaches infinity, Ln(N) approaches Ln(N+1), and for that matter, we can still obtain gamma from the difference between the sum of reciprocals up to N and Ln(N) as N gets large. But we start with N = 1 (and 1/1 = 1) and Ln(1) = 0, so the difference is one, and that error from gamma is 1 - gamma = about 0.4228. It approaches gamma even more quickly than the sums of the tops of the rectangles, except that it approaches from the opposite direction:

[Plot Up_Down.jpg: both approximations approaching gamma from opposite directions]

Both curves have the same asymptote (gamma).
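Both approaches are easy to check numerically; a little sketch in plain Python (nothing clever, gamma hard-coded for comparison):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant, for comparison

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n, summed directly."""
    return sum(1.0 / k for k in range(1, n + 1))

# Rectangle tops above 1/x: the sum of (1/n - log((n+1)/n)) telescopes
# to H_N - log(N+1); it creeps up toward gamma from below.
below = harmonic(100) - math.log(101)

# H_N - log(N) creeps down toward gamma from above.
above = harmonic(100) - math.log(100)

print(below, above)  # both near 0.5772..., from opposite sides
```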

Two new ones:

1) We saw earlier that with just the two bartenders, the difference between what Rodd and Steven poured was Ln(2). This time, with three bartenders, the difference was something else. Now let us consider this third bartender, Pasquare. As we approach infinity, what is the difference between what Pasquare pours and what Steven pours?

2) We have learned about curves that are logarithmic, and never settle to a final value. These can be inferred by plotting the curve with a logarithmic x-axis and noting that they form straight lines.

We have also seen some asymptotic curves, such as method to obtain gamma. These do approach a final value and the higher one goes, the closer it gets.

Let us consider the method of the sums of reciprocals and the natural logarithm.

[1 + 1/2 + ... + 1/N] - Ln(N) = C

C approaches gamma as N gets large. Let us look at that. The difference, Δ, between C and gamma is:

Δ = C - gamma = [1 + 1/2 + ... + 1/N] - Ln(N) - gamma

And let us put this into perspective. When N gets larger, Δ gets smaller; but when we multiply this delta times N, this Δ*N factor approaches a specific value as N gets larger (we have an asymptote). What is this value?

Dodecahedron314

Fun fact: when I first got my calc textbook for the summer and saw that it was written by Michael Spivak, my first thought upon seeing the last name was "what, like the pronouns? Nah, probably just a coincidence." Well, I finally decided to do a quick Wikipedia search on the guy, and it turns out...yes, actually, like the pronouns. The dude who wrote my math book is also the reason e/em/eir pronouns are called Spivak pronouns, because he used them in a LaTeX manual. Who'd-a thunk it?

Link to post
Share on other sites

Fun fact: when I first got my calc textbook for the summer and saw that it was written by Michael Spivak, my first thought upon seeing the last name was "what, like the pronouns? Nah, probably just a coincidence." Well, I finally decided to do a quick Wikipedia search on the guy, and it turns out...yes, actually, like the pronouns. The dude who wrote my math book is also the reason e/em/eir pronouns are called Spivak pronouns, because he used them in a LaTeX manual. Who'd-a thunk it?

Major hat-tip to Spivak!! *high fives em*

Link to post
Share on other sites

I know of Spivak the author (though I haven't read his books) but I didn't know there was such a thing as Spivak pronouns. :P

Heart I think I won't give any hints for the puzzle so as not to spoil your fun! :P

Kelly:

>(and yes, we need 1 <= k <= N, I implied this by "integers k are the coprimes of N" meaning that k must be less than N)

I think the coprimes of N just means every integer that's coprime to N (i.e. no non-trivial common factors with N), which I think would include integers higher than N too...

> Nonetheless, as a physicist, I cheated. I realized that the Zeta/ Möbius function
> equation is only valid for real s > 1, so I took the limit from the positive side and got infinity.

Yeah, the thing that's easy to prove is that zeta(s) -> infinity as s->1+. (Or in fact as s->1 generally.)

But what is not easy is proving that:

1/1^s - 1/2^s - 1/3^s - 1/5^s + 1/6^s - 1/7^s + 1/10^s - 1/11^s - 1/13^s + 1/14^s + 1/15^s - ..
->
1 - 1/2 - 1/3 - 1/5 + 1/6 - 1/7 + 1/10 - 1/11 - 1/13 + 1/14 + 1/15 - ..

as s->1. It looks obvious but it really isn't at all; you cannot just replace every s with 1 in the limit and hope it works, precisely because the resulting series is not absolutely convergent.

As for the harmonic series... Hmmm. I'm well aware that

1/1 + ... + 1/N - log N -> gamma

but I don't think I recall the higher asymptotic of the harmonic series. (Which would be the answer to your question - in particular the O(1/N) term.)

By the way I'm going to write log not ln for natural log. For me all logs are base e - except on some occasions they are base 2 in which case I would write log_2. Base 10 logs have very little use. :)

Anyway let's see if we can work it out....

The way you show that 1/1 + .. + 1/N = log N + c + o(1) for some constant c (and where o(1) just stands for a function that tends to 0) is by comparing the harmonic series with the harmonic integral (i.e. the integral of 1/x), which, between 1 and N, is precisely log N. So to get the higher asymptotics, the obvious thing to do is to look more carefully at the difference between the sum and the integral.

Well... we want to approximate 1/n by 1/x for x that is nearby to n. So let's compare 1/n to 1/(n+x) where 0 <= x < 1.

Let's start with:

1/n - 1/(n+x) = x/[n(x+n)]

By using Taylor series we may write that:

1/n - 1/(n+x) = x/n^2 - x^2/n^3 + O(1/n^4).

Integrate from x = 0 to 1:

1/n - log(n+1) + log n = 1/(2n^2) + O(1/n^3)

That seems like a good start but how can that help us...

Well what we actually want is the limit of

R_N := N*[H_N - log N - gamma]

where H_N := 1/1 + ... + 1/N is the Nth harmonic number.

But gamma = sum[1/n - log(n+1) + log(n)] from n=1 to infinity. That sum from n=1 to N is H_N - log(N+1). Therefore:

R_N = N*[-sum(n=N+1..infty) [1/n - log(n+1) + log(n)] + log(N+1) - log(N)]

The thingy in the sum is what we just estimated a minute ago:

R_N = N*[-sum(n=N+1..infty) [1/(2n^2)+O(1/n^3)] + log(N+1) - log(N)]

Now sum 1/(2n^2) from n=N+1 to infinity is 1/(2N) + O(1/N^2) by comparing it with the integral of 1/(2x^2). The O(1/n^3) term sums to O(1/N^2). And log(N+1) - log(N) = 1/N + O(1/N^2). So putting that together:

R_N = -1/2 + 1 + O(1/N) -> 1/2 as N->infinity.

So all being well that's Kelly's limit.
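A quick numeric sanity check of that limit (just a sketch, with gamma hard-coded to machine precision):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def R(N):
    """R_N = N * (H_N - log N - gamma)."""
    H = sum(1.0 / k for k in range(1, N + 1))
    return N * (H - math.log(N) - GAMMA)

for N in (10, 100, 1000, 100000):
    print(N, R(N))  # climbs toward 1/2; the gap shrinks like 1/(12N)
```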

EDIT: corrected sign error.

===

Kelly, for the difference between Pasquare and Steven, do you mean the simple difference in the limit (which surely is infinity), or the difference in the coefficient of log N?


That is a very good way of solving it, Mic. :)

Fun fact: when I first got my calc textbook for the summer and saw that it was written by Michael Spivak, my first thought upon seeing the last name was "what, like the pronouns? Nah, probably just a coincidence." Well, I finally decided to do a quick Wikipedia search on the guy, and it turns out...yes, actually, like the pronouns. The dude who wrote my math book is also the reason e/em/eir pronouns are called Spivak pronouns, because he used them in a LaTeX manual. Who'd-a thunk it?

[Image spock.png]

That makes Transwhatevers even more awesomely geeky. :)

1) We saw earlier that with just the two bartenders, the difference between what Rodd and Steven poured was Ln(2). This time, with three bartenders, the difference was something else. Now let us consider this third bartender, Pasquare. As we approach infinity, what is the difference between what Pasquare pours and what Steven pours?

Hopefully, this one is easy. :)

We saw before that the equation for the pours done by Pasquare (or rather, a fit precise to a few significant figures) can be approximated by:

0.39206Ln(N) - 0.46657

And for Steven, it was:

0.30396Ln(N) + 0.52201

Where N is the number of the round and N is large. The difference between what the two pour would be given by subtracting Steven's equation from Pasquare's.

That gives about:

0.08810Ln(N) - 0.98858

But this is also a log function, and diverges. It is the same shape as the sum of reciprocals curve, except multiplied by a smaller coefficient and with a different (this time negative) additive constant.

Thus, it is infinite. A round of beer to everyone who got this one (it was easy).

It slowly diverges. At one Septillion (a 1 followed by 24 zeros), it has only summed to about 3.9.

[Plot Delta_PS.jpg: running difference between Pasquare's and Steven's totals]

We saw that originally Steven was ahead of Pasquare, but Pasquare caught up. To find when they catch up, set the equation to 0 and solve for N:

0.08810Ln(N) - 0.98858 = 0

Go ahead; you can reuse what you learned in high school. Pasquare passes Steven at round 74,691.
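For the record, the high-school algebra in question, using the fitted coefficients quoted above (they are empirical fits, so the crossover round is only as good as the fit):

```python
import math

# Fitted trend: 0.08810*Ln(N) - 0.98858 = 0  =>  N = exp(0.98858 / 0.08810)
crossover = math.exp(0.98858 / 0.08810)
print(round(crossover))  # about 74,691
```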

When N gets larger, Δ gets smaller; but when we multiply this delta times N, this Δ*N factor approaches a specific value as N gets larger (we have an asymptote). What is this value?

Since we defined Δ as:

Δ = C - gamma = [1 + 1/2 + ... + 1/N] - Ln(N) - gamma

We were asked to evaluate Δ*N, which is then:

Δ*N = N * ([1 + 1/2 + ... + 1/N] - Ln(N) - gamma)

I knew what this value was when I posted the question, because it seemed obvious, but I did not know how to prove it.

It is obvious if we simply act like a physicist and plot it:

[Plot: Δ*N as a function of N]

It looks like it zeros in at 1/2 rather quickly. But zooming in, we see that it is asymptotic:

[Plot: Δ*N as a function of N, zoomed in near 1/2]

For every order of magnitude we increase N, the difference between the curve and the value of 1/2 decreases by an order of magnitude.

The short and easy answer is that Δ*N approaches the value of 1/2 asymptotically as N gets large.

And a round of beer to all those that got it! :)

Proof:

I was not sure if I could prove this when I asked the question. I figured that if I could not, I could at least supply the answer and hope that someone else could prove it. But then I remembered the asymptotic expansion of the harmonic numbers (in this case, the standard partial sums of the reciprocals of positive integers). It is:

1 + 1/2 + ... + 1/N = Ln(N) + gamma + 1/(2N) - sum_(k=1..infinity) B_2k/(2k*N^(2k))

where B_2k are the Bernoulli numbers. The remainder term Δ that we are looking for is thus:

Δ = [1 + 1/2 + ... + 1/N] - Ln(N) - gamma = 1/(2N) - sum_(k=1..infinity) B_2k/(2k*N^(2k))

So we need:

Δ*N = 1/2 - sum_(k=1..infinity) B_2k/(2k*N^(2k-1))

Before jumping into that, let me explicitly show a number of terms for the original equation. First, let us solve for gamma:

gamma = [1 + 1/2 + ... + 1/N] - Ln(N) - 1/(2N) + sum_(k=1..infinity) B_2k/(2k*N^(2k))

And expand the sum with Bernoulli numbers to the first four terms:

gamma ≈ [1 + 1/2 + ... + 1/N] - Ln(N) - 1/(2N) + 1/(12N^2) - 1/(120N^4) + 1/(252N^6)

When N gets large, the terms with N to a power above 1 get quite small, so we can use this to calculate gamma to a good amount of precision. Just using the first few terms yields a fairly precise value, and with N up to 100, we equal the value of actual gamma within the [limited] precision of Excel:

[Table table3.jpg: gamma computed from the truncated expansion for N up to 100]

So that is an easy way to calculate gamma. One could also use N of only 10 or even 2, but then more terms would be required.

Back to the value of Δ*N as N gets large. We have:

Δ*N = N * [1/(2N) - sum_(k=1..infinity) B_2k/(2k*N^(2k))] = 1/2 - sum_(k=1..infinity) B_2k/(2k*N^(2k-1))

And frankly, the sum with Bernoulli numbers gets very small as N gets large, and we have just the 1/2 term left.

We have:

Δ*N = 1/2 - 1/(12N) + 1/(120N^3) - 1/(252N^5) + ...

The term with the cubed N will flee from the decimal point three orders of magnitude for every order of magnitude for N; the higher ones will flee even faster. The main term besides the 1/2 constant is the 1/12N term, and it will reduce in order of magnitude for every increase in order of magnitude for N. So, as N -> ∞, Δ*N -> 1/2.

We can work it out for fun:

[Table table4.jpg: Δ*N evaluated for increasing N]

The first few Δ*N are not accurate to the precision shown because we would need to use more terms, but the trend is clear, and one could infer that at N = 1,000,000,000,000, Δ*N would be 0.499999999999916667 to the precision shown.
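The truncated expansion is easy to check against a direct sum; a sketch with just the first few Bernoulli terms hard-coded:

```python
import math

GAMMA = 0.5772156649015329

def H(n):
    """Direct sum 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def H_asym(n):
    """Ln(n) + gamma + 1/(2n) - 1/(12 n^2) + 1/(120 n^4): the Bernoulli
    expansion truncated after the B_4 term (asymptotic, not convergent)."""
    return math.log(n) + GAMMA + 1 / (2 * n) - 1 / (12 * n ** 2) + 1 / (120 * n ** 4)

for n in (10, 100):
    print(n, H(n) - H_asym(n))  # error is on the order of the next term, ~1/(252 n^6)
```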

Have we had enough of "an infinite number of mathematicians walk into a bar" and zeta function puzzles? If not, I can continue. If so, I can think of something else. ;)


Have we had enough of "an infinite number of mathematicians walk into a bar" and zeta function puzzles? If not, I can continue. If so, I can think of something else. ;)

Can I answer both yes and no? Or is that cheating? I could write you up a wavefunction if it means I get to see both more mathematician puzzles and other types of puzzles... ;)


The short and easy answer is that Δ*N approaches the value of 1/2 asymptotically as N gets large.

And a round of beer to all those that got it! :)

Proof:

I was not sure if I could prove this when I asked the question. I figured that if I could not, I could at least supply the answer and hope that someone else could prove it. But then I remembered the asymptotic expansion of the harmonic function (the harmonic function in this case is the standard sum of reciprocals of positive integers). It is:

1 + 1/2 + ... + 1/N = Ln(N) + gamma + 1/(2N) - sum_(k=1..infinity) B_2k/(2k*N^(2k))

Where B2k are Bernoulli numbers.

Hehe yeah. But that leaves open the question of how to prove the asymptotic series for the harmonic series. Your question was basically asking for the O(1/N) term in the asymptotic series, so I thought we wouldn't be allowed to look this asymptotic series up. :)

But now you've mentioned it, why not try and prove the asymptotic series? My previous post contains a proof that the coefficient of 1/N is 1/2. I'm pretty sure the method should generalise without too much difficulty to find the entire asymptotic series.

Any takers? :P

Actually on second thoughts one can do something more general. A neat little result that I did know about but forgot about for the time being is the Euler-Maclaurin formula.

https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula

This should give the whole asymptotic expansion quickly I believe.


Hehe yeah. But that leaves open the question of how to prove the asymptotic series for the harmonic series. Your question was basically asking for the O(1/N) term in the asymptotic series, so I thought we wouldn't be allowed to look this asymptotic series up. :)

Admittedly, this does show how I am different from a mathematician. As a physicist/engineer, if an equation is known, I start there rather than derive it. I was trying to make it easy. ;)

But now you've mentioned it, why not try and prove the asymptotic series? My previous post contains a proof that the coefficient of 1/N is 1/2. I'm pretty sure the method should generalise without too much difficulty to find the entire asymptotic series.

Any takers? :P

Actually on second thoughts one can do something more general. A neat little result that I did know about but forgot about for the time being is the Euler-Maclaurin formula.

https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula

This should give the whole asymptotic expansion quickly I believe.

Thanks for that second spoiler. However, I gave it a try, and it does not look so good.

With the Euler Maclaurin formula given, we can start there. We have:

sum_(n=a..b) f(n) ≈ integral_(a to b) f(x) dx + (f(a) + f(b))/2 + sum_(k=1..m) [B_2k/(2k)!] * (f^(2k-1)(b) - f^(2k-1)(a))

However, I saw it elsewhere as:

sum_(n=a..b) f(n) = integral_(a to b) f(x) dx + (f(a) + f(b))/2 + sum_(k=1..m) [B_2k/(2k)!] * (f^(2k-1)(b) - f^(2k-1)(a)) + R_n(f, m)

That has a remainder term. I tried both. This will use the R term.

We have f(n) = 1/n and f(x) = 1/x.

a = 1; b = N; f(a) = 1; f(b) = 1/N

We need the repeated derivatives of f on the right of the equation. With f(x) = 1/x:

f'(x) = -1/x^2

f''(x) = 2/x^3

f'''(x) = -2*3/x^4

f''''(x) = 2*3*4/x^5

And in general, we have:

f^(k)(x) = (-1)^k * k!/x^(k+1)

So for the terms on the right, we have:

f^(2k-1)(x) = (-1)^(2k-1) * (2k-1)!/x^(2k)

For f(b) = f(N), we have:

f^(2k-1)(N) = (-1)^(2k-1) * (2k-1)!/N^(2k) = -(2k-1)!/N^(2k)

For f(a) = f(1), we have:

f^(2k-1)(1) = (-1)^(2k-1) * (2k-1)! = -(2k-1)!

So f^(2k-1)(b) - f^(2k-1)(a) = (2k-1)! * (1 - 1/N^(2k))

Putting them into the equation gives:

1 + 1/2 + ... + 1/N = integral_(1 to N) (1/x) dx + (1 + 1/N)/2 + sum_(k=1..m) [B_2k/(2k)!] * (2k-1)! * (1 - 1/N^(2k)) + R_n(f, m)

We can simplify the numerators and denominators. We also know that the integral will equal Ln(N). We have:

1 + 1/2 + ... + 1/N = Ln(N) + 1/2 + 1/(2N) + sum_(k=1..m) [B_2k/(2k)] * (1 - 1/N^(2k)) + R_n(f, m)

From the equation that I had in a previous post, which was:

1 + 1/2 + ... + 1/N = Ln(N) + gamma + 1/(2N) - sum_(k=1..infinity) B_2k/(2k*N^(2k))

Re-writing the first gives:

1 + 1/2 + ... + 1/N = Ln(N) + 1/2 + 1/(2N) + sum_(k=1..m) B_2k/(2k) - sum_(k=1..m) B_2k/(2k*N^(2k)) + R_n(f, m)

We see that there are terms in the first that are not in the second. And one (gamma) in the second that is not in the first. If (big if, I suppose) both equations are true (if m is sufficiently high or near infinite), then:

gamma = 1/2 + sum_(k=1..m) B_2k/(2k) + R_n(f, m)

When m is sufficiently high.

However, the sum does not converge. Indeed, the Bernoulli numbers eventually grow faster than the 2k in the denominator, and although the terms alternate in sign, the sum diverges. I could try explicitly working out R_n(f, m).


Hehe yeah. But that leaves open the question of how to prove the asymptotic series for the harmonic series. Your question was basically asking for the O(1/N) term in the asymptotic series, so I thought we wouldn't be allowed to look this asymptotic series up. :)

.

Admittedly, this does show how I am different from a mathematician. As a physicist/engineer, if am equation is known, I start there, rather than derive it. I was trying to be easy. ;)

But now you've mentioned it, why not try and prove the asymptotic series? My previous post contains a proof that the coefficient of 1/N is 1/2. I'm pretty sure the method should generalise without too much difficulty to find the entire asymptotic series.

Any takers? :P

Actually on second thoughts one can do something more general. A neat little result that I did know about but forgot about for the time being is the Euler-Maclaurin formula.

https://en.wikipedia.org/wiki/EulerMaclaurin_formula

This should give the whole asymptotic expansion quickly I believe.

.

Thanks for that second spoiler. However, I gave it a try, and it does not look so good.

With the Euler Maclaurin formula given, we can start there. We have:

fc3cca935b5134008a5d3dd86955f5cd.png

However, I saw it elsewhere as:

EM1.jpg

That has a remainder term. I tried both. This will use the R term.

We have f(n) and f(x) = 1/N

a = 1; b = N; f(a) = 1; f(b) = 1/N

The right-hand side involves repeatedly differentiated functions. With f(x) = 1/x:

f'(x) = -1/x^2

f''(x) = 2/x^3

f'''(x) = -2*3/x^4

f(4)(x) = 2*3*4/x^5

And in general, we have:

f(k)(x) = (-1)^k * k!/x^(k+1)

So for the odd-order terms on the right, we have:

f(2k-1)(x) = (-1)^(2k-1) * (2k-1)!/x^(2k)

For f(2k-1)(b) = f(2k-1)(N), we have:

(-1)^(2k-1) * (2k-1)!/N^(2k) = -(2k-1)!/N^(2k)

For f(2k-1)(a) = f(2k-1)(1), we have:

(-1)^(2k-1) * (2k-1)!/1^(2k) = -(2k-1)!

So f(2k-1)(b) - f(2k-1)(a) = (2k-1)! * (1 - 1/N^(2k))
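Before plugging these in, here is a quick finite-difference sanity check of that general derivative formula (a throwaway Python sketch; the test point x0 = 2 and the step h are arbitrary choices of mine):

```python
import math

def fk(x, k):
    # claimed k-th derivative of f(x) = 1/x, namely (-1)^k * k! / x^(k+1)
    return (-1) ** k * math.factorial(k) / x ** (k + 1)

# each claimed derivative should match a central difference of the previous one
h, x0 = 1e-6, 2.0
for k in range(5):
    numeric = (fk(x0 + h, k) - fk(x0 - h, k)) / (2 * h)
    assert abs(numeric - fk(x0, k + 1)) < 1e-4
```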

Putting them into the equation gives:

EM2.jpg

We can simplify the numerators and denominators. We also know that the integral will equal Ln(N). We have:

EM3.jpg

From the equation that I had in a previous post, which was:

EM4.jpg

Re-writing the first gives:

EM5.jpg

We see that there are terms in the first that are not in the second. And one (gamma) in the second that is not in the first. If (big if, I suppose) both equations are true (if m is sufficiently high or near infinite), then:

EM6.jpg

When m is sufficiently high.

However, the sum does not converge. Indeed, the Bernoulli terms eventually grow by roughly an order of magnitude (or more) each time k increases by 1, and although they alternate in sign and are divided by an increasing number, the series does not converge. I could try explicitly working out Rn(f,m).

Yeah the thing is that if you use EM between 1 and n then the remainder R_n is not O(1/N^n) (you can write it down as an integral to see this), so it doesn't give you an asymptotic series. It's true that the remainder goes to 0 in the limit, but both the sum and the constant diverge. Indeed the series we're aiming for is only an asymptotic series, not a convergent series.

(Also should your lower sum limit be a+1 not a in the statement of EM?)

Anyway I think the following is more promising. Like in my earlier post, what you need to do is to estimate the tail of the series. How does that help? Well (this is basically what I had in my original post but written more clearly now I thought about it more....)

Let M > N. Then consider:

sum_(n=N+1...M) 1/n - integral_(N to M) 1/x dx

Well if H_n is the nth harmonic number then the first term - the sum - is H_M-H_N. The second term - the integral - is log(M)-log(N). So this works out as:

(H_M - H_N) - (log M - log N)

= (H_M - log M) - (H_N - log N)

Thus, taking the M->infinity limit while leaving N fixed, the first term on the right, H_M - log M, tends to gamma, and so we get:

lim(M->inf) [sum_(n=N+1...M) 1/n - integral_(N to M) 1/x dx] = gamma - (H_N - log N)

so that

(H_N - log N - gamma) = -lim(M->inf) [sum_(n=N+1...M) 1/n - integral_(N to M) 1/x dx]

Now the gamma is already taken care of on the left-hand side, and you should be able to use EM on the thing inside the square brackets - it's a sum minus an integral. You can either be careful: write out the Euler-Maclaurin sum with a remainder, show the remainder is bounded by O(1/N^k) independently of M > N, and then take the M->infinity limit. Or you can throw caution to the wind slightly: write out the EM sum in its infinite form, hoping the remainder can be ignored, and then hope you can take M->inf term by term. (You can, but the first suggestion is what really justifies it.)
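As a numerical sanity check of the target expansion itself (a quick Python sketch; gamma is hard-coded, and the coefficients 1/2, -1/12, 1/120 are the standard Euler-Maclaurin ones):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant, hard-coded

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n, summed carefully with fsum
    return math.fsum(1.0 / k for k in range(1, n + 1))

# H_N ~ log N + gamma + 1/(2N) - 1/(12 N^2) + 1/(120 N^4) - ...
N = 1000
approx = math.log(N) + GAMMA + 1 / (2 * N) - 1 / (12 * N**2) + 1 / (120 * N**4)
print(abs(harmonic(N) - approx))  # far below the size of the last term kept
```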

Anyway let me know if that works if you wanna try it. :)

Link to post
Share on other sites
  • 2 weeks later...

Announcement:



I will be away from now until August 30th. 'Tis time for a vacation ;)



As such, if you need anything, please PM one of the two awesome admins who have generously offered to cover for me in my absence. Both Lady Girl and +Pookzar are able to do anything that I would normally do as a moderator. Also, be patient if I haven't answered a PM or thread; I've been busy getting ready and wrapping everything up :)



See everyone in two weeks! Don't burn the place down, play nice, and all that good stuff :cake:



Yours,


Heart


Gender Discussion Moderator


Link to post
Share on other sites
Calligraphette_Coe

Happy Vacation, Heart! Have a positively lovely time. We promise not to move the furniture while you're gone. Well, at least not too much. :)

While we're talking about math, I found another one of *those* books yesterday on math that I always end up loving. This one is entitled 'How Not To Be Wrong: The Power of Mathematical Thinking'. So far, I'm enjoying it as much as I did one of my favorites from a few years ago entitled "Conned Again, Watson!: Cautionary Tales of Logic, Math and Probability".

Oh! And a tip of the beret to all you math folks who make my job easier by coming up with systems of equations and algorithms used in computer graphics such as Solidworks that make short work of rendering isometric third angle projection representations of real building blocks. And then allow me to fit them together in VR space to see if they bump up against the preset constraints of a whole system and sort out problems with geometric tolerancing.

Link to post
Share on other sites
littlepersonparadox

I'll have to come back to this forum more often. I'm in the midst of finals so I can't try your little math puzzles, but I'll do my best and see if I can find one for high school students and those of us like me who haven't taken calculus yet. (I soooo want to now tho.)

Link to post
Share on other sites

... Anyway let me know if that works if you wanna try it. :)

.

I am still trying. My first attempt failed, and I tried with your new hints, but am not certain that all is well. I will try again when I have more free time.

For now...

An infinite number of mathematicians walk into a bar.

Since St Paddy's day was a few days away, the Hilbert Hotel hired yet another bartender, so now they have four. We still have Rodd on the left, Pasquare next, then Steven, and the new one (a private investigator whose name is an unknown) is to the right of Steven. Each mathematician walks in and takes a number. They then queue from left to right and repeat. So, the first goes to Rodd, the second to Pasquare, the third to Steven, and the fourth to he-who-must-not-be-named. The fifth goes to Rodd and the process repeats.

This means that Rodd will serve odd odd mathematicians 1, 5, 9, 13 and so on, and Steven will serve even odd math geeks 3, 7, 11, 15 and so on.

Each mathematician will be served a pottle (that is equal to a half gallon, or four pints) divided by their number. So, Rodd will serve the first patron 4 pints. Steven will serve his first imbiber 4/3 pints. Rodd will serve his second math lover 4/5 of a pint, Steven will serve a 4/7 pint portion, and so on.

Like usual, what is the difference in the amounts between what Rodd poured and what Steven poured in pints (4 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + ...)?

This should be easy. So please use spoilers; I will have more info on this soon. :)

Link to post
Share on other sites

Due to the overwhelming response to this puzzle, perhaps no more mathematicians in bars puzzles. :P

Anyway, here is a way to solve the latest:

The formula for summing 4 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + ...

Is:

sum1.jpg

One could solve this like a physicist. Here are the terms up to the 100,000th round (equivalent to going up to n = 50,000 in the series above because half of the rounds, those being served by Pasquare and the P.I., are ignored):

plot1.jpg

It appears to be asymptotic, and the line drawn in, to which it appears to converge, is pi. Indeed, I gave a few hints, including it being a few days before Saint Patrick's day (which is 17 March, and Pi Day is 14 March in the USA), as well as the new bartender being a Private Investigator (PI for short).

At the 100,000th round, this gives a sum of about 3.14157, accurate to the first four decimal places. Using Mathematica and summing to 1,000,000, I get 3.141591, accurate for the first five decimal places.
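Those numbers are easy to reproduce without Mathematica; here is a quick Python sketch of the same partial sums (100,000 rounds correspond to 50,000 served terms, as noted above):

```python
import math

def leibniz(terms):
    # partial sum of 4 - 4/3 + 4/5 - 4/7 + ...
    return math.fsum((-1) ** n * 4.0 / (2 * n + 1) for n in range(terms))

print(leibniz(50_000))  # about 3.14157, just shy of pi
```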

One can be more creative with this data, and plot the data against the reciprocal of the round number. Using data from rounds 10,000 to 100,000, that gives:

plot2.jpg

We have a straight line with a slope that would likely be exactly -2 and a y-intercept (where the sum should be at an infinite number of terms) of 3.141592654, which is accurate to nine digits.
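That straight-line extrapolation can be mimicked in a few lines: take two partial sums, fit s = intercept + slope * (1/round) through them, and read off the intercept. A sketch (the term counts are my arbitrary choices, taken even so the error keeps one sign):

```python
import math

def leibniz(terms):
    # partial sum of 4 - 4/3 + 4/5 - 4/7 + ...
    return math.fsum((-1) ** n * 4.0 / (2 * n + 1) for n in range(terms))

n1, n2 = 5_000, 10_000               # even term counts
x1, x2 = 1 / (2 * n1), 1 / (2 * n2)  # 1/round, with round = 2 * terms
s1, s2 = leibniz(n1), leibniz(n2)

slope = (s1 - s2) / (x1 - x2)  # should come out close to -2
intercept = s2 - slope * x2    # should come out very close to pi
print(slope, intercept)
```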

One can also do something similar for the time that the difference between Rodd and Steven was Ln(2):

Making straight lines of the highs and lows, we see that they converge at the y-intercept. Here, we only used data from n = 1,000 to 10,000.

plot3.jpg

When I average the two intercepts that are given for each line, I get 0.69314718053, which is accurate to ten digits of Ln(2). If I used data from 10,000 to 100,000, we could likely gain an extra digit of accuracy. The sum itself, up to 10,000 was only accurate to four digits.
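The highs-and-lows trick has an even cheaper cousin, assuming the earlier Rodd-and-Steven Ln(2) series was the alternating harmonic series 1 - 1/2 + 1/3 - ...: consecutive partial sums give one high and one low, and averaging them cancels the leading error. A sketch:

```python
import math

def alt_harmonic(n):
    # partial sum of 1 - 1/2 + 1/3 - 1/4 + ...
    return math.fsum((-1) ** (k + 1) / k for k in range(1, n + 1))

# consecutive partial sums bracket Ln(2); their average kills the O(1/n) error
n = 10_000
avg = (alt_harmonic(n) + alt_harmonic(n + 1)) / 2
print(avg, math.log(2))  # agree to roughly eight decimal places
```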

But, why does 4 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + ... = pi?

To find out why, we would want to find an expression of pi that we can expand. Well, we know that the tangent of π/4 is 1 (because both legs of the right triangle here are equal). We can take the inverse of that:

Arctan(1) = π/4.

We can expand the arctan function by the Maclaurin series:

NumberedEquation1.gif

It looks like we need to take a lot of derivatives. Let f(x) = arctan(x). We have:

functions1to7.jpg

Next, we need to evaluate those at x = 0, and when we do, we see that the denominators all go to 1. Half of the numerators go to zero; the other half become integers.

We now have:

f'(0) = 1

f''(0) = 0

f'''(0) = -2

f(4)(0) = 0

f(5)(0) = 24

f(6)(0) = 0

f(7)(0) = -720

We can now put them into the Maclaurin series (and realizing that the first term would be arctan(0) which is zero):

sum2.jpg

We can simplify the numerators and denominators, and while at it, remember that we will evaluate the arctan where x = 1:

sum3.jpg

We have established that arctan(1) = π/4, so:

sum4.jpg

And that is what we were looking for.

:cake: to those who got it. :)

A future post may show other ways to derive this number.

Link to post
Share on other sites
Calligraphette_Coe

Due to the overwhelming response to this puzzle, perhaps no more mathematicians in bars puzzles. :P

An infinite number of mathematicians having pi bakeoffs instead? Or in my case, a way to serve an infinite number of managers, each wanting me to solve only *their* problem before the next Big Bang and not seeing that as being a problem because 'I'm only asking you to solve one problem and I got here first'.

As to why pi was the answer..... "rounds" ? :P

Link to post
Share on other sites

Actually this one is a special case of one seen earlier.


In post #120 I showed that if N is an odd positive integer then the alternating sum

1/1^N - 1/3^N + 1/5^N - ...

is equal to

(-1)^(N-1)*pi^N / [(N-1)!4^N] * cot^(N-1)(pi/4).

(Out loud, the last factor on the right is "the (N-1)th derivative of cot evaluated at pi/4".)

The 0th derivative of cot is just cot, and evaluated at pi/4 it's 1/tan(pi/4) = 1.

So plugging in N=1 we get

1/1 - 1/3 + 1/5 - 1/7 + ... = (-1)^0 pi^1 / [0! 4^1] * 1 = pi/4.
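For what it's worth, the next odd case checks out numerically too. For N = 3 the second derivative of cot is 2 csc^2(x) cot(x), which is 4 at pi/4, so the formula gives 1 - 1/3^3 + 1/5^3 - ... = pi^3/32. A quick sketch:

```python
import math

# partial sum of 1 - 1/3^3 + 1/5^3 - ...; converges fast (terms shrink like 1/k^3)
s = math.fsum((-1) ** k / (2 * k + 1) ** 3 for k in range(100_000))
print(s, math.pi ** 3 / 32)  # agree to better than 12 decimal places
```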





The way Kelly outlines is the most elementary though. If you want to derive the Taylor series of arctan, it's easiest to start with the geometric series

1/(1+z) = 1-z+z^2-z^3+... for |z|<1.

Replace z with z^2

1/(1+z^2) = 1-z^2+z^4-...

Integrate:

arctan z = z - z^3/3 + z^5/5 - ... for |z|<1.

(The constant of integration is zero by setting z=0.)

Then you can take the limit z->1- and use Abel's theorem (https://en.wikipedia.org/wiki/Abel's_theorem) to swap the limit and sum.

This is what Kelly did, just fleshed out a bit more.

This series is very slowly converging though - not very useful for calculating pi.
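A small sketch makes the contrast concrete: inside |z| < 1 the series converges geometrically, while at z = 1 it crawls (the term counts here are arbitrary):

```python
import math

def arctan_series(z, terms):
    # partial sum of z - z^3/3 + z^5/5 - ...
    return math.fsum((-1) ** k * z ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# 60 terms at z = 1/2: essentially exact
print(abs(arctan_series(0.5, 60) - math.atan(0.5)))
# the same 60 terms at z = 1: only about two correct decimals of pi/4
print(abs(arctan_series(1.0, 60) - math.pi / 4))
```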

Link to post
Share on other sites

Awesome, michaeld. Your solutions are always so much more elegant.

Link to post
Share on other sites

Hey all!!

So, I know I'm not supposed to be back until tomorrow, but I'm chilling at an airport in a layover, and shaking in withdrawal from AVEN. So I'm back for a short while. Give me all the gossip!

How is everyone? Any new revelations about your gender, or gender in general? How are the various transitions going?

Give me a few days to catch up on all that I've missed, and I'll get right back at those puzzles :D I'm glad a few popped up while I was gone ;)

*wanders off to sit in a chair, lands on the floor because it was moved from where I remember, but only by a bit. Cheers Zen <3

Link to post
Share on other sites

Welcome back!!! :cake:

I was supposed to be away for a week ... but... the villa we were staying at unexpectedly had limited internet access so I didn't have to withdraw from AVEN after all...

No, I'm still cisgender. :P

How was the time away?

Link to post
Share on other sites

Well..... Don't tell anyone, but I actually had internet access twice while I was gone too, but I decided to stay strong and only check cute horse videos ;)

It was actually amazing. For those of you who don't know, I have a sibling who recently graduated with an undergrad degree, so my dad, who is awesome, and his girlfriend, who is exactly as awesome, decided to get a vacation for the four of us anywhere in the world we chose as a belated grad present for me, and a timely one for the sibling :D So we chose Belize. And oh my goodness, I HAVE NO REGRETS! It was so chill, and everyone there was friendly. Snorkelling, sight-seeing, Mayan ruins exploring, even *gasp* riding a horse through the jungle to a nice river to go swimming in!!

I think I may be in heaven. Two weeks of heaven followed by returning to my awesomesauce community of awesomeness of you guys :)

Also, in recent news, I was perusing the internets catching up on news, and apparently awesomesauce is now in the Oxford English Dictionary. So I now feel justified in using it ;)

How was your week with limited internet access Mic?

Link to post
Share on other sites
Calligraphette_Coe

*wanders off to sit in a chair, lands on the floor because it was moved from where I remember, but only by a bit. Cheers Zen <3

Oh, that! Sorry 'bout that, but we're having a bit of trouble with our Roomba 650 robotic vacuum. We think a piece of cake skittered under the chair, and the Roomba sensed it but couldn't get to it on the first 100 tries, so it kept headbutting the chair until it could, moving it out of its normal position.

There's a new cake in the 'fridge, BTW! Welcome home!

Link to post
Share on other sites
nerdperson777

Cake in the fridge? That reminds me I have to post some pics in the photo thread! There should be a cake coming, unless I forgot that I posted it.

And with my gender stuff, I think my mom is coming to terms with me better while my dad is just a stubborn mule.

Link to post
Share on other sites

Awesomesauce, everyone!

Heart went to Belize!

And michaeld lived in a villa!

And cake!

:wub:

Link to post
Share on other sites

Archived

This topic is now archived and is closed to further replies.
