I was looking at the data mine that I have started to accumulate on the 196 palindrome quest, and then started wondering if there was a correlation between the iteration count and the resulting digit length. Let me tell you the answer.....
The ratio of iterations to digits is 2.4 to 1.
Now, again, you must remember that I am a bit slow, so I don't understand WHY there is an almost exact ratio, no matter how many iterations or digits, but there is.
What I did was drop some of the major digit milestones into a spreadsheet, along with their corresponding iteration counts. Then I plotted the numbers on a chart, and stared in amazement. I was prepared for curves, either arcing up or drooping down. I was even prepared for a zig-zag line that went all over the place. But I was NOT prepared for a razor-straight line that went up to the right!!
Then I divided the two numbers, and saw that it was a ratio of about 2.4 every time.
If you go out to, say, 4 decimal places, it may be 2.4151 or 2.4159 or 2.4161. But the base 2.4 to 1 is always there.
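If you want to see this for yourself without a spreadsheet, here is a quick sketch in Python (my own illustration, not part of the actual search program) that runs the reverse-and-add process from 196 and checks the iteration-to-digit ratio:

```python
# Sketch: reverse-and-add from 196, then compare iterations to digit length.
# Illustrative only -- the real quest runs millions of iterations with
# custom big-number code; Python's built-in big integers are plenty here.

def reverse_and_add(n):
    """One iteration: add a number to its own digit reversal."""
    return n + int(str(n)[::-1])

n = 196
iterations = 5000
for _ in range(iterations):
    n = reverse_and_add(n)

digits = len(str(n))
print(f"{iterations} iterations -> {digits} digits, "
      f"ratio = {iterations / digits:.4f}")
```

Even a few thousand iterations already land the ratio near 2.4; the millions-long runs just pin down more decimal places.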
Next, I plotted random points from the current run of 14-18 million. What I got was the chart below:
The data points for the above chart look like this:
And again, a 2.41XX to 1 rise rate. I'm going to have to think about this for a while. Maybe I'll understand it sooner or later. I mean, I understand that it takes longer and longer for each iteration to complete its calculation, but this is not dealing with time. This is dealing with 2.41 iterations being required to add 1 new digit to the length.
I'll get back to you if I figure something out, or if someone else explains it to me.
UPDATED: Jason and I talked about this. I have to admit that I am a bit embarrassed about not seeing this coming. Let me explain by just quoting Jason's note....
I expected the ratio to be this consistent. I think the sheer speed and largeness of the numbers are confusing your common sense. Had we only looked at the first 100 iterations, then your common sense would be right - the line would have its ups and downs. BUT, since we are looking at millions and millions of additions, the laws of probability set in, and what you see is the average.
Compare it to tossing a coin (1 for heads and 0 for tails). It will have ups and downs as you keep a running average. But if you only check the average every million tosses, well then, you'll have an answer of 0.5 accurate to 4 or 5 digits, just like you do here. Given random numbers, a certain percentage of them will result in a sum that is one digit longer (when added to its reversal), and a certain percentage won't. The 'ratio' you have calculated is very close to this probability (just like your theoretical 0.49998738 would be for the coin toss). And until you calculate the exact probability, this ratio is as close to it as you will ever get. Whatever the program's last iteration is, use its iteration-to-digit ratio, and that's more accurate than any of the preceding iterations - in other words, we are approaching the real ratio value with each new iteration.
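Jason's coin-toss comparison is easy to see in a simulation. Here is a little sketch of my own (the seed value is arbitrary, just there so the numbers repeat):

```python
# Sketch of the coin-toss analogy: the running average of a fair coin
# (1 = heads, 0 = tails) wanders early on, but over a million tosses
# it settles to 0.5 out to several decimal places.
import random

random.seed(196)  # arbitrary seed, for reproducibility
heads = 0
checkpoints = {100, 10_000, 1_000_000}
for i in range(1, 1_000_001):
    heads += random.randint(0, 1)
    if i in checkpoints:
        print(f"{i:>9} tosses: average = {heads / i:.5f}")
```

The 100-toss checkpoint can land noticeably off 0.5; the millionth barely moves. That is exactly why the iteration-to-digit line looks razor straight at the scale of the charts above.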
Once I thought about it like this, I almost smacked myself in the head!! When I went back to Excel and changed the display out to 10 decimal places, the differences in the calculations became much more apparent. Oh well. Communication keeps me learning....
You can look at the Data Sets for the specific ratios of each one.
Overall, it looks like this: