It depends on what you're using to 'contain' that number, and whether you're trying to contain it at all.
For example, in computing and simulation there are types called float and double, each with its own level of precision, and both lose accuracy at very large and very small magnitudes. So a result that should be 5 might actually be 'contained' as 4.999999999999999, but with rounding it's displayed as you see it - 5.
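Here's a quick sketch of that rounding in Python (whose built-in float is a 64-bit double) - nothing specific to any one simulator, just the general idea:

# Each 0.1 picks up a tiny representation error, so a sum that
# "should" be exactly 1.0 comes out a hair short.
total = sum([0.1] * 10)
print(total == 1.0)       # False
print(f"{total:.20f}")    # 0.99999999999999988898...
print(f"{total:.2f}")     # 1.00 - display rounding hides the error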
Now, algebraically, you're replacing placeholders with numbers. For instance, in the Pythagorean theorem, where a^2 + b^2 = c^2 (why they're still calling it a theorem I will never understand), you'll eventually replace a and b with actual numbers - say a = 3 and b = 4, which gives c = 5.
So the question is - what are YOUR boundaries for these numbers within the system you're working in? There's no one-size-fits-all limit on what you can substitute into an equation, and there's no real limit to the number of decimal places you can write out when working with numbers on paper.
For instance, if I take PI out to the 256th decimal place:
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485
And then store that in something called a Single variable - a 32-bit (4-byte) floating-point type in Visual Basic 6.0 - here's what I get:
3.141593
Notice the extreme rounding?
But let's say I take that same PI value above and shove it into a Double:
3.14159265358979
Quite a few more places are preserved there, right?
One's 32-bit (4-byte) storage, the other's 64-bit (8-byte).
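I don't have VB6 handy to paste actual code here, but you can reproduce roughly the same experiment in plain Python, round-tripping the value through a 4-byte IEEE 754 float to stand in for a Single (Python's own float is already a 64-bit double). The exact digits you see depend on how each environment rounds for display:

import struct

pi_long = 3.1415926535897932384626433832795028841971693993751  # more digits than either type can hold

as_single = struct.unpack("f", struct.pack("f", pi_long))[0]   # squeezed through 32 bits
as_double = pi_long                                            # kept at 64 bits

print(f"{as_single:.16f}")   # 3.1415927410125732 - only ~7 digits are meaningful
print(f"{as_double:.16f}")   # 3.1415926535897931 - roughly 15-16 digits are meaningful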
So while I can extend PI out to a trillion digits, there's a problem when trying to actually use that number in a simulated environment, and it comes from rounding. Even as you move to larger variables (64-bit and beyond) to store these decimal-based numbers, you'll still get rounding and potential accuracy issues at the highest and lowest magnitudes.
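Here's what that looks like at the high end, again sketched with a 64-bit Python double - above 2^53 the type can no longer tell neighbouring whole numbers apart:

big = float(2**53)      # 9007199254740992.0
print(big + 1 == big)   # True - the added 1 is lost to rounding
print(big + 2 == big)   # False - 2 is the gap between adjacent doubles here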
So conceivably, you can extend your decimals out infinitely. There's no limit, which is why infinity is what's known as an abstract concept. The limitations occur when you prepare those numbers for storage and interaction with other variables. While it's fine and dandy having PI run out to infinitely many potential decimal places, there's no programming language or piece of hardware where all of those digits are actually usable - arbitrary-precision libraries can push far beyond a double, but they're still finite.
yet.
Which certainly has me questioning: how did they figure out PI mathematically to that degree, or did they just put a monkey on a keyboard and have it type numbers out?
Is there some form of math that allows that, something still undiscovered, or just a really bored monkey?
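For what it's worth, no monkey was needed: PI's digits come from infinite series that mathematicians have known for centuries. Machin's formula from 1706, pi/4 = 4*arctan(1/5) - arctan(1/239), is a classic, and modern record runs use faster relatives like the Chudnovsky algorithm. Here's a rough sketch of the Machin approach in Python using the standard decimal module - the function names are mine, not from any library:

from decimal import Decimal, getcontext

def arctan_recip(x, digits):
    # arctan(1/x) via its Taylor series: 1/x - 1/(3x^3) + 1/(5x^5) - ...
    eps = Decimal(10) ** -(digits + 5)
    power = Decimal(1) / x          # holds 1 / x^(2k+1)
    total = power
    k = 1
    while power > eps:
        power /= x * x
        term = power / (2 * k + 1)
        total += term if k % 2 == 0 else -term
        k += 1
    return total

def pi_to(digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    getcontext().prec = digits + 10     # work with guard digits
    pi = 16 * arctan_recip(5, digits) - 4 * arctan_recip(239, digits)
    getcontext().prec = digits + 1      # one leading '3' plus `digits` decimals
    return +pi                          # unary plus re-rounds to the new precision

print(pi_to(256))   # pi to 256 decimal places (the final digit is rounded, not truncated)

Crank the digits argument up and it will happily hand back thousands of decimal places - still finite, but far beyond what a Double can hold.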