Post by Arlon10 on May 10, 2020 20:33:24 GMT
Secondly, even if they were exact, no one has that many digits. Most calculators do not store more than 15 digits; less expensive ones store only 8. Microsoft Excel uses at most 15 significant digits. I wrote a program that can do 100-digit calculations, but I knew full well at the time that it was only good for philosophical investigations into the properties of numbers. No real-world measurement is that precise. You are talking about an even larger number of digits. No matter how small you write, you cannot fit that many digits on a ream of paper. Even Watson the supercomputer cannot work with that many digits. It then becomes irrelevant what that many digits might "equal," does it not? In any actual use of mathematics for real-world computation, it is not a matter of exactitude but of necessary precision; for example, ±10^-8 is beyond anything you will likely need.
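To make that concrete, here is a minimal sketch of the kind of high-digit arithmetic I mean, in Python. The decimal module and its precision setting are standard library facts; the 100-digit setting just mirrors the program I described, and the 10^-8 tolerance is the example figure above:

[code]
from decimal import Decimal, getcontext

# A standard IEEE-754 double carries roughly 15-17 significant
# decimal digits -- the same ceiling Excel and most calculators hit.
print(1 / 3)                      # 0.3333333333333333 (about 16 digits)

# The decimal module lets you raise the working precision, e.g. to
# 100 significant digits, for "philosophical" investigations only.
getcontext().prec = 100
third = Decimal(1) / Decimal(3)
print(third)                      # 100 threes after the decimal point

# No physical measurement justifies this: a tolerance of 1e-8 already
# exceeds what almost any real-world computation requires.
tolerance = Decimal("1e-8")
print(abs(third - Decimal("0.33333333")) < tolerance)  # True
[/code]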
You may, of course, use any notation you wish, as long as you and your associates understand each other.
The difference is described as "infinitesimal," which must be larger than zero because there must always be (an "infinite" process, remember) an even smaller number before reaching zero. For the sum to equal 1, the difference would have to actually be zero, which it never is. While the difference is indeed negligible in all real-world applications, that does not mean there is an actual equality. No, mathematicians do not confuse an ongoing process with a fixed quantity; that mistake belongs to people who merely fancy themselves mathematicians or scientists. If "infinitesimal" means the "same" thing as "zero," why not just say zero?
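Here is a small sketch, again in Python, of the ongoing process I am describing, using the standard fractions module for exact arithmetic. It shows only the finite partial sums 0.9, 0.99, 0.999, ... and the exact gap below 1 at each step; what the completed process equals is the very point in dispute:

[code]
from fractions import Fraction

# Partial sums of 9/10 + 9/100 + 9/1000 + ...: 0.9, 0.99, 0.999, ...
# Exact rational arithmetic shows the gap below 1 at each finite step.
s = Fraction(0)
for n in range(1, 11):
    s += Fraction(9, 10**n)
    gap = 1 - s
    print(f"n={n:2d}  sum={float(s):.10f}  gap=1/{gap.denominator}")

# At every finite n the gap is exactly 1/10**n: positive, shrinking,
# and never zero at any finite stage of the process.
[/code]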

