The debate "0.999... = 1" was started on January 29, 2018, at 4:07 pm.
15 people are on the agree side of this discussion, while 23 people are on the disagree side.
That might be enough to gauge the common perception.
It looks like most people are against this statement.
Thaoss posted 12 arguments and Nemiroff posted 3 arguments on the agree side.
Nemiroff posted 8 arguments and IHateMyLife posted 1 argument on the disagree side.
Nemiroff, Thaoss, NPW and 12 visitors agree.
IHateMyLife, neveralone, DrMrDaniel, Against_eu, mzidanr and 18 visitors disagree.
cool. I knew about the proof but thanks anyway.
that's what I thought, but if you Google the procedure to turn a repeating decimal into a fraction (I know we all know that 0.333... = 1/3, but there's an actual proof for it using algebra) and then apply it to 0.999..., you literally get exactly 1. It's very counterintuitive, but the math is precise. I'll copy-paste the proof as I presented it earlier, but you can Google "how to turn a repeating decimal into a fraction" for a full explanation.
x = 0.999...
10x = 9.999...
10x - x = 9.999... - 0.999...
9x = 9
x = 9/9 = 1
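The same procedure generalizes: a purely repeating block of k digits after the decimal point equals that block over 10^k - 1 (the "multiply by a power of 10 and subtract" trick above, done once and for all). A minimal Python sketch, using exact rational arithmetic; the function name is mine, not from the thread:

```python
from fractions import Fraction

def repeating_to_fraction(repeating_block):
    """A block of k digits repeating forever after the decimal point
    equals block / (10**k - 1): e.g. 0.121212... = 12/99 = 4/33."""
    k = len(repeating_block)
    return Fraction(int(repeating_block), 10**k - 1)

print(repeating_to_fraction("3"))    # 1/3  (0.333...)
print(repeating_to_fraction("12"))   # 4/33 (0.121212...)
print(repeating_to_fraction("9"))    # 1    (0.999...)
```

Note that for "9" the formula returns 9/9, which `Fraction` immediately reduces to 1, which is exactly the point of the proof above.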
0.999999999999... isn't for sure equal to 1, but since it's numerically closest to 1, we round off every other value after the decimal point to get 1.
Rounding off decimal figures makes many things simple and way too easy. For example, derivations of laws in physics, or the number of moles in chemistry, etc.
I didn't mean that the proof isn't simple; I meant that understanding why the proof is valid is harder than everybody thinks.
"I'm not sure what extra rules are needed for 9.999... - 0.999...; it seems like basic arithmetic, only slightly philosophically more complex than 1.54 - 0.54. All infinity of the 9s after the decimal have a counterpart 9 being subtracted"
The extra rule is that adding or subtracting convergent series term by term is equivalent to adding or subtracting their sums, but whatever, let's not get into details.
I see what you mean, but it's hard to explain my point, so I'll give an example.
If I told you:
"1 = 1 - (1-1) - (1-1) - ...
It's simple arithmetic"
Would you believe me?
Well, by "using" the algebra you meant something like doing this:
"1/3: in Euclidean division I get 0 and rest 1, so the result starts with 0.
Now 10/3: I get 3 and the rest is 1, so 0 + 0.3 -> 0.3
10/3: I get 3 and rest 1, so 0.3 + 0.03 -> 0.33
over and over again, and you get 0.333..."
I'll give you another way of doing the algebra: I just take the quotient found in the Euclidean division, minus 1 (unless it's 1 or 0):
"1/3: I get 0, rest 1 --> 0.
10/3: I get 2, rest 4 --> 0.2
40/3: I get 12, rest 4 --> tricky here, 0.2 + 0.12 = 0.32
40/3: I get 12, rest 4 --> 0.32 + 0.012 --> 0.332
And again and again, 0.333..." and as the 2 always moves further right, I don't write it anywhere.
Another one to convince you it works:
1/6: get 0, rest 1 -> 0.
10/6: get 1, rest 4 -> 0.1
40/6: get 5, rest 10 -> 0.15
100/6: get 15, rest 10 -> 0.165
100/6: get 15, rest 10 -> 0.1665
etc., etc.; forget the 5 because it's always at the far right, and 1/6 = 0.1666...
Now for 1/1:
1/1: get 0, rest 1 -> 0.
10/1: get 9, rest 1 -> 0.9
10/1: get 9, rest 1 -> 0.99
10/1: get 9, rest 1 -> 0.999
again and again, so 1/1 = 0.999...
Here it is: I've created an algebra of division that gives a good answer for every rational number, and for 1/1 it gives infinitely repeating 9's.
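The modified long division above can be sketched in Python. An equivalent way to phrase the "quotient minus 1" trick: at each step, take the largest digit that leaves a strictly positive remainder, so the remainder never hits zero and every rational gets an infinite expansion. The function name and formulation are mine:

```python
def nonterminating_decimal(a, b, digits=10):
    """Long division for a/b (0 < a <= b) that never lets the remainder
    reach zero: pick d = (10*r - 1) // b, the largest digit leaving a
    POSITIVE remainder, instead of the usual 10*r // b."""
    assert 0 < a <= b
    r = a
    out = []
    for _ in range(digits):
        d = (10 * r - 1) // b      # largest d with 10*r - d*b > 0
        out.append(str(d))
        r = 10 * r - d * b
    return "0." + "".join(out)

print(nonterminating_decimal(1, 3))   # 0.3333333333
print(nonterminating_decimal(1, 6))   # 0.1666666666
print(nonterminating_decimal(1, 1))   # 0.9999999999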
I wouldn't call that proof complicated in any way. It was the most basic form of algebra, likely junior-high-level math. However, I don't disagree with your assessment (unfortunately).
I'm not sure what extra rules are needed for 9.999... - 0.999...; it seems like basic arithmetic, only slightly philosophically more complex than 1.54 - 0.54. All infinity of the 9s after the decimal have a counterpart 9 being subtracted, eliminating all infinity of them. The only difficulty is the infinity aspect, not the actual math.
The state of education in this nation is very sad. We have lost math competitions to 3rd-world nations in Africa! Math is seen as boring even though it's essentially just a bunch of puzzles. People confuse science with technology and can't even properly define the word or describe the basic process. And critical thought is far too rare. Even worse, there is a distrust of "experts/elites", with elite being defined not as the super wealthy and power brokers, but as anyone with a bachelor's trying to explain basic economics or any other field.
Back to the math: I'm not exactly following the new equation. I don't think the "..." infinite repetition is necessary for it, and I'll have to decode it on paper later, unless you don't mind breaking it down a bit with fewer "..."s if possible.
Also, given the exact proof of 0.999... = 1, I'm baffled by the symbol argument. Perhaps that was just an oversimplification for simplicity's sake?
The thing about B was that I didn't think one could write 0.999... as an exact fraction.
I had to look up the definition of rational numbers after your premises, and found that they include all repeating decimals and can all be written as a fraction... at which point I had to look up how one can possibly write them as a fraction, and was amazed to find that proof demonstrated by an article that had nothing to do with 0.999... but still solved for it perfectly. Prior to that research I would have said no, you could not write 0.999... as a fraction, especially not one with equal parts.
Regarding C, I found the proof using algebra, but how would one get to 0.999... from 1/1 using any form of division?
The x = 0.999... ; 9x = 9 proof is nice to see, and correct indeed.
I just don't like to use it because people usually don't know all the conditions needed for it to work. And a proof someone doesn't understand is useless.
Such as: what tells us we can do 9.999... - 0.999... term by term?
The rules implying it's consistent to do that are subtle.
And in the end, it would take more work to prove that this "9x = 9" proof is correct than it takes to just find the value of 0.999...
I think that's a problem with the perception of mathematics (especially in the USA, where I find people very overconfident). From the axioms and logic, mathematicians created numbers and decimal representation to be as useful as they could, so that we could use + and - on every decimal, even the infinitely repeating ones. It took a lot of theorems, etc.
And they made it so easy that people think it is, and use + and - as if it were granted and always okay to do so.
That's how we get absurdities like:
S = 1 - 1 + 1 - 1 + ...
(1 - 1) + (1 - 1) + ... = 0, but 1 - (1 - 1) - (1 - 1) - ... = 1
So 1 = 0 = S?
All of that to say: the "10x - x = 9.000..." proof is correct, but I bet more than 75% of the people using it can't really see why the reasoning is valid, and would use the same reasoning in cases where it isn't correct anymore.
That's only my opinion though; it's a little bit off-topic.
b) OK, then: if you want to find a, b, two whole numbers such that a/b mathematically equals 0.999...,
you'll always find a = b.
That's a proof that 1 = 0.999...
c) Of course. I only pointed it out because you earlier said:
"if you divide 1/9 you literally get an endless sequence of 1s. same with 1/3. but 1/1 doesn't result with a sequence of 9s or any 9s at all."
Which could be implying that you were thinking "since 1/1 gets me to 1 and not 0.999..., 0.999... isn't 1/1".
But it was obvious that doing 1/1 the way we are used to doing divisions would lead to only one decimal representation out of several possibilities.
So the only point of (c) was to say "yes, doing 1/1 with the division we learn at school leads to 1 and not 0.999..., but the way we can write divisions isn't unique, and another way of doing the division would lead to 0.999..."
c) No. However, having numerous ways to write it doesn't mean your suggested way is one of them.
x = 0.999...
10x = 9.999...
10x - x = 9.999... - 0.999...
9x = 9
x = 9/9 = 1
I believe that is proof that you are right.
I'll start all over again.
What is 0.999...?
A number written in the decimal system. As it's in the decimal system and the decimal is infinitely repeating, it is a rational number.
(a) Do we agree?
Can we write 0.999... as a fraction (write it as a/b with a and b whole numbers)?
Yes, because it's a rational number.
(b) Do we agree?
If you write every rational number as an infinite decimal:
You write 1/9 as 0.111...
3 as 3.000...
5/4 as 1.25000...
(with the rule that you only write the repeating part 3 times, then ...)
(c) Do you think that there is one AND ONLY ONE decimal notation equal to the fractional notation?
Can you just answer (a), (b), (c) with yes or no, and if it's no to (a) or (b), or yes to (c), explain why?
in the representation set of:
(1, 1.0, 1.00, 1.000, 0.999...)
one looks like it doesn't belong, and I can't see any purpose to it. I'm actually not sure I would agree that 0.111... "approaches" 1/9 when it simply IS 1/9, written in a completely different notation: decimal vs. fraction. It's akin to saying 0.5 approaches 1/2. The only symbol needed is the "..." or the bar over the number.
I understand that 0.5 is not an unending string "approaching" anything; I just wanted to demonstrate the same number in a different notation. 1/9 IS 0.111... directly. No symbolism, just a different language.
0.999... is not mathematically equal, just a symbol, thus not the same, and I think it's irrelevant. Yes, I am against the consensus.
When you do 1/1, the process doesn't give you 0.999..., indeed, but that's not a problem at all. I only spoke about 1/9 to give an idea of how infinitely repeating decimals were defined; it's a way of understanding, not an argument.
The argument is only the definition itself.
The decimal "0.aaa..." literally means "the number I'm getting close to when I do 0.a, 0.aa, 0.aaa, ...".
When you do 0.1, 0.11, 0.111, 0.1111, you're getting close to 1/9. So 0.111... = 1/9.
Same thing for 0.999...
1.0 is an adequate decimal representation of 1; so are 1, 1.00, 1.000..., and 0.999...
A number doesn't necessarily have only one decimal representation.
I see the logic with 1/9. If you divide 1/9 you literally get an endless sequence of 1s; same with 1/3. But 1/1 doesn't result in a sequence of 9s, or any 9s at all. I'm not sure that analogy holds up.
And why wouldn't 1.0 be an adequate decimal representation of 1?
Or at least that's what I think; the way decimal notation was created is probably more complex.
But I gave my opinion on the reasons that seem most likely to me.
Well, 0.999... itself has no purpose and isn't really useful.
The real purpose of that notation was to write any rational or real number in decimal.
For 1/1 it wasn't really needed, but for 1/9 we can't help but notice that the sequence 0.1, 0.11, 0.111, ... is convergent and has the limit 1/9, so why not have a notation symbolizing the limit by writing it 0.111...?
And that way, every rational number has an infinitely repeating decimal representation.
And every real number has a decimal representation too, thanks to that definition.
So I guess mathematicians didn't give up on that very useful way of defining decimal numbers just because 0.999... was ugly.
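The convergence claim for 1/9 can be checked with exact arithmetic rather than floats. A small sketch using Python's fractions module (the variable names are mine): the gap between the n-th partial decimal and 1/9 is exactly 1/(9·10^n), shrinking tenfold per step.

```python
from fractions import Fraction

# Partial decimals 0.1, 0.11, 0.111, ... as exact fractions.
# The gap to 1/9 is exactly 1/(9 * 10**n), which is what
# "the sequence converges to 1/9" means.
ninth = Fraction(1, 9)
for n in range(1, 6):
    partial = Fraction(int("1" * n), 10**n)  # 1/10, 11/100, 111/1000, ...
    print(partial, ninth - partial)          # gap shrinks by 10x each step
```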
Well, in your 2 assertions you stated that 0.999... and 1 were the "limit" of a sequence. That was where I got my finite-set assertion from, but it could have been a misunderstanding.
But if you're talking about simple symbolism, I'm not sure why you provided those assertions, which seemed like mathematical proofs. Symbols are just symbols. If it's agreed upon, sure, 0.999... can be 1; so can any character. But it's a pretty stupid symbol, requiring 8 characters and numerous pen strokes for what could be easily symbolized with just the 1. I don't see the purpose in having such a symbol.
Seeing as you even knew about this, I'd assume you are a math major, or at least a fan. What is the purpose of 0.999...?
I never said anything about a finite set.
The set "0.9, 0.99, 0.999, 0.9999, ..." is indeed infinite, that's totally true, and there are many questions about infinity in this set.
0.999... is a number. It's not the set; it's true that instinctively the notation 0.999... makes us think about many properties of the set "0.9, 0.99, etc."
But 0.999... is only a way of writing a finite number, a number that has nothing to do with the set itself other than being the value of the set's limit.
For some people it's a shame that there is nothing more to 0.999..., but that's what mathematicians chose the symbol to mean: a number. It's only an element of the set of real numbers. And its value can be found, and it's 1 when we calculate it correctly.
Every proof that argues by trying to say what would happen if we never stopped adding 9's is fallacious to me, and that's why I don't use them. So no need to talk about those proofs; I don't agree with them either.
Is this what you're talking about?
Also, I confirmed on Wikipedia that your assertion doesn't even have much critical opposition (aside from stubborn students, and I'm afraid I will have to throw my beliefs in with the stubborn students).
You are correct according to the math consensus, but I think the senselessness of infinity gives me some philosophical wiggle room. Also, your proof is very different from the proofs I found on those sites. According to you it's = 1 because it is a finite set, to which I say it's not finite; nothing with a "..." in math is finite, I think.
Their proof is that it is = 1 because it is infinite: anywhere you stop the count it's less than 1, but you still have infinitely more 9s after that, bringing it to 1. With this I disagree, because I don't think having an infinite progression jumps it to the next whole number; it just gets infinitely closer and closer. I am, however, disagreeing not only with you but with the math consensus, so I'm likely missing some concepts. But math is not science, and especially when dealing with infinities, it is closer to philosophy. So I confidently disagree with them.
What if I don't believe 0.9, 0.99, 0.999, etc. is a limited sequence? What if I believe (as I think is the prevailing view) that any such set is by definition infinite, and thus the set you speak of will indeed get closer to 1 yet never reach it?
That is at least in response to your assertions. Your claim that 0.999... is not a zero with infinite 9s after it but is actually a symbol is not familiar to me. I've never seen that symbolism.
It's not approximately 1, it is 1.
Look, the mistake a lot of people make when they disagree that 0.999... = 1 is that they misinterpret 0.999...
0.999... is not a zero with infinite 9's behind it; it's not a number that's getting closer and closer to 1. 0.999... is a symbol used to represent a real number.
And the value of 0.999... is BY DEFINITION the limit of the sequence 0.9 ; 0.99 ; 0.999 ; ...
I'll give 2 assertions :
(1) 0.999... is the limit of the sequence 0.9; 0.99 ; 0.999 ; ...
(2) 1 is the limit of the sequence 0.9; 0.99 ; 0.999 ; ...
If (1) and (2) are true, then 1 = 0.999... If you can't disprove either (1) or (2), then you can't say that 1 doesn't equal 0.999...
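The two assertions can be illustrated numerically. The gap between 1 and the n-th term 0.99...9 (n nines) is exactly 10^-n, so for any tolerance there is a point past which the whole tail of the sequence stays within it, which is precisely what "the limit of the sequence is 1" means. A rough sketch with exact arithmetic (the function name is mine):

```python
from fractions import Fraction

def n_needed(eps):
    """Smallest n with |1 - 0.99...9 (n nines)| < eps. The gap is
    exactly 10**-n, so n just has to satisfy 10**-n < eps."""
    n = 1
    while Fraction(1, 10**n) >= eps:
        n += 1
    return n

# For any tolerance, some tail of 0.9, 0.99, 0.999, ... stays within it.
print(n_needed(Fraction(1, 100)))      # 3
print(n_needed(Fraction(1, 10**6)))    # 7
```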
Most open sequences are unlimited, and you're right that there are infinite ways to write a number in the decimal system, but they all involve varying amounts of zeros before or after the default number: 0001.00000 = 1
0.9999... is almost 1, it's practically 1, it's approximately 1, but it's not equal to 1.
Kind of like how Newton's theory of gravity is functionally accurate for all everyday purposes, but it's not equal to Einstein's, which works in ordinary and many extreme circumstances.
(Newton is 0.9, Einstein is 0.99999, still working toward 1.0)
It's not an argument. It has never been proved or said that there is only one way to write a number in the decimal system.
Do you agree that the sequence 0.9 ; 0.99 ; 0.999 ; 0.9999 ; ... has a limit, and that this limit is 1?
(Not talking about 0.999... yet; I'm only asking what the limit of a certain sequence is.)
What do you mean, "0.999..." is a fraction of 1? Who ever defined that?
that's just stupid, mate. 0.9999 is a fraction of 1, hence the 0.**