The best-case result is 1,001,000,000 (A+B) versus 1,000,000,000 (B only). Worst case: I have only 1,000,000.
I go with B only, because the difference feels tiny / irrelevant.
Maybe I actually have free will and this is not determinism kicking in, but who knows. I'm not risking the odds for such a tiny benefit.
Except that’s not the worst case. If the machine predicted you would pick A&B, then B contains nothing, so if you then only picked B (i.e. the machine’s prediction was wrong), then you get zero. THAT’S the worst case. The question doesn’t assume the machine’s predictions are correct.
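The four outcomes discussed above can be tabulated in a short sketch, using the figures from the thread (box A always holds 1,000,000; box B holds 1,000,000,000 only if the predictor foresaw "B only"):

```python
# Payoff matrix for the Newcomb setup in this thread.
# Assumptions (taken from the comments): A always contains 1,000,000;
# B contains 1,000,000,000 if the machine predicted "B only", else nothing.
A = 1_000_000
B_FULL = 1_000_000_000

def payoff(prediction, choice):
    """prediction and choice are each 'B' (one-box) or 'A+B' (two-box)."""
    b = B_FULL if prediction == 'B' else 0
    return b + (A if choice == 'A+B' else 0)

for pred in ('B', 'A+B'):
    for choice in ('B', 'A+B'):
        print(f"predicted {pred:>3}, chose {choice:>3}: {payoff(pred, choice):>13,}")
```

Enumerating all four cells shows the point being made: the worst case is not 1,000,000 (predicted A+B, chose A+B) but 0 (predicted A+B, chose B only), which can only occur if the machine's prediction is wrong.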
Good point. Actually, I was assuming that the machine's predictions are never wrong. That's also how it's defined on the Newcomb's Paradox wiki page.
If that's not a 100% given, you are definitely right.
Well if you actually have free will, how can the machine predict your actions?
What if someone opened box B and showed you what was in it? What would that mean? What would you do?
I meant, let's imagine the machine predicted B and is wrong (because I take A+B). I would call that scenario "I have free will, no determinism." Then I will have 1,000,000,000 "only". That's a good result.