Discussion
Newcomb's paradox needs a demon
jack_pp: There's rational, and then there's common sense. Put in that situation, who in their right mind would bet on even a 50% chance that the entity is wrong and get greedy over $1000? All I'd need to know is that it is far more likely I get the million if I go into the game intending to one-box.
danbruc: The existence of a flawless predictor means that you do not have a choice after the predictor made its prediction; the decision must already be baked into the state of the universe accessible to the predictor. It also precludes any true randomness affecting the choice, as that could not be predicted ahead of time.

I do not think that allowing some prediction error fundamentally changes this. It only means that sometimes the choice may depend on unpredictable true randomness, or the predictor did not measure the relevant state of the universe exactly enough, or the prediction algorithm is not flawless. But if the predictor still arrives at the correct prediction most of the time, then most of the time you do not have a choice, and most of the time the choice does not depend on true randomness.

Which also renders the entire paradox somewhat moot, because there is no choice for you to make. The existence of a good predictor and the ability to make a choice after the prediction are incompatible, up to wild time travel scenarios and things like that.
halfcat: A flawless predictor would indicate you’re in a simulation, but also we cannot even simulate multiple cells at the most fine-grained level of physics.

But you’re also right that even a pretty good (but not perfect) predictor doesn’t change the scenario.

What I find interesting is to change the amounts. If the open box has $0.01 instead of $1000, you’re not thinking “at least I got something”, and you just one-box. But if both boxes contain equal amounts, or you swap the amounts in each box, two-boxing is always better.

All that to say, the idea that the right strategy here is to “be the kind of person who one-boxes” isn’t a universal virtue. If the amounts change, the virtues change.
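The effect of changing the amounts can be made concrete with a small expected-value sketch (my own illustration, not from the thread): a predictor with accuracy `p`, an open box holding `A`, and an opaque box holding `B` only when one-boxing was predicted.

```python
# Expected payoffs for the two strategies, given predictor accuracy p.
# A and B default to the classic Newcomb amounts; both are assumptions
# for illustration, not values fixed by the thread.

def expected_values(p, A=1_000, B=1_000_000):
    """Return (one_box_ev, two_box_ev) for predictor accuracy p."""
    one_box = p * B            # B is full only if one-boxing was predicted
    two_box = A + (1 - p) * B  # you always get A; B is full only on a miss
    return one_box, two_box

# One-boxing wins exactly when p > 0.5 + A / (2 * B).  With the classic
# amounts the threshold is 0.5005, so even a barely-better-than-chance
# predictor favours one-boxing.
for p in (0.5, 0.501, 0.99):
    one, two = expected_values(p)
    print(p, one, two)
```

This matches halfcat's observation: shrinking A to $0.01 pushes the break-even accuracy to essentially a coin flip, while setting A equal to B makes two-boxing better at every accuracy.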
ordu: > Which also renders the entire paradox somewhat moot because there is no choice for you to make.

Not quite. You chose your decision-making methods at some point in your life, and you could have changed them multiple times before you came to the setup of Newcomb's paradox. If we look at your past life as a variable in the problem, then changing this variable changes the outcome: it changes the prediction made by the predictor.

> The existence of a flawless predictor means that you do not have a choice after the predictor made its prediction

I believe that if your definition of a choice stops working when we assume a deterministic Universe, then you need a better definition of a choice. In a deterministic Universe it becomes glaringly obvious that the whole framework of free will and choice is just an abstraction, one that abstracts away things that are not really needed to make a decision.

Moreover, I think I can hint at how to deal with it: relativity. Different observers cannot agree on whether an observed agent has free will or not. Accept that as fundamental, just as relativity accepts that universal time doesn't exist, and all the logical paradoxes go away.
chriswarbo: > I believe that if your definition of a choice stops working when we assume a deterministic Universe, then you need a better definition of a choice. In a deterministic Universe it becomes glaringly obvious that the whole framework of free will and choice is just an abstraction, one that abstracts away things that are not really needed to make a decision.

Indeed, I think of concepts like "agency", "choice", "free will", etc. as aspects of a particular sort of scientific model. That sort of model can make good predictions about people, organisations, etc. which would be intractable for many other approaches. It can also be useful in situations that we have more sophisticated models for; e.g. treating a physical system as "wanting" to minimise its energy can give a reasonable prediction of its behaviour very quickly.

That sort of model has also been applied to systems where its predictive powers aren't very good; e.g. modelling weather, agriculture, etc. as being determined by some "will of the gods", and attempting to infer the desires of those gods based on their observed "choices".

It baffles me that some people think a model of this sort has any relevance at a fundamental level.
danbruc: > A flawless predictor would indicate you’re in a simulation [...]

No, it does not. Replace the human entering the room with a computer; the predictor analyzes the computer and the software running on it when it enters. If the decision program does not query a hardware random source, and no stray cosmic particle changes the choice, the predictor can perfectly predict the choice just by emulating the computer accurately enough. If the program makes any use of external inputs, say the image from an attached webcam, the predictor also needs to know those inputs well enough. The same could, at least in principle, work for humans.
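The computer-in-the-room argument can be sketched as a toy program (all names here are invented for illustration): a deterministic contestant with no random or external inputs, which the predictor "emulates" simply by running a copy of it before filling the boxes.

```python
# Toy sketch of a deterministic contestant facing a predictor that
# emulates it.  Because the decision procedure uses no randomness and
# no external inputs, the emulated run and the real run must agree.

def contestant(open_amount, opaque_amount):
    # Deterministic decision procedure: no hardware random source,
    # no webcam, so any faithful emulation reaches the same answer.
    return "one-box"

def predictor(decision_procedure):
    # Emulate the contestant ahead of time to decide what goes in box B.
    predicted = decision_procedure(1_000, 1_000_000)
    return 1_000_000 if predicted == "one-box" else 0

box_b = predictor(contestant)      # the prediction happens first
choice = contestant(1_000, box_b)  # the real run necessarily agrees
payout = box_b if choice == "one-box" else 1_000 + box_b
print(choice, payout)
```

Adding a call to a hardware random source inside `contestant` is exactly what would break the emulation, which is the caveat danbruc notes.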
vidarh: I agree with you that it doesn't require that you are in a simulation, but a flawless predictor would be a strong indication that a simulation is possible, and that should raise our assumed probability that we're in a simulation.
arethuza: I would think that the existence of a flawless predictor is probably more likely to indicate that memories of predictions, and any associated records, have been modified to make the predictor appear flawless.
vidarh: Assuming I have no way of testing the predictor, my decision would be to one-box, on the basis that $1000 is not a lot of money to me but $1000000 is, and I wouldn't worry about the odds, because without knowing the nature of the specific predictor we're down to Pascal's Wager married to the Halting Problem:

We don't know whether or how our actions and thought processes might affect the outcome, so any speculation over odds is meaningless and devolves into making assumptions we can't test, without even knowing whether that speculation itself might alter the outcome, or how.

But I don't need to speculate about the relative value of $1000 and $1000000 to me. Others might opt for the safe $1000 for the same reason.
malfist: Two boxes is the only choice that makes sense. It is always better than one box.

No matter what you do after you enter the room, the predictor has already made their move; nothing you do now will change it. The only logical thing to do is to take both boxes, because whatever the value in the second box is, it will be added to the first box. If you only take the second box, you are objectively giving up $1,000 and getting no value in exchange, since not taking the first box doesn't change what's in the second.
Smaug123: And for you, of course, that's true! Because you are the sort of being who two-boxes, and this fact is visible to the predictor. Other types of being can do better.
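The exchange above can be made concrete with a small simulation (a hypothetical sketch, names invented): malfist's dominance reasoning is correct at decision time, yet when the predictor can see which kind of agent it is facing, the committed one-boxer ends up richer on average.

```python
import random

# Simulate many rounds where a predictor reads the agent's fixed policy
# and is correct with the given accuracy (0.9 here, an assumed value).
# Dominance holds pointwise, but the one-boxing policy earns more.

def average_payout(policy, accuracy=0.9, trials=10_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        correct = rng.random() < accuracy
        other = "two-box" if policy == "one-box" else "one-box"
        predicted = policy if correct else other
        box_b = 1_000_000 if predicted == "one-box" else 0
        total += box_b if policy == "one-box" else 1_000 + box_b
    return total / trials

print(average_payout("one-box"))  # ~900,000
print(average_payout("two-box"))  # ~101,000
```

The two-boxer is right that, in any fixed world, taking both boxes adds $1,000; the one-boxer is right that agents whose policy correlates with the prediction mostly face fuller boxes.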