==Causality and free will==
{| class="wikitable floatright"
! {{diagonal split header|Actual<br />choice|Predicted<br />choice}}
! A + B
! B
|-
! A + B
| $1,000 || Impossible
|-
! B
| Impossible || $1,000,000
|}
Causality issues arise when the predictor is posited as [[infallible]]; Nozick avoids them by positing that the predictor's predictions are "''almost'' certainly" correct. Nozick also stipulates that if the predictor predicts that the player will choose randomly, then box B will contain nothing. This stipulation assumes that inherently random or unpredictable events, such as [[free will]] or [[quantum mind]] processes, play no part in the making of the choice.<ref name='langan'>{{cite journal |journal=Noesis |author=Christopher Langan |issue=44 |title=The Resolution of Newcomb's Paradox |url=http://megasociety.org/noesis/44/newcomb.html}}</ref>

However, these issues can still be explored in the case of an infallible predictor. Under this condition, taking only box B appears to be the correct option: the outcomes that return $0 and $1,001,000 can both be ignored, since each requires the predictor to have made an incorrect prediction, and the problem states that the predictor is never wrong. The choice therefore reduces to taking both boxes with $1,000 or taking only box B with $1,000,000{{snd}} so taking only box B is always better.
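For illustration, this reasoning can be restated as an expected-value comparison. Suppose the prediction is treated as correct with probability <math>p</math>, whichever choice is made (<math>p</math> is introduced here for illustration and is not part of the problem statement). The expected payoffs are then

:<math>\operatorname{E}[\text{A + B}] = \$1{,}000 + (1 - p)\cdot\$1{,}000{,}000, \qquad \operatorname{E}[\text{B}] = p\cdot\$1{,}000{,}000.</math>

Taking only box B yields the higher expected payoff whenever <math>p > 0.5005</math>; in the infallible case <math>p = 1</math>, the comparison reduces to $1,000 versus $1,000,000, as above.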
[[William Lane Craig]] has suggested that, in a world with perfect predictors (or [[Time travel|time machines]], since a time machine could serve as a mechanism for making a prediction), [[retrocausality]] can occur.<ref>{{cite journal |last=Craig |first=William Lane |author-link=William Lane Craig |year=1987 |url=http://www.leaderu.com/offices/billcraig/docs/newcomb.html |title=Divine Foreknowledge and Newcomb's Paradox |journal=Philosophia |volume=17 |issue=3 |pages=331–350 |doi=10.1007/BF02455055 |s2cid=143485859 |url-access=subscription}}</ref> The chooser's choice can then be said to have ''caused'' the predictor's prediction. Some have concluded that if time machines or perfect predictors can exist, then there can be no [[free will]] and choosers will do whatever they are fated to do. Taken together, the paradox is a restatement of the old contention that free will and [[determinism]] are incompatible, since determinism enables the existence of perfect predictors. Put another way, the paradox is analogous to the [[grandfather paradox]]: it presupposes a perfect predictor, implying that the "chooser" is not free to choose, yet simultaneously presumes that a choice can be debated and decided. This suggests to some that the paradox is an artifact of these contradictory assumptions.<ref>{{cite journal |last=Craig |first=William Lane |author-link=William Lane Craig |year=1988 |title=Tachyons, Time Travel, and Divine Omniscience |journal=[[The Journal of Philosophy]] |volume=85 |issue=3 |pages=135–150 |jstor=2027068 |doi=10.2307/2027068}}</ref>

[[Gary Drescher]] argues in his book ''Good and Real'' that the correct decision is to take only box B, by appealing to a situation he argues is analogous{{snd}} a rational agent in a deterministic universe deciding whether or not to cross a potentially busy street.<ref>{{cite book |last=Drescher |first=Gary |author-link=Gary Drescher |year=2006 |title=Good and Real: Demystifying Paradoxes from Physics to Ethics |publisher=MIT Press |isbn=978-0262042338}}</ref>

[[Andrew David Irvine|Andrew Irvine]] argues that the problem is structurally isomorphic to [[Braess's paradox]], a non-intuitive but ultimately non-paradoxical result concerning equilibrium points in physical systems of various kinds.<ref>{{cite journal |first=Andrew |last=Irvine |title=How Braess' paradox solves Newcomb's problem |journal=International Studies in the Philosophy of Science |volume=7 |issue=2 |year=1993 |pages=141–60 |doi=10.1080/02698599308573460}}</ref>

Simon Burgess has argued that the problem can be divided into two stages: the stage before the predictor has gained all the information on which the prediction will be based, and the stage after it. While the player is still in the first stage, they can presumably influence the predictor's prediction, for example by committing to taking only one box; players who are still in the first stage should therefore simply commit themselves to one-boxing. Burgess readily acknowledges that those in the second stage should take both boxes. As he emphasises, however, for all practical purposes that is beside the point; the decisions "that determine what happens to the vast bulk of the money on offer all occur in the first [stage]".<ref>{{cite journal |last=Burgess |first=Simon |title=Newcomb's problem and its conditional evidence: a common cause of confusion |journal=Synthese |date=February 2012 |volume=184 |issue=3 |page=336 |doi=10.1007/s11229-010-9816-1 |jstor=41411196 |s2cid=28725419}}</ref> Players who find themselves in the second stage without having already committed to one-boxing will therefore invariably end up without the riches, with no one else to blame. In Burgess's words: "you've been a bad boy scout"; "the riches are reserved for those who are prepared".<ref>{{cite journal |last=Burgess |first=Simon |title=Newcomb's problem: an unqualified resolution |journal=Synthese |date=January 2004 |volume=138 |issue=2 |page=282 |doi=10.1023/b:synt.0000013243.57433.e7 |jstor=20118389 |s2cid=33405473}}</ref>

Burgess has stressed that{{snd}} ''pace'' certain critics (e.g., Peter Slezak){{snd}} he does not recommend that players try to trick the predictor, nor does he assume that the predictor is unable to predict the player's thought process in the second stage.<ref>{{cite journal |last=Burgess |first=Simon |title=Newcomb's problem and its conditional evidence: a common cause of confusion |journal=Synthese |date=February 2012 |volume=184 |issue=3 |pages=329–330 |doi=10.1007/s11229-010-9816-1 |jstor=41411196 |s2cid=28725419}}</ref> Quite to the contrary, Burgess analyses Newcomb's paradox as a common-cause problem, and he pays special attention to the importance of adopting a set of unconditional probability values{{snd}} whether implicitly or explicitly{{snd}} that are entirely consistent at all times. To treat the paradox as a common-cause problem is simply to assume that the player's decision and the predictor's prediction have a common cause; that common cause may be, for example, the player's brain state at some particular time before the second stage begins. Burgess also highlights a similarity between Newcomb's paradox and [[Kavka's toxin puzzle]]: in both problems one can have a reason to intend to do something without having a reason to actually do it. Burgess credits recognition of that similarity to Andy Egan.<ref>{{cite journal |last=Burgess |first=Simon |title=Newcomb's problem and its conditional evidence: a common cause of confusion |journal=Synthese |date=February 2012 |volume=184 |issue=3 |page=338 |doi=10.1007/s11229-010-9816-1 |jstor=41411196 |s2cid=28725419}}</ref>