{{Short description|Concept in mechanism design}}
In [[mechanism design]], a '''strategyproof (SP) mechanism''' is a [[game form]] in which each player has a weakly-[[dominant strategy]], so that no player can gain by "spying" on the other players to learn what they are going to play. When the players have private information (e.g. their type or their value for some item), and the strategy space of each player consists of the possible information values (e.g. possible types or values), a '''truthful mechanism''' is a game in which revealing the true information is a weakly-dominant strategy for each player.<ref name=agt07>{{Cite Algorithmic Game Theory 2007}}</ref>{{rp|244}} An SP mechanism is also called '''dominant-strategy-incentive-compatible (DSIC)''',<ref name="agt07" />{{rp|415}} to distinguish it from other kinds of [[incentive compatibility]].

An SP mechanism is immune to manipulations by individual players (but not by coalitions). In contrast, in a '''group strategyproof mechanism''', no group of people can collude to misreport their preferences in a way that makes every member better off. In a '''strong group strategyproof mechanism''', no group of people can collude to misreport their preferences in a way that makes at least one member of the group better off without making any of the remaining members worse off.<ref>{{Cite web|url=https://www.rochester.edu/College/gradstudents/vmanjuna/Vikram_Manjunath/Vikram_Manjunath_files/TwoAlternatives.pdf|title=Group Strategy-proofness And Social Choice Between Two Alternatives|archive-date=2020-02-12|archive-url=https://web.archive.org/web/20200212120329/https://www.rochester.edu/College/gradstudents/vmanjuna/Vikram_Manjunath/Vikram_Manjunath_files/TwoAlternatives.pdf|url-status=dead}}</ref>

== Examples ==
Typical examples of SP mechanisms are:
* a [[majority voting|majority vote]] between two alternatives;
* a [[second-price auction]] when participants have [[quasilinear utility]];
* a [[VCG mechanism]] when participants have [[quasilinear utility]].

Typical examples of mechanisms that are ''not'' SP are:
* [[Gibbard's theorem|any deterministic non-dictatorial election]] between three or more alternatives;
* a [[first-price auction]].

=== SP in network routing ===
SP is also applicable in [[network routing]].{{Cn|date=April 2024}} Consider a network modeled as a [[Graph (discrete mathematics)|graph]] in which each edge (i.e. link) has an associated [[cost]] of [[Transmission (telecommunications)|transmission]], privately known to the owner of the link. The owner of a link wishes to be compensated for relaying messages. The sender of a message wants to find the least-cost path, and there are efficient methods for doing so, even in large networks. However, the cost of each link is unknown to the sender. A naive approach would be to ask each owner to declare the cost of their link, use the declared costs to find the least-cost path, and pay each link on that path its declared cost. However, this payment scheme is not SP: the owners of some links can benefit by overstating their cost, and the sender may end up paying far more than the actual cost. It can be shown that, given certain assumptions about the network and the players (owners of links), a variant of the [[VCG mechanism#Quickest paths|VCG mechanism]] is SP.{{cn|date=May 2024}}
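The following Python sketch illustrates the manipulation. The three-node network, the costs, and the <code>shortest_path</code> helper are illustrative assumptions, not taken from the cited literature:

<syntaxhighlight lang="python">
# Illustrative sketch (hypothetical network and costs): paying each link its
# declared cost is not SP, because an on-path link can profit by overbidding.
import heapq

def shortest_path(edges, source, target):
    """Dijkstra over a dict {(u, v): cost}; returns (cost, list of edges)."""
    graph = {}
    for (u, v), c in edges.items():
        graph.setdefault(u, []).append((v, c, (u, v)))
    queue = [(0, source, [])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c, edge in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt, path + [edge]))
    return float("inf"), []

# True (private) costs: the path s->a->t costs 3, the direct link s->t costs 5.
true_costs = {("s", "a"): 1, ("a", "t"): 2, ("s", "t"): 5}

# Truthful reports: edge (a, t) is on the chosen path and is paid 2.
_, path = shortest_path(true_costs, "s", "t")
print(path)  # [('s', 'a'), ('a', 't')] -- (a, t) is paid its declared cost, 2

# The owner of (a, t) inflates the declared cost from 2 to 3.9. The path
# s->a->t (declared cost 4.9) still beats s->t (cost 5), so the edge is
# still selected but is now paid 3.9 > 2: lying is profitable.
lies = {**true_costs, ("a", "t"): 3.9}
_, path = shortest_path(lies, "s", "t")
print(path)  # same path, but (a, t) now receives 3.9 instead of 2
</syntaxhighlight>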
== Formal definitions ==
There is a set <math>X</math> of possible outcomes.

There are <math>n</math> agents, each of whom may have a different valuation for each outcome. The valuation of agent <math>i</math> is represented as a function:
: <math>v_i : X \longrightarrow \mathbb{R}_+</math>
which expresses, in monetary terms, the value the agent assigns to each alternative.

It is assumed that the agents have [[quasilinear utility]] functions; this means that, if the outcome is <math>x</math> and in addition the agent receives a payment <math>p_i</math> (positive or negative), then the total utility of agent <math>i</math> is:
: <math>u_i := v_i(x) + p_i</math>

The vector of all value-functions is denoted by <math>v</math>. For every agent <math>i</math>, the vector of the value-functions of the ''other'' agents is denoted by <math>v_{-i}</math>. So <math>v \equiv (v_i,v_{-i})</math>.

A ''mechanism'' is a pair of functions:
* An <math>Outcome</math> function, which takes as input the value-vector <math>v</math> and returns an outcome <math>x\in X</math> (it is also called a [[social choice]] function);
* A <math>Payment</math> function, which takes as input the value-vector <math>v</math> and returns a vector of payments, <math>(p_1,\dots,p_n)</math>, determining how much each player should receive (a negative payment means that the player should pay a positive amount).

A mechanism is called '''strategyproof''' if, for every player <math>i</math>, every true valuation <math>v_i</math>, every alternative report <math>v_i'</math>, and every value-vector of the other players <math>v_{-i}</math>:
: <math>v_i(Outcome(v_i,v_{-i})) + Payment_i(v_i,v_{-i}) \geq v_i(Outcome(v_i',v_{-i})) + Payment_i(v_i',v_{-i})</math>
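As a concrete instance, the following sketch writes the second-price auction (listed above as SP) as an <math>Outcome</math>/<math>Payment</math> pair and checks the strategyproofness inequality by brute force over a small grid of valuations. The function names, grid, and tie-breaking rule are illustrative choices, not part of the formal definition:

<syntaxhighlight lang="python">
# Illustrative sketch: a second-price auction as an (Outcome, Payment) pair,
# with a brute-force check of the SP inequality on a small valuation grid.
from itertools import product

def outcome(bids):
    """Single-item auction: the highest bidder wins (ties -> lowest index)."""
    return max(range(len(bids)), key=lambda i: (bids[i], -i))

def payments(bids):
    """The winner pays the second-highest bid; losers pay nothing.
    Payments are signed as in the text: money received is positive,
    so the winner's entry is minus the price paid."""
    winner = outcome(bids)
    second = max(b for i, b in enumerate(bids) if i != winner)
    return [-second if i == winner else 0 for i in range(len(bids))]

def utility(i, true_value, bids):
    """Quasilinear utility u_i = v_i(x) + p_i; v_i(x) is the value if i wins."""
    x = outcome(bids)
    return (true_value if x == i else 0) + payments(bids)[i]

grid = [0, 1, 2, 3]
for v_i, v_j, lie in product(grid, repeat=3):  # agent 0 against one other agent
    truthful = utility(0, v_i, [v_i, v_j])
    deviating = utility(0, v_i, [lie, v_j])
    assert truthful >= deviating, (v_i, v_j, lie)
print("SP inequality holds on the grid")
</syntaxhighlight>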
== Characterization ==
It is helpful to have simple conditions for checking whether a given mechanism is SP or not. This section presents two conditions that are, together, both necessary and sufficient.

If a mechanism with monetary transfers is SP, then it must satisfy the following two conditions, for every agent <math>i</math>:<ref name=agt07/>{{rp|226}}

'''1.''' The payment to agent <math>i</math> is a function of the chosen outcome and of the valuations of the other agents <math>v_{-i}</math>, but ''not'' a direct function of the agent's own valuation <math>v_i</math>. Formally, for every <math>v_i,v_i',v_{-i}</math>, if:
: <math>Outcome(v_i,v_{-i}) = Outcome(v_i',v_{-i})</math>
then:
: <math>Payment_i(v_i,v_{-i}) = Payment_i(v_i',v_{-i})</math>

PROOF: If <math>Payment_i(v_i,v_{-i}) > Payment_i(v_i',v_{-i})</math>, then an agent with valuation <math>v_i'</math> prefers to report <math>v_i</math>, since it gives him the same outcome and a larger payment; similarly, if <math>Payment_i(v_i,v_{-i}) < Payment_i(v_i',v_{-i})</math>, then an agent with valuation <math>v_i</math> prefers to report <math>v_i'</math>.

As a corollary, there exists a "price-tag" function, <math>Price_i</math>, that takes as input an outcome <math>x\in X</math> and a valuation vector for the other agents <math>v_{-i}</math>, and returns the payment for agent <math>i</math>, such that for every <math>v_i,v_{-i}</math>, if:
: <math>Outcome(v_i,v_{-i}) = x</math>
then:
: <math>Payment_i(v_i,v_{-i}) = Price_i(x,v_{-i})</math>

'''2.''' The selected outcome is optimal for agent <math>i</math>, given the other agents' valuations. Formally:
: <math>Outcome(v_i, v_{-i}) \in \arg\max_{x} [v_i(x) + Price_i(x,v_{-i})]</math>
where the maximization is over all outcomes in the range of <math>Outcome(\cdot,v_{-i})</math>.

PROOF: Denote <math>x := Outcome(v_i,v_{-i})</math>. If there is another outcome <math>x' = Outcome(v_i',v_{-i})</math> such that <math>v_i(x') + Price_i(x',v_{-i}) > v_i(x) + Price_i(x,v_{-i})</math>, then an agent with valuation <math>v_i</math> prefers to report <math>v_i'</math>, since it gives him a larger total utility.

Conditions 1 and 2 are not only necessary but also sufficient: any mechanism that satisfies both conditions is SP.

PROOF: Fix an agent <math>i</math> and valuations <math>v_i,v_i',v_{-i}</math>. Denote:
: <math>x := Outcome(v_i, v_{-i})</math>, the outcome when the agent reports truthfully;
: <math>x' := Outcome(v_i', v_{-i})</math>, the outcome when the agent reports untruthfully.
By condition 1, the utility of the agent when reporting truthfully is:
: <math>u_i(v_i) = v_i(x) + Price_i(x, v_{-i})</math>
and the utility of the agent when reporting untruthfully is:
: <math>u_i(v_i') = v_i(x') + Price_i(x', v_{-i})</math>
By condition 2:
: <math>u_i(v_i) \geq u_i(v_i')</math>
so reporting truthfully is a dominant strategy for the agent.

=== Outcome-function characterization ===
The actual goal of a mechanism is its <math>Outcome</math> function; the payment function is just a tool to induce the players to be truthful. Hence, it is useful to know, given a certain outcome function, whether it can be implemented using an SP mechanism or not (this property is also called [[Implementability (mechanism design)|implementability]]).{{Cn|date=April 2024}} The [[Monotonicity (mechanism design)|monotonicity]] property is necessary for strategyproofness.{{Cn|date=April 2024}}

== Truthful mechanisms in single-parameter domains ==
A ''single-parameter domain'' is a game in which each player <math>i</math> gets a certain positive value <math>v_{i}</math> for "winning" and a value 0 for "losing". A simple example is a single-item auction, in which <math>v_{i}</math> is the value that player <math>i</math> assigns to the item.

For this setting, it is easy to characterize truthful mechanisms. Begin with some definitions. A mechanism is called ''normalized'' if every losing bid pays 0. A mechanism is called ''monotone'' if, when a player raises his bid, his chances of winning (weakly) increase. For a monotone mechanism, for every player ''i'' and every combination of bids of the other players, there is a ''critical value'' at which the player switches from losing to winning.

A normalized mechanism on a single-parameter domain is truthful if the following two conditions hold (see the sketch after the list):<ref name=agt07/>{{rp|229–230}}
# The assignment function is monotone in each of the bids, and
# every winning bid pays the critical value.
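As an illustration of this characterization, the sketch below uses a hypothetical ''k''-unit auction (the function names and example bids are illustrative, not from the cited text): allocating to the ''k'' highest bidders is monotone, and the critical value of every winner is the (''k''+1)-th highest bid, so a truthful mechanism charges exactly that amount.

<syntaxhighlight lang="python">
# Illustrative sketch: a k-unit auction satisfying the two conditions above.
def k_unit_auction(bids, k):
    """Returns (winners, payments): the k highest bidders each win one unit
    (a monotone assignment) and each pays the highest losing bid, which is
    their critical value; losers pay 0, so the mechanism is normalized."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winners = set(order[:k])
    critical = bids[order[k]] if len(bids) > k else 0
    pays = [critical if i in winners else 0 for i in range(len(bids))]
    return winners, pays

winners, pays = k_unit_auction([9, 2, 7, 5], k=2)
print(sorted(winners))  # [0, 2] -- the two highest bids, 9 and 7
print(pays)             # [5, 0, 5, 0] -- each winner pays the 3rd-highest bid
</syntaxhighlight>

With ''k'' = 1 this recovers the second-price auction.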
== Truthfulness of randomized mechanisms ==
There are various ways to extend the notion of truthfulness to randomized mechanisms. They are, from strongest to weakest:<ref name=":0">{{Cite book |last1=Chakrabarty |first1=Deeparnab |last2=Swamy |first2=Chaitanya |title=Proceedings of the 5th conference on Innovations in theoretical computer science |chapter=Welfare maximization and truthfulness in mechanism design with ordinal preferences |date=2014-01-12 |chapter-url=https://doi.org/10.1145/2554797.2554810 |series=ITCS '14 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=105–120 |doi=10.1145/2554797.2554810 |isbn=978-1-4503-2698-8 |s2cid=2428592}}</ref>{{Rp|pages=6–8}}
* '''Universal truthfulness''': for each randomization of the algorithm, the resulting mechanism is truthful. In other words: a universally-truthful mechanism is a randomization over deterministic truthful mechanisms, where the weights may be input-dependent.
* '''Strong stochastic-dominance truthfulness (strong-SD-truthfulness)''': the vector of probabilities that an agent receives by being truthful has [[first-order stochastic dominance]] over the vector of probabilities he gets by misreporting. That is: the probability of getting the top priority is at least as high, AND the probability of getting one of the two top priorities is at least as high, AND ..., the probability of getting one of the ''m'' top priorities is at least as high.
* '''Lexicographic truthfulness (lex-truthfulness)''': the vector of probabilities that an agent receives by being truthful has [[lexicographic dominance]] over the vector of probabilities he gets by misreporting. That is: the probability of getting the top priority is higher, OR (the probability of getting the top priority is equal and the probability of getting one of the two top priorities is higher), OR ..., (the probability of getting one of the ''m''−1 top priorities is equal and the probability of getting one of the ''m'' top priorities is higher), OR (all probabilities are equal).
* '''Weak stochastic-dominance truthfulness (weak-SD-truthfulness)''': the vector of probabilities that an agent receives by being truthful is not first-order-stochastically-dominated by the vector of probabilities he gets by misreporting.

Universal truthfulness implies strong-SD-truthfulness, which implies lex-truthfulness, which implies weak-SD-truthfulness, and all implications are strict.<ref name=":0" />{{Rp|location=Thm.3.4}}
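The three dominance-based notions can be made concrete for a single agent who gets one of ''m'' outcomes, listed from best to worst. The following sketch is a formulation of the definitions above, not code from the cited paper; the probability vectors and the floating-point tolerance are illustrative choices. Here <code>p</code> is the probability vector under truthful reporting and <code>q</code> the vector under a misreport:

<syntaxhighlight lang="python">
# Illustrative sketch of the dominance relations between probability vectors
# over m outcomes, ordered from best to worst.
from itertools import accumulate

def strong_sd(p, q):
    """First-order stochastic dominance: for every k, the probability of one
    of the k best outcomes under p is at least as high as under q."""
    return all(a >= b - 1e-12 for a, b in zip(accumulate(p), accumulate(q)))

def lex_dominates(p, q):
    """Lexicographic dominance: at the first prefix where the cumulative
    probabilities differ, p's is higher (all-equal also counts)."""
    for a, b in zip(accumulate(p), accumulate(q)):
        if abs(a - b) > 1e-12:
            return a > b
    return True

def weak_sd(p, q):
    """Weak-SD truthfulness only requires that the misreport's vector q does
    not strictly first-order dominate the truthful vector p."""
    return p == q or not strong_sd(q, p)

p, q = [0.5, 0.2, 0.3], [0.4, 0.4, 0.2]
print(strong_sd(p, q))      # False: two-best prefix is 0.7 under p, 0.8 under q
print(lex_dominates(p, q))  # True: the first differing prefix is 0.5 > 0.4
print(weak_sd(p, q))        # True: q does not dominate p either
</syntaxhighlight>

In this example <code>p</code> lex-dominates <code>q</code> without strong-SD-dominating it, consistent with the implications above being strict.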
=== Truthfulness with high probability ===
For every constant <math>\epsilon>0</math>, a randomized mechanism is called '''truthful with probability <math>1-\epsilon</math>''' if for every agent and for every vector of bids, the probability that the agent benefits by bidding non-truthfully is at most <math>\epsilon</math>, where the probability is taken over the randomness of the mechanism.<ref name="agt07" />{{rp|349}}

If the constant <math>\epsilon</math> goes to 0 as the number of bidders grows, then the mechanism is called '''truthful with high probability'''. This notion is weaker than full truthfulness, but it is still useful in some cases; see e.g. [[consensus estimate]].

== False-name-proofness ==
A new type of fraud that has become common with the abundance of internet-based auctions is ''false-name bids'': bids submitted by a single bidder using multiple identifiers, such as multiple e-mail addresses. '''False-name-proofness''' means that no player has an incentive to issue false-name bids. This is a stronger notion than strategyproofness. In particular, the [[Vickrey–Clarke–Groves]] (VCG) auction is not false-name-proof.<ref>{{Cite journal |doi=10.1016/S0899-8256(03)00045-9 |title=The effect of false-name bids in combinatorial auctions: New fraud in internet auctions |journal=Games and Economic Behavior |volume=46 |pages=174–188 |year=2004 |last1=Yokoo |first1=M. |last2=Sakurai |first2=Y. |last3=Matsubara |first3=S. |citeseerx=10.1.1.18.6796}}</ref>

False-name-proofness differs from group strategyproofness in that it assumes a single individual can simulate behaviors that would normally require the collusive coordination of multiple individuals.{{Citation needed|date=April 2024}}{{Explain|date=April 2024}}

== See also ==
* [[Incentive compatibility]]
* [[Individual rationality]]
* [[Participation criterion]] – a player cannot lose by playing the game (i.e. a player has no incentive to avoid playing the game)

== Further reading ==
* Parkes, David C. (2004), "On Learnable Mechanism Design", in: Tumer, Kagan and David Wolpert (eds.): ''Collectives and the Design of Complex Systems'', New York and elsewhere, pp. 107–133.
* [http://www.math.auckland.ac.nz/~slinko/Research/Borda3.pdf On Asymptotic Strategy-Proofness of Classical Social Choice Rules], an article by Arkadii Slinko about strategy-proofness in voting systems.

== References ==
{{reflist}}

{{game theory}}

[[Category:Game theory]]
[[Category:Mechanism design]]
[[Category:Electoral system criteria]]