I don't think that matches what they're describing in the article. The problem is that we only get to try implementing one of policy A and policy B, so if someone bets "I think policy A will achieve the goal", but we implement policy B, you have to just void their bet.
If we had already decided on policy A, and were just trying to predict whether it'll work, what you describe would be fine. But in the article, we're trying to decide whether to implement policy A or policy B, by having two separate markets, one for "what will happen if we do A" and another for "what will happen if we do B", and one of those two will get voided.
Unless I understood it wrong, you could use a variant of that:
Market 1 has options A, not A. Market 2 has options B, not B. At the end of the trading period, void the "losing" market and reward the winning one. It's trivial to implement if you're using e.g. electronic payments and you forbid "cross" trading between A and B.
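As a sketch, the settlement rule could look like this — the names (`settle`, the bet tuples) are mine for illustration, and I'm assuming the standard binary-share payout where a winning bet at price p returns stake/p:

```python
# Toy settlement for the two-market scheme: the market for the policy
# that is NOT implemented is voided (stakes refunded); the other market
# settles on its Yes/No outcome.

def settle(markets, implemented, outcome_yes):
    """markets: dict name -> list of (bettor, side, stake, price)."""
    payouts = {}
    for name, bets in markets.items():
        for bettor, side, stake, price in bets:
            if name != implemented:
                # voided market: everyone just gets their stake back
                payouts[bettor] = payouts.get(bettor, 0) + stake
            else:
                won = (side == "yes") == outcome_yes
                # winning binary share bought at price p pays stake / p
                payouts[bettor] = payouts.get(bettor, 0) + (stake / price if won else 0)
    return payouts

markets = {
    "A": [("alice", "yes", 10, 0.40)],
    "B": [("alice", "yes", 10, 0.60)],
}
# B is implemented and succeeds: A's stake is refunded, B's bet pays out.
print(settle(markets, implemented="B", outcome_yes=True))
```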
Right, that's the system they propose. But I'm saying that can result in an agent being incentivized to put their money into a policy they believe to be worse, as long as they believe that policy is underpriced and more likely to "win", which is undesirable.
So for example, suppose we have the objective "increase our production of paperclips by next year". Our two policy options are "build a paperclip factory", and "build a paper mill". We now have two bettings markets, each with a Yes/No pair of options, "Will building a paperclip factory increase our production of paperclips?" and "Will building a paper mill increase our production of paperclips?".
Now let's say that currently, the paper mill has "Yes, this will work" at 60%, and the factory has "Yes, this will work" at 40%. I'm a paperclip genius, and I know that the true odds are that the factory has a 90% chance of working, and the mill has a 75% chance of working.
Where do I put my money? Ostensibly, we want me to put it on the factory, because that's the best policy. But the factory is unpopular and that policy is unlikely to be implemented (since it's down by 20 percentage points). Even if I nudge it up a bit, my bet is likely to be voided, and I make zero return for my knowledge. Instead, I will bet on "yes the mill will work", because that market is also underpriced, and the policy will actually be implemented. By doing this, I maximize my expected reward, and I also move us away from what I think is the best policy.
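Here's the back-of-envelope EV calculation behind that. The implementation probabilities (0.1 for the factory, 0.9 for the mill) are my own assumption for the sake of the example — suppose the decision-maker almost always picks the higher-priced policy:

```python
# EV per dollar bet on "Yes". A voided bet refunds the stake, so profit
# only accrues when the policy is actually implemented:
#   EV = p_implemented * (true_prob / price - 1)

def ev_per_dollar(p_implemented, true_prob, price):
    return p_implemented * (true_prob / price - 1)

factory = ev_per_dollar(0.1, true_prob=0.90, price=0.40)  # 0.125
mill    = ev_per_dollar(0.9, true_prob=0.75, price=0.60)  # 0.225
print(factory, mill)
```

Even though the factory is far more underpriced (0.40 vs a true 0.90), the mill bet has the higher expected return, because the factory bet is usually voided.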
> But the factory is unpopular and that policy is unlikely to be implemented (since it's down by 20 percentage points). Even if I nudge it up a bit, my bet is likely to be voided, and I make zero return for my knowledge.
I'm not sure that's what would actually happen unless you add some weird constraints. Under usual (unrealistic, okay, but just for the sake of argument) assumptions, you would buy infinitely many As at any price <.9, and infinitely many Bs at any price <.75. By definition you know the true odds, so your posterior predictive has zero hyperparameter variance: every single one of those trades has positive expectation.
Both A and B would increase in price, but you would stop buying B after a while. Assuming infinite time, infinite liquidity, no budget constraints and no weird information asymmetries, you could single-handedly make the market converge at their "true" values: you will always buy A if the price is lower than your threshold, and any rational seller who doesn't believe your odds would sell it to you.
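The convergence claim can be illustrated with a toy loop — the linear price impact per trade is an assumption purely for illustration, not a model of real market microstructure:

```python
# Under unlimited budget, the informed trader keeps buying while
# price < true odds (every such trade has positive EV), pushing the
# price up until it reaches their threshold.

def buy_until_converged(price, true_prob, impact=0.01):
    while price < true_prob:
        price = min(price + impact, true_prob)  # assumed linear price impact
    return price

print(buy_until_converged(0.40, 0.90))  # converges to 0.9
print(buy_until_converged(0.60, 0.75))  # converges to 0.75
```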
Certainly that's true if I have infinite capital, but if these markets require participants to have infinite capital in order to work, we've got a problem. If I have finite capital, then any money I put on the factory is money I can't put on the mill, and that's losing value.
Actually, if the options are mutually exclusive and bets on the losing option get voided, there's no reason to forbid you from betting on both options with the same money, is there? Only one bet will stand.
For that matter, anyone could safely lend you X, where X is what you already bet on A, for the purpose of betting on B. One way or another you'll get X back in voided bet money, so you're a perfectly safe borrower.
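To spell out why the loan is riskless (names here are mine, and this assumes exactly one of the two mirrored bets is voided and refunds its stake):

```python
# Bettor stakes their own money on A and borrows the same amount to
# mirror the bet on B. Worst case: the settled bet loses entirely,
# but the voided side's refund still covers the loan in full.

def worst_case_repayment(stake, borrowed):
    refunded = stake      # the voided market returns the stake
    settled_loss = 0      # a losing settled bet pays nothing
    return refunded + settled_loss - borrowed

print(worst_case_repayment(stake=100, borrowed=100))  # 0: lender is made whole
```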
Typically in a betting market, you can continue to buy and sell your shares after placing your bet, so if the market moves and you now think that A is overpriced, you can sell some of your shares and lock in profit. It's not entirely clear to me how you make this work if your investment in A and B is with mirrored funds. If there's a way to make it work, it certainly seems like a step in the right direction.
If you have a budget constraint, it's rational for you to put your money on argmax(true(A) - market_value(A), true(B) - market_value(B)), i.e., whichever option is more underpriced, which is exactly the Pareto-efficient behavior.
That's where I disagree. Your expected value on buying A is (probability A is implemented) * (true(A) - market_value(A)), and similarly for B, because you receive zero return if the thing you bet on is not implemented. Thus, even if A is badly mispriced, you may not want to buy it if it has very low probability of being implemented.