
Belief markets: short conspiracy theories using prediction market technology

Overview

This article introduces a new concept called a 'belief market'. The closest relatives of belief markets are prediction markets, so I will introduce the concept by describing how prediction and belief markets differ.

Prediction markets allow participants to bet on the outcomes of events. More generally, bets can be made on answers to questions which can be unambiguously answered in the future. A participant can update his position until the market is closed and the question is resolved, which distinguishes prediction markets from other kinds of betting.

Belief markets allow participants to bet on answers to questions which cannot be reliably resolved. In other words, belief markets allow participants to bet on something they believe to be true, even if an answer cannot be determined in a way 100% of participants would agree with. Instead, a virtual resolution is based on a consensus among a subset of participants.

To make this possible, belief markets rely on a cryptocurrency-inspired token economics model which assumes that a token can be valuable even if it has no intrinsic value and is not backed by anything. Tokens which are issued within blockchain systems and are not backed by anything currently have a total market value above 200 billion USD, so the model is empirically confirmed.

Belief markets can also be understood as a generalization of decentralized prediction markets. (They are essentially a way to repurpose some of the prediction market designs I came up with years ago.)

What is the possible use of belief markets?
1. They let participants 'short' beliefs which they find incorrect or unproductive. For example, it might be possible to short the 5G-causes-coronavirus conspiracy theory if there are people willing to buy it.
2. They allow more complex prediction markets for questions which cannot be precisely specified and/or will take an unknown amount of time to resolve.

Decentralized prediction markets

In this section we will consider a simplified model of a decentralized blockchain-based prediction market which demonstrates the "virtual resolution" model used by belief markets. (The model described here is similar to the one in the "Decentralized Prediction Market without Arbiters" paper written by me, Iddo Bentov and Meni Rosenfeld.)

Suppose a blockchain-based prediction market allows participants to bet on a single event at a time. Suppose the blockchain implements a single cryptocurrency, PRED, which is used to make bets and is also traded on outside markets (i.e. it is convertible to USD at market price).

Suppose the first event (E1) participants can bet on is "Will Barack Obama win the US presidential election in 2012?". (Obviously, betting happens before the election takes place, e.g. in 2011; why this particular event was chosen for the example will be explained later.) Participants can split 1 PRED into a pair of outcome tokens, E1_Y and E1_N, and trade them among each other, e.g. people who believe Obama will win will sell E1_N and keep E1_Y.

In a traditional prediction market model, a definitive resolution must be made after the true outcome is known. Many methods can be used in a blockchain-based prediction market to decide on a resolution:

1. a trusted third party (also known as an 'oracle')
2. a consensus among miners/block producers (note that miners have an incentive to resolve it correctly, as otherwise the price of PRED will crash on the outside markets and they will collect much less value from block rewards)
3. a majority vote among oracles which are reputable / incentivized for their work
4. a majority of PRED token holders

Note that none of these resolution models is perfect: oracles might be corrupted, and miners might be bribed by participants who bet on the wrong outcome or by owners of a competitor cryptocurrency. The majority of PRED holders might make a wrong resolution if they bet on the wrong outcome. (Note that in a proof-of-stake based blockchain, "majority of PRED holders", "majority of block producers" and "majority of participants" might be the same. In that case, the prediction market essentially devolves into a popularity market where participants are incentivized to bet on the most popular outcome, as in a Keynesian beauty contest.)

Another option is to use the mechanism which decentralized blockchains use for governance: forks. E.g. Bitcoin was split into Bitcoin [Core] and Bitcoin Cash as a result of the block size debate, and Ethereum was split into Ethereum and Ethereum Classic due to a debate on The DAO hack resolution. In Ethereum, every upgrade is implemented as a fork which lets users stay on the old version if they want to (thus creating a new cryptocurrency), although in the vast majority of cases they choose to apply the developer-recommended upgrade.

So a blockchain PRED might split into PRED_E1_Y and PRED_E1_N. In PRED_E1_Y, E1_Y tokens will be treated the same as PRED tokens, while E1_N tokens will be destroyed. We can assume that the price of PRED_E1_Y will be higher, as traders would expect other traders to prefer a chain with truthful resolution in the future, and more users will use PRED_E1_Y than PRED_E1_N, thus creating more demand for PRED_E1_Y.

In fact, if nobody is seriously interested in PRED_E1_N, it might have zero value and no miners, and then the PRED_E1_Y side of the fork can simply be renamed to PRED while PRED_E1_N is forgotten.

On the other hand, if there's a group of people believing that Obama is secretly a Kenyan Muslim and thus cannot win a US presidential election, they can keep maintaining PRED_E1_N and keep using it among themselves. We can imagine that this blockchain might become popular among conspiracy theorists who prefer alternative facts and might be interested in betting on events such as "Will Obama send patriots to FEMA death camps?".

Note that this fork can happen regardless of what is chosen as the official resolution mechanism in the original prediction market. Even if oracles/miners/token holders vote for the correct outcome E1_Y, a fringe group can always fork the blockchain and force a different resolution. As long as a prediction market uses its own independently traded token as currency, this can happen.

Another thing to note is that a token split does not actually require a separate blockchain. For example, a prediction market can be implemented on top of a multi-token platform such as Ethereum; then the token split can happen within the same platform without creating separate blockchains.

Thus a minimalist decentralized prediction market can be implemented in the following way:
1. PRED tokens are acquired by participants, e.g. exchanged for ETH.
2. When a bettable event is defined (e.g. through a vote by PRED holders), all PRED is split into PRED_E1_Y and PRED_E1_N tokens, and the original PRED token disappears.
3. Participants who wish to make a bet trade their tokens on a decentralized exchange (e.g. those who wish to bet YES will sell PRED_E1_N either for ETH or for more PRED_E1_Y).
4. When the outcome is known (e.g. E1_Y is the correct outcome), rational participants will dump their PRED_E1_N tokens, if they have any, onto those who are irrational/mistaken/trolling/do not wish to accept reality, etc.
5. PRED_E1_Y can then be used to make further bets, e.g. it will be split into PRED_E1_Y_E2_Y and PRED_E1_Y_E2_N, and so on.
A prediction-market-aware wallet can display a PRED_E1_Y balance as a PRED balance iff its user accepts E1_Y as a valid outcome.
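The lifecycle above can be sketched as a toy ledger. This is a minimal illustration only: the `SplitMarket` class, the underscore-based token naming scheme and the `wallet_view` helper are my own inventions for this sketch, not part of any real implementation.

```python
class SplitMarket:
    """Toy ledger for the split-only market: balances are kept per
    token name, and 'resolution' is nothing more than which tokens a
    wallet chooses to display as plain PRED."""

    def __init__(self):
        self.balances = {}  # (holder, token) -> amount

    def mint(self, holder, token, amount):
        key = (holder, token)
        self.balances[key] = self.balances.get(key, 0) + amount

    def split(self, holder, token, amount, event):
        """Split `amount` of `token` into Y/N outcome tokens for `event`."""
        assert self.balances.get((holder, token), 0) >= amount
        self.balances[(holder, token)] -= amount
        self.mint(holder, f"{token}_{event}_Y", amount)
        self.mint(holder, f"{token}_{event}_N", amount)

    def transfer(self, src, dst, token, amount):
        assert self.balances.get((src, token), 0) >= amount
        self.balances[(src, token)] -= amount
        self.mint(dst, token, amount)


def wallet_view(market, holder, accepted_outcomes):
    """Count a committed token as plain PRED iff the user accepts every
    (event, outcome) pair encoded in its name."""
    total = 0
    for (h, token), amount in market.balances.items():
        if h != holder:
            continue
        parts = token.split("_")[1:]           # e.g. ["E1", "Y"]
        events = list(zip(parts[::2], parts[1::2]))
        if all(accepted_outcomes.get(e) == o for e, o in events):
            total += amount
    return total


m = SplitMarket()
m.mint("alice", "PRED", 100)
m.split("alice", "PRED", 100, "E1")
m.transfer("alice", "bob", "PRED_E1_N", 100)   # Alice bets YES, sells the N side
wallet_view(m, "alice", {"E1": "Y"})           # -> 100: Alice accepts E1=Y
wallet_view(m, "bob", {"E1": "N"})             # -> 100: Bob lives in the N fork
```

Note that nothing in the ledger ever "resolves" E1; only the `accepted_outcomes` argument, which is purely client-side, differs between Alice's and Bob's wallets.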

Thus we assume that after the outcome is known, the PRED_E1_Y value will be the same as the PRED value before, as PRED_E1_Y is functionally identical to PRED.

The same is true for PRED_E1_N: it can also be used to make future bets, but if E1_Y is widely known to be the correct outcome, only crazy, irrational people would use PRED_E1_N as if it were PRED.

Thus the ability to split tokens and trade them is sufficient to establish a decentralized prediction market, under the following conditions:
1. Betting can only be done using a token specific to the market; an external cryptocurrency such as BTC, ETH, USDT and so on cannot be used. Winners will be fully exposed to PRED volatility.
2. Only one event can be available for bets at a time.

The latter problem can be mitigated by splitting on multiple events at the same time. E.g. if two events are available for betting at the same time, the PRED token will be split into 4 different tokens:

1. E1Y_E2Y
2. E1Y_E2N
3. E1N_E2Y
4. E1N_E2N

A person who believes that the outcome of the first event will be Y while the outcome of the second event will be N can simply sell all tokens except E1Y_E2N.
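Enumerating the combination tokens and deriving such a position is mechanical. A short sketch; the function names and event labels are illustrative:

```python
from itertools import product

def split_tokens(events):
    """All combination tokens for simultaneous binary events, e.g.
    ['E1', 'E2'] -> ['E1Y_E2Y', 'E1Y_E2N', 'E1N_E2Y', 'E1N_E2N']."""
    return ["_".join(e + o for e, o in zip(events, outcomes))
            for outcomes in product("YN", repeat=len(events))]

def position(events, beliefs):
    """Which token to keep and which to sell, for a trader whose
    `beliefs` map each event to its expected outcome."""
    keep = "_".join(e + beliefs[e] for e in events)
    sell = [t for t in split_tokens(events) if t != keep]
    return keep, sell

keep, sell = position(["E1", "E2"], {"E1": "Y", "E2": "N"})
# keep == "E1Y_E2N"; sell holds the other three combination tokens
```

The exponential growth of `split_tokens` output (2^n tokens for n simultaneous events) is exactly the scalability pressure discussed below.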

Thus, in practice, the number of events which can be bet on at the same time will be limited on one side by blockchain implementation scalability, and on the other side by the user's ability to understand what is going on and react (which can be addressed with a client (wallet) UI that translates his position across multiple tokens into something more comprehensible).

Now let's note again that absolutely nothing happens on the "blockchain level" at the time an event is resolved; there's simply no concept of resolution in the system. The resolution might affect how users think about the tokens, but it does not affect the tokens themselves. Thus the same implementation can be used for belief markets. If event E1 is ambiguous, both E1Y_* tokens and E1N_* tokens might have non-zero value. The only difference between decentralized prediction markets of this kind and belief markets is that belief markets are designed to work with a large number of ambiguous questions.

Recombination

The Decentralized Prediction Market without Arbiters paper mentioned above describes a prediction market system which differs from the one given above in the following ways:
1. It requires participants to split their tokens explicitly, so un-encumbered PRED tokens remain.
2. It then allows one to combine a pair of 1 E1_Y and 1 E1_N tokens back into 1 PRED.
The underlying assumption of the paper is that once E1_Y is known to be the true outcome, the price of E1_N will collapse as traders try to get rid of it, which allows E1_Y owners to buy E1_N for a low price and combine it back into PRED. The paper considers this process from a game-theoretic point of view.

I believe that the ability to recombine outcomes might actually be harmful for a decentralized prediction market, as it signals that PRED_E1_Y tokens are inferior to PRED, which prevents market forces from resolving outcomes. Without recombination, traders have to pick sides. If recombination is possible, they will avoid taking sides as much as possible, thus preferring PRED to PRED_E1_Y and thus devaluing PRED_E1_Y. (But there might also be a positive side, e.g. PRED might be backed by ETH, thus having a more stable value.)

However, for a belief market the ability to combine different opinions on a question back into a pristine token can be beneficial, as it makes the market more liquid and dynamic. Belief market participants should understand that committing to a particular answer is the whole point of the market, so they won't consider committed tokens value-less.

I will describe the structure and a possible implementation of belief markets in later sections.

What can you do with belief markets?

First, belief markets can be used as a more flexible kind of prediction market which allows people to bet on complex, ambiguous, open-ended questions.

For example, consider a proponent of fusion power generation. He might want to bet that at some point a significant amount of energy will be generated through fusion, e.g. "Fusion power will generate more than 10% of energy consumption in the EU for at least a year some time in the future". He might not know when exactly that would happen: it might happen in 2050 if we have a breakthrough, but might be delayed until 2070. Note that if he were forced to specify a concrete deadline such as 2070, the bet would resolve to false if fusion energy contributes only 9% in 2070 but is ramped up to 11% in 2071, which would upset our user, as his point is that fusion power is useful, not that it will generate a specific amount of energy by 2070.

Even if we set a concrete date for resolution, it's still problematic to find an arbiter who will be able to resolve the question in 50 years. And with a traditional market, money will have to be locked in a bet for 50 years (as long as some hope remains), which is dull and inefficient.

On the other hand, belief markets allow questions to be specified in a flexible form such as "Fusion power will generate more than 10% of energy consumption in the EU for at least a year some time in the future" (or perhaps even "fusion power will be widely used"). And as belief markets allow one to have a position on a combination of answers to different questions, our participant will be able to engage in other bets with similarly-minded people. For example, suppose he committed 100 tokens to the "yes" outcome of the above question (which we can call E1). He can then use these committed tokens to make bets on more specific events, such as the launch dates of ITER and DEMO, the success of non-tokamak designs and so on. The fact that he will only be able to engage in these bets with other holders of E1_Y tokens is not a serious impediment, as fusion-power-related predictions are mostly relevant to people who are optimistic about fusion energy.

It's worth noting that people who do not wish to bet on the success of fusion energy can still participate in this market. E.g. Alice can split her pure tokens into 100 E1_Y and 100 E1_N, use the E1_Y to bet on a question "Will ITER be launched before 2030?" (E2), and then, once the outcome is known, recombine E1_Y_E2_N with E1_Y_E2_Y to get E1_Y back, and then recombine that with E1_N to get pure unencumbered tokens back. If her prediction is true and markets are somewhat rational, she should end up with a larger number of pure tokens in the end.
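Alice's round trip can be worked through numerically. This is a sketch under stated assumptions: the prices (0.6 E1_Y per E2_Y token sold, 0.05 to buy one back after resolution) are invented for illustration, and any surplus E1_Y left after recombination is assumed to sell at par.

```python
def hedged_round_trip(stake, sell_price, buyback_price):
    """Alice splits `stake` pure tokens into E1_Y/E1_N, bets her E1_Y
    on E2 = N, and unwinds after E2 resolves to N. Prices are in E1_Y
    per E2_Y token and are illustrative assumptions.
    Returns her final pure-token balance."""
    e1_n = stake
    e2_y = e2_n = stake                   # split all E1_Y on event E2
    proceeds = sell_price * e2_y          # sell the E2_Y side
    cost = buyback_price * e2_n           # E2 resolved N: buy E2_Y back cheap
    e1_y = e2_n + proceeds - cost         # recombine (E2_Y, E2_N) pairs into E1_Y
    # Recombine E1_Y with E1_N pair-by-pair into pure tokens; surplus E1_Y
    # is assumed to sell at par (a simplification), so the total is e1_y.
    return e1_y

hedged_round_trip(100, 0.6, 0.05)  # -> 155.0 pure tokens from an initial 100
```

The profit comes entirely from the E2 prediction being right; Alice holds equal amounts of E1_Y and E1_N throughout, so she never takes a net position on the fusion question itself.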

Another possible use of belief markets is to make bets on (or commit to) something the user has a strong opinion on, for example:

1. Past events which are ambiguous or controversial, e.g. "Was 9/11 an inside job?", "Who killed JFK?"
2. Conspiracy theories, e.g. "5G can be used for mind control".
3. Methodology, e.g. "The scientific method is the best way to gain knowledge", "The Bayesian approach is the best".
4. Religious beliefs, e.g. "The Christian God created the Universe ~6000 years ago".

To illustrate how this works: if 25% of traders believe that 9/11 was an inside job, the price of the "No" outcome on this question might be around 0.75 while the price of "Yes" might be 0.25. Somebody who doesn't believe it was an inside job might buy about 133 "No" tokens for 100 pure belief market tokens, and then engage in other trades/bets with people who also do not believe it was an inside job.
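The arithmetic behind the 133 tokens, assuming (as an idealization) that the share of believers translates directly into the "Yes" price and that the two outcome prices sum to 1:

```python
def no_tokens_for_budget(pure_budget, yes_share):
    """Number of "No" tokens a pure-token budget buys when the "Yes"
    price equals the share of believers and Yes + No prices sum to 1
    (both are simplifying assumptions)."""
    no_price = 1.0 - yes_share
    return pure_budget / no_price

no_tokens_for_budget(100, 0.25)  # -> ~133.3 "No" tokens
```

In a real market the price would of course reflect capital committed rather than a headcount of believers, so the 25%-of-traders reading is only a rough proxy.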

What value would tokens have?

How do tokens get their value? Are belief markets actual markets, or are they just a virtue-signaling toy for geeks?

This is actually a complex question, and I believe the answer is both. Let's consider it in more detail.

First, suppose a particular instance of a belief market starts from a token PURE. PURE has value for one of two reasons (or both):
1. It might be backed by another token, e.g. one can get 1 PURE by depositing 1 ETH. This can be handled by a smart contract which would also return 1 ETH for 1 PURE. Alternatively, a bonding curve can be used, which would allow the PURE price to fluctuate with demand.
2. It might be non-backed; in that case its value will depend on market expectations of demand for PURE (which will eventually depend on belief market use).
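As a concrete illustration of the bonding-curve variant of option 1, here is a sketch of a linear curve where the PURE price grows proportionally to supply. The class, the slope value and the reserve formula are my own simplification for illustration, not a description of any deployed contract:

```python
import math

class LinearBondingCurve:
    """Toy bonding-curve issuer for PURE: price(s) = slope * s, so the
    ETH reserve needed at supply s is the integral slope * s^2 / 2.
    Keeping exactly that reserve lets the curve always buy tokens back."""

    def __init__(self, slope=0.001):
        self.slope = slope
        self.supply = 0.0

    def _reserve(self, supply):
        return self.slope * supply ** 2 / 2

    def buy(self, eth_in):
        """Deposit ETH, mint PURE; returns the number of tokens minted."""
        new_supply = math.sqrt(
            2 * (self._reserve(self.supply) + eth_in) / self.slope)
        minted = new_supply - self.supply
        self.supply = new_supply
        return minted

    def sell(self, tokens):
        """Burn PURE; returns the ETH paid out from the reserve."""
        new_supply = self.supply - tokens
        eth_out = self._reserve(self.supply) - self._reserve(new_supply)
        self.supply = new_supply
        return eth_out

curve = LinearBondingCurve(slope=0.001)
minted = curve.buy(5.0)     # -> 100.0 PURE for the first 5 ETH
curve.sell(minted)          # -> 5.0 ETH back; a full round trip is lossless
```

A fixed 1:1 ETH deposit contract is the degenerate case of this idea; the curve generalizes it by letting the marginal price rise with demand.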

Now, why would tokens committed to a particular belief have any value?

In the decentralized prediction market example, the ability to repeatedly engage in the prediction market gave tokens their market value: a user believes that other users might want to engage in this prediction market in the future, thus they'd be interested in buying the token from him, which means that the token has value.

To some extent this can also give committed belief market tokens value, but since a belief market inherently fragments into multiple incompatible prediction markets, it's less clear. So let's consider other factors.

The ability to combine committed tokens back into PURE tokens provides a lower bound on the combined price of a pair of committed tokens. Indeed, if the sum of the prices of E1_Y and E1_N is less than 1 PURE, somebody can buy both and convert them into 1 PURE. Thus a no-arbitrage condition gives us Price(E1_Y) + Price(E1_N) ≥ Price(PURE).
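The no-arbitrage bound can be checked mechanically. A toy helper, with prices denominated in PURE and the function name my own:

```python
def recombination_profit(price_y, price_n, price_pure=1.0):
    """Riskless profit per pair from buying one E1_Y and one E1_N and
    recombining them into 1 PURE; positive only while
    price(E1_Y) + price(E1_N) < price(PURE)."""
    return max(price_pure - (price_y + price_n), 0.0)

recombination_profit(0.6, 0.3)   # -> ~0.1 per pair; arbitrageurs bid prices up
recombination_profit(0.8, 0.3)   # -> 0.0; the bound already holds
```

As long as anyone is watching the market, such a profit opportunity gets traded away, which is what makes the inequality a floor rather than just a tendency.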

This means that usually the price of E1_N will go up as the E1_Y price goes down. This gives participants an opportunity to profit from changes in the prices of outcomes. For example, a rational thinker might expect the price of E1_Y to go down over time because a particular conspiracy theory goes out of fashion/becomes less popular, or because irrational people are more likely to lose their money and have to liquidate their E1_Y position at a lower price. On the other hand, a conspiracy theorist might expect the price of E1_Y to go up if new evidence is uncovered, or the sheeple wake up. In both cases holding the token is rational because the expected value of a committed token is higher than the current market value. Demurrage can further increase the probability that the token price will change over time, as one of the sides will have a lower desire or ability to maintain their position.

Recombination is also possible for tokens which commit to a combination of beliefs: it's still possible to buy back all combinations and get PURE back.

An argument can be made that some commitment tokens can be valuable beyond their lower-bound value (essentially, their scrap rate), as they can be used:
1. to participate in prediction markets relevant to some subgroup of users
2. to signal commitment to a particular set of beliefs (as in "show me that you're a real proponent of fusion power who puts his money where his mouth is")
3. to unlock access to a particular event/meetup/conference (e.g. "Only those who hold at least 1000 Bayesian-proponent tokens are allowed to come to this meetup") or get a discount on goods, etc.; essentially a practical extension of point #2
4. as payment for goods and services relevant to a particular group (e.g. pay with Bayesian-proponent tokens for a ticket to a Bayesian conference)

Thus we can assume that a committed token's value lies between its scrap value and the value of PURE (since PURE can always be converted into committed tokens).

User groups/Clusters

A mature belief market system might have thousands of questions/events one can commit to. If there are 1000 questions, each with two answers, 2^1000 combinations are possible, each of which is associated with a particular kind of token. Thus we can conclude that users will need some structure to navigate the system. Otherwise they'd drown in information, and markets would become too chaotic/illiquid.

One way to provide structure is to create explicitly defined user groups/clusters. These groups will define a set of commitments members agree with. This allows users to make their tokens compatible, thus enabling commerce or prediction markets within a cluster.

At the same time, clusters do not force anyone into accepting beliefs he disagrees with. If there's a significant disagreement, a user can exit one cluster and join another, or create his own cluster and invite others to join. 

Implementation

While belief markets can certainly be implemented in a centralized way, using e.g. USD as the 'pure' token, a blockchain-based implementation is more relevant, as belief market users will likely be interested in its decentralized nature. A centralized implementation also creates a possible longevity issue: does it make sense to commit to a bet which might resolve in 50 years if a startup is unlikely to survive more than 10 years? A blockchain can survive as long as its users want to use it.

Is it possible to implement this on a blockchain? Yes, in principle. But in practice it requires handling large numbers of different kinds of tokens (possibly millions) and markets (order books), while at the same time offering low fees and fast execution (as otherwise it would be too inconvenient for people to use).

But I believe it can be done on Chromia, as it is essentially designed with this kind of application in mind.

However, there is still a number of open questions; in particular, the way user groups/clusters are implemented might greatly affect UX.

