
Posts

Belief markets: short conspiracy theories using prediction market technology

Overview: This article introduces a new concept called the 'belief market'. The closest relatives of belief markets are prediction markets, thus I will introduce the concept by describing how prediction and belief markets differ. Prediction markets allow participants to make bets on outcomes of events. More generally, bets can be made on answers to questions which can be unambiguously answered in the future. A participant can update his position until the market is closed and the question is resolved, which makes it different from other kinds of betting. Belief markets allow participants to bet on answers to questions which cannot be reliably resolved. In other words, belief markets allow participants to bet on something they believe should be true, even if an answer cannot be determined in a way that 100% of participants would agree with. Instead, a virtual resolution is based on a consensus among a subset of participants. To make this possible, belief markets rely on cryptocurrency…
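The excerpt cuts off before the mechanism, but the resolution rule it does describe -- a virtual resolution by consensus among a subset of participants -- can be sketched in a few lines. This is a minimal illustration of my own; the names and the stake-weighted majority rule are assumptions, not taken from the article:

    import random
    from collections import defaultdict

    # hypothetical sketch: resolve a belief-market question by polling a
    # random subset of staked participants instead of an objective outcome
    def virtual_resolution(stakes, votes, sample_size=5):
        """stakes: participant -> staked amount; votes: participant -> answer."""
        jury = random.sample(sorted(stakes), k=min(sample_size, len(stakes)))
        weight = defaultdict(float)
        for participant in jury:
            weight[votes[participant]] += stakes[participant]  # stake-weighted vote
        return max(weight, key=weight.get)  # the consensus answer

    stakes = {"alice": 10.0, "bob": 3.0, "carol": 7.0}
    votes = {"alice": "yes", "bob": "no", "carol": "yes"}
    print(virtual_resolution(stakes, votes))

Sampling the jury from staked participants is just one plausible way to make "a subset of participants" concrete; the article may define the subset and the incentives differently.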

Cosmos SDK vs Rell

When I first looked at Tendermint ~4 years ago (I seriously considered using Tendermint to implement consensus in Postchain), I was not particularly impressed by its programming model -- the example code in Go looked overly verbose. I looked through the Cosmos SDK docs today. The docs look really nice. And it's nice that an application can be built in a modular fashion, using pre-made Cosmos SDK modules. However, the verbosity is still there... Let's look at the code needed to implement a single SetName operation in Cosmos. This operation simply checks that the transaction's signer is the owner of the name, and then changes the value associated with the name. This should be simple, right? It's probably the simplest thing you can do with a blockchain. Let's first look at how one would implement it in Rell, e.g.:

    operation set_name_value (name, value: text) {
        val entry = name_entry @ { name };
        require(is_signer(entry.owner));
        entry.value = value;
    }

Even if you don't know Rell…
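For readers who know neither Rell nor the Cosmos SDK, the essential logic of the operation fits in a few lines of plain Python; 'state' and 'signers' here are hypothetical stand-ins for the blockchain context, not anything from either framework:

    # hypothetical sketch of the set-name logic, stripped of framework code:
    # look up the entry, check the signer, update the value
    def set_name_value(state, signers, name, value):
        entry = state[name]                    # find the name entry
        if entry["owner"] not in signers:      # signer must be the owner
            raise PermissionError("not the owner of this name")
        entry["value"] = value                 # change the associated value

The point of the comparison is how much ceremony each framework adds around these few lines: in Rell it is close to zero, while in the Cosmos SDK the equivalent is spread across message types, handlers, and keeper code.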

Fujitsu UH572 review

This ultrabook is barely relevant now; but, on the other hand, I've been using it for about 6 months, which allows me to cover things not covered by other reviews. Perhaps it would be of interest to people who consider buying used laptops. Overview: The Fujitsu UH572 is a cheap and light ultrabook. For me, the weight is a very important factor, as I often move around with an open laptop in my hands (sometimes even typing while I hold it in one hand). Anything heavier than 1.6 kg is unacceptable. On the other hand, I need a fast CPU and lots of RAM for the work I do, so netbook-like devices are not an option. And I can't afford high-end ultrabooks, so I'm glad that devices like the UH572 are available. Here are the specs of the one I got:

    Intel Core i5-3317U (1.7-2.6 GHz, 3 MB cache, 2 cores/4 threads, Ivy Bridge)
    4 GB RAM
    500 GB HDD
    32 GB iSSD (SanDisk i100)
    Intel Centrino Wireless-N 2230 b/g/n

(see the rest of the specs in the datasheet) The alternatives were: De…

Impure world resurrection: optimize for responsiveness

In the previous article I discussed the design of a computation system with a non-deterministic evaluation order in the context of parallelization. But there is also another, much more interesting thing it can do: as the evaluation order is non-deterministic, it is possible to prioritize code execution on a very fine-grained level -- not at the level of threads, but at the level of individual computations and I/O operations. Moreover, due to runtime traversal it is possible to prioritize things on the fly, according to run-time needs, e.g. according to the user's input. Do you see what I mean? If it is possible to identify the most desired computational goals, then it's possible to prioritize them at runtime, so they are reached in the lowest time possible. From the user's perspective this means responsiveness: he doesn't have to wait, the computer reacts to his actions immediately. Well, in the ideal case. While computers have become orders of magnitude faster in the last couple of decades, in many cases responsiveness…
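The excerpt ends mid-argument, but the scheduling idea itself -- prioritizing individual computations rather than threads, and re-prioritizing them on the fly -- is easy to sketch. This is my own minimal illustration, not code from the article:

    import heapq
    import itertools

    # hypothetical sketch: run individual computations (not threads) in
    # priority order, and allow re-prioritization at any moment
    class Scheduler:
        def __init__(self):
            self._heap = []                  # (priority, tiebreak, name, fn)
            self._tie = itertools.count()

        def submit(self, priority, name, fn):
            heapq.heappush(self._heap, (priority, next(self._tie), name, fn))

        def boost(self, predicate, new_priority=0):
            # re-prioritize on the fly, e.g. when the user scrolls or clicks
            self._heap = [(new_priority if predicate(name) else prio, tie, name, fn)
                          for prio, tie, name, fn in self._heap]
            heapq.heapify(self._heap)

        def run(self):
            while self._heap:
                _, _, _, fn = heapq.heappop(self._heap)
                fn()

    sched = Scheduler()
    sched.submit(5, "offscreen", lambda: print("render off-screen page"))
    sched.submit(3, "visible", lambda: print("render visible page"))
    sched.boost(lambda name: name == "visible")  # user input changes the goal
    sched.run()  # renders the visible page first

The interesting part of the article's design is that the runtime can discover and reorder these goals itself through traversal; here the priorities are adjusted by hand just to show the effect.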

Impure world resurrection: Implicit parallelism through non-deterministic evaluation strategy with implicit side-effect control

When I found out that there exists a pure functional programming language which is actually practical (Haskell), I was excited: I expected that pureness would allow optimizations such as automatic memoization and parallelism. However, when I started learning Haskell I was surprised that even though memoization and parallelization are possible, they have to be explicit. Later I understood the reason for this: Haskell has a deterministic evaluation order; it is lazy unless you explicitly demand eager computation. With deterministic evaluation, performance is also deterministic: a programmer can expect certain performance characteristics from certain algorithms. So you wouldn't be surprised by an out-of-memory condition caused by memoization and speculative execution. That's nice. Haskell's pureness allows one to make sure that parallelization is safe, but the compiler won't make decisions for you. (Although it can do an optimization if it can prove that it won't make performance…
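The trade-off the excerpt describes can be shown in any language, not just Haskell; this small Python sketch of mine illustrates why memoizing a pure function is semantically safe but still shouldn't be applied silently:

    from functools import lru_cache

    # memoizing a pure function never changes its result...
    @lru_cache(maxsize=None)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(200))  # instant with the cache; astronomically slow without it

    # ...but it is not free: an unbounded cache trades memory for time, which
    # is exactly why a compiler shouldn't apply it behind the programmer's
    # back -- a slow program would silently become a memory-hungry one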

out-of-memory: a sad case

the problem.. while a Turing machine's tape is infinite, all real-world programs run within some resource constraints -- there is no such thing as infinite memory. for some programs that ain't a problem -- the amount of memory needed by the algorithm can be known beforehand, at programming time. but for most real-world applications memory requirements are not known until run time, and sometimes it is very hard to predict how much is needed. an obvious example is an application that allocates memory according to user input -- for example, an image editor asks the user for the dimensions of an image he'd like to create. it needs to allocate an array in memory for the image (to have fast access), so when the dimensions exceed possible bounds, a well-behaved application should notify the user -- and the user can either reduce the image size or, perhaps, upgrade his machine, if he really wants to work with large images. while it is fairly easy to implement such functionality on a single-task O…
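the image-editor example can be sketched like this (a minimal illustration of mine, not code from the post; whether the allocation fails cleanly depends on the OS's overcommit behavior):

    # hypothetical sketch: allocate according to user input, and let the
    # user recover when the allocation is impossible
    def create_image(width, height, bytes_per_pixel=4):
        try:
            return bytearray(width * height * bytes_per_pixel)
        except MemoryError:
            return None

    image = create_image(200_000, 200_000)  # ~160 GB -- almost surely too big
    if image is None:
        print("not enough memory for an image this large -- "
              "please reduce the dimensions and try again")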

Lisp syntax is great!

lots of people complain about Lisp syntax -- they find it too weird and verbose, they call LISP "Lots of Irritating Silly Parentheses"; and sometimes they even pop up with proposals to "fix Lisp" on comp.lang.lisp -- "Lisp is sort of cool, but this syntax... let me show you my great ideas." on the other hand, most lispers (and I among them) actually love s-expression syntax. who is right here? are syntax preferences a subjective thing, or can one decide which is better in a (more-or-less) objective way? or, perhaps, that's just a matter of taste and custom? i've got a good example today.. i'm using Parenscript -- a cool Common Lisp library that automatically generates JavaScript from Lisp-like syntax -- and i wrote a function that caches document.getElementById results (that makes sense for dumb browsers like IE):

    (defun my-element-by-id (cache id)
      (return (or (slot-value cache id)
                  (setf (slot-value cache id) …