Posts

A note to my former self: You're not supposed to take care of everything

In 2012-2013, I led the development of an open-source project called "Colored Coins", which defined a protocol for user-issued fungible tokens on the Bitcoin blockchain. In fact, this was the first protocol of its kind; before colored coins, the only fungible tokens on a blockchain were the native tokens (e.g., BTC on the Bitcoin blockchain). How did I become the lead dev? It was simple: I thought it was a cool project and relatively easy to implement. In August 2012, I stumbled upon an article about colored coins while browsing a Bitcoin forum. At that point, it was merely a theoretical concept. Intrigued by the idea, I believed that it could be implemented in a few weeks and might be a nice addition to my CV. Back then, the world of "crypto" was less about money and more about exploring the possibilities of decentralized, peer-to-peer networks. The project started with just a few people discussing it on a mailing list. The first implementation I created…

Belief markets: short conspiracy theories using prediction market technology

Overview: This article introduces a new concept called a 'belief market'. The closest relatives of belief markets are prediction markets, so I will introduce the concept by describing how prediction and belief markets differ. Prediction markets allow participants to make bets on outcomes of events. More generally, bets can be made on answers to questions which can be unambiguously answered in the future. A participant can update his position until the market is closed and the question is resolved, which makes it different from other kinds of betting. Belief markets allow participants to bet on answers to questions which cannot be reliably resolved. In other words, belief markets allow participants to bet on something they believe should be true, even if the answer cannot be determined in a way that 100% of participants would agree with. Instead, a virtual resolution is based on consensus among a subset of participants. To make this possible, belief markets rely on cryptocurre…
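As a rough illustration of the "virtual resolution" idea, here is a hypothetical Go sketch; the names, the jury subset, and the 2/3 threshold are my assumptions, not part of the article. A question is resolved to whatever answer a supermajority of a sampled jury agrees on, and left unresolved otherwise:

    package main

    import "fmt"

    type Vote bool // true = "yes", false = "no"

    // resolveByConsensus returns the consensus answer of a jury subset,
    // or an error if no supermajority (e.g. 2/3) is reached.
    func resolveByConsensus(jury []Vote, threshold float64) (Vote, error) {
        if len(jury) == 0 {
            return false, fmt.Errorf("empty jury")
        }
        yes := 0
        for _, v := range jury {
            if v {
                yes++
            }
        }
        frac := float64(yes) / float64(len(jury))
        switch {
        case frac >= threshold:
            return true, nil
        case 1-frac >= threshold:
            return false, nil
        default:
            return false, fmt.Errorf("no consensus: %.0f%% voted yes", frac*100)
        }
    }

    func main() {
        jury := []Vote{true, true, true, false, true, true}
        answer, err := resolveByConsensus(jury, 2.0/3.0)
        fmt.Println(answer, err)
    }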

Cosmos SDK vs Rell

When I first looked at Tendermint ~4 years ago (I seriously considered using Tendermint to implement consensus in Postchain), I was not particularly impressed by its programming model -- the example code in Go looked overly verbose. I looked through the Cosmos SDK docs today. The docs look really nice. And it's nice that an application can be built in a modular fashion, using pre-made Cosmos SDK modules. However, the verbosity is still there... Let's look at the code needed to implement a single SetName operation in Cosmos. This operation simply checks that the transaction's signer is the owner of the name, and then changes the value associated with the name. This should be simple, right? It's probably the simplest thing you can do with a blockchain. Let's first look at how one would implement it in Rell, e.g.: operation set_name_value (name, value: text) { val entry = name_entry @ { name }; require(is_signer(entry.owner)); entry.value = value; } Even if you don't k…
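For contrast, here is roughly what the same operation involves on the Cosmos SDK side. This is a sketch loosely based on the old Cosmos SDK "nameservice" tutorial, so take the exact helper names and signatures as assumptions -- they vary between SDK versions, and the Keeper type is defined elsewhere in the module:

    package nameservice

    import (
        sdk "github.com/cosmos/cosmos-sdk/types"
        sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
    )

    // First, a message type with its routing and validation boilerplate.
    type MsgSetName struct {
        Name  string
        Value string
        Owner sdk.AccAddress
    }

    func (msg MsgSetName) Route() string { return "nameservice" }
    func (msg MsgSetName) Type() string  { return "set_name" }

    func (msg MsgSetName) ValidateBasic() error {
        if msg.Owner.Empty() {
            return sdkerrors.Wrap(sdkerrors.ErrInvalidAddress, msg.Owner.String())
        }
        if len(msg.Name) == 0 || len(msg.Value) == 0 {
            return sdkerrors.Wrap(sdkerrors.ErrUnknownRequest, "name and value cannot be empty")
        }
        return nil
    }

    func (msg MsgSetName) GetSigners() []sdk.AccAddress {
        return []sdk.AccAddress{msg.Owner}
    }

    // Then a separate handler does the actual owner check and write --
    // i.e. the three lines the Rell operation expresses directly.
    func handleMsgSetName(ctx sdk.Context, keeper Keeper, msg MsgSetName) (*sdk.Result, error) {
        if !msg.Owner.Equals(keeper.GetOwner(ctx, msg.Name)) {
            return nil, sdkerrors.Wrap(sdkerrors.ErrUnauthorized, "incorrect owner")
        }
        keeper.SetName(ctx, msg.Name, msg.Value)
        return &sdk.Result{}, nil
    }

And this is before counting the codec registration, CLI commands, and query handlers a complete module needs.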

Fujitsu UH572 review

This ultrabook is barely relevant now; but, on the other hand, I've been using it for about 6 months, which allows me to cover things not covered by other reviews. Perhaps it would be of interest to people who consider buying used laptops. Overview: The Fujitsu UH572 is a cheap and light ultrabook. For me, weight is a very important factor, as I often move around with an open laptop in my hands (sometimes even typing while I hold it in one hand). Anything heavier than 1.6 kg is unacceptable. On the other hand, I need a fast CPU and lots of RAM for the work I do, so netbook-like devices are not an option. And I can't afford high-end ultrabooks, so I'm glad that devices like the UH572 are available. Here are the specs of the one I got: Intel Core i5-3317U (1.7-2.6 GHz, 3 MB cache, 2 cores/4 threads, Ivy Bridge); 4 GB RAM; 500 GB HDD; 32 GB iSSD (SanDisk i100); Intel Centrino Wireless-N 2230 b/g/n (see the rest of the specs in the datasheet). The alternatives were: De…

Impure world resurrection: optimize for responsiveness

In the previous article I discussed a design of a computation system with a non-deterministic evaluation order in the context of parallelization. But there is also another, much more interesting thing it can do: as the evaluation order is non-deterministic, it is possible to prioritize code execution at a very fine-grained level -- not at the level of threads, but at the level of individual computations and I/O operations. Moreover, due to runtime traversal it is possible to prioritize things on the fly, according to run-time needs, e.g. according to user input. Do you see what I mean? If it is possible to identify the most desired computational goals, then it's possible to prioritize them at runtime, so they are reached in the lowest time possible. From the user's perspective this means responsiveness: he doesn't have to wait, the computer reacts to his actions immediately. Well, in the ideal case. While computers have become orders of magnitude faster in the last couple of decades, in many cases responsiv…
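To make the idea concrete, here is a hypothetical Go sketch (mine, not the article's actual design) of fine-grained prioritization: pending computations sit in a priority queue, and a task's priority can be raised on the fly, e.g. when user input makes its result urgent.

    package main

    import (
        "container/heap"
        "fmt"
    )

    type task struct {
        name     string
        priority int // higher runs first
        run      func()
        index    int
    }

    // taskQueue implements heap.Interface as a max-heap on priority.
    type taskQueue []*task

    func (q taskQueue) Len() int           { return len(q) }
    func (q taskQueue) Less(i, j int) bool { return q[i].priority > q[j].priority }
    func (q taskQueue) Swap(i, j int) {
        q[i], q[j] = q[j], q[i]
        q[i].index = i
        q[j].index = j
    }
    func (q *taskQueue) Push(x interface{}) {
        t := x.(*task)
        t.index = len(*q)
        *q = append(*q, t)
    }
    func (q *taskQueue) Pop() interface{} {
        old := *q
        t := old[len(old)-1]
        *q = old[:len(old)-1]
        return t
    }

    func main() {
        q := &taskQueue{}
        heap.Init(q)
        bg := &task{name: "background indexing", priority: 1, run: func() { fmt.Println("indexing...") }}
        ui := &task{name: "render visible page", priority: 1, run: func() { fmt.Println("rendering...") }}
        heap.Push(q, bg)
        heap.Push(q, ui)

        // The user scrolls: raise the visible page's priority on the fly.
        ui.priority = 10
        heap.Fix(q, ui.index)

        for q.Len() > 0 {
            heap.Pop(q).(*task).run()
        }
    }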

Impure world resurrection: Implicit parallelism through non-deterministic evaluation strategy with implicit side-effect control

When I found out that there exists a pure functional programming language which is actually practical (Haskell), I was excited: I expected that purity would allow optimizations such as automatic memoization and parallelism. However, when I started learning Haskell I was surprised that even though memoization and parallelization are possible, they have to be explicit. Later I understood the reason for this: Haskell has a deterministic evaluation order -- it is lazy unless you explicitly demand eager computation. With deterministic evaluation, performance is also deterministic: a programmer can expect certain performance characteristics from certain algorithms. So you won't be surprised by an out-of-memory condition caused by memoization and speculative execution. That's nice. Haskell's purity allows one to make sure that parallelization is safe, but the compiler won't make decisions for you. (Although it can do an optimization if it can prove that it won't make perf…
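The post is about Haskell, but the explicit, programmer-controlled flavor of these optimizations is easy to illustrate in any language. Here is a hypothetical Go sketch of explicit memoization, where the programmer, not the compiler, opts into the memory cost:

    package main

    import "fmt"

    // memoize wraps an expensive pure function with an explicit cache.
    // The cache grows without bound -- exactly the kind of memory cost
    // a compiler cannot silently impose on the programmer.
    func memoize(f func(int) int) func(int) int {
        cache := map[int]int{}
        return func(n int) int {
            if v, ok := cache[n]; ok {
                return v
            }
            v := f(n)
            cache[n] = v
            return v
        }
    }

    func main() {
        slow := func(n int) int { return n * n } // stand-in for an expensive computation
        fast := memoize(slow)
        fmt.Println(fast(12), fast(12)) // second call hits the cache
    }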

out-of-memory: a sad case

the problem... while a Turing machine's tape is infinite, all real world programs run within some resource constraints -- there is no such thing as infinite memory. for some programs that ain't a problem -- the amount of memory needed by the algorithm can be known beforehand, at programming time. but for most real world applications memory requirements are not known until run time, and sometimes it is very hard to predict how much is needed. an obvious example is an application that allocates memory according to user input -- for example, an image editor asks the user for the dimensions of an image he'd like to create. it needs to allocate an array in memory for the image (to have fast access), so when the dimensions exceed possible bounds, a well-behaved application should notify the user -- and the user can either reduce the image size or, perhaps, upgrade his machine, if he really wants to work with large images. while it is fairly easy to implement such functionality on a single-task O…
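a hypothetical go sketch of the "notify the user" behavior (the budget and the names are mine, not from the post): check the requested allocation against a limit up front and report failure instead of dying.

    package main

    import "fmt"

    const maxImageBytes = 1 << 30 // assumed budget: 1 GiB

    // allocImage allocates an RGBA buffer for a width x height image,
    // refusing requests that would blow the memory budget.
    func allocImage(width, height int) ([]byte, error) {
        if width <= 0 || height <= 0 {
            return nil, fmt.Errorf("invalid dimensions %dx%d", width, height)
        }
        need := int64(width) * int64(height) * 4 // 4 bytes per pixel
        if need > maxImageBytes {
            return nil, fmt.Errorf("%dx%d image needs %d bytes, over the %d byte budget",
                width, height, need, int64(maxImageBytes))
        }
        return make([]byte, need), nil
    }

    func main() {
        if _, err := allocImage(100000, 100000); err != nil {
            fmt.Println("cannot create image:", err) // tell the user instead of crashing
        }
    }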