
Impure world resurrection: optimize for responsiveness

In the previous article I discussed the design of a computation system with non-deterministic evaluation order in the context of parallelization. But there is another, much more interesting thing it can do: since the evaluation order is non-deterministic, it is possible to prioritize code execution at a very fine-grained level -- not at the level of threads, but at the level of individual computations and I/O operations. Moreover, thanks to runtime traversal it is possible to prioritize things on the fly, according to run-time needs -- for example, according to the user's input. Do you see what I mean? If it is possible to identify the most desired computational goals, then it is possible to prioritize them at runtime, so they are reached in the lowest time possible. From the user's perspective this means responsiveness: he doesn't have to wait, the computer reacts to his actions immediately. Well, in the ideal case.

While computers have become orders of magnitude faster in the last couple of decades, in many cases responsiveness hasn't gotten any better; it has even gotten worse. For example, when you load a game from tape on a ZX Spectrum, you have to wait for a while, but once it is loaded, it usually works without noticeable delays. This was true for 8-bit computers and game consoles, and to a large extent for MS-DOS applications and games. The reason is simple: applications ran in exclusive mode, so they could do whatever they needed, i.e. respond to the user (usually). Also, they were generally simpler and were optimized. But on a modern OS with virtual memory and multiple processes running, it's much more complex: an application no longer has control over performance. Some innocent memory access might trigger a page fault, and reading a page from disk when there are already a lot of requests from other processes takes some time. Often you have to wait because some background process like an antivirus or an auto-updater decides that it's necessary to do something.
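To make the idea concrete, here is a toy sketch in Python of run-time reprioritization. Everything here is invented for illustration -- the hypothetical system described above doesn't exist as a library -- but it shows the shape of the idea: tasks sit in a priority queue, and the priority of any task can be bumped at run time, e.g. in response to user input:

```python
import heapq

class Scheduler:
    """Toy cooperative scheduler: tasks are plain thunks, and a task's
    priority can be changed at run time (lower number = runs sooner)."""

    def __init__(self):
        self._counter = 0    # tie-breaker to keep heap entries orderable
        self._heap = []      # entries: (priority, seq, name)
        self._tasks = {}     # name -> thunk, still pending

    def submit(self, name, thunk, priority):
        self._tasks[name] = thunk
        heapq.heappush(self._heap, (priority, self._counter, name))
        self._counter += 1

    def reprioritize(self, name, priority):
        # Lazy reprioritization: push a fresh entry for the task;
        # the stale entry is skipped when it is eventually popped.
        if name in self._tasks:
            heapq.heappush(self._heap, (priority, self._counter, name))
            self._counter += 1

    def run(self):
        while self._heap:
            _, _, name = heapq.heappop(self._heap)
            thunk = self._tasks.pop(name, None)
            if thunk is not None:    # None means a stale duplicate entry
                thunk()
```

A background task submitted with a low priority can thus jump the queue the moment the user's actions reveal that its result is needed now.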
Sure, it's not possible to get that control back just by changing the programming language your application is written in, but at least it's possible to make things somewhat better: the application's flow won't be blocked by things which are non-essential from the user interface's perspective. Programmers often use background worker threads for expensive computations and blocking I/O, but it's not feasible to use them for every little thing. For example, let's say your application reads its configuration and loads plugin modules during initialization, at start. That's probably an appropriate place to load them, but unfortunately it might block showing the main window, so from the user's perspective the program takes time to load. Doing it on demand, later, or in a separate thread might make the code more complicated. But if the compiler can figure out that loading plugins has no effect on showing the main window, and showing the main window is the main priority, it will be able to reorder the load sequence to minimize the perceived delay. Even if the plugins are used somewhere in the user interface -- for example, there is a list of them in a menu -- we can still postpone loading them up to the moment that menu item is displayed. (Actually, we can load them earlier, but if the system is under heavy load we might postpone it as long as possible.)

Once I talked to a person who really hated the way modern computers work. He had a reason for this: when he was writing music software for computers with 8-bit CPUs (or maybe 16-bit ones, I don't remember the details), he had to heavily optimize his application, as the CPU was too slow to do the work in real time. So when the user's cursor came close to a certain effect button, it would start to compute the result of that effect speculatively, so that if the user pressed the button, he would hear the result instantly, without any latency.
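The plugin example might look roughly like this -- a minimal call-by-need sketch in Python, where `load_plugins`, `show_main_window`, and `open_plugins_menu` are all hypothetical stand-ins for real application code. The point is that nothing about showing the main window demands the plugin list, so the expensive load is simply deferred until something does:

```python
class Lazy:
    """Defer a computation until its value is actually demanded,
    then cache the result (call-by-need)."""
    _UNSET = object()

    def __init__(self, compute):
        self._compute = compute
        self._value = Lazy._UNSET

    def force(self):
        if self._value is Lazy._UNSET:
            self._value = self._compute()
            self._compute = None   # allow captured state to be collected
        return self._value

def load_plugins():
    # Stand-in for the expensive step: disk scan, module imports, ...
    return ["spell-checker", "dark-theme"]

plugins = Lazy(load_plugins)       # nothing is loaded yet

def show_main_window():
    return "main window shown"     # not blocked by plugin loading

def open_plugins_menu():
    return plugins.force()         # plugins load here, on first demand
```

What the article proposes is stronger than this, of course: instead of the programmer wrapping things in thunks by hand, the compiler would discover that the two computations are independent and reorder them itself.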
Now we have computers which can do a lot of things in real time, but latencies have only gotten worse: sometimes it works instantly, other times computing an effect triggers a page fault and thus some lengthy I/O. (I don't use music software myself, but that fellow said he makes workstations for musicians, so I tend to believe him.) Using a non-deterministic evaluation order with explicit priority control would let you implement such latency-hiding tricks with ease: you can compute things which might be triggered by the user's input lazily, on demand, but dynamically prioritize some of them according to input events.

To summarize, a non-deterministic evaluation strategy can be used to utilize a computer's resources in an optimal way: in some cases it might be parallelization for maximal throughput, in other cases prioritization for maximum responsiveness. These things are related: you might want to compute something speculatively in both cases, and you might want to compute something in parallel in both cases. Runtime code traversal helps with changing goals, with uncertainty about the input and the hardware the application runs on, and with the difficulties of static analysis. Hopefully a relatively simple compiler/run-time library can do interesting, efficient things, but that is a topic for another post.
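The latency-hiding trick from the music-software anecdote can be sketched in a few lines. This is Python, with `apply_reverb` and the event handlers as made-up stand-ins for real DSP code; since the effect is a pure function of its input, speculating just means warming a memoization cache before the click arrives:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def apply_reverb(samples):
    # Stand-in for an expensive, pure audio effect.
    return tuple(s // 2 for s in samples)

def on_cursor_moved(cursor, button_pos, samples, threshold=20):
    # Speculate: if the cursor is near the effect button, compute the
    # result ahead of time, so a click later returns it instantly.
    if abs(cursor - button_pos) < threshold:
        apply_reverb(samples)          # warms the cache; result discarded

def on_button_clicked(samples):
    return apply_reverb(samples)       # cache hit if we speculated
```

In the system the article describes, the speculative call wouldn't even need to be written explicitly: the runtime could pick "result of the nearby button" as a high-priority goal and evaluate it ahead of the click.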


