In the previous article I discussed the design of a computation system with non-deterministic evaluation order in the context of parallelization. But there is another, much more interesting thing it can do: since the evaluation order is non-deterministic, it is possible to prioritize code execution at a very fine-grained level -- not at the level of threads, but at the level of individual computations and I/O operations. Moreover, thanks to runtime traversal it is possible to prioritize things on the fly, according to run-time needs, e.g. according to the user's input.
Do you see what I mean? If it is possible to identify the most desired computational goals, then it is possible to prioritize them at runtime so that they are reached in the lowest time possible. From the user's perspective this means responsiveness: he doesn't have to wait, the computer reacts to his actions immediately. Well, in the ideal case.
While computers have become orders of magnitude faster in the last couple of decades, in many cases responsiveness hasn't got any better; it has even become worse.
For example, when you load a game from tape on a ZX Spectrum, you have to wait for a while, but once it is loaded, it usually works without noticeable delays. This was true for 8-bit computers and game consoles, and to a large extent for MS-DOS applications and games. The reason is simple: applications ran in exclusive mode, so they could do whatever they needed, i.e. respond to the user (usually). Also, they were generally simpler and better optimized.
But in a modern OS with virtual memory and multiple processes running, things are much more complex: an application no longer has control over performance. Some innocent memory access might trigger a page fault, and reading a page from disk when there are already a lot of requests from other processes takes some time. Often you have to wait because some background process like an antivirus or auto-updater thinks it's necessary to do something.
Sure, it's not possible to get control back just by changing the programming language your application is written in, but at least it's possible to make it somewhat better: the application's flow won't be blocked by things which are non-essential from the user interface's perspective.
Programmers often use background worker threads for expensive computations and blocking I/O, but it's not feasible to use them for each little thing. For example, let's say your application reads its configuration and loads plugin modules during initialization, at start. It's probably an appropriate place to load them, but unfortunately it might block showing the main window, so from the user's perspective the program takes time to load. Doing it on demand, later, or in a separate thread might make the code more complicated. But if the compiler can figure out that loading plugins has no effect on showing the main window, and showing the main window is the main priority, it will be able to reorder the load sequence to minimize the perceived delay. Even if plugins are used somewhere in the user interface -- for example, there is a list of them in a menu -- we can still postpone loading them up to the moment when that menu item is displayed. (Actually, we can load them earlier, but if the system is under heavy load we might postpone it as long as possible.)
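To make this concrete, here is roughly what such a reordering looks like when done by hand in today's languages -- a minimal Python sketch, with hypothetical load_config, load_plugins, show_main_window and build_plugins_menu functions standing in for real application code. The system described above would in effect derive this structure automatically from the dependencies, instead of the programmer spelling it out with a thread pool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical application functions, standing in for the real ones.
def load_config():
    time.sleep(0.1)              # pretend to parse a config file

def load_plugins():
    time.sleep(2.0)              # pretend to scan and import plugin modules
    return ["exporter", "spell-checker"]

def show_main_window():
    print("main window shown")

def build_plugins_menu(plugins):
    print("plugins menu:", plugins)

def main():
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Start loading plugins in the background: the runtime described in
        # the article would do this reordering itself, having determined that
        # show_main_window() does not depend on the plugin list.
        plugins_future = pool.submit(load_plugins)

        load_config()
        show_main_window()        # appears immediately, no two-second stall

        # The result is only needed when the plugins menu is opened; block
        # here (or not at all, if the load has already finished by then).
        build_plugins_menu(plugins_future.result())

if __name__ == "__main__":
    main()
```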
Once I talked to a person who really hated the way modern computers work. He had a reason for this: when he was writing music software for computers with 8-bit CPUs (or maybe 16-bit ones, I don't remember the details), he had to heavily optimize the application because the CPU was too slow to do things in real time. So when the user's cursor was close to a certain effect button, the software would start to compute the result of that effect speculatively, so that if the user pressed the button he would hear the result instantly, without any latency.
Now we have computers which can do a lot of things in real time, but latencies have only got worse: sometimes it works instantly, other times computing an effect will trigger a page fault and thus some lengthy I/O. (I don't use music software myself, but that guy said he makes workstations for musicians, so I tend to believe him.)
Using non-deterministic evaluation order with explicit priority control would let you implement such latency-hiding tricks with ease: you can compute things which might be triggered by user input lazily, on demand, but dynamically prioritize some of them according to input events.
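Expressed manually, the speculative trick from the anecdote might look like the following Python sketch, with hypothetical apply_effect, on_cursor_near and on_button_pressed handlers: the effect starts computing when the cursor approaches the button, so the click itself mostly just picks up a finished result. In the proposed system you would not juggle futures yourself; you would merely bump the priority of the corresponding computation when the input event arrives.

```python
from concurrent.futures import ThreadPoolExecutor, Future

pool = ThreadPoolExecutor(max_workers=2)
speculative: dict[str, Future] = {}     # effect name -> in-flight computation

def apply_effect(name: str, audio: bytes) -> bytes:
    # Stand-in for an expensive DSP computation.
    return audio

def on_cursor_near(name: str, audio: bytes) -> None:
    # The cursor is approaching an effect button: start computing its
    # result speculatively, in case the user actually clicks it.
    if name not in speculative:
        speculative[name] = pool.submit(apply_effect, name, audio)

def on_button_pressed(name: str, audio: bytes) -> bytes:
    # If we speculated correctly the result is already (nearly) done;
    # otherwise fall back to computing it right now.
    fut = speculative.pop(name, None)
    return fut.result() if fut is not None else apply_effect(name, audio)
```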
To summarize, a non-deterministic evaluation strategy can be used to utilize the resources of a computer in an optimal way: in some cases that might mean parallelization for maximal throughput, in other cases prioritization for maximum responsiveness. These things are related: you might want to compute something speculatively in both cases, and you might want to compute something in parallel in both cases. The use of runtime code traversal helps with changing goals, with uncertainty about input and the hardware the application runs on, and with the difficulties of static analysis. Hopefully a relatively simple compiler/run-time library can do interesting, efficient things, but that is a topic for another post.
"PHP vs. Lisp: Unfortunately, it's true..." article initiated quite active discussion on reddit , one fellow asking : Can someone post a tutorial for taking a clean install of Ubuntu (or windows or anything) to finish with serving a basic CRUD application using lisp? Maybe a TODO list with entires consisting of: incomplete/complete boolean, due date, subject, body? actually i had an impression that there are more than enough such tutorials, but as nobody replied i've tried finding one myself, starting with Hunchentoot tutorials. surprisingly, none of them covered a short path from clean OS install to working examples. neither i've found my ABCL-web tutorial suitable for this, so i decided to try myself. my idea was that Linux distros like Debian and Ubuntu contain a lot of Lisp packages, and it should be fairly easy to install them, as it automatically manages dependencies etc. i've decided to try Hunchentoot -- i'm not using it myself, but it's k...