AI, Lisp and Why Languages Die

In my exploration of things arcane and mythical, I stumbled upon a forgotten book by Peter Norvig – Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. My IT colleague was delighted to see it and highly recommended studying Common Lisp as a sort of meta-language. As programming languages interest me greatly and my understanding of functional programming is rather lacking (I loathe Python lambdas), I decided to give it a go. What I discovered was a language with an unusual syntax (defining all structures as lists of objects), yet with the potential for writing useful tools efficiently. Then I learned that various Lisp dialects were heavily used during the computer boom of the 1960s-80s, when the US government pumped billions of dollars into military and NASA projects (machine learning, AI algorithms, behavioral simulations, etc.). The trend died down in the early 1990s, and with it the Lisps either gave rise to modern languages like Clojure (also from the Lisp family) or simply disappeared. Of the old generation, Scheme and Common Lisp are still in use, though less and less by the day.

Artificial Intelligence has always been an extremely vital (and interesting) field of computer science. In fact, now more than ever, as the rapid growth of the Internet forces us to develop smart tools to sift through the wild abundance of information in real time. No wonder projects like Alexa (Amazon) or Cortana (Microsoft) are on the rise. In my opinion, there are two crucial aspects of AI that garner much interest – human language interfaces (how do we make programs understand us humans in all or most of our natural languages?) and intelligent filtering algorithms (how do we make programs increasingly aware of our human needs, and able to remember them?). The second aspect involves machine learning, which delves into data extrapolation, approximation and the progressive nature of filtering algorithms. It all boils down to making computers more human and having them do some (most?) of our work for us. There are many quite realistic pitfalls, of course, like algorithms deciding that humans are the limiting factor in making our (human) lives easier. When we consider emotions a complete contradiction to reason, this makes perfect sense. Unpredictable humans are the weakest link in an approach that relies on predictable values.

Going back to Lisp and its dialects: after its inception in 1959 it quickly became the language of choice for writing mathematical algorithms, especially in the field of AI. It was clear that the Lisp S-expression syntax makes code easy to read and that the language itself has a strong propensity for evolution. From more modern times (the 1990s and 2000s) there are plenty of success stories of Lisp saving the day. Finally, Lisps pioneered crucial concepts like recursion, concurrency and interactive programming (the famous REPL, or read-eval-print loop, nowadays a common feature of Haskell, Python and other languages). Taking all of this into consideration, it is quite difficult to understand why Common Lisp (the standardized Lisp effort) stopped being the hot stuff. Some of the sources I found mentioned that Lisps were pushed aside for political reasons. Budget cuts made a lot of NASA projects struggle for survival or meet a swift demise. Also, new cool languages (*cough* *cough* Perl) came to be, and Lisps were supposedly too arcane to be picked up and used easily. However, to me Common Lisp seems far less verbose (obfuscated?) than, for example, Java, and far more orderly than said Perl. Its performance is also supposedly on par with Java's, which might interest people who would like to write useful tools quickly (as quickly as in Python, for instance), yet not get into the memory-management details of vanilla C or C++ for better performance.

The truth is that no language is really dead until it becomes naturally obsolete, even if it suddenly loses enterprise backing. While Lisps have some viable descendants, one would be hard-pressed to find a language that directly supersedes them. There are, of course, multiple functional languages that share the Lisps’ strengths, yet they typically sport a vastly less approachable syntax, devoid of easily readable S-expressions. Therefore, I believe Scheme, Common Lisp and the other modern Lisps deserve not only attention, but also proper appreciation.

Fedora 26 – RTL8188EU the Hard Way!

Following my former entry preaching the greatness of Fedora 26, I decided to share some wisdom regarding USB wireless adapters (aka dongles) with the Realtek RTL8188EU chip. These and many other Realtek-based (usually RTL8188EU and RTL8192CU) adapters are affordable and extremely common. Companies like Hama, Digitus, TP-LINK and Belkin fit them into the cheapest 150N and 300N dongles, claiming that they’re compatible with Linux. In principle, they are. In practice, the kernel moves so fast that these companies have problems keeping up with driver updates. As a result, poor-quality drivers linger in the staging kernel tree. Some Linux distributions like Debian and Ubuntu include them, but Fedora doesn’t (for good reasons!), so Fedora users have to jump through quite a few hoops to get these adapters working…

The standard approach is to clone the git repository of the stand-alone RTL8188EU driver, compile it against our kernel + headers (provided by the Linux distribution of choice) and load it with modprobe, if possible. Alas, since the stand-alone driver isn’t really in sync with the kernel, it often requires manual patching and is in general quite flaky. An alternative, more Fedora-flavored approach is to build a custom kernel with the driver included. The rundown is covered by the Building a custom kernel article from the Fedora Wiki. All configuration options are listed in the various kernel-*.config files (standard kernel .config files prepped for Fedora), where “*” denotes the processor architecture. Fortunately, we don’t have to mess with the kernel .configs too much – we merely add the correct CONFIG_* lines to the “kernel-local” text file and fedpkg will add those lines prior to building the kernel. The lines I had to add for the RTL8188EU chip:

# 'm' means 'build as module', 'y' means 'build into the kernel'

This, however, will differ depending on the Realtek chip in question, and the build will fail with an indication of which line in the kernel .config was not enabled when it should’ve been. Finally, if you do not intend to debug the kernel later on, make sure to build only the regular kernel (without the debug kernel), as building both takes quite some time.
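For reference, the kernel-local fragment for the RTL8188EU chip would likely look as follows – CONFIG_R8188EU is, to my knowledge, the staging driver's option name in kernels of that era, but verify it against your exact kernel version before building:

```
# 'm' means 'build as module', 'y' means 'build into the kernel'
CONFIG_STAGING=y
CONFIG_R8188EU=m
```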


The Kernel, the Kernel’s to Blame!


When getting my Raspberry Pi 3 set up recently, I experienced quite some woes concerning out-of-the-box detection of SD cards. One might expect an SD card slot to be nothing more than a USB-like interface. In theory, yes; in practice, quite a few distributions have problems accepting that fact. Gentoo failed me numerous times, though partially because I decided to go for an extremely slim kernel config. Manjaro also surprised me in that respect – the SD card was detected, but not as a USB drive (and therefore not mountable). Fedora and Lubuntu had no problems. Each distribution uses a different set of graphical utilities and desktop environments, so users often blame the front-end setup. That’s wrong, though, because the inability of a system to detect a piece of hardware has everything to do with the kernel configuration. Indeed, the kernel’s to blame.

I personally prefer the Arch approach – almost everything as modules. Although this could add significant overhead due to the way modules are loaded, in reality it makes Arch-based systems very light on resources. After all, what’s not in, doesn’t get loaded at all. The drawback is that the distribution or the user is required to ascertain that the initramfs is complete enough to allow a successful boot-up. The alternative is to integrate as many drivers as necessary into the kernel, though that of course makes the kernel bulky and isn’t always the optimal solution. There is a lot in-between that unfortunately causes weird issues like the one I experienced.
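To see which choice a given distribution made, one can grep its kernel config – usually shipped as /boot/config-$(uname -r) or exposed via /proc/config.gz. The sketch below works on a scratch file, so the paths and values are stand-ins:

```shell
# Stand-in for the real /boot/config-$(uname -r); values are illustrative.
cat > demo.config <<'EOF'
CONFIG_MMC=y
CONFIG_MMC_BLOCK=m
EOF
# '=y' means built into the kernel, '=m' means built as a loadable module.
grep -E '^CONFIG_MMC(_BLOCK)?=' demo.config
```

On an Arch-style kernel most such options read '=m'; on a slim custom kernel they may be missing entirely, which is exactly when an SD card silently fails to appear.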

I think there should seriously be some consensus between distribution teams regarding what goes into a kernel and what doesn’t. Weird accidents like these can be avoided, and it’s down to individual teams to iron that out. Of course, one can go hunting for drivers on GitHub and try out five versions of a Realtek 8188eu driver, but why should the user be required to do so?

ARMing For the Future



For some time now I’ve been itching to get my hands on a Raspberry Pi single-board computer. Unfortunately, retailers like Saturn and MediaMarkt would shrug my inquiries off with a “we’re expecting them soon”. To my dismay, the “soon” seemed like it would never come. Surprising, since the computer geek culture is constantly expanding and the demand is definitely there. Finally, after months of waiting, the Pi arrived in Austria. I quickly armed myself (pun intended) with an RPi 3 model B, a Pi-compatible power supply (5.1 V, 2.5 A) and a matte black case. The rest I already had, since I collect various adapters, SD cards, etc. as a hobby. Always handy, it seems. Without further ado, though!

Get your geek on!

Contrary to my expectations, getting a Linux distribution to boot on the Pi was a bit of a hassle. Raspberry Pis don’t have a typical BIOS like laptops or desktop PCs. The firmware is extremely minimal – enough to control the on-board LEDs and hardware monitors, then swiftly proceed to booting from a flash drive (SD card, USB stick) or a hard drive. Therefore, one doesn’t actually install a Linux distribution on the Pi. Rather, it’s required to *dump it* onto a disk and plug that disk into a port on the Pi to get it working. There is a fine selection of dedicated distributions out there already – Raspbian, FedBerry, etc. Major projects like FreeBSD, OpenBSD, openSUSE, Fedora and Debian provide ARM-compliant images as well. It’s just a matter of downloading an image, putting it onto an SD card (8-16 GB in size, preferably) and we’re ready to go.
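The *dump it* step is nothing more than a raw byte-for-byte copy, typically done with dd. The real command is shown as a comment (the image name and /dev/sdX are placeholders – always verify the device with lsblk first), while the runnable part rehearses the same copy on scratch files:

```shell
# Real-world form (DESTRUCTIVE – double-check the target device first):
#   sudo dd if=some-arm-image.raw of=/dev/sdX bs=4M status=progress conv=fsync
# Safe rehearsal: copy a scratch "image" onto a scratch "card" and verify it.
dd if=/dev/zero of=image.raw bs=1M count=4 2>/dev/null   # stand-in image
dd if=image.raw of=fake-sd.raw bs=1M 2>/dev/null         # the copy step itself
cmp image.raw fake-sd.raw && echo "copy verified"
```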

Pushing the limits

Not everything is as smooth as it may sound, however. Some of the distributions, like FedBerry, suggest desktop environments and utilities that are clearly too much for the Pi to handle. Even the top-of-the-line Pi 3 model B is not strong enough to run Firefox smoothly. Part of the problem is the GUI-heavy trend in software design, the other part being the still-evolving design of the Pi. At the moment we’re at 1 GB RAM – quite modest by today’s standards. With increasing hardware needs, more care should also be taken with the board itself. Both the CPU and GPU will quickly overheat without at least a basic heat sink. I like ideas such as this one, which try to provide the extra add-ons to turn a Raspberry Pi into a full-blown computer. Personally, I use minimalist tools such as Vim/Emacs, Openbox and Dillo, so the limitations aren’t really there for me.

IoT for the future!

Truth be told, ARM-powered devices are everywhere. Though it’s a resurrected platform, people have not forgotten about the merits of RISC. Raspberry Pi is not the only Pi, nor is it the only single-board computer worth attention. With such an overabundance of almost-open-source hardware, one can do anything. Pi Zero computing cluster? Check. Watering device sensitive to solar light intensity? Check. Minecraft server? Check. NAS for the whole family? Check. It’s there, it’s cheap, it just needs a bit of geek – namely you!

Software Design Be Like…

I stumbled upon this very accurate article from the always-fantastic Dedoimedo recently. I don’t agree with the notion that replacing legacy software with new software is done only because old software is considered bad. Oftentimes we glorify software that was never written properly and over the years accumulated a load of even uglier cruft code as a means of patching core functionalities. At some point a rewrite is a given. However, I do fully agree with the observation that a lot of software nowadays is written with an unhealthy focus on developer experience rather than user experience. Also, continuity in design should be treated as sacred.

One of the things that ticks me off (much as it did the Dedoimedo author) is when developers emphasize how easy it is for prospective developers to continue working on their software. Not how useful it is to the average Joe, but how time-efficient it is to add code. That’s nice, but it should not be emphasized so much. Among the many characteristics a piece of software may have, I appreciate usefulness the most. Even in video games, mind you! Anything that makes the user spend less time on repetitive tasks is worth its weight in lines of code (or something of that sort). Far too often, features that developers consider useful are in reality of little use to regular users. Also, too many features are a bad thing. Build a core program with essential features only, and mix in a scripting language to add features that users may require on a per-user basis. See: Vim, Emacs, Chimera, PyMOL, etc. Success guaranteed!
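The core-plus-extensions idea can be sketched in a few lines of shell, in the spirit of Vim's vimrc or Emacs' init file (the file name demo_rc.sh is made up for the demo):

```shell
# A tiny "core" with one default setting, extensible via a user rc file.
GREETING="hello"                              # core default
RCFILE=./demo_rc.sh                           # hypothetical user extension point
echo 'GREETING="hello, extended"' > "$RCFILE" # what a user might write there
[ -f "$RCFILE" ] && . "$RCFILE"               # core loads extensions if present
echo "$GREETING"
```

The core stays tiny and ships no optional features; each user opts into exactly the behavior they need.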

Another matter is software continuity. Backward compatibility should be considered mandatory. It should be the number one priority of any software project, as it’s the very reason people keep using our software. FreeBSD is strong on backward compatibility, and that’s why it’s such a rock-solid operating system. New frameworks are good. New frameworks that break or remove important functionalities are bad. The user is king, and he/she should always be able to perform their work without obstructions or having to re-learn something. Forcing users to re-learn things every now and then is a *great* way of thinning out the user base. One of the top positions on my never-do list.

Finally, an important aspect of software design is good preparation. Do we want our software to be extensible? Do we want it to run fast and be multi-threaded? Certain features should be considered exhaustively from the very beginning of the project. Otherwise, we end up adding lots of unsafe glue code or hackish solutions that break every time the core engine is updated. Also, never underestimate documentation. It’s not only for us, but also for the team leader and all of the future developers who *will* eventually work on our project. Properly describing I/O makes the job easier for everyone!

PC Parts Recycling Nightmares

To warn everyone from the get-go, this will be a rant. Therefore, hold onto your socks and pants, and watch the story unfold. And the gravy thicken…

Recycling computer parts is an extremely important aspect of keeping computers alive. It often lets you turn five broken computers into at least one or two that are still fully operational. Not to mention rebuilding, tweaking, expanding, etc. Theoretically, you could have a single desktop computer and just keep replacing its “organs” as they die. All is good even if a hard drive fails – we swap the hard drive(s), restore the operating system and our data, and we’re good to go in minutes/hours. Since my very first computer was a self-assembled desktop PC, way before laptops were a “thing”, I got used to this workflow. Laptops required me to adjust, because each company would use different connectors and build schematics. Also, there were model lines like the early Dell Latitudes that had quirks one needed to know about before opening up the case. That’s laptops, though – a complicated world of its own. I agree that no one should expect a mobile device to be tinkering-friendly. It’s supposed to be light, energy-efficient and just strong enough to get the job done. Fully understandable priorities! However, I would never in my wildest dreams (or nightmares?) have expected these priorities to leak into and envenom the world of tower-sized desktop computers.

Imagine this scenario – you get a midi-tower workstation computer from an acclaimed manufacturer like Dell or HP. It has a powerful four-core Intel Xeon processor with hyper-threading. A marvelous beast! You can use it as a build farm or an efficient virtual machine host. However, years go by and you want to expand it a tad – swap in extra drives, a RAID card perhaps. Or maybe put in a decent graphics card to do enterprise-grade 3D modeling in AutoCAD. You open the case, look inside a bit and instantly begin to cry. The workstation has a shamefully bad 320 W power supply unit (PSU). You then wonder how this PSU was able to support both the power-hungry Intel Xeon CPU and the graphics card. You run web-based PSU calculators and all of them tell you the same thing – you’d fry your computer instantly with such a PSU, and at least a 450-500 W one is needed. Unlike many others, you were lucky to last that long. That’s not the end of the story, though! Your workstation’s current PSU cannot be replaced with a more powerful standard ATX PSU, because HP decided to use fully proprietary power connectors. Also, a replacement PSU cannot be bought anymore, because this model line was dropped years ago. Now you’re stuck and need to buy a new server motherboard that would fit your Intel Xeon, a new PSU and a new case, because the HP case was designed for the HP PSU. You drop to the floor and wallow at the unfair world… Many more stories can be found on the Internet here and here.
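A back-of-the-envelope version of what those PSU calculators compute – the component figures below are illustrative guesses, not measurements:

```shell
# Rough peak-draw estimate for the hypothetical workstation above, in watts.
cpu=95      # quad-core Xeon TDP
gpu=150     # mid-range workstation graphics card
board=50    # motherboard, RAM, chipset
drives=20   # a couple of hard drives
misc=25     # fans, USB devices
total=$((cpu + gpu + board + drives + misc))
echo "estimated peak draw: ${total} W"   # already above the 320 W unit's rating
```

Add the usual 30-40% headroom PSU calculators recommend and you land right in the 450-500 W range.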

I fully understand that manufacturers need to make a living. However, using low-grade proprietary computer parts in systems that are considered upgradable by universal standards is not only damaging to the market, by introducing artificial constraints, but also a sign of bad design practices. Not to mention the load of useless electronic junk such an attitude produces. I believe manufacturers should care more about common standards, as in the end they benefit everyone.

Resources and Limitations

Somewhat inspired by the extensive works of Eric Raymond and Paul Graham, I decided to write a more general piece myself. Surprisingly, the topic is almost never touched upon, or discussed only indirectly. We programmers often write about software efficiency in terms of resource usage (RAM, CPU cycles, hard drive space, material wear, etc.); however, the mentioned resources are actually secondary or even tertiary. There is a single fundamental resource from which all the others are derived – time.

We are all born with a certain selection of genes that predisposes us to a defined lifespan. Thanks to improvements in medicine, this lifespan can be adjusted so that we don’t die prematurely due to a genetic defect or an organ failure. Still, the overall limit is quite tangible. In order to sustain our living, we exchange bits of this lifespan (time) for currency units by working. With enough units we can afford accommodation, nourishment, entertainment, etc. – in essence, keep ourselves in good spirits and in a healthy body. As part of software design we constantly measure time in combination with the previously mentioned resources. We try to spend less time on repetitive tasks that can easily be automated via programs, but we also require efficient tools to write those programs. It’s very clear that, with the need to make a living, we most likely don’t have enough time to master every major programming language or write every tool we need to get the job done. We need to trust fellow programmers in that respect. As Eric Raymond once wrote, one should typically not need to write a tool twice, unless for learning purposes.

Thus, provided that the secondary/tertiary computer resources are not limiting, it would be wise to use the tool (operating system, programming language, API, framework, etc.) that gives the highest efficiency – for instance, Ubuntu or openSUSE instead of Slackware, Arch Linux or Gentoo; Python, Ruby or Java instead of C or C++. There is absolutely no shame in using a high-level tool! The good enough is far more important than prestige or misdirected elitism. That’s how you win against the competition – by being efficient. I think we should all remember that!