The Kernel, the Kernel’s to Blame!


When setting up my Raspberry Pi 3 recently I experienced quite a few woes concerning out-of-the-box detection of SD cards. One might expect an SD card slot to be nothing more than a USB-like interface. In theory, yes; in practice, quite a few distributions have trouble accepting that fact. Gentoo failed me numerous times, though partially because I decided to go for an extremely slim kernel config. Manjaro also surprised me in that respect – the SD card was detected, but not as a USB drive (and hence not mountable). Fedora and Lubuntu had no problems. Each distribution uses a different set of graphical utilities and desktop environments, so users often blame the front-end setup. That’s wrong, though, because a system’s inability to detect a piece of hardware has everything to do with the kernel configuration. Indeed, the kernel’s to blame.

I personally prefer the Arch approach – build almost everything as modules. Although this could add some overhead due to the way modules are loaded, in practice it makes Arch-based systems very light on resources; after all, what isn’t needed doesn’t get loaded at all. The drawback is that the distribution or the user has to make sure the initramfs is complete enough to allow a successful boot. The alternative is to build as many drivers as necessary into the kernel itself, though that of course makes the kernel bulky and isn’t always the optimal solution. There is a lot of in-between territory that unfortunately causes weird issues like the one I experienced.
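
If you’re curious what your own kernel does with SD/MMC support, a quick peek at its config tells you whether the relevant drivers are built in, modular or missing entirely. Below is a minimal Python sketch of that check – it assumes the kernel exposes /proc/config.gz (CONFIG_IKCONFIG_PROC) or ships a config file under /boot, and the listed option names are just common examples that vary between kernel versions and architectures.

```python
#!/usr/bin/env python3
"""Report whether SD/MMC support is built in (=y), modular (=m) or absent.

Reads /proc/config.gz if the running kernel exposes it (CONFIG_IKCONFIG_PROC),
otherwise falls back to the config file shipped in /boot.
"""
import gzip
import os
import platform

# A few options commonly involved in SD card detection; the exact set
# differs between kernel versions and architectures.
OPTIONS = ("CONFIG_MMC", "CONFIG_MMC_BLOCK", "CONFIG_MMC_SDHCI", "CONFIG_USB_STORAGE")

def load_config_lines():
    if os.path.exists("/proc/config.gz"):
        with gzip.open("/proc/config.gz", "rt") as fh:
            return fh.read().splitlines()
    with open(f"/boot/config-{platform.release()}") as fh:
        return fh.read().splitlines()

def main():
    config = {}
    for line in load_config_lines():
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            config[key] = value
    for opt in OPTIONS:
        print(f"{opt:24} {config.get(opt, 'not set')}")  # y = built in, m = module

if __name__ == "__main__":
    main()
```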

I think there should seriously be some consensus between distribution teams regarding what goes into a kernel and what doesn’t. Weird accidents can be avoided, and it’s down to the individual teams to iron that out. Of course, one can go hunting for drivers on GitHub and try out 5 versions of a Realtek 8188eu driver, but why should the user be required to do so?

ARMing For the Future

Image of single-board computers, taken from edn.com

For some time now I’ve been itching to get my hands on a Raspberry Pi single-board computer. Unfortunately, retailers like Saturn and MediaMarkt would shrug my inquiries off with a “we’re expecting them soon”. To my dismay, the “soon” seemed like it would never come. Surprising, since the computer geek culture is constantly expanding and the demand is definitely there. Finally, after months of waiting, the Pi arrived in Austria. I quickly armed myself (pun intended) with an RPi 3 model B, a Pi-compatible power supply (5.1 V, 2.5 A) and a matte black case. The rest I already had, since I collect various adapters, SD cards, etc. as a hobby. Always handy, it seems. Without further ado, though!

Get your geek on!

Contrary to my expectations, getting a Linux distribution to boot on the Pi was a bit of a hassle. Raspberry Pis don’t have a typical BIOS like laptops or desktop PCs. The firmware is extremely minimal – just enough to control the on-board LEDs and hardware monitors, and to proceed swiftly to booting from a flash drive (SD card, USB stick) or a hard drive. Therefore, one doesn’t actually install a Linux distribution on the Pi. Rather, it’s required to *dump it* onto a disk and plug that disk into a port on the Pi to get it working. There is a fine selection of dedicated distributions out there already – Raspbian, FedBerry, etc. Major projects like FreeBSD, OpenBSD, openSUSE, Fedora and Debian provide ARM-compliant images as well. It’s just a matter of downloading an image, putting it onto an SD card (preferably 8-16 GB in size) and we’re ready to go.
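
“Dumping” an image really is just a raw, block-by-block copy onto the card – the same thing dd if=image.img of=/dev/sdX bs=4M does. For illustration, here’s a minimal Python sketch of the idea; the paths are placeholders and the target device gets overwritten completely, so triple-check them before running anything of the sort.

```python
#!/usr/bin/env python3
"""Minimal sketch of 'dumping' a distribution image onto an SD card.

The image and device paths are examples only; the target device is
overwritten completely, so double-check it before running this.
"""
import os
import sys

CHUNK = 4 * 1024 * 1024  # copy in 4 MiB blocks

def write_image(image_path: str, device_path: str) -> None:
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)
        dst.flush()
        os.fsync(dst.fileno())  # make sure everything actually hits the card

if __name__ == "__main__":
    # Usage: sudo python3 write_image.py raspbian.img /dev/sdX
    write_image(sys.argv[1], sys.argv[2])
```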

Pushing the limits

Not everything is as smooth as it may sound, however. Some distributions like FedBerry suggest desktop environments and utilities that are clearly too much for the Pi to handle. Even the top-of-the-line Pi 3 model B is not strong enough to run Firefox smoothly. Part of the problem is the GUI-heavy trend in software design, the other part being the still-evolving design of the Pi. At the moment we’re at 1 GB of RAM, which is quite modest by today’s standards. As hardware demands increase, more care should also be taken with the board itself. Both the CPU and GPU will quickly overheat without at least a basic heat sink. I like ideas such as this, which try to provide the extra add-ons to turn a Raspberry Pi into a full-blown computer. Personally, I use minimalist tools such as Vim/Emacs, Openbox and Dillo, so the limitations don’t really affect me.

IoT for the future!

Truth be told, ARM-powered devices are everywhere. Though it’s a resurrected platform, people have not forgotten about the merits of RISC. Raspberry Pi is not the only Pi, nor is it the only single-board computer worth attention. With such an overabundance of almost-open-source hardware, one can do anything. Pi Zero computing cluster? Check. Watering device sensitive to solar light intensity? Check. Minecraft server? Check. NAS for the whole family? Check. It’s there, it’s cheap, it just needs a bit of geek – namely you!

Software Design Be Like…

I recently stumbled upon this very accurate article from the always-fantastic Dedoimedo. I don’t agree with the notion that replacing legacy software with new software is done only because the old software is considered bad. Oftentimes we glorify software that was never written properly and over the years accumulated a load of even uglier cruft as a means of patching core functionality. At some point a rewrite is a given. However, I do fully agree with the observation that a lot of software nowadays is written with an unhealthy focus on developer experience rather than user experience. Also, continuity in design should be treated as sacred.

One of the things that ticks me off (much as it did the Dedoimedo author) is when developers emphasize how easy it is for prospective developers to continue working on their software. Not how useful it is to the average Joe, but how time-efficient it is to add code. That’s nice, but it should not be emphasized so heavily. Among the many characteristics a piece of software may have, I appreciate usefulness the most. Even in video games, mind you! Anything that makes the user spend less time on repetitive tasks is worth its weight in lines of code (or something of that sort). Way too often, features that developers consider useful are in reality not so to regular users. Also, too many features are a bad thing. Build a core program with essential features only and mix in a scripting language so that users can add the features they require on a per-user basis. See: Vim, Emacs, Chimera, PyMOL, etc. Success guaranteed!
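
To make the “small core plus scripting” idea concrete, here’s a toy sketch in Python. The command registry, the command decorator and the ~/.toyeditor directory are all invented for this example – it’s not how Vim, Emacs or PyMOL actually do it, just the general shape of the approach.

```python
#!/usr/bin/env python3
"""Toy illustration of a small core plus user scripting.

The hook API and config directory are made up for this sketch.
"""
import pathlib

# The "core": one essential feature and a registry that extensions hook into.
COMMANDS = {}

def command(name):
    """Decorator that user scripts use to register extra commands."""
    def register(func):
        COMMANDS[name] = func
        return func
    return register

@command("upper")
def upper(text):  # the only built-in feature
    return text.upper()

def load_user_scripts(directory="~/.toyeditor"):
    """Execute every *.py file in the user's config directory, handing it
    the `command` decorator -- that is the entire plugin API."""
    for script in sorted(pathlib.Path(directory).expanduser().glob("*.py")):
        exec(script.read_text(), {"command": command})

if __name__ == "__main__":
    load_user_scripts()
    print(sorted(COMMANDS))  # built-in commands plus whatever the user added
```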

Another matter is software continuity. Backward compatibility should be considered mandatory. It should be the number one priority of any software project, as it’s the very reason people keep coming back to our software. FreeBSD is strong on backward compatibility and that’s why it’s such a rock-solid operating system. New frameworks are good. New frameworks that break or remove important functionality are bad. The user is king and should always be able to get their work done without obstructions or having to re-learn something. Forcing users to re-learn things every now and then is a *great* way of thinning out the user base. One of the top positions on my never-do list.

Finally, an important aspect of software design is good preparation. Do we want our software to be extensible? Do we want it to run fast and be multi-threaded? Certain features should be considered exhaustively from the very beginning of the project. Otherwise, we end up adding lots of unsafe glue code or hackish solutions that will break every time the core engine is updated. Also, never underestimate documentation. It’s not only for us, but also for the team leader and all of the future developers who *will* eventually work on our project. Properly describing I/O makes for an easier job for everyone!
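
On that last point, even a plain docstring that spells out inputs, outputs and failure modes goes a long way. A hypothetical example (both the function and the file format are made up for illustration):

```python
def parse_records(path: str) -> dict:
    """Read a simple 'key: value' text file and return it as a dict.

    Input : `path` -- plain-text file with one 'key: value' pair per line;
            blank lines and lines starting with '#' are ignored.
    Output: dict mapping each key to its stripped value.
    Raises: ValueError if a non-comment line has no ':' separator.
    """
    records = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if ":" not in line:
                raise ValueError(f"malformed line: {line!r}")
            key, value = line.split(":", 1)
            records[key.strip()] = value.strip()
    return records
```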

PC Parts Recycling Nightmares

To warn everyone from the get-go, this will be a rant. Therefore, hold onto your socks and pants, and watch the story unfold. And the gravy thicken…

Recycling computer parts is an extremely important aspect of keeping computers alive. It often lets you turn 5 broken computers into at least 1-2 that are still fully operational. Not to mention rebuilding, tweaking, expanding, etc. Theoretically, you could have a single desktop computer and just keep replacing its “organs” as they die. All is good even if a hard drive fails. We swap hard drive(s), restore the operating system and our data, and we’re good to go in minutes/hours. Since my very first computer was a self-assembled desktop PC, way before laptops were a “thing”, I got used to this workflow. Laptops required me to adjust, because each company would use different connectors and build schematics. Also, there were model lines like the early Dell Latitudes that had quirks one needed to know before opening up the case. That’s laptops, though. A complicated world of its own. I agree that no one should expect a mobile device to be tinkering-friendly. It’s supposed to be light, energy-efficient and just strong enough to get the job done. Fully understandable priorities! However, I would never in my wildest dreams (or nightmares?) expect these priorities to leak into and envenom the world of tower-sized desktop computers.

Imagine this scenario – you get a midi tower workstation from an acclaimed manufacturer like Dell or HP. It has a powerful 4-core Intel Xeon processor with hyper-threading. Marvelous beast! You can use it as a build farm or an efficient virtual machine host. However, years go by and you want to expand it a tad – swap in extra drives, a RAID card perhaps. Or maybe put in a decent graphics card to do enterprise-grade 3D modeling in AutoCAD. You open the case, take one look inside and instantly begin to cry. The workstation has a shamefully weak 320 W power supply unit (PSU). You then wonder how this PSU was able to support both the power-hungry Intel Xeon CPU and the graphics card. You run web-based PSU calculators and all of them tell you the same thing – you’d fry your computer instantly with such a PSU, and at least a 450-500 W one is needed. Apparently, unlike many others, you were lucky it lasted this long. That’s not the end of the story, though! Your workstation’s current PSU cannot be replaced with a more powerful standard ATX unit, because HP decided to use fully proprietary power connectors. A replacement PSU cannot be bought anymore either, because this model line was dropped years ago. Now you’re stuck and need to buy a new server motherboard that would fit your Intel Xeon, a new PSU and a new case, because the HP case was designed around the HP PSU. You drop to the floor and wallow at the unfair world… Many more stories can be found on the Internet here and here.
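
The arithmetic behind those web calculators is nothing magical, by the way. A rough sketch with purely illustrative wattages (not measurements of any particular machine):

```python
#!/usr/bin/env python3
"""Back-of-the-envelope PSU budget, the kind a web calculator performs.

The wattages below are rough, illustrative figures, not measurements.
"""

components = {
    "Xeon quad-core CPU (TDP)":  95,
    "mid-range workstation GPU": 150,
    "motherboard + RAM":          60,
    "2x HDD/SSD":                 20,
    "fans, USB peripherals":      15,
}

peak = sum(components.values())
recommended = peak * 1.3  # ~30 % headroom for load spikes and ageing capacitors

print(f"estimated peak draw : {peak} W")
print(f"recommended PSU     : {recommended:.0f} W")
print(f"a 320 W unit leaves : {320 - peak} W of 'margin'")  # negative = trouble
```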

I fully understand that manufacturers need to make a living. However, using low-grade proprietary parts in systems that are considered upgradable by universal standards is not only damaging to the market by introducing artificial constraints, but also a sign of bad design practice. Not to mention the load of useless electronic junk such an attitude produces. I believe manufacturers should care more about common standards, as in the end it’s beneficial to everyone.

Resources and Limitations

Somewhat inspired by the extensive works of Eric Raymond and Paul Graham, I decided to write a more general piece myself. Surprisingly, the topic is almost never touched upon, or discussed only indirectly. We programmers often write about software efficiency in terms of resource usage (RAM, CPU cycles, hard drive space, material wear, etc.); however, these are actually secondary or even tertiary resources. There is a single fundamental resource from which all the others are derived – time.

We are all born with a certain selection of genes that predisposes us to a defined lifespan. Thanks to improvements in medicine, this lifespan can be adjusted so that we don’t die prematurely due to a genetic defect or an organ failure. Still, the overall limit is quite tangible. In order to sustain our living, we exchange bits of this lifespan (time) for currency by working. With enough of it we can afford accommodation, nourishment, entertainment, etc. – in essence, keep ourselves in good spirits and in a healthy body. In software design we constantly measure time in combination with the previously mentioned resources. We try to spend less time on repetitive tasks that can easily be automated with programs, but we also require efficient tools to write those programs. It’s very clear that, with the need to make a living, we most likely don’t have enough time to master every major programming language or write every tool we need to get the job done. We need to trust fellow programmers in that respect. As Eric Raymond once wrote, one should typically not need to write a tool twice, unless for learning purposes.

Therefore, provided that the secondary/tertiary computer resources are not limiting, it is wise to use the tool (operating system, programming language, API, framework, etc.) that gives the highest efficiency. For instance, Ubuntu or openSUSE instead of Slackware, Arch Linux or Gentoo. Python, Ruby or Java instead of C or C++. There is absolutely no shame in using a high-level tool! Good enough is far more important than prestige or misdirected elitism. That’s how you win against the competition – by being efficient. I think we should all remember that!

On System Complexity

I appreciate how one can often draw a parallel between the life sciences and computer science. For instance, complexity has similar features in both fields. Although many aspects, like the transition from aquatic to terrestrial life, are still largely disputed, it is reasonable to believe that complexity is born out of a necessity to adapt to environmental changes or novel needs. This concept translates to technological development quite smoothly. Let us consider the evolution of telecommunication devices as an example. In the days of the telegraph, people were pleased simply to be able to communicate with fellow humans without the need for letters, which often took weeks or months to be delivered. With time, however, expectations grew. Nowadays, it is more or less the norm to not only call and text, but also browse the Web on mobile devices. Though bold, it is fair to claim that the driver of progress/evolution was in this case the growing needs of the elusive end-user.

Technical savvy is considered a blessing by many. It’s like a catalyst or facilitator of ideas. It’s also a curse, though, because it creates a rift between the ones who can (makers) and the ones who depend on the makers for their prosperity (users). A quasi-feudal system is formed, linking both parties together. The makers may (if personal ethics suggest) aid users, as they have both the power and the moral obligation to do so. This in turn drives progress, which would otherwise be stunted, because makers often do not require elaborate tools to express themselves (assumption!). Alas, as needs grow, so inherently does the complexity of the tools created to meet them.

It is important to note that the user is very much a decisive factor. The system should be complex enough to satisfy the needs of the user deftly, yet simple enough to allow easy maintenance on the side of the makers. A certain equilibrium needs to be reached. In a perfect scenario both parties are content: the makers get to do something good for society and express themselves in more elaborate ways, while the users live better lives thanks to the technological development sustained by the makers.

Moving on to operating systems, the “complex enough” is usually minimal and many GNU/Linux distributions are able to cover it. What the users expect nowadays is the following:

  • an ability to do a “test run” of the operating system without installing it; after all, the installation might fail and no one wants to lose their data
  • easy means of installing the operating system, without the need to define complex parameters like hard drive partitioning or file system types
  • out-of-the-box support for internal hardware like graphics cards, wireless adapters, auxiliary keyboard keys, etc. 
  • a reasonable selection of software for daily and specialized needs, clearly defined and documented with optional “best of” categories 
  • straightforward use of extra peripherals like printers, USB drives, headphones, etc.
  • clear, accessible and easy to learn interface(s)

Unfortunately, the same GNU/Linux distributions then go out of their way to tip the balance and make the system not “simple enough” to be reliably maintained. For instance, does every single major distribution really need to officially support all of the desktop environments? What for? It doesn’t help the user to be left with too much choice and suddenly have to decide which desktop environment is “better”. Do all of the desktop environments provide the same complete array of functionality? This is not the only “too much complexity” problem, of course! I think some Unix-like operating systems (including the BSDs here, too) do better than others in terms of satisfying the expectations of users:

  • Linux Mint caters to the needs of an average user perfectly. All of the above requirements are fulfilled, and the selection of officially supported desktop environments is such that the user should not get confused. In addition, they’re similar in look and functionality.
  • Fedora Linux also does a great job by offering a very streamlined and appealing working environment. It’s geared more towards software developers, though even regular users should find Fedora attractive.
  • Arch Linux (and by extension Manjaro Linux) and FreeBSD (and by extension TrueOS/PC-BSD) do NOT cater to the average user, but they offer many possibilities and a good initial setup. Building a full-blown, user-friendly system is a matter of minutes.
  • Debian was always good with the out-of-the-box experience and this has not changed since. Granted, the installer has to be prompted to automagically produce a GNOME3-based system.

Ubuntu and its derivatives didn’t make it to the list, because they often break in unpredictable ways, causing headaches even to more technically-inclined people. The lack of consistency between Ubuntu flavors and general over-engineering often prove troublesome. Well, at least to me.

To sum up (and for the TL;DR folk), complexity is an inherent feature of both biology (evolution) and computer science. When building operating systems, one should remember to keep the balance between “complex enough to be easily usable” and “simple enough to be easy to maintain”. Otherwise, we get instabilities, broken packages, lost data and other horrible scenarios…

Why we do need Devuan

Some months ago, when I looked at the Devuan effort, I thought it was directed at eradicating any traces of systemd and the Red Hat agenda from Debian. After all, the wounds were fresh and the hatred still strong. However, the problem Devuan bravely tries to address is much deeper and echoes across the whole of the GNU/Linux ecosystem. Articles covering systemd merely graze the tip of the iceberg that is *software compatibility*.

GNU/Linux distributions power most of the world’s servers, workstations, mainframes and supercomputers. In such mission-critical environments it is absolutely mandatory that the computer is up and running most of the time. Update downtime should be minimized and process supervision for key services simplified (fewer resources needed for maintenance, fewer possibilities for bugs through human error, etc.). While systemd may seem to address the above, there are two main problems:
1. The simplification of process supervision is only superficial. Systemd “seems” simpler, but in fact it does a lot more than a mere supervisor + init. Therefore, the surface for bugs is in reality far greater. In addition, process supervision is not a new concept and many alternative (and simpler!) suites exist.

2. Systemd breaks compatibility with legacy service scripts (from SysV init) and its developers have absolutely no intention of fixing this. The goal is unification and streamlining of the whole GNU/Linux ecosystem, not making things work.

The first point is an obvious bait for administrators who are tired of fighting with inflexible shell scripts and overall hackery. The second point, however, should serve as a MAJOR deterrent to anyone looking for a reasonable process supervisor. There are plenty of intermediate options like OpenRC and sanely written go-to supervisors like s6, nosh or runit. That is low-hanging fruit, just waiting to be picked…
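
To show just how small the core of process supervision really is, here’s a minimal, hypothetical sketch of the supervise-and-restart loop that tools like runit and s6 are built around – they add logging, dependency handling and clean shutdown on top, and the service command below is only an example:

```python
#!/usr/bin/env python3
"""Minimal sketch of what a process supervisor does at its core:
start a service, wait for it to die, restart it.

The supervised command is just an example placeholder.
"""
import subprocess
import time

SERVICE = ["/usr/sbin/sshd", "-D"]  # example: run the daemon in the foreground

def supervise(argv):
    while True:
        proc = subprocess.Popen(argv)
        code = proc.wait()  # block until the service exits
        print(f"service exited with {code}, restarting in 1s")
        time.sleep(1)       # avoid a tight respawn loop

if __name__ == "__main__":
    supervise(SERVICE)
```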

Devuan is an important project, because it acts as a nucleator, not against systemd, but for preserving what Unix is all about – maintainability, efficiency and modular design. It tries to encourage people to care about the “Unix way” more and be aware that current mistakes will prove costly soon enough. Nowadays, we have several program packages that do what systemd offers, though in a much more elegant and non-intrusive fashion. They can be seamlessly merged with existing OS frameworks and used interchangeably.