The Open in BSD

I wrote about OpenBSD a bit in the past. Since then I’ve been distro-hopping plenty, like the nervous flea that I am. Eventually, I put Debian 9.1 Stable on some of my machines and that’s what I run at work, out of convenience and in case someone needs Linux-related help. I cannot say I don’t like it. To me Debian feels like the FreeBSD of the GNU/Linux side of FOSS. It’s sensible. It’s stable. However, I quickly tire of the systemd hiccups, the focus on flashy graphical frameworks and other annoyances. Then I turn to the BSD world, with FreeBSD on my home workstation and OpenBSD on this here VAIO laptop. Admittedly, I was somewhat curious about hardware compatibility in release 6.1. This laptop is more powerful than the Intel M based Dell Latitude E5500 I used for testing OpenBSD previously. Also, the VAIO ran Debian 9.1 well enough that I could do actual work without waiting long minutes for a JavaScript-infested Web page to load. How would it cope with OpenBSD, though?

Installing OpenBSD is fairly straightforward and if someone has ever installed Gentoo or Arch Linux… well, OpenBSD is easier! Out of the box we even get an X11 server called Xenocara, together with the xenodm display/login manager (not mandatory). Somewhat unfortunately, the default window manager CWM looks extremely dated and the black-white-grey dotted background would hurt my eyes. Not to worry, though. Openbox was just a pkg_add away. In fact, so were most of the tools I use every day, hence I didn’t really miss anything. It’s FOSS and I guess I shouldn’t be surprised that I can reproduce a fairly standard setup on another OS. The critical point for me was whether I could install all of the Python machine learning modules I use for writing regression tests. pandas, matplotlib and numpy are usually available from software repositories. Granted, not on every single open-source operating system. Luckily, the Python package installer pip provides fantastic means of interoperability, which I encourage everyone to use. Even with Windows *cough* *cough*. Soon after pip completed its work, I was set up and good to go!
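
If you want to confirm that the stack actually works after installation, a quick smoke test along these lines (nothing OpenBSD-specific, just plain Python; the file name is arbitrary) is enough to show that the three modules import and cooperate:

    # Quick smoke test for the scientific Python stack installed via pip.
    import numpy as np
    import pandas as pd
    import matplotlib
    matplotlib.use("Agg")          # render off-screen; no running X session needed
    import matplotlib.pyplot as plt

    df = pd.DataFrame({"x": np.linspace(0, 10, 50)})
    df["y"] = np.sin(df["x"])
    df.plot(x="x", y="y")
    plt.savefig("smoke_test.png")  # if this file appears, the stack is usable
    print(df.describe())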

My desktop look – courtesy of myself (and the wallpaper’s author)

Then there is the usual question: how do I make my system more polished? I got myself a nice OpenBSD wallpaper from the Interwebs (see the image above) and proceeded to read the official documentation to understand the system better. The login environment is handled by the Korn Shell (the extra crispy OpenBSD variant of the Korn Shell, mind you). From there we add packages with pkg_add and manage them with a slew of other pkg_* tools. Anyone familiar with older releases of FreeBSD will know the pkg_* commands. The system (kernel + core utilities) and the Ports Collection source code trees are tracked with CVS and served to everyone via AnonCVS. It’s quite noticeable that the OpenBSD project strives to tweak and improve existing tools in order to make them more secure. I still need to figure out how to adjust the sound volume efficiently via mixerctl. Perhaps I’ll write a thin GUI client in Java or Python (or port my favourite volumeicon) in case none are available. Or just map a set of keyboard keys to mixerctl calls – a rough sketch of that idea follows below.
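
As a minimal sketch of that last idea, here is a small volume-step helper that simply shells out to mixerctl. It assumes the usual outputs.master field and the 0-255 value range, both of which may differ depending on the sound card, so treat it as a starting point rather than a finished tool:

    #!/usr/bin/env python3
    # volstep.py - nudge the master volume via OpenBSD's mixerctl (a sketch).
    import subprocess
    import sys

    FIELD = "outputs.master"   # common default; check mixerctl -a on your machine
    STEP = 16                  # roughly 6% of the 0-255 range

    def get_volume():
        # mixerctl prints e.g. "outputs.master=200,200"; keep the left channel
        out = subprocess.check_output(["mixerctl", FIELD]).decode()
        return int(out.split("=")[1].split(",")[0])

    def set_volume(value):
        value = max(0, min(255, value))
        subprocess.check_call(["mixerctl", "{0}={1},{1}".format(FIELD, value)])

    if __name__ == "__main__":
        direction = 1 if (len(sys.argv) > 1 and sys.argv[1] == "up") else -1
        set_volume(get_volume() + direction * STEP)

Binding "volstep.py up" and "volstep.py down" to spare keyboard keys (for instance via Openbox's rc.xml keybindings or xbindkeys) should then do the trick.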

When comparing open-source operating systems, especially BSDs vs GNU/Linux distributions, people often consider things like system performance, resource usage, software availability, etc.

  1. Is OpenBSD faster than Debian? Not really. However, on modern PCs any open-source operating system is faster than Windows or MacOS X. This should come as no surprise.
  2. Does it use fewer system resources? Perhaps a tiny bit, though many open-source programs are portable and any optimizations are rather accidental. To give you an idea, Openbox + Firefox with WordPress open + mpv playing a jazzy tune amount to ~700 MB of RAM in total. Not too shabby, right?
  3. Are programs X, Y and Z available? This largely depends on what tools one requires for work. The typical assortment (LibreOffice, GIMP, Inkscape, etc.) is there for the taking. Also, GUI tools can be replaced with CLI tools with minimal effort (for instance, Irssi/WeeChat instead of HexChat). The only real limitation I have noticed so far concerns programs that are distributed in binary form only, and device drivers that rely on binary-only blobs (see: nVidia). OpenBSD has a strong policy against closed-source software and, unless the company in question has a good reputation for consistently providing quality software, I think full source code disclosure is the right way to go.
  4. Is my hardware well supported? For device drivers see above. Other than that, most (if not all) Intel-based hardware works as well as it does on GNU/Linux distributions. For improved 3D performance AMD is a fine choice, too. Perhaps webcam support is a bit lacking, but many models are supported, even the MacBook iSight.

The bottom line is this – OpenBSD is a great Unix-like operating system. It’s super secure and has some of the best documentation out there. If that’s your cup of tea, join the crew. If not, at least give it a try. I can assure you it’s worth it. Finally, a screenfetch for the geeks among us:

[Screenshot: screenfetch output on my OpenBSD system]

In Software We Trust

Inspired by the works of Matthew D. Fuller from over-yonder.net, I decided to write a more philosophical piece of my own. While distro-hopping recently it occurred to me that whatever we do with our lives, we never do it alone and our well-being depends on other people. It requires us to trust them. Back in prehistoric times a Homo sapiens individual could probably get away with fishing, foraging and hunting for food, and finding shelter in caves. The modern world is entirely different, though. We need dentists to check our teeth, we need grocery stores to buy food, we need real estate agents to find housing, etc. Dealing with hardware and software is similar. Either we build a machine ourselves or trust that some company X can do a good enough job for us. The same goes for software!

Alright, so we have a computer (or two, or ten, or…) and we want to make it useful by putting an operating system on its drive(s). MacOS X and MS Windows are out of the question for obvious reasons. That leaves us with either Linux or a BSD-based system. Assuming we pick Linux, we can install it from source or in binary form. This is where trust comes into play. We don’t need to trust major GNU/Linux distributions in terms of software packaging and features. We can roll with Gentoo, Linux From Scratch, CRUX or any other source-based distribution and decide on our own what does and doesn’t go into our software. It’s kind of like growing vegetables in a garden. Granted, we ourselves are responsible for any immediate issues like compile errors, file conflicts or missing features. It’s a learning process and one definitely profits from it. However, it’s also time-consuming and requires extremely good understanding of both system design and the feature sets of individual programs. No easy task that. Therefore, it’s far more convenient to use binary distributions like openSUSE, Ubuntu, Fedora, Debian, etc. It requires us to trust that the maintainers and developers are doing a good job at keeping software up-to-date, paying attention to security fixes and not letting bugs through. I myself don’t feel competent enough to be a source-level administrator of my own computer and be able to fix every minor or major issue in C++ code. I prefer to trust people who I’m sure would do it better than me, at least for now.

The (Necessary?) GNU/Linux Fragmentation

I would like to share with you a story of my recent struggles with Debian. They’re partially my fault, but also partially due to the way Debian handles network management, which is quite different from how other GNU/Linux distributions do it.

The story begins with me being happy with a regular desktop install, powered by XFCE4, but then wanting to switch to the less distracting Openbox. I installed Openbox plus extras like the tint2 panel, nitrogen (a background/wallpaper setter) and other lightweight alternatives to XFCE4 components. While sweeping up XFCE4 leftovers, “apt autoremove” accidentally removed way too many packages, including network-manager. I was instantly left with no network connection and, as I learned later, no means of restoring it. By default, network management on Debian is handled by the ifupdown scripts, which “up” the interfaces listed in /etc/network/interfaces and direct them to dhclient to obtain a DHCP lease or assign a static IP address. Incidentally, the ifupdown utilities by themselves have no means of handing wireless interfaces over to wpa_supplicant for WPA-encrypted networks. Nowadays, this is handled by network-manager, which “Just Works”. network-manager uses wpa_supplicant to handle WPA encryption (in addition to many other things), whilst performing the rest of network management itself. This is quite different from running wpa_supplicant directly, which simply failed in my case due to a known regression.
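
For readers who have never dealt with ifupdown, a typical /etc/network/interfaces looks roughly like this (an illustrative excerpt, not my exact file – interface names naturally vary):

    # /etc/network/interfaces - illustrative excerpt
    auto lo
    iface lo inet loopback

    allow-hotplug eth0
    iface eth0 inet dhcp

Anything not listed there (and not handled by another tool such as network-manager) simply stays unconfigured.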

It’s quite sad to see that Debian, despite moving from init scripts to systemd for boot and service management, still insists on configuring network interfaces via shell scripts (the aforementioned ifupdown tools), while a mainstream solution in the form of network-manager is available! Why is it recommended as a “Just Works” alternative yet not offered by default? On Red Hat based distributions (say Fedora, CentOS, etc.) the matter is really simple – you get NetworkManager and you’re good to go out of the box. That stands to reason, though, as NetworkManager is a Red Hat project. Still, the “Just Works” approach baffles (and even disturbs) me greatly. “Just Works” sounds like a catch-phrase typical of commercial operating systems. Since those target desktop and laptop computers mainly, it’s enough if an Ethernet interface and/or a wireless interface “just works”. What about servers with multiple NICs, routers, gateways, NATs and VMs, each with its own IP address or set thereof? What then? Oh, right, we can write systemd units for each interface and control them via systemctl. Or use the ip utilities for a similar purpose. Or the deprecated ifconfig, which we shouldn’t use, but still can, because it’s in the repositories of many distributions. Alternatively, we can perform DHCP requests via one of several clients – dhclient, dhcpcd, etc. We end up with a hodgepodge of programs that are best left to their own devices due to incomplete documentation and/or unclear configuration means, with each GNU/Linux distribution shipping its own set of base utilities.

Personally, I feel that’s where the BSDs succeed. You get a clearly separated base system with well-documented and easily configurable tools that are maintained as a whole. Network interface configuration borders on trivial. What’s more, the installer handles wireless connections almost seamlessly. Why is it so difficult on GNU/Linux? At this point, I believe the GNU/Linux community would profit greatly from agreeing on a common “base system”. Red Hat’s systemd is the first step towards the unification of the ecosystem. While I am strongly opposed to systemd, because it gives merely the illusion of improved efficacy by simplifying configuration and obfuscating details, I do think GNU/Linux should be a bit stronger on common standards, at least for system-level utilities.

GNU/Linux and Its Users

I decided to devote this entry to a reflection I made recently while participating in discussions in the Facebook Linux group. I realized, as many have before me, that there is a strong correlation between the maturity and complexity of an operating system and the savviness of its users. The BSDs and more demanding GNU/Linux distributions like CRUX, Arch and Gentoo attract experienced computer users, while Ubuntu and its derivatives mostly entice beginners. As few people express the need to learn the Unix Way, beginner-oriented operating systems (this includes Windows and MacOS X, of course) are far more popular. Consequently, they garner stronger commercial support from hardware and software companies, as they constitute a market for new products.

The truth is, we have all been beginners once. What’s more, unless we’re old enough to remember the ancestral iterations of the original UNIX operating system (I’m not!), we were MacOS X or Windows users long before switching to a modern Unix-like operating system. Alas, as such we have been tainted with a closed-source mindset, encouraging us to take no responsibility for our computers and to solve system-level problems with mundane trial-and-error hackery. Not only is such a mindset counter-productive, it also hampers technological progress. Computers are becoming increasingly crucial in our everyday lives and a certain degree of computer literacy and awareness is simply mandatory. Open-source technologies encourage a switch to a more modern mindset, entailing information sharing, discussions and learning various computer skills in general. The sooner we accustom ourselves to this mindset, the faster we can move on.

The current problem in the GNU/Linux community (much less so in non-Linux Unix communities) is that the entry barrier is being continuously lowered so as to yield a speedier influx of users. Unfortunately, many of these users are complete beginners, not only in terms of Unices, but also in terms of using computers in general. With them the closed-source mentality is carried over and we, the more experienced users, have to deal with it. Some experienced users provide help, while others are annoyed by the constant nagging. The responsibility lies with us to educate newbies and encourage them to embrace the open-source mindset (explained above). However, many of them don’t want that. They want the instant gratification they received when using Windows or MacOS X, because someone convinced them that GNU/Linux can be a drop-in replacement for their former commercial OS. They want tutorial-grade, easy-to-follow answers to unclear, badly formulated questions. Better yet, they want them now, served on a silver platter. We all love helping newbies, but we shouldn’t encourage them to remain lazy. Otherwise, we’ll eventually share the fate of Windows or MacOS X as just another mainstream platform. I cannot speak for everyone, though I would personally prefer GNU/Linux to continue its evolution as a tech-aware platform of the future.

On Deprecating Software

In the open-source world software comes and goes, much like animal and plant species in the biological world. The reasons are various. Software A was written a long time ago, when computers severely lacked performance; it could not easily adjust to modern programming paradigms and had to be forked and rewritten as software B. Another case – developers were few and at one point they lost interest in software C. Years later someone dug up the project, noticed its many uses and decided to breathe new life into it as software D. The story that everyone talks about nowadays follows an entirely different scenario, though.

Once upon a time, there was a Unix sound system called OSS (Open Sound System). It aligned with the Unix style of device handling and was easy to understand. In fact, it was the first sound system that could be called “advanced”. FreeBSD still relies on a modified version 4 of OSS and it’s perfectly fine for daily use. Then came Linux, based on Unix paradigms, though not Unix itself. In very general terms, it did a lot of things differently and required extra abstraction layers for its sound implementation. OSS was considered cumbersome and too low-level to be worthwhile in the long run. Thus, ALSA (Advanced Linux Sound Architecture) was born. For a long while OSS and ALSA co-existed, until OSS was intentionally deprecated. Interestingly, many of the drawbacks of OSS were addressed in OSS v4, making the arguments against it rather moot. However, Linux dominated the open-source world to the point that all OSS-using Unix-based or Unix-like operating systems were marginalized. As a consequence, developers of new sound software primarily targeted ALSA. When I compare it to OSS, there are things it does better and things it does worse. The added abstraction layers theoretically simplify configuration. After all, not everyone needs to know how to address sound I/O at the hardware level. However, due to that abstraction it’s more difficult to troubleshoot cases when sound I/O is misconfigured by default.
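
To make the “Unix style of device handling” point a bit more concrete: under OSS the sound card is literally a file. A few lines of Python are enough to play a tone, assuming a system that exposes an OSS-style /dev/dsp and a Python build that ships the standard ossaudiodev module – a sketch, not production code:

    # OSS in a nutshell: open the device file, set the format, write samples.
    import math
    import struct
    import ossaudiodev

    RATE = 8000
    dsp = ossaudiodev.open("/dev/dsp", "w")
    dsp.setparameters(ossaudiodev.AFMT_S16_LE, 1, RATE)   # 16-bit mono, 8 kHz

    # One second of a 440 Hz sine wave, packed as little-endian signed 16-bit.
    samples = (int(12000 * math.sin(2 * math.pi * 440 * t / RATE)) for t in range(RATE))
    dsp.write(b"".join(struct.pack("<h", s) for s in samples))
    dsp.close()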

Fast forward a few years and some developers now notice that even ALSA is cumbersome and too low-level. Due to the rapid expansion of GNU/Linux into the desktop ecosystem, user expectations have changed and various system components have to follow suit as a result. That includes the sound stack. Lennart Poettering observed that the current solution [ALSA] is flawed and that implementing high-level sound features, such as dynamic multi-output setups or mixing, is difficult. However, he decided not to (or couldn’t?) fix the underlying problems, but rather to build a layer on top of ALSA. Such means of abstraction is likely to add problems rather than subtract them. On the one hand, configuration becomes more intuitive and potentially easier to adjust. On the other hand, the lower-level system (ALSA) still exists and the problems it causes are not addressed, but rather circumvented. Regardless, many projects decided to switch their sound backend from ALSA to the “new cool kid” PulseAudio entirely – Skype, Steam, BlueZ and recently also Firefox, for instance.

Curiously enough, replacing ALSA with PulseAudio effectively only streamlines configuration on desktop computers. It’s not a game-changer that magically solves all of the problems attributed to ALSA or OSS, contrary to the claims PulseAudio proponents make. Can OSS or ALSA handle sound output device hot-plugging? Yes. Can volumes be easily adjusted on a per-application basis? Yes. Can multiple applications play sound to the same output? Yes, indeed! Frankly, instead of broken layers on top of broken layers, I would rather see a fix to the underlying components. Still, PulseAudio is here to stay for good and we need to find ways of dealing with it. My favorite is the apulse shim, which provides a PulseAudio-like interface to applications and directs all output to ALSA – you simply launch a program as, say, apulse firefox. It’s simple and just works.

The big question I would like to pose, though, is whether we should really keep on deprecating software so frivolously. For the majority of cases, both ALSA and OSS can do pretty much the same. Do we then really need something as complex as PulseAudio? Why not a simplified backend, so that application developers live happier lives? Food for thought, I believe.

The Right Tool for the Right Job!

This week’s discussion on DistroWatch and this here blog entry motivated me to articulate certain concerns I have with respect to choosing “the right tool for the right job”. codeinfig makes a couple of extremely valuable points in his writing (linked above). First and foremost, people (and I mean mostly end-users) should learn some old-school, decent modesty. Not everything has to be streamlined ad infinitum. Learning is an important human activity. It stimulates the brain and provides skills useful in the future (Earth turning into an orange-ish nuclear mushroom and whatnot). Secondly, each tool was designed for a specific job and one should really spend some time understanding that job. Case in point, the PDF format. It was created to standardize end-point document presentation (especially for printing), including fonts, images, text, etc. However, that’s about it. It’s not meant to be a dynamic file format, nor one for storing high-quality images. These things it does badly. For these we would be better off using HTML, for instance. However, rather than focusing merely on specific tools and their uses, I would like to discuss the consequences of using tools wrongly.

A great example of a tool is the web browser. We use it every day for most of our modern-world, Web-centric (read: almost all) activities. Unfortunately, as you might have noticed if you use average, off-the-shelf PCs, the Internet has become a place overloaded with data that serves no other purpose than to attract your attention. That data consumes a lot of computer resources for very superficial reasons. I browse the Web to read articles, watch videos and communicate with fellow humans. In other words, to experience the world. Reading requires properly formatted, well-contrasted text, sometimes accompanied by movies and/or images to help the author convey his message. What it does not require is flying title bars, jumping windows, aggressive background animations, etc. Apparently, that’s what we mostly get on the Interwebs nowadays. How about we all throw a big, fat “NO”? How about we inform the people misusing web design for eye-candy that we don’t need all of this bollocks? I dare say JavaScript and CSS abuse should be frowned upon. Another thing wrong with the web browser as a tool is the fact that developers try to cram as many features into it as possible. Chrome/Chromium is a nightmare in that regard. This modern disease is called “featuritis”. Why not then turn a web browser into an operating system, eh? Oh right, we’ve got that already – ChromeOS!

I guess a rant would be useless without a proper take-home message. I believe we should not fall for first impressions and the notion of Swiss Army knife tools. They become inconvenient, broken, slow or just unsafe way too quickly. Let’s look for tools that do one job and do it well. In need of a GUI-enabled media player? There are mpv and mplayer already. Both can run in the terminal as well as with a pretty UI. They do everything a modern media player should. Still not enough? Try xine then! Minimalist, one-job tools are available in every BSD or GNU/Linux system’s repository. They just need to be found.

On System Complexity

I appreciate how one can often draw a parallel between the life sciences and computer science. For instance, complexity has similar features in both fields. Although many aspects, like the transition from aquatic to terrestrial life, are still largely disputed, it is reasonable to believe that complexity is born out of a necessity to adapt to environmental changes or novel needs. This concept translates to technological development quite smoothly. Let us consider the evolution of telecommunication devices as an example. In the past people were pleased to be able to communicate with fellow humans via the telegraph, without the need for letters, which often took weeks or months to be delivered. With time expectations grew, however. Nowadays, it is more or less the norm to not only call and text, but also browse the Web through mobile devices. Though bold, it is fair to claim that the driver of progress/evolution was in this case the growing needs of the elusive end-user.

Technical savvy is considered a blessing by many. It’s like a catalyst or facilitator of ideas. It’s also a curse, though, because it creates a rift between the ones who can (makers) and the ones (users) who depend on the makers for their prosperity. A quasi-feudal system is formed, linking both parties together. The makers may (if personal ethics suggest) aid the users, as they have both the power and the moral obligation to do so. This in turn drives progress, which would otherwise be stunted, because makers often do not require elaborate tools to express themselves (assumption!). Alas, as needs grow, so inherently does the complexity of the tools created to meet them.

It is important to note that the user is a bit of a decisive factor. The system should be complex enough to satisfy the needs of the user deftly, though also simple enough to allow easy maintenance on the side of the makers. A certain equilibrium needs to be reached. In a perfect scenario both parties are content, because the makers can do something good for the society and express themselves in more elaborate ways, while the users live better lives thanks to the technological development sustained by the makers.

Moving on to operating systems, the “complex enough” is usually minimal and many GNU/Linux distributions are able to cover it. What the users expect nowadays is the following:

  • an ability to do a “test run” of the operating system without installing it; after all, the installation might fail and no one wants to lose their data
  • easy means of installing the operating system, without the need to define complex parameters like hard drive partitioning or file system types
  • out-of-the-box support for internal hardware like graphics cards, wireless adapters, auxiliary keyboard keys, etc. 
  • a reasonable selection of software for daily and specialized needs, clearly defined and documented with optional “best of” categories 
  • straightforward use of extra peripherals like printers, USB drives, headphones, etc.
  • clear, accessible and easy to learn interface(s)

Unfortunately, the same GNU/Linux distributions then go out of their way to tip the balance and make the system not “simple enough” to be reliably maintained. For instance, does every single major distribution need to officially support all of the desktop environments? What for? That doesn’t really help the user if he/she is left with too much choice and suddenly has to decide which desktop environment is “better”. Do all of the desktop environments provide the same complete array of functionalities? Of course, in terms of “too much complexity” this is not the only problem! I think there are some Unix-like operating systems (including the BSDs here, too) which do better than others in terms of satisfying the expectations of users:

  • Linux Mint caters to the needs of an average user perfectly. All of the above requirements are fulfilled and the selection of officially supported desktop environments is limited enough that the user should not get confused. In addition, they’re similar in their looks and functionalities.
  • Fedora Linux also does a great job by offering a very streamlined and appealing working environment. It’s geared more towards software developers, though even regular users should find Fedora attractive.
  • Arch Linux (and by extension Manjaro Linux) and FreeBSD (and by extension TrueOS/PC-BSD) do NOT cater to the average user, but offer many possibilities and a good initial setup. Building a full-blown, user-friendly system is a matter of minutes.
  • Debian was always good with the out-of-the-box experience and this has not changed. Granted, the installer has to be prompted to automagically produce a GNOME3-based system.

Ubuntu and its derivatives didn’t make it to the list, because they often break in unpredictable ways, causing headaches even to more technically-inclined people. The lack of consistency between Ubuntu flavors and general over-engineering often prove troublesome. Well, at least to me.

To sum up (and for the TL;DR folk), complexity is an inherent feature of both biology (evolution) and computer science. When building operating systems one should remember that the balance between “complex enough to be easily usable” and “simple enough to be easy to maintain” needs to be kept. Otherwise, we get instabilities, broken packages, lost data and other horrible scenarios…