The (Necessary?) GNU/Linux Fragmentation

I would like to share with you a story of my recent struggles with Debian. They’re partially my fault, but also partially due to the way Debian handles network management, which is quite different from how other GNU/Linux distributions do it.

The story begins with me being happy with a regular desktop install, powered by XFCE4, but then wanting to switch to the less distracting Openbox. I installed Openbox plus extras like the tint2 panel, nitrogen (a background/wallpaper setter) and other lightweight alternatives to XFCE4 components. While I was sweeping up XFCE4 leftovers, “apt autoremove” removed far too many packages, including network-manager. I was instantly left with no network connection and, as I learned later, no means of restoring it. By default, network management on Debian is handled by the ifupdown scripts, which bring “up” the interfaces listed in /etc/network/interfaces and hand them to dhclient for a DHCP lease, or assign a static IP address. On their own, the ifupdown utilities have no means of directing wireless interfaces to wpa_supplicant for WPA-encrypted networks. Nowadays, this is handled by network-manager, which “Just Works”. network-manager uses wpa_supplicant to handle WPA encryption (in addition to many other things), whilst performing the rest of network management itself. This is quite different from running wpa_supplicant directly, which simply failed in my case due to a known regression.
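
For reference, a minimal ifupdown setup of the kind described above might look roughly like this (the interface name and addresses are made-up examples, not taken from my actual machine):

  # /etc/network/interfaces -- hypothetical sketch
  auto lo
  iface lo inet loopback

  # wired interface, handed to dhclient for a DHCP lease
  auto eth0
  iface eth0 inet dhcp

  # or, alternatively, a static assignment
  # iface eth0 inet static
  #     address 192.168.1.10
  #     netmask 255.255.255.0
  #     gateway 192.168.1.1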

It’s quite sad to see that Debian, despite moving from init scripts to systemd for boot and service management, still insists on configuring network interfaces via shell scripts (the aforementioned ifupdown tools), while a mainstream solution in the form of network-manager is available! Why is it recommended as a “Just Works” alternative yet not offered by default? On Red Hat-based distributions (say Fedora, CentOS, etc.) the matter is really simple – you get network-manager and you’re good to go out-of-the-box. That stands to reason, though, as NetworkManager is a Red Hat project. Still, the “Just Works” approach baffles (and even disturbs) me greatly. “Just Works” sounds like a catch-phrase typical of commercial operating systems. Since they target mainly desktop and laptop computers, it’s enough if an ethernet interface and/or a wireless interface “just work”. What about servers with multiple NICs, routers, gateways, NATs and VMs, each with its own IP address or set thereof? What then? Oh, right, we can write systemd units for each interface and control them via systemctl. Or use the ip utilities for a similar purpose. Or the deprecated ifconfig, which we shouldn’t use, but still can because it’s in the repositories of many distributions. Alternatively, we can perform DHCP requests via one of several clients – dhclient, dhcpcd, etc. We end up with a hodgepodge of programs that are best left to their own devices due to incomplete documentation and/or unclear configuration, with each GNU/Linux distribution carrying its own set of base utilities.
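
For the sake of illustration, a hand-rolled systemd unit wrapping the ip utilities could look something like the sketch below; the interface name, addresses and unit name are invented for the example:

  # /etc/systemd/system/static-eth0.service -- hypothetical sketch
  [Unit]
  Description=Static IPv4 configuration for eth0
  After=network-pre.target

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/sbin/ip link set eth0 up
  ExecStart=/sbin/ip addr add 192.168.1.10/24 dev eth0
  ExecStart=/sbin/ip route add default via 192.168.1.1

  [Install]
  WantedBy=multi-user.target

Enable it with “systemctl enable static-eth0.service” and the interface comes up at boot. Workable, but hardly something every user should have to write by hand.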

Personally, I feel that’s where the BSDs succeed. You get a clearly separated base system with well-documented and easily configurable tools that are maintained as a whole. Network interface configuration borders on trivial. What’s more, the installer handles wireless connections almost seamlessly. Why is it so difficult on GNU/Linux? At this point, I believe the GNU/Linux community would profit greatly from agreeing on a common “base system”. Red Hat’s systemd is the first step towards unification of the ecosystem. While I am strongly opposed to systemd, because it gives merely the illusion of improved efficiency by simplifying configuration and obfuscating details, I do think GNU/Linux should be a bit stronger on common standards, at least for system-level utilities.
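
To show just how trivial it is on the BSD side, here is what wired plus wireless networking typically boils down to in FreeBSD’s /etc/rc.conf (device names are examples; the WPA passphrase itself lives in /etc/wpa_supplicant.conf):

  # /etc/rc.conf -- example network settings
  ifconfig_em0="DHCP"
  wlans_ath0="wlan0"
  ifconfig_wlan0="WPA SYNCDHCP"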

GNU/Linux and Its Users

I decided to devote this entry to a reflection I made recently while participating in discussions in the Facebook Linux group. I realized, as many have before me, that there is a strong correlation between the maturity and complexity of an operating system and the savviness of its users. The BSDs and more demanding GNU/Linux distributions like CRUX, Arch and Gentoo attract experienced computer users, while Ubuntu and its derivatives mostly entice beginners. As few people express the need to learn the Unix Way, beginner-oriented operating systems (this includes Windows and MacOS X, of course) are far more popular. Consequently, they garner stronger commercial support from hardware and software companies, as they constitute a market for new products.

The truth is, we have all been beginners once. What’s more, unless we’re old enough to remember the ancestral iterations of the original UNIX operating system (I’m not!), we were MacOS X or Windows users long before switching to a modern Unix-like operating system. Alas, as such we have been tainted with a closed-source mindset, encouraging us to take no responsibility for our computers and to solve system-level problems with mundane trial-and-error hackery. Not only is such a mindset counter-productive, it also hampers technological progress. Computers are becoming increasingly crucial in our everyday lives, and a certain degree of computer literacy and awareness is simply mandatory. Open-source technologies encourage a switch to a more modern mindset, entailing information sharing, discussions and learning various computer skills in general. The sooner we accustom ourselves to this mindset, the faster we can move on.

The current problem in the GNU/Linux community (much less so in non-Linux Unix communities) is that the entry barrier is being continuously lowered so as to yield a speedier influx of users. Unfortunately, many of these users are complete beginners not only in terms of Unices, but also in terms of using computers in general. With them the closed-source mentality is carried over, and we, the more experienced users, have to deal with it. Some experienced users provide help, while others are annoyed by the constant nagging. The responsibility lies with us to educate newbies and encourage them to embrace the open-source mindset (explained above). However, many of them don’t want that. They want the instant gratification they received when using Windows or MacOS X, because someone convinced them that GNU/Linux can be a drop-in replacement for their former commercial OS. They want tutorial-grade, easy-to-follow answers to unclear, badly formulated questions. Better yet, they want them now, served on a silver platter. We all love helping newbies, but we shouldn’t encourage them to remain lazy. Otherwise, we’ll eventually share the fate of Windows or MacOS X as just another mainstream platform. I cannot speak for everyone, though I would personally prefer GNU/Linux to continue its evolution as a tech-aware platform of the future.

On Deprecating Software

In the open-source world software comes and goes much like animal and plant species in the biological world. The reasons are various. Software A was written a long time ago, when computers severely lacked performance. It could not adjust to modern programming paradigms easily, and had to be forked and rewritten as software B. Another case – developers were few and at one point they lost interest in software C. Years later someone dug up the project, noticed its many uses and decided to breathe new life into it as software D. The story that everyone talks about nowadays follows an entirely different scenario, though.

Once upon a time, there was a Unix sound system called OSS (Open Sound System). It aligned with the Unix style of device handling and was easy to understand. In fact, it was the first sound system that could be called “advanced”. FreeBSD still relies on a modified version 4 of OSS and it’s perfectly fine for daily use. Then came Linux, based on Unix paradigms, though not Unix itself. In very general terms, it did a lot of things differently and required extra abstraction layers for its sound implementation. OSS was considered cumbersome and too low-level to be worthwhile in the long run. Thus, ALSA (Advanced Linux Sound Architecture) was born. For a long while OSS and ALSA co-existed, until OSS was intentionally deprecated. Interestingly, many of the drawbacks of OSS were addressed in OSS v4, making the arguments against it rather moot. However, Linux dominated the open-source world to the point that all OSS-using Unix-based or Unix-like operating systems were marginalized. As a consequence, developers of new sound software primarily targeted ALSA. When I compare ALSA to OSS, there are things it does better and things it does worse. The added abstraction layers theoretically simplify configuration. After all, not everyone needs to know how to address sound I/O at the hardware level. However, that same abstraction makes it more difficult to troubleshoot when the default sound I/O configuration is wrong.
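
As a crude illustration of that “Unix style of device handling”: under OSS a sound card is essentially a device file you can write raw PCM to, whereas ALSA routes the same job through its user-space library and tools (the file name below is a placeholder, and /dev/dsp assumes a classic OSS setup):

  # OSS: the card is a file; raw PCM written to it simply plays
  cat music.raw > /dev/dsp

  # ALSA: the equivalent goes through the alsa-lib/aplay layer
  aplay -f cd music.raw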

Fast forward a few years, and some developers now notice that even ALSA is cumbersome and too low-level. Due to the rapid expansion of GNU/Linux into the desktop ecosystem, user expectations have changed and various system components have had to follow suit, the sound stack included. Lennart Poettering observed that the current solution (ALSA) is flawed and that implementing high-level sound features, such as dynamic multi-output setups or mixing, is difficult. However, he decided not to (or couldn’t?) fix the underlying problems, but rather to build a layer on top of ALSA. Such abstraction is likely to add problems rather than subtract them. On one hand, configuration becomes more intuitive and potentially easier to adjust. On the other hand, the lower-level system (ALSA) still exists and the problems it causes are not addressed, but rather circumvented. Regardless, many projects decided to switch their sound backend from ALSA to the “new cool kid” PulseAudio entirely – Skype, Steam, BlueZ and recently also Firefox, for instance.
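
The “layer on top” is quite literal. On a typical PulseAudio desktop, ALSA’s own default device is redirected back into PulseAudio through the ALSA pulse plugin, roughly like this (a stock alsa-plugins snippet, shown here only to make the layering visible):

  # /etc/asound.conf (or ~/.asoundrc)
  pcm.!default {
      type pulse
  }
  ctl.!default {
      type pulse
  }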

Curiously enough, replacing ALSA with PulseAudio effectively only streamlines configuration on desktop computers. It’s not a game-changer that magically solves all of the problems attributed to ALSA or OSS, contrary to the claims PulseAudio proponents make. Can OSS or ALSA handle sound output device hot-plugging? Yes. Can volumes be easily adjusted on a per-application basis? Yes. Can multiple applications play sound to the same output? Yes, indeed! Frankly, instead of broken layers on top of broken layers, I would rather see a fix to the underlying components. Still, PulseAudio is here to stay and we need to find ways of dealing with it. My favorite is the apulse shim, which provides a PulseAudio-like backend for applications and directs all output to ALSA. It’s simple and just works.
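
Using apulse is as simple as prefixing the command, for example:

  # run Firefox against the PulseAudio shim; output goes straight to ALSA
  apulse firefox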

The big question I would like to pose, though, is whether we should really keep deprecating software so frivolously. For the majority of cases, both ALSA and OSS can do pretty much the same things. Do we then really need something as complex as PulseAudio? Why not a simpler backend, so that application developers live happier lives? Food for thought, I believe.

The Right Tool for the Right Job!

This week’s discussion on DistroWatch and this here blog entry motivated me to articulate certain concerns I bear with respect to choosing “the right tool for the right job”. codeinfig makes a couple of extremely valuable points in his writing (linked above). First and foremost, people (and I mean mostly end-users) should learn some old-school, decent modesty. Not everything has to be streamlined ad infinitum. Learning is an important human activity. It stimulates the brain and provides skills useful in the future (Earth turning into an orange-ish nuclear mushroom and whatnot). Secondly, each tool was designed for a specific job and one should really spend some time to understand that job. Case in point, the PDF format. It was created to standardize end-point document presentation (especially for printing), including fonts, images, text, etc. However, that’s about it. It’s not meant as a dynamic file format, nor for storing high-quality images. These things it does badly. For these we would be better off using HTML, for instance. However, rather than focusing merely on specific tools and their uses, I would like to discuss the consequences of using tools wrongly.

A great example of a tool is the web browser. We use it every day for most of our modern-world, Web-centric (read: almost all) activities. Unfortunately, as you might have noticed if you use average, off-the-shelf PCs, the Internet has become a place overloaded with data that serves no other purpose than to attract your attention. That data consumes a lot of computer resources for very superficial reasons. I browse the Web to read articles, watch videos and communicate with fellow humans. In other words, to experience the world. Reading requires properly formatted, well-contrasted text, sometimes accompanied by movies and/or images to help the author convey his message. What it does not require is flying title bars, jumping windows, aggressive background animations, etc. Apparently, that’s what we mostly get on the Interwebs nowadays. How about we all throw a big, fat “NO”? How about we inform the people misusing web design for eye-candy that we don’t need all of this bollocks? I dare say JavaScript and CSS abuse should be frowned upon. Another thing wrong with the web browser as a tool is that developers try to cram as many features into it as possible. Chrome/Chromium is a nightmare in that regard. This modern disease is called “featuritis”. Why not then turn the web browser into an operating system, eh? I forgot, we’ve got that already – ChromeOS!

I guess a rant would be useless without a proper take-home message. I believe we should not fall for first impressions and the notion of Swiss Army knife tools. They become inconvenient, broken, slow or just unsafe way too quickly. Let’s look for tools that do one job and do it well. In need of a GUI-enabled media player? There are mpv and mplayer already. Both can run in the terminal as well as with a pretty UI. They do everything a modern media player should. Still not enough? Try xine then! Minimalist, one-job tools are available in every BSD or GNU/Linux system’s repository. They just require finding.
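
To give a concrete feel for it, the very same mpv binary covers both the “pretty UI” and the bare terminal use case (file names are placeholders):

  # play a movie with the on-screen controller
  mpv movie.mkv

  # play an album from the terminal, with no video output at all
  mpv --no-video album.flac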

On System Complexity

I appreciate how one can often draw a parallel between the life sciences and computer science. For instance, complexity has similar features in both fields. Although many aspects, like the transition from aquatic to terrestrial life, are still largely disputed, it is reasonable to believe that complexity is born out of a necessity to adapt to environmental changes or novel needs. This concept translates to technological development quite smoothly. Let us consider the evolution of telecommunication devices as an example. In the past, people were pleased simply to be able to communicate with fellow humans via the telegraph, without the need for letters, which often took weeks or months to be delivered. With time, however, expectations grew. Nowadays, it is more or less the norm to not only call and text, but also browse the Web through mobile devices. Though bold, it is fair to claim that the driver of progress/evolution was in this case the growing needs of the elusive end-user.

Technical savvy is considered a blessing by many. It’s like a catalyst or facilitator of ideas. It’s also a curse, though, because it creates a rift between the ones who can (the makers) and the ones who depend on the makers for their prosperity (the users). A quasi-feudal system is formed, linking both parties together. The makers may (if personal ethics so suggest) aid the users, as they have both the power and the moral obligation to do so. This in turn drives progress, which would otherwise be stunted, because makers often do not require elaborate tools to express themselves (an assumption!). Alas, as needs grow, so inherently does the complexity of the tools created.

It is important to note that the user is something of a decisive factor. The system should be complex enough to satisfy the needs of the user deftly, though also simple enough to allow easy maintenance on the side of the makers. A certain equilibrium needs to be reached. In a perfect scenario both parties are content, because the makers can do something good for society and express themselves in more elaborate ways, while the users live better lives thanks to the technological development sustained by the makers.

Moving on to operating systems, the “complex enough” part is usually minimal and many GNU/Linux distributions are able to cover it. What users expect nowadays is the following:

  • an ability to do a “test run” of the operating system without installing it; after all, the installation might fail and no one wants to lose their data
  • easy means of installing the operating system, without the need to define complex parameters like hard drive partitioning or file system types
  • out-of-the-box support for internal hardware like graphics cards, wireless adapters, auxiliary keyboard keys, etc. 
  • a reasonable selection of software for daily and specialized needs, clearly defined and documented with optional “best of” categories 
  • straightforward use of extra peripherals like printers, USB drives, headphones, etc.
  • clear, accessible and easy to learn interface(s)

Unfortunately, the same GNU/Linux distributions then go out of their way to tip the balance and make the system not “simple enough” to be reliably maintained. For instance, does every single major distribution really need to officially support all of the desktop environments? What for? It doesn’t really help the user if he/she is left with too much choice and suddenly has to decide which desktop environment is “better”. Do all of the desktop environments provide the same complete array of functionality? In terms of “too much complexity” this is not the only problem, of course! I think some Unix-like operating systems (including the BSDs here, too) do better than others at satisfying the expectations of users:

  • Linux Mint caters to the needs of an average user perfectly. All of the above requirements are fulfilled, and the selection of officially supported desktop environments is narrow enough that the user should not get confused. In addition, they’re similar in their looks and functionality.
  • Fedora Linux also does a great job by offering a very streamlined and appealing working environment. It’s geared more towards software developers, though even regular users should find Fedora attractive.
  • Arch Linux (and by extension Manjaro Linux) and FreeBSD (and by extension TrueOS/PC-BSD) do NOT cater to the average user, but offer many possibilities and a good initial setup. Building a full-blown, user-friendly system is a matter of minutes.
  • Debian has always been good with the out-of-the-box experience and that has not changed. Granted, the installer has to be prompted to automagically produce a GNOME3-based system.

Ubuntu and its derivatives didn’t make it to the list, because they often break in unpredictable ways, causing headaches even to more technically-inclined people. The lack of consistency between Ubuntu flavors and general over-engineering often prove troublesome. Well, at least to me.

To sum up (and for the TL;DR folk), complexity is an inherent feature of both biology (evolution) and computer science. When building operating systems, one should remember that the balance between “complex enough to be easily usable” and “simple enough to be easy to maintain” needs to be kept. Otherwise, we get instabilities, broken packages, lost data and other horrible scenarios…

Into the OSS Development Fray!

I haven’t written a single entry recently, as I was very busy polishing my Python and tcsh scripting skills. Many apologies for that! Meanwhile, I am trying to assemble a simple NAS (Network Attached Storage) box from the many bits and pieces I have at home. Quite the task, I have to say. Despite my day job being extremely time-consuming, I try to hone my programming skills as much as possible so that I can join the vast open-source community of developers and finally make a difference. Not to mention all of the badly written code that’s been around for ages.

As such, I decided to focus on Python as my primary language. It’s extremely simple, has a clear and easy-to-comprehend syntax, and there is currently a significant need for it. As my second language I will probably pick C or Java, though I’m leaning heavily towards C. Java is highly portable and would allow me to produce APIs and software for mobile devices as well, though it would not get me much more than I already have with Python. On the other hand, C and C++ complement Python wonderfully. That’s the most common combination – Python for the program’s logic, C/C++ for algorithms and engines. Many consider Python slow, though that matters little if all it does is link the input to the number-crunching C code or provide a UI for improved ease of use.
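
As a minimal sketch of that division of labour, here is Python calling straight into a C library via ctypes; the library name assumes a typical glibc-based GNU/Linux system:

  # python_calls_c.py -- toy example of the Python/C split
  from ctypes import CDLL, c_double

  # load the system C math library (name assumes glibc on GNU/Linux)
  libm = CDLL("libm.so.6")
  libm.sqrt.restype = c_double
  libm.sqrt.argtypes = [c_double]

  # Python handles the program's logic, C does the number crunching
  print(libm.sqrt(2.0))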

Currently, my platform of choice is the FreeBSD operating system. It’s great for servers, especially when data protection is vital, and has a reasonable selection of tools for virtualization. Unfortunately, it’s not really that popular among OSS developers. Much of the enterprise-class software seems to be designed for, well, enterprise-class GNU/Linux distributions like Ubuntu Server, SLES (SuSE Linux Enterprise), CentOS or RHEL (Red Hat Enterprise Linux). That’s entirely fine, though if I want to get a well-paid job I have to make the switch. systemd bothers me greatly and I feel the GNOME way is merely the flashy, eye-candy way. However, since we all more or less enjoy the boons of capitalism, money comes first.

The good thing about working with FreeBSD is that one learns a great deal about Unix system management, and this knowledge can easily be applied to other Unix-like operating systems such as GNU/Linux. In addition, I intend to carry over some of my tools, like tcsh and Emacs, to the new environment. I will keep the FreeBSD installation on a separate computer, of course, and still use it when possible. For software development I will switch to Fedora and later on to CentOS, as this is also what my current computer lab uses.

Fedora does a really good job of promoting open-source software and it’s definitely geared towards developers. Some of the attention goes to bleeding-edge, experimental software like Wayland, though a more conservative approach is still possible, especially with its cousin CentOS. I hope my experience with Fedora works out fine.

Date with the Gentoo Oxen – Part Trois

I told myself this time would be my last. Before, I wavered and bailed, because I lacked commitment. I prayed this time would be different, that I had honed myself through the CRUX experience. Knowing Gentoo rather well already, I dedicated a full weekend to its installation. Usually it doesn’t take that long, though it’s reasonable to expect things to go wrong at some point. I began my courting attempts with the Archbang Linux live image. Though not Gentoo proper, it makes for a comfortable starting point on UEFI systems, as X11 is already set up, and in case something goes terribly wrong with the Gentoo chroot, problems can be looked up on the Internet swiftly. Moreover, Gentoo’s live images have so far not supported UEFI, making a GRUB2 installation with EFI support impossible. Some things to keep in mind prior to beginning work with Gentoo Linux:

  • The installation process is quite tedious and requires good understanding of Unix subsystems, OpenRC specifically.
  • Certain applications like Chromium, WebKit or GCC take a really long time to build. It’s highly advisable to install them overnight (or acquire a decent rig).
  • Knowledge of every single USE flag is not mandatory, though an idea of which applications provide which functionalities and how USE flags describe them is. Alternatively, one needs a means of quickly looking flags up to make sure they will not collide (see the sketch after this list).
  • Manual kernel configuration usually entails good understanding of one’s computer.
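
Regarding the USE flag point above, a quick “pretend” run is usually enough to see which flags a build would use before committing to a long compile (the package atom and flags below are merely examples):

  # global USE flags live in /etc/portage/make.conf, e.g.:
  #   USE="X alsa -systemd -pulseaudio"

  # preview a build, including its USE flags, without installing anything
  emerge --pretend --verbose www-client/firefox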

With the above points in mind, I went for another date with Larry the cow. As expected, this time was truly different. Larry felt charming and smart. I could definitely sense the appeal of Gentoo’s tremendous flexibility. Things did go wrong at some point, though in a recoverable fashion. USE flags were tricky as ever, but I did manage to get them right without breaking the system altogether. This made me shed buckets of manly tears. Really!

After a while, I started seeing the limitations, though. Larry doesn’t do Java, at least not the way other GNU/Linux distributions do. He (it?) said I needed to cut a deal with Oracle and sign a license, and even then I would only get the binaries. Not quite how I envisioned my Eclipse work. Far from smooth and freedom-friendly. Not to mention the downtime due to long compiles. I should probably get myself a server to do the heavy lifting for me, though why bother in the first place? No relationship is without thorns, I guess.

To wrap things up, the prognosis looks good. Larry is a cow with gender disabilities, though he swims as fast as a gentoo penguin. The minor inconveniences I can live with, as in return I get a flexible, rolling-release distribution that I can tailor perfectly to my own needs. It boots fast and actually runs faster than most GNU/Linux distributions, too. Not to mention the blessedly boring lack of those Ubuntu-esque ‘oh, where did my config go’ moments. Most importantly, the BSD feels are there. Loud and clear, echoing through the ports tree down to the Unix-inspired system management practices. I’m lovin’ it!