Date with the Gentoo Oxen – Part Quatre

This is already my 4th iteration of “Why I love Gentoo and really enjoy the pain of working with it”. Therefore, I decided to do a brief recap of how my impressions and understanding of this unique and enchanting GNU/Linux distribution have changed. My initial experiments could easily be summed up with a meme we all learned to appreciate in times of anxiety.

I had no idea what USE flags are or do. Kernel configuration was black magic, and so was building a GNU/Linux system from scratch. Yes, I managed to install Arch Linux before trying out Gentoo, but that alone was still insufficient. I might as well have gone “full throttle” and read the whole Linux From Scratch (LFS) handbook at that point. Come to think of it, LFS ought to be mandatory for achieving computing Zen. So, my first (and second, in fact) Gentoo installation was an absolute mess. I wanted to play the “hardcore hacker” and blocked all USE flags from the selected profile via “-*”. This was a bold (read: dumb) move and it quickly spiralled into an oblivion of circular dependencies. My third attempt was a bit less dramatic and, besides odd configuration problems, Chromium constantly crashing and unbootable kernels, I could actually use the system I sculpted myself. Vae victis!

Now comes my 4th attempt, which thus far is surprisingly successful. To make my life harder I decided to utilize my spare MacBook (late 2009) with its pesky integrated Nvidia graphics and Broadcom wireless. The first thing I noticed was that starting from an Arch Linux liveCD is possibly the easiest way to install Gentoo, believe it or not. One tty for messing with partitions and the chroot, and another with links/elinks/lynx displaying the Handbook. Since I had installed Gentoo 3 times already, preparing the hard drives and entering the chroot environment was easy enough. Though not designed for Gentoo, Arch has an “arch-chroot” script, which does all of the extra /dev mounting steps and the chroot itself for the user. Furthermore, I had never used genkernel extensively, so I decided to let it choose some “sane defaults” for me and do its compilation magic. That also gives a reasonable .config to work with later on. The full installation took me 3 mornings and 3 evenings, because of $dayjob. It went quite smoothly and on my 4th morning I was ready to boot into my freshly built environment.
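For the record, what “arch-chroot” does can be reproduced by hand in a few lines, following the usual Handbook steps. A minimal sketch (the /dev/sda3 device and /mnt/gentoo mount point are examples, adjust to your own layout):

```shell
# Roughly what arch-chroot automates, done by hand.
mount /dev/sda3 /mnt/gentoo
mount -t proc proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
# Copy DNS settings so networking keeps working inside the chroot.
cp -L /etc/resolv.conf /mnt/gentoo/etc/
chroot /mnt/gentoo /bin/bash
```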

While the installation and initial setup are considered tedious, many steps can be automated with well-designed Shell scripts (see: arch-chroot). Gentoo developers are generally against excessive automation, though for personal use I think it’s perfectly reasonable. Once a full installation is complete, cloning it is as simple as preparing a tarball of the whole filesystem or utilizing the power of the “cp” copy command. As usual, I had problems with the Broadcom BCM4322 wireless adapter. Apparently, the default kernel didn’t recognize the chip properly and I had to go through the kernel .config anyway. I guess compiling one’s own kernel is crucial after all. Thankfully, after disabling all of the unnecessary features my system would boot and run noticeably faster. Also, the Broadcom adapter was recognized by the b43 driver and has worked ever since.
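For anyone fighting the same chip, the kernel options relevant to the b43 driver look roughly like this. This is a sketch from memory, so verify the exact symbols against your kernel version:

```shell
# Excerpt of a kernel .config for Broadcom b43 wireless (sketch).
CONFIG_CFG80211=m
CONFIG_MAC80211=m
# Buses used by BCM43xx chips; SSB covers the older ones.
CONFIG_SSB=m
CONFIG_BCMA=m
CONFIG_B43=m
```

The driver additionally needs proprietary firmware, commonly extracted with the b43-fwcutter tool.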

One of the key features of Gentoo is its local and global USE flags. They allow the user to precisely select which components a particular piece of software should be built with. Firstly, this allows for supreme optimization when targeting embedded systems. Secondly, on servers every overlooked or unmaintained software feature is a potential point of entry for attackers. Thirdly, most GNU/Linux distributions ship software with only the “sane defaults” enabled, and we users cannot easily inspect what those defaults are. This is a major problem of the GNU/Linux ecosystem, making meaningful comparisons between distributions nearly impossible. Bearing all of those points in mind, I am actually amazed that major distributions like Ubuntu or Fedora don’t use Portage for building applications. Don’t want proprietary code in your software – block all related USE flags. Want different system branches (desktop flavors, server, etc.) – generate USE flag profiles. Heck, it’s really that simple!
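To give a taste of how this works, global USE flags go into /etc/portage/make.conf and per-package overrides into /etc/portage/package.use. The flag selection below is purely illustrative:

```shell
# /etc/portage/make.conf -- global USE flags (illustrative selection)
USE="X alsa -bluetooth -systemd"

# /etc/portage/package.use/chromium -- per-package override
www-client/chromium proprietary-codecs -cups
```

After changing flags, something like “emerge --update --deep --newuse @world” rebuilds the affected packages.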

Despite the obvious advantages, throughout my toils I learned that Gentoo is not for the faint of heart, nor for casual computer users, who just want to browse the Web and enjoy their lives. Also, a decent machine (say, core i*, 4+GB RAM) is a must, otherwise compile times get really annoying. For now I can bear with the drawbacks, since the flexibility of Gentoo is truly worth it.

Enter s6 Supervisor

I run Fedora and Manjaro (since yesterday) as my day-to-day GNU/Linux operating systems. Though it still intimidates me somewhat, I use systemd on both of them. However, from time to time I have those grand dreams of a “better init”. System V init seems like a hodgepodge of Shell scripts and has so many downsides that people often feel prompted to write whole blogs about them. Going back to the ways of the init past doesn’t seem viable (or at least convenient) anymore. In addition, modern GNU/Linux distributions crawl with helper scripts and utilities that are supposed to make the lives of users easier. They of course do, but then there is the noticeable overhead and dramatically increased complexity. Obviously, no sane GNU/Linux developer or user wants their favorite open-source creation to turn into the Frankenstein that is Microsoft Windows.

An additional incentive for considering alternative system service supervisors and inits is the rising trend of building Linux containers. Light system-level virtualization has been a standard in non-Linux Unices for decades (Solaris’ zones, FreeBSD jails, etc.) and now the GNU/Linux crowd wants a fair share of it as well. Containers are usually meant to serve a limited number of tasks (kernel building, firewall testing, etc.), so running the whole systemd suite is overkill. Also, it was never designed for simplicity in the first place. Update: recently, systemd got a utility called systemd-nspawn. It’s the systemd equivalent of a zone/jail builder tool. Among the systemd alternatives, OpenRC definitely deserves a mention as the prominent “other” supervisor. It powers Gentoo, Manjaro’s OpenRC edition(s) and other GNU/Linux distributions which shy away from systemd. However, there are plenty of less popular supervisors like uselessd (a pun on systemd), runit, nosh and s6. The last probably deserves the most attention, as it is often used in Linux containers with great success. Maybe eventually someone will build a complete virtualization technology based on it, who knows.

One can read about s6 primarily here, but also some container success stories here and here. It’s useful to know that the groundwork has been laid and that s6 is indeed viable in terms of light process supervision. However, I would like to go a step further and set up a whole GNU/Linux operating system on it. Firstly, in a chroot environment, then on a separate partition. It will definitely not be easy, but may prove worthwhile for the entirety of the GNU/Linux community. Several points need to be considered and addressed beforehand:

  • s6 doesn’t have its own init binary, but one has already been written by members of the community. Alternatively, /bin/sh can serve as the simplest init substitute. It then hands over system control to s6-svscan and things continue smoothly from there.
  • The three init stages (booting, running, shutting down) need to be wired up to the basic utilities, otherwise the system will not boot fully, will lack some processes or will require a hard poweroff.
  • Common processes such as NetworkManager, policykit, wicked, dbus, etc. need to be made compatible with s6. Some additional services can be launched via a desktop manager, but network connectivity is a must already at the command line.
  • The new supervisor (s6) should not break existing functionalities people are used to, like auto-mounting devices or handling keyboard key events.
  • The whole setup process needs to be documented and partially automated to ease reproducibility and maintenance.
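To make the service point more concrete: under s6 a service is just a directory containing an executable run script, and s6-svscan supervises a directory of such services. A hypothetical dbus service could look like this (the /service path is an assumption):

```shell
#!/bin/sh
# /service/dbus/run -- run script supervised by s6.
# The daemon must stay in the foreground; s6 handles the supervising.
exec dbus-daemon --system --nofork
```

Stage 1, whether a dedicated init binary or a plain /bin/sh script, would then end with something like “exec s6-svscan /service”.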

Update: I believe I did not do enough justice to other alternative inits and supervisor suites. Runit is actually worth considering on equal footing with s6, because it was built with Unix system management in mind and has already found its testbed in Void Linux. Not to mention, it has been tested on several other distributions as a drop-in replacement for systemd. Finally, it has its own runit-init utility that works as the PID 1 init substitute.

Manjaro – The Lazy Geek’s Delight


After tackling RPM packaging for Fedora I decided it was time for something leaner and simpler. That, and the fact that Fedora’s packaging tools make working with git repositories a pain in the Great Distal. Unlike scripting PKGBUILDs for Arch-based distributions, which is just too easy. The only easier approach would probably be CRUX’s Pkgfiles. Anyhow, I decided to celebrate my new installation of Manjaro Linux 16.10-dev with a quick scrot…erm, screenshot (above).
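For comparison, a PKGBUILD really is just a plain Shell template. A minimal sketch with made-up package name, URL and checksum:

```shell
# Minimal PKGBUILD sketch -- pkgname, url and source are placeholders.
pkgname=hello-example
pkgver=1.0
pkgrel=1
pkgdesc="Example package"
arch=('x86_64')
url="https://example.org/hello"
license=('GPL')
source=("$url/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
    cd "$pkgname-$pkgver"
    ./configure --prefix=/usr
    make
}

package() {
    cd "$pkgname-$pkgver"
    make DESTDIR="$pkgdir" install
}
```

Running “makepkg -si” then builds and installs it in one go.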

The installation process itself is straightforward and will get me/you/her from the liveCD’s Calamares screen to a newborn boot within 15–30 minutes. Then of course, one proceeds with adding favorite programs. The default desktop environments are KDE, XFCE4 or CLI (hehe…), though the development version comes as a slimmed-down XFCE4 (no Parole, Ristretto, etc.). Additional community editions provide more window managers and desktop environments, all designed in a very consistent manner. For instance, the JWM (Joe’s Window Manager) edition utilizes ncurses or CLI-based tools for network and package management. Perfect for my aging Asus Pundit PH1!

Whenever I use Arch or Manjaro, I feel there are no boundaries to how much one can do with a GNU/Linux operating system. Especially since the AUR (Arch User Repository) is just a few clicks away.

  • We can game (Steam, lots of open-source games in the main repositories and AUR)
  • We can watch videos (lots of media players and codecs in main repositories)
  • We can develop (up-to-date programming language stacks, especially Java in main repositories)
  • We can build a server (few processes running in the background, simple system management, easy access to OpenZFS via AUR, up-to-date LAMP components)

Alright, I might be biased with the “we can build a server” bit, though I am actually tempted to try setting up a Manjaro-based server. Anyhow, openSUSE or CentOS are possibly more suited for servers, as they have a myriad of CLI and GUI based utilities for system administration. Then again, that’s pricey overhead on limited hardware. One should go with FreeBSD for servers. Of course, I’m biased in that as well!

Lastly, I love Arch-based distributions, Manjaro especially, for their relative stability. Programs don’t randomly segfault, provided they are properly maintained as parts of desktop environments. The last major problems I remember were due to systemd on Arch proper, and even then things were sorted out quickly. Manjaro puts itself in a good filtering position, as packages from the Arch repositories are further tested and pass through the Unstable and Testing branches before landing in the final Stable branch. It’s much harder for things to just go wrong.

To wrap things up, I believe Manjaro Linux is a very reasonable pick for those of us computer geeks who don’t have the time or luxury to set up an Arch Linux box from scratch, or just want some rogue-ish shine. Joes, Smiths and the Does (I mean John and Jane, of course) will also profit thanks to the many simple GUI tools for configuring drivers, kernels, language, etc. Imagine Ubuntu, just without the silly release codenames and mandatory fix-after-upgrade moments. Sounds encouraging, no?

The (Final) Push to Fedora

After months of shifting between different Unix-like operating systems I decided to finally settle down. My experience with FreeBSD thus far was flawless and positively incomparable to many a GNU/Linux distro: setting up highly robust RAID arrays with ZFS, sandboxing services via jails, building a resilient firewall with PF. Everything was simply lovely! However, the community is much smaller than that of GNU/Linux, and everywhere I look employers typically search for GNU/Linux programmers. Heck, many have never even heard of or read about FreeBSD, and if they have, their knowledge is pretty limited. “Ah yes, that server Unix that Netflix uses, right?” It saddens me greatly, but that’s the reality. If it were up to me, I would run FreeBSD on every single server, be it Sun’s legacy SPARC or a brand new Intel Xeon cluster with petabytes of hard drive space. Alas, I’m not the one calling the shots.

While using CRUX and Arch Linux I also realized that my honest warmth for FreeBSD came from the fact that I actually wanted to use a technically challenging Unix-like OS. Gentoo, Arch and CRUX provide that challenge equally well! The added bonus is that they still belong to the GNU/Linux realm. Thus, troubleshooting can be done with the assistance of the whole user community. Finally, there are distributions tailored to servers (SLES, CentOS, ClearOS, RHEL, Debian, etc.) and desktop systems (Fedora, xUbuntu, Manjaro, Mageia, Linux Mint, etc.) alike. One kernel (though with a myriad of configuration options) + many init systems + tons of tools for everything. It’s chaos, but one we enjoyed as kids when building colorful towers out of Lego blocks. We can actually relive those moments in GNU/Linux thanks to Linux From Scratch and Beyond Linux From Scratch. Quite exciting, isn’t it?

All that being said, I decided to go for Fedora. It offers a great balance between low-level system management and development, and the ease of use typical of desktop distributions. The double-edged blade of licensing is actually an incentive to me. I do use proprietary software from time to time, but would like to avoid it whenever a possibility arises. Its focus on new Unix-like technologies is an additional boon. From there I can easily migrate to CentOS, Red Hat Enterprise Linux or Oracle Linux if need be. It’s a well-integrated network of similar GNU/Linux distributions covering different fields. Quite obviously, I will not forget about the Unix Way and the lessons taught to me by FreeBSD. I will always follow the principles of simplicity, good design practices and sane programming. What I’ve learned about system management will come in handy, no doubt. Who knows, maybe I’ll be able to help in fixing systemd-related issues or in showing people that there are genuinely Unix-like approaches to certain problems.

Forking – The Mirrored Efforts

I wrote about software forking before, but some new ideas came to my mind, so I decided to share them with the community. Forking is one of those phenomena which bridge the gap between sociology, psychology, biology, technology and many related fields of study. Therefore, it is not unique to the open-source world, contrary to what many might think. People have investigated and artificially produced forks of various kinds for generations. Also, evolution is nature’s way of forking, testing and establishing traits in organism populations. Following that thought, the open-source universe is really just a highly varied ecosystem. Perfect material for studies! It interests me because the nature of both a biological and an open-source ecosystem is expressed in its capacity, limitations and governing mechanisms. They’re very similar, though some features don’t translate directly. On top of that I would like to add the current turbulent transitions in the open-source world brought about by technologies such as systemd. Biological ecosystems have those too, just in a different form. Onward then!

As I wrote in my previous posts, I consider forking to be a key part of the open-source universe. If we don’t fork, test, explore, etc. we don’t gain anything. Modern technologies become standards, to become stagnant, to end up as relics of the past. If we want to avoid this “stiffening”, we need to fork and do it often. I actually admire people who have sufficient capabilities to build their own distributions. It sure is challenging! Then I look at the BSD world and the GNU/Linux world and see a lot of effort going to waste for all the wrong reasons.

The open-source ecosystem has a limited capacity expressed in manpower. This has been showing a lot recently, as more and more GNU/Linux distributions drop 32-bit architecture support or offer a smaller selection of desktop environments. At this rate there might not be enough effort for “just this one distro” in the near future. Another problem I see is the motivation behind forking. People should fork when they have some technology that’s worth showcasing. Conversely, we have a lot of GNU/Linux distributions which merely look different, but don’t seem to contribute anything interesting. Why were they created then? Surely, there is some merit to them, no? There is a reason for it, though. New generations of open-source citizens have expectations focused on the visage of a distribution, not its technicalities. Is your distro user-friendly? My sister only knows Windows 10, would she be able to use your distro? Do you support KDE, GNOME, LXDE, XFCE, that-other-desktop-environment, etc.? I like choice. Do you guys have easy-to-use graphical tools for everything? I don’t like the command line, because it’s scary. Many more similar questions are asked on forums and chats every day. I might sound bitter, but as an “old-geezer” type of computer user, I find the concerns of many GNU/Linux users superficial. That’s the current trend, though. A single man cannot stop the flow of a river, much less that of an ocean.

Recently, a DistroWatch follower exclaimed to me that he would rather read about endless version bumping for various software packages than key kernel advancements. Well, if he’s more interested in useless numbers than in potential improvements in hardware compatibility or system performance, then we have nothing to talk about. That also showed me the gap between GNU/Linux people of contrasting dispositions. It’s sad, actually. This silly drive for distributions’ looks and other shallowness tramples the real, technical reason for forking. More importantly, it abolishes the evolutionary pressure that leads to actual progress (selection of more technically “fit” distributions or inventions) and traps the process of forking in a hamster wheel. We can all see how the same issues arise over and over again, because everyone has to have their own implementation of a specific desktop environment. By contrast, Arch Linux is a nice example of how going with the “sane defaults” produces a stable and highly reproducible distribution base, lowering the potential costs of forking.

What pains me the most about the current state of forking is that in the GNU/Linux world it is much easier for distributions to survive if they are flashy and garner users through visuals. Naturally, such users will not contribute back to the distribution, but at least they constitute a sufficient user base. Technical solutions (forking for technical reasons?) are considerably less popular, even in favorable conditions. Case in point: the birth of the Manjaro OpenRC effort as an alternative to systemd. It received initial acclaim as worth pursuing, but its overall popularity is marginal compared to the standard systemd editions of Manjaro. The question of the day is – how do we stop the pointless forking and focus on real, technical forking?

The BSD universe is completely different. Forks are less common and when they do happen, they always have a very clear niche – firewall, NAS system, cloud server, etc. BSD people understand and anticipate the costs of forking much better. That’s one of the reasons why I believe I belong more to the BSD community than to the GNU/Linux community. However, I still use distributions of both, depending on the task. Although not a fan of systemd, I do use Fedora for development, as it’s arguably at the forefront of GNU/Linux novelties. My rock-solid system of choice is FreeBSD, though.

Packaging for Unix


For a while now I have felt this urge to finally give back to the Unix community in the form of documentation or new software. Unfortunately, despite my knowledge, writing proper documentation still intimidates me, so I decided to start with building packages first. Among the many GNU/Linux and BSD operating systems, some are extremely easy to package for. CRUX, Arch Linux and Gentoo, while requiring considerable Unix experience, have extremely lean packaging systems. What’s more, CRUX and Arch Linux are good examples of how package templates should look. They use basic text files, easy to read and even easier to parse with a programming language of one’s choice. No wonder the AUR (Arch User Repository) is perhaps the biggest GNU/Linux repository in existence. In contrast to these “geek distros”, packaging for Debian was a nightmare to me. In the past I tried helping out the Devuan cause by trimming down XFCE4 and removing the various systemd “poisons”. Installing the required -devel packages, preparing the environment and finally building was far too complex, in my opinion. There were too many files with too many interrelated parameters. In the end I failed, while a similar process under CRUX took only minor tweaks to the ./configure flags (I wanted to build without dbus and policykit) and 1 hour of compile time.
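A CRUX Pkgfile is even terser than a PKGBUILD; again a sketch with placeholder names, and the --without-dbus switch standing in for whatever the real configure flags were:

```shell
# Pkgfile sketch -- name, URL and maintainer are placeholders.
# Description: Example package
# URL: https://example.org/hello
# Maintainer: Jane Doe, jane at example dot org

name=hello-example
version=1.0
release=1
source=(https://example.org/hello/$name-$version.tar.gz)

build() {
    cd $name-$version
    ./configure --prefix=/usr --without-dbus
    make
    make DESTDIR=$PKG install
}
```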

It’s fully understandable that each GNU/Linux distribution follows its own packaging practices and that security and integrity are perhaps of highest importance. This is typically done via various signing mechanisms, the simplest of which is a mere MD5 hash sum. Curiously enough, an MD5 sum is in most cases sufficient for catching accidental corruption, though a SHA256 or SHA512 sum is of course better (and actually resistant to deliberate tampering). Hence, a simple Shell script to fetch the sources and a hash sum for integrity verification should be enough to build a software package. With that in mind, I decided to give Fedora another go – as a programming platform and as a packaging environment.
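Such a verification step is only a few lines of Shell; a minimal sketch using sha256sum from GNU coreutils (the function name is my own):

```shell
#!/bin/sh
# verify_source: compare a downloaded file's SHA-256 sum with the
# expected value from the package template. Prints OK on a match,
# MISMATCH (and a non-zero exit status) otherwise.
verify_source() {
    file=$1
    expected=$2
    actual=$(sha256sum "$file" | cut -d ' ' -f 1)
    if [ "$actual" = "$expected" ]; then
        echo "OK: $file"
    else
        echo "MISMATCH: $file" >&2
        return 1
    fi
}
```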

For reasons that Linus Torvalds knows all too well, Fedora is a fantastic operating system for developers. It has up-to-date packages, sane administration practices and robust packaging tools. I found myself a good case study in the form of OpenTTD (an open-source remake of Transport Tycoon Deluxe). It so happens that the game and its graphics libraries are available in the repositories, but the sound files are not. After half an hour or so of build environment preparations I was ready to write my first .spec (specification) file. Fedora’s RPM templates (.spec files) have a very intuitive, Unix-friendly layout. In addition, a well-documented macro system helps improve ease of maintenance. The main tool, rpmbuild, makes building packages a breeze. Once the package is ready, it can be submitted to the COPR system for initial distribution. Everything was made smooth and easy for the Unix everyman.
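For flavor, the rough skeleton of such a .spec file; everything below is a simplified sketch with placeholder names, not the actual OpenTTD spec:

```spec
# RPM .spec skeleton (simplified sketch, placeholder package).
Name:           hello-example
Version:        1.0
Release:        1%{?dist}
Summary:        Example package
License:        GPLv2+
URL:            https://example.org/hello
Source0:        %{url}/%{name}-%{version}.tar.gz

%description
A placeholder description.

%prep
%autosetup

%build
%configure
%make_build

%install
%make_install

%files
%{_bindir}/hello

%changelog
* Mon Oct 03 2016 Jane Doe <jane@example.org> - 1.0-1
- Initial package
```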

In the past I had a lot to say about Fedora due to its bleeding-edge approach to software stability and its use of systemd. However, from a broader perspective I now begin to comprehend why this “evil” was necessary. To improve adoption and efficiency in software development, a certain degree of freedom needs to be sacrificed. It’s not a sin, but a choice that really had to be made. I cannot imagine administering whole infrastructures with the help of self-daemonizing Shell scripts alone. Arguable, but understandable. It’s an entirely different matter on the desktop, though. We can boot kernels with whatever we want – init scripts, OpenRC, nosh, s6, etc. I’m actually tempted to sink systemd on my Fedora install with a powerful shot from the 120 mm nosh. Of course, related .rpm packages would follow, so that other citizens of Unix-shire may enjoy a breath of fresh (free) air in their well-mowed, corporate yards.

Of course my craze will end at some point and I shall return where I truly belong. The “me” in FreeBSD…