The Kernel, the Kernel’s to Blame!


When getting my Raspberry Pi 3 set up recently I ran into quite a few woes concerning out-of-the-box detection of SD cards. One might expect that an SD card slot is nothing more than a USB-like interface. In theory yes, but in practice quite a few distributions have trouble accepting that fact. Gentoo failed me numerous times, though partially because I decided to go for an extremely slim kernel config. Manjaro also surprised me in that respect – the SD card was detected, but not exposed as a USB drive (and thereby not mountable). Fedora and Lubuntu had no problems. Each distribution uses a different set of graphical utilities and desktop environments, so users often blame the front-end setup. That’s wrong, though, because the inability of a system to detect a piece of hardware has everything to do with the kernel configuration. Indeed, the kernel’s to blame.
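
For the record, this is roughly how I checked whether the kernel had picked up the card reader at all – a minimal sketch; the exact driver and module names vary from kernel to kernel:

    # Did the kernel notice the SD/MMC controller?
    dmesg | grep -i -E 'mmc|sdhci'
    # Which SD/MMC-related modules are actually loaded?
    lsmod | grep -E 'mmc|sdhci'
    # Which block devices does the kernel actually expose?
    lsblk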

I personally prefer the Arch approach – almost everything built as modules. Although this could add some overhead due to the way modules are loaded, in practice it makes Arch-based systems very light on resources. After all, what isn’t needed doesn’t get loaded at all. The drawback is that the distribution or the user has to ensure that the initramfs is complete enough to allow a successful boot. The alternative is to build as many drivers as necessary into the kernel, though that of course makes the kernel bulky and isn’t always the optimal solution. There is a lot of middle ground in between, and it unfortunately causes weird issues like the one I experienced.
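
If you want to see which camp your distribution is in, the kernel config tells you directly. A quick sketch, assuming the running kernel exports its configuration (via /proc/config.gz or a copy under /boot):

    # "=y" means built into the kernel, "=m" means module, absent means unsupported
    zcat /proc/config.gz | grep -E 'CONFIG_MMC|CONFIG_SDHCI'
    # or, on distributions that keep the config in /boot:
    grep -E 'CONFIG_MMC|CONFIG_SDHCI' /boot/config-$(uname -r)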

I think there should seriously be some consensus between distribution teams regarding what goes into a kernel and what doesn’t. Weird accidents like these can be avoided, and it’s down to the individual teams to iron that out. Of course, one can go hunting for drivers on GitHub and try out five versions of a Realtek 8188eu driver, but why should the user be required to do so?
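
For the curious, that hunt usually looks something like the sketch below – the repository URL and module name are placeholders, and every driver fork has its own quirks:

    # Hypothetical out-of-tree driver build; adjust names to the fork you found
    git clone https://github.com/someone/rtl8188eu.git
    cd rtl8188eu
    make
    sudo make install
    sudo modprobe 8188eu

Repeat for as many forks as it takes, which is exactly the chore no user should have to sign up for.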

ARMing For the Future

[Image: single-board computers, taken from edn.com]

For some time now I’d been itching to get my hands on a Raspberry Pi single-board computer. Unfortunately, retailers like Saturn and MediaMarkt would shrug my inquiries off with a “we’re expecting them soon”. To my dismay, the “soon” seemed like it would never come. Surprising, since the computer geek culture is constantly expanding and the demand is definitely there. Finally, after months of waiting, the Pi arrived in Austria. I quickly armed myself (pun intended) with an RPi 3 model B, a Pi-compatible power supply (5.1 V, 2.5 A) and a matte black case. The rest I already had, since I collect various adapters, SD cards, etc. as a hobby. Always handy, it seems. Without further ado, though!

Get your geek on!

Contrary to my expectations, getting a Linux distribution to boot on the Pi was a bit of a hassle. Raspberry Pis don’t have a typical BIOS like laptops or desktop PCs. The firmware is extremely minimal – enough to control the on-board LEDs and hardware monitors and swiftly proceed to booting from a flash drive (SD card, USB stick) or a hard drive. Therefore, one doesn’t actually install a Linux distribution on the Pi. Rather, one writes (dumps) an image onto a disk and plugs that disk into the Pi to get it working. There is a fine selection of dedicated distributions out there already – Raspbian, FedBerry, etc. Major projects like FreeBSD, OpenBSD, openSUSE, Fedora and Debian provide ARM-compatible images as well. It’s just a matter of downloading an image, putting it onto an SD card (8–16 GB in size, preferably) and we’re ready to go.
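
Putting the image onto the card is a single command. A hedged sketch – the image name is whatever you downloaded, and /dev/sdX is a placeholder you must double-check with lsblk before writing, or you risk overwriting the wrong disk:

    sudo dd if=raspbian.img of=/dev/sdX bs=4M status=progress conv=fsync
    sync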

Pushing the limits

Not everything is as smooth as it may sound, however. Some distributions like FedBerry suggest desktop environments and utilities that are clearly too much for the Pi to handle. Even the top-of-the-line Pi 3 model B is not strong enough to run Firefox smoothly. Part of the problem is the GUI-heavy trend in software design, the other part being the still-evolving design of the Pi. At the moment we’re at 1 GB of RAM, which is quite modest by today’s standards. With increasing hardware needs, more care should also be taken with the board itself. Both the CPU and GPU will quickly overheat without at least a basic heat sink. I like ideas such as this, which try to provide the extra add-ons to turn a Raspberry Pi into a full-blown computer. Personally, I use minimalist tools such as Vim/Emacs, Openbox and Dillo, so the limitations aren’t really there for me.
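
On Raspbian the firmware can be queried directly, which is handy for checking whether the board is running hot or throttling – a small sketch using the stock vcgencmd utility:

    vcgencmd measure_temp     # current SoC temperature
    vcgencmd get_throttled    # anything other than throttled=0x0 means throttling occurred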

IoT for the future!

Truth be told, ARM-powered devices are everywhere. Though it’s a resurrected platform, people have not forgotten about the merits of RISC. Raspberry Pi is not the only Pi, nor is it the only single-board computer worth attention. With such an overabundance of almost-open-source hardware, one can do anything. Pi Zero computing cluster? Check. Watering device sensitive to solar light intensity? Check. Minecraft server? Check. NAS for the whole family? Check. It’s there, it’s cheap, it just needs a bit of geek – namely you!

The (Necessary?) GNU/Linux Fragmentation

I would like to share with you a story of my recent struggles with Debian. They’re partially my fault, but also partially due to the way Debian handles network management, which is quite different from how other GNU/Linux distributions do it.

The story begins with me being happy with a regular desktop install, powered by XFCE4, but then wanting to switch to the less distracting Openbox. I installed Openbox plus extras like the tint2 panel, nitrogen (a background/wallpaper setter) and other lightweight alternatives to XFCE4 components. While sweeping up XFCE4 leftovers, “apt autoremove” removed way too many packages, including network-manager. I was instantly left with no network connection and, as I learned later, no means of restoring it. By default, network management on Debian is handled by the ifupdown scripts, which “up” the interfaces listed in /etc/network/interfaces and direct them to dhclient to obtain a DHCP lease or assign a static IP address. Incidentally, the ifupdown utilities had no means of directing my wireless interface to wpa_supplicant for WPA-encrypted networks. Nowadays, this is handled by network-manager, which “Just Works”. network-manager uses wpa_supplicant to handle WPA encryption (in addition to many other things), whilst performing the rest of network management itself. This is quite different from running wpa_supplicant directly, which simply failed in my case due to a known regression.
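
For anyone in a similar bind: a wired interface can at least be revived by hand with tools from the base install. A minimal sketch – the interface name (enp3s0) and addresses are assumptions; check yours with “ip link”:

    sudo ip link set enp3s0 up
    sudo dhclient enp3s0
    # or, for a static setup:
    # sudo ip addr add 192.168.1.10/24 dev enp3s0
    # sudo ip route add default via 192.168.1.1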

It’s quite sad to see that Debian, despite moving from init scripts to systemd for boot and service management, still insists on configuring network interfaces via shell scripts (the mentioned ifupdown tools), while a mainstream solution in the form of network-manager is available! Why is it recommended as a “Just Works” alternative yet not offered by default? On Red Hat-based distributions (say Fedora, CentOS, etc.) the matter is really simple – you get network-manager and you’re good to go out of the box. That stands to reason, though, as NetworkManager is a Red Hat project. Still, the “Just Works” approach baffles (and even disturbs) me greatly. “Just Works” sounds like a catch-phrase typical of commercial operating systems. Since they mainly target desktop and laptop computers, it’s enough if an ethernet interface and/or a wireless interface “just works”. What about servers with multiple NICs, routers, gateways, NATs and VMs, each with its own IP address or set thereof? What then? Oh, right, we can write systemd units for each interface and control them via systemctl. Or use the ip utilities for a similar purpose. Or the deprecated ifconfig, which we shouldn’t use, but still can because it’s in the repositories of many distributions. Alternatively, we can perform DHCP requests via one of several clients – dhclient, dhcpcd, etc. We end up with a hodgepodge of programs that are best left to their own devices due to incomplete documentation and/or unclear configuration, with each GNU/Linux distribution shipping its own set of base utilities.
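
To be fair, systemd does ship its own answer in the form of systemd-networkd, which needs nothing more than a small ini-style file per interface. A sketch, assuming a wired interface called enp3s0:

    # contents of /etc/systemd/network/20-wired.network
    [Match]
    Name=enp3s0
    [Network]
    DHCP=yes

Then enable it with “systemctl enable --now systemd-networkd” – which, of course, only adds yet another contender to the hodgepodge.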

Personally, I feel that’s where the BSDs succeed. You get a clearly separated base system with well-documented and easily configurable tools that are maintained as a whole. Network interface configuration borders on trivial. What’s more, the installer handles wireless connections almost seamlessly. Why is it so difficult on GNU/Linux? At this point, I believe the GNU/Linux community would profit greatly from agreeing on a common “base system”. Red Hat’s systemd is the first step towards unification of the ecosystem. While I am strongly opposed to systemd, because it gives merely the illusion of improved efficacy by simplifying configuration and obfuscating details, GNU/Linux should be a bit stronger on common standards, at least for system-level utilities.
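
For comparison, this is more or less all it takes on FreeBSD. A sketch of /etc/rc.conf entries, where the device names (em0 for wired, iwm0 for the wireless chip) are assumptions that depend on your hardware:

    # /etc/rc.conf
    ifconfig_em0="DHCP"
    wlans_iwm0="wlan0"
    ifconfig_wlan0="WPA SYNCDHCP"

The WPA credentials themselves go into /etc/wpa_supplicant.conf, and the rc system wires everything together on boot.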

GNU/Linux and Its Users

I decided to devote this entry to a reflection I made recently while participating in discussions in the Facebook Linux group. I realized, as many have before me, that there is a strong correlation between the maturity and complexity of an operating system and the savviness of its users. The BSDs and more demanding GNU/Linux distributions like CRUX, Arch and Gentoo attract experienced computer users, while Ubuntu and its derivatives mostly entice beginners. As few people express the need to learn the Unix Way, beginner-oriented operating systems (this includes Windows and MacOS X, of course) are far more popular. Consequently, they garner stronger commercial support from hardware and software companies, as they constitute a market for new products.

The truth is, we have all been beginners once. What’s more, unless we’re old enough to remember the ancestral iterations of the original UNIX operating system (I’m not!), we were MacOS X or Windows users long before switching to a modern Unix-like operating system. Alas, as such we have been tainted with a closed-source mindset, which encourages us to take no responsibility for our computers and to solve system-level problems with mundane trial-and-error hackery. Not only is such a mindset counter-productive, it also hampers technological progress. Computers are becoming increasingly crucial in our everyday lives, and a certain degree of computer literacy and awareness is simply mandatory. Open-source technologies encourage a switch to a more modern mindset, one of information sharing, discussion and learning computer skills in general. The sooner we accustom ourselves to this mindset, the faster we can move on.

The current problem in the GNU/Linux community (much less so in non-Linux Unix communities) is that the entry barrier is being continuously lowered so as to yield a speedier influx of users. Unfortunately, many of these users are complete beginners not only in terms of Unices, but also in terms of using computers in general. With them the closed-source mentality is carried over, and we, the more experienced users, have to deal with it. Some experienced users provide help, while others are annoyed by the constant nagging. The responsibility lies with us to educate newbies and encourage them to embrace the open-source mindset (explained above). However, they often don’t want to. They want the instant gratification they received when using Windows or MacOS X, because someone convinced them that GNU/Linux can be a drop-in replacement for their former commercial OS. They want tutorial-grade, easy-to-follow answers to unclear, badly formulated questions. Better yet, they want them now, served on a silver platter. We all love helping newbies, but we shouldn’t encourage them to remain lazy. Otherwise, we’ll eventually share the fate of Windows and MacOS X as just another mainstream platform. I cannot speak for everyone, though I would personally prefer GNU/Linux to continue its evolution as a tech-aware platform of the future.

Just Linux Things


As a follow-up to my previous post, I’ve noticed that a lot of recent open-source technologies are extremely Linux-centric. It all started around the time GNOME3 and systemd were introduced. Both rely heavily on facilities present in Linux-based operating systems, making them difficult to port to other Unix platforms like the BSDs. These technologies are also strongly promoted to entice prospective developers. While I understand the need for platform-centric efficiency inherently tied to Linux-specific features (cgroups, etc.), it is also important not to ignore the rest of the IT ecosphere. Yes, I mean even Windows, which is normally not considered on equal terms with Unix, but is relevant when talking about C# or .NET applications.

A recent example of this trend is Docker. Containers are now the new pink and everyone wants a piece of the pie. Docker has barely reached release 1.x, yet some companies already make claims about “widespread adoption”. I thought industries preferred stable, tested-for-years solutions. I find this new craze odd, to say the least. As expected, Docker is a Linux thing. While the containers are indeed OS-level on GNU/Linux systems, much like LXC (Linux Containers), they’re not on Windows, and neither are they on MacOS X. Oddly enough, the latter uses xhyve, a Unix virtual machine manager based on FreeBSD’s bhyve. Therefore, despite the fact that the developer interfaces are similar or even identical, the engines running underneath will have a substantial overhead on non-Linux systems. At that point one might consider whether native and more established solutions are not already available and more suitable for multi-container setups. On FreeBSD we have Jails and a ton of Jail managers to make one’s life easier. I have a feeling that Jails on FreeBSD would do a lot better than Docker containers on GNU/Linux. Not to mention that a FreeBSD base system is a lot slimmer than whatever one could consider a base system in the GNU/Linux world. Widespread adoption of Jails seems to be lacking only because most of the world’s servers run GNU/Linux.
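
To illustrate how close the two actually are at the command level, here is a hedged sketch – the image name, paths and addresses are placeholders, and a real jail would normally be managed through jail.conf or one of the jail managers:

    # GNU/Linux: an OS-level container via Docker
    docker run -d --name web -p 80:80 nginx
    # FreeBSD: roughly the same idea with a jail created on the fly
    jail -c name=web path=/usr/local/jails/web host.hostname=web.local \
         ip4.addr=192.168.1.50 command=/bin/sh /etc/rc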

Another weird trend I notice is identifying everything Linux-related with Ubuntu, as if Ubuntu were the only True Linux Distribution. I often read articles that claim to be about Linux, but in reality discuss Ubuntu. A square is a rectangle, but not all rectangles are squares! That’s obvious, no? This “Linux = Ubuntu” assumption hurts the whole ecosystem quite a lot. People learn how to use Ubuntu and think they’ve mastered Linux. Then they’re dropped into a den full of Gentoos and CentOSes, and they end up suffering. Ouch! Another fallout of this worrying trend is that people deploy Ubuntu in places where a more lightweight GNU/Linux distribution would be far more suitable. That’s one of the reasons why Docker eventually switched from Ubuntu to Alpine Linux. With the wealth and diversity of the GNU/Linux ecosystem, one doesn’t even need to look far.

It’s Time for FreeBSD!

For the last couple of weeks I have been delving deeper into the arcane arts of FreeBSD, paying extra attention to containers (jails), local/remote package distribution (poudriere) and storage utilities (ZFS RAID, mirrors). Truth be told, with every ounce of practical knowledge I was increasingly impressed by this Unix-like operating system. It’s nothing short of amazing, really! The irony lies in the fact that FreeBSD is an underdog in the Unix world (less so than OpenBSD or Illumos, but still), despite the fact that it excels as a server environment and established, years ago, many of the technologies currently in focus (process isolation, efficient networking and firewalls, data storage, etc.). GNU/Linux is picking up the pace, but it still has a long way to go.
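
The storage part alone is telling. Creating a mirrored pool and a dataset on FreeBSD takes a couple of commands – a sketch, with the disk names (ada0, ada1) and the pool name being assumptions:

    zpool create tank mirror ada0 ada1   # two-way mirrored pool
    zfs create tank/backups              # a dataset within the pool
    zpool status tank                    # health and layout at a glance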

I feel traditional Unices got the “base system + additional applications” separation right. One might think it’s just a matter of personal taste – the “order vs chaos” debate that has been going on for ages. The truth is that this separation is not only reasonable, but also extremely useful once one begins to treat the operating system as more than a mere Internet browser or music player. Order is paramount to organizing and securing data against ill intent or hardware failure. I really appreciate the use of /usr/local, /var/db, /var/cache and other typical Unix volumes on FreeBSD, as it makes the system more predictable, and therefore hassle-free. When we have a multitude of systems to care about, hassle-free becomes a necessity. With new container technologies like Docker it’s a realistic scenario – one host system serving N guest systems. One doesn’t need to run a server farm to get a taste of that.

This is basically where (and when) FreeBSD comes in. It’s a neatly organized Unix-like system with great storage capabilities (ZFS), process isolation for guest systems (jails and bhyve), network routing and filtering (ipfw, pf, etc.) and package distribution (synth, poudriere, etc.). And everything is fully integrated! The “pkg” package manager knows what jails are and can easily install programs into them. Poudriere coordinates building new packages, using jails so that the host system is not compromised. These packages can later be distributed remotely via HTTP/HTTPS/FTP or locally via the file protocol. Such low-level integration is somewhat foreign to the GNU/Linux world, though among server distributions like OpenSUSE, CentOS or Ubuntu Server it is constantly improving.
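
A taste of that integration, as a hedged sketch – the jail name, FreeBSD version and port origin are assumptions:

    # install a package straight into an existing jail called "web"
    pkg -j web install -y nginx
    # build packages in a clean poudriere build jail
    poudriere jail -c -j builder -v 11.0-RELEASE
    poudriere ports -c
    poudriere bulk -j builder www/nginx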

Still, whenever I think about the divide between BSD and GNU/Linux, I see a tall brick wall that both sides are struggling to tear down. FreeBSD wants to become more desktop-oriented, while GNU/Linux is trying to reinforce its server roots. It’s difficult to tell whether this is good or bad. The BSDs do indeed excel as server systems, as recently highlighted in a NASA study. GNU/Linux is more suited to heavy computation and leisure. The brick wall has plenty of nicks, yet it stands strong. Maybe there is a “third option”? Why not let each do the job it does best? What I mean to say is that FreeBSD has its place in the world, and the time is ripe to truly begin to appreciate it!

OpenSUSE Tumbleweed vs Fedora 25

I haven’t done a side-by-side review for a while and, since others might have similar dilemmas, here it is: OpenSUSE Tumbleweed vs Fedora 25, from a developer’s perspective, in a moderately fair comparison. As test hardware I used my main ASUS VivoBook S301LA. It’s light, FOSS-friendly and, since I swapped in an Intel wireless chip, has never let me down. OpenSUSE was installed from the network installer, while for Fedora I used the respective desktop spins. The tested desktop environments were XFCE and LXDE. I like old, stable and lightweight. Let’s see what gives!

Getting the installation medium
Fedora 25 wins this one hands down. In fact, any distribution would, compared to OpenSUSE. The full OpenSUSE Tumbleweed installation disc is 4.7 GB in size. However you look at it, that’s an absolute joke. Not only does it not fit on a regular single-layer DVD, it also takes ages to download. If you need an extra disc for 32-bit hardware, you have to download again! This might have been excusable in the age of disc-only distribution, but nowadays it’s just unreasonable. Fedora offers three main editions (Workstation, Server and Cloud) plus community spins with KDE, XFCE, LXDE, MATE and Cinnamon. Quite the choice, I must say.

Installation
This one goes to OpenSUSE Tumbleweed, easily. OpenSUSE sports perhaps the best installer I’ve seen in a free operating system. It’s so good and reliable that it’s simply enterprise-grade. My favorite feature is the ability to cherry-pick individual packages or follow metapackage patterns. Fedora’s network installer is customizable as well, though not to such a high degree. OpenSUSE just shines.

Out-of-the-box customization
OpenSUSE wins again, unfortunately. While Fedora is properly customized when you install it from a prepared LiveCD, that’s not the case with the network installer. All of the extras, like a graphical front-end to the package manager, need to be configured manually. In contrast, OpenSUSE is fully configured even if you select a desktop environment that’s neither KDE nor GNOME3 from the network installer. The polish is there, as I mentioned in one of my earlier entries.

System management
OpenSUSE has the great YaST tool for configuring networks, NFS shares, the firewall, kernels, etc. Fedora relies on desktop-specific applications and doesn’t have a dedicated tool. However, Yumex is less cumbersome than the GUI in OpenSUSE. I think at this point the general focus of each distribution starts to show as well. OpenSUSE emphasizes system management, while Fedora tries to be a FOSS all-rounder. There is no good or bad here, just differences. I prefer the Fedora way as it’s a bit more lightweight.

Selection of packages
Both Fedora 25 and OpenSUSE Tumbleweed require some tinkering. Codecs are a no-no due to licensing issues. It’s quite a shame, but then we recall the dismal Windows Media Player… Anyhow, restrictively licensed programs can be acquired either from the RPM Fusion (Fedora) or Packman (OpenSUSE) repositories. OpenSUSE wins in terms of package numbers, though Fedora’s approach makes for a more stable environment. Some of the Packman packages are testing-grade (Factory) and are thus prone to breakage.
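
Getting at those repositories is a one-off step on either side. A sketch of the commands I mean – the URLs follow the patterns documented by RPM Fusion and Packman, and mirrors may differ:

    # Fedora: enable the RPM Fusion "free" repository
    sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
    # OpenSUSE Tumbleweed: add Packman and prefer its packages
    sudo zypper ar -cfp 90 https://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/ packman
    sudo zypper dup --from packman --allow-vendor-change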

As a developer platform
Both distributions are geared towards developers and both do it rather well. However, as mentioned earlier, OpenSUSE Tumbleweed favors streamlined system management and focuses more on server-centric features. In theory it’s a separate product from OpenSUSE Leap, but in practice it shares its goals. Fedora is THE developer platform. The sheer number of programming language libraries and IDE plugins is a win in my book. Even Arch Linux doesn’t come close. Then we have the COPR (Fedora) and OBS (OpenSUSE and others) build services for package building and distribution. Both frameworks are straightforward and reliable. No clear winner here.
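
Enabling a COPR repository on Fedora is a single command via the dnf plugin – a sketch with a placeholder project name:

    sudo dnf copr enable someuser/someproject   # hypothetical user/project
    sudo dnf install some-package               # then install from it as usual

On the OBS side, repositories are typically added with “zypper ar” pointing at the project’s published URL.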

Epilogos
Thus, I conclude – a draw. That would explain my dilemma, I guess. OpenSUSE Tumbleweed and Fedora 25 are both great development platforms. However, they clearly focus on different things. OpenSUSE is more server-centric – database management, data storage, safety and recovery, etc. Even though Tumbleweed is the development line, this still shows. The upside is that it’s extremely streamlined, and the extra hand-holding might be useful. Fedora is the true FOSS dev platform. No wonder Linus uses it! It has a great focus on programming tools and libraries. Things are not as streamlined, but they are less restrictive as a consequence. Server appliances are also available, though the focus is more on deployment than management. I chose Fedora because I don’t mind my system breaking occasionally. OpenSUSE Tumbleweed might be the easier choice, though.