Linux and the BSDs

Throughout my many months of using various open-source and proprietary operating systems I have made certain observations that might be useful to some. I started with Linux, though at some point I migrated to the BSDs for personal and slightly more pragmatic reasons. I quickly became far more familiar with FreeBSD and OpenBSD than I ever was with openSUSE or Ubuntu. That may seem odd, as Linux is far easier to get into, but seasoned UNIX admins will surely understand. The BSDs have a technical appeal that Linux is steadily losing in favor of other features. To the point, though:

1. Save for Windows, most operating systems are extremely similar. macOS (as Mac OS X is now called) relies on a huge number of BSD utilities, because at one point in time they were the more accessible option (well-documented and permissively licensed). In turn, the open-source BSD family of operating systems, such as OpenBSD and FreeBSD, adopted Clang with its LLVM back-end (Apple’s compiler toolchain) as their main system compiler. A number of former, now defunct, proprietary operating systems were based on some revision of UNIX – IRIX, HP-UX, Solaris, etc. There is also a significant overlap in other tools, such as sysctl and ifconfig, which were forked, modified and adjusted to fit individual systems, but bear a functional resemblance across the various flavors of UNIX (a small sysctl illustration follows this list). The remainder (text editors, desktop environments, etc.) is typically BSD/MIT/GPL-licensed and available as packages or ports. Therefore, the high-level transition between the BSDs and Linux isn’t as dramatic as one might think.

2. That being said, the BSDs and Linux follow different philosophies, of which the BSD philosophy seems far more practical to me in the long run. What most people forget (or never learn) is that Linux is just a kernel, and the development teams creating distributions (the actual Linux-based operating systems!) can do almost anything they want with it. This leads to a myriad of possible feature sets at the kernel level alone. It is also up to the distributions to assemble the kernel and userland utilities into a full-fledged operating system. Unfortunately, distribution teams are often tempted to add a bit of an artistic touch to their work, which causes Linux distributions to differ in key aspects. While at a higher level this is hardly noticeable, it can come back to bite when things go sour or manual configuration is required. This Lego-blocks or bazaar concept makes it difficult for upstream software developers to identify bugs and for companies to properly support Linux with hardware and software. Eventually, only certain distributions are recognized as significant, such as Ubuntu, CentOS, openSUSE, Fedora or Debian. The BSDs take a more organized approach to system design, which I believe is highly advantageous. An operating system consists of the kernel plus the basic utilities that make it actually useful, such as a network manager, compiler, process manager, etc. Depending on the BSD in question, the scope of the base system is defined differently. For instance, OpenBSD ships with quite a few servers, including a display server (Xenocara). FreeBSD, in turn, focuses on providing server capabilities.

3. Recently (or not so recently), the focus of Linux has shifted from being merely a server platform to a desktop replacement. That is the ballpark of MS Windows and macOS, both of which are well-established desktop platforms. The crux of the matter is that utilities had to be adjusted to fulfill more GUI-oriented roles, making command-line work slightly trickier. The other problem is that software turnover in Linux-land is extremely rapid, and programs either become stale far too quickly or break too often. That is clearly a no-go for server scenarios. This is where the BSDs come in. FreeBSD was designed as a multi-purpose operating system, though with a strong focus on networking, process sandboxing and privilege separation, data storage, etc. In these aspects it clearly excels. NetBSD favors portability and supports many server and embedded platforms, which act as routers, switches, load-balancers, etc. OpenBSD emphasizes code correctness, security and complete documentation. Last, but not least, DragonFly BSD focuses on multi-processing and leverages filesystem features to improve performance. One could say that thanks to greater resources Linux has surpassed all of these operating systems. However, one should not underestimate quality BSD utilities and the almost legendary stability of Berkeley-derived OSes. One of the main problems I ever had with Linux was the inconsistent breakage of individual distributions. Upgrading packages would eventually render them useless or impossible to troubleshoot due to uninformative error messages. The lack or staleness of documentation only made matters worse. Having had to deal with the above problems, I simply jumped ship and joined the BSD crowd. Granted, neither OpenBSD nor FreeBSD makes my PCs snappier. Quite the opposite – Linux still wins in that respect. However, I now have full access to the operating system’s source code and can fix issues first-hand should the need arise. Not to mention being able to actually read clearly written documentation and learn how to use the tools my operating system offers. I doubt Linux can beat that.
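As promised above, a tiny illustration of that cross-UNIX tool resemblance – a sketch only, as the exact sysctl keys below are common defaults and vary by system and version:

# FreeBSD: ask the kernel for the CPU model string
sysctl -n hw.model

# Linux (procps version of sysctl): same tool and syntax, different key tree
sysctl -n kernel.ostype    # prints "Linux"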

On Using Computers

I’ve been planning to write this piece for a while now, though due to work-related matters I was somewhat hampered in my efforts. It’s a bit harsh at times, but I feel it should become a must-read for beginner Linux users nevertheless.

I am a part of the open-source community and as a member I try to contribute to projects with code, documentation and advice. I fully understand that for the open-source way of producing content (not merely software!) to succeed, everyone has to give something. However, in recent months I have noticed a sharp influx of new users (newbies) who want to be part of the community, but are extremely confused as to its principles. Incidentally, these newbies “contaminate” the open-source community with former habits and expectations, and make it harder for both existing members and themselves to cope with this temporary shift in the user-expertise equilibrium. I blame two main phenomena for the confusion of new users:

1. The open-source way is advertised as inherently “better”, which is misleading.

2. The open-source way requires members to think about what they do and possibly to contribute however they can.

Since the imbalance has become unbearable for me and other existing members of the open-source community, I decided to write this introductory article so that newbies can adjust quickly and the equilibrium is restored.

I. User-friendliness is a lie
Following up on the thoughts laid out at over-yonder.org, I want to make this statement extra clear. There is no such thing as user-friendliness. It.does.not.exist. The Internet is crawling with click-bait articles entitled “The best user-friendly Linux distribution!” or “The most user-friendly desktop environment!”. These articles were crafted to increase the view count of the host website, not to provide useful information on the topic. Alternatively, they were written by people who are as confused as the newbies. “User-friendly”, just like “intuitive”, is a catchphrase – an advertising gimmick used to get you to buy/get a product. There is no extra depth to it. What people wrongly label as “user-friendly” is in fact “hand-holding” – the software/hardware is expected to do something for the user. Not to enable the user to perform an action, but to actually do the action for him/her. A stewardess on a cruise ship or an aircraft is helpful because she answers passengers’ questions; however, she does not hold anyone’s hand, as that would mean leading every single passenger to their seat. If anyone ever tells you that something is user-friendly, ignore them and move on. You know better :).

II. Qualities, quantity and gradation
Generalized comparative statements are being thrown about virtually everywhere. This annoys me, and it should annoy you too after reading this paragraph. The truth is that most of those statements are fundamentally wrong, because they assume objects of different qualities can be compared using abstract terms. They CANNOT. A useful reference point is comparing apples to oranges. Can it be said that oranges are better than apples? No. What about apples being better than oranges? Neither! “Better” is an abstract term, which by itself means nothing. Therefore, saying “openSUSE is better than Ubuntu” also means absolutely nothing! However, what can be done is comparing specific features of A and B. You cannot say “apples are better than oranges”, but you can claim that an average apple is heavier than an average orange, with specific examples of both. Color-wise, you can say that apples tend to be green-red, while oranges are yellow-orange-reddish. Mind you, you cannot directly compare colors unless you express the color of A and B on a uniform scale, like “the amount of red”. No fallacy has been committed that way. Therefore, neither software nor hardware can be directly compared, though you can say, for instance, that “openSUSE has a number of tools, like YaST, which make it potentially more convenient for system administrators than Ubuntu”. Remember that!

III. The “use case” concept
Knowing that user-friendliness does not exist and that many things cannot be directly compared, the next step is understanding the “how” inherent to all problems. You have an issue or an inquiry. What is it that you want to achieve? What are the exact requirements to reach your goal? What is the situation in which you experienced your problem? Being specific and being able to disassemble large problems into smaller tasks is paramount to understanding the problem and finding possible solutions to it. This is true not only for computers, but for everything in life. Once you know your “use case”, you will know which hardware and software (including the operating system) to choose. Different operating systems cover different use cases or usage scenarios; therefore, understanding your use case well will let you find the perfect operating system, or any other piece of software, more quickly.

IV. Options, decisions and the “good enough”
All of the above being said, humans have this need to always aim for optimal solutions. Subconsciously, they want only the “best” for themselves. What if it’s impossible to identify the best option? What if all of them satisfy our requirements equally well? Thus, the concept of “good enough” comes into play. Sometimes the “best” solution is the first solution we decide upon and stick with. No second thoughts allowed! Until, that is, we identify a legitimate reason why solution #1 no longer satisfies our needs for a prolonged period of time. Wondering which operating system to choose? Linux Mint? Ubuntu? Debian? Fedora? Perhaps not a Linux-based OS, but a pure UNIX-like BSD? There are so many! If you’re a beginner, it doesn’t matter which one you choose. Pick one, stick with it and change only if you are experimenting or your first choice was completely wrong.

V. Thinking and the individual responsibility
This will be a harsh one. Proprietary operating systems create the illusion of user-friendliness (it’s a lie, we know that now!) and the impression that the user is not required to take responsibility for actions performed on his/her software/hardware. This is one of the major fallacies in the computer world. The moment you buy a computer, you are completely responsible for it. Consider it your “child”. You need to make sure it’s always clean, powered up, etc. No one will ever do that for you. Others can recommend solutions, give advice, even provide support, but the final decision is on you and you alone. Whatever you do with your computer is your success or failure. The primary reason malware spreads like wildfire is that people are convinced they don’t need to actively care for the safety of their computers. Dead. Wrong.

The open-source way is not better than the proprietary/closed-source way. It’s different, nothing more. I chose it because it aligns well with my personal preferences and I believe it will prevail. It is for you to decide whether you can accept that. If the answer is “Yes”, I congratulate you. Go forth, learn and become a full-fledged member of the open-source community :).

GNOME3 Oversimplified?

Seeing as GNOME3 and KDE (4? 5? Plasma? Neon? Ion?) are the leading desktop environments nowadays, I decided to give GNOME3 a try on my openSUSE Leap 42.3 workstation (openSUSE being my current main distribution on most hardware, including a Raspberry Pi 3). There are good and bad things, and some of them agree with my former assessment of GNOME3. However, I assumed that since I’m more pro-desktop now, my opinion might change. Well, perhaps it did…

The Good:
In visual design alone GNOME3 wins a trophy. There is a lot of macOS mimicry, and I think that speaks well of the project. The guys (and gals!) from Apple know their stuff, so why not get inspired by them a little? The overall UX (user experience) is also positive. Thanks to the highly discoverable interface, finding important desktop features is a breeze. One just needs to browse around a bit and not follow the “this option must be hidden somewhere” philosophy that other desktop environments teach us. I also greatly appreciate the attention to useful features, like the one-click offloading of graphically intensive applications to the discrete NVIDIA card on Optimus laptops. If one’s day-to-day tasks focus on office work and leisure, GNOME3 could potentially be the desktop environment of the future. It stands to reason, because it’s an open-source project, molded and shaped by its user and developer communities. It constantly evolves, so there is no limit to its improvements.

The Bad:
Unfortunately, it seems that simplicity of design has its price. Troubleshooting GNOME3 is extremely painful, and many of the applications (including GNOME Shell and the GNOME Display Manager) throw the most uninformative error messages.

[Screenshot: GNOME’s “Oh no! Something has gone wrong.” error screen – authored by the Interwebs]

Case in point: the above error screen. The “Oh no! Something has gone wrong.” line is a phrase typically used in commercial applications to shield end users from the headaches of reading crash logs. By willfully choosing Linux we demonstrate that we’re no mere end users, so treating us as such is quite rude. To dwell on this a bit more, the above error screen appears even when the GNOME Display Manager login panel crashes. How is one supposed to log out without being logged in to begin with? To make matters worse, since many Linux distributions have the display manager set to restart on failure, this screen will keep re-appearing until proper troubleshooting is done in one of the TTY consoles (Ctrl + Alt + F1-F9). This very much reeks of Windows and macOS problems, where the user interface has basically taken over the OS. All we can do is reboot and hope for the best. Other applications show similar “An error occurred” messages without any means of actual troubleshooting.
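For what it’s worth, here is the sort of minimal triage I ended up doing from a TTY – a sketch assuming a systemd-based distribution (the unit may be called gdm or gdm3, depending on the distribution):

# Switch to a text console with Ctrl + Alt + F3 and log in, then:
journalctl -b -u gdm              # display manager logs for the current boot
journalctl -b _COMM=gnome-shell   # messages logged by the crashing shell
systemctl restart gdm             # restart the display manager once fixed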

My take-home from this experience is “Thanks, but no thanks”. If I want to get some work done, I would rather rely on LXDE, Xfce, LXQt and maybe even KDE – traditional desktop environments without the bells and whistles.

AI, Lisp and Why Languages Die

In my exploration of things arcane and mythical, I stumbled upon a forgotten book by Peter Norvig – Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. My IT colleague was delighted to see it and highly recommended the study of Common Lisp as a sort of meta-language. As programming languages interest me greatly and my understanding of functional programming is rather lacking (I loathe Python lambdas), I decided to give it a go. What I discovered was a language with an unusual syntax (defining all structures as lists of objects), yet with the potential for writing useful tools efficiently. Then I learned that various Lisp dialects were heavily used during the computer boom of the 1960s-80s, when the US government would pump billions of dollars into military and NASA projects (machine learning, AI algorithms, behavioral simulations, etc.). The trend died down in the early 1990s, and with it the Lisps either gave rise to modern languages like Clojure (also from the Lisp family) or simply disappeared. From the old generation, Scheme and Common Lisp are still in use, though less and less by the day.

Artificial intelligence has always been an extremely vital (and interesting) field of computer science. In fact, now more than ever, as the rapid growth of the Internet forces us to develop smart tools to sift through the wild abundance of information in real time. No wonder projects like Alexa (Amazon) or Cortana (Microsoft) are on the rise. In my opinion there are two crucial aspects of AI that garner much interest – human-language interfaces (how to make programs understand us humans in all or most of our natural languages?) and intelligent filtering algorithms (how to make programs increasingly aware of our human needs and remember them?). The second aspect involves machine learning, which delves into data extrapolation, approximation and the progressive nature of filtering algorithms. It all boils down to making computers more human and having them do some (most?) of our work for us. There are many quite realistic pitfalls, of course, like algorithms deciding that humans are the limiting factor in making our (human) lives easier. When we consider emotions a complete contradiction to reason, this makes perfect sense. Unpredictable humans are the weakest link in an approach that relies on predictable values.

Going back to Lisp and its dialects: after its inception in 1958 it quickly became the language of choice for writing mathematical algorithms, especially in the field of AI. It was clear that the Lisp S-expression syntax makes code easy to read and that the language itself has a strong propensity for evolution. Even from more modern times (1990-2000) there are plenty of success stories on how Lisp saved the day. Finally, the Lisps pioneered crucial concepts like recursion, garbage collection and interactive programming (the famous REPL, or read-eval-print loop, nowadays a common feature of Haskell, Python and other languages). Taking all of this into consideration, it is quite difficult to understand why Common Lisp (the standardized Lisp effort) stopped being the hot stuff. Some of the sources I found mentioned that the Lisps were pushed aside for political reasons. Budget cuts made a lot of NASA projects struggle for survival or meet a swift demise. Also, new cool languages (*cough* *cough* Perl) came to be, and the Lisps were supposedly too arcane to be picked up and used easily. However, to me Common Lisp seems far less verbose (obfuscated?) than, for example, Java, and far more orderly than said Perl. Its performance is also supposedly on par with Java, which might interest people who would like to write useful tools quickly (as quickly as in Python, for instance), yet not get into the memory-management details of vanilla C or C++ for better performance.
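To give a feel for those S-expressions, here is a minimal Common Lisp sketch (the function is mine, purely illustrative) – note how code and data share the same parenthesized list syntax, and how naturally it is explored at the REPL:

(defun average (numbers)
  "Return the arithmetic mean of a list of numbers."
  (/ (reduce #'+ numbers)
     (length numbers)))

;; At the REPL:
;; CL-USER> (average '(1 2 3 4))
;; 5/2    <- an exact rational, another old Lisp nicety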

The truth is that no language is really dead until it becomes naturally obsolete, even if it suddenly loses enterprise backing. While the Lisps have some viable descendants, one would be hard-pressed to find a language that directly supersedes them. There are, of course, multiple functional languages that share the Lisps’ strengths, yet they typically sport a vastly less approachable syntax, devoid of easily readable S-expressions. Therefore, I believe Scheme, Common Lisp and the other modern Lisps deserve not only attention, but also proper appreciation.

Fedora 26 – RTL8188EU the Hard Way!

Following my former entry preaching the greatness of Fedora 26, I decided to share some wisdom regarding USB wireless adapters (aka dongles) with the Realtek RTL8188EU chip. These and many other Realtek-based (usually RTL8188EU and RTL8192CU) adapters are affordable and extremely common. Companies like Hama, Digitus, TP-LINK and Belkin fit them into the cheapest 150N and 300N dongles, claiming that they’re compatible with Linux. In principle, they are. In practice, the kernel moves so fast that these companies have trouble keeping up with driver updates. As a result, poor-quality drivers remain in the staging kernel tree. Some Linux distributions like Debian and Ubuntu include them, but Fedora doesn’t (for good reasons!), so Fedora users have to jump through quite a few hoops to get them working…

The standard approach is to clone the git repository of the stand-alone RTL8188EU driver, compile it against our kernel + headers (provided by the Linux distribution of choice) and load it with modprobe, if possible. Alas, since the stand-alone driver isn’t really in sync with the kernel, it often requires manual patching and is in general quite flaky. An alternative, more Fedora-like approach is to build a custom kernel with the driver included. The rundown is covered by the Building a custom kernel article on the Fedora Wiki. All configuration options are listed in the various kernel-*.config files (standard kernel .config files prepped for Fedora), where “*” denotes the processor architecture. Fortunately, we don’t have to mess with the kernel .configs too much – we merely add the correct CONFIG_* lines to the “kernel-local” text file and fedpkg will add them prior to building the kernel. The lines I had to add for the RTL8188EU chip:

# 'm' means 'build as module', 'y' means 'build into the kernel'
CONFIG_R8188EU=m
CONFIG_88EU_AP_MODE=y

This will differ depending on the Realtek chip in question, and a failed build will indicate which line in the kernel .config should have been enabled but wasn’t. Finally, if you do not intend to debug the kernel later on, make sure to build only the regular kernel (without the debug variant), as building both takes quite some time.
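For the record, the rough flow looked something like the following – a sketch based on the wiki article mentioned above (the branch name and exact release will obviously differ for your setup):

# Anonymously clone the Fedora kernel package and pick the f26 branch
fedpkg clone -a kernel
cd kernel
git checkout origin/f26

# Drop the CONFIG_* overrides into the kernel-local file, then build RPMs locally
echo 'CONFIG_R8188EU=m' >> kernel-local
echo 'CONFIG_88EU_AP_MODE=y' >> kernel-local
fedpkg local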

The Kernel, the Kernel’s to Blame!

When getting my Raspberry Pi 3 set up recently, I experienced quite a few woes concerning out-of-the-box detection of SD cards. One might expect an SD card slot to be nothing more than a USB-like interface. In theory, yes; in practice, quite a few distributions have problems accepting that fact. Gentoo failed me numerous times, though partially because I decided to go for an extremely slim kernel config. Manjaro also surprised me in that respect – the SD card was detected, but not as a USB drive (and thereby not mountable). Fedora and Lubuntu had no problems. Each distribution uses a different set of graphical utilities and desktop environments, so users often blame the front-end setup. That’s wrong, though, because the inability of a system to detect a piece of hardware has everything to do with the kernel configuration. Indeed, the kernel’s to blame.

I personally prefer the Arch approach – almost everything as modules. Although this could add significant overhead due to the way modules are loaded, in reality it makes Arch-based systems very light on resources. After all, what isn’t needed doesn’t get loaded at all. The drawback is that the distribution or the user is required to make sure the initramfs is complete enough to allow a successful boot-up. The alternative is to build as many drivers as necessary into the kernel, though that of course makes the kernel bulky and isn’t always the optimal solution. There is a lot in between that unfortunately causes weird issues like the one I experienced.
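Checking which camp a distribution falls into takes a minute. A sketch (the config location varies – many distributions ship /boot/config-*, while Arch exposes /proc/config.gz instead):

# Is the SD/MMC stack built in (=y), a module (=m), or absent?
grep 'CONFIG_MMC=' /boot/config-"$(uname -r)"

# If it's a module, check whether it actually got loaded
lsmod | grep mmc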

I think there should seriously be some consensus between distribution teams regarding what goes into a kernel and what doesn’t. Weird accidents can be avoided, and it’s down to the individual teams to iron that out. Of course, one can go hunting for drivers on GitHub and try out five versions of a Realtek RTL8188EU driver, but why should the user be required to do so?

ARMing For the Future

[Image: single-board computers – taken from edn.com]

For some time now I’ve been itching to get my hands on a Raspberry Pi single-board computer. Unfortunately, retailers like Saturn and MediaMarkt would shrug off my inquiries with a “we’re expecting them soon”. To my dismay, the “soon” seemed like it would never come. Surprising, since the computer geek culture is constantly expanding and the demand is definitely there. Finally, after months of waiting, the Pi arrived in Austria. I quickly armed myself (pun intended) with an RPi 3 model B, a Pi-compatible power supply (5.1 V, 2.5 A) and a matte black case. The rest I already had, since I collect various adapters, SD cards, etc. as a hobby. Always handy, it seems. Without further ado, though!

Get your geek on!

Contrary to my expectations, getting a Linux distribution to boot on the Pi was a bit of a hassle. Raspberry Pis don’t have a typical BIOS like laptops or desktop PCs. The firmware is extremely minimal – enough to control the on-board LEDs and hardware monitors, and to swiftly proceed to booting from a flash drive (SD card, USB stick) or a hard drive. Therefore, one doesn’t actually install a Linux distribution on the Pi. Rather, one dumps it onto a disk and plugs that disk into a port on the Pi to get it working. There is a fine selection of dedicated distributions out there already – Raspbian, FedBerry, etc. Major projects like FreeBSD, OpenBSD, openSUSE, Fedora and Debian provide ARM-compatible images as well. It’s just a matter of downloading an image, putting it onto an SD card (8-16 GB in size, preferably) and we’re ready to go.
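The dumping itself is typically a one-liner. A sketch with placeholder names (image.img and /dev/sdX are hypothetical – triple-check the target device, as dd will happily overwrite the wrong disk):

# Write the distribution image to the SD card and flush the caches
dd if=image.img of=/dev/sdX bs=4M status=progress
sync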

Pushing the limits

Not everything is as smooth as it may sound, however. Some of the distributions, like FedBerry, suggest desktop environments and utilities that are clearly too much for the Pi to handle. Even the top-of-the-line Pi 3 model B is not strong enough to run Firefox smoothly. Part of the problem is the GUI-heavy trend in software design; the other part is the still-evolving design of the Pi. At the moment we’re at 1 GB of RAM, which is quite modest by today’s standards. With increasing hardware needs, more care should also be taken of the board itself. Both the CPU and the GPU will quickly overheat without at least a basic heat sink. I like ideas such as this one, which try to provide the extra add-ons to turn a Raspberry Pi into a full-blown computer. Personally, I use minimalist tools such as Vim/Emacs, Openbox and Dillo, so the limitations aren’t really there for me.

IoT for the future!

Truth be told, ARM-powered devices are everywhere. Though it may seem like a resurrected platform, people have never forgotten the merits of RISC. The Raspberry Pi is not the only Pi, nor is it the only single-board computer worth attention. With such an overabundance of almost-open-source hardware, one can do anything. A Pi Zero computing cluster? Check. A watering device sensitive to solar-light intensity? Check. A Minecraft server? Check. A NAS for the whole family? Check. It’s there, it’s cheap, and it just needs a bit of geek – namely, you!