FreeBSD – There and Back Again

Beastie On the Bike – The Blog

I guess it should be no surprise that I returned to FreeBSD once more. One of the reasons I originally started learning C was to be able to help write and fix wireless drivers for FreeBSD. Although I haven’t reached that level of proficiency just yet, I feel FreeBSD is truly the place I belong after all. From the intrinsic order of a cathedral, through good programming practices and complete documentation, to great system-level tools (jails, ZFS, bhyve, etc.). Reading the most recent issue of Admin: Network & Security made it even clearer to me. GNU/Linux is growing strong in the server sector, with new GUI-driven tools and frameworks for container management. Personally, I think that’s awesome! It’s a win for the whole open-source world. However, Unix is more than just GNU/Linux – people often forget about Solaris/OpenIndiana and the various BSD-based operating systems (FreeBSD, OpenBSD, NetBSD, DragonFly BSD, etc.). They too are great server platforms one can learn a lot from, and there are scenarios in which a non-Linux Unix is more suitable for the same or similar tasks.

When I get hyped about a specific GNU/Linux distribution, I often consider how many people use that distribution and who they are. I typically steer clear of user-friendly distros for beginners. While I was a beginner once too, the discussions in their forums and IRC channels don’t draw me in. Usually, the gist is that someone failed to accomplish something because they decided not to read the documentation or to search the Web or said forums for a solution. I understand, we are there to help after all. However, the original poster needs to put in some effort, otherwise our aid is for naught. The other sort of issue appears after major releases. Something gets broken, because an unintentional change messed up the setup of that one-in-a-hundred user. I’m then as infuriated as the sufferer, because these issues should not happen in the first place. Problems and inexperience drive me away from user-friendly distributions, but then again, these distributions garner the most users. We hear about OpenSUSE, Ubuntu, etc., but not so much about Gentoo, CRUX and others. It’s a conundrum I cannot solve. I end up gritting my teeth and plunging head first into the fray. Regrets come sooner rather than later, though.

Then, quite naturally, I turn to FreeBSD once more. It’s extremely solid and doesn’t break, even when running -STABLE. When I don’t know something, I fire up man something from the command line or read the respective chapter in the FreeBSD Handbook. Seldom, but still, for unanswered or new problems I ask in the forums. In most cases it turns out that the answer was indeed in the Handbook, just not in the chapter I expected it to be in. Fair game. One of the major concerns is hardware support. Agreed, it’s a tad behind GNU/Linux and Windows/Mac OS X. However, the hardware that works does so without a hitch. No forgotten or flaky drivers for common devices. It’s a matter of preference, but I’d rather have a narrower selection of compatible hardware I can trust anytime than an empty claim that a driver exists while in reality it doesn’t work. What about the popularity I mentioned earlier? There are quite a few people on the many IRC channels, and important companies and organizations around the world proactively choose FreeBSD for their servers. Recently, NASA decided to use FreeBSD for their project. Many more success stories are out there and they’re definitely a credit to FreeBSD’s outstanding quality. However, there is very little self-promotion compared to company-sponsored GNU/Linux distributions. The quality of FreeBSD seems to stem more from the honest work of community members than from merely writing about it (which I’m committing right now…). In that respect, FreeBSD is more community-driven than any GNU/Linux distribution that makes such claims.

I might change my mind at some point, but for now I’m happy to be back to FreeBSD. It’s secure, doesn’t break, keeps my data safe and helps me get the job done fairly quickly. All things considered, I would choose it anytime as my go-to server platform. Hope more people begin thinking alike.


PC Parts Recycling Nightmares

To warn everyone from the get-go, this will be a rant. Therefore, hold onto your socks and pants, and watch the story unfold. And the gravy thicken…

Recycling computer parts is an extremely important aspect of keeping computers alive. It often lets you turn 5 broken computers into at least 1-2 that are still fully operational. Not to mention rebuilding, tweaking, expanding, etc. Theoretically, you could have a single desktop computer and just keep replacing its “organs” as they die. All is good even if a hard drive fails. We swap hard drive(s), restore the operating system and our data, and we’re good to go in minutes/hours. Since my very first computer was a self-assembled desktop PC, way before laptops were a “thing”, I got used to this workflow. Laptops required me to adjust, because each company would use different connectors and build schematics. Also, there were model lines like the early Dell Latitudes that had quirks one needed to know before opening up the case. That’s laptops, though. A complicated world of its own. I agree that no one should expect a mobile device to be tinkering-friendly. It’s supposed to be light, energy-efficient and just strong enough to get the job done. Fully understandable priorities! However, I would never in my wildest dreams (or nightmares?) expect these priorities to leak into and envenom the world of tower-sized desktop computers.

Imagine this scenario – you get a midi-tower workstation from an acclaimed manufacturer like Dell or HP. It has a powerful 4-core Intel Xeon processor with hyper-threading. A marvelous beast! You can use it as a build farm or an efficient virtual machine host. However, years go by and you want to expand it a tad – swap in extra drives, a RAID card perhaps. Or maybe put in a decent graphics card to do enterprise-grade 3D modeling in AutoCAD. You open the case, look inside a bit and you instantly begin to cry. The workstation has a shamefully bad 320W power supply unit (PSU). You then wonder how this PSU was able to support both the power-hungry Intel Xeon CPU and the graphics card. You run web-based PSU calculators and all of them tell you the same thing – you’d fry your computer instantly with such a PSU and at least a 450-500W one is needed. Unlike many others, you were lucky to last that long. That’s not the end of the story, though! Your workstation’s current PSU cannot be replaced with a more powerful standard ATX PSU. HP decided to use fully proprietary power connectors. Also, a replacement PSU cannot be bought anymore, because this model line was dropped years ago. Now you’re stuck and need to buy a new server motherboard that would fit your Intel Xeon, a new PSU and a new case, because the HP case was designed around the HP PSU. You drop to the floor and wallow at the unfair world… Many more stories like this one can be found all over the Internet.

I fully understand that manufacturers need to make a living. However, using low-grade proprietary computer parts in systems that are considered upgradable by universal standards is not only damaging to the market by introducing artificial constraints, but also a sign of bad design practices. Not to mention the load of useless electronic junk such attitude produces. I believe manufacturers should care more about common standards as in the end it’s beneficial to everyone.

OpenSUSE Tumbleweed vs Fedora 25

I haven’t done a side-by-side review for a while and since others might have similar dilemmas, here it is: OpenSUSE Tumbleweed vs Fedora 25. A developer’s perspective in a moderately fair comparison. As test hardware I used my main ASUS S301LA VivoBook. It’s light, FOSS-friendly and, since I swapped in an Intel wireless chip, it has never let me down. OpenSUSE was installed from the network installer, while for Fedora I used the respective desktop spins. The tested desktop environments were XFCE and LXDE. I like old, stable and lightweight. Let’s see what gives!

Getting the installation medium
Fedora 25 wins this one hands down. In fact, any distribution would, compared to OpenSUSE. The full OpenSUSE Tumbleweed installation disc is 4.7 GB in size. However you look at it, that’s an absolute joke. Not only does it not fit on a single regular DVD, but it also takes ages to download. If you need an extra disc for 32-bit hardware, you have to download again! This might have been excusable in the age of disc-only distribution, but nowadays it’s just unreasonable. Fedora offers 3 main GNOME3 discs (Workstation, Server and Cloud) plus community spins with KDE, XFCE, LXDE, MATE and Cinnamon. Quite the choice, I must say.

The installer
This one goes to OpenSUSE Tumbleweed, easily. OpenSUSE sports perhaps the best installer I’ve seen in a free operating system. It’s so good and reliable, it’s simply enterprise-grade. My favorite feature is the ability to cherry-pick individual packages or follow metapackage patterns. Fedora’s network installer is customizable as well, though not to such a high degree. OpenSUSE just shines.

Out-of-the-box customization
OpenSUSE wins again, unfortunately. While Fedora is properly customized when you install it from a prepared LiveCD, that’s not the case with the network installer. All of the extras like a graphical front-end to the package manager need to be configured manually. In contrast, OpenSUSE is fully configured even if you select a desktop environment that’s neither KDE nor GNOME3 from the network installer. The polish is there as I mentioned in one of my earlier entries.

System management
OpenSUSE has the great YaST tool for configuring networks, NFS shares, the firewall, kernels, etc. Fedora relies on desktop-specific applications and doesn’t have a dedicated tool. However, Yumex is less cumbersome than the package management GUI in OpenSUSE. I think at this point the general focus of each distribution starts to show as well. OpenSUSE emphasizes system management, while Fedora tries to be a FOSS all-rounder. There is no good or bad here, just differences. I prefer the Fedora way as it’s a bit more lightweight.

Selection of packages
Both Fedora 25 and OpenSUSE Tumbleweed require some tinkering. Codecs are a no-no due to licensing issues. It’s quite a shame, but then we recall the dismal Windows Media Player… Anyhow, license-encumbered programs can be acquired from either the RPM Fusion (Fedora) or Packman (OpenSUSE) repositories. OpenSUSE wins in terms of package numbers, though Fedora’s approach makes for a more stable environment. Some of the Packman packages are testing-grade (Factory) and thus prone to breakage.

As a developer platform
Both distributions are geared towards developers and both do it rather well. However, as mentioned earlier, OpenSUSE Tumbleweed favors streamlined system management and focuses more on server-centric features. In theory, it’s a separate product from OpenSUSE Leap, but in practice it shares its goals. Fedora is THE developer platform. The sheer number of programming language libraries and IDE plugins is a win in my book. Even Arch Linux doesn’t come close. Then we have the COPR (Fedora) and OBS (OpenSUSE and others) servers for package building and distribution. Both frameworks are straightforward and reliable. No clear winner here.

Thus, I conclude – a draw. That would explain my dilemma, I guess. OpenSUSE Tumbleweed and Fedora 25 are both great development platforms. However, they clearly focus on different things. OpenSUSE is more server-centric – database management, data storage, safety and recovery, etc. Even though Tumbleweed is the development line, this still shows. The upside is that it’s extremely streamlined and the extra hand-holding might be useful. Fedora is the true FOSS dev platform. No wonder Linus uses it! Great focus on programming tools and libraries. Things are not as streamlined, but less restrictive as a consequence. Server appliances are also available, though the focus is more on deployment than on management. I chose Fedora, because I don’t mind my system breaking occasionally. OpenSUSE Tumbleweed might be the easier choice, though.

Unix and Software Design

Getting it right in software design is tricky business. It takes more than a skillful programmer and a reasonable style guide. For all of the shortcomings to be ironed out, we also need users to test the software and share their feedback. Moreover, it is true that some schools of thought are much closer to getting it right than others. I work with both Unix-like operating systems and Windows on a daily basis. In my personal experience, Unix software is designed much better, and there are good reasons for that. I’ll try to give some examples of badly designed software and explain why Unix applications simply rock.

The very core of Unix is the C programming language. This imposes a certain way of thinking about how software should work and how to avoid common pitfalls. Though simple and very efficient, C is an unforgiving language. By default it lacks advanced object-oriented concepts and exception handling. Therefore, past Unix programmers had to swiftly establish good software design practices. As a result, Unix software is less error-prone and easier to debug. Also, C teaches how to combine small functions and modules into bigger structures to write more elaborate software. While modern Unix is vastly different from early Unix, good practices remained a driving force, as the people behind them are still around or have left an everlasting impression. It is also important to note that the graphical user interface (the X11 server, Xorg) was added to Unix much later and the system itself functions perfectly fine without it.

Windows is entirely different, as it was born from more recent concepts, when bitmapped displays were prevalent and the graphical user interface (GUI) began to matter. This high-level approach impacts software design greatly. Windows software is specifically GUI-centered and as such emphasizes the use of GUIs much more. Obviously, it’s a matter of dispute, though personally I believe that good software comes from a solid command-line core. GUIs should be used when needed, not as a lazy default. To put it a bit into perspective…

My research group uses a very old piece of software for managing lab journals. It’s a GUI to a database manager that accesses remotely hosted journals. Each experiment is a database record consisting of text and image blocks. From the error prompts I have encountered thus far, I judge that the whole thing is written in C#. That’s not the problem, though. The main issue is that the software is awfully slow and prints the most useless error messages ever. My personal favorite is “cannot authenticate credentials”. Not only is that obvious when one cannot log in, but it contains no information as to why the login attempt failed. Was the username or password wrong? Lack of access due to server issues? Maybe the user forgot to connect to the Internet at all? Each of these should have a separate pop-up message with an optional suggestion on how to fix the issue. “Contact your system administrator” not being one of them!

Resources and Limitations

Somewhat inspired by the extensive works of Eric Raymond and Paul Graham I decided to write a more general piece myself. Surprisingly, the topic is almost never touched upon or discussed only indirectly. We programmers often write about software efficiency in terms of resource usage (RAM, CPU cycles, hard drive space, material wear, etc.), however the mentioned resources are actually secondary or even tertiary resources. There is a single fundamental resource, from which all the others are derived – time.

We are all born with a certain selection of genes that predisposes us to a defined lifespan. Thanks to improvements in medicine, this lifespan can be adjusted so that we don’t die prematurely due to a genetic defect or an organ failure. Still, the overall limit is quite tangible. In order to sustain our living, we exchange bits of this lifespan (time) for currency units by working. With enough units we can afford accommodation, nourishment, entertainment, etc. In essence, to keep ourselves in good spirits and in a healthy body. As part of software design we constantly measure time in combination with the previously mentioned resources. We try to spend less time on repetitive tasks that can be easily automated via programs, but we also require efficient tools to write those programs. It’s very clear that with the need to make a living, we most likely don’t have enough time to master every major programming language or write every tool we need to get the job done. We need to trust fellow programmers in that respect. As Eric Raymond once wrote, one should typically not need to write a tool twice, unless for learning purposes.

Thereby, provided that the secondary/tertiary computer resources are not limiting, it would be wise to use a tool (operating system, programming language, API, framework, etc.) that gives the highest efficiency. For instance, Ubuntu or OpenSUSE instead of Slackware, Arch Linux or Gentoo. Python, Ruby or Java instead of C or C++. There is absolutely no shame in using a high-level tool! Good enough is far more important than prestige or misdirected elitism. That’s how you win against the competition – by being efficient. I think we should all remember that!