It’s Time for FreeBSD!

For the last couple of weeks I have been delving deeper into the arcane arts of FreeBSD, paying extra attention to containers (jails), local/remote package distribution (poudriere) and storage utilities (ZFS RAID, mirrors). Truth be told, with every ounce of practical knowledge gained I grew more impressed by this Unix-like operating system. It’s nothing short of amazing, really! The irony lies in the fact that FreeBSD is an underdog in the Unix world (less so than OpenBSD or Illumos, but an underdog nonetheless), despite the fact that it excels as a server environment and pioneered many technologies currently in focus (process isolation, efficient networking and firewalls, data storage, etc.) years ago. GNU/Linux is picking up the pace, but it still has a long way to go.

I feel traditional Unices got the “base system + additional applications” separation right. One might think it’s just a matter of personal taste – the “order vs chaos” debate that has been going on for ages. The truth is that this separation is not only reasonable, but also extremely useful once one begins to treat the operating system as more than a mere Internet browser or music player. Order is paramount to organizing data and securing it against ill intent or hardware failure. I really appreciate the use of /usr/local, /var/db, /var/cache and other typical Unix directories on FreeBSD, as it makes the system more predictable, and therefore hassle-free. When we have a multitude of systems to care about, hassle-free becomes a necessity. With container technologies like Docker it’s a realistic scenario – 1 host system serving N guest systems. One doesn’t need to run a server farm to get a taste of that.

This is basically where (and when) FreeBSD comes in. It’s a neatly organized Unix-like system with great storage capabilities (ZFS), process isolation for guest systems (jails and bhyve), network routing and firewalls (ipfw, pf, etc.) and package building/distribution (synth, poudriere, etc.). And everything is fully integrated! The “pkg” package manager knows what jails are and can easily install programs into them. Poudriere coordinates building new packages inside jail containers so that the host system is not compromised. These packages can later be distributed remotely via HTTP/HTTPS/FTP or locally via the file protocol. Such low-level integration is somewhat foreign to the GNU/Linux world, though among server distributions like OpenSUSE, CentOS or Ubuntu Server the situation is constantly improving.
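To give a flavor of that workflow, here is a minimal poudriere session. The jail name, ports tree name and package origins are illustrative choices, not anything prescribed:

```shell
# Create a clean build jail tracking 11.0-RELEASE (the name "110amd64" is arbitrary)
poudriere jail -c -j 110amd64 -v 11.0-RELEASE -a amd64
# Fetch a ports tree for poudriere to build from
poudriere ports -c -p default
# Build the requested packages in throwaway jails; results land in a pkg(8) repository
poudriere bulk -j 110amd64 -p default www/nginx shells/zsh
```

The resulting repository directory can then be served over HTTP/HTTPS/FTP (or a file:// URL) and pointed at from pkg’s repository configuration.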

Still, whenever I think about the divide between BSD and GNU/Linux, I see a tall brick wall that both sides are struggling to tear down. FreeBSD wants to become more desktop-oriented, while GNU/Linux is trying to reinforce its server roots. Difficult to tell whether this is good or bad. The BSDs do indeed excel as server systems, as recently highlighted in a NASA study. GNU/Linux is more suited for heavy computation and leisure. The brick wall has plenty of nicks, yet it stands strong. Maybe there is a “third option”? Why not let each do the job it does best? What I mean to say is that FreeBSD has its place in the world and the time is ripe to truly begin to appreciate it!

FreeBSD-Debian ZFS Migration

Since the Zettabyte File System (ZFS) is steadily getting more and more stable on non-Solaris and non-FreeBSD systems, I decided to put the data pool created for the previous entry to the test. In principle, it should be possible to migrate a pool from one operating system to another. Imagine the following scenario – a company is getting new hardware and/or new IT experts and needs to migrate to a different OS. In my case it was from FreeBSD to Debian and vice versa. All data volumes were located in a single pool, but depending on the size of the company, it might be several pools instead. Before even thinking of migrating, it is important to first make sure that all I/O related to the pool(s) to be migrated has stopped. When the coast is clear we can “zpool export <pool>” and begin our exodus to another operating system.
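A minimal sketch of that hand-off, using the pool name from this entry (device discovery is automatic; extra flags are only needed in edge cases):

```shell
# On the old OS: unmount datasets and cleanly detach the pool
zpool export zdata
# ...shut down, then boot the other OS with ZFS support in place...
# On the new OS: scan attached disks for importable pools, then attach ours
zpool import              # lists pools found on the system's disks
zpool import zdata        # -f is only needed if the pool was not exported cleanly
```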

From FreeBSD to Debian
After exporting the zdata pool I installed Debian Testing/Stretch onto the system-bound SSD drive. ZFS is not part of the base installation, hence all pool imports need to be done after the system is ready and the zfs kernel module has been built from the zfs-dkms and spl-dkms packages. apt resolves all dependencies properly, so the only weak link is potential issues with building ZFS on GNU/Linux. Should no problems occur, we can proceed with importing the ZFS pool. GNU/Linux is cautious and warns the user about duplicate partitions/volumes. Those will not be mounted, even if the pool itself is imported successfully. Thankfully, conflicts can be resolved quickly by using a transitional partition/drive to move data around. Once that’s done, our ZFS pool is ready for new writes. Notice that the content of /usr/local/ will undergo major changes, as FreeBSD uses it for storing installed ports/packages and their configurations. In addition, /var/db will contain the pkg sqlite database with all registered packages. While this does not specifically interfere with either apt or Debian (apt configurations are in /var/lib and cached .deb packages in /var/cache/apt/archives), it’s important to be aware of.
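On Debian Stretch the sequence looks roughly like this (assuming the contrib repository is enabled; zfsutils-linux is the userland package name as of that release):

```shell
# Build the kernel module via DKMS (pulls in spl-dkms as a dependency)
apt install linux-headers-amd64 zfs-dkms zfsutils-linux
modprobe zfs
# Scan for importable pools, then attach ours
zpool import
zpool import zdata
zfs mount -a    # datasets whose mountpoints conflict with existing data will refuse to mount
```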

From Debian to FreeBSD
Here, the migration is slightly smoother. The “bsdinstall” FreeBSD installer is designed in a more server-centric fashion (and ZFS is integral to the base system), so the ZFS pool can be connected and imported even before the first boot into the new system. The downside is that FreeBSD does not warn about “overmounting” system partitions from the zdata pool, so it’s relatively easy to bork the fresh installation. Also, /var/cache will contain loads of unwanted directories, and /usr/src, /usr/obj, /usr/ports and /usr/local need to be populated anew, just as during a brand new FreeBSD installation.
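One way to guard against that overmounting, sketched here with hypothetical dataset names, is to import the pool under an alternate root first and inspect the mount layout before letting anything mount over the live system:

```shell
# Import under /mnt so no dataset can shadow a live system directory
zpool import -R /mnt zdata
zfs list -o name,mountpoint,canmount -r zdata
# Stop datasets that would shadow system paths from mounting automatically
zfs set canmount=noauto zdata/usr
zfs set canmount=noauto zdata/var
# Re-import for real once the layout looks safe
zpool export zdata
zpool import zdata
```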

Either way, the migration process is not too difficult and definitely not horrendously time-consuming. Should the user/administrator have PostgreSQL, MySQL or other SQL databases in /var/db, extra steps might need to be taken to ascertain forward and backward compatibility of the database packages. In the end, it’s a matter of knowing what each OS places where. FreeBSD is structured in a very intuitive and safe (from an administrator’s point of view) way. Debian, just like any other GNU/Linux distribution, is a bit more chaotic, hence more caution is required. Both are good in their own regard, hence my incentive for migration testing.
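For PostgreSQL, the safest of those extra steps is a plain dump-and-restore across the migration, rather than moving the on-disk data directory between differently built packages. The database name below is hypothetical:

```shell
# Before exporting the pool: dump in the compressed custom format
pg_dump -Fc mydb > /backup/mydb.dump
# After the migration, on the new OS's PostgreSQL:
createdb mydb
pg_restore -d mydb /backup/mydb.dump
```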

FreeBSD – SSD + 2xHDD ZFS Installation

I recently got an extra 2 TB hard drive for my mighty (cough, cough, maybe some 9-10 years ago) HP Z200 workstation running FreeBSD 11.0-RELEASE, so I decided to finally build a proper 2-drive RAID (Redundant Array of Independent Disks) mirror. I read the zfs and zpool manual pages (manpages) thoroughly, on top of the related FreeBSD Handbook chapters, and got to work. Since I also have a 160GB SSD inside that PC, some tinkering was required. The main issue was that SSD drives rely on TRIM to keep write performance up and reduce wear. UFS has straightforward TRIM support, while ZFS’s TRIM support was younger and less proven at the time. Initially, I thought of having two separate ZFS pools – zroot for root-on-ZFS and boot snapshots on the SSD, and zdata for high-volume data partitions like /usr and /var on the 2-drive array. However, after careful consideration I came up with a simpler partitioning scheme:

160GB Intel SSD:
  141G   freebsd-ufs    (TRIM enabled; mounted as “/”)
    8G   freebsd-swap

zdata mirrored array on 1.5T Seagate Barracuda + 2T WD Caviar Green:
  1.32T  freebsd-zfs    (on each drive)

With such a partitioning scheme I lost boot snapshots, though it was a lot easier to install the OS as I could rely entirely on the standard FreeBSD installation procedure (bsdinstall). First, I performed a standard installation via bsdinstall onto the SSD. Next, I created a 2-drive ZFS pool and named it “zdata”, following the Handbook. I made sure that all parent directories like /usr and /var were mounted from the SSD and only the variable, expandable sub-directories like /var/db, /usr/ports, /usr/src, /usr/local, etc. were placed on the ZFS pool. Since each of those required a parent dataset in the ZFS pool, I used /zdata/usr and /zdata/var, respectively. That way the /usr and /var mountpoints did not get overridden with empty /usr and /var directories from the ZFS pool. This protects the core system from getting wiped if one of the ZFS drives fails. In addition, the system can be reinstalled easily and the ZFS pool added later without major setbacks. The trick is that all ports are installed to /usr/local and the package manager database is in the /var/db directory. Flexible, easy and extremely well documented.
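The layout above can be sketched as zpool/zfs commands like this (device names are illustrative; the Handbook covers the details):

```shell
# Mirror the two HDDs into one pool; datasets default to mounting under /zdata
zpool create zdata mirror /dev/ada1 /dev/ada2
# Parent datasets stay under /zdata so they don't shadow /usr and /var on the SSD
zfs create zdata/usr
zfs create zdata/var
# Only the expandable sub-directories get mountpoints over the live system
zfs create -o mountpoint=/usr/local zdata/usr/local
zfs create -o mountpoint=/usr/ports zdata/usr/ports
zfs create -o mountpoint=/usr/src   zdata/usr/src
zfs create -o mountpoint=/var/db    zdata/var/db
```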

Just to clarify, the above is no rocket science and can be done very easily with the tools immediately available in the core FreeBSD installation. This should really be highlighted more, as apart from the descendants of Solaris, FreeBSD is the only operating system that offers such capabilities out-of-the-box. GNU/Linux systems have their own RAID and volume management tools, but they’re definitely not as established as ZFS. The GNU/Linux alternative to ZFS is btrfs, as it too combines a volume manager with a file system. However, key features like RAID-5/6 are still unstable, and hardly any GNU/Linux distribution offers a btrfs-only setup by default.

FreeBSD – There and Back Again

Beastie On the Bike – The Blog

I guess it should be no surprise that I returned to FreeBSD once more. One of the reasons I originally started learning C was to be able to help write/fix wireless drivers for FreeBSD. Although I haven’t reached that level of proficiency just yet, I feel FreeBSD is truly the place I belong after all. From the intrinsic order of a cathedral, through good programming practices and complete documentation, to great system-level tools (jails, zfs, bhyve, etc.). Reading the most recent issue of Admin: Network & Security made it even clearer to me. GNU/Linux is growing strong in the server sector, with new GUI-driven tools and frameworks for container management. Personally, I think that’s awesome! It’s a win for the whole open-source world. However, Unix is more than just GNU/Linux – people often forget about Solaris/OpenIndiana and the various BSD-based operating systems (FreeBSD, OpenBSD, NetBSD, DragonflyBSD, etc.). They too are great server platforms one can learn a lot from, and there are scenarios in which a non-Linux Unix is more suitable for the same or similar tasks.

When I get hyped about a specific GNU/Linux distribution, I often consider how many people use that distribution, and who they are. I typically steer clear of user-friendly distros for beginners. While I was a beginner once myself, the discussions in their forums and/or IRC channels rarely draw me in. Usually, the gist is that someone didn’t manage to accomplish something because they decided not to read the documentation or search the Web / said forums for a solution. I understand, we are there to help after all. However, the original poster needs to put in some effort, otherwise our aid is for naught. The other sort of issue appears after major releases. Something gets broken because it was unintentionally changed and messed up the setup of that one-in-a-hundred user. I’m then as infuriated as the sufferer, because these issues should not happen in the first place. Problems and inexperience drive me away from user-friendly distributions, but then again, these distributions garner the most users. We hear about OpenSUSE, Ubuntu, etc., but not so much about Gentoo, CRUX and others. It’s a conundrum I cannot solve. I end up gritting my teeth and plunging head first into the fray. Regrets come sooner rather than later, though.

Then, quite obviously, I turn to FreeBSD once more. It’s extremely solid and doesn’t break, even when running -STABLE. When I don’t know something, I fire up man something from the command line or read the respective chapter in the FreeBSD Handbook. On rare occasions, for unanswered or new problems, I ask in the forums. In most cases it turns out that the answer was indeed in the Handbook, just not in the chapter I expected it to be in. Fair game. One of the major concerns is hardware support. Agreed, it’s a tad behind GNU/Linux and Windows/Mac OS X. However, the hardware that works does so without a hitch. No forgotten or flaky drivers for common devices. It’s a matter of preference, but I’d rather have a narrower selection of compatible hardware I can trust at any time than an empty claim that a driver exists when in reality it doesn’t work. What about the popularity I mentioned earlier? There are quite a few people on the many IRC channels, and companies/organizations of importance proactively choose FreeBSD for their servers. Recently, NASA decided to use FreeBSD for one of their projects. Many more success stories are out there, and they’re definitely a credit to FreeBSD’s outstanding quality. However, there is very little self-promotion compared to company-sponsored GNU/Linux distributions. The quality of FreeBSD seems to stem more from the honest work of community members than from merely writing about it (which I’m committing right now…). In that respect, FreeBSD is more community-driven than any GNU/Linux distribution that makes such claims.

I might change my mind at some point, but for now I’m happy to be back to FreeBSD. It’s secure, doesn’t break, keeps my data safe and helps me get the job done fairly quickly. All things considered, I would choose it anytime as my go-to server platform. Hope more people begin thinking alike.

PC Parts Recycling Nightmares

To warn everyone from the get-go, this will be a rant. Therefore, hold onto your socks and pants, and watch the story unfold. And the gravy thicken…

Recycling computer parts is an extremely important aspect of keeping computers alive. It often lets you turn 5 broken computers into at least 1-2 fully operational ones. Not to mention rebuilding, tweaking, expanding, etc. Theoretically, you could have a single desktop computer and just keep replacing its “organs” as they die. All is good even if a hard drive fails. We swap the hard drive(s), restore the operating system and our data, and we’re good to go in minutes/hours. Since my very first computer was a self-assembled desktop PC, way before laptops were a “thing”, I got used to this workflow. Laptops required me to adjust, because each company would use different connectors and build schematics. Also, there were model lines, like the early Dell Latitudes, that had quirks one needed to know before opening up the case. That’s laptops, though – a complicated world of its own. I agree that no one should expect a mobile device to be tinkering-friendly. It’s supposed to be light, energy-efficient and just strong enough to get the job done. Fully understandable priorities! However, I would never in my wildest dreams (or nightmares?) expect these priorities to leak into and envenom the world of tower-sized desktop computers.

Imagine this scenario – you get a midi tower workstation computer from an acclaimed manufacturer like Dell or HP. It has a powerful Intel Xeon 4-core processor with hyper-threading. Marvelous beast! You can use it as a build farm or an efficient virtual machine host. However, years go by and you want to expand it a tad – swap in extra drives, a RAID card perhaps. Or maybe put in a decent graphics card to do enterprise-grade 3D modeling in AutoCAD. You open the case, look inside a bit and you instantly begin to cry. The workstation has a shamefully bad 320W power supply unit (PSU). You then wonder how this PSU was able to support both the power-hungry Intel Xeon CPU and the graphics card. You run web-based PSU calculators and all of them tell you the same thing – you’d fry your computer instantly with such a PSU, and at least a 450-500W one is needed. Unlike many others, you were lucky to last that long. That’s not the end of the story, though! Your workstation’s current PSU cannot be replaced with a more powerful standard ATX PSU. HP decided to use fully proprietary power connectors. Also, a replacement PSU cannot be bought anymore, because this model line was dropped years ago. Now you’re stuck and need to buy a new server motherboard that would fit your Intel Xeon, a new PSU and a new case, because the HP case was designed for the HP PSU. You drop to the floor and wallow at the unfair world… Many more stories like this can be found on the Internet.

I fully understand that manufacturers need to make a living. However, using low-grade proprietary computer parts in systems that are considered upgradable by universal standards is not only damaging to the market by introducing artificial constraints, but also a sign of bad design practices. Not to mention the load of useless electronic junk such an attitude produces. I believe manufacturers should care more about common standards, as in the end it’s beneficial to everyone.

OpenSUSE Tumbleweed vs Fedora 25

I haven’t done a side-by-side review for a while, and since others might have similar dilemmas, here it is: OpenSUSE Tumbleweed vs Fedora 25, from a developer’s perspective, in a moderately fair comparison. As test hardware I used my main ASUS S301LA VivoBook. It’s light, FOSS-friendly and, since I swapped in an Intel wireless chip, has never let me down. OpenSUSE was installed from the network installer, while for Fedora I used the respective desktop spins. The tested desktop environments were XFCE and LXDE. I like old, stable and lightweight. Let’s see what gives!

Getting the installation medium
Fedora 25 wins this one hands down. In fact, any distribution would, compared to OpenSUSE. The full OpenSUSE Tumbleweed installation disc is 4.7 GB in size. However you look at it, that’s an absolute joke. Not only does it not fit on a single regular DVD disc, but it also takes ages to download. If you need an extra disc for 32-bit hardware, you have to download again! This might have been excusable in the age of disc-only distribution, but nowadays it’s just unreasonable. Fedora offers 3 main GNOME3 discs (Workstation, Server and Cloud) + community spins with KDE, XFCE, LXDE, MATE and Cinnamon. Quite the choice, I must say.

The installer
This one goes to OpenSUSE Tumbleweed, easily. OpenSUSE sports perhaps the best installer I’ve seen in a free operating system. It’s so good and reliable it feels enterprise-grade. My favorite feature is the ability to cherry-pick individual packages or follow metapackage patterns. Fedora’s network installer is customizable as well, though not to such a high degree. OpenSUSE just shines.

Out-of-the-box customization
OpenSUSE wins again, unfortunately. While Fedora is properly customized when installed from a prepared LiveCD, that’s not the case with the network installer. All of the extras, like a graphical front-end to the package manager, need to be configured manually. In contrast, OpenSUSE is fully configured even if you select a desktop environment other than KDE or GNOME3 from the network installer. The polish is there, as I mentioned in one of my earlier entries.

System management
OpenSUSE has the great YaST tool for configuring networks, NFS shares, the firewall, kernels, etc. Fedora relies on desktop-specific applications and doesn’t have a dedicated tool. However, Yumex is less cumbersome than the package management GUI in OpenSUSE. I think at this point the general focus of each distribution starts to show as well. OpenSUSE emphasizes system management, while Fedora tries to be a FOSS all-rounder. There is no good or bad here, just differences. I prefer the Fedora way, as it’s a bit more lightweight.

Selection of packages
Both Fedora 25 and OpenSUSE Tumbleweed require some tinkering. Codecs are a no-no due to licensing issues. It’s quite a shame, but then we recall the dismal Windows Media Player… Anyhow, licensed programs can be acquired from either the RPM Fusion (Fedora) or Packman (OpenSUSE) repositories. OpenSUSE wins in terms of package numbers, though Fedora’s approach makes for a more stable environment. Some of the Packman packages are testing-grade (Factory), thus prone to breakage.
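For reference, enabling those repositories looks roughly like this (the Packman mirror URL below is one of several; pick one from their mirror list):

```shell
# Fedora: enable the RPM Fusion "free" repository
dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
# OpenSUSE Tumbleweed: add Packman and switch multimedia packages over to it
zypper addrepo -f http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/ packman
zypper dup --from packman --allow-vendor-change
```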

As a developer platform
Both distributions are geared towards developers and both do it rather well. However, as mentioned earlier, OpenSUSE Tumbleweed favors streamlined system management and focuses more on server-centric features. In theory, it’s a separate product from OpenSUSE Leap, but in practice it shares its goals. Fedora is THE developer platform. The sheer number of programming language libraries and IDE plugins is a win in my book. Even Arch Linux doesn’t come close. Then we have the COPR (Fedora) and OBS (OpenSUSE and others) servers for package building and distribution. Both frameworks are straightforward and reliable. No clear winner here.

Thus, I conclude – a draw. That would explain my dilemma, I guess. OpenSUSE Tumbleweed and Fedora 25 are both great development platforms. However, they clearly focus on different things. OpenSUSE is more server-centric – database management, data storage, safety and recovery, etc. Even though Tumbleweed is the development line, this still shows. The upside is that it’s extremely streamlined, and the extra hand-holding might be useful. Fedora is the true FOSS dev platform. No wonder Linus uses it! Great focus on programming tools and libraries. Things are not as streamlined, but less restrictive as a consequence. Server appliances are also available, though the focus is more on deployment than management. I chose Fedora, because I don’t mind my system breaking occasionally. OpenSUSE Tumbleweed might be the easier choice, though.

Unix and Software Design

Getting it right in software design is tricky business. It takes more than a skillful programmer and a reasonable style guide. For all of the shortcomings to be ironed out, we also need users to test the software and share their feedback. Moreover, it is true that some schools of thought are much closer to getting it right than others. I work with both Unix-like operating systems and Windows on a daily basis. From my quite personal experience, Unix software is designed much better, and there are good reasons for that. I’ll try to give some examples of badly designed software and explain why Unix applications simply rock.

The very core of Unix is the C programming language. This imposes a certain way of thinking about how software should work and how to avoid common pitfalls. Though simple and very efficient, C is an unforgiving language. By default it lacks advanced object-oriented concepts and exception handling. Therefore, past Unix programmers had to swiftly establish good software design practices. As a result, Unix software is less error-prone and easier to debug. Also, C teaches how to combine small functions and modules into bigger structures to write more elaborate software. While modern Unix is vastly different from early Unix, good practices remained a driving force, as the people behind them are still around or have left a lasting impression. It is also important to note that the graphical user interface (the X11 window system, today’s Xorg) was added to Unix much later, and the system itself functions perfectly fine without it.

Windows is entirely different, as it was born from more recent concepts, when bitmapped displays were prevalent and the graphical user interface (GUI) began to matter. This high-level approach impacts software design greatly. Windows software is specifically GUI-centered and as such emphasizes the use of UIs much more. Obviously, it’s a matter of dispute, though personally I believe that good software comes from a solid command-line core. GUIs should be used when needed, not as a lazy default. To put it a bit into perspective…

My research group uses a very old piece of software for managing lab journals. It’s a GUI to a database manager that accesses remotely hosted journals. Each experiment is a database record consisting of text and image blocks. From the error prompts I have encountered thus far, I judge that the whole thing is written in C#. That’s not the problem, though. The main issue is that the software is awfully slow and prints the most useless error messages ever. My personal favorite is “cannot authenticate credentials”. Not only is it obvious when one cannot log in, but the message contains no information as to why the login attempt failed. Was the username or password wrong? Lack of access due to server issues? Maybe the machine isn’t connected to the Internet at all? Each of these should have a separate pop-up message with an optional suggestion on what to do to fix the issue. “Contact your system administrator” not being one of them!