How I Migrate(d) to openSUSE and Why

I’m a die-hard FreeBSD fan. I simply love it! It rubs me the right (UNIX) way. Through trials and tribulations I managed to make it do things it was possibly not designed to do. ZFS? Amazeballs. Cool factor over 9000! However, all of that came at a tremendous cost in energy and time. I reached a point where I no longer want to spend time manually configuring everything and inventing ways of automating things that should work out of the box.

Furthermore, most FreeBSD tools are not compatible with other operating systems, so learning FreeBSD (or any other BSD variant, for that matter) locks me into FreeBSD. Despite many incompatibilities, this is not the case with Linux. On a side note, the ZFS on Linux project was a great idea. The Linux ecosystem badly needed a mature storage-oriented filesystem such as ZFS. BTRFS, to me at least, “is not there yet”. Other tools, such as containers, were reinvented in so many different ways that Linux has outpaced FreeBSD many times over. Importantly, Linux tools have been tested in many more real-life scenarios and are in general more streamlined. For automation, this is crucial. Again, I don’t want to tinker with virtually every tool I intend to use. Neither do I want to read pages and pages of technical documents to get a simple container running. What’s more, I shouldn’t be forced to, since that’s terribly unproductive.

Finally, I like to run the same operating system on most of my computers (be it i386, x86_64 or ARM). FreeBSD support for many desktop and laptop subsystems is spotty at best…

Enter openSUSE!


Cute lizard stock photo. Courtesy of the Interweb.

Seemingly, openSUSE addresses all of the above issues. True, ZFS support is not reliable and there are no plans to change that. The problem, as always, is licensing. BTRFS is still buggy enough to throw a surprise blow where it hurts the most. Personally, I don’t run RAID 5/6 setups, but those are BTRFS’s biggest weakness right now. That, and the occasional “oh shit!” moments. Regardless, I think I’ll need to get used to it. Lots of backups, coffee and prayer – the bread and butter of a sysadmin. On the upside, this is virtually the only concern I have regarding openSUSE.

The clear positives:

  • Centralized system management via YaST2 (printers, bootloader, kernel parameters, virtual machines, databases, network servers, etc.). A command-line interface is also available for headless appliances. This is absolutely indispensable.
  • Access to extra software packages via semi-official repositories. Every tool or framework I needed was easily found. This is a much more scalable approach than the Debian/Ubuntu way of downloading ready-made .deb packages from vendors and having to watch out for updates yourself. Big plus.
  • Impressive versatility. openSUSE is nominally a desktop-oriented platform, though thanks to the many frameworks it offers, it works equally well on servers. In addition, there is the developer-centric rolling-release flavor, Tumbleweed, which tries to follow upstream projects closely. Very important when relying on fast-moving core libraries like pandas or numpy in Python.

So far, I’ve switched my main desktop machines over to openSUSE, but I’m also testing its capabilities as a KVM host and database server. Wish me luck!


Why Golang is not for me…

Recently, I decided the time has come to progress my not-yet-existent game developer career. I have always wanted to write games, and there are a lot of great old-school games that deserve remakes using modern technologies. After some discussions with my wife (big kudos to her!) and getting properly inspired by DOS-era gems and jewels, I was ready to pick a language. I’m quite confident in my Python skills, however for games I’d rather use one of the mid- to heavyweight contenders like Java, C#, C or C++. I have some experience in C, but pure C is too bare-bones: heavily procedural and unfortunately without enough tooling to build rich graphical applications. Sure, I could try nuklear.h or similar single-header libraries for drawing shapes. That’s sufficient for menus, though not for an entire project. Clearly, C is better suited for number-crunching subroutines. C++ is way too complex for me, though of course most games are written in C++, since rendering libraries and game engines are coded in C++. That makes perfect sense. Something easier, perhaps? C# is a Microsoft thing and I would like my game(s) to be easily accessible on all platforms. That left me with Java and a new contender – Go.


Funky gopher on a funky horse – courtesy of the Web

The Golang project was publicly announced in 2009 and has managed to garner quite some appeal throughout the years. It’s not a Google toy anymore. For instance, CloudFlare uses it in their Railgun project (circa 4000 lines of code, last time I checked). Other notable examples include the entire TICK stack for time-series metrics (Telegraf, InfluxDB, Chronograf, Kapacitor) and Grafana (a visualization platform for various database back-ends like InfluxDB, MySQL, Elasticsearch, etc.). I even found a 3D game engine advertised as programmed in Go (~50% of it was written in C, though). Since it appeared that Go is here to stay and is slowly establishing its position as one of the mainstream languages, I decided to at least take a look at it. Sadly, the more I read about it, the less inclined I was to code in it. The emphasis on concurrency is both important and useful, however I feel the language is severely lacking in many other respects.
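
To give credit where it is due, goroutines and channels really are pleasant to work with. Here is a minimal sketch of my own (purely illustrative, not taken from any of the projects mentioned above) of the fan-out pattern Go is praised for:

    package main

    import (
        "fmt"
        "sync"
    )

    // worker reads numbers from jobs and writes their squares to results.
    func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
        defer wg.Done()
        for n := range jobs {
            results <- n * n
        }
    }

    func main() {
        jobs := make(chan int)
        results := make(chan int, 5) // buffered, so workers never block on send

        var wg sync.WaitGroup
        for i := 0; i < 3; i++ { // three concurrent workers
            wg.Add(1)
            go worker(jobs, results, &wg)
        }

        for n := 1; n <= 5; n++ {
            jobs <- n
        }
        close(jobs) // lets the range loop in each worker terminate

        wg.Wait()
        close(results)

        for r := range results {
            fmt.Println(r)
        }
    }

Spawning thousands of such workers is as cheap as spawning three – that part of the language I have no quarrel with.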


No time for classes

Thanks to my Python background I am well accustomed to object-oriented programming and I consider it essential to writing DRY code. It’s not always the best approach, though in most cases it provides the means of maintaining modular programs. We know that modular is good, because it allows us to exchange bits and pieces without breaking APIs. There was a bit of a shift from old-style to new-style classes (with their diamond-capable method resolution order) on the way from Python 2 to Python 3, which seemed partly inspired by Java. However, Python goes one step further and allows multiple inheritance, something purposely omitted from Java. As it tends to be quite confusing, I avoid it rather than abuse it. Pure C, the ancestor of many modern languages, lacks classes, and they were never introduced in subsequent revisions of the C standard. That stands to reason, as C++ came along in the mid-1980s and expanded the successful formula of C with multiple useful features, including object-oriented paradigms. Also, back in the day procedural programming was sufficient, and even nowadays it is perfectly adequate for system-level programming. Unfortunately, Go’s design follows C rather than C++. As a result, it demonstrates a strong procedural focus, lacking real means of data encapsulation. Forget classes, object hierarchies, clean polymorphism, operator overloading, etc. To me that’s a step backwards, not forward. It means that Go will suffer from the very same general limitations as C.
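
To be fair, Go does offer methods on structs and implicitly satisfied interfaces, which cover some of the same ground. A tiny sketch of my own (purely illustrative) of what passes for polymorphism in Go:

    package main

    import (
        "fmt"
        "math"
    )

    // Shape is satisfied implicitly by any type with an Area method;
    // this is as close to classic polymorphism as Go gets.
    type Shape interface {
        Area() float64
    }

    type Rect struct{ W, H float64 }
    type Circle struct{ R float64 }

    func (r Rect) Area() float64   { return r.W * r.H }
    func (c Circle) Area() float64 { return math.Pi * c.R * c.R }

    func main() {
        for _, s := range []Shape{Rect{W: 3, H: 4}, Circle{R: 1}} {
            fmt.Println(s.Area()) // dynamic dispatch through the interface
        }
    }

It works, but note what is missing: no base classes, no overriding, no operator overloading – exactly the omissions I am complaining about.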


The emperor’s new clothes

One of the major aspects of a language is its syntax. Python wins against many more performant languages because it’s simple, encourages a clean and consistent coding style, and makes reading other people’s code a breeze. In fact, so does C (in a way), if it’s not abused. The reason Java was successful upon its release was that it closely followed the syntax of C and C++. It was meant as a portable, cross-platform language with a familiar look to encourage existing programmers to switch. One could code in C, C++ and Java, covering a multitude of use cases effortlessly. In addition, the Java Virtual Machine supports other languages like Scala, Clojure, Groovy and Jython for even more potent combinations. In contrast, Go was inspired by C, yet it completely overhauled the familiar C-like syntax for no apparent reason. This leads to confusion, to unlearning old but useful habits, and to investing resources in learning a completely foreign language. At this point I’m hardly motivated.
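
To illustrate the kind of overhaul I mean, here is a toy example of my own, with the C equivalents shown as comments – the same trivial declarations read almost back to front:

    package main

    import "fmt"

    // In C one would write:
    //   int x = 5;
    //   int *p = &x;
    //   int add(int a, int b) { return a + b; }
    // Go flips the order: names first, types second.

    var x = 5
    var p *int = &x

    func add(a, b int) int { return a + b }

    func main() {
        fmt.Println(*p, add(2, 3))
    }

Neither form is wrong, but decades of C-trained muscle memory buy you nothing here.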


Simple == useful?

As I mentioned earlier, Go selectively omits many modern and potentially useful language features, classes among them. It was originally advertised as a simple-to-understand systems programming language meant to make life easier for people at Google. Yet it locks prospective programmers into a – one could even say dumbed-down – C/C++ syntax, which is alien to other languages. It is true that C++ is a monster of a language due to its sheer scope. However, it is perfectly viable to establish subsets or dialects of it to make it easier to digest. My point is that it would be more useful for prospective programmers to learn a language with more features than to have to re-invent those features in a band-aid manner as they grow more and more comfortable with the language.
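
A concrete example of such band-aid re-invention (my own sketch, assuming class inheritance is the feature being worked around): Go emulates it with struct embedding, which promotes methods but remains composition rather than subtyping:

    package main

    import "fmt"

    type Animal struct{ Name string }

    func (a Animal) Speak() string { return a.Name + " makes a sound" }

    // Dog embeds Animal: Speak is promoted onto Dog, but Dog is not an
    // Animal subtype, and redefining Speak on Dog would merely shadow
    // the original – there is no overriding with dynamic dispatch.
    type Dog struct {
        Animal
    }

    func main() {
        d := Dog{Animal{Name: "Rex"}}
        fmt.Println(d.Speak()) // "Rex makes a sound"
    }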


Conclusions

While my general impression of Go is largely negative, I do not by any means consider it a useless language. Quite the opposite! It has provided the server space with a number of useful engines and applications for networking, data storage and visualization. In some cases these pieces of software are actually more robust than existing solutions in C/C++, which to me is quite impressive. However, I still believe the arguments against Go are valid. I would rather continue learning Java, or even go straight for C++, and I recommend others do the same.

Why Human Evolution is Incomplete


Unrelated inspirational alien landscape to get your attention 😛

Recently, I finished reading Noam Chomsky’s Optimism over Despair (interviews with C.J. Polychroniou) and now I’m making my way through Stephen Hawking’s A Brief History of Time. Both books were written by widely acclaimed thinkers (and scientists, in fact), and are therefore highly inspiring. The conclusions drawn by Chomsky at the end of his interviews are rather terrifying, though he remains optimistic about the future of mankind in spite of the many threats encroaching on our planet. Initially, I was optimistic as well, however as I continued to ponder the multitude of hazards we have brought upon ourselves, I slowly grew pessimistic. The source of my pessimism lies in the flawed nature of our existence. To explain my skepticism, I will attempt to break the problem down into unambiguous concepts.


How non-human species evolve

When plants, animals, fungi and microorganisms encounter a novel ecosystem (or a change in the current ecosystem), they try to adapt to it by:

  • producing a lot of (potentially fragile) offspring, some of which will survive
  • developing robust survival mechanisms on the level of a single individual

Typically, there is a balance between the two, and its shift depends on many factors, including the size of the organism, the accessibility of nutrients, the cost of bearing offspring, etc. In turn, adaptation can be measured as the rate at which favorable traits are fixed in a population via genetic means. Therefore, the rate of adaptation (also known as evolution) should be higher in species known to produce large offspring pools: more genetic variants are probed, and due to offspring fragility, favorable traits are more easily fixed in the population. The process of adaptation is of course iterative and continues through subsequent generations. The general assumption is that the starting point for each iteration is the genetic background of an individual, subject to selective pressure. In other words, the genetic material (the DNA sequence plus non-genetic modifications of that sequence) contains the entirety of the information needed for evolution to take place. Obviously, for the sake of simplicity I am just scratching the surface here. The world of flora and fauna is a lot more complex.


How humans evolve

It should not surprise anyone that we humans evolve vastly differently. Our adaptation rate is incredibly low, as we favor the survival of each individual over producing a lot of offspring. One of the more practical reasons is that bearing children is extremely tedious and energy-consuming. Another is that we consume a lot more resources to sustain ourselves than, for instance, a turtle. In addition, we have developed artificial means – called tools – of defending ourselves from predators and protecting ourselves from disease-bearing microorganisms. That stands to reason, as DNA offers very limited abstraction compared to technology. Our cumulative technological advancements let us evolve beyond the capabilities of a typical humanoid mammal. While that definitely sounds wonderful and empowering, it does not mean that we have stopped being animals and gained limitless potential. On the contrary, in our pursuit of greatness we are still reduced to bodies of flesh and bone, which decay with time.


Challenges of human evolution

Apart from our rather limited average lifespan, much of our adaptive capability is external and needs to be re-acquired via a process called learning. Unfortunately, this takes time and, despite our countless efforts, plateaus for a number of reasons:

  1. The learning process is unique to each individual
  2. Accessibility of information is not absolute
  3. Means of co-individual communication are not universal
  4. The well-being of an individual is often in conflict with the well-being of the society

Firstly, the way we build links between pieces of information and remember things is incredibly complex and still far beyond our comprehension. We do have a good understanding of the biology behind memory, but in truth there is a lot more to it than that. To make matters worse, each of us builds associations differently. Therefore, it is impossible to create a universal learning system which would allow us to accumulate information effortlessly.

Secondly, given our limited lifespans, there is typically a lot more we wish or need to know than we can manage. Thus, before absorbing information we spend time acquiring it. The situation has gotten better in recent years with the advent of voice recognition systems, household aides like Amazon’s Alexa and information aggregation engines akin to Wolfram Alpha, though learning is still a process.

Thirdly, our species is unfortunately divided by language, and we need translation tools to circumvent this problem. Here, too, we have made significant progress, and devices providing almost instantaneous translations are available.

Alas, issue 4 is an unsolvable one. We have developed laws and regulations which allow us to balance the rights and responsibilities of society, community and individual. However, these are perfectly arbitrary and subject to our flawed human judgement. In addition, they are not inherent; unlike the laws of physics, human laws cannot be applied equally in all contexts. Finally, right and wrong are human terms. We impose them on the aspects of nature which we consider controllable. However, nature itself deals only in fitness, survival and robustness.


Future perspectives?

As our evolutionary capabilities are hampered by our biology (longevity and the capacity to absorb information) and by abstract terms such as good and bad, I believe it is very unlikely that we will remain at the top of the metaphorical food chain for long. As a species we have the means to address major world problems, but we choose not to for individual benefit. This much cannot be resolved, for our nature is flawed. Therefore, we should promptly direct our efforts toward the creation of species capable of superseding us successfully.

FreeBSD 11.1 on ASUS VivoBook S301LA

I decided it is time to write a piece on FreeBSD, since I now officially use it as my main operating system both at home (alongside OpenBSD) and at work. My mobile battle gear of choice is the ASUS VivoBook S301LA. It’s a 4th-generation Intel-based ultrabook-class laptop, one of the many released by ASUS every year. It has strong points, though also quite a few disadvantages. I would like to discuss it from the perspective of a FreeBSD enthusiast.


Photo courtesy of notebookcheck.com and the Interwebs

Hardware specifications:

  • Processor: Intel Haswell Core i3-4010U @ 1.70 GHz
  • Graphics: Intel HD 4400 integrated GPU with up to 768 MB shared RAM
  • Memory: 4 GB DDR3L 1600 MHz (soldered) + an empty slot for another 4 GB
  • Hard drive: Western Digital Blue 500 GB 5400 rpm (replaceable)
  • Ethernet: Realtek RTL8169 PCI Express Gigabit
  • Wireless: MediaTek MT7630E with built-in Bluetooth (half-size, replaceable)
  • Sound: Intel HD Audio (SonicMaster)
  • Webcam: Azurewave USB 2.0 UVC HD Webcam
  • Touchscreen: USB SiS Touch Controller
  • Battery life: around 4 hours
  • Microphone: yes, next to the Webcam
  • Keyboard: Generic AT keyboard
  • Touchpad: Generic touchpad with integrated click-fields
  • Additional ports:
    – left side: Ethernet, HDMI, USB 3.0, microphone/headphone jack
    – right side: Kensington lock, 2x USB 2.0, SD card slot


The good:

  • Extremely lightweight
  • Never overheats
  • Moderately fast after upgrades

The bad:

  • Paper-thin keyboard
  • Slippery touchpad
  • Highly reflective, mirror-like screen
  • Cheap, lower-end wireless card

Overall, this device is a fairly standard consumer-grade ultrabook. The crappy keyboard is something one can get used to rather quickly. I’m not a fan of touchpads, so I rely on PC mice for clicking and scrolling unless I’m on a plane or train. Nowadays, reflective screens are no longer an issue thanks to anti-glare screen protectors. The obvious downside is that anti-glare screens lack the sharpness typical of glossy screens. In general, the drawbacks can easily be mitigated with upgrades, which, however, turn the laptop into a moderate investment. The choice is down to the prospective user. Furthermore, the manufacturer (ASUS) made some choices I am not entirely convinced by. Firstly, touchscreens are more useful on hybrid flip-laptops like the Lenovo Yoga. In this model the touchscreen is more of a nuisance when cleaning the plastic cover on the display, and it draws power needed elsewhere. Secondly, the wireless adapter is perhaps the worst of its generation, with a nominal bandwidth of 150 Mbps. It’s even more of a travesty to see it in high-end ROG gaming models (yes, it’s true…).


The FreeBSD perspective:

This might be somewhat disappointing. Depending on what one expects from a mobile device, the S301LA is either average or just plain broken. Not to sound rude, but I’m sure a ThinkPad or an IdeaPad would be a far superior choice. Haswell HD 4400 graphics chips have had proper (i.e. working) FreeBSD support only since release 11, and most other components are barely supported. The Azurewave USB webcam actually works (webcamd needs to be attached to the USB device ugen0.2 by root or a member of the webcamd group), but no VoIP software is available on FreeBSD out-of-the-box. I guess one could get Windows Skype to run via WINE or force the alpha-quality Linux client into submission, but that’s a lesson in futility, I think. Personally, I wouldn’t be using this ultrabook at all if not for the fact that I finally managed to replace the trash wireless adapter with something half-decent (albeit from 10 years back) from Intel, namely the WiFi Link 5100. After adding another 4 GB of RAM and a Western Digital SSD, I would consider this ultrabook worth the money and time. However, as I mentioned earlier, there are far better choices on the market.

Show Me Your Code!

Over the last couple of months I have joined multiple Facebook tech groups and participated in their discussions. As demographically diverse as Facebook is, I noticed a worrying trend. Most of the inquiries share the following features:

  1. They are incomplete, badly written and/or fail to explain the problem at hand in an understandable fashion.
  2. They expect immediate answers and solutions.
  3. They demonstrate that the inquirer did not first try to address the problem on their own.

Feature 1 can be explained by the fact that most of the Facebook group members are not native English speakers and struggle with forming comprehensible questions. Still, I find it odd that they invest so little effort. For instance, if an inquiry refers to issues with a specific operating system, it would be wise to provide the full specification of the computer running that operating system, or at least to name the operating system, no? I would assume this should be dictated by common sense, though perhaps education also plays a role here? After all, we are taught how to pose questions at school and at university. The consequence is that even if the question is answered, the inquirer may not understand the answer, because their language and/or technical skills are insufficient. It is a sad but inescapable aspect of discussion groups.

Features 2 and 3 are interconnected, and they grind my gears the most. The phrases I often encounter are “suggest me”, “give me solution”, “give me program command” and “give me/show me your code”. All of them assume that the answering party is obliged to provide a solution as quickly as possible, while in reality the opposite is true. The answering party is not obliged to do anything! Rather, the inquirer should display humility in order to receive a reliable answer. What is even more insulting and disrespectful is that some of these questions are phrased in such a way that they could just as easily be run as a Google query. No additional help from a dedicated technical group is needed. Other questions expect a detailed and clear explanation of an entire framework, which usually takes a year, if not years, to build. For instance, an inquirer wants to know how to build a fingerprint system for monitoring/registering students at a local school. He or she anticipates a full outline of the entire system in a “ready-to-go” package, preferably described in layman’s terms, so that he or she can proceed with building it. In all honesty, endeavors like this typically require a team of experienced software engineers, not a ragtag group of volunteers.

In the end, it boils down to the issue of instant gratification, which plagues modern societies. Many Western business models are based on the premise that much can be achieved with minimal effort and that the evanescent everyman can become a hero instantly. A fantastically enticing end product is shown, together with a set of trivial instructions to follow. People seek happiness, and obviously it’s best if that happiness is achieved quickly. However, instant gratification does not last and requires more units of the product, or a newer product. That in turn drives the ever-increasing demand for the product. Technology is no different. People are made to believe that coding is easy and great programs can be written overnight. Also, everyone can instantly become an experienced hacker, because why not? Reality is different, though. Impressions are cheap, while actual experience is resource-intensive. Learning is a process, and it takes time. We can disagree, though that will not alter reality – merely our impression of it.

The Ubuntu Conundrum

Ubuntu is perhaps the most popular Linux-based operating system, and for that very reason it has as many proponents as enemies. I myself use Ubuntu (Xubuntu 16.04 LTS, to be exact) at work, both as a development platform and to host services in libvirt/KVM virtual machines (Ubuntu Server 16.04 LTS there). It performs alright and so far hasn’t let us down, though we haven’t used it through more than two releases, so we’re unable to gauge its reliability properly. On more personal grounds, I can say it has worked splendidly on my early-2011 MacBook Pro 15″ with its faulty AMD graphics since the very beginning (out-of-the-box, as one might say). Individual package upgrades don’t bring about all of the regressions people profess so fervently. However, I can understand where the hate is coming from, and I admit it is partially justified.

Product and popularity
For whatever reason, human psychology dictates that we equate quality with popularity. If something is extremely popular, it simply must be good, right? Wrong. Completely. A product is popular because someone with enough resources made it visible to as many consumers as possible. The product was made popular. Quality is a useful, but clearly secondary, measure. A good anecdote is the long-gone rivalry between VHS and Betamax. We all remember VHS, though most of us do not remember Betamax, which was technically superior. However, it lost the popularity race and will forever be remembered as the second best, or not remembered at all. Now, this is not to say that Ubuntu is in any way inferior…

Ubuntu, the (non)universal operating system
The main issue with Ubuntu is that it succeeded as a more open alternative to Windows and macOS, yet did not solve the underlying problem – computer literacy. Of course, not every computer user has to be a geek and hack the kernel. However, when I see Ubuntu users address their PC-related issues with the same shamanism and hocus-pocus as on Windows, my soul twists in convulsions. We did not flee from closed-source operating systems only to change the names of our favorite tools and the look of our graphical user interfaces – though observing current trends, I might be terribly wrong. The other problem is that Ubuntu’s popularity has become self-perpetuating: it’s popular because it’s popular. Many tutorials online and in magazines assume that if one uses Linux, he or she surely runs Ubuntu on all of his or her computers. This is extremely harmful to the Linux ecosystem as a whole, because neither Debian nor Ubuntu represents standard Linux. Both of those systems introduce a number of configuration “improvements” to applications which are not defined in upstream documentation and are absent from other distributions (so-called Debianisms). Therefore, calling Ubuntu a universal operating system is more of a publicity gimmick than a fact – especially considering that on servers, SLES (SUSE Linux Enterprise Server), CentOS and Red Hat clearly dominate.

The solution?
I would say it’s high time we began showing newcomers that there is an amazing world of Linux beyond Ubuntu. To that end, I have a couple of suggestions for specific needs and the distributions covering those needs. Related questions come up often in the Linux Facebook group and around the Internet, but get answered superficially via click-bait articles listing the top 10 distributions of 2017/18. Not exactly useful. Anyhow, the list:

  • Software development:
    – Fedora (up-to-date packages and developer-centric tools like COPR)
    – Arch Linux (up-to-date with a wide range of packages via AUR and vanilla package configuration for simplicity)
    – openSUSE Tumbleweed (up-to-date, with a rolling, snapshot-based release cycle, while sharing the high-quality Leap/SLES management tools like YaST2)
  • Servers:
    – openSUSE Leap (3-year long support life cycle, high-quality management tools like YaST2 and straightforward server + database + VM configuration)
    – CentOS (binary compatible with Red Hat Enterprise Linux)
    – FreeBSD (ZFS hard drive pool management + snapshots, reliable service/database separation via jails, rock solid base system)
  • Easy-to-use:
    – Manjaro Linux (based on Arch Linux, with lots of straightforward graphical configuration tools, multiple installable kernels, etc.)
    – Fedora (not only for developers!)
    – openSUSE Leap (for similar reasons as above + a streamlined, user-friendly installer)
  • For learning Linux:
    – Gentoo (painful at first, but extremely flexible with discrete software feature selection at compile-time via USE flags)
    – Arch Linux (Keep It Simple Stupid; no hand-holding, but with high-quality documentation to make the learning curve less steep)
    – CRUX (similar to Gentoo, but without the useful scripts; basically, vanilla Linux with a very simple package manager)
  • For learning BSDs:
    – FreeBSD (as mentioned above)
    – OpenBSD (strong emphasis on code-correctness, system engineering and network management)
    – DragonFly BSD (pioneering work on data storage and multi-processor systems)

Linux and the BSDs

Throughout my many months of using various open-source and proprietary operating systems, I have made certain observations that might be useful to some. I started with Linux, though at some point I migrated to the BSDs for personal and slightly more pragmatic reasons. I quickly became a lot more familiar with FreeBSD and OpenBSD than I ever was with openSUSE or Ubuntu. It may seem odd, as Linux is far easier to get into, however seasoned UNIX admins will surely understand. The BSDs have a technical appeal which Linux is steadily losing in favor of other features. To the point, though:

1. Save for Windows, most operating systems are extremely similar. macOS (as it is now called) relies on a huge number of BSD utilities, because at one point in time they were more accessible (well-documented and permissively licensed). In turn, the open-source BSD-family operating systems, such as OpenBSD and FreeBSD, adopted Clang with its LLVM back-end (the compiler toolchain backed by Apple) as their main system compiler. A number of former, now defunct, proprietary operating systems were based on some revision of UNIX – IRIX, HP-UX, Solaris, etc. There is also a significant overlap in other tools, such as sysctl and ifconfig, which were forked, modified and adjusted to fit individual systems, but bear functional resemblance across the various flavors of UNIX. The remainder (text editors, desktop environments, etc.) is typically BSD/MIT/GPL-licensed and available as packages or ports. Therefore, the high-level transition between the BSDs and Linux isn’t as dramatic as one might expect.

2. That being said, the BSDs and Linux follow different philosophies, of which the BSD philosophy seems a lot more practical to me in the long run. What most people forget (or never learn) is that Linux is just a kernel, and the development teams creating distributions (the actual Linux-based operating systems!) can do almost anything they want with it. This leads to myriad possible feature sets already at the kernel level. It is also up to the distributions to assemble the kernel and userland utilities into a full-fledged operating system. Unfortunately, distribution teams are often tempted to add a bit of an artistic touch to their work, which causes Linux distributions to differ in key aspects. While on a higher level this is hardly noticeable, it may bite one back when things go sour or manual configuration is required. This Lego-blocks or bazaar concept makes it difficult for upstream software developers to identify bugs, and for companies to properly support Linux with hardware and software. Eventually, only certain distributions are recognized as significant, such as Ubuntu, CentOS, openSUSE, Fedora or Debian. The BSDs take a more organized approach to system design, which I believe is highly advantageous. An operating system consists of the kernel and the basic utilities which make it actually useful, such as a network manager, compiler, process manager, etc. Depending on the BSD in question, the scope of the base system is defined differently. For instance, OpenBSD ships with quite a few servers, including a display server (Xenocara). FreeBSD, in turn, focuses on providing server capabilities.

3. Recently (or not so recently), the focus of Linux has shifted from being merely a server platform to being a desktop replacement. That’s the ballpark of MS Windows and macOS, both of which are well-established desktop platforms. The crux of the matter is that utilities had to be adjusted to fulfill more GUI-oriented roles, making command-line work slightly trickier. The other problem is that software turnover in Linux-land is extremely rapid, and programs either become stale way too quickly or break too often. That’s clearly a no-go for server scenarios. This is where the BSDs come in. FreeBSD was designed as a multi-purpose operating system, with a strong focus on networking, process sandboxing and privilege separation, data storage, etc., and in these aspects it clearly excels. NetBSD favors portability and supports many server and embedded platforms, which act as routers, switches, load-balancers, etc. OpenBSD emphasizes code correctness, security and complete documentation. Last but not least, DragonFly BSD focuses on multi-processing and leverages filesystem features to improve performance. One could say that, thanks to its greater resources, Linux has surpassed all of these operating systems. However, one should not underestimate quality BSD utilities and the almost legendary stability of Berkeley-derived OSes. One of the main problems I ever had with Linux was the inconsistent breakage of individual distributions. Upgrading packages would eventually render them useless or impossible to troubleshoot due to uninformative error messages. The lack or staleness of documentation only made matters worse. Having dealt with the above problems, I simply jumped ship and joined the BSD crowd. Granted, neither OpenBSD nor FreeBSD makes my PCs snappier. Quite the opposite – Linux still wins in that respect. However, I now have full access to the operating system’s source code and can fix issues first-hand should the need arise. Not to mention being able to actually read clearly written documentation and learn how to use the tools my operating system offers. I doubt Linux can beat that.