The Ubuntu Conundrum

Ubuntu is perhaps the most popular Linux-based operating system, and for that very reason it has as many proponents as detractors. I myself use Ubuntu (Xubuntu 16.04 LTS, to be exact) at work, both as a development platform and to host services in libvirt/KVM virtual machines (Ubuntu Server 16.04 LTS there). It performs well and so far hasn’t let us down, though we haven’t used it across more than two releases, so we can’t gauge its reliability properly yet. On more personal grounds, I can say it has worked splendidly on my early-2011 MacBook Pro 15″ with its faulty AMD graphics since the very beginning (out of the box, as one might say). Individual package upgrades don’t bring about all of the regressions people so fervently decry. Still, I can understand where the hate comes from, and I admit it is partially justified.

Product and popularity
For whatever reason, human psychology dictates that we equate quality with popularity. If something is extremely popular, it simply must be good, right? Wrong. Completely. A product is popular because someone with enough resources made it visible to as many consumers as possible. The product was made popular. Quality is a useful but clearly secondary measure. A good anecdote is the long-gone rivalry between VHS and Betamax. We all remember VHS, though most of us do not remember Betamax, which was technically superior. It lost the popularity race and will forever be remembered as the second best, or not remembered at all. Now, this is not to say that Ubuntu is in any way inferior…

Ubuntu, the (non)universal operating system
The main issue with Ubuntu is that it succeeded as a more open alternative to Windows and macOS, but did not solve the underlying problem – computer literacy. Of course, not every computer user has to be a geek and hack the kernel. However, when I see Ubuntu users address their PC-related issues with the same shamanism and hocus-pocus as on Windows, my soul twists in convulsions. We did not flee from closed-source operating systems only to change the names of our favorite tools and the look of our graphical user interfaces – though observing current trends, I might be terribly wrong. The other problem is that Ubuntu’s popularity has become self-perpetuating. It’s popular because it’s popular. Many tutorials online and in magazines assume that if one uses Linux, one surely runs Ubuntu on all of one’s computers. This is harmful to the entire Linux ecosystem, because neither Debian nor Ubuntu represents standard Linux. Both systems introduce a number of configuration changes to applications which are not defined in upstream documentation and are absent from other distributions (so-called Debianisms). Therefore, Ubuntu being a universal operating system is more of a publicity gimmick than a fact – especially considering that on servers, SLES (SUSE Linux Enterprise Server), CentOS and Red Hat Enterprise Linux clearly dominate.

The solution?
I would say it’s high time we began showing newcomers that there is an amazing world of Linux beyond Ubuntu. To that end, I have a couple of suggestions for specific needs and the distributions covering those needs. Related questions come up often in the Linux Facebook group and around the Internet, but they get answered superficially via click-bait articles listing the top 10 distributions of 2017/18. Not exactly useful. Anyhow, the list:

  • Software development:
    – Fedora (up-to-date packages and developer-centric tools like COPR)
    – Arch Linux (up-to-date with a wide range of packages via AUR and vanilla package configuration for simplicity)
    – openSUSE Tumbleweed (up-to-date with a rolling, snapshot-based release cycle, while sharing the high-quality Leap / SLES management tools like YaST2)
  • Servers:
    – openSUSE Leap (3-year long support life cycle, high-quality management tools like YaST2 and straightforward server + database + VM configuration)
    – CentOS (binary compatible with Red Hat Enterprise Linux)
    – FreeBSD (ZFS hard drive pool management + snapshots, reliable service/database separation via jails, rock solid base system)
  • Easy-to-use:
    – Manjaro Linux (based on Arch Linux, with lots of straightforward graphical configuration tools, multiple installable kernels, etc.)
    – Fedora (not only for developers!)
    – openSUSE Leap (for similar reasons as above + a streamlined, user-friendly installer)
  • For learning Linux:
    – Gentoo (painful at first, but extremely flexible with discrete software feature selection at compile-time via USE flags)
    – Arch Linux (Keep It Simple Stupid; no hand-holding, but with high-quality documentation to make the learning curve less steep)
    – CRUX (similar to Gentoo, but without the useful scripts; basically, vanilla Linux with a very simple package manager)
  • For learning BSDs:
    – FreeBSD (as mentioned above)
    – OpenBSD (strong emphasis on code-correctness, system engineering and network management)
    – DragonflyBSD (pioneering data storage and multi-processor systems)

Linux and the BSDs

Throughout my many months of using various open-source and proprietary operating systems, I have made certain observations that might be useful to some. I started with Linux, though at some point I migrated to the BSDs for personal and slightly more pragmatic reasons. I quickly became a lot more familiar with FreeBSD and OpenBSD than I ever was with openSUSE or Ubuntu. It may seem odd, as Linux is far easier to get into, but seasoned UNIX admins will surely understand. The BSDs have a technical appeal which Linux is steadily losing in favor of other features. To the point, though:

1. Save for Windows, most operating systems are extremely similar. macOS, as it is now called, relies on a huge number of BSD utilities, because at one point in time they were the most accessible (well-documented and permissively licensed). In turn, the open-source BSD-family operating systems, such as OpenBSD and FreeBSD, adopted Clang with its LLVM back-end (Apple’s compiler toolchain) as their main system compiler. A number of former, now defunct, proprietary operating systems were based on some revision of UNIX – IRIX, HP-UX, Solaris, etc. There is also a significant overlap in other tools, such as sysctl and ifconfig, which were forked, modified and adjusted to fit individual systems, but bear a functional resemblance across the various flavors of UNIX. The remainder (text editors, desktop environments, etc.) is typically BSD/MIT/GPL-licensed and available as packages or ports. Therefore, the high-level transition between the BSDs and Linux isn’t as dramatic as one might fear.

2. That being said, the BSDs and Linux follow different philosophies, of which the BSD philosophy seems a lot more practical in the long run to me. What most people forget (or never learn) is that Linux is just a kernel, and the development teams creating distributions (the actual Linux-based operating systems!) can do almost anything they want with it. This leads to a myriad of possible feature sets at the kernel level alone. It is also up to the distributions to assemble the kernel and userland utilities into a full-fledged operating system. Unfortunately, distribution teams are often tempted to add an artistic touch of their own, which causes Linux distributions to differ in key aspects. While at a higher level this is hardly noticeable, it may bite one back when things go sour or manual configuration is required. This Lego-blocks or bazaar concept makes it difficult for upstream software developers to identify bugs, and for companies to properly support Linux with hardware and software. Eventually, only certain distributions are recognized as significant, such as Ubuntu, CentOS, openSUSE, Fedora or Debian. The BSDs take a more organized approach to system design, which I believe is highly advantageous. An operating system consists of the kernel plus the basic utilities that make it actually useful, such as a network manager, compiler, process manager, etc. Depending on the BSD in question, the scope of the base system is defined differently. For instance, OpenBSD ships with quite a few servers, including a display server (Xenocara). FreeBSD, in turn, focuses on providing server capabilities.

3. Recently (or not so recently), the focus of Linux has shifted from being merely a server platform to a desktop replacement. That’s the ballpark of MS Windows and macOS, both of which are well-established desktop platforms. The crux of the matter is that utilities had to be adjusted to fulfill more GUI-oriented roles, making command-line work slightly trickier. The other problem is that software turnover in Linux-land is extremely rapid, and programs either go stale way too quickly or break too often. That’s clearly a no-go for server scenarios. This is where the BSDs come in. FreeBSD was designed as a multi-purpose operating system, with a strong focus on networking, process sandboxing and privilege separation, data storage, etc. In these areas it clearly excels. NetBSD favors portability and supports many server and embedded platforms, which act as routers, switches, load balancers, etc. OpenBSD emphasizes code correctness, security and complete documentation. Last but not least, DragonflyBSD focuses on multi-processing and leverages filesystem features to improve performance. One could say that, thanks to its greater resources, Linux has surpassed all of these operating systems. However, one should not underestimate the quality of the BSD utilities and the almost legendary stability of the Berkeley-derived OSes. One of the main problems I ever had with Linux was the inconsistent breakage of individual distributions. Upgrading packages would eventually render them useless or impossible to troubleshoot due to uninformative error messages. The lack or staleness of documentation only made matters worse. Having dealt with the above problems, I simply jumped ship and joined the BSD crowd. Granted, neither OpenBSD nor FreeBSD make my PCs snappier – quite the opposite; Linux still wins in that respect. However, I now have full access to the operating system’s source code and can fix issues first-hand should the need arise. Not to mention being able to read clearly written documentation and learn how to use the tools my operating system offers. I doubt Linux can beat that.

On Using Computers

I’ve been planning to write this piece for a while now, though work-related duties somewhat hampered my efforts. It’s a bit harsh at times, but I feel it should be a must-read for beginner Linux users nevertheless.

I am a part of the open-source community, and as a member I try to contribute to projects with code, documentation and advice. I fully understand that for the open-source way of producing content (not merely software!) to succeed, everyone has to give something. However, in recent months I have noticed a sharp influx of new users (newbies) who want to be part of the community but are extremely confused about its principles. Inevitably, these newbies “contaminate” the open-source community with former habits and expectations, and make it harder for both existing members and themselves to cope with this temporary shift in the user-expertise equilibrium. I blame two main phenomena for the confusion of new users:

1. The open-source way is advertised as inherently “better”, which is misleading.

2. The open-source way requires members to think about what they do and possibly to contribute however they can.

Since the imbalance has become unbearable for me and other existing members of the open-source community, I decided to write this introductory article so that newbies can adjust quickly and the equilibrium can be restored.

I. User-friendliness is a lie
Following up on the thoughts laid out at over-yonder.net, I want to make this statement extra clear. There is no such thing as user-friendliness. It.does.not.exist. The Internet is crawling with click-bait articles entitled “The best user-friendly Linux distribution!” or “The most user-friendly desktop environment!”. These articles were crafted to increase the view count of the host website, not to provide useful information on the topic. Alternatively, they were written by people who are as confused as the newbies. “User-friendly”, just like “intuitive”, is a catchphrase – an advertising gimmick used to get you to buy or download a product. There is no extra depth to it. What people wrongly label “user-friendly” is in fact “hand-holding” – the software/hardware is expected to do something for the user. Not to enable the user to perform an action, but to actually do the action for him/her. A stewardess on a cruise ship or an aircraft is helpful because she answers passengers’ questions; however, she does not hold anyone’s hand, as that would mean leading every single passenger to their seat. If anyone ever tells you that something is user-friendly, ignore them and move on. You know better :).

II. Qualities, quantity and gradation
Generalized comparative statements are thrown about virtually everywhere. This annoys me, and it should annoy you too after reading this paragraph. The truth is that most of those statements are fundamentally wrong, because they assume that objects with different qualities can be compared using abstract terms. They CANNOT. A useful reference point is comparing apples to oranges. Can it be said that oranges are better than apples? No. What about apples being better than oranges? Neither! “Better” is an abstract term which by itself means nothing. Therefore, saying “openSUSE is better than Ubuntu” also means absolutely nothing! What can be done, however, is comparing specific features of A and B. You cannot say “apples are better than oranges”, but you can claim that an average apple is heavier than an average orange, given specific examples of both. Color-wise, you can say that apples tend to be green-red, while oranges are yellow-orange-reddish. Mind you, you cannot directly compare colors unless you express the color of A and B on a uniform scale, like “the amount of red”. No fallacy has been committed that way. Therefore, neither software nor hardware can be directly compared, though you can say, for instance, that “openSUSE has a number of tools, like YaST, which make it potentially more convenient for system administrators than Ubuntu”. Remember that!

III. The “use case” concept
Knowing that user-friendliness does not exist and that many things cannot be directly compared, the next step is understanding the “how” inherent to all problems. You have an issue or an inquiry. What is it that you want to achieve? What are the exact requirements for reaching your goal? What is the situation in which you experienced your problem? Being specific, and being able to break large problems down into smaller tasks, is paramount to understanding the problem and finding possible solutions. This is true not only for computers, but for everything in life. Once you know your “use case”, you will know which hardware and software (including the operating system) to choose. Different operating systems cover different use cases or usage scenarios, so understanding your use case well will let you find the right operating system – or any other piece of software – more quickly.

IV. Options, decisions and the “good enough”
All of the above being said, humans have this need to always aim for optimal solutions. Subconsciously, we want only the “best” for ourselves. But what if it’s impossible to identify the best option? What if all of them satisfy our requirements equally well? This is where the concept of “good enough” comes into play. Sometimes the “best” solution is the first solution we decide upon and stick with – no second thoughts allowed! – until we identify a legitimate reason why solution #1 no longer satisfies our needs over a prolonged period of time. Wondering which operating system to choose? Linux Mint? Ubuntu? Debian? Fedora? Perhaps not a Linux-based OS at all, but a pure UNIX-like BSD? There are so many! If you’re a beginner, it doesn’t matter which you choose. Pick one, stick with it, and change only if you’re experimenting or your first choice was completely wrong.

V. Thinking and the individual responsibility
This will be a harsh one. Proprietary operating systems create the illusion of user-friendliness (it’s a lie, we know that now!) and the illusion that the user is not required to take responsibility for what he/she does with his/her software and hardware. This is one of the major fallacies of the computer world. The moment you buy a computer, you are completely responsible for it. Consider it your “child”. You need to make sure it’s always clean, powered up, etc. No one will ever do it for you. Others can recommend solutions, give advice, even provide support, but the final decision is yours and yours alone. Whatever you do with your computer is your success or your failure. The primary reason malware spreads like wildfire is that people are convinced they don’t need to actively care for the safety of their computers. Dead. Wrong.

The open-source way is not better than the proprietary/closed-source way. It’s different, nothing else. I chose it, because it aligns with my personal preferences well and I believe that it will prevail. It is for you to decide whether you can accept that. If the answer is “Yes”, I congratulate you. Go forth, learn and become a full-fledged member of the open-source community :).

The Open in BSD

I wrote about OpenBSD a bit in the past. Since then I’ve been distro-hopping plenty, like the nervous flea that I am. Eventually, I put Debian 9.1 Stable on some of my machines, and that’s what I run at work, out of convenience and in case someone needs Linux-related help. I cannot say I don’t like it. To me, Debian feels like the FreeBSD of the GNU/Linux side of FOSS. It’s sensible. It’s stable. However, I quickly tire of the systemd hiccups, the focus on flashy graphical frameworks and other annoyances. Then I turn to the BSD world, with FreeBSD on my home workstation and OpenBSD on this here VAIO laptop. Admittedly, I was somewhat curious about hardware compatibility in release 6.1. This laptop is more powerful than the Intel M-based Dell Latitude E5500 I used for testing OpenBSD previously. Also, the VAIO ran Debian 9.1 well enough that I could do actual work without waiting long minutes for a JavaScript-infested Web page to load. How would it cope with OpenBSD, however?

Installing OpenBSD is fairly straightforward – in fact, if you have ever installed Gentoo or Arch Linux, OpenBSD is easier! Out of the box we even get an X11 server called Xenocara, together with the xenodm display/login manager (not mandatory). Somewhat unfortunately, the default window manager cwm looks extremely dated, and the black-white-grey dotted background would hurt my eyes. Not to worry, though – Openbox was just a pkg_add away. In fact, so were most of the tools I use every day, hence I didn’t really miss anything. It’s FOSS, and I guess I shouldn’t be surprised that I can reproduce a fairly standard setup on another OS. The critical point for me was whether I could install all of the Python machine-learning modules I use for writing regression tests. pandas, matplotlib and numpy are usually available from software repositories – granted, not on every single open-source operating system. Luckily, the Python package installer pip provides fantastic means of interoperability, which I encourage everyone to use, even on Windows *cough* *cough*. Soon after pip completed its work, I was set up and good to go!
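For the record, the setup boiled down to a handful of commands. The package name py3-pip and the use of doas are assumptions on my part – adjust to your release and privilege tool of choice:

```shell
# Install pip from the official packages, then pull the scientific stack
# into ~/.local so the base system stays untouched.
# (The package name py3-pip is an assumption; check "pkg_info -Q pip" first.)
doas pkg_add py3-pip
pip3 install --user pandas matplotlib numpy
python3 -c 'import pandas, numpy'   # quick sanity check
```

The --user flag keeps everything under the home directory, which plays nicely with OpenBSD's tidy base-system philosophy.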

[Screenshot] My desktop look – courtesy of myself (and the wallpaper’s author)

Then there is the usual “How do I make my system more polished?”. I got myself a nice OpenBSD wallpaper from the Interwebs (see the image above) and proceeded to read the official documentation to understand the system better. The login environment is handled by the Korn shell (the extra-crispy OpenBSD variant of it, mind you). From there, we add packages with pkg_add and manage them with a slew of other pkg_* tools. Anyone familiar with older releases of FreeBSD will know the pkg_* commands. The system (kernel + core utilities) and the Ports Collection source trees are kept in CVS and can be followed anonymously via AnonCVS mirrors. It’s quite noticeable that the OpenBSD project strives to tweak and improve existing tools in order to make them more secure. I still need to figure out how to adjust the sound volume efficiently via mixerctl. Perhaps I’ll write a thin GUI client in Java or Python (or port my favourite volumeicon) in case none are available. Or just map a set of keyboard keys to mixerctl calls.
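The key-mapping idea can be sketched as two small shell functions. The control name outputs.master and the 0-255 range are assumptions taken from a typical setup – verify them with "mixerctl -a" on your own machine:

```shell
# Sketch of a volume-step helper for OpenBSD's mixerctl.
# ASSUMPTIONS: the output control is named outputs.master and its
# levels range 0-255; check "mixerctl -a" before using.

# clamp VALUE MIN MAX -> prints VALUE forced into [MIN, MAX]
clamp() {
    v=$1
    [ "$v" -lt "$2" ] && v=$2
    [ "$v" -gt "$3" ] && v=$3
    echo "$v"
}

# vol_step up|down -- read the current left-channel level, step it by 16,
# clamp the result, and write it back to both channels
vol_step() {
    cur=$(mixerctl -n outputs.master | cut -d, -f1)
    case "$1" in
        up)   new=$(clamp $((cur + 16)) 0 255) ;;
        down) new=$(clamp $((cur - 16)) 0 255) ;;
        *)    echo "usage: vol_step up|down" >&2; return 1 ;;
    esac
    mixerctl -q outputs.master="$new,$new"
}
```

Saved as, say, ~/bin/volume.sh, the helper can then be bound to spare keys in the window manager's configuration (Openbox's rc.xml keybind section, for instance).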

When comparing open-source operating systems, especially BSDs vs GNU/Linux distributions, people often consider things like system performance, resource usage, software availability, etc.

  1. Is OpenBSD faster than Debian? Not really. However, on modern PCs any open-source operating system feels faster than Windows or macOS. This should come as no surprise.
  2. Does it use fewer system resources? Perhaps a tiny bit, though many open-source programs are portable and any optimizations are rather accidental. To give you an idea: Openbox + WordPress opened in Firefox + mpv playing a jazzy tune amount to ~700 MB of RAM in total. Not too shabby, right?
  3. Are programs X, Y and Z available? This largely depends on what tools one requires for work. The typical assortment (LibreOffice, GIMP, Inkscape, etc.) is there for the taking. Also, GUI tools can be replaced with CLI tools with minimal effort (for instance, Irssi/WeeChat instead of HexChat). The only real limitation I have noticed so far concerns programs that are available in binary form only, and certain device drivers with binary-only blobs (see: NVIDIA). OpenBSD has a strong policy against closed-source software, and unless the company in question has a good reputation for consistently providing quality software, I think full source-code disclosure is the right way to go.
  4. Is my hardware well supported? For device drivers, see above. Other than that, most (if not all) Intel-based hardware works as well as it does on GNU/Linux distributions. For improved 3D performance, AMD is a fine choice, too. Webcam support is perhaps a bit lacking, but many models – even the MacBook’s iSight – are supported.

The bottom line is this – OpenBSD is a great Unix-like operating system. It’s super secure and has some of the best documentation out there. If that’s your cup of tea, join the crew. If not, at least give it a try. I can assure you it’s worth it. Finally, a screenfetch for the geeks among us:

screenfetch_openbsd

In Software We Trust

Inspired by the works of Matthew D. Fuller from over-yonder.net, I decided to write a more philosophical piece of my own. While distro-hopping recently, it occurred to me that whatever we do with our lives, we never do it alone – our well-being depends on other people, and that requires us to trust them. Back in prehistoric times, a Homo sapiens individual could probably get away with fishing, foraging and hunting for food, and finding shelter in caves. The modern world is entirely different, though. We need dentists to check our teeth, grocery stores to buy food, real estate agents to find housing, etc. Dealing with hardware and software is similar: either we build a machine ourselves, or we trust that some company X can do a good enough job for us. The same goes for software!

Alright, so we have a computer (or two, or ten, or…) and we want to make it useful by putting an operating system on its drive(s). macOS and MS Windows are out of the question for obvious reasons. That leaves us with either Linux or a BSD-based system. Assuming we pick Linux, we can install it from source or in binary form. This is where trust comes into play. We don’t need to trust the major GNU/Linux distributions in terms of software packaging and features. We can roll with Gentoo, Linux From Scratch, CRUX or any other source-based distribution and decide on our own what does and doesn’t go into our software. It’s kind of like growing vegetables in a garden. Granted, we ourselves are then responsible for any immediate issues like compile errors, file conflicts or missing features. It’s a learning process, and one definitely profits from it. However, it’s also time-consuming and requires an extremely good understanding of both system design and the feature sets of individual programs. No easy task, that. Therefore, it’s far more convenient to use binary distributions like openSUSE, Ubuntu, Fedora, Debian, etc. This requires us to trust that the maintainers and developers are doing a good job of keeping software up-to-date, paying attention to security fixes and not letting bugs through. I myself don’t feel competent enough to be a source-level administrator of my own computer, able to fix every minor or major issue in C++ code. I prefer to trust people who I’m sure would do it better than me, at least for now.
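To make the source-level control concrete: on Gentoo, for instance, feature selection happens largely through USE flags in /etc/portage/make.conf. A sketch, with flag choices that are purely illustrative:

```shell
# /etc/portage/make.conf (fragment) -- illustrative values only.
# Build packages with ALSA and X support, without systemd or bluetooth:
USE="alsa X -systemd -bluetooth"
# Parallel compile jobs; tune to the number of CPU cores available:
MAKEOPTS="-j4"
```

Every package on the system is then compiled against exactly those choices – the garden-grown-vegetables approach in two lines of configuration.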

The (Necessary?) GNU/Linux Fragmentation

I would like to share with you a story of my recent struggles with Debian. They were partially my fault, but also partially due to the way Debian handles network management, which differs quite a bit from how other GNU/Linux distributions do it.

The story begins with me being happy with a regular desktop install powered by XFCE4, but then wanting to switch to the less distracting Openbox. I installed Openbox plus extras like the tint2 panel, nitrogen (a background/wallpaper setter) and other lightweight alternatives to XFCE4 components. While sweeping up XFCE4 leftovers, “apt autoremove” removed far too many packages, including network-manager. I was instantly left with no network connection and, as I learned later, no easy means of restoring it. By default, network management on Debian is handled by the ifupdown scripts, which “up” the interfaces listed in /etc/network/interfaces and direct them to dhclient to get a DHCP lease or assign a static IP address. Out of the box, however, my ifupdown setup had no way of directing wireless interfaces to wpa_supplicant for WPA-encrypted networks. Nowadays this is handled by network-manager, which “Just Works”. network-manager uses wpa_supplicant to handle WPA encryption (in addition to many other things), while performing the rest of the network management itself. This is quite different from running wpa_supplicant directly, which simply failed in my case due to a known regression.
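For reference, with the wpasupplicant package installed, ifupdown can hand a wireless interface over to wpa_supplicant through wpa-* stanzas in /etc/network/interfaces – provided, of course, that the package survives the autoremove. The interface name and credentials below are placeholders:

```shell
# /etc/network/interfaces (fragment) -- requires the wpasupplicant package;
# wlan0, the SSID and the passphrase are placeholders.
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "HomeNetwork"
    wpa-psk  "not-my-real-passphrase"
```

With that in place, "ifup wlan0" starts wpa_supplicant and dhclient in one go.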

It’s quite sad to see that Debian, despite moving from init scripts to systemd for boot and service management, still insists on configuring network interfaces via shell scripts (the mentioned ifupdown tools), while a mainstream solution in the form of network-manager is available. Why is it recommended as a “Just Works” alternative, yet not offered by default? On Red Hat-based distributions (say, Fedora, CentOS, etc.) the matter is really simple – you get network-manager and you’re good to go out of the box. That stands to reason, though, as NetworkManager is a Red Hat project. Still, the “Just Works” approach baffles (and even disturbs) me greatly. “Just Works” sounds like a catchphrase typical of commercial operating systems. Since they mainly target desktop and laptop computers, it’s enough if an Ethernet interface and/or a wireless interface “just works”. But what about servers with multiple NICs, routers, gateways, NATs and VMs, each with an IP address or a set thereof? What then? Oh, right – we can write systemd units for each interface and control them via systemctl. Or use the ip utilities for a similar purpose. Or the deprecated ifconfig, which we shouldn’t use but still can, because it’s in the repositories of many distributions. Alternatively, we can perform DHCP requests via one of several clients – dhclient, dhcpcd, udhcpc, etc. We end up with a hodgepodge of programs that are best left to their own devices due to incomplete documentation and/or unclear configuration, with each GNU/Linux distribution shipping its own set of base utilities.
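For completeness, the manual iproute2 route for a single static interface looks roughly like this – the interface name and the RFC 5737 documentation addresses are placeholders, and the commands need root:

```shell
# Manual bring-up of one interface with iproute2 (run as root);
# eth0 and the 192.0.2.0/24 addresses are placeholders.
ip link set eth0 up
ip addr add 192.0.2.10/24 dev eth0
ip route add default via 192.0.2.1
ip -br addr show eth0    # brief status overview
```

Perfectly workable for one box – and exactly the kind of thing that multiplies into a script zoo once several NICs and VMs are involved.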

Personally, I feel that’s where the BSDs succeed. You get a clearly separated base system with well-documented, easily configurable tools that are maintained as a whole. Network interface configuration borders on the trivial. What’s more, the installer handles wireless connections almost seamlessly. Why is it so difficult on GNU/Linux? At this point, I believe the GNU/Linux community would profit greatly from agreeing on a common “base system”. Red Hat’s systemd is a first step towards the unification of the ecosystem. While I am strongly opposed to systemd – because it gives merely the illusion of improved efficacy by simplifying configuration and obfuscating details – GNU/Linux should be a bit stronger on common standards, at least for system-level utilities.
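On OpenBSD, for comparison, an entire wireless configuration fits in one per-interface file read at boot by netstart. The interface name, SSID and key below are placeholders:

```shell
# /etc/hostname.iwn0 -- OpenBSD per-interface configuration;
# iwn0, the nwid and the wpakey are placeholders.
nwid HomeNetwork wpakey "not-my-real-passphrase"
dhcp
```

Applying it is a matter of running "sh /etc/netstart iwn0" as root – no extra daemons, no stanza dialects.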

GNU/Linux and Its Users

I decided to devote this entry to a reflection I had recently while participating in discussions in the Linux Facebook group. I realized, as many have before me, that there is a strong correlation between the maturity and complexity of an operating system and the savviness of its users. The BSDs and the more demanding GNU/Linux distributions like CRUX, Arch and Gentoo attract experienced computer users, while Ubuntu and its derivatives mostly entice beginners. As few people express the need to learn the Unix way, beginner-oriented operating systems (this includes Windows and macOS, of course) are far more popular. Consequently, they garner stronger commercial support from hardware and software companies, as they constitute a market for new products.

The truth is, we have all been beginners once. What’s more, unless we’re old enough to remember the ancestral iterations of the original UNIX operating system (I’m not!), we were macOS or Windows users long before switching to a modern Unix-like operating system. Alas, as such we have been tainted with a closed-source mindset, encouraged to take no responsibility for our computers and to solve system-level problems with mundane trial-and-error hackery. Not only is such a mindset counter-productive, it also hampers technological progress. Computers are becoming increasingly crucial in our everyday lives, and a certain degree of computer literacy and awareness is simply mandatory. Open-source technologies encourage a switch to a more modern mindset, entailing information sharing, discussion and learning various computer skills in general. The sooner we accustom ourselves to this mindset, the faster we can move on.

The current problem in the GNU/Linux community (much less so in the non-Linux Unix communities) is that the entry barrier is continuously being lowered so as to yield a speedier influx of users. Unfortunately, many of these users are complete beginners, not only with Unices but with computers in general. With them, the closed-source mentality is carried over, and we, the more experienced users, have to deal with it. Some experienced users provide help, while others are annoyed by the constant nagging. The responsibility lies with us to educate newbies and encourage them to embrace the open-source mindset (explained above). However, many of them don’t want to be educated. They want the instant gratification they got from Windows or macOS, because someone convinced them that GNU/Linux could be a drop-in replacement for their former commercial OS. They want tutorial-grade, easy-to-follow answers to unclear, badly formulated questions. Better yet, they want them now, served on a silver platter. We all love helping newbies, but we shouldn’t encourage them to remain lazy. Otherwise, we’ll eventually share the fate of Windows and macOS as just another mainstream platform. I cannot speak for everyone, but I would personally prefer GNU/Linux to continue its evolution as a tech-aware platform of the future.