Sid, Arch or Gentoo – Who Would You Rather Roll With?

Windows, OS X, BSD and Linux are all contestants in an intriguing race – a race of technological development. To be more precise, they occupy slightly different niches in the ‘computing ecosystem’, yet they compete with one another in terms of overall usability (BSD to a much lesser extent). Sadly, because Windows and OS X were the first to become the operating systems of choice for regular users, the majority of development effort was dedicated to those two platforms. Linux, however, is not far behind. For instance, Linux kernel 3.16 introduced features such as response to physical shock, relying on built-in motion sensors (a capability Windows has had since XP). The current stable edition of the Linux kernel is already 3.18, while 3.19 is undergoing thorough testing (release candidate 5, 3.19-rc5, can be downloaded from kernel.org).

Because the Linux landscape is ever-changing, I decided to settle on an operating system that follows the so-called rolling-release development model and stays reasonably up to date. Of the most popular distributions, I considered Gentoo, Arch Linux and Debian Sid.

I tried Gentoo a few months ago and was moderately annoyed with its time-consuming installation. I quickly deemed it too difficult for me, though I might try it again once I gain sufficient experience.

Therefore, I narrowed down my choices to Arch and Sid.

Arch Linux is a typical rolling-release distribution, following the principles of simplicity and clarity. The installation procedure is almost purely manual (less troublesome than Gentoo's, though) and requires editing multiple configuration files. However, it also provides a very good learning opportunity. The key to its success is a combination of up-to-date vanilla packages (built with minimal patching of the upstream sources) and easy access to compilation tools.

Debian is definitely not a rolling release distribution, though the Unstable/Sid branch is often considered as such. Debian developers refer to it as a testing ground for new software packages. It will never be released per se, thus making it debatable whether Sid is a true rolling-release distribution or not.

I greatly appreciate Debian for its power, versatility and stability, but because even Debian developers themselves deem Sid highly unstable (frequent updates may cause dependency errors, etc.), and because, unlike Testing and Stable, it does not receive security updates in a timely fashion, I decided to leave it for now. There is also the question of how current the available software packages are…

Arch Linux closely follows the work done by the development teams responsible for specific pieces of software (applications, the kernel, drivers, etc.). When a given team releases a stable version of its software, it is added to the Arch Linux repositories shortly afterwards. Because of this, Arch has been humorously described as ‘Linux with a nice package manager’. I agree that if kernel 3.18 is considered stable by the kernel developers, it most likely is. In addition, while people assume Arch Linux is bleeding edge, and thus unstable by design, relatively few of them report actual problems with the distribution.

Debian standards are set much higher. Every upstream package that flows into Experimental and Unstable has to be thoroughly processed by Debian dev teams. While this guarantees additional testing and prior configuration of software packages for the average user, packages are not as up-to-date as in Arch.

Thus, for the time being, I have decided on Arch Linux.

What about you?

We Row our Boats in the Same Linux-verse, After All…

(Image: Sailing Boats at Argenteuil by Gustave Caillebotte)

Like boats at the sea
Like boats on the ocean
We the Penguin People
Row our lives to freedom…

Some time ago I complained about how excessive forking of Linux distributions leads to unwanted dilution of human effort (The Bitter Taste of Mandrake Juice). At the time, I was quite frustrated with the situation of Mandriva Linux. I wrote about how OpenMandriva Lx and Mageia could achieve much more if they became a single distribution, or at least if their communities worked closely together (as those of antiX and MEPIS did to create antiX MX). To my frustration, it did not seem like that would happen in the near future.

I have to admit that in my previous musings I neglected the contrary (positive) perspective. It became apparent to me only when I took more interest in various Debian-based distributions and started testing them one by one. The development team behind each of them puts substantial effort into creating a product that fulfills the tastes of a particular userbase. Thanks to them, I and many other people have what I prize most – choice!

As Debian is quite a modular and very repository-reliant distribution, one can ‘mix and match’ with significant success. Debian Stable can be turned into Testing or Unstable, and various derivatives may be morphed into Debian proper (for instance, Semplice, based on Debian Unstable/Sid, offers quick access to the Debian Sid repositories through its configuration files). Not without issues, of course, but it is feasible. In the long run the starting point hardly matters for the end result, though a good one makes the initial setup that much more convenient.
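As a sketch of that ‘mix and match’ (the mirror URL below is the generic Debian one – substitute your local mirror), turning Stable into Sid mostly comes down to editing /etc/apt/sources.list and performing a full upgrade:

```
# /etc/apt/sources.list – repointing a Debian system at Unstable/Sid
# (a sketch only; back up first – going back down to Stable is not supported)
deb http://ftp.debian.org/debian/ sid main contrib non-free
deb-src http://ftp.debian.org/debian/ sid main contrib non-free
```

Afterwards, running `apt-get update` followed by `apt-get dist-upgrade` as root pulls the whole system up to Sid.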

This concept can be extrapolated further. Whatever Linux distribution we use, all of us rely on the same components – the Linux kernel, desktop environments, applications, etc. Distributions are mostly defined by their respective package management utilities, while the core components remain unchanged. That is why I can refer to the fantastic Arch Linux wiki for general guidance even when I am using Debian or openSUSE.

In a peculiar way the unity of Linux is expressed in following common principles and sharing ideas, not necessarily in using the very same distribution.

Developers, Remember about the Penguin People!

The event that inspired me to forge this entry was my recent struggle with a newly purchased laptop. I decided to buy a new computer because the old one had a dedicated nVidia graphics card which, together with its memory, could not be utilized to full capacity without the risk of instant overheating. On this occasion I would like to congratulate Samsung on mounting such an inadequate cooling system. Well done!

Back to the matter, though. Having learnt from past experience, I took great caution in choosing my future computer. Dedicated graphics cards were to be avoided, naturally, and the laptop had to be as generic as possible to facilitate proper installation of Linux operating systems. This turned out to be a very tedious task, as most electronics retailers fail to provide all the necessary information. There is a clear-cut difference between Intel HD Graphics 4400 and Intel HD Graphics 3000; you cannot throw them into one basket labeled ‘Integrated Intel Graphics’! The sins committed by laptop manufacturers are even more profound. For the love of God, I could not readily identify all of the hardware components from a given model’s manual. Sometimes even the amount of memory was a mystery! Again, this sort of information should be easily accessible. I, and many other people, truly do care whether my wireless adapter is a Realtek or a cheap no-name knock-off. A great shame indeed!

And unfortunately, the wireless adapter was the matter of debate in my case. The Asus VivoBook I purchased shipped with a MediaTek MT7630E wireless card. This specific card is a known troublemaker across all platforms. It truly baffles me that Asus decided to replace a well-tested Realtek wireless adapter with one from a company with little track record in this area.

This led me to a more thorough search for USB wireless adapters whose specific chipset had been identified. As of now I am a proud owner of a TP-LINK TL-WN725N USB ‘dongle’ (Realtek RTL8188EU chipset), which functions flawlessly even on minimalist Linux distributions such as Arch Linux.
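For the record, verifying the chipset of a USB adapter on a running Linux system is straightforward (the sample output line is illustrative only – IDs differ between hardware revisions):

```
lsusb                      # lists USB devices with vendor:product IDs
# e.g. Bus 001 Device 004: ID 0bda:8179 Realtek Semiconductor Corp. RTL8188EUS
dmesg | grep -i firmware   # reveals which firmware blobs the kernel requested
```

A quick search for the vendor:product ID then tells you exactly which driver and firmware package you will need.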

This in turn leads me to the very point of my musings – Linux hardware compatibility.

People usually praise Windows for its hardware compatibility (‘it just works!’) from the largely misunderstood perspective of a pre-installed operating system. The fact that Windows ships pre-installed on almost all computers means that the vendor made triple sure that all the hardware components had been thoroughly tested beforehand. In other words, all of the drivers (together with bonus bloatware) and firmware were installed to make the user’s life easier. Things aren’t as comfortable when you have to perform a fresh install: most of the drivers are missing, and sometimes even establishing an Internet connection is impossible…

This would be unthinkable in Linux Land. Virtually every Linux distribution ensures that at least a wired connection can be initiated immediately (through a DHCP client daemon) in order to download anything that is missing. In addition, the Linux kernel ships with modules supporting the majority of hardware – DVD-ROM drives, wireless and Ethernet adapters, sound cards, etc. True, overall hardware compatibility is possibly better on Windows, because companies prioritize Windows in driver development. However, Linux clearly excels in ‘out of the box’ hardware compatibility.
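By way of illustration (the interface and module names below are typical examples, not universal), bringing up a wired connection on a freshly installed minimal system usually amounts to:

```
ip link                    # identify the interface, e.g. eth0 or enp3s0
lsmod | grep e1000e        # confirm the kernel driver loaded (name varies by NIC)
dhcpcd eth0                # request a lease – dhcpcd on Arch, dhclient on Debian
```

No driver CDs, no vendor downloads – the kernel module is already there.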

That is why I strongly believe that Linux is not getting the attention it deserves. For instance, the TP-LINK TL-WN725N USB ‘dongle’ advertised only Win7, Win8 and Win8.1 compatibility. Other USB ‘dongles’ did mention Linux, but listed long-outdated kernels of the 2.6 series (while 3.19 is currently in development). This is a joke. Hardware vendors should start paying more attention to Linux, as it is a growing force on the market. Smartphones run Android, which is based on Linux. Corporate servers utilize Linux for improved security. Even Valve decided that it is worth investing time in Linux (see: SteamOS). I hope hardware developers will soon follow.