Freedom to Freedom

Since my very early childhood I have always loved tinkering with electronics. Initially, I disassembled toy trains, jeeps and motorbikes. When I received my first PC, this escalated, and items such as joysticks and gaming pads fell victim to my tinkering urges. Naturally, as time went by, I aimed for bigger (and better) things, like desktop computers. I truly enjoyed altering and improving the objects around me. This extended to software as well, and hence my very first grudge against Windows.

Software for Windows is usually protected by a slew of copyright laws, and publishing improvements is therefore restricted. As an example, if I were to make a certain Windows program more useful, I would most likely be prohibited from publishing my work without the explicit consent of its authors. In many cases this consent (verbal or written) would not be given to me, because that would conflict with the interests of the company that released the software. This proprietary model effectively limits development to the work done by companies. Free and open-source software, on the other hand, is built by a loose collective of regular users, hobbyist programmers and professional developers, which is capable of delivering much more in the same time frame. Studying freedom in software development eventually led me to the work of Richard Matthew Stallman…

Richard Matthew Stallman (known as ‘rms’ on Internet forums) is a software freedom activist and programmer, founder of the GNU Project and the Free Software Foundation. The list of his achievements is so long that one could easily write a book (or two…) to cover his life. I personally hold rms in very high regard and consider his Four Freedoms to be of equal importance to Asimov’s Three Laws of Robotics. However, when discussing freedom in general and freedom in software development, one should consider the possible extremes – lack of freedom (tyranny) and too much freedom (anarchy).

Just to briefly recapitulate Stallman’s Four Freedoms:

0. The freedom to run the program as you wish, for any purpose.
1. The freedom to study how the program works, and change it so it does your computing as you wish.
2. The freedom to redistribute copies so you can help your neighbor.
3. The freedom to distribute copies of your modified versions to others.

I completely agree with points 0 and 1, with the small addition that I would gladly donate to support the developers behind a given project if I find it useful. To me, there is something ethically twisted about the proprietary model. The customer (no longer a user, because money is already involved) pays for a promise of software quality rather than for the software itself, because he/she can verify that quality only after concluding the purchase. Because money comes first, this model is awfully misused. Concerning video games, a common practice years ago was to release demo versions to give the customer a chance to test the product before buying. Nowadays, this has been replaced by ethically dubious hype.

As for points 2 and 3: sharing does not always happen. People often take but do not give back to the community, either out of laziness or a simple lack of skills. More rarely, this is pushed towards a capitalistic extreme: people take the software and resell it, generating revenue. Some licenses indirectly allow this (for instance, the BSD license), though I think it should be frowned upon and ostracized by the community. Money puts a fixed, arbitrary price on an object, which fails to capture the subjective value each of us perceives in it individually.

To conclude, I believe Stallman’s Four Freedoms are of grave importance and should be applied to software whenever possible. However, great care needs to be taken to avoid misuse and corruption. We are people, after all. Both good and bad are part of our nature…

The Lurking Serpent…

In my previous entry and some earlier ones I mentioned systemd, and how it is usually not a problem as long as it gets the job done well. While this is true in general, I have to revisit that claim.

To those who are unfamiliar, systemd (system daemon) is a Linux init system, but also a device and service manager. It governs the boot process, drive mounting, networking, etc. Originally it was developed by Red Hat programmers and employed in their enterprise Linux platform RHEL, but it was quickly adopted by other distributions, such as Ubuntu, Arch Linux and Debian.
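
To give a feel for how it works, below is a minimal sketch of a systemd service unit – note that ‘myapp’ and its path are hypothetical placeholders, not a real program:

    # /etc/systemd/system/myapp.service – a minimal service unit (sketch)
    [Unit]
    Description=Example background service
    After=network.target

    [Service]
    # Binary path and flag are made up for illustration
    ExecStart=/usr/local/bin/myapp --daemon
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Once such a file is in place, the service is enabled with ‘systemctl enable myapp.service’ and started with ‘systemctl start myapp.service’ – everything goes through the systemctl controller.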

Doesn’t sound too daunting, now does it?

The actual problem resides ‘under the hood’. As it stands, systemd has an extensive network of dependencies and is itself a dependency of major packages like KDE or GNOME. This fact has been somewhat neglected, because systemd was continuously pushed from upstream as modern, intuitive and useful. That is of course perfectly true, but I feel the overall Linux ecosystem suffers more than it profits from systemd.

The very definition of Linux has always been choice. The right to decide on the distribution, the installed software, the programming environment…and also the init system! We used to have pure System V init, Upstart, OpenRC, etc. I admit, some of those init systems were slightly faulty, and ‘evolution’ removed them from the Linux-verse as inefficient. However, none of them tried to be everywhere and everything. Let us take OpenRC (my current favorite) as a case study.

OpenRC utilizes a global configuration file (rc.conf) to manage basic parameters of the boot process. It is also responsible for drive mounting and ships an rc-service utility for managing running system services. In principle, it does everything systemd does, with a few key differences (a short command sketch follows the list below):

  • Init scripts are independent of each other, and a broken service can be quickly restored (systemd routes all services through a central manager driven by systemctl; if that breaks, systemd becomes unusable)
  • OpenRC does not intend to replace anything or force users to rely on it. It is a service manager / init system and nothing more.
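
For the curious, this is roughly what day-to-day service management looks like under OpenRC, with the systemd equivalents for comparison (a sketch from memory; ‘sshd’ stands in for any service, and details may differ slightly between distributions):

    # OpenRC: add a service to the default runlevel so it starts at boot
    rc-update add sshd default

    # Start and inspect a single service; each init script in /etc/init.d
    # is a standalone shell script, so a broken one can be edited or
    # replaced without touching the rest of the system
    rc-service sshd start
    rc-service sshd status

    # The rough systemd equivalents, all routed through systemctl
    systemctl enable sshd
    systemctl start sshd
    systemctl status sshd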

As OpenRC is not a dependency of anything, I am free to do whatever I want with my operating system. The problems start when I would like to use, for instance, MATE as my desktop environment. It is light in comparison with Xfce, GNOME or KDE, but equally feature-rich – hence a reasonable choice, even for weaker computers. However, in order to use any of the major desktops I need systemd, or at least crucial (inseparable) parts of it.
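
One can check this on a Debian-based system by inspecting package dependencies (a sketch; exact package names and output vary by release):

    # List the declared dependencies of a desktop session package
    apt-cache depends mate-session-manager

    # Or see which packages pull in systemd's shared library
    apt-cache rdepends libsystemd0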

I find it quite reasonable that popular desktop environments try to take advantage of systemd, but why, God why, does it have to be a necessity…? Why do I need to limit myself to a specific init system, and often by extension to a specific range of distributions (systemd is already quite widely used), to enjoy Linux?

I don’t want to sound like one of those tinfoil-hat people, but the current systemd-favoring trend is troubling at the very least, because it brings us a step closer to Windows and its intrusiveness. Let’s bear that in mind!

The Price of Popularity

To begin with, the operating system (OS) landscape circa half a century ago was tremendously dynamic. Initially, Unix dominated almost the entire market. Later, Microsoft’s and IBM’s Disk Operating System (DOS) appeared, to be superseded by MS-DOS with a simple graphical user interface (GUI), and finally by Windows. At this point, a revolution happened. Windows became so popular that it gradually consumed around 90% of the consumer market, surpassing even Apple’s Mac OS. Part of this extraordinary achievement was due to an improved MS-DOS-like GUI featuring resizable, movable boxes (hence the name Windows, I believe). The average Joe was overjoyed (pun unintended) – finally, personal computers made easy!

Today, things are not much different. Windows still dominates the PC market segment. Though many of us think that the current Windows (God forbid, how awful it is!) prevails only because it once became popular, that is indeed just one of the reasons. The other was also mentioned above – ease of use. I think it’s perfectly fine if one needs a computer only for really basic operations – browsing, writing e-mails, gaming, etc. There, truly, Windows shines. Unfortunately, it’s terrible for everything else, because a lot had to be sacrificed in favor of user-friendliness:

  • Configuration options hidden behind GUIs, which are often quite hard to find
  • Configuration files in inaccessible formats, to prevent accidental damage or tampering
  • Multitude of helper scripts and programs to make the user’s life easier
  • Highly intrusive system management tools
  • Many others

As a result, the things one can do with a Windows installation are very limited, and part of the limitation is enforced by copyright regulations (distributing modifications to system components is against the End User License Agreement). To me that is simply horrendous!

Currently, Linux is trapped in a vicious circle. It cannot become truly popular because not enough software and hardware developers are committed to it, and those developers will not devote their precious time to Linux because it is not popular enough. Clearly, something has to change to break out of this vicious circle. What I am afraid of is that such forced changes will bring Linux closer to proprietary operating systems such as Windows and Mac OS X, and eventually strip it of its uniqueness.

Certain changes have already been implemented, and surely for the worse: the clunky, bloated (but so user-friendly!) GNOME desktop, the overly intrusive systemd (I will elaborate on that in the next entry), a multitude of GUIs, etc.

For now the opposition is still there, and many project groups are doing their best to provide alternatives to the so-called mainstream products (GNOME and KDE, for instance). How long will this last, however…?

Quo Vadis Computing?

Some time ago I took up a new hobby – dumpster raiding. I discovered by chance that people throw away a lot of electronics, and at least some of it is in mildly good condition. Of course, most of it is damaged beyond repair, and I am too stingy to invest in this new business of mine. However, from time to time I find actual treasure. Roughly a month ago I found a perfectly fine desktop computer, just without the screen and keyboard. I was shocked, because everything was inside – RAM, hard drive (HDD), graphics card, etc. All I needed to do was plug in a monitor and find a keyboard + mouse to get it running. The desktop had an 80 GB HDD, 4 x 256 MB RAM and an old Pentium 4. I set it up with 32-bit Debian and…it was fast. Using only 100 MB of RAM with a fairly light window manager (IceWM), I could do a lot. Everyday tasks felt like a breeze, especially thanks to the hyper-threading (HT) supported by the CPU. Then I realized the Pentium 4 is ancient. It’s early 2015 and the market offers computers with several CPU cores, tens of GB of RAM, 6+ TB HDDs and what not. However, what for…?
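
For anyone who wants to try the same trick on rescued hardware, the setup is roughly this (a sketch; package names as in then-current Debian, and your mileage may vary):

    # Install a minimal graphical stack on a fresh Debian base system
    apt-get install xorg icewm

    # Start IceWM from the console login
    echo "exec icewm-session" > ~/.xinitrc
    startx

    # Check the memory footprint after logging in
    free -m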

I was born in the 8-bit era and began my computing adventures in the 16-bit era. Up to a certain point in time I could feel computers getting better. My Matrox 3D card wouldn’t handle newer games, the HDD was overloaded, and the computer would often cry for more swap space due to insufficient RAM. However, the Pentium 4 was a milestone. 4 GB of RAM was enough (worth mentioning: it’s the current low-end standard), and disk space had already become relatively cheap. For everyday tasks – browsing the web, doing homework, paperwork, etc. – one didn’t need more…

Of course, 4 GB of RAM nowadays is much faster than 4 GB of RAM back in the Pentium 4 days, especially when comparing high-frequency DDR3 with low-frequency DDR2 RAM. That’s fully understandable. However, I feel computer retailers are often doing a rather dishonest job, pushing high-grade gaming hardware into the hands of clueless shoppers who usually need a computer only to browse the web and write some documents. ‘With our new Core i7-8000K Ultra Hyper Book X you will be able to browse the web a thousand times faster!’ Honestly, we know that is not true!

Every year we produce more and more electronic garbage. We generate more than we actually need. Food for thought – this is exactly where Linux comes in. Linux can turn this garbage into treasure…