Curing GUI Phobias

For some time now I’ve been a happy openSUSE camper, though I still frequent the main Fedora IRC channel. Truth be told, it was tough to decide between those two distributions, as both are extremely solid and largely bug-free. My third choice would have fallen on one of the Ubuntu spins (Xubuntu or Lubuntu, most likely). Eventually, I realized I’m less and less inclined to put in the extra time to set up Arch Linux or Gentoo purely for self-indulgence. I know I could, but why should I? I’m familiar enough with Linux to roll any distribution. It seems my impressions go hand in hand with those of the author of Dedoimedo (http://www.dedoimedo.com/computers/why-not-arch-linux.html). I’m sure he too would be able to install Arch Linux on any PC of his choosing, but again, why should he?

Many modern Linux distributions are solid, finished products. You can install them on almost anything from a bootable USB or DVD and just get things done. That’s precisely the point. It took years, buckets of sweat and units of blood to reach that state, but we’re here now. What’s more, we’ve beaten Windows, because it’s still as much of a pain to install as it was a decade ago. Therefore, instead of repeating the rite of passage (read: installing Gentoo or Linux From Scratch on a laptop), we should move forward. There is a misplaced elitism in some corners of the Linux and Unix communities (mea maxima culpa!) which says that if you don’t run a hardcore distribution on your shoddy PC, you’re not nerdy enough. Nothing could be further from the truth! Nerdy stuff can be done on virtually any of the mainstream distributions. You can set up servers running Ubuntu Server (duh!). You can build datacentre-grade boxes with openSUSE Leap. A file server on a Raspberry Pi? Yup. And so on. There is no need to spend hours hacking the Linux kernel to squeeze out that extra 0.000000001% performance gain, thinking that alone makes us Computer Wizards. The important thing to bear in mind about mainstream distributions is that someone poured hours of their free time into assembling the Linux kernel, utilities, a desktop environment, repositories, etc. All of this so that we wouldn’t have to do it ourselves. Isn’t that golden? Truly, we should build on that rather than shun it.

I understand this stands in stark contrast to my former preachings. I, like many others, escaped from Windows because it was overburdened with black-box utilities and hidden system services. It was a pain to fix without bricking it entirely. However, it is actually nice to have a pretty GUI and helper tools that simplify system maintenance. The main difference is that Linux-based operating systems are highly malleable, well-documented and can be adjusted to our liking. I realized that’s probably the main reason I switched.

openSUSE Leap/Tumbleweed – A New Look

[Image: gecko_combined.png]

I ran Fedora Workstation on one of my legacy MacBooks for several days. I was quite satisfied with it, though some things proved troublesome, like the GRUB2 configuration. At the end of the day, one realizes Fedora is best suited for Fedora developers and GNU/Linux novelty enthusiasts. Should one require a stable development platform, Fedora is probably not the top pick. On the other side of the Red Hat spectrum is CentOS, though its conservative approach to software updates makes it better suited to hardened enterprise environments… What else do we have, then?

In the realm of the Red Hat Package Manager (RPM), the European equivalent of Fedora is openSUSE. Few people now remember that its predecessor, SUSE Linux, was born slightly earlier than Red Hat proper and gained considerable acclaim in the enterprise sector. Alongside Red Hat, it was one of the options available for HP/Compaq business desktop workstations. Quite obviously then, the openSUSE daughter project instantly garnered appeal. With its recent changes and closer ties to SUSE Linux Enterprise (SLE), it is an interesting competitor to Fedora.

openSUSE is available in two flavors – Leap 42.x (currently at point release 42.1) and Tumbleweed. The former is built from SLE packages, hence enterprise-level stability is a given. However, neither the kernel nor the applications are as stale as in the case of CentOS. This is a great choice for people who want to Get Their Job Done and get on with their lives. The other flavor, Tumbleweed, is a rolling-release spin with more bleeding-edge packages, geared more towards software developers. It lacks some of the features of Leap 42.x and is surely not as ‘hardened’, though I consider it stable enough for day-to-day use. I am currently running it on my almost-OSS-friendly ASUS S301A ultrabook. Apart from the slightly too long boot process, everything is just swell.

Because openSUSE relies heavily on GUI applications and systemd, I had strong negative feelings towards it in the past. It used to feel like a slightly less locked-in take on MS Windows. However, openSUSE has come a long way since the dusk of the 13.x line and improved in many regards. Also, I am no longer interested in the init wars, since it is quite clear what the enterprise standard is. I can either live with that or return to my batcave and resume my endeavors to save Linux City. In a moral sense it actually IS a dilemma, mind you!

I tested both Leap 42.1 and Tumbleweed on vastly different hardware and my impressions are highly positive thus far. A single word to describe them would be ‘polished’ – something that was not so apparent in the 13.x line, but is all too obvious in Leap 42.1. Of course, one cannot escape the impression that openSUSE resembles MS Windows. On second thought, though, I consider this an advantage. The YaST2 Control Center does a much better job than the well-known Windows Control Panel. Microsoft should take note of how minor incremental improvements, built on a graphically consistent foundation, can produce a more lasting effect than a ‘because 7 ate 9’ PR stunt. On the other hand, with the Ubuntu 16.04 LTS compatibility layer and virtual desktops in Windows 10, it seems Microsoft is slowly catching up.

GUI Design vs Software Engineering

I touched briefly on the topic of graphic design practices versus software engineering in my last entry, but since it still bugs me quite a bit, I decided to elaborate. Looking at Windows’ history, I believe the usefulness of its graphical interfaces peaked around Windows 98 to Windows XP, with the latter already containing slightly more bling than I found comfortable. It might be because Windows XP was the last Microsoft operating system I trusted and used almost until it reached end-of-life. Nostalgia is a human thing, after all! Nevertheless, I trust my judgment and have a strong feeling that anything past Windows XP puts far too much emphasis on the graphical user interface. However, I don’t want this entry to become another Why I hate Windows rant from a hurt keyboard warrior. Moving on, then!

When I write a piece of software I typically start with the raw code and focus only on the CLI (command-line interface) until I am comfortable with how the program works and reasonably certain that most bugs have been caught. Obviously, if the program requires a lot of user input and some operations will be repeated, a GUI (graphical user interface) is indeed needed. Following the KISS (Keep It Simple, Stupid) principle, GUIs are supposed to simplify mundane tasks, but also provide the user with direct means of exerting low-level control over the piece of software. Therefore, it is important that the GUI stays simple and agrees with WYSIWYG (What You See Is What You Get). Further additions are just eye-candy.
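
As a rough illustration of that CLI-first approach (a minimal, hypothetical sketch only – the script name, its options and the zenity wrapper are my own examples, not anything from a real project), the actual work can live in one plain function with a getopts-based CLI, and a GUI can be bolted on later as a thin wrapper:

    #!/bin/sh
    # backup.sh - hypothetical example: core logic first, interfaces second

    # The actual work lives in one function, independent of any interface.
    do_backup() {
        src="$1"; dest="$2"
        tar -czf "$dest" "$src"
    }

    # Plain CLI built with getopts; this is the first interface the program gets.
    usage() { echo "usage: $0 -s SOURCE -d DESTINATION [-g]"; exit 1; }

    gui=0
    while getopts "s:d:g" opt; do
        case "$opt" in
            s) src="$OPTARG" ;;
            d) dest="$OPTARG" ;;
            g) gui=1 ;;          # optional GUI mode, added only once the CLI is stable
            *) usage ;;
        esac
    done

    if [ "$gui" -eq 1 ]; then
        # A thin GUI layer (assumes zenity is installed) that reuses the same function.
        src=$(zenity --file-selection --directory --title="Directory to back up")
        dest=$(zenity --file-selection --save --title="Archive to create")
    fi

    [ -n "$src" ] && [ -n "$dest" ] || usage
    do_backup "$src" "$dest"

The point being: the GUI path and the CLI path both end up calling the same do_backup function, so the graphical layer stays a convenience rather than a dependency.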

Hence, it really pains me when the primary focus is the development of user interfaces. On their own they mean nothing, and should something break, we are dependent solely on the developer. How many of us have seen useless prompts with error codes that mean nothing or are too ambiguous to interpret? Or those occasions when the GUI crashes and we have no idea what happened? That’s why I prefer CLIs and use them until they become too cumbersome. My favorite GUIs are those built to simplify window manager configuration, because out of necessity they contain only the most important features and give direct access to the underlying variables. Glorification of gloss and shine may make a program pleasing to the eye, but long-term use will prove dissatisfying and tiring.

The only beautifications I accept are those that do not interfere with functionality. Back in the old days Apple developed an elegant, consistent style following the teachings of Italian industrial designers. Slight gloss and a clear, grey-white look dominated all applications. This appearance is still used today across all Apple devices, because it has proven to be the perfect balance between pleasing the eye and the mind alike. While I am not a big fan of Apple, it should be said that in terms of GUI design, they did hit the spot!

Pumping Performance with Asus Pundit!

[Image: asus-pundit.jpeg]

Some time ago I became good friends with a certain Asus Pundit P1-PH1. I found it, sad and forgotten, in a local electronics dumpster. Initial diagnostics showed a dead PSU, which I replaced with one from an old MSI small-form-factor desktop PC. I also expanded the RAM to 2 GB of 533 MHz DDR2 (the maximum it can take) and swapped the single-core Intel D for a faster Pentium 4 clocked at 3.20 GHz. After some reading I realized that the memory can actually be clocked up to 667 MHz, and since I have two 1 GB 667 MHz DDR2 sticks, I will upgrade this box in the near future. To top it off, I added a 120 GB IDE drive.

The Pundit is a rather dated piece of hardware, though thanks to GNU/Linux I can restore it to its former multimedia-centre glory. As my operating system I chose BunsenLabs Linux, based on Debian Stable. Other noteworthy alternatives are Debian Stable itself, Arch Linux, Manjaro Linux, Bodhi Linux, antiX and Peppermint OS. Frankly, Arch Linux would be the lightest on resources, but I always have problems with theming, and the Pundit has some slots/ports I am not entirely familiar with. In other words, the less there is to configure, the better!

BunsenLabs offers a fairly easy installation procedure, as the installer is based on Debian’s original installer and the bl-welcome post-installation script handles many useful to-dos. Still, I made some minor tweaks to improve my user experience. Below, a summary:

  • SpaceFM as the file manager instead of Thunar
  • Midori as the Internet browser instead of Iceweasel/Firefox
  • volumeicon as the volume tray applet instead of Volti
  • Blacklisted redundant kernel modules in /etc/modprobe.d/ (a minimal example follows the list)
  • Installed firewalld and the firewall system tray applet
  • Installed additional plugins for Geany
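
For the module blacklisting and the firewall bits, here is roughly what that looks like on a Debian-based system such as BunsenLabs (a sketch only – the module names below are placeholders, so blacklist whatever lsmod shows as unused on your own box, and the applet package name may differ slightly per release):

    # Blacklist modules you know you do not need (example names only).
    # Each "blacklist" line prevents the module from being auto-loaded.
    cat <<'EOF' | sudo tee /etc/modprobe.d/blacklist-local.conf
    blacklist pcspkr
    blacklist bluetooth
    EOF
    sudo update-initramfs -u     # rebuild the initramfs so the blacklist applies at boot

    # Install firewalld and its tray applet from the Debian repositories.
    sudo apt-get install firewalld firewall-applet
    sudo systemctl enable --now firewalld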

I almost never use more than 1 GB of RAM and CPU usage rarely exceeds 50%. Then again, I don’t do anything especially fancy. It is quite possible that Windows XP would run more smoothly on this machine, but without regular updates, GNU/Linux is the only safe option. So grab your favorite GNU/Linux distro and make your legacy hardware shine once more!

Can Windows 10 Save Microsoft?

As Windows 10 is already out and Microsoft is herding droves of Windows devotees along with its powerful campaign, I decided to tackle this question as well. I no longer use Windows for personal computing, and though it is surely a matter of taste, I feel there are good reasons to believe that any Windows version beyond Windows 7 exists merely for profit.

Fast-forwarding to modern times (post-2000): Windows XP has been the most successful Windows version thus far. It was stable, visually pleasing and offered all of the essential features one may expect from an operating system. Its successor, Windows Vista, was almost a complete flop, though it did introduce concepts important to later Windows releases, like User Account Control (UAC). No wonder computer users considered Windows 7 a return to Windows tradition and a milestone in Microsoft’s history.

However, this sinusoidal pattern of good and bad Windows versions gives the impression that Microsoft has a hard time grasping user expectations. It is all the more puzzling given that they managed to get them right twice already (Windows XP and Windows 7), yet they still strive for… what, exactly? Windows 8 was an absolutely unnecessary catastrophe, and everything beyond it merely emphasizes the bad taste it left behind. To top it off, Windows as an operating system architecture is so deeply flawed that a whole software industry was born to fix those flaws. This is nicely elaborated on in a couple of rants I found:

rant on Windows and Windows 10

slightly vulgar rant on Windows

I admit that neither Mac OS X nor GNU/Linux is perfect. However, the latter is free, open-source software and flexible enough that existing issues can be gradually solved. Windows 10, on the other hand, is just plain terrible:

  • The Start Menu that Microsoft promised to bring back was butchered and filled with Windows 8 tiles and adverts.
  • The file manager directory tree is a mixture of mounted drives/partitions, linked network directories, favorites, etc., in a seemingly random order.
  • Key system options can be changed either in the legacy Control Panel or in the new Settings app, with no consistency as to which one to use.
  • The Edge Internet browser is largely unfinished and buggy.
  • The user interface (UI) looks like a school project in GUI design, thrown together from stock widget-toolkit defaults.
  • There is no guarantee that an upgrade from Windows 7/8/8.1 will succeed, or that Windows 10 will ship drivers supporting all of your hardware.

Many of the above qualms have been voiced by long-time Windows fans as well. Sadly, I feel Windows 10 is a failed product that should be avoided at all costs. My suggestion is to either stick to Windows 7 or forget Windows entirely and move to something else (Mac OS X? GNU/Linux?). Windows 10 cannot and will not save Microsoft.

Can Tux Go Mobile?

[Image: Tux on a laptop]

As some people sardonically claim, the year of the Linux desktop is drawing near. However, when analyzing the current ecosystem I noticed that ‘near’ is an asymptote, not a fixed destination. I have tried to draft a few priorities I think GNU/Linux needs to cover before becoming truly popular on mobile devices (laptops, notebooks, etc.).

1. Hardware support:

More and more hardware vendors openly support GNU/Linux as a platform and offer compatible drivers. However:

  • The only performant option for nVidia graphics is the proprietary driver, which does not support kernel mode-setting and other Linux-specific features
  • AMD and Intel GPUs have open drivers, but their performance is not on par with the Windows and Mac OS X drivers
  • Intel, Atheros, Realtek and some other companies provide drivers for wireless network chips, though the coverage is far from complete
  • Features like brightness adjustment, suspend/resume, fingerprint readers, etc. depend on so many interlocking components that whether they work is mostly down to sheer luck

Granted, most of the above works to a certain degree, simply not to the same extent as on Windows or Mac OS X. It would really help if computer producers and vendors were to list the exact hardware components shipped inside their devices. That is usually a tiny piece of information, yet for us open-source people it makes choosing well-supported hardware substantially easier.
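
Until vendors publish such lists, the usual workaround is to inspect the machine yourself (or ask someone who owns one to run a couple of standard commands). A minimal sketch – the grep patterns at the end are just example prefixes for common Intel/Atheros/Realtek wireless modules:

    # List PCI devices with vendor/device IDs and the kernel driver currently bound to each.
    lspci -nnk

    # List USB-attached components (webcams, card readers, some wireless chips).
    lsusb

    # Check which kernel modules are loaded for, say, the wireless chip.
    lsmod | grep -i -e iwl -e ath -e rtl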

2. Common software standards:

As it stands now, there are too many GNU/Linux distributions (Ubuntu flavor-of-the-week, anyone?) with too many applications fulfilling the same common tasks. The variation in command-line network management tools comes to mind, though it is by no means the only such case. Don’t get me wrong, I adore choice. However, for vendors and third-party software developers too much variety makes understanding GNU/Linux all the more confusing. A good idea would be to collaboratively decide on a set of low-level tools common to all distributions, to unify at least the Linux base system a bit more.

3. Emphasis on uniqueness:

GNU/Linux is viable as long as it is an alternative to Mac OS X and Windows. The more it mimics them, the more it becomes a mere copy. That said, mimicry may in many cases prove successful. For instance, KDE managed to expand on Windows’ standard desktop look, emphasizing functionality and ease of use. GNOME3, shunned by many for its bugs (now mostly resolved) and feature obfuscation, is in fact a more approachable adaptation of Apple’s Aqua GUI. Many of us are so-called Unix veterans and we don’t care much for user-friendliness. However, end-users and vendors do, and we die-hards should at least respect that. I think GNU/Linux, but also the other Unices, has enough unique features to be considered an alternative to Windows or Mac OS X for many computing tasks.

To recapitulate, I think the year of the Linux desktop is not so far ahead anymore. In fact, it’s almost here! However, for the world to fully embrace it, open-source developers and hardware vendors should collaborate more. Both sides will surely profit from greater openness and trust.

The Sickness Called ‘User-friendliness’

Originally, the Linux kernel was forged single-handedly by Linus Torvalds, who wanted a free Unix-like system of his own and was unimpressed by the alternatives of the day, MS-DOS included (yes, that long ago!). Later, operating systems based on the Linux kernel began to appear, and their main targets were servers, workstations and mainframes. The points of pride were stability, transparency (sorely lacking in MS-DOS) and code correctness. After all, Linux was raised on the UNIX philosophy of sane programming and system design.

Then, something happened. A number of Linux developers and distribution maintainers noticed that MacOS and Windows were popular on the consumer market because they were user-friendly. This was roughly the same kind of revelation Adam and Eve had in the biblical Paradise when they tasted the fruit of the Tree of Knowledge of Good and Evil and discovered they were naked. Back then, Linux was not user-friendly at all! They [the developers and maintainers] got together and said Hey, we want Linux to be popular among average Joes as well! It deserves it!. Thus, the long trip down the rabbit hole began. Unfortunately, it didn’t lead to Wonderland…

Distributions began to swap tried and tested solutions for design atrocities. GNOME Network Manager (a GUI) on top of wpa_supplicant (which has its own GUI!) on top of dhcpcd. Pulseaudio on top of ALSA. GRUB2, with its ‘modern’ configuration syntax, ridiculous to the point that it is easier to just auto-generate the config and forget what the bootloader even does (honestly, a very bad attitude). There are tons of examples. Sadly, user-friendliness is merely bait. Linux will never, in all eternity, be as user-friendly as Mac OS X… not without sacrificing the traits many of us value: flexibility, freedom of choice, a small resource footprint, etc. Is that path really worth going down?
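
For context, the ‘tried and tested’ base layer mentioned above is genuinely small. A bare-bones wireless setup without any of the extra GUI layers looks roughly like this (a sketch only – run as root; the interface name wlan0, the SSID and the config path are placeholders and can differ per distribution):

    # Generate a wpa_supplicant network block from the SSID and passphrase.
    wpa_passphrase "MyHomeNetwork" "secret-passphrase" >> /etc/wpa_supplicant.conf

    # Associate with the access point in the background...
    wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf

    # ...and let dhcpcd fetch an IP address on that interface.
    dhcpcd wlan0

Whether the NetworkManager layer stacked on top of this is worth it is exactly the trade-off the paragraph above complains about.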

The sickness is spreading. For the proponents of user-friendliness it is not enough to take the Linux kernel and build user-friendly operating systems on top of it (which is, more or less, how it is done in the BSD world). To matter, one apparently has to change the upstream. Instead of creating, they want to alter – to mold the whole Linux ecosystem to their vision.

I sincerely hope this never happens. We keep what we value most in Linux only as long as people don’t try to butcher it with Mac OS X/Windows standards. If we sack the UNIX philosophy, Linux as we remember it will be no more…

The GNU/Linux Revolution…

[Image: linux_revolution]

The beginning of 2016. A major storm is brewing over the GNU/Linux landscape. Winter is coming and things are starting to change – for the much worse. Thanks to this article: http://slated.org/the_poetterisation_of_gnu_linux it became very clear to me what Red Hat is all about. For years I have respected Red Hat as a major player in the GNU/Linux ecosystem and a supporter of open-source software. I believed they genuinely cared and wanted to prove to the world that the open-source development model IS the model of the future. I was wrong.

Red Hat never cared. That is quite apparent when looking at some of the developer comments on their bug tracker: https://bugzilla.redhat.com/show_bug.cgi?id=534047#c9. They simply wanted a new operating system on which to sell their software products. Furthermore, the way they organized the coup on GNU/Linux was quite devious. First, through Fedora, they allowed the development of modern, streamlined applications. That produced a lot of hype and cheering for open-source software. Alas, steadily, tried and tested UNIX solutions, which had worked for years, started to be declared obsolete, because they were too old and not modern enough. udev and a couple of other projects were created to address this. Next came systemd, which took over the init process and gradually started to absorb the aforementioned smaller projects. You want to boot your system – you need systemd. You want to run X11 – you need udev, which is now part of the systemd source tree. You want to auto-mount devices – you need D-Bus and udisks, which are tightly coupled to systemd. To be fair, you don’t strictly need systemd for all of this to work, but it has become the new default. Somewhere in between came the GNOME3 project, which massacred the positive vibe left after GNOME2 and quickly tied itself to systemd. It was simple and easy to use, as its proponents claimed. Frankly, it was oversimplified, obfuscated and completely useless, much like the Metro UI of Windows 8, which appeared later.
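
A quick way to see this coupling for yourself on any systemd-based distribution (standard commands; the exact output will obviously vary per machine):

    # udev is now built and versioned as part of systemd (the daemon is systemd-udevd).
    udevadm --version
    systemctl status systemd-udevd.service

    # The graphical session pulls in a long chain of systemd units.
    systemctl list-dependencies graphical.target | head -n 20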

The overall fuss over Red Hat’s agenda was and still is enormous. However, it really boils down to a single statement – GNU/Linux operating systems are neither Windows nor Mac OS X. The moment this is forgotten and/or forsaken, the GNU/Linux ecosystem will become yet another streamlined, commercial product.

Fortunately, there are options! FreeBSD set itself apart from the mainstream long ago and will not participate in Red Hat’s machinations. Moreover, it follows the original, true UNIX philosophy. In the Linuxverse there is the OpenRC-based edition of Manjaro, and a relatively new but greatly promising project – Devuan.

Feature Creep – Unwanted Gifts from Developers?

[Image: unwanted_gifts]

The open-source community likes software freedom (the ability to choose) and gifts (new software features). However, sometimes there are gifts/features that we don’t need, yet we get them anyway and they cannot be returned. Unfortunately, the open-source community has recently been overflowing with such gratifications. I find the trend greatly troubling, because many of these giveaways are not properly discussed in the community and end up imposed by minorities.

One of the most common patterns is that a new piece of software or feature appears and a very vocal minority instantly explodes with Oh wow, shiz, this is so cool, we need this on every computer! Alas, such occasions usually lack a cool-headed member of the crowd who would retort with Why exactly do we need this? How does this help the community? and so on. One such vocal minority (which in fact is not so small) is the Fedora community.

Fedora, an otherwise fantastic Linux distribution, is sadly often treated as a test bed for innovations and has a bad track record of pushing forward features that are not ready for prime time. Many such innovations have a small scope and I often welcome them with a faint smile and a thought – Oh, that’s quite interesting. However, from time to time comes a gigantic train engine with sufficient force to distort the whole Linux landscape. I will mention the three most disturbing train wrecks.

GNOME3 was supposed to be a continuation of the GNOME2 project. Yet things went wrong and the designers responsible for the GUI decided it needed a complete rework. The new version became oversimplified, heavier on resources, with all the relevant features hidden away. Many critics claimed that such a move would alienate users (it did) and that GNOME3 would eventually die. Surprisingly, Fedora picked it up and declared it cool. Next in line were openSUSE and Debian (?!). Now GNOME3 is the default desktop environment for Fedora and, despite being very troublesome to work with, it is considered suitable for a serious workstation-class operating system. Who would have thought…

Pulseaudio is another such unwanted gift. I don’t intend to go on a rant about how it breaks everything; the Internet is already overflowing with such rants. It is also not completely useless, as it makes switching between sound devices marginally easier. Regardless, its features are useful mostly to the small group of Linux users who deal with sound production and routing to a greater extent. For normal users and non-audio developers who just want to listen to some music it is troublesome, because it hijacks all the sound controls, yet still fully relies on ALSA underneath. The drivers and controls are already there – there is no need for another abstraction layer.
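
To illustrate the ‘already there’ part: on a system without Pulseaudio, ALSA alone covers the basics (a sketch; the mixer control name varies per machine, and the sample .wav is one shipped by alsa-utils on many distributions):

    # List the playback devices ALSA already knows about.
    aplay -l

    # Adjust volume directly on the hardware mixer ('Master' is a typical control name).
    amixer sset Master 80%

    # Play a file straight through ALSA, no sound server involved.
    aplay /usr/share/sounds/alsa/Front_Center.wav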

Finally, there is systemd. Lots of fuss, tears and turd thrown around, even to the point that Debian was forked as Devuan (a bittersweet incident). Unsurprisingly, systemd comes from the creator of Pulseaudio. Just like Pulseaudio, it violates the ‘one tool per job’ UNIX paradigm and doesn’t do anything genuinely new or revolutionary.

Unwanted gifts can be monetized, returned or given away to others; we have the freedom to get rid of them. The software features mentioned above, however, were pushed down users’ throats the way one force-feeds ducks before they are killed for meat. In a commercial ecosystem GNOME3, Pulseaudio and systemd would die painful deaths, as they hinder productivity and stomp over established standards. What pains me the most is that they follow the same top-down approach of developer-imposed features that is so prominent in Windows. Have the users suddenly become irrelevant?

Freedom to Freedom

Since my very early childhood I have loved tinkering with electronics. Initially, I disassembled toy trains, jeeps and motorbikes. When I received my first PC, this escalated, and items such as joysticks and gaming pads fell victim to my tinkering urges. Naturally, as time went by, I aimed for bigger (and better) things, like desktop computers. I truly enjoyed altering and improving the objects around me. This concerned software as well – and hence my very first grudge against Windows.

Software for Windows is usually protected by a slew of copyright restrictions, so publishing improvements is limited. For example, if I were to make a certain Windows program more useful, I would most likely be prohibited from publishing my work without the explicit consent of the authors. In many cases this consent (verbal or written) would not be given, because it would conflict with the interests of the company that released said software. This proprietary model effectively limits development to the work done by companies alone. Open-source and free software, on the other hand, is driven by a loose collective of regular users, hobbyist programmers and professional developers, which is capable of delivering much more in the same time frame. Studying freedom in software development eventually led me to the work of Richard Matthew Stallman…

Richard Matthew Stallman (known as ‘rms’ online) is a software freedom activist and programmer, founder of the GNU Project and the Free Software Foundation. The list of his achievements is so long that one could easily write a book (or two…) covering his life. I personally hold rms in very high regard and consider his Four Freedoms to be of equal importance to Asimov’s Three Laws of Robotics. However, when discussing freedom in general, and freedom in software development in particular, one should consider the possible extremes – lack of freedom (tyranny) and too much freedom (anarchy).

Just to briefly recapitulate Stallman’s Four Freedoms:

0. The freedom to run the program as you wish, for any purpose.
1. The freedom to study how the program works, and change it so it does your computing as you wish.
2. The freedom to redistribute copies so you can help your neighbor.
3. The freedom to distribute copies of your modified versions to others.

I completely agree with points 0 and 1, with the small addition that I would gladly donate to support the developers behind a given project if I find it useful. There is something ethically twisted about the proprietary model, in my view. The customer (no longer a user, since money is already involved) pays for a promise of software quality rather than for the software itself, because he or she can only verify that quality after concluding the purchase. Because money comes first, this model is awfully easy to misuse. Concerning video games, a common practice years ago was to release demo versions to give the customer a chance to test the product before buying. This has since been replaced by ethically dubious hype.

As for points 2 and 3: sharing is not always what happens, and people often take without giving back to the community, either out of laziness or a simple lack of skills. Rarely, this is even pushed to a capitalistic extreme – people take the software and resell it, generating revenue. Some licenses indirectly allow this (the BSD license, for instance), though I think it should be frowned upon and ostracized by the community. Money tends to assign a fixed, arbitrary value that fails to capture the subjective worth of a thing as perceived by each of us individually.

To conclude, I believe Stallman’s Four Freedoms are of grave importance and should be applied to software whenever possible. However, great care needs to be taken to avoid misuse and corruption. We are people, after all; both good and bad are part of our nature…