GNOME3 Oversimplified?

[Image: GNOME logo]

Seeing as how GNOME3 and KDE (4? 5? Plasma? Neon? Ion?) are the leading desktop environments nowadays, I decided to give GNOME3 a try on my openSUSE Leap 42.3 workstation (openSUSE being my current main distribution on most hardware, including a Raspberry Pi 3). There are good and bad things, and some of this agrees with my former assessment of GNOME3. However, I assumed that since I'm more pro-desktop now, my opinion might change. Well, perhaps it did…

The Good:
In visual design alone GNOME3 wins a trophy. There is a lot of MacOS X mimicry and I think that speaks well of the project. The guys (and gals!) at Apple know their stuff, so why not get inspired by them a little? The overall UX (user experience) is also positive. Thanks to the highly intuitive interface, finding important desktop features is a breeze. One just needs to browse a bit and not follow the imprinted "this option must be hidden somewhere" philosophy that other desktop environments teach us. I also greatly appreciate the attention to useful features like the one-click offloading of graphically intensive applications to the discrete nVidia card on Optimus laptops. If our day-to-day tasks focus on office work and leisure, GNOME3 could potentially be the desktop environment of the future. It stands to reason, because it's an open-source project, molded and shaped towards perfection by the user and developer communities. It constantly evolves, so there is no limit to its improvements.
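As a side note, here is a minimal sketch of what that one-click offload roughly corresponds to on the command line, assuming the open-source graphics stack (nouveau + PRIME) rather than the proprietary nVidia driver; the application name is just a placeholder:

# List the available GPU providers; the integrated GPU is usually provider 0.
xrandr --listproviders

# Ask Mesa to render a single application on the discrete GPU.
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
DRI_PRIME=1 some-graphically-intensive-app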

The Bad:
Unfortunately, it seems that simplicity of design has its price. Troubleshooting GNOME3 is extremely painful and many of the applications (including GNOME Shell and the GNOME Display Manager, GDM) throw the most uninformative error messages.

[Image: GNOME's "Oh no! Something has gone wrong." error screen]

Authored by the Interwebs

Case in point, the above error screen. The "Oh no! Something has gone wrong." message is a phrase typically used in commercial applications to shield end users from the headaches of reading crash logs. By willfully choosing Linux we demonstrate that we're no mere end users, so treating us as such is quite rude. To dwell on this a bit more, the above error screen appears even when the GNOME Display Manager login panel crashes. How is one supposed to log out without being logged in to begin with? To make matters worse, since many Linux distributions have the display manager set to restart on failure, this screen will keep re-appearing until proper troubleshooting is done in one of the TTY consoles (Ctrl + Alt + F1–F9). This very much reeks of Windows and MacOS X problems, where the user interface basically took over the OS. All we can do is just reboot and hope for the best. Other applications show similar "An error occurred" messages without any means of actual troubleshooting.
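For reference, a rough troubleshooting sketch from one of those TTY consoles, assuming a systemd-based distribution running GDM (names and commands may differ elsewhere):

# Switch to a text console with Ctrl+Alt+F2 (or another free TTY) and log in, then:

# Inspect this boot's log entries from the display manager and the GNOME Shell binary.
journalctl -b -u gdm
journalctl -b /usr/bin/gnome-shell

# Stop the automatic restart loop while investigating, then bring GDM back up.
sudo systemctl stop gdm
sudo systemctl start gdm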

My take-home from this experience is "Thanks, but no thanks". If I want to get some work done, I would rather rely on LXDE, XFCE, LXQt and maybe even KDE: traditional desktop environments without the bells and whistles.


Mission: Saving Linux?

Recently, I was really into the BSDs and fiddled around with OpenBSD on some of my PCs. I still keep it on one laptop and FreeBSD on my main workstation. The point being that the BSDs are a lot more consistent and reliable, and if something works out-of-the-box, it will most likely stay that way for a long while. Changes are incremental and major frameworks are not being reworked for semi-valid reasons like "oh, this looks old". Also, there is the very clearly superior BSD documentation – cannot go wrong with that!

Nowadays, Linux seems to be riddled with Windows-like problems. There is a surprisingly strong emphasis on easy-to-use GUI tools and hype for certain technologies (*cough* *cough* Docker) that are far from production-ready. The main rationale seems to be "make it easy to set up". Everywhere I look there is loads of fluff. As Matthew Fuller put it in his discourse at over-yonder.net, we should be improving the back-end and extending it into usable front-ends, not the other way round. I agree that it's important to enable people to do great things with technology, but if the "how" is there yet the "why" is missing, we're not getting far. In addition, there is a lot of wheel re-inventing going on. People forget (or simply don't know) that many of the great frameworks and tools have already been implemented and can still be used effectively. Then, there is systemd (a case study in bad software engineering) and the habit of making things so Linux-centric that portability suffers greatly (see: Docker being in fact Linux + Docker on non-Linux platforms). So there, we need a hero. Otherwise, we're doomed to drown in self-indulgence.

A rather interesting idea would be to take the Linux kernel (for hardware compatibility reasons) and build a BSD-like operating system on top of it. Forget systemd, NetworkManager and other tools that try hard to do something that has already been done successfully. Leave the kernel + GNU coreutils in a steadily improving base and separate third-party software into package repositories. Also, one needs to start fresh – a clean slate to avoid the pitfalls of existing Linux distributions. In fact, there is already a project that tries to implement some of the BSD concepts in Linux and per my very preliminary impressions looks quite successful. Enter Void Linux!

Void seems to fill the gap (pun intended) between the Linux and BSD worlds aptly. There is a simple (limited?) init implementation called runit that takes care of starting the system as our favorite PID 1 and launching service daemons to get a certain degree of persistence. It's a typical Unix-like piece of software – it does its job and swiftly moves aside to let us do ours. Then there is XBPS, an all-in-one package manager and ports-builder framework. Think Arch's ABS on steroids. Instead of OpenSSL we get OpenBSD's LibreSSL (a huge plus in my book!) and the musl C library alongside (but not in the same installation, mind you) the traditional glibc. Speaking of C, all of these tools seem to be quite robust, and xbps is perhaps the snappiest package manager I have seen in a while. Although the developer team is small, it's doing a decent job at maintaining Void Linux. It's a clear sign that it's possible to run Linux without unnecessary bells and whistles, yet with all of the standard desktop features one might expect. My generic ASUS S301LA ultrabook handled Void Linux very well and all of its hardware was both supported and configured correctly (Intel WiFi Link 5100, SiS touchpad, AzureWave webcam, Intel HD 4400 graphics, WD Green SSD, UEFI). Below is a screenfetch rundown of the setup:

[Image: screenfetch rundown of the Void Linux setup]
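For the curious, a quick sketch of day-to-day Void administration via xbps and runit; sshd and qupzilla are used purely as example names here:

# Synchronize the repositories and update the whole system.
sudo xbps-install -Su

# Search the repositories for a package, then install it.
xbps-query -Rs qupzilla
sudo xbps-install -S qupzilla

# runit services live in /etc/sv; enabling one is just a symlink, no unit files involved.
sudo ln -s /etc/sv/sshd /var/service/
sudo sv status sshd
sudo sv restart sshd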

Now, it is true that finding a solution to 14 problems generates a 15th problem. There are certain aspects of Linux system design that I don’t completely agree with and adding orderly features on top of a Linux distribution mess doesn’t make the final product more organized. However, it’s a very successful step towards restoring some sanity to Linux Land that we lack so much nowadays.

In Software We Trust

Inspired by the works of Matthew D. Fuller from over-yonder.net, I decided to write a more philosophical piece of my own. While distro-hopping recently it occurred to me that whatever we do with our lives, we never do it alone and our well-being depends on other people. It requires us to trust them. Back in prehistoric times a Homo sapiens individual could probably get away with fishing, foraging and hunting for food, and finding shelter in caves. The modern world is entirely different, though. We need dentists to check our teeth, we need grocery stores to get food, we need real estate agents to find housing, etc. Dealing with hardware and software is similar. Either we build a machine ourselves or trust that some company X can do a good enough job for us. The same goes for software!

Alright, so we have a computer (or two, or ten, or…) and we want to make it useful by putting an operating system on its drive(s). MacOS X and MS Windows are out of the question for obvious reasons. That leaves us with either Linux or a BSD-based system. Assuming we pick Linux, we can install it from source or in binary form. This is where trust comes into play. We don’t need to trust major GNU/Linux distributions in terms of software packaging and features. We can roll with Gentoo, Linux From Scratch, CRUX or any other source-based distribution and decide on our own what does and doesn’t go into our software. It’s kind of like growing vegetables in a garden. Granted, we ourselves are responsible for any immediate issues like compile errors, file conflicts or missing features. It’s a learning process and one definitely profits from it. However, it’s also time-consuming and requires extremely good understanding of both system design and the feature sets of individual programs. No easy task that. Therefore, it’s far more convenient to use binary distributions like openSUSE, Ubuntu, Fedora, Debian, etc. It requires us to trust that the maintainers and developers are doing a good job at keeping software up-to-date, paying attention to security fixes and not letting bugs through. I myself don’t feel competent enough to be a source-level administrator of my own computer and be able to fix every minor or major issue in C++ code. I prefer to trust people who I’m sure would do it better than me, at least for now.
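To make the "decide what goes in" idea a bit more tangible, here is a tiny Gentoo-flavored sketch; the flag selection is purely illustrative, not a recommendation:

# /etc/portage/make.conf -- global USE flags decide which optional features get compiled in.
USE="X alsa -systemd -gnome -bluetooth"

# Rebuild the packages whose USE flags changed, reviewing the plan before committing to it.
emerge --ask --changed-use --deep @world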

Curing GUI Phobias

For some time now I've been a happy openSUSE camper, yet I often frequent the main Fedora IRC channel. Truth be told, it was tough to decide between those two distributions as both are extremely solid and bug-free. My third choice would fall to one of the Ubuntu spins (Xubuntu or Lubuntu most likely). Eventually, I realized I'm less and less inclined to put in that extra time to set up Arch Linux or Gentoo out of self-indulgence. I know I would be able to, but why should I? I'm familiar enough with Linux to roll any distribution. It seems my impressions go hand in hand with those of the author of Dedoimedo (http://www.dedoimedo.com/computers/why-not-arch-linux.html). I'm sure he too would be able to install Arch Linux on any PC of his choosing, though again, why should he?

Many modern Linux distributions are solid, finished products. You can install them on anything from a bootable USB or DVD and just get things done. That's precisely the point. It took years, buckets of sweat and units of blood to reach that state. However, now we're here! More so, we've beaten Windows, because it's still as much of a pain to install as it was a decade ago. Therefore, instead of repeating the rite of passage (read: installing Gentoo or Linux From Scratch on a laptop), we should move forward. There is this misplaced elitism in some of us in the Linux and Unix communities (mea maxima culpa!) that if you don't run a hardcore distribution on your shoddy PC, you're not nerdy enough. So far from truth this be! Nerdy stuff can be done on virtually any of the mainstream distributions. You can set up servers running Ubuntu Server (duh!). You can build datacentre-grade boxes with openSUSE Leap. File server on a Raspberry Pi? Yup. And so on. There is no need to spend hours hacking the Linux kernel to squeeze out that extra 0.000000001% performance gain, thinking that alone makes us Computer Wizards. The important thing to bear in mind about mainstream distributions is that someone poured hours of their free time into assembling the Linux kernel, utilities, a desktop environment, repositories, etc. All of this so that we wouldn't have to do it ourselves. Isn't that golden? Truly, we should build on that rather than shun it.

I understand this stands in stark contrast to my former preachings. I, like many others, escaped from Windows because it was overburdened with black-box utilities and hidden system services. It was a pain to fix without bricking it entirely. However, it's actually nice to have a pretty GUI and helper tools to simplify system maintenance. The main difference is that Linux-based operating systems are highly malleable, well-documented and can be adjusted to our liking. I realized that's probably the main reason I switched.

Skype for Linux Woes

First and foremost, I would like to thank Microsoft and contributors for considering our measly 1+% of desktop coverage and beginning work on the Skype for Linux desktop app. Frankly, the previous Skype 4.2 and Skype 4.3 releases felt like rotten meat scraps thrown at a dog (or penguin) and were as dangerous to Microsoft as to Linux users due to the lack of security updates. However, one should be positive, as there is light in the form of Skype for Linux Beta. Overall, the app is quite polished and doesn't have too many graphical glitches. Alas, there are some things to consider still:

  • Only .deb and .rpm packages are available. This strange practice is quite common and I still don't understand why a tarball with the binary + libraries is not offered alongside. There are many Linux distributions that use other packaging formats. Are vendors afraid that some hacker might reverse engineer the binaries? On top of this, Debian .deb archives can be quite different from Ubuntu .deb archives. Same with Fedora and openSUSE. (A rough workaround is sketched right after this list.)
  • The website states "Video call your contacts." What it doesn't mention is that this option is not available just yet. We learn that the hard way when calling our Windows-using parents. No talking heads for now, unfortunately.
  • Close to no debugging capabilities. This one pains me the most. The app is clearly denoted as a beta version, therefore one might assume that each user is in fact a beta tester. Any and all feedback would clearly be valuable to Microsoft. Yet, the app doesn't print any information to the terminal by default at all. Also, no "something went wrong, please send us this pre-formatted report" feature is available. I mean, really.
  • Bad software design is bad software design. I accidentally kept typing in the wrong account name, forgetting that the domain for users is @hotmail, not @microsoft. Obviously, the account didn't exist, but instead of telling me this, the app would show a glitched error screen with a "Sorry…an unexpected error occurred". This is so Windows 98, honestly. No traceback, no nothing. Also, the "Sorry" and a cryptic error reference ID don't really help anyone. What's up with this attitude?
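As for the packaging complaint above, a possible workaround on distributions that use neither format is to unpack the archives by hand; the file and tarball names below are illustrative and may differ:

# A .deb is just an ar archive wrapping control and data tarballs.
ar x skypeforlinux-64.deb
tar xf data.tar.xz

# An .rpm can be unpacked without rpm/dnf via rpm2cpio and cpio.
rpm2cpio skypeforlinux-64.rpm | cpio -idmv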

They’re on the right track, but plenty of polishing is needed. Truth be told, no piece of software shines from the get-go. Uncut diamonds need cutting. That’s just the way the world works.


Fedora 26 – RTL8188EU the Hard Way!

Following my previous entry preaching the greatness of Fedora 26, I decided to share some wisdom regarding USB wireless adapters (aka dongles) with the Realtek RTL8188EU chip. These and many other Realtek-based (usually RTL8188EU and RTL8192CU) adapters are affordable and extremely common. Companies like Hama, Digitus, TP-LINK and Belkin fit them into the cheapest 150N and 300N dongles, claiming that they're compatible with Linux. In principle, they are. In practice, the kernel moves so fast that these companies have problems keeping up with driver updates. As a result, poor-quality drivers remain in the staging kernel tree. Some Linux distributions like Debian and Ubuntu include them, but Fedora doesn't (for good reasons!), so Fedora users have to jump through quite a few hoops to get these adapters working…

The standard approach is to clone the git repository of the stand-alone RTL8188EU driver, compile it against our kernel + headers (provided by the Linux distribution of choice) and load it with modprobe if possible. Alas, since the stand-alone driver isn't really in sync with the kernel, it often requires manual patching and is in general quite flaky. An alternative, more Fedorian approach is to build a custom kernel with the driver included. The rundown is covered by the "Building a custom kernel" article on the Fedora Wiki. All configuration options are listed in the various kernel-*.config files (standard kernel .config files prepped for Fedora), where "*" denotes the processor architecture. Fortunately, we don't have to mess with the kernel .configs too much; we merely add the correct CONFIG_* lines to the "kernel-local" text file and fedpkg will merge them in prior to building the kernel. The lines I had to add for the RTL8188EU chip:

# 'm' means 'build as module', 'y' means 'build into the kernel'
CONFIG_R8188EU=m
CONFIG_88EU_AP_MODE=y

This will, however, differ depending on the Realtek chip in question; if a required option is missing, the build will fail and indicate which line in the kernel .config should have been enabled. Finally, if you do not intend to debug your product later on, make sure to build only the regular kernel (without the debug kernel), as the debug build takes quite some time.
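For completeness, the rough fedpkg workflow this boils down to; I'm assuming Fedora 26 and its f26 dist-git branch here, and the wiki article remains the authoritative reference:

# Clone the Fedora kernel package anonymously and switch to the release branch.
fedpkg clone -a kernel
cd kernel
git checkout f26

# Install the build dependencies.
sudo dnf builddep kernel.spec

# Append the extra CONFIG_* lines (see above) to kernel-local, then build locally.
echo "CONFIG_R8188EU=m" >> kernel-local
echo "CONFIG_88EU_AP_MODE=y" >> kernel-local
fedpkg local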


Fedora 26 Beta – the Killer Distro

Lately, I have been distro-hopping way too much, effectively lowering my Java bytecode output. However, that's over, at least for now. I jumped from Lubuntu to Fedora to openSUSE to Lubuntu again, and, long story short, I ended up with all of my computers (but not my family…) converted to Fedora 26 Beta. One might think it's too soon, since Fedora 25 is not end-of-life just yet. Too soon for the faint of heart maybe, not so for a geeky daredevil such as myself!

[Image: Fedora dog]

A cute Fedora doggie, courtesy of the Internet

I tested Fedora 26 Beta as an upgrade from Fedora 25 on a legacy Dell Latitude E5500 (32-bit Intel Pentium M 575 + Intel GMA graphics), an aging and equally legacy MacBook 2008-2009 (64-bit Intel Core 2 Duo + nVidia Geforce 9400M GT) and yet another fossil PC – the Fujitsu-Siemens Celsius R650 (Intel Xeon E-series + nVidia Geforce 295 GTX). Each installation used the Fedora 25 LXDE spin as its base to keep things similar. No issues whatsoever, even despite the fact that I heavily rely on the RPM Fusion repositories for nVidia and Broadcom drivers. This stands in stark contrast to any attempt to update Lubuntu or any Ubuntu spin I have tried thus far. My apologies beforehand, but my personal experience with Ubuntu and its children has been lacking on all fronts. Upgrading to a new release (even if it's an LTS!) is like bracing for a tsunami. It will hit, hard. It seems that the dnf system-upgrade plugin has been perfected and is ready for shipping.

Fresh installations of Fedora 26 Beta with LXQt were done on 2 PCs – an ASUS Vivobook S301LA (Intel Core i3 + Intel HD 4600 graphics) and an HP-Compaq Z200 workstation (Intel Xeon E-series + nVidia Quadro FX 1800). This time I used the Workstation-flavor netinstall disc image as the base. Again, only positive surprises here. All of the core Qt apps worked as intended. I was especially curious about Qupzilla, since it would often crash on other distributions (same with the webkit-gtk based Midori, in fact). I managed to write this entry/article without a single crash. I believe it is a testament not only to the various Fedora teams, but also to the Qupzilla, Qt and LXQt people who keep pushing forward with awe-inspiring zeal. Props, kudos and cheers!
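For reference, a minimal sketch of the dnf system-upgrade route mentioned above (release number adjusted to taste):

# Install the plugin, download all packages for the target release, then reboot into the upgrade.
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --refresh --releasever=26
sudo dnf system-upgrade reboot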

Fedora 26 Beta is a great sign that Linux can into space. The experience is bug-free, solid and developer-ready, so I can return to taxing the OpenJDK JVM with peace of mind. As a matter of fact, I'm beginning to like Qt as a GUI framework and I am considering contributing to the Fedora project more ardently. They continuously provide me with great tools, so I want to give something in return. We all take, we all should give.