GNOME3 Oversimplified?

[Image: GNOME logo]

Seeing as how GNOME3 and KDE (4? 5? Plasma? Neon? Ion?) are the leading desktop environments nowadays, I decided to give GNOME3 a try on my openSUSE Leap 42.3 workstation (my current main distribution on most hardware, including a Raspberry Pi 3). There are good things and bad things, and much of what I found agrees with my former assessment of GNOME3. Still, I assumed that since I'm more pro-desktop now, my opinion might change. Well, perhaps it did…

The Good:
In visual design alone GNOME3 wins a trophy. There is a lot of Mac OS X mimicry and I think that speaks well of the project. The guys (and gals!) from Apple know their stuff, so why not get inspired by them a little? The overall UX (user experience) is also positive. Thanks to the highly intuitive interface, finding important desktop features is a breeze. One just needs to browse a bit and not follow the imprinted "this option must be hidden somewhere" philosophy that other desktop environments teach us. I also greatly appreciate the attention to useful features, like the one-click offloading of graphically intensive applications to the discrete NVIDIA card on Optimus laptops. If our day-to-day tasks focus on office work and leisure, GNOME3 could potentially be the desktop environment of the future. It stands to reason, because it's an open-source project, molded and shaped into perfection by the user and developer communities. It constantly evolves, so there is no limit to its improvements.

The Bad:
Unfortunately, it seems that simplicity of design has its price. Troubleshooting GNOME3 is extremely painful, and many of its applications (including GNOME Shell and the GNOME Display Manager) throw the most uninformative error messages.

[Image: the "Oh no! Something has gone wrong." error screen. Authored by the Interwebs]

Case in point: the above error screen. "Oh no! Something has gone wrong." is a phrase typically used in commercial applications to shield end users from the headaches of reading crash logs. By willfully choosing Linux we demonstrate that we're no mere end users, so treating us as such is quite rude. To dwell on this a bit more, the above error screen appears even when the GNOME Display Manager login panel crashes. How is one supposed to log out without being logged in to begin with? To make matters worse, since many Linux distributions have the display manager set to restart on failure, this screen will keep re-appearing until proper troubleshooting is done in one of the TTY consoles (Ctrl + Alt + F1–F9). This very much reeks of Windows and Mac OS X problems, where the user interface basically took over the OS. All we can do is reboot and hope for the best. Other applications show similar "An error occurred" messages without any means of actual troubleshooting.

My take-home from this experience is "Thanks, but no thanks." If I want to get some work done, I would rather rely on LXDE, Xfce, LXQt and maybe even KDE – traditional desktop environments without the bells and whistles.


AI, Lisp and Why Languages Die

In my exploration of things arcane and mythical, I stumbled upon a forgotten book by Peter Norvig – Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. My IT colleague was delighted to see it and highly recommended the study of Common Lisp as a sort of meta-language. As programming languages interest me greatly and my understanding of functional programming is rather lacking (I loathe Python lambdas), I decided to give it a go. What I discovered was a language with an unusual syntax (defining all structures as lists of objects), yet with the potential for writing useful tools efficiently. I then learned that various Lisp dialects were heavily used during the computer boom of the 1960s-80s, when the US government would pump billions of dollars into military and NASA projects (machine learning, AI algorithms, behavioral simulations, etc.). The trend died down in the early 1990s, and with it Lisps either gave rise to modern languages like Clojure (also from the Lisp family) or simply disappeared. From the old generation, Scheme and Common Lisp are still in use, though less and less by the day.

Artificial Intelligence has always been an extremely vital (and interesting) field of computer science. In fact, now more than ever, as the rapid growth of the Internet forces us to develop smart tools to sift through the wild abundance of information in real time. No wonder projects like Alexa (Amazon) or Cortana (Microsoft) are on the rise. In my opinion there are two crucial aspects of AI that garner much of the interest – human language interfaces (how do we make programs understand us humans in all, or most, of our natural languages?) and intelligent filtering algorithms (how do we make programs increasingly aware of our human needs and able to remember them?). The second aspect involves machine learning, which delves into data extrapolation, approximation and the progressive nature of filtering algorithms. It all boils down to making computers more human and having them do some (most?) of our work for us. There are many quite realistic pitfalls, of course, like algorithms deciding that humans are the limiting factor in making our (human) lives easier. When we consider emotions as a complete contradiction to reason, this makes perfect sense. Unpredictable humans are the weakest link in an approach that relies on predictable values.

Going back to Lisp and its dialects: after its inception in the late 1950s it quickly became the language of choice for writing mathematical algorithms, especially in the field of AI. It was clear that the Lisp S-expression syntax makes code easy to read and that the language itself has a strong propensity for evolution. From more modern times (1990-2000) there are plenty of success stories on how Lisp saved the day. Finally, Lisps pioneered crucial concepts like recursion, concurrency and interactive programming (the famous REPL, or read-eval-print loop, nowadays a common feature of Haskell, Python and other languages). Taking all of this into consideration, it is quite difficult to understand why Common Lisp (the standardized Lisp effort) stopped being the hot stuff. Some of the sources I found mentioned that Lisps were pushed aside for political reasons. Budget cuts made a lot of NASA projects struggle for survival or meet a swift demise. Also, new cool languages (*cough* *cough* Perl) came to be, and Lisps were supposedly too arcane to be picked up and used easily. However, to me Common Lisp seems far less verbose (obfuscated?) than, for example, Java, and far more orderly than said Perl. Its performance is also supposedly on par with Java, which might interest people who would like to write useful tools quickly (as quickly as in Python, for instance), yet not get into the memory-management details of vanilla C or C++ for better performance.
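
To make the S-expression point concrete, here is a tiny Common Lisp sketch of my own (not taken from Norvig's book, just a toy example): code and data share the same list notation, recursion replaces explicit loops, and every definition can be poked at interactively in the REPL.

;; A recursive factorial – the whole definition is itself just a nested list
(defun factorial (n)
  "Return n! using plain recursion."
  (if (<= n 1)
      1
      (* n (factorial (- n 1)))))

;; An interactive session might then look like this:
;; CL-USER> (factorial 10)
;; 3628800
;; CL-USER> (mapcar #'factorial '(1 2 3 4 5))
;; (1 2 6 24 120)

Nothing here is exotic, yet this uniform syntax is precisely what makes macros and interactive development feel so natural in Lisp.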

The truth is that no language is really dead until it becomes naturally obsolete – even if it suddenly loses enterprise backing. While Lisps have some viable descendants, one would be hard-pressed to find a language that directly supersedes them. There are of course multiple functional languages that share Lisps' strengths, yet they typically sport a far less approachable syntax, devoid of easily readable S-expressions. Therefore, I believe Scheme, Common Lisp and other modern Lisps deserve not only attention, but also proper appreciation.

Fedora 26 – RTL8188EU the Hard Way!

Following my former entry preaching the greatness of Fedora 26, I decided to share some wisdom regarding USB wireless adapters (aka dongles) with the Realtek RTL8188EU chip. These and many other Realtek-based (usually RTL8188EU and RTL8192CU) adapters are affordable and extremely common. Companies like Hama, Digitus, TP-LINK and Belkin fit them into the cheapest 150N and 300N dongles, claiming that they're compatible with Linux. In principle, they are. In practice, the kernel moves so fast that these companies have problems keeping up with driver updates. As a result, poor-quality drivers linger in the staging kernel tree. Some Linux distributions like Debian and Ubuntu include them, but Fedora doesn't (for good reasons!), so Fedora users have to jump through quite a few hoops to get these adapters working…

The standard approach is to clone the git repository of the stand-alone RTL8188EU driver, compile it against our kernel + headers (provided by the Linux distribution of choice) and load it with modprobe if possible. Alas, since the stand-alone driver isn't really in sync with the kernel, it often requires manual patching and is in general quite flaky. An alternative, more Fedora-like approach is to build a custom kernel with the driver included. The rundown is covered by the Building a custom kernel article from the Fedora Wiki. All configuration options are listed in the various kernel-*.config files (standard kernel .config files prepped for Fedora), where "*" denotes the processor architecture. Fortunately, we don't have to mess with the kernel .configs too much – we merely add the correct CONFIG_* lines to the "kernel-local" text file and fedpkg will merge them in prior to building the kernel. The lines I had to add for the RTL8188EU chip:

# 'm' means 'build as module', 'y' means 'build into the kernel'
CONFIG_R8188EU=m
CONFIG_88EU_AP_MODE=y

This, however, will differ depending on the Realtek chip in question (see the example below), and the build will fail with an indication of which line in the kernel .config should have been enabled but wasn't. Finally, if you do not intend to debug your build later on, make sure to compile only the regular kernel (without the debug kernel), as building both takes quite some time.
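
As an example of how the options differ per chip (this part is my own assumption – always check the Kconfig files shipped with your kernel version): dongles based on the RTL8192CU chip are handled by the mainline rtlwifi driver rather than a staging one, so the relevant lines in "kernel-local" would look more like this:

# RTL8192CU-based USB dongles (mainline rtlwifi driver, not staging)
CONFIG_RTL8192CU=m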


The Kernel, the Kernel’s to Blame!

[Image: desktop Linux]

When getting my Raspberry Pi 3 set up recently, I experienced quite a few woes concerning out-of-the-box detection of SD cards. One might expect that an SD card slot is nothing more than a USB-like storage interface. In theory yes; in practice quite a few distributions have problems accepting that fact. Gentoo failed me numerous times, though partially because I decided to go for an extremely slim kernel config. Manjaro also surprised me in that respect – the SD card was detected, but never exposed as a mountable drive. Fedora and Lubuntu had no problems. Each distribution uses a different set of graphical utilities and desktop environments, so users often blame the front-end setup. That's wrong, though, because the inability of a system to detect a piece of hardware has everything to do with the kernel configuration. Indeed, the kernel's to blame.

I personally prefer the Arch approach – almost everything as modules. Although this could add some overhead due to the way modules are loaded, in reality it makes Arch-based systems very light on resources. After all, what isn't needed never gets loaded at all. The drawback is that the distribution or the user has to make sure the initramfs is complete enough to allow a successful boot. The alternative is to build as many drivers as necessary into the kernel, though that makes the kernel bulky and isn't always the optimal solution. There is a lot of middle ground in between, and that is unfortunately where weird issues like the one I experienced come from.
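
As a rough illustration (the exact symbols differ between kernel versions and hardware, so treat this as an assumption rather than a recipe), the SD/MMC support that decides whether a card reader shows up at all boils down to a handful of options that can be built either as modules or into the kernel:

# Core SD/MMC stack and the block layer on top of it
CONFIG_MMC=m
CONFIG_MMC_BLOCK=m
# Generic SDHCI host controller support plus the PCI glue used by
# many built-in laptop card readers
CONFIG_MMC_SDHCI=m
CONFIG_MMC_SDHCI_PCI=m

Leave one of these out and the card simply never appears as a block device, no matter which desktop environment sits on top.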

I think there should seriously be some consensus between distribution teams regarding what goes into a kernel and what doesn't. Weird accidents can be avoided, and it's down to the individual teams to iron that out. Of course, one can go hunting for drivers on GitHub and try out five versions of a Realtek 8188eu driver, but why should the user be required to do so?

ARMing For the Future

[Image: single-board computers. Image taken from edn.com]

For some time now I've been itching to get my hands on a Raspberry Pi single-board computer. Unfortunately, retailers like Saturn and MediaMarkt would shrug my inquiries off with a "we're expecting them soon". To my dismay, the "soon" seemed like it would never come. Surprising, since the computer geek culture is constantly expanding and the demand is definitely there. Finally, after months of waiting, the Pi arrived in Austria. I quickly armed myself (pun intended) with an RPi 3 model B, a Pi-compatible power supply (5.1 V, 2.5 A) and a matte black case. The rest I already had, since I collect various adapters, SD cards, etc. as a hobby. Always handy, it seems. Without further ado, though!

Get your geek on!

Contrary to my expectations, getting a Linux distribution to boot on the Pi was a bit of a hassle. Raspberry Pis don't have a typical BIOS like laptops or desktop PCs. The firmware is extremely minimal – enough to control the on-board LEDs and hardware monitors, and to swiftly proceed to booting from a flash drive (SD card, USB stick) or a hard drive. Therefore, one doesn't actually install a Linux distribution on the Pi. Rather, one *dumps* a ready-made image onto a disk and plugs that disk into the Pi to get it working. There is a fine selection of dedicated distributions out there already – Raspbian, FedBerry, etc. Major projects like FreeBSD, OpenBSD, openSUSE, Fedora and Debian provide ARM-compatible images as well. It's just a matter of downloading an image, putting it onto an SD card (8–16 GB in size, preferably) and we're ready to go.

Pushing the limits

Not everything is as smooth as it may sound, however. Some distributions, like FedBerry, suggest desktop environments and utilities that are clearly too much for the Pi to handle. Even the top-of-the-line Pi 3 model B is not strong enough to run Firefox smoothly. Part of the problem is the GUI-heavy trend in software design; the other part is the still-evolving design of the Pi. At the moment we're at 1 GB of RAM, which is quite modest by today's standards. With increasing hardware needs, more care should also be taken with the board itself. Both the CPU and the GPU will quickly overheat without at least a basic heat sink. I like ideas such as this one, which try to provide the extra add-ons needed to turn a Raspberry Pi into a full-blown computer. Personally, I use minimalist tools such as Vim/Emacs, Openbox and Dillo, so the limitations aren't really there for me.

IoT for the future!

Truth be told, ARM-powered devices are everywhere. Though it’s a resurrected platform, people have not forgotten about the merits of RISC. Raspberry Pi is not the only Pi, nor is it the only single-board computer worth attention. With such an overabundance of almost-open-source hardware, one can do anything. Pi Zero computing cluster? Check. Watering device sensitive to solar light intensity? Check. Minecraft server? Check. NAS for the whole family? Check. It’s there, it’s cheap, it just needs a bit of geek – namely you!

Software Design Be Like…

I recently stumbled upon this very accurate article from the always-fantastic Dedoimedo. I don't agree with the notion that replacing legacy software with new software is done only because old software is considered bad. Oftentimes we glorify software that was never written properly and that over the years accumulated a load of even uglier crust code as a means of patching core functionality. At some point a rewrite is a given. However, I do fully agree with the observation that a lot of software nowadays is written with an unhealthy focus on developer experience rather than user experience. Also, continuity in design should be treated as sacred.

One of the things that ticks me off (much like it did the Dedoimedo author) is when developers emphasize how easy it is for prospective developers to continue working on their software. Not how useful it is to the average Joe, but how time-efficient it is to add code. That's nice, but it should not be the selling point. Among the many characteristics a piece of software may have, I appreciate usefulness the most. Even in video games, mind you! Anything that makes the user spend less time on repetitive tasks is worth its weight in lines of code (or something of that sort). Way too often, features that developers consider useful are of little use to regular users. Also, too many features are a bad thing. Build a core program with the essential features only and mix in a scripting language so that users can add the features they require on a per-user basis. See: Vim, Emacs, Chimera, PyMOL, etc. Success guaranteed!

Another matter is software continuity. Backward compatibility should be considered mandatory. It should be the number one priority of any software project, as it's the very reason people keep coming back to our software. FreeBSD is strong on backward compatibility, and that's why it's such a rock-solid operating system. New frameworks are good. New frameworks that break or remove important functionality are bad. The user is king and should always be able to perform their work without obstructions or having to re-learn anything. Forcing users to re-learn things every now and then is a *great* way of thinning out the user base – one of the top positions on my never-do list.

Finally, an important aspect of software design is good preparation. Do we want our software to be extensible? Do we want it to run fast and be multi-threaded? Certain features should be considered exhaustively from the very beginning of the project. Otherwise, we end up adding lots of unsafe glue code or hackish solutions that will break every time the core engine is updated. Also, never underestimate documentation. It’s not only for us, but also for the team leader and all of the future developers who *will* eventually work on our project. Properly describing I/O makes for an easier job for everyone!

PC Parts Recycling Nightmares

To warn everyone from the get-go, this will be a rant. Therefore, hold onto your socks and pants, and watch the story unfold. And the gravy thicken…

Recycling computer parts is an extremely important aspect of keeping computers alive. It often lets you turn 5 broken computers into at least 1-2 that are still fully operational. Not to mention rebuilding, tweaking, expanding, etc. Theoretically, you could have a single desktop computer and just keep replacing its “organs” as they die. All is good even if a hard drive fails. We swap hard drive(s), restore the operating system and our data, and we’re good to go in minutes/hours. Since my very first computer was a self-assembled desktop PC, way before laptops were a “thing”, I got used to this workflow. Laptops required me to adjust, because each company would use different connectors and build schematics. Also, there were model lines like the early Dell Latitudes that had quirks one needed to know before opening up the case. That’s laptops, though. A complicated world of its own. I agree that no one should expect a mobile device to be tinkering-friendly. It’s supposed to be light, energy-efficient and just strong enough to get the job done. Fully understandable priorities! However, I would never in my wildest dreams (or nightmares?) expect these priorities to leak into and envenom the world of tower-sized desktop computers.

Imagine this scenario – you get a midi-tower workstation from an acclaimed manufacturer like Dell or HP. It has a powerful 4-core Intel Xeon processor with hyper-threading. A marvelous beast! You can use it as a build farm or an efficient virtual machine host. However, years go by and you want to expand it a tad – swap in extra drives, perhaps a RAID card. Or maybe put in a decent graphics card to do enterprise-grade 3D modeling in AutoCAD. You open the case, look inside a bit and you instantly begin to cry. The workstation has a shamefully weak 320W power supply unit (PSU). You then wonder how this PSU was ever able to support both the power-hungry Intel Xeon CPU and the graphics card. You run web-based PSU calculators and all of them tell you the same thing – you'd fry your computer instantly with such a PSU, and at least a 450-500W unit is needed. Unlike many others, you were lucky to last that long. That's not the end of the story, though! Your workstation's current PSU cannot be replaced with a more powerful standard ATX PSU, because HP decided to use fully proprietary power connectors. A replacement PSU cannot be bought anymore either, because this model line was dropped years ago. Now you're stuck and need to buy a new server motherboard that fits your Intel Xeon, a new PSU and a new case, because the HP case was designed around the HP PSU. You drop to the floor and wallow at the unfair world… Many more stories of this kind can be found on the Internet, here and here.

I fully understand that manufacturers need to make a living. However, using low-grade proprietary computer parts in systems that are, by universal standards, considered upgradable is not only damaging to the market through artificial constraints, but also a sign of bad design practices – not to mention the load of useless electronic junk such an attitude produces. I believe manufacturers should care more about common standards, as in the end that benefits everyone.