We Are Developers 2018 – Day 2

Day 2 of the We Are Developers World Congress is a wrap (at least for me, since I don’t have enough stamina for both the after-party and another full day of talks). Compared to day 1, I made some progress on the food and water front. The local grocery store, Hofer, proved extremely useful. Armed with bacon buns and non-sparkling water, I was ready for more developer-flavored bliss!

Alas, the first presentation was slightly disappointing. Instead of a talk about accelerated learning, I got a lecture on how learning works, from which I learned nothing. Thankfully, the second talk fully compensated for the shortcomings of the first one. Enter Brenda Romero – one of the legends of game development (think Wizardry 1-8). This talk was doubly important for me, because I would really love to join the game development “circus”, but I’m not yet sure whether I have the guts (or a “more-than-mellow” liver). I’m still not sure, but the take-home message was crystal clear – just do it! Brenda had a lot of important things to say regarding not giving up and not taking comments from others too personally. The audience can be brutal and vicious, and the gaming industry itself is tough. At least I know what I’m up against!

Brenda Romero (centre) talking about her childhood toy assembling endeavors

Numero tertio was a continuation of game development goodness. I originally intended to attend the AI talk by Lassi Kurkijarvi, but then John Romero happened. I don’t think I need to say more to anyone who has at least heard of Quake or Doom. It was not a replay of last year’s talk, mind you! Rather, we got the full story of Doom’s development, which to me was both interesting and inspiring. John Romero is an amazing game developer, and the pace at which he, John Carmack and the other programmers at id Software produced Doom was simply dazzling. While modern games are of course a lot more complex, developers in the early 1990s didn’t have the tools we now take for granted, such as SDKs or version control.

John Romero (centre) on developing and shipping Doom

 

Later on, things spiraled a bit! I lost track of the talks somewhat, since there was some major reshuffling in the schedule. The presentation from Tessa Mero on ChatOps at Cisco was quite interesting. I do use Slack and various IRC clients, but there is definitely a growing need for ChatOps and its integration with the software development cycle. I wasn’t fully aware of that, to be completely honest. Next, Tereza Iofciu from mytaxi gave us a tour of machine learning and showed us the importance of computer algorithms in predictive cab distribution planning. It wasn’t about self-driving cars or reducing manpower, but rather about reducing the load on drivers and improving clients’ satisfaction. Computer-accelerated supply-and-demand matching, so to speak.
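
To make “predictive cab distribution” a bit more concrete, here is a toy sketch of the kind of demand forecasting involved – entirely my own guess at the workflow, not anything mytaxi showed: learn pickups per city zone and hour from historical data, then predict where drivers will be needed before the rush hits.

    # Toy demand-forecasting sketch (hypothetical data and features).
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    rides = pd.DataFrame({            # made-up historical pickup counts
        "zone":    [1, 1, 2, 2, 1, 2],
        "weekday": [0, 1, 0, 1, 2, 2],
        "hour":    [8, 8, 18, 18, 8, 18],
        "pickups": [42, 37, 55, 61, 40, 58],
    })

    X, y = rides[["zone", "weekday", "hour"]], rides["pickups"]
    model = GradientBoostingRegressor().fit(X, y)

    # Estimate Monday-morning demand in zone 1, so drivers can be routed ahead of time.
    print(model.predict(pd.DataFrame([{"zone": 1, "weekday": 0, "hour": 8}])))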

In the afternoon I took an accidental detour to a book-signing event hosted by John and Brenda Romero. Not only did I get a chance to talk to them personally (*heavy breathing!*), but I also got a copy of Masters of Doom signed (*more heavy breathing!*). John said that if I read it, I’ll definitely get into game development professionally. I’m completely embracing the idea as I type this. One of the last talks I attended was given by Yan Cui on how he used an implementation of the Akka actor model (together with Netty) to solve latency issues in a mobile multiplayer game (an MMO, specifically). Obviously, it was a success, and his convincing talk makes me want to try it out. It’s about concurrency, but without the shared-state locking overhead of traditional multiprocessing and/or multithreading. Although I don’t code in C# just yet, there is a Python take on the Akka actor model, which was recently recommended to me.
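
Since I haven’t tried it yet, here is only a minimal sketch of the actor model in Python, assuming the recommended library is Pykka (an Akka-inspired project – that name is my assumption, the recommendation was verbal). Each actor owns its state and processes one message at a time, which is exactly what removes the usual locking headaches:

    # Minimal actor-model sketch using Pykka (assumed library choice).
    import pykka

    class PlayerSession(pykka.ThreadingActor):
        def __init__(self, player_id):
            super().__init__()
            self.player_id = player_id
            self.score = 0

        def on_receive(self, message):
            # Messages are plain dicts here; only this actor ever touches self.score.
            if message.get("command") == "add_points":
                self.score += message["points"]
                return self.score

    if __name__ == "__main__":
        session = PlayerSession.start(player_id="p42")                 # spawn the actor
        print(session.ask({"command": "add_points", "points": 10}))   # -> 10
        print(session.ask({"command": "add_points", "points": 5}))    # -> 15
        session.stop()

Whether this scales to MMO-grade latency is precisely what I want to find out.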

Yan Cui (centre) explaining message relays in the actor model of concurrent programming

In summary, it was great to meet like-minded folks and actually talk to fellow game developers who like challenges and don’t shy away from trying out new approaches to software design. Perhaps that’s what I’m looking for – challenges? Stay tuned for more exciting impressions from day 3 of the Congress!

 


We Are Developers 2018 – Day 1

To begin with, I attended the We Are Developers World Congress last year (2017) and was quite amazed by it. I got to see John Romero, the legend of game development and author of titles such as Wolfenstein 3D, Doom and Quake. Actually, the congress inspired me so much that I decided to finally part ways with my scientific career and pursue a life as a software developer and/or system administrator (a bit of both, in reality). To the point, though. The We Are Developers World Congress is a fairly young venture, and even the Internet knows very little about it outside of the main website and the odd blog post. It hasn’t become a tradition just yet, so media coverage is patchy at best. Considering how quickly it is growing (2,000 attendees last year, 8,000 registered attendees this year!), I decided to cover it myself.

The We Are Developers logos from the 2017 edition and the 2018 edition

The Congress started with a treat – a fireside chat between Monty Munford and Stephen Gary Wozniak (Steve Wozniak, The Great Woz). It was intended as a casual interview, but The Woz proved to be exactly the person depicted in Jobs, the 2013 movie starring Ashton Kutcher. Steve Wozniak is extremely chatty and simply adores talking about himself, so it was only natural for him to dominate the discussion – slightly to the detriment of the “chat” aspect of the event. I enjoyed it nevertheless. Many important points were raised – the economy of social media (Should we not get a fair share of the profit Facebook and Google make off our personal data?), the “I” in “Artificial Intelligence” (It’s not really “intelligence” if it’s programmed!), Elon Musk (Tesla fails to deliver, year after year…), etc. It was somewhat surprising to see that Steve Wozniak hasn’t really changed since the crazy ventures of his teen years with Steven Paul Jobs. Quite the amazing spirit!

Monty Munford (left) having a fireside chat with the Great Woz (right)

The fireside chat was followed by an interesting talk from Joseph Sirosh of Microsoft. He talked about the various machine learning tools offered as part of Microsoft’s Azure hosting platform. To be honest, I am extremely skeptical of Microsoft’s ideas, especially when they concern open-source software. Microsoft has a disappointing track record of using the embrace, extend, extinguish tactic against promising software projects, and a sinusoidal quality trend in its flagship product – Windows. Accordingly, I took the “with a bucket of salt” approach, and the mood among other attendees was similarly negative. Unnecessarily, though! Azure’s machine learning tools seemed very promising in the end, and I’m considering using them for some of my projects.

After the lunch break I joined the Headless CMS track and, after an initial slightly disappointing talk, I was enthusiastic about Jeremiah Lee and his JSON API idea. REST APIs are a big part of the Web nowadays, ever increasingly so, and we do need a slightly more elaborate and efficient data format standard built on top of the venerable JSON. At that point I realized that, unlike in the Web development track last year, programming language animosities were absent this time. The implementation is irrelevant to the standard if we all agree on its importance! The last talk in the Headless CMS track I attended was given by Kaz Sato from Google. The topic was machine learning again, but this time leveraging Google’s AutoML platform and TensorFlow. Machine learning is actually one of the main themes of this year’s edition of the We Are Developers World Congress. It’s very clear that we need it!
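
For the curious, this is roughly the resource shape that the JSON API specification (jsonapi.org) standardizes on top of plain JSON – the articles/people resources and their fields below are made up for illustration:

    import json

    # Roughly the document shape standardized by the JSON API spec (jsonapi.org).
    # The "articles"/"people" resources and their fields are invented examples.
    response = {
        "data": [{
            "type": "articles",
            "id": "1",
            "attributes": {"title": "Headless CMS in practice"},
            "relationships": {
                "author": {"data": {"type": "people", "id": "9"}},
            },
        }],
        # Related resources can be side-loaded to avoid extra round-trips.
        "included": [{
            "type": "people",
            "id": "9",
            "attributes": {"name": "Jane Doe"},
        }],
    }

    print(json.dumps(response, indent=2))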

Joseph Sirosh (centre), showcasing MS Azure AI services and APIs

To sum up, based on the various talks I attended, I began to form a vision regarding the future of computers. We started with humongous, clunky mainframes and progressed into the personal computer era with contributions from Steve Wozniak, Steve Jobs and many others. However, the old dichotomy is returning. Computers are turning into mobile “enabling” devices, which aid us in our daily tasks and ease our interaction with the world (and each other). Heaps of data at our fingertips! At the same time, we need a back-end – an infrastructure of powerful servers to store data and organize it in an accessible way. In between sits, of course, a robust network which carries the data from the back-end to us, the clients/users.

 

How I migrate(d) to openSUSE and Why

I’m a die-hard FreeBSD fan. I simply love it! It rubs me the right (UNIX) way. Through trials and tribulations I managed to make it do things it was possibly not designed to do. ZFS? Amazeballs. Cool factor over 9000! However, all of that came at a tremendous cost in energy and time. I’ve reached a point where I no longer want to spend time manually configuring everything and inventing ways of automating things which should work out-of-the-box. Furthermore, most FreeBSD tools are not compatible with other operating systems, so learning FreeBSD (or any other BSD variant, for that matter) locks me into FreeBSD. Despite many incompatibilities, this is not the case with Linux.

On a side note, the ZFS on Linux project was a great idea. The Linux ecosystem badly needed a mature storage-oriented filesystem such as ZFS, and BTRFS, to me at least, “is not there yet”. Other tools, such as containers, were reinvented in so many different ways that Linux has outpaced FreeBSD many times over. Importantly, Linux tools have been tested in many more real-life scenarios and are in general more streamlined. For automation, this is crucial. Again, I don’t want to tinker with virtually every tool I intend to use, nor do I want to read pages and pages of technical documents to get a simple container running. What’s more, I shouldn’t be forced to, since that’s terribly unproductive. Finally, I like to run the same operating system on most of my computers (be it i386, x86_64 or ARM), and FreeBSD support for many desktop and laptop subsystems is spotty at best…

Enter openSUSE!

Cute lizard stock photo. Courtesy of the Interweb.

Seemingly, openSUSE addresses all of the above issues. True, ZFS support is not reliable and there are no plans to change that – the problem, as always, is licensing. BTRFS is still buggy enough to throw a surprise blow where it hurts the most. Personally, I don’t run RAID 5/6 setups, which is BTRFS’ biggest weakness right now – that, and the occasional “oh shit!” moments. Regardless, I think I’ll need to get used to it. Lots of backups, coffee and prayer – the bread and butter of a sysadmin. On the upside, this is virtually the only concern I have regarding openSUSE.
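
On the “lots of backups” front, the first line of defense will probably be read-only Btrfs snapshots before anything risky. A minimal sketch of what I have in mind, assuming /home is a Btrfs subvolume and /home/.snapshots already exists (real backups still have to leave the disk, e.g. via btrfs send/receive):

    import subprocess
    from datetime import datetime

    def snapshot_home():
        """Create a read-only snapshot of /home as a cheap pre-upgrade safety net."""
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        target = f"/home/.snapshots/home-{stamp}"  # assumed snapshot location
        # 'btrfs subvolume snapshot -r <source> <destination>' makes the copy read-only.
        subprocess.run(
            ["btrfs", "subvolume", "snapshot", "-r", "/home", target],
            check=True,
        )
        return target

    if __name__ == "__main__":
        print("Snapshot created at", snapshot_home())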

The clear positives:

  • Centralized system management via YaST2 (printers, bootloader, kernel parameters, virtual machines, databases, network servers, etc.). A command-line interface is also available for headless appliances. This is absolutely indispensable.
  • Access to extra software packages via semi-official repositories. Every tool or framework I needed was easily found. This is a much more scalable approach than the Debian/Ubuntu way of downloading ready-made .deb packages from vendors and having to watch out for updates. Big plus (see the short automation sketch after this list).
  • Impressive versatility. openSUSE is theoretically a desktop-oriented platform, though thanks to the many frameworks it offers, it works equally well on servers. In addition, there is the developer-centric rolling-release flavor, Tumbleweed, which tries to follow upstream projects closely – very important when relying on core libraries like pandas or numpy in Python.
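
The automation sketch promised above: a hedged example of how I picture wiring a semi-official repository and a package into a provisioning script using zypper’s non-interactive mode. The repository URL, alias and package name are placeholders, not a real OBS project.

    import subprocess

    def zypper(*args):
        """Run zypper non-interactively so the call is safe inside scripts."""
        subprocess.run(["zypper", "--non-interactive", *args], check=True)

    # Placeholder repository URL, alias and package name.
    zypper("addrepo",
           "https://example.org/repositories/some-project/openSUSE_Leap_42.3/",
           "some-project")
    zypper("--gpg-auto-import-keys", "refresh")
    zypper("install", "some-package")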

So far, I’ve switched my main desktop machines over to openSUSE, but I’m also testing its capabilities as a KVM host and database server. Wish me luck!

The Ubuntu Conundrum

Ubuntu is perhaps the most popular Linux-based operating system; however, for that very reason it has as many proponents as detractors. I myself use Ubuntu (Xubuntu 16.04 LTS, to be exact) at work, both as a development platform and to host services in libvirt/KVM virtual machines (Ubuntu Server 16.04 LTS there). It performs all right and so far hasn’t let us down, though we haven’t used it through more than two releases, so we’re unable to gauge its reliability properly. On more personal grounds, I can say it works splendidly on my early-2011 MacBook Pro 15″ with faulty AMD graphics, and has since the very beginning (out-of-the-box, as one might say). Individual package upgrades don’t bring about all of the regressions people complain about so fervently. However, I can understand where the hate is coming from, and I admit it is partially justified.

Product and popularity
For whatever reason, human psychology dictates that we equate quality with popularity. If something is extremely popular, it simply must be good, right? Wrong. Completely. A product is popular because someone with enough resources made it visible to as many consumers as possible – the product was made popular. Quality is a useful, but clearly secondary measure. A good anecdote is the long-gone rivalry between VHS and Betamax. We all remember VHS, though most of us do not remember Betamax, which was technically superior. However, it lost the popularity race and will forever be remembered as the second best, or not remembered at all. Now, this is not to say that Ubuntu is in any way inferior…

Ubuntu, the (non)universal operating system
The main issue with Ubuntu is that it succeeded as a more open alternative to Windows and macOS, but did not solve the underlying problem – computer literacy. Of course, not every computer user has to be a geek and hack the kernel. However, when I see Ubuntu users address their PC-related issues with the same shamanism and hocus pocus as on Windows, my soul twists in convulsions. We did not flee from closed-source operating systems only to change the names of our favorite tools and the look of our graphical user interfaces – though observing current trends, I might be terribly wrong. The other problem is that Ubuntu’s popularity has become self-perpetuating: it’s popular because it’s popular. Many tutorials online and in magazines assume that anyone who uses Linux surely runs Ubuntu on all of their computers. This is extremely harmful to the Linux ecosystem as a whole, because neither Debian nor Ubuntu represents standard Linux. Both of those systems introduce a number of configuration changes to applications which are not defined in upstream documentation and are absent in other distributions (so-called Debianisms). Therefore, Ubuntu being a universal operating system is more of a publicity gimmick than a fact – especially considering that on servers, SLES (SUSE Linux Enterprise Server), CentOS and Red Hat Enterprise Linux clearly dominate.

The solution?
I would say it’s high time we started showing newcomers that there is an amazing world of Linux beyond Ubuntu. To that end, I have a couple of suggestions for specific needs and distributions covering those needs. Related questions come up often in the Linux Facebook group and around the Internet, but get answered superficially via click-bait articles listing the top 10 distributions of 2017/18. Not exactly useful. Anyhow, the list:

  • Software development:
    – Fedora (up-to-date packages and developer-centric tools like COPR)
    – Arch Linux (up-to-date with a wide range of packages via AUR and vanilla package configuration for simplicity)
    – openSUSE Tumbleweed (up-to-date with a rolling, snapshot-based release cycle, but sharing the Leap/SLES high-quality management tools like YaST2)
  • Servers:
    – openSUSE Leap (3-year long support life cycle, high-quality management tools like YaST2 and straightforward server + database + VM configuration)
    – CentOS (binary compatible with Red Hat Enterprise Linux)
    – FreeBSD (ZFS hard drive pool management + snapshots, reliable service/database separation via jails, rock solid base system)
  • Easy-to-use:
    – Manjaro Linux (based on Arch Linux, with lots of straightforward graphical configuration tools, multiple installable kernels, etc.)
    – Fedora (not only for developers!)
    – openSUSE Leap (for similar reasons as above + a streamlined, user-friendly installer)
  • For learning Linux:
    – Gentoo (painful at first, but extremely flexible with discrete software feature selection at compile-time via USE flags)
    – Arch Linux (Keep It Simple Stupid; no hand-holding, but with high-quality documentation to make the learning curve less steep)
    – CRUX (similar to Gentoo, but without the useful scripts; basically, vanilla Linux with a very simple package manager)
  • For learning BSDs:
    – FreeBSD (as mentioned above)
    – OpenBSD (strong emphasis on code-correctness, system engineering and network management)
    – DragonFly BSD (pioneering data storage and multi-processor systems)

Linux and the BSDs

Throughout my many months of using various open-source and proprietary operating systems, I have made certain observations that might be useful to some. I started with Linux, though at some point I migrated to the BSDs for personal and slightly more pragmatic reasons. I quickly became a lot more familiar with FreeBSD and OpenBSD than I ever was with openSUSE or Ubuntu. It may seem odd, as Linux is far easier to get into; however, seasoned UNIX admins will surely understand. BSDs have a technical appeal which Linux is steadily losing in favor of other features. To the point, though:

1. Save for Windows, most operating systems are extremely similar. macOS, as it is now called, relies on a huge number of BSD utilities, because at one point in time they were more accessible (well-documented and permissively licensed). In turn, the open-source BSD-family operating systems, such as OpenBSD and FreeBSD, adopted Clang with its LLVM back-end (Apple’s compiler toolchain of choice) as their main system compiler. A number of former, now defunct, proprietary operating systems were based on some revision of UNIX – IRIX, HP-UX, Solaris, etc. There is also a significant overlap of other tools, such as sysctl and ifconfig, which were forked, modified and adjusted to fit individual systems, but bear functional resemblance across the various flavors of UNIX. The remainder (text editors, desktop environments, etc.) is typically BSD/MIT/GPL-licensed and available as packages or ports. Therefore, the high-level transition between the BSDs and Linux isn’t as dramatic as one might expect.

2. That being said, the BSDs and Linux follow different philosophies, of which the BSD philosophy seems a lot more practical to me in the long run. What most people forget (or never learn) is the fact that Linux is just a kernel, and the development teams creating distributions (the actual Linux-based operating systems!) can do almost anything they want with it. This leads to myriads of possible feature sets already at the kernel level. It is also up to the distributions to assemble the kernel and userland utilities into a full-fledged operating system. Unfortunately, distribution teams are often tempted to add a bit of an artistic touch to their work, which causes Linux distributions to differ in key aspects. While on a higher level this is hardly noticeable, it may bite one back when things go sour or manual configuration is required. This Lego-blocks or bazaar concept makes it difficult for upstream software developers to identify bugs and for companies to support Linux properly with hardware and software. Eventually, only certain distributions are recognized as significant, such as Ubuntu, CentOS, openSUSE, Fedora or Debian. The BSDs take a more organized approach to system design, which I believe is highly advantageous. An operating system consists of the kernel and the basic utilities which make it actually useful – a network manager, a compiler, a process manager, etc. Depending on the BSD in question, the scope of the base system is defined differently. For instance, OpenBSD ships with quite a few servers, including a display server (Xenocara). FreeBSD, in turn, focuses on providing server capabilities.

3. Recently (or not so recently), the focus of Linux has shifted from being merely a server platform to a desktop replacement. That’s the ballpark of MS Windows and macOS, both of which are well-established desktop platforms. The crux of the matter is that utilities had to be adjusted to fulfill more GUI-oriented roles, making command-line work slightly trickier. The other problem is that the software turnover in Linux-land is extremely rapid, and programs either become stale way too quickly or break too often. That’s clearly a no-go for server scenarios. This is where the BSDs come in. FreeBSD was designed as a multi-purpose operating system, however with a strong focus on networking, process sandboxing and privilege separation, data storage, etc. In these aspects it clearly excels. NetBSD favors portability and supports many server and embedded platforms, which act as routers, switches, load-balancers, etc. OpenBSD emphasizes code correctness, security and complete documentation. Last, but not least, DragonFly BSD focuses on multi-processing and leverages filesystem features to improve performance. One could say that due to greater resources Linux has surpassed all of these operating systems. However, one should not underestimate quality BSD utilities and the almost legendary stability of Berkeley-derived OSes. One of the main problems I ever had with Linux was the inconsistent breakage of individual distributions. Upgrading packages would eventually render them useless or impossible to troubleshoot due to uninformative error messages. The lack or staleness of documentation only made matters worse. Having to deal with the above problems, I simply jumped ship and joined the BSD crowd. Granted, neither OpenBSD nor FreeBSD makes my PCs snappier – quite the opposite, Linux still wins in that respect. However, I now have full access to the operating system source code and can fix issues first-hand should the need arise. Not to mention being actually able to read clearly written documentation and learn how to use the tools my operating system offers. I doubt Linux can beat that.

On Using Computers

I’ve been planning to write this piece for a while now, though due to work-related stuff I was somewhat hampered in my efforts. It’s a bit harsh at times, but I feel it should become a must-read for beginner Linux users nevertheless.

I am a part of the open-source community and, as a member, I try to contribute to projects with code, documentation and advice. I fully understand that for the open-source way of producing content (not merely software!) to succeed, everyone has to give something. However, in recent months I have noticed a sharp influx of new users (newbies) who want to be part of the community, but are extremely confused as to its principles. In doing so, these newbies “contaminate” the open-source community with their former habits and expectations, and make it harder for both existing members and themselves to cope with this temporary shift in the user-expertise equilibrium. I blame two main phenomena for the confusion of new users:

1. The open-source way is advertised as inherently “better”, which is misleading.

2. The open-source way requires members to think about what they do and possibly to contribute however they can.

Since the imbalance has become unbearable for me and other existing members of the open-source community, I decided to write this introductory article so that newbies can adjust quickly and the equilibrium is restored.

I. User-friendliness is a lie
Following up on the thoughts laid out at over-yonder.org, I want to make this statement extra clear. There is no such thing as user-friendliness. It.does.not.exist. The Internet is crawling with click-bait articles entitled “The best user-friendly Linux distribution!” or “The most user-friendly desktop environment!”. These articles were crafted to increase the view count of the host website, not to provide useful information on the topic. Alternatively, they were written by people who are as confused as the newbies. “User-friendly”, just like “intuitive”, is a catchphrase – an advertising gimmick used to get you to buy or adopt a product. There is no extra depth to it. What people wrongly label as “user-friendly” is in fact “hand-holding” – the software/hardware is expected to do something for the user. Not enable the user to perform an action, but actually do the action for him/her. A stewardess on a cruise ship or an aircraft is helpful because she answers passengers’ questions; however, she does not hold anyone’s hand, as that would mean leading every single passenger to their seat. If anyone ever tells you that something is user-friendly, ignore them and move on. You know better :).

II. Qualities, quantity and gradation
Generalized comparative statements are thrown about virtually everywhere. This annoys me, and it should also annoy you after reading this paragraph. The truth is that most of those statements are fundamentally wrong, because they assume objects of different qualities can be compared using abstract terms. They CANNOT. A useful reference point is comparing apples to oranges. Can it be said that oranges are better than apples? No. What about apples being better than oranges? Neither! “Better” is an abstract term, which by itself means nothing. Therefore, saying “openSUSE is better than Ubuntu” also means absolutely nothing! However, what can be done is comparing specific features of A and B. You cannot say “apples are better than oranges”, but you can claim that an average apple is heavier than an average orange, given specific examples of both. Color-wise, you can say that apples tend to be green-red, while oranges tend to be yellow-orange-reddish. You cannot directly compare colors, mind you, unless you express the color of A and B on a uniform color scale, like “the amount of red”. That way, no fallacy has been committed. Therefore, neither software nor hardware can be directly compared, though you can say, for instance, that “openSUSE has a number of tools, like YaST, which make it potentially more convenient for system administrators than Ubuntu”. Remember that!

III. The “use case” concept
Knowing that user-friendliness does not exist and that many things cannot be directly compared, the next step is understanding the “how” inherent to all problems. You have an issue or an inquiry. What is it that you want to achieve? What are the exact requirements to reach your goal? What is the situation in which you experienced your problem? Being specific and being able to disassemble large problems into smaller tasks is paramount to understanding the problem and finding possible solutions to it. This is true not only for computers, but for everything in life. Once you know your “use case”, you will know which hardware and software (including the operating system) to choose. Different operating systems cover different use cases or use scenarios, so understanding your use case well will let you find the right operating system – or any other piece of software – quicker.

IV. Options, decisions and the “good enough”
All of the above being said, humans have this need to always aim for optimal solutions. Subconsciously, they want only the “best” for themselves. But what if it’s impossible to identify the best option? What if all of them satisfy our requirements equally well? This is where the concept of “good enough” comes into play. Sometimes the “best” solution is the first solution we decide upon and stick with. No second thoughts allowed! Until we identify a legitimate reason why solution #1 no longer satisfies our needs for a prolonged period of time. Wondering which operating system to choose? Linux Mint? Ubuntu? Debian? Fedora? Perhaps not a Linux-based OS, but a pure UNIX-like BSD? There are so many! If you’re a beginner, it doesn’t matter which you choose. Pick one, stick with it, and change only if you’re experimenting or your first choice turned out to be completely wrong.

V. Thinking and the individual responsibility
This will be a harsh one. Proprietary operating systems create the illusion of user-friendliness (it’s a lie, we know that now!) and the illusion that the user is not required to take responsibility for what he/she does with his/her software and hardware. This is one of the major fallacies in the computer world. The moment you buy a computer, you are completely responsible for it. Consider it your “child”. You need to make sure it’s always clean, powered up, etc. No one will ever do it for you. Others can recommend solutions, give advice, even provide support, but the final decision is yours and yours alone. Whatever you do with your computer, it is your success or your failure. The primary reason why malware spreads like wildfire is that people are convinced they don’t need to actively care for the safety of their computers. Dead. Wrong.

The open-source way is not better than the proprietary/closed-source way. It’s different, nothing else. I chose it, because it aligns with my personal preferences well and I believe that it will prevail. It is for you to decide whether you can accept that. If the answer is “Yes”, I congratulate you. Go forth, learn and become a full-fledged member of the open-source community :).

GNOME3 Oversimplified?

Seeing as GNOME3 and KDE (4? 5? Plasma? Neon? Ion?) are the leading desktop environments nowadays, I decided to give GNOME3 a try on my openSUSE Leap 42.3 workstation (my current main distribution on most hardware, including a Raspberry Pi 3). There are good and bad things, and some of them agree with my former assessment of GNOME3. However, I assumed that since I’m more pro-desktop now, my opinion might change. Well, perhaps it did…

The Good:
In terms of visual design alone, GNOME3 wins a trophy. There is a lot of macOS mimicry, and I think that speaks well of the project. The guys (and gals!) at Apple know their stuff, so why not get inspired by them a little? The overall UX (user experience) is also positive. Thanks to the highly intuitive interface, finding important desktop features is a breeze. One just needs to browse a bit and not follow the imprinted “this option must be hidden somewhere” philosophy that other desktop environments teach us. Also, I greatly appreciate the attention to useful features, like the one-click offloading of graphically intensive applications to the discrete NVIDIA card on Optimus laptops. If our day-to-day tasks focus on office work and leisure, GNOME3 could potentially be the desktop environment of the future. It stands to reason, because it’s an open-source project, molded and shaped toward perfection by the user and developer communities. It constantly evolves, so there is no limit to its improvements.

The Bad:
Unfortunately, it seems that simplicity of design has its price. Troubleshooting GNOME3 is extremely painful, and many of its applications (including GNOME Shell and the GNOME Display Manager) throw the most uninformative error messages.

The infamous “Oh no! Something has gone wrong.” screen. Authored by the Interwebs.

Case in point, the above error screen. The “Oh no! Something has gone wrong.” message is the kind of phrase typically used in commercial applications to shield end users from the headaches of reading crash logs. By willfully choosing Linux we demonstrate that we’re no mere end users, so treating us as such is quite rude. To dwell on this a bit more, the above error screen appears even when the GNOME Display Manager login panel crashes. How is one supposed to log out without being logged in to begin with? To make matters worse, since many Linux distributions have the display manager set to restart on failure, this screen will keep re-appearing until proper troubleshooting is done in one of the TTY consoles (Ctrl + Alt + F1–F9). This very much reeks of Windows and macOS problems, where the user interface basically takes over the OS and all we can do is reboot and hope for the best. Other applications show similar “An error occurred” messages without any means of actual troubleshooting.

My take-home from this experience is “thanks, but no thanks”. If I want to get some work done, I would rather rely on LXDE, Xfce, LXQt and maybe even KDE – traditional desktop environments without the bells and whistles.