Game Design Series – Jurassic Park (NES)

To prepare myself for my future game developer career I decided to play through some gaming classics for various Nintendo (Game Boy, NES) and SEGA (Saturn, Dreamcast, Genesis) consoles and analyze them thoroughly. The truth is that many amazing gameplay elements were invented way back in the 1970s–1990s and haven't been seen since. It's a real shame, because frankly speaking they were groundbreaking. In my analyses I will try to focus on game difficulty, graphics, interesting gameplay aspects and the overall appeal of each game. First off is Jurassic Park for the Nintendo Entertainment System.

Official Title: Jurassic Park
Release Year: 1993
Developer: Ocean Software


Game main menu – super scary!

Synopsis
The game follows the plot of the movie by Steven Spielberg and of the techno-thriller by Michael Crichton (which has an entirely different feel than the movie). You are Dr. Alan Grant, and your task is to escape from the now-wild Jurassic Park located on Isla Nublar. Along the way you have to save Tim and Lex (John Hammond's grandchildren) from being eaten alive by the legendary predator T-Rex or trampled by a stampede of Triceratops.


Alan Grant in front of the Park gate

Graphics
Jurassic Park features an isometric view built from sprites drawn at an angle from various sides. Interestingly, the collision box of some of them is defined only by the sprite's base, allowing the game's protagonist or his enemies to vanish behind obstacles. The color palette is crisp, though it consists primarily of gray, red and different shades of green. It definitely looks better than early NES games. Projectiles are animated, and so are the various dinosaurs infesting the Park. The main menu screen, featuring a vicious-looking T-Rex head-on with dripping saliva, is worth an extra mention. Unfortunately, the impressive visuals would occasionally tax the NES hardware, causing graphical glitches and oddities.

Gameplay
In order to successfully escape from the Park, Alan needs to complete various tasks, ranging from saving Tim and Lex to unlocking computer terminals. A major part of the game is collecting turquoise-gray dinosaur eggs in order to reveal key cards, and collecting different types of ammo to combat the vicious dinos. There are several species of dinosaurs, each with a different behavior pattern. Compsognathus individuals are small and easy to kill, as they always trot in a straight line towards Grant. Velociraptors are much faster and can actually outrun the player when charging. They also do much more damage on contact. Somewhat sadly, all of the dinos drop only basic ammunition (swamp green). Bolas rounds (red), penetrating rounds (gray) and upgraded rounds (green) need to be collected from the ground in designated spots. An interesting aspect of the game is the mystery boxes with a question mark on top. They provide extra lives or health packs, or contain deadly booby traps. What I appreciate the most is the fact that the game does not follow the standard "stage(s) + boss fight" pattern. In fact, there are only 2 real boss fights against the T-Rex. The gameplay is well-balanced, with a mix of regular collection stages, boss fights, puzzles and dynamic rescue missions. In total there are 6 levels, each with a clear briefing screen explaining its tasks.

Difficulty
Jurassic Park is one of those NES games which seem hard at first, but become progressively easier as the player memorizes enemy attack patterns, the locations of health packs, etc. In addition, it is not as overwhelming as, for instance, Castlevania or Ninja Gaiden. Jurassic Park is definitely a beatable title, though admittedly the T-Rex levels can be quite annoying.

Closing Remarks
While the core of the game (collecting eggs and shooting dinos) is fairly standard among NES titles, the addition of rescue missions and unusual boss fights feels refreshing. I believe that even platformers would profit from such gameplay mix-ins. Actually, they’re often fun regardless of the genre.


We Are Developers 2018 – Day 3

Finally, day 3 of the Congress. My morning preparations were the same as on the previous day – water, food and loads of coffee to get my gears running. I was locked & loaded for 8 whopping talks. Since it would take me hours to write about all of them, I will only briefly summarize each.

First off was Philipp Krenn from Elastic, talking about the ELK stack (Elasticsearch + Logstash + Kibana). Apparently, the stack has a new member called Beats. It helps with creating handlers for specific types of data streams (file-based, metrics, network packets, etc.). I feel like that feature was missing from the current composition of the stack, though it also makes the stack bigger and more complex. I was actually investigating the use of Logstash + Elasticsearch + Grafana for sorting, filtering and cherry-picking log messages, but the maintenance overhead was a bit too much. I settled on Telegraf + InfluxDB (a time-series SQL-like storage back-end) + Grafana. Telegraf's logparser plugin simulates Logstash, and InfluxDB proved to be an extremely robust storage solution. In addition, Grafana's ability to handle Elasticsearch records was too rigid (pun intended) for our use case. So in general, it's a "no", but I'll keep my log files open for new options in case our framework grows.
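Out of curiosity, here is a minimal Python sketch of what that Telegraf + InfluxDB leg of the pipeline boils down to: parse each log line, turn it into a tagged data point, ship it to InfluxDB, and let Grafana query it. To be clear, the real setup is a grok pattern in Telegraf's TOML config; the log format, file name and measurement name below are made up, and the snippet assumes the influxdb 1.x Python client.

```python
import re
from influxdb import InfluxDBClient

# Hypothetical log format: "<ISO timestamp> <LEVEL> <message>"
LOG_PATTERN = re.compile(r"(?P<ts>\S+) (?P<level>[A-Z]+) (?P<msg>.*)")

client = InfluxDBClient(host="localhost", port=8086, database="logs")
client.create_database("logs")  # no-op if the database already exists

points = []
with open("app.log") as fh:  # stand-in for whatever Telegraf would tail
    for line in fh:
        match = LOG_PATTERN.match(line)
        if match is None:
            continue  # skip non-matching lines, like an unmatched grok pattern
        points.append({
            "measurement": "log_events",        # table-like bucket in InfluxDB
            "tags": {"level": match["level"]},  # indexed, cheap to filter on
            "fields": {"message": match["msg"]},
            "time": match["ts"],
        })

client.write_points(points)
```

Grafana pointed at the logs database can then filter on the level tag, which is exactly the sorting and cherry-picking mentioned above.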


Catalina Butnaru (right) showcasing various AI assessment frameworks

Second up was Catalina Butnaru, on AI – though from an ethics perspective. Frankly, I am allergic to including ethics in discussions about AI, because it often derails or postpones progress. However, Catalina nailed it. Her talk was extremely appealing and real. I learned that ethical considerations should not go into the "wontfix" bucket, because they genuinely affect all of us. Well done!

Next, Joe Sepi from IBM talked about getting involved in open-source communities and helping build better software together. His recollection was quite personal, because he suffered from the same prejudices all of us fear when delving into an alien new project, framework or programming language. The take-home message? Never give up! Fork, commit, send PRs, make software better. Together.


I skipped Martin Wezowski's talk to save my (metaphorically) dying stomach, but made it to the presentation from Angie Jones (Twitter). She's an incredibly engaging speaker and the points she raised really resonated with me. All of us write (or should write!) unit and functional tests. However, how do you test a machine learning algorithm or neural network? How do you simulate a client of a shop app or a human target of an image recognition module? It turns out that when dealing with people, machine learning can prove finicky and extremely error-prone – to the point where it's funny. Until we begin discussing morbid matters like How many kids need to jump in front of an autonomous car for it to slide off a cliff and kill its passengers? 2? 5? 6? or Why does an image recognition application recognize people of darker skin tone as gorillas? Was there racial prejudice in selecting the test image sets? 10 points to Angie Jones for the important lesson!
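Since the talk (deliberately, I think) left the "how" open, here is my own back-of-the-envelope answer, not Angie's: instead of asserting exact outputs, pin down a fixed validation set and assert aggregate metrics against a threshold. A minimal sketch, with a hypothetical stand-in model:

```python
import numpy as np

class DummyClassifier:
    """Hypothetical stand-in for a trained model; any predict() would do."""
    def predict(self, x):
        return int(x > 0.5)

def accuracy(model, examples, labels):
    """Fraction of examples the model classifies correctly."""
    predictions = [model.predict(x) for x in examples]
    return float(np.mean([p == y for p, y in zip(predictions, labels)]))

if __name__ == "__main__":
    # A fixed, versioned validation set keeps the test reproducible.
    examples = np.array([0.1, 0.4, 0.6, 0.9])
    labels = [0, 0, 1, 1]
    # Assert an aggregate threshold rather than exact per-example outputs:
    # individual predictions may legitimately flip between training runs.
    assert accuracy(DummyClassifier(), examples, labels) >= 0.95
    print("model meets the baseline")
```

It obviously doesn't answer the moral questions above, but it at least turns "does the model still work?" into something a CI pipeline can check.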

The next talk was given by Diana Vysoka, a young developer advocate working for the We Are Developers World Congress organization. On one hand, I feel quite old seeing teenagers get into programming. On the other hand, that's encouraging in terms of our civilization's future. Listening to people like her makes me still want to live on this planet.


Eric Steinberger (right) making convolutional neural networks plain and simple!

If Diana is a rising star, Eric Steinberger has already been one for some time. A math and IT prodigy who can explain extremely complex concepts in such simple words that even an old fart like me can comprehend them. He believes that AGI (Artificial General Intelligence) is possible, and I believe him. After all, how do we define the requirements for AGI, compared to a standard neural network, which can already be purposed for almost any task? Obviously, we should aim higher than simple bio-mimicry. As humans we're flawed and our potential is limited. Let's not unnecessarily handicap the development of AI!

Finally, the last talk. Enter Joel Spolsky, the co-creator of Stack Overflow! I attended his talk last year and was ready for more awesomeness. Joel delivered. Continuously. His anecdotes and stories gave a perfect closure to the Congress. It's great to be a software developer and meet so many amazing people in one place. See you there next year!

We Are Developers 2018 – Day 2

Day 2 of the We Are Developers World Congress is up (at least for me, since I don't have enough stamina for both the after-party and another full day of talks). Compared to day 1, I made some progress on the food and water front. The local grocery store, Hofer, proved extremely useful. Armed with bacon buns and non-sparkling water, I was ready for more developer-flavored bliss!

Alas, the first presentation was slightly disappointing. Instead of a talk about accelerated learning, I got a lecture on how learning works, from which I learned nothing. Thankfully, the second talk fully compensated for the shortcomings of the first one. Enter Brenda Romero – one of the legends of game development (think Wizardry 1-8). This talk was doubly important for me, because I would really love to join the game development “circus”, but I’m not yet sure whether I have the guts (or a “more-than-mellow” liver). I’m still not sure, but the take-home message was crystal clear – just do it! Brenda had a lot of important things to say regarding not giving up and not taking comments from others too personally. The audience can be brutal and vicious, and the gaming industry itself is tough. At least I know what I’m up against!


Brenda Romero (centre) talking about her childhood toy assembling endeavors

Number three was a continuation of game development goodness. I originally intended to attend the AI talk by Lassi Kurkijarvi, but then – John Romero. I don't think I need to say more to anyone who has at least heard of Quake or Doom. It was not a replay of last year's talk, mind you! Rather, we got the full story of Doom's development, which to me was both interesting and inspiring. John Romero is an amazing game developer, and the pace at which he, John Carmack and the other programmers at id Software produced Doom was simply dazzling. While modern games are of course a lot more complex, developers in the early 1990s didn't have the tools, such as SDKs or version control systems, that we now possess.


John Romero (centre) on developing and shipping Doom

Later on, it just spiraled! I lost track of the talks a bit, since there was some major reshuffling in the schedule. The presentation from Tessa Mero on ChatOps at Cisco was quite interesting. I do use Slack and various IRC clients, but there is definitely a greater need for ChatOps and its integration with the software development cycle. I wasn't fully aware of that, to be completely honest. Next, Tereza Iofciu from mytaxi gave us a tour of machine learning and showed us the importance of computer algorithms in predictive cab distribution planning. It wasn't about self-driving cars or reducing manpower, but rather about reducing the load on drivers and improving customer satisfaction. Computer-accelerated supply-demand matching, so to speak.

In the afternoon I took an accidental detour to a book-signing event hosted by John and Brenda Romero. Not only did I get a chance to talk to them personally (*heavy breathing!*), but I also got a signed copy of Masters of Doom (*more heavy breathing!*). John said that if I read it, I'll definitely get into game development professionally. I'm completely embracing the idea as I type this. One of the last talks I attended was given by Yan Cui, on how he used the Akka actor model implementation (together with Netty) to solve latency issues in a mobile multiplayer game (an MMO, specifically). Obviously, it was a success, and his convincing speech makes me want to try it out. It's about concurrency, but without the overhead of traditional multiprocessing and/or multithreading. Although I don't code in C# just yet, there is a Python implementation of Akka, which was recently recommended to me.
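I suspect the library in question is Pykka – that's my guess, as it is the best-known Python take on Akka's actor model (threads instead of Netty underneath). A minimal sketch of the message-passing style it enables:

```python
import pykka

class Greeter(pykka.ThreadingActor):
    """Each actor runs on its own thread and owns its state exclusively."""

    def __init__(self, greeting):
        super().__init__()
        self.greeting = greeting  # never touched by other threads directly

    def on_receive(self, message):
        # Messages are handled one at a time, so no locks are needed.
        return "{}, {}!".format(self.greeting, message["name"])

if __name__ == "__main__":
    ref = Greeter.start("Hello")     # spawn the actor, get back an ActorRef
    print(ref.ask({"name": "Yan"}))  # blocking request/reply -> "Hello, Yan!"
    ref.tell({"name": "world"})      # fire-and-forget; any reply is discarded
    ref.stop()
```

The appeal is that state is never shared between threads – the only way in is a message – which, as I understood it, is how the actor model sidesteps the locking overhead of traditional multithreading.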


Yan Cui (centre) explaining message relays in the actor model of concurrent programming

In summary, it was great to meet like-minded folks and actually talk to fellow game developers who like challenges and don't shy away from trying out new approaches to software design. Perhaps that's what I'm looking for – challenges? Stay tuned for more exciting impressions from day 3 of the Congress!

FreeBSD 11.1 on ASUS VivoBook S301LA

I decided it is time to write a piece on FreeBSD, since I now officially use it as my main operating system both at home (alongside OpenBSD) and at work. My mobile battle gear of choice is the ASUS VivoBook S301LA. It's a 4th-generation Intel-based ultrabook-class laptop, one of the many released by ASUS every year. It has strong points, though also quite a few disadvantages. I would like to discuss it from the perspective of a FreeBSD enthusiast.


Photo courtesy of notebookcheck.com and the Interwebs

Hardware specifications:

  • Processor: Intel Haswell Core i3-4010U @ 1.70 GHz
  • Graphics: Intel HD 4400 integrated GPU with up to 768 MB shared RAM
  • Memory: 4 GB DDR3L 1600 MHz (soldered) + empty slot for another 4 GB
  • Hard drive: Western Digital Blue 500 GB 5400 rpm (replaceable)
  • Ethernet: Realtek RTL8169 PCI Express Gigabit
  • Wireless: Mediatek MT7630e with Bluetooth built in (half-sized, replaceable)
  • Sound: Intel HD Audio (SonicMaster)
  • Webcam: Azurewave USB 2.0 UVC HD Webcam
  • Touchscreen: USB SiS Touch Controller
  • Battery: 4 hours
  • Microphone: yes, next to the Webcam
  • Keyboard: Generic AT keyboard
  • Touchpad: Generic touchpad with integrated click-fields
  • Additional ports:
    – left side: Ethernet, HDMI, USB 3.0, microphone/headphone jack
    – right side: Kensington lock, 2x USB 2.0, SD card slot


The good:

  • Extremely lightweight
  • Never overheats
  • Moderately fast after upgrades

The bad:

  • Paper-thin keyboard
  • Slippery touchpad
  • Highly reflective, mirror-like screen
  • Cheap, lower-end wireless card

Overall, this device is a fairly standard consumer-grade ultrabook. The crappy keyboard is something one can get used to rather quickly. I'm not a fan of touchpads, therefore I rely on PC mice for clicking and scrolling unless I'm on a plane or train. Nowadays, reflective screens are no longer an issue thanks to anti-glare screen protection sleeves. The obvious downside is that anti-glare screens lack the sharpness typical of reflective screens. In general, the drawbacks can be easily mitigated with upgrades, which however turn the laptop into a moderate investment. The choice is down to the prospective user. Furthermore, the manufacturer (ASUS) made some choices I am not entirely convinced by. Firstly, touchscreens are more useful on hybrid flip-laptops like the Lenovo Yoga. On this model the touchscreen is more of a nuisance when cleaning the plastic cover on the display, and it draws power needed elsewhere. Secondly, the wireless adapter is perhaps the worst of its generation, with a nominal bandwidth of 150 Mbps. Still, it's more of a travesty to see it in high-end ROG gaming models (yes, it's true…).


The FreeBSD perspective:

This might be somewhat disappointing. Depending on what one expects from a mobile device, the S301LA is either average or just plain broken. Not to sound rude, but I'm sure a ThinkPad or an IdeaPad would be a far superior choice. Haswell HD 4400 graphics chips have had proper (aka working) FreeBSD support only since release 11, and most other components are barely supported. The Azurewave USB webcam actually works (webcamd needs to be attached to USB device ugen0.2 by root or a member of the webcamd group), but no VoIP software is available on FreeBSD out-of-the-box. I guess one could get Windows Skype to run via WINE or force the alpha-quality Linux client into submission, but that's a lesson in futility, I think. Personally, I wouldn't be using this ultrabook at all if not for the fact that I finally managed to replace the trash wireless adapter with something half-decent (albeit from 10 years back) from Intel, namely the WiFi Link 5100. After adding another 4 GB of RAM and a Western Digital SSD, I would consider this ultrabook worth the money and time. However, as I mentioned earlier, there are far better choices on the market.

Linux and the BSDs

Throughout my many months of using various open-source and proprietary operating systems I have made certain observations that might be useful to some. I started with Linux, though at some point I migrated to the BSDs for personal and slightly more pragmatic reasons. I quickly became a lot more familiar with FreeBSD and OpenBSD than I ever was with openSUSE or Ubuntu. It may seem odd, as Linux is far easier to get into, however seasoned UNIX admins will surely understand. The BSDs have a technical appeal which Linux is steadily losing in favor of other features. To the point, though:

1. Save for Windows, most operating systems are extremely similar. macOS, as it is now called, relies on a huge number of BSD utilities, because at one point in time they were more accessible (well-documented and permissively licensed). In turn, the open-source BSD-family operating systems, such as OpenBSD and FreeBSD, adopted Clang with its LLVM back-end (Apple's compiler toolchain) as their main system compiler. A number of former, now defunct, proprietary operating systems were based on some revision of UNIX – IRIX, HP-UX, Solaris, etc. There is also a significant overlap in other tools, such as sysctl and ifconfig, which were forked, modified and adjusted to fit individual systems, but bear functional resemblance across the various flavors of UNIX. The remainder (text editors, desktop environments, etc.) is typically BSD/MIT/GPL-licensed and available as packages or ports. Therefore, the high-level transition between the BSDs and Linux isn't as dramatic as one might expect.

2. That being said, the BSDs and Linux follow different philosophies, of which the BSD philosophy seems a lot more practical to me in the long run. What most people forget (or never learn) is the fact that Linux is just a kernel, and the development teams creating distributions (the actual Linux-based operating systems!) can do almost anything they want with it. This leads to myriads of possible feature sets already at the kernel level. It is also up to the distributions to assemble the kernel and userland utilities into a full-fledged operating system. Unfortunately, distribution teams are often tempted to add a bit of an artistic touch to their work, which causes Linux distributions to differ in key aspects. While on a higher level this is hardly noticeable, it may bite one back when things go sour or manual configuration is required. This Lego-blocks or bazaar concept makes it difficult for upstream software developers to identify bugs, and for companies to properly support Linux with hardware and software. Eventually, only certain distributions are recognized as significant, such as Ubuntu, CentOS, openSUSE, Fedora or Debian. The BSDs take a more organized approach to system design, which I believe is highly advantageous. An operating system consists of the kernel and the basic utilities which make it actually useful, such as a network manager, compiler, process manager, etc. Depending on the BSD in question, the scope of the base system is defined differently. For instance, OpenBSD ships with quite a few servers, including a display server (Xenocara). FreeBSD, in turn, focuses on providing server capabilities.

3. Recently (or not so recently), the focus of Linux has shifted from being merely a server platform to a desktop replacement. That's the ballpark of MS Windows and macOS, both of which are well-established desktop platforms. The crux of the matter is that utilities had to be adjusted to fulfill more GUI-oriented roles, making command-line work slightly trickier. The other problem is that the software turnover in Linux-land is extremely rapid, and programs either become stale way too quickly or break too often. That's clearly a no-go for server scenarios. This is where the BSDs come in. FreeBSD was designed as a multi-purpose operating system, however with a strong focus on networking, process sandboxing and privilege separation, data storage, etc. In these aspects it clearly excels. NetBSD favors portability and supports many server and embedded platforms, which act as routers, switches, load-balancers, etc. OpenBSD emphasizes code correctness, security and complete documentation. Last, but not least, DragonFly BSD focuses on multi-processing and leverages filesystem features to improve performance. One could say that, thanks to greater resources, Linux has surpassed all of these operating systems. However, one should not underestimate quality BSD utilities and the almost legendary stability of Berkeley-derived OSes. One of the main problems I ever had with Linux was the inconsistent breakage of individual distributions. Upgrading packages would eventually render them useless or impossible to troubleshoot due to uninformative error messages. The lack or staleness of documentation made matters only worse. Having had to deal with the above problems, I simply jumped ship and joined the BSD crowd. Granted, neither OpenBSD nor FreeBSD makes my PCs snappier. Quite the opposite – Linux still wins in that respect. However, I now have full access to the operating system's source code and can fix issues first-hand should such a need arise. Not to mention being actually able to read clearly written documentation and learn how to use the tools my operating system offers. I doubt Linux can beat that.

On Using Computers

I've been planning to write this piece for a while now, though due to work-related stuff I was somewhat hampered in my efforts. It's a bit harsh at times, but I feel it should become a must-read for beginner Linux users nevertheless.

I am a part of the open-source community and as a member I try to contribute to projects with code, documentation and advice. I fully understand that for the open-source way of producing content (not merely software!) to succeed, everyone has to give something. However, in recent months I have noticed a sharp influx of new users (newbies) who want to be part of the community, but are extremely confused as to its principles. Inadvertently, these newbies "contaminate" the open-source community with former habits and expectations, and make it harder for both existing members and themselves to cope with this temporary shift in the user expertise equilibrium. I blame two main phenomena for the confusion of new users:

1. The open-source way is advertised as inherently “better”, which is misleading.

2. The open-source way requires members to think about what they do and possibly to contribute however they can.

Since the imbalance has become unbearable for me and other existing members of the open-source community, I decided to write this introductory article so that newbies can quickly adjust and the equilibrium can be restored.

I. User-friendliness is a lie
Following up on the thoughts laid out at over-yonder.org, I want to make this statement extra clear. There is no such thing as user-friendliness. It.does.not.exist. The Internet is crawling with click-bait articles entitled "The best user-friendly Linux distribution!" or "The most user-friendly desktop environment!". These articles were crafted to increase the view count of the host website, not to provide useful information on the topic. Alternatively, they were written by people who are as confused as the newbies themselves. "User-friendly", just like "intuitive", is a catchphrase – an advertising gimmick used to get you to buy/get a product. There is no extra depth to it. What people wrongly label as "user-friendly" is in fact "hand-holding" – the software/hardware is expected to do something for the user. Not enable the user to perform an action, but actually do the action for him/her. A stewardess on a cruise ship or an aircraft is helpful, because she answers passengers' questions; however, she does not hold anyone's hand, as that would mean leading every single passenger to their seat. If anyone ever tells you that something is user-friendly, ignore them and move on. You know better :).

II. Qualities, quantity and gradation
Generalized comparative statements are being thrown about virtually everywhere. This annoys me, and it should also annoy you after reading this paragraph. The truth is that most of those statements are fundamentally wrong, because they assume objects of different qualities can be compared using abstract terms. They CANNOT. A useful reference point is comparing apples to oranges. Can it be said that oranges are better than apples? No. What about apples being better than oranges? Neither! "Better" is an abstract term, which by itself means nothing. Therefore, saying "openSUSE is better than Ubuntu" also means absolutely nothing! However, what can be done is comparing specific features of A and B. You cannot say "Apples are better than oranges", but you can claim that an average apple is heavier than an average orange, given specific examples of both. Color-wise, you can say that apples tend to be green-red, while oranges are yellow-orange-reddish. You cannot directly compare colors, mind you, unless you express the color of A and B on a uniform scale, like "the amount of red". No fallacy has been committed that way. Therefore, neither software nor hardware can be directly compared, though you can say, for instance, that "openSUSE has a number of tools, like YaST, which make it potentially more convenient for system administrators than Ubuntu". Remember that!

III. The “use case” concept
Knowing that user-friendliness does not exist and that many things cannot be directly compared, the next step is understanding the "how" inherent to all problems. You have an issue or an inquiry. What is it that you want to achieve? What are the exact requirements for reaching your goal? What is the situation in which you experienced your problem? Being specific and being able to disassemble large problems into smaller tasks is paramount to understanding the problem and finding possible solutions to it. This is true not only for computers, but for everything in life. Once you know your "use case", you will know which hardware and software (including the operating system) to choose. Different operating systems cover different use cases or usage scenarios; therefore, understanding your use case well will allow you to find the perfect operating system, or any other piece of software, quicker.

IV. Options, decisions and the “good enough”
All of the above being said, humans have this need to always aim for optimal solutions. Subconsciously, they want only the "best" for themselves. But what if it's impossible to identify the best option? What if all of them satisfy our requirements equally well? Thus, the concept of "good enough" comes into play. Sometimes the "best" solution is the first solution we decide upon and stick with. No second thoughts allowed! That is, until we identify a legitimate reason why solution #1 no longer satisfies our needs for a prolonged period of time. Wondering which operating system to choose? Linux Mint? Ubuntu? Debian? Fedora? Perhaps not a Linux-based OS, but a pure UNIX-like BSD? There are so many! If you're a beginner, it doesn't matter which you choose. Pick one, stick with it, and change only if you are experimenting or your first choice was completely wrong.

V. Thinking and the individual responsibility
This will be a harsh one. Proprietary operating systems create the illusion of user-friendliness (it's a lie, we know that now!) and the illusion that the user is not required to take responsibility for what he/she does with his/her software/hardware. This is one of the major fallacies in the computer world. The moment you buy a computer, you are completely responsible for it. Consider it your "child". You need to make sure it's always clean, powered up, etc. No one will ever do it for you. Others can recommend solutions, give advice, even provide support, but the final decision is yours and yours alone. Whatever you do with your computer, it is your success or failure. The primary reason why malware spreads like wildfire is that people are convinced that they don't need to actively care for the safety of their computers. Dead. Wrong.

The open-source way is not better than the proprietary/closed-source way. It’s different, nothing else. I chose it, because it aligns with my personal preferences well and I believe that it will prevail. It is for you to decide whether you can accept that. If the answer is “Yes”, I congratulate you. Go forth, learn and become a full-fledged member of the open-source community :).

GNOME3 Oversimplified?


Seeing as GNOME3 and KDE (4? 5? Plasma? Neon? Ion?) are the leading desktop environments nowadays, I decided to give GNOME3 a try on my openSUSE Leap 42.3 workstation (openSUSE being my current main distribution on most hardware, including a Raspberry Pi 3). There are good and bad things, and some of it agrees with my former assessment of GNOME3. However, I assumed that since I'm more pro-desktop now, my opinion might change. Well, perhaps it did…

The Good:
In material design alone, GNOME3 wins a trophy. There is a lot of macOS mimicry, and I think that speaks well of the project. The guys (and gals!) from Apple know their stuff, so why not get inspired by them a little? The overall UX (user experience) is also positive. Thanks to the highly intuitive interface, finding important desktop features is a breeze. One just needs to browse a bit and not follow the imprinted "this option must be hidden somewhere" philosophy that other desktop environments teach us. Also, I greatly appreciate the attention to useful features, like the one-click offloading of graphically intensive applications to the discrete NVIDIA card on Optimus laptops. If our day-to-day tasks focus on office work and leisure, GNOME3 could potentially be the desktop environment of the future. It stands to reason, because it's an open-source project, molded and shaped toward perfection by the user and developer communities. It constantly evolves, so there is no limit to its improvements.

The Bad:
Unfortunately, it seems that simplicity of design has its price. Troubleshooting GNOME3 is extremely painful, and many of its applications (including GNOME Shell and the GNOME Display Manager) throw the most uninformative error messages.


Authored by the Interwebs

Case in point: the above error screen. The "Oh no! Something has gone wrong." message is a phrase typically used in commercial applications to shield end users from the headaches of reading crash logs. By willfully choosing Linux we demonstrate that we're no mere end users, so treating us as such is quite rude. To dwell on this a bit more, the above error screen appears even when the GNOME Display Manager login panel crashes. How is one supposed to log out without being logged in to begin with? To make matters worse, since many Linux distributions have the display manager set to restart on failure, this screen will keep re-appearing until proper troubleshooting is done in one of the TTY consoles (Ctrl+Alt+F1–F9). This very much reeks of Windows and macOS problems, where the user interface basically took over the OS. All we can do is just reboot and hope for the best. Other applications show similar "An error occurred" messages without any means of actual troubleshooting.

My take-home from this experience is "Thanks, but no thanks". If I want to get some work done, I would rather rely on LXDE, Xfce, LXQt and maybe even KDE – traditional desktop environments without bells and whistles.