Games Then and Now

Inspired by the talks from Brenda and John Romero, I decided to write a short piece on the evolution of gaming. I will not focus on specific time periods, however, as the industry progressed through its subsequent phases quite fluidly. Rather, I will try to draw a comparison between then (1980-1990) and now (201x). I was born at the end of the 1980s, so I still managed to get to know the amazing Nintendo Entertainment System (NES) first-hand. This will be the starting point of our journey, though I will mention other consoles and gaming systems when relevant.

To begin with, the NES and the original Nintendo Game Boy were amazing systems. Such a variety and richness of games had never been seen before. I didn’t own a Game Boy myself, because they were quite expensive, but some of my childhood friends had them, so I would often borrow one to play a bit. Back then it was also perfectly natural for kids to meet in small groups and game in turns. My favorites were Donkey Kong Land and Super Mario Land. Both were quite difficult, but the enjoyment was enormous regardless! I did have a Famiclone (a clone of the Japanese Famicom), as these were extremely popular in East-Central Europe. Of course, the cartridges were also Famicom imitations, and the system itself (branded Pegasus) would never run any of the original NES games without a special converter. I had no idea about that when I was young, since it was easy to get games from local flea markets anyway. I remember playing Contra and Rescue Rangers 2 for hours on end, until I could perfectly memorize the entire play-through. Many of the games then were platformers, beat-'em-ups, racing games or sports games. Regardless of the genre, twitch reflexes were a must! Also, most games didn’t have password-based checkpoints, so once dead, the player had to start from the very beginning. The replay value lay in a game’s difficulty and in the need to master it in order to complete it and beat the final boss. From today’s perspective this sounds terribly tedious, but the motivation behind making games was also different. They were supposed to bring fun and excitement in their purest form. Beating a game was intended as the supreme reward for mastering it, and honestly, it really felt rewarding back then. DOS games were slightly different due to the lack of a proper controller pad. They weren’t as fast-paced as NES games, but you could actually save the game state in some of them. Regardless, they still posed a considerable challenge.

One of the final bosses in Teenage Mutant Ninja Turtles 2 (NES)

Game design is an interesting topic when it comes to the NES, DOS, the Nintendo Game Boy and other platforms from that era. Since games had to fit on a single cartridge or diskette (or several diskettes, of course), they could not store the entire state of the game world; rather, they shipped a set of procedures to draw pixels in the correct positions at the correct times. As a result, programmers had to implement various hacks to define object boundaries or to increase the number of available colors. This caused graphical glitches when the bitmaps were too big, or allowed the player to abuse the shape of an object to their advantage. Also, forget tutorial levels, help menus, maps, etc. Some games were packaged with a manual or booklet which introduced the game world or explained basic gameplay aspects, but very rarely would a game provide any help features at all. The player had to explore the game to understand it fully and complete it.

A screenshot from Final Fantasy XV (PS4)

Fast-forward several decades and games look and feel entirely different. Firstly, they are a lot more graphically appealing and realistic, so we are no longer expected to use our imagination to complete the mental image of a character. Almost everything is WYSIWYG (What You See Is What You Get). That helps with immersion a lot! On the downside, gore and violence are a lot more explicit and traumatizing (think the Dead Space franchise). Game mechanics haven’t changed much, since even nowadays every game has a “core” which defines its gameplay. However, because games are no longer limited by diskettes or computer memory, developers often mix genres and implement novel gameplay aspects which were unknown in the past. In addition, the player is often introduced gradually to the game world, so that he or she is not overwhelmed by the game from the very beginning. Finally, there is a major shift towards developing games as franchises or series to generate sustainable revenue, rather than as one-off hits. This, of course, puts pressure on developers and emphasizes the use of pre-purchase bonuses and advertising to make sure the game sells.

The differences between games then and now don’t mean that games used to be better or worse in the past. The evolution of games merely reflects the growth of the industry. Nowadays, gaming is more approachable, so that everyone can enjoy it. To us veterans of the early Nintendo and Sega consoles, modern games might seem boring or too easy, though that is only our perspective. In addition, when I recently returned to Castlevania and Teenage Mutant Ninja Turtles II (both on the NES), I realized how unnecessarily frustrating games used to be due to technical limitations. In the end, to each their own. Since I have a lot less time nowadays, I prefer casual games over the challenging monsters of the past. However, I did find Dark Souls enjoyable, to be perfectly honest.


We Are Developers 2018 – Day 3

Finally, day 3 of the Congress. My morning preparations were the same as on the previous day – water, food and loads of coffee to get my gears running. I was locked & loaded for a whopping 8 talks. Since it would take me hours to write about all of them, I will only briefly summarize each.

First off was Philipp Krenn from Elastic, talking about the ELK stack (ElasticSearch + Logstash + Kibana). Apparently, the stack has a new member called Beats. It helps with creating handlers for specific types of data streams (file-based, metrics, network packets, etc.). I feel like that feature was missing from the current composition of the stack, though it only makes the stack bigger and more complex. I had actually been investigating Logstash + ElasticSearch + Grafana for sorting, filtering and cherry-picking log messages, but the maintenance overhead was a bit too much. I settled on Telegraf + InfluxDB (a time-series storage back-end with an SQL-like query language) + Grafana. Telegraf’s logparser plugin fills the role of Logstash, and InfluxDB proved to be an extremely robust storage solution. In addition, Grafana’s ability to handle ElasticSearch records was too rigid (pun intended) for our use case. So in general it’s a “no”, but I’ll keep my log files open for new options in case our framework grows.
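As an aside, here is a minimal sketch of what pushing a parsed log record into InfluxDB looks like from Python, using the influxdb client library (the database name, measurement, tags and fields below are invented for illustration; in our actual setup Telegraf’s logparser plugin does this work, not hand-written code):

```python
from influxdb import InfluxDBClient  # pip install influxdb

# Connect to a local InfluxDB instance (hypothetical host and database names).
client = InfluxDBClient(host="localhost", port=8086, database="logs")

# One parsed log line, shaped as an InfluxDB point: the measurement groups
# records, tags are indexed for filtering, fields carry the actual values.
point = {
    "measurement": "app_logs",
    "tags": {"level": "ERROR", "service": "worker-1"},
    "fields": {"message": "connection timed out", "duration_ms": 1523},
    "time": "2018-05-18T10:50:27Z",
}

client.write_points([point])  # Grafana can then chart the "app_logs" measurement
```

Grafana then only needs InfluxDB configured as a data source to plot or filter on the tags above.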

Catalina Butnaru (right) showcasing various AI assessment frameworks

Second up was Catalina Butnaru, speaking about AI, but from an ethics perspective. Frankly, I am allergic to bringing ethics into discussions about AI, because it often derails or postpones progress. However, Catalina nailed it. Her talk was extremely appealing and real. I learned that ethical considerations should not go into the “wontfix” bucket, as they genuinely affect all of us. Well done!

Next, Joe Sepi from IBM talked about getting involved in open-source communities and helping build better software together. His recollection was quite personal, because he had suffered the same prejudices all of us fear when delving into an unfamiliar project, framework or programming language. The take-home message? Never give up! Fork, commit, send PRs, make software better. Together.


I skipped Martin Wezowski‘s talk to save my (metaphorically) dying stomach, but made it to the presentation from Angie Jones (Twitter). She’s an incredibly engaging speaker and the points she raised really resonated with me. All of us write (or should write!) unit and functional tests. However, how do you test a machine learning algorithm or neural network? How do you simulate a client of a shop app, or a human target of an image recognition module? It turns out that when dealing with people, machine learning can prove finicky and extremely error-prone. Actually, to the point where it’s funny. Until we begin discussing morbid matters like “How many kids need to jump in front of an autonomous car for it to slide off a cliff and kill its passengers? 2? 5? 6?” or “Why does an image recognition application recognize people of darker skin tone as gorillas? Was there racial prejudice in selecting the test image sets?” 10 points to Angie Jones for the important lesson!
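To make the question a bit more concrete, here is a toy sketch of what testing a model can look like (entirely my own illustration, not code from Angie’s talk): rather than asserting exact outputs, you assert aggregate quality and simple invariances, with scikit-learn’s iris dataset standing in for a real model.

```python
# Toy sketch of ML testing: assert aggregate quality and invariances,
# not exact outputs. Dataset/model choices are illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def train_model():
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, X_test, y_test


def test_accuracy_above_threshold():
    model, X_test, y_test = train_model()
    # The model is "good enough" only if it clears a pre-agreed bar.
    assert model.score(X_test, y_test) >= 0.9


def test_predictions_stable_under_tiny_noise():
    model, X_test, _ = train_model()
    noisy = X_test + np.random.default_rng(0).normal(0, 1e-6, X_test.shape)
    # Negligible input noise should not flip any predictions.
    assert (model.predict(X_test) == model.predict(noisy)).all()
```

Run with pytest; the hard part in real projects is agreeing on the thresholds and invariances, not writing the asserts.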

The next talk was given by Diana Vysoka, a young developer advocate working for the We Are Developers World Congress organization. On the one hand, I feel quite old seeing teenagers get into programming. On the other hand, it is encouraging for our civilization’s future. Listening to people like her makes me still want to live on this planet.

Eric Steinberger (right) making convolutional neural networks plain and simple!

If Diana is a rising star, Eric Steinberger has already been one for some time. A math and IT prodigy, he can explain extremely complex concepts in such simple words that even an old fart like me can comprehend them. He believes that AGI (Artificial General Intelligence) is possible, and I believe him. After all, how do we define the requirements for AGI, compared to a standard neural network, which can already be purposed for almost any task? Obviously, we should aim higher than simple bio-mimicry. As humans we’re flawed and our potential is limited. Let’s not unnecessarily handicap the development of AI!

Finally, the last talk. Enter Joel Spolsky, the co-founder of Stack Overflow! I attended his talk last year and was ready for more awesomeness. Joel delivered. Continuously. His anecdotes and stories provided a perfect closure to the Congress. It’s great to be a software developer and to meet so many amazing people in one place. See you there next year!

We Are Developers 2018 – Day 2

Day 2 of the We Are Developers World Congress is done (at least for me, since I don’t have enough stamina for both the after-party and another full day of talks). Compared to day 1, I made some progress on the food and water front. The local grocery store, Hofer, proved extremely useful. Armed with bacon buns and non-sparkling water, I was ready for more developer-flavored bliss!

Alas, the first presentation was slightly disappointing. Instead of a talk about accelerated learning, I got a lecture on how learning works, from which I learned nothing. Thankfully, the second talk fully compensated for the shortcomings of the first one. Enter Brenda Romero – one of the legends of game development (think Wizardry 1-8). This talk was doubly important for me, because I would really love to join the game development “circus”, but I’m not yet sure whether I have the guts (or a “more-than-mellow” liver). I’m still not sure, but the take-home message was crystal clear – just do it! Brenda had a lot of important things to say regarding not giving up and not taking comments from others too personally. The audience can be brutal and vicious, and the gaming industry itself is tough. At least I know what I’m up against!

Brenda Romero (centre) talking about her childhood toy assembling endeavors

Numero tertio was a continuation of the game development goodness. I originally intended to attend the AI talk by Lassi Kurkijarvi, but John Romero. I don’t think I need to say more to anyone who has at least heard of Quake or Doom. It was not a replay of last year’s talk, mind you! Rather, we got the full story of Doom’s development, which to me was both interesting and inspiring. John Romero is an amazing game developer, and the pace at which he, John Carmack and the other programmers at id Software produced Doom was simply dazzling. While modern games are of course a lot more complex, developers in the early 1990s didn’t have the tools we now take for granted, such as SDKs or version control.

John Romero (centre) on developing and shipping Doom

Later on, it all just spiraled! I lost track of the talks a bit, since there was some major reshuffling in the schedule. The presentation from Tessa Mero on ChatOps at Cisco was quite interesting. I do use Slack and various IRC clients, but there is definitely a growing need for ChatOps and its integration with the software development cycle. I wasn’t fully aware of that, to be completely honest. Next, Tereza Iofciu from mytaxi gave us a tour of machine learning and showed us the importance of computer algorithms in predictive cab distribution planning. It wasn’t about self-driving cars or reducing manpower, but rather about reducing the load on drivers and improving customer satisfaction. Computer-accelerated supply and demand, so to speak.

In the afternoon I took an accidental detour to a book-signing event hosted by John and Brenda Romero. Not only did I get a chance to talk to them personally (*heavy breathing!*), but I also got a copy of Masters of Doom signed (*more heavy breathing!*). John said that if I read it, I’d definitely get into game development professionally. I’m completely embracing the idea as I type this. One of the last talks I attended was given by Yan Cui, on how he used an implementation of the Akka actor model (together with Netty) to solve latency issues in a mobile multiplayer game (an MMO, specifically). Obviously, it was a success, and his convincing speech makes me want to try it out. It’s about concurrency, but without the overhead of traditional multiprocessing and/or multithreading. Although I don’t code in C# just yet, there is a Python implementation of Akka, which was recently recommended to me.
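To give a flavor of the actor model in Python, here is a minimal sketch using Pykka, one Python actor library modeled after Akka (I am not certain it is the implementation that was recommended to me, and the PlayerSession actor and its messages are invented purely for illustration):

```python
import pykka  # pip install pykka


class PlayerSession(pykka.ThreadingActor):
    """One actor per player; its state is only ever touched by one message at a time."""

    def __init__(self, player_id):
        super().__init__()
        self.player_id = player_id
        self.score = 0

    def on_receive(self, message):
        # Messages are plain dicts here; the return value becomes the reply to ask().
        if message.get("command") == "add_score":
            self.score += message["points"]
            return self.score


if __name__ == "__main__":
    session = PlayerSession.start(player_id=42)
    print(session.ask({"command": "add_score", "points": 10}))  # -> 10
    print(session.ask({"command": "add_score", "points": 5}))   # -> 15
    session.stop()
```

The appeal is that no explicit locks are needed: each actor processes its mailbox sequentially, so concurrency comes from having many actors, not from shared mutable state.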

Yan Cui (centre) explaining message relays in the actor model of concurrent programming

In summary, it was great to meet like-minded folks and actually talk to fellow game developers, who like challenges and don’t shy away from trying out new approaches to software design. Perhaps that’s what I’m looking for – challenges? Stay tuned for more exciting impressions from day 3 of the Congress!


We Are Developers 2018 – Day 1

To begin with, I attended the We Are Developers World Congress last year (2017) and was quite amazed by it. I got to see John Romero, the legend of game development and co-creator of titles such as Wolfenstein 3D, Doom and Quake. Actually, the congress inspired me so much that I decided to finally part with my scientific career and pursue life as a software developer and/or system administrator (a bit of both, in reality). To the point, though. The We Are Developers World Congress is a fairly novel venture, and even the Internet knows very little about it outside of the main website and the occasional blog post. It hasn’t become a tradition just yet, so media coverage is patchy at best. Considering how quickly it is growing (2000 attendees last year, 8000 registered attendees this year!), I decided to cover it myself.

The logos from the 2017 and 2018 editions

The Congress started with a treat right away – a fireside chat between Monty Munford and Stephen Gary Wozniak (Steve Wozniak, The Great Woz). It was intended as a casual interview, but The Woz proved to be exactly the person depicted in the 2013 Ashton Kutcher movie Jobs. Steve Wozniak is extremely chatty and simply adores talking about himself, so it was only natural for him to dominate the discussion, slightly to the detriment of the “chat” aspect of the event. I enjoyed it nevertheless. Many important points were raised – the economy of social media (should we not get a fair share of the profit made by Facebook and Google off our personal data?), the “I” in “Artificial Intelligence” (it’s not really “intelligence” if it’s programmed!), Elon Musk (Tesla fails to deliver, year after year…), etc. It was somewhat surprising to see that Steve Wozniak hasn’t really changed since the crazy ventures of his teen years with Steven Paul Jobs. Quite the amazing spirit!

Monty Munford (left) having a fireside chat with the Great Woz (right)

The fireside chat was followed by an interesting talk from Joseph Sirosh of Microsoft. He talked about the various machine learning tools offered as part of Microsoft’s Azure cloud platform. To be honest, I am extremely skeptical of Microsoft’s ideas, especially when they concern open-source software that is supposedly open to the public. Microsoft has a disappointing track record of using the embrace, extend, extinguish tactic against promising software projects, and a sinusoidal quality trend for its flagship product – Windows. Accordingly, I took the “with a bucket of salt” approach. The mood among other attendees was similarly negative. Unnecessarily, though! Azure’s machine learning tools seemed very promising in the end. I am considering using them for some of my projects.

After the lunch break I joined the Headless CMS track and, after an initial, slightly disappointing talk, I was enthusiastic about Jeremiah Lee and his JSON API idea. REST APIs are a big part of the Web nowadays, and ever increasingly so. We do need a slightly more elaborate and efficient data format standard built on top of the venerable JSON. At that point I realized that, unlike in the Web development track last year, programming language animosities were absent this time. The implementation is irrelevant to the standard if we all agree on its importance! The last talk I attended in the Headless CMS track was given by Kaz Sato from Google. The topic was machine learning again, this time leveraging Google’s AutoML platform and TensorFlow. Machine learning is actually one of the main themes of this year’s edition of the We Are Developers World Congress. It’s very clear that we need it!
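For readers who haven’t come across it, this is roughly the shape of a JSON API-style response, written out as a Python literal (a hand-rolled example based on my reading of the jsonapi.org specification, not a snippet from the talk; the resource types and fields are made up):

```python
# A hand-rolled example of a JSON API-style document (per jsonapi.org),
# expressed as a Python literal; resource types, ids and fields are invented.
article_response = {
    "data": {
        "type": "articles",
        "id": "1",
        "attributes": {"title": "Games Then and Now"},
        "relationships": {
            "author": {"data": {"type": "people", "id": "42"}},
        },
    },
    # Related resources can be bundled in one response to avoid extra requests.
    "included": [
        {"type": "people", "id": "42", "attributes": {"name": "Jeremiah"}},
    ],
}
```

The point of the standard is that every API shaped like this can share client tooling, regardless of the language it was implemented in.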

Joseph Sirosh (centre), showcasing MS Azure AI services and APIs

To sum up, based on the various talks I attended, I am beginning to form a vision of the future of computers. We started with humongous, clunky mainframes and progressed into the personal computer era with contributions from Steve Wozniak, Steve Jobs and many others. However, the dichotomy is returning. Computers are turning into mobile “enabling” devices, which aid us in our daily tasks and ease our interaction with the world (and each other). Heaps of data at our fingertips! For that, however, we need a back-end, an infrastructure of powerful servers to store data and organize it in an accessible way. In between, of course, sits a robust network which carries the data from the back-end to us, the clients/users.