Software Design Be Like…

I recently stumbled upon a very accurate article from the always-fantastic Dedoimedo. I don’t agree with the notion that replacing legacy software with new software is done only because old software is considered bad. Oftentimes we glorify software that was never written properly and that, over the years, accumulated a load of even uglier cruft code as a means of patching core functionality. At some point a rewrite is a given. However, I do fully agree with the observation that a lot of software nowadays is written with an unhealthy focus on developer experience rather than user experience. Also, continuity in design should be treated as sacred.

One of the things that ticks me off (much as it did the Dedoimedo author) is when developers emphasize how easy it is for prospective developers to continue working on their software. Not how useful it is to the average Joe, but how time-efficient it is to add code. It’s nice, but it should not be emphasized so heavily. Among the many characteristics a piece of software may have, I appreciate usefulness the most. Even in video games, mind you! Anything that makes the user spend less time on repetitive tasks is worth its weight in lines of code (or something of that sort). Far too often, features that developers consider useful are not actually useful to regular users. Also, too many features are a bad thing. Build a core program with essential features only and mix in a scripting language so that users can add the features they require on a per-user basis. See: Vim, Emacs, Chimera, PyMOL, etc. Success guaranteed!
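The “small core plus user scripting” idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (all names are made up, not taken from Vim, Emacs, or PyMOL): the core ships only a command table and a dispatcher, and per-user scripts register extra commands, much like an editor’s startup file.

```python
class Core:
    """Tiny application core: a command table plus a dispatcher."""

    def __init__(self):
        self.commands = {}

    def register(self, name):
        """Decorator that lets user scripts plug new commands into the core."""
        def wrapper(func):
            self.commands[name] = func
            return func
        return wrapper

    def run(self, name, *args):
        if name not in self.commands:
            return f"unknown command: {name}"
        return self.commands[name](*args)

app = Core()

# Essential, built-in feature:
@app.register("open")
def open_file(path):
    return f"opening {path}"

# A per-user extension, as it might appear in a user's startup script:
@app.register("word-count")
def word_count(text):
    return len(text.split())

print(app.run("open", "notes.txt"))    # opening notes.txt
print(app.run("word-count", "a b c"))  # 3
```

The core never has to know which extensions exist; users who don’t need `word-count` simply never load the script that registers it.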

Another matter is software continuity. Backward compatibility should be considered mandatory. It should be the number one priority of any software project, as it’s the very reason people keep using our software. FreeBSD is strong on backward compatibility, and that’s why it’s such a rock-solid operating system. New frameworks are good. New frameworks that break or remove important functionality are bad. The user is king and should always be able to get their work done without obstructions or having to re-learn something. Forcing users to re-learn things every now and then is a *great* way of thinning out the user base. One of the top positions on my never-do list.

Finally, an important aspect of software design is good preparation. Do we want our software to be extensible? Do we want it to run fast and be multi-threaded? Such features should be considered exhaustively from the very beginning of the project. Otherwise, we end up adding piles of unsafe glue code and hackish solutions that break every time the core engine is updated. Also, never underestimate documentation. It’s not only for us, but also for the team leader and all of the future developers who *will* eventually work on our project. Properly describing I/O makes everyone’s job easier!
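As a small, hypothetical example of “properly describing I/O”: a docstring that pins down the input format, the output type, and the failure behavior saves every future developer a trip through the implementation. The record format here is invented for illustration.

```python
def parse_record(line):
    """Split one journal line of the form 'key=value' (hypothetical format).

    Input:  line -- a str such as 'title=Experiment 42'.
    Output: a (key, value) tuple of stripped strings.
    Raises: ValueError if the line contains no '='.
    """
    if "=" not in line:
        raise ValueError(f"malformed record: {line!r}")
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

print(parse_record("title = Experiment 42"))  # ('title', 'Experiment 42')
```

Three lines of docstring, and nobody ever has to guess what goes in, what comes out, or how the function fails.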

Unix and Software Design

Getting software design right is tricky business. It takes more than a skillful programmer and a reasonable style guide. For all of the shortcomings to be ironed out, we also need users to test the software and share their feedback. Moreover, it is true that some schools of thought are much closer to getting it right than others. I work with both Unix-like operating systems and Windows on a daily basis. In my personal experience, Unix software is designed much better, and there are good reasons for that. I’ll try to give some examples of badly designed software and explain why Unix applications simply rock.

The very core of Unix is the C programming language. This imposes a certain way of thinking about how software should work and how to avoid common pitfalls. Though simple and very efficient, C is an unforgiving language. It lacks built-in object-oriented concepts and exception handling. Therefore, early Unix programmers had to establish good software design practices quickly. As a result, Unix software is less error-prone and easier to debug. C also teaches how to combine small functions and modules into bigger structures in order to write more elaborate software. While modern Unix is vastly different from early Unix, good practices remained a driving force, as the people behind them are still around or have left a lasting impression. It is also important to note that the graphical user interface (the X Window System, today’s Xorg server) was added to Unix much later, and the system itself functions perfectly well without it.

Windows is entirely different, as it was born from more recent concepts, when bitmapped displays were prevalent and the graphical user interface (GUI) began to matter. This high-level approach greatly impacts software design. Windows software is specifically GUI-centred and as such emphasizes the use of UIs much more. Obviously, this is a matter of dispute, though personally I believe that good software comes from a solid command-line core. GUIs should be used when needed, not as a lazy default. To put it a bit into perspective…

My research group uses a very old piece of software for managing lab journals. It’s a GUI to a database manager that accesses remotely hosted journals. Each experiment is a database record consisting of text and image blocks. From the error prompts I have encountered thus far, I judge that the whole thing is written in C#. That’s not the problem, though. The main issue is that the software is awfully slow and prints the most useless error messages ever. My personal favorite is “cannot authenticate credentials”. Not only is it obvious that one could not log in, but the message contains no information as to why the login attempt failed. Was the username or password wrong? Lack of access due to server issues? Maybe the user forgot to connect to the Internet at all? Each of these should have a separate pop-up message with an optional suggestion on how to fix the issue. “Contact your system administrator” not being one of them!
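The kind of error reporting argued for above is cheap to implement. Here is a hedged sketch (all names and messages are hypothetical, not from the actual lab-journal software): instead of one opaque “cannot authenticate credentials”, the login routine distinguishes each failure cause and pairs it with a concrete suggestion.

```python
class LoginError(Exception):
    """Carries a user-facing message naming the actual failure cause."""

def login(username, password, server_reachable=True, server_up=True):
    # Check the causes in diagnostic order, most fundamental first.
    if not server_reachable:
        raise LoginError("No network connection. Check that you are online.")
    if not server_up:
        raise LoginError("The journal server is not responding. Try again later.")
    if (username, password) != ("alice", "secret"):  # stand-in credential check
        raise LoginError("Wrong username or password. Passwords are case-sensitive.")
    return "session-token"

try:
    login("alice", "wrong")
except LoginError as err:
    print(err)  # Wrong username or password. Passwords are case-sensitive.
```

Each branch costs one `if` statement, and the user learns whether to check the cable, wait for the server, or retype the password.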

The Price of Free Software…?
As part of my recent pursuit of GUI-heavy Linux operating systems, I decided to take a closer look at Fedora. According to its website, it is an extensive, feature-rich distribution built with software and hardware developers in mind. In principle it is a community-driven project, though thanks to Red Hat’s patronage it has become tremendously successful.

I found Fedora interesting because of its strict approach to licensed software. Considering its link to an entirely commercial entity such as Red Hat, the company behind Red Hat Enterprise Linux (RHEL), this is surprising, to say the least. Most Linux distributions offer licensed programs as part of their non-free (Debian), restricted (Ubuntu), etc. repositories. Accessing those repositories is not mandatory, but it is usually offered to improve the user experience. Many programs we know are licensed or even proprietary (‘non-free’ by definition), hence to me it was only natural to have access to them. Fedora makes it a tad more difficult, because its non-free repositories (RPM Fusion) are not available out of the box, as they conflict with Fedora’s licensing principles. Therefore, one has to add them to the package manager manually (either through the terminal or using a GUI). Though troublesome, I consider this an honest solution.

At this point it should be made clear that free software is not the same as open-source software. Free software has been very well defined by the Free Software Foundation (FSF), headed by Richard M. Stallman. Hence, software is truly free only when it complies with the Four Freedoms. Otherwise, it is merely open-source software. Note that the ‘free’ in ‘free software’ means ‘freedom’ (as in the French libre, liberté), not ‘lack of price’. Therefore, a free software developer may require a donation or fee for their work, though once it has been paid, the piece of software is completely free to use, to modify, to redistribute, etc.

To the titular question, though – what is the price of free software? Obviously, the direct cost of using a specific piece of free software is typically zero. However, once one decides to use free software exclusively, the practical cost is enormous. It requires the user to severely limit their options for work and leisure. A more detailed picture can be drawn with an example.

I have an old Intel-based MacBook Pro from 2008 with 64-bit Fedora 22 on it. I cannot use the AirPort Extreme Broadcom 4322 wireless chip, because only the proprietary broadcom-sta driver supports this specific chip revision (a common problem, in fact). Therefore, I am forced to buy a USB dongle with kernel support and permanently keep it in one of my two USB ports. Next, of the available browsers I can only use Midori, which is still very buggy and incomplete, or IceCat, Firefox’s libre equivalent. That leaves me with Firefox/IceCat and a teaspoon of plug-ins considered libre. Should I decide to commit to graphics design and 3D rendering, I would probably need proper drivers for my integrated Nvidia card (320M or 9400M) to get the most out of my outdated hardware. Not possible, because the default nouveau driver for Nvidia graphics cards is, frankly speaking, still light years behind its proprietary counterpart. Summing up, I’m left with a decent but grossly limited computer that feels more like a ‘demo version’ of itself.

In my opinion the concept of free software is very noble and definitely encourages good developer practices. Alas, for now it is completely utopian and will never gain full momentum as long as we, flawed human beings, continue to cling to our earthly principles, such as property and money.