AI, Lisp and Why Languages Die

In my exploration of things arcane and mythical, I stumbled upon a forgotten book by Peter Norvig – Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. My IT colleague was delighted to see it and highly recommended studying Common Lisp as a sort of meta-language. As programming languages interest me greatly and my understanding of functional programming is rather lacking (I loathe Python lambdas), I decided to give it a go. What I discovered was a language with an unusual syntax (defining all structures as lists of objects), yet with real potential for writing useful tools efficiently. Then I learned that various Lisp dialects were heavily used during the computer boom of the 1960s–80s, when the US government would pump billions of dollars into military and NASA projects (machine learning, AI algorithms, behavioral simulations, etc.). The trend died down in the early 1990s, and with it the Lisps either gave rise to modern languages like Clojure (also from the Lisp family) or simply disappeared. From the old generation, Scheme and Common Lisp are still in use, though less and less by the day.

Artificial Intelligence has always been an extremely vital (and interesting) field of computer science. In fact, now more than ever, as the rapid growth of the Internet forces us to develop smart tools to sift through the wild abundance of information in real time. No wonder projects like Alexa (Amazon) or Cortana (Microsoft) are on the rise. In my opinion, there are two crucial aspects of AI that garner much interest – human language interfaces (how to make programs understand us humans in all or most of our natural languages?) and intelligent filtering algorithms (how to make programs increasingly aware of our human needs and able to remember them?). The second aspect involves machine learning, which delves into data extrapolation, approximation and the progressive nature of filtering algorithms. It all boils down to making computers more human and having them do some (most?) of our work for us. There are many quite realistic pitfalls, of course, like algorithms deciding that humans are the limiting factor in making our (human) lives easier. When we consider emotions as a complete contradiction to reason, this makes perfect sense. Unpredictable humans are the weakest link in an approach that relies on predictable values.

Going back to Lisp and its dialects: after its inception in 1959, it quickly became the language of choice for writing mathematical algorithms, especially in the field of AI. It was clear that the Lisp S-expression syntax makes code easy to read and that the language itself has a strong propensity for evolution. From more modern times (1990–2000) there are plenty of success stories on how Lisp saved the day. Finally, Lisps pioneered crucial concepts like recursion, concurrency and interactive programming (the famous REPL, or read-eval-print loop, nowadays a common feature of Haskell, Python and other languages). Taking all of this into consideration, it is quite difficult to understand why Common Lisp (the standardized Lisp effort) stopped being the hot stuff. Some of the sources I found mentioned that Lisps were pushed aside for political reasons. Budget cuts made a lot of NASA projects struggle for survival or meet a swift demise. Also, new cool languages (*cough* *cough* Perl) came to be, and Lisps were supposedly too arcane to be picked up and used easily. However, to me Common Lisp seems far less verbose (obfuscated?) than, for example, Java, and far more orderly than said Perl. Its performance is also supposedly on par with Java, which might interest people who would like to write useful tools quickly (as quickly as in Python, for instance), yet not get into the memory-management details of vanilla C or C++ for better performance.

The truth is that no language is really dead until it becomes naturally obsolete. Even if it suddenly loses enterprise backing. While Lisps have some viable descendants, one would be hard pressed to find a language that directly supersedes Lisps. There are of course multiple functional languages that share Lisps’ strengths, yet they typically sport a vastly less approachable syntax, devoid of easily readable S-expressions. Therefore, I believe Scheme, Common Lisp and other modern Lisps deserve not only attention, but also proper appreciation.


Skype for Linux Woes

First and foremost, I would like to thank Microsoft and contributors for considering our measly 1+% of desktop coverage and beginning work on the Skype for Linux desktop app. Frankly, the previous Skype 4.2 and 4.3 releases felt like rotten meat scraps thrown at a dog (or penguin) and, due to their lack of security updates, were as dangerous to Microsoft as to Linux users. However, one should stay positive, as there is light in the form of Skype for Linux Beta. Overall, the app is quite polished and doesn’t have too many graphical glitches. Alas, there are still some things to consider:

  • Only .deb and .rpm packages are available. This strange practice is quite common and I still don’t understand why a tarball with the binary + libraries is not offered alongside. There are many Linux distributions that use other packaging formats. Are vendors afraid that some hacker might reverse engineer the binaries? On top of this, Debian .deb archives can be quite different from Ubuntu .deb archives. Same with Fedora and openSUSE.
  • The website states “Video call your contacts.” What it doesn’t mention is that this option is not available just yet. We learn this the hard way when calling our Windows-using parents. No talking heads for now, unfortunately.
  • Close to no debugging capabilities. This one pains me the most. The app is clearly marked as a beta version, so one might assume that each user is in fact a beta tester. Any and all feedback would clearly be valuable to Microsoft. Yet the app doesn’t log any information to the terminal by default, and no “something went wrong, please send us this pre-formatted report” feature is available. I mean, really.
  • Bad software design is bad software design. I accidentally kept typing in the wrong account name, forgetting that the domain for users is @hotmail, not @microsoft. Obviously, the account didn’t exist, but instead of telling me this, the app would show a glitched error screen with a “Sorry…an unexpected error occurred”. This is so Windows 98, honestly. No traceback, no nothing. Also, the “Sorry” and a cryptic error reference ID don’t really help anyone. What’s up with this attitude?
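As a side note to the packaging complaint above, both formats are plain archives, so users of other distributions can unpack them by hand. A sketch (skypeforlinux-64.deb is an assumed download name, and the member names may differ between packages):

```shell
# A .deb is an ar archive wrapping control and data tarballs;
# the data tarball holds the actual files (./usr, ./opt, ...).
ar x skypeforlinux-64.deb        # extracts data.tar.xz among others
tar -xf data.tar.xz              # unpack the payload into the current directory

# The .rpm equivalent:
# rpm2cpio skypeforlinux-64.rpm | cpio -idmv
```

This doesn’t resolve dependencies, of course, but it at least gets the binary and its bundled libraries onto a non-deb/rpm system.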

They’re on the right track, but plenty of polishing is needed. Truth be told, no piece of software shines from the get-go. Uncut diamonds need cutting. That’s just the way the world works.


Fedora 26 – RTL8188EU the Hard Way!

Following my former entry preaching the greatness of Fedora 26, I decided to share some wisdom regarding USB wireless adapters (aka dongles) with the Realtek RTL8188EU chip. These and many other Realtek-based (usually RTL8188EU and RTL8192CU) adapters are affordable and extremely common. Companies like Hama, Digitus, TP-LINK and Belkin fit them into the cheapest 150N and 300N dongles, claiming that they’re compatible with Linux. In principle, they are. In practice, the kernel moves so fast that these companies have problems keeping up with driver updates. As a result, poor-quality drivers remain in the staging kernel tree. Some Linux distributions like Debian and Ubuntu include them, but Fedora doesn’t (for good reasons!), so Fedora users have to jump through quite a few hoops to get them working…

The standard approach is to clone the git repository for the stand-alone RTL8188EU driver, compile it against our kernel + headers (provided by the Linux distribution of choice) and load it with modprobe if possible. Alas, since the stand-alone driver isn’t really in sync with the kernel, it often requires manual patching and is in general quite flaky. An alternative, more Fedora-like approach is to build a custom kernel with the driver included. The rundown is covered by the Building a custom kernel article from the Fedora Wiki. All configuration options are listed in the various kernel-*.config files (standard kernel .config files prepped for Fedora), where “*” denotes the processor architecture. Fortunately, we don’t have to mess with the kernel .configs too much – merely add the correct CONFIG_* lines to the “kernel-local” text file and fedpkg will add them prior to building the kernel. The lines I had to add for the RTL8188EU chip:

# 'm' means 'build as module', 'y' means 'build into the kernel'
CONFIG_R8188EU=m
CONFIG_88EU_AP_MODE=y

These lines will differ depending on the Realtek chip in question, and a failed build will indicate which line in the kernel .config was not enabled when it should have been. Finally, if you do not intend to debug the kernel later on, make sure to build only the regular kernel (without the debug kernel), as building both takes quite some time.
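For reference, the whole round trip looks roughly like this – a sketch, with the branch name and exact steps being assumptions (consult the Building a custom kernel wiki article for your release):

```shell
# Sketch of the custom-kernel route (assumes fedpkg and the kernel
# build dependencies are installed; "f26" is the Fedora 26 branch).
fedpkg clone -a kernel && cd kernel     # anonymous clone of the kernel dist-git
git checkout origin/f26                 # match the installed release

# Append the driver options to the kernel-local override file;
# fedpkg merges these into the generated kernel-*.config files.
cat >> kernel-local << 'EOF'
CONFIG_R8188EU=m
CONFIG_88EU_AP_MODE=y
EOF

fedpkg local                            # build the RPMs for the local arch
```

The resulting RPMs land in an architecture-named subdirectory and can be installed with dnf like any other local package.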


Fedora 26 Beta – the Killer Distro

Lately, I have been distro-hopping way too much, effectively lowering my output of Java bytecode. However, that’s over, at least for now. I jumped from Lubuntu to Fedora to openSUSE to Lubuntu again, and, long story short, I ended up with all of my computers (but not my family…) converted to Fedora 26 Beta. One might think it’s too soon, since Fedora 25 is not end-of-life just yet. Too soon for the faint of heart maybe, but not for a geeky daredevil such as myself!


A cute Fedora doggie courtesy of the Internet

I tested Fedora 26 Beta as an upgrade from Fedora 25 on a legacy Dell Latitude E5500 (32-bit Intel Pentium M 575 + Intel GMA graphics), an aging and equally legacy MacBook 2008-2009 (64-bit Intel Core 2 Duo + nVidia GeForce 9400M GT) and yet another fossil PC – the Fujitsu-Siemens Celsius R650 (Intel Xeon E-series + nVidia GeForce 295 GTX). Each installation used the Fedora 25 LXDE spin as its base to keep things similar. No issues whatsoever, even though I heavily rely on the RPM Fusion repositories for nVidia and Broadcom drivers. This stands in stark contrast to any attempt to update Lubuntu or any Ubuntu spin I have tried thus far. My apologies beforehand, but my personal experience with Ubuntu and its children has been lacking on all fronts – upgrading to a new release (even if it’s an LTS!) is like bracing for a tsunami. It will hit, hard. It seems that the dnf system-upgrade plugin has been perfected and is ready for shipping.

Fresh installations of Fedora 26 Beta with LXQt were done on two PCs – an ASUS VivoBook S301LA (Intel Core i3 + Intel HD 4600 graphics) and an HP-Compaq Z200 workstation (Intel Xeon E-series + nVidia Quadro FX 1800). This time I used the Workstation-flavor netinstall disc image as the base. Again, only positive surprises here. All of the core Qt apps worked as intended. I was especially curious about QupZilla, since it would often crash on other distributions (same with the webkit-gtk-based Midori, in fact). I managed to write this entry/article without a single crash. I believe it is a testament not only to the various Fedora teams, but also to the QupZilla, Qt and LXQt people who keep pushing forward with awe-inspiring zeal. Props, kudos and cheers!
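For completeness, the upgrade path boils down to a handful of commands – a sketch, assuming the plugin package is named dnf-plugin-system-upgrade as on current Fedora releases:

```shell
# Offline release upgrade via the dnf system-upgrade plugin.
sudo dnf upgrade --refresh                         # start from a fully updated F25
sudo dnf install dnf-plugin-system-upgrade         # the upgrade plugin
sudo dnf system-upgrade download --releasever=26   # fetch the F26 package set
sudo dnf system-upgrade reboot                     # reboot into the offline upgrade
```

The actual package swap happens offline during the reboot, which is a large part of why these upgrades are so much less tsunami-like.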

Fedora 26 Beta is a great sign that Linux can into space. The experience is bug-free, solid and developer-ready, so I can return to taxing the OpenJDK JVM with peace of mind. As a matter of fact, I am beginning to like Qt as a GUI framework, and I am considering contributing to the Fedora project more ardently. They continuously provide me with great tools; I want to give something in return. We all take, we all should give.

The Kernel, the Kernel’s to Blame!


When setting up my Raspberry Pi 3 recently, I experienced quite a few woes concerning out-of-the-box detection of SD cards. One might expect an SD card slot to be nothing more than a USB-like interface. In theory, yes; in practice, quite a few distributions have problems accepting that fact. Gentoo failed me numerous times, though partially because I decided to go for an extremely slim kernel config. Manjaro also surprised me in that respect – the SD card was detected, but not as a USB drive (and thereby not mountable). Fedora and Lubuntu had no problems. Each distribution uses a different set of graphical utilities and desktop environments, so users often blame the front-end setup. That’s wrong, though, because the inability of a system to detect a piece of hardware has everything to do with the kernel configuration. Indeed, the kernel’s to blame.

I personally prefer the Arch approach – almost everything as modules. Although this could add significant overhead due to the way modules are loaded, in reality it makes Arch-based systems very light on resources. After all, what’s not in, doesn’t get loaded at all. The drawback is that the distribution or the user is required to ascertain that the initramfs is complete enough to allow a successful boot-up. The alternative is to integrate as many drivers as necessary into the kernel, though that of course makes the kernel bulky and isn’t always the optimal solution. There is a lot in-between that unfortunately causes weird issues like the one I experienced.
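A quick way to see which camp a given distribution falls into is to inspect its shipped kernel config. A sketch, with file locations as assumptions (Fedora and Ubuntu ship /boot/config-*; some kernels instead expose /proc/config.gz when built with CONFIG_IKCONFIG_PROC):

```shell
# Check how SD/MMC block device support was built: 'y' (built in),
# 'm' (module; must reach the initramfs to work at early boot) or absent.
CONF="/boot/config-$(uname -r)"
[ -r "$CONF" ] || CONF=/proc/config.gz    # fallback for in-kernel configs

case "$CONF" in
  *.gz) zgrep '^CONFIG_MMC_BLOCK=' "$CONF" ;;
  *)    grep  '^CONFIG_MMC_BLOCK=' "$CONF" ;;
esac
```

CONFIG_MMC_BLOCK=m on a system that still fails to mount SD cards usually means the module simply never got loaded – exactly the awkward in-between state described above.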

I think there should seriously be some consensus between distribution teams regarding what goes into a kernel and what doesn’t. Weird accidents can be avoided, and it’s down to individual teams to iron that out. Of course, one can go hunting for drivers on GitHub and try out five versions of a Realtek 8188eu driver, but why should the user be required to do so?

ARMing For the Future


Image taken from edn.com

For some time now I’ve been itching to get my hands on a Raspberry Pi single-board computer. Unfortunately, retailers like Saturn and MediaMarkt would shrug my inquiries off with a “we’re expecting them soon”. To my dismay, the “soon” seemed like it would never come. Surprising, since the computer geek culture is constantly expanding and the demand is definitely there. Finally, after months of waiting, the Pi arrived in Austria. I quickly armed myself (pun intended) with an RPi 3 model B, a Pi-compatible power supply (5.1 V, 2.5 A) and a matte black case. The rest I already had, since I collect various adapters, SD cards, etc. as a hobby. Always handy, it seems. Without further ado, though!

Get your geek on!

Contrary to my expectations, getting a Linux distribution to boot on the Pi was a bit of a hassle. Raspberry Pis don’t have a typical BIOS like laptops or desktop PCs. The firmware is extremely minimal – enough to control the on-board LEDs and hardware monitors, then swiftly proceed to booting from a flash drive (SD card, USB stick) or a hard drive. Therefore, one doesn’t actually install a Linux distribution on the Pi. Rather, one dumps it onto a disk and plugs that disk into a port on the Pi to get it working. There is a fine selection of dedicated distributions out there already – Raspbian, FedBerry, etc. Major projects like FreeBSD, OpenBSD, openSUSE, Fedora and Debian provide ARM-compliant images as well. It’s just a matter of downloading an image, putting it onto an SD card (8–16 GB in size, preferably) and we’re ready to go.
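The “dump it onto a disk” step is usually a single dd invocation. A sketch, with the image name and /dev/sdX as placeholders – double-check the device with lsblk first, as dd will happily overwrite the wrong disk:

```shell
# Decompress the downloaded image and write it straight to the SD card.
lsblk                                            # identify the card, e.g. /dev/sdb
xzcat some-armhfp-image.raw.xz \
  | sudo dd of=/dev/sdX bs=4M status=progress    # write the raw image to the device
sync                                             # flush caches before pulling the card
```

Some images ship as .img or .zip instead of .xz; the pipeline is the same, only the decompressor changes.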

Pushing the limits

Not everything is as smooth as it may sound, however. Some distributions like FedBerry suggest desktop environments and utilities that are clearly too much for the Pi to handle. Even the top-of-the-line Pi 3 model B is not strong enough to run Firefox smoothly. Part of the problem is the GUI-heavy trend in software design, the other part being the still-evolving design of the Pi. At the moment we’re at 1 GB of RAM – quite modest by today’s standards. With increasing hardware needs, more care should also be taken with the board itself. Both the CPU and the GPU will quickly overheat without at least a basic heat sink. I like ideas such as this one, which try to provide the extra add-ons to turn a Raspberry Pi into a full-blown computer. Personally, I use minimalist tools such as Vim/Emacs, Openbox and Dillo, so the limitations aren’t really there for me.

IoT for the future!

Truth be told, ARM-powered devices are everywhere. Though it’s a resurrected platform, people have not forgotten about the merits of RISC. Raspberry Pi is not the only Pi, nor is it the only single-board computer worth attention. With such an overabundance of almost-open-source hardware, one can do anything. Pi Zero computing cluster? Check. Watering device sensitive to solar light intensity? Check. Minecraft server? Check. NAS for the whole family? Check. It’s there, it’s cheap, it just needs a bit of geek – namely you!

Lessons on the Future of Technology

I attended the We Are Developers 2017 conference recently and returned from it completely changed. All of my former grudges and qualms are long gone, replaced by a strong need for Getting The Job Done. I wanted to share some of my observations so that people can hopefully avoid making similar mistakes in the future:

  1. It doesn’t matter which operating system (OS) you use. Philosophy and personal preference aside, it really doesn’t matter. You write .NET and C# software on MS Windows? Fine. You do Objective-C and iOS development in Xcode on macOS? Fine. You like Ubuntu, Fedora, openSUSE, this-other-super-lean-Linux-distribution, because it’s open-source? That’s also fine! In the end, we need extra libraries, frameworks, IDEs, etc. to code efficiently, regardless of the platform of choice. We use the same commonly accessible (often open-source) tools as well.
  2. Networking/collaborating is mandatory. Technology is moving faster than ever (as Martin Wezowski put it – exponentially) and we need to adjust our approach to its pace. We’re in this together, so we need to work together. Otherwise, we’re severely limiting ourselves, and I am sure that no one wants that. If you have or know of a piece of technology that may be useful to someone else, share it. Let’s help each other fulfill the one goal we all have – making the world a better place to live in.
  3. Software development doesn’t need to be hard. Long gone are the days of Fortran, Assembly and C89. To create useful programs you don’t have to struggle with manual memory management or arcane coding techniques anymore. There is Python, Ruby, Java, JavaScript and a load of utilities that aid us in our endeavors. In addition, we can teach ourselves and our offspring how to code effectively via Scratch and tinkering with ARM-powered devices such as the Raspberry Pi or BeagleBone.
  4. It doesn’t matter which language you code in. Most languages can handle SQL databases, file I/O, URLs, system signaling, I/O streams, etc. Pick one, learn the syntax, learn the libraries/frameworks and roll out great code. In the end it’s the goal that matters, not the language it’s fulfilled in.

Summarizing the empowering vibe I felt at the conference is difficult. It was simply incredible! I believe the main take-home message is Work together to make great things and a bright future for ourselves and the generations to come.