Linux Freedom
I first heard about Linux a couple of months after returning from the 1992 UN Earth Summit in Rio de Janeiro. I was working part-time as a Research Associate in one of the social research centers of my college while doing consultancies on my own – thus the Rio trip. The academic job guaranteed Internet access, which back then meant email, FTP, Telnet and Gopher, a menu-driven precursor to the Web. I also frequently used newsgroups and subscribed to quite a few. One day, I stumbled upon a post about a new operating system (OS) being developed by a Finnish programmer I had never heard of.
Six months later, I was working full-time in international development as the manager of a 40-plus-country Global Program trying to foster Internet access and use for development purposes. As none of those countries were connected to the Internet (nor had they heard much about it!), the idea was to deploy local servers connected via modems, using UUCP, to an Internet-linked central server in my office. The local servers also had to host local networks for clients running DOS or Windows, and to answer one or more modems accepting local calls from users who wanted remote access. The client side of the equation was easy to solve: I could combine Waffle, a UUCP-compatible DOS package, with Pegasus Mail. Both were freeware, albeit not FOSS.
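UUCP worked precisely because it assumed no permanent connection: traffic piled up in a local spool and was exchanged in batches whenever a call went through. The toy Python sketch below illustrates only that store-and-forward principle, not the actual UUCP tooling; all names and the message format are made up for illustration.

```python
import json
import pathlib
import time

SPOOL = pathlib.Path("spool")  # hypothetical local outbound queue
SPOOL.mkdir(exist_ok=True)

def enqueue(sender, recipient, body):
    """Store a message locally; nothing moves until the next dial-up."""
    msg = {"from": sender, "to": recipient, "body": body}
    (SPOOL / f"{time.time_ns()}.json").write_text(json.dumps(msg))

def flush(send):
    """Once a (modem) connection to the hub is up, forward the whole queue."""
    for path in sorted(SPOOL.glob("*.json")):
        send(json.loads(path.read_text()))
        path.unlink()  # remove only after a successful hand-off

# Queue a message while offline, then "connect" and forward it to the hub.
enqueue("user@local", "peer@hub", "Hello from an unconnected country!")
flush(send=lambda msg: print("forwarding to", msg["to"]))
```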
Options for the server side were limited, financially speaking. Microsoft was not in the running, as it offered no relevant products whatsoever. The company did not think much of the Internet, and when it finally did, it tried to build its own “Internet” via MSN. Sun Microsystems, on the other hand, was the most common platform for Internet access, but its top-notch hardware was beyond our budgets and required paid software licenses on top. Everything else was prohibitively expensive and impractical in any case. Using PC-based servers was vital, as most countries had local hardware support.
Facing this conundrum, I remembered the mysterious Finnish programmer who, coincidentally, had recently released Linux 0.99, available from various FTP servers, MIT’s included. Before making any decisions, I had to evaluate Linux: download the OS from the Internet, install it on a server, and play with it. The first step was simple, albeit time-consuming, as Internet speeds were anything but speedy. The installation was a total nightmare. I had to copy the downloaded Linux archive onto a sequence of 3.5-inch floppy disks and then use the newly minted collection to install. Success was not guaranteed, as hardware incompatibilities were prevalent, and I recall failing the first couple of times. But once I got it installed, I was blown away. The decision was a no-brainer – plus, Linux was FOSS! Fortunately, distributors such as Slackware and Yggdrasil soon started selling Linux on CD-ROMs, enormously facilitating installation.
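For readers who never enjoyed that ritual: the chore amounted to slicing one big archive into 1.44 MB pieces, one per disk. Back then one used DOS utilities for it; below is a toy modern equivalent in Python, with made-up file names, just to show the arithmetic.

```python
import pathlib

FLOPPY_BYTES = 1_474_560  # capacity of a high-density 3.5-inch disk

def split_for_floppies(archive: str, out_dir: str) -> None:
    """Cut a large archive into floppy-sized, numbered chunks."""
    data = pathlib.Path(archive).read_bytes()
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for i in range(0, len(data), FLOPPY_BYTES):
        part = i // FLOPPY_BYTES + 1
        (out / f"disk{part:02d}.img").write_bytes(data[i:i + FLOPPY_BYTES])

# e.g. split_for_floppies("linux-0.99.tar.gz", "disks")
```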
Nevertheless, I was shocked by how most people reacted to my no-brainer decision. Corporate IT staff and managers were against anything free. “If it’s free, it’s because it is bad” was their slogan, repeated ad nauseam. That came from people who had little to no idea how the Internet, built on “free” protocols, worked. I stood my ground using technical arguments and examples. More surprising was the reaction of the tech staff running the Internet link and the institution’s Sun servers and workstations. For them, SunOS was god-like and anything else was utter nonsense. Although they never demanded we switch, they usually poked fun at poor Linux and me. By 1998, Linux had become a renowned and influential platform. Former critics had no choice but to bite their misdirected bullets. And my team and I looked like tech wizards who could foresee the future. Bring it on!
AI and FOSS
25 years later, FOSS seems to be grabbing quite a few headlines again, thanks to the ongoing AI boom. The context, however, is dramatically different. As FOSS went mainstream after intense battles with Microsoft, epitomized by the infamous Halloween documents, many evangelists either changed careers or attracted little media attention. FOSS lost some appeal as the Internet saw the rise of mobile apps, social media, and open and big data. For example, a new cohort of programmers started to develop mobile apps either as freeware or available at a nominal cost. And no one was demanding to see the source code before using them. Instead, people were far more concerned about data use. Furthermore, the new generation of coders had little to no knowledge of FOSS, nor were they asking related questions. GitHub is the epitome of such developments.
Such displacement was not the result of some technological flaw. Not at all. Instead, the economic and socio-political context changed dramatically as the Internet and its innumerable tentacles occupied most, if not all, social interstices. The issues at stake were much more significant than source-code availability. Moreover, as my personal story above shows, FOSS was always a means to a much larger end, never the ultimate outcome. It was undoubtedly cool to have it at our disposal, but what mattered most was the on-the-ground impact of our development interventions.
AI’s resurgence in the early 2010s and the resounding success of Machine Learning (ML) and Deep Learning (DL) brought computer programming back to the forefront. However, as I argued in a previous post, AI programming differs substantially from the good old coding process. It rests on a triad whose parts must be available together to achieve “programming” success: in most of its flavors, AI demands that data, code and models act in concert throughout to reach the much-sought outputs. If any of the three is missing in action, getting results will be almost impossible. Moreover, the actual coding is geared towards developing a semi-autonomous computational agent, not towards serving end users directly, as in the case of Linux. Finally, leading-edge ML/DL models have been generated and/or supported by Big Tech, given the relatively large data and computational capacity required to undertake such tasks. Unlike Linux, the entry barriers here are much taller and thicker.
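A toy sketch, deliberately framework-free, makes the triad concrete: remove the data and the loop has nothing to fit; remove the code and the model never improves; remove the model and there is nothing to train or query.

```python
# 1) DATA: observations to learn from (points lying roughly on y = 2x).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

# 2) MODEL: a single learnable parameter (weight), initially uninformed.
w = 0.0

# 3) CODE: a training loop that nudges the model towards the data.
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad  # one gradient-descent step

print(f"learned weight: {w:.2f}")          # ~2.0: the model is the output here...
print(f"prediction for x=5: {w * 5:.2f}")  # ...and an input for inference later
```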
Data cannot be classified as Open Source, as there is no source code to read, study, modify and distribute. Data is instead either open or closed, which is not the same as public or private. Not all public data is open, and we often find open private data that can be used under specific licensing agreements, some of them part of FOSS’s legacy. Given the intense competition in the sector, most Big Tech data is private and not shared with anyone.
However, Big Tech has played a crucial role in making coding tools and AI models Open Source. On the tools side, ML/DL libraries such as TensorFlow, PyTorch and Keras are Open Source, released under permissive licenses (Apache and BSD), which simply means one can commercialize products created with them. Still, while we can use the libraries for free, that does not mean we also get access to data. On the contrary, we have programming tools that one can only use with access to data, usually big data, as the sketch below illustrates. In a way, releasing such libraries as Open Source could be considered self-serving, as the idea behind it was to increase the number of coders and developers who could master ML/DL.
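A minimal sketch of the asymmetry, assuming PyTorch is installed: everything below is free to run and even to commercialize, yet the random tensors standing in for training data are precisely the part that, in real projects, is proprietary and hard to obtain.

```python
import torch
import torch.nn as nn

# The tooling is free: PyTorch is BSD-licensed, so this code costs nothing.
model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# The data is not: random tensors stand in here for the proprietary corpus
# that, in practice, is the hard-to-get ingredient.
features, targets = torch.randn(512, 100), torch.randn(512, 1)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()
```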
Before the release of ChatGPT, state-of-the-art ML/DL models were private in nature and, given the nature of the technology, not geared towards mass consumption, except for the early chatbots and the various voice assistants that I always avoided at all costs. In that setting, the model is the output, so to speak. With the success of LLMs, the model, initially the output of a complex process, is now used as an input to generate new outputs (text, images, code, etc.) once prompted by end users. By design, the model thus requires mass consumption – as it can also learn from user interaction – for effective deployment. But that does not mean the model should be Open Source. On the contrary, one could argue that the success of the first version of ChatGPT triggered a new push to keep the code secret and even to hide from the public how GPT-4 was built. Unlike GPT-3.5, we do not know how many parameters GPT-4 has. Again, competition and profit maximization are the main drivers. And as I noted before, access to the model’s API became a source of revenue for the company. Others in the sector are following suit. And the Internet advertising revenue model might be at stake.
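To make the “model as input” and “API as revenue” points concrete, here is a minimal sketch against OpenAI’s public chat endpoint as it stood in 2023; it assumes a paid API key in the OPENAI_API_KEY environment variable.

```python
import os
import requests

# The model is now an input: send a prompt, get text back. Access is metered
# through an API key, which is exactly where the new revenue stream sits.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Explain UUCP in one sentence."}],
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```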
Open Source got a big push when Meta released LLaMA in late February 2023. The model, whose largest version has 65 billion parameters and was trained on 1.4 trillion tokens, seemed very competitive vis-à-vis GPT-3.5. However, the company did not release the weights – the parameter values that numerically encode the links among the millions of neural nodes – to the general public, only to vetted researchers. Those weights were soon leaked and became public knowledge, unleashing the tide of Open Source LLM innovation that the MIT article mentioned in the first part of this post neatly describes, so there is no need to repeat it here.
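What the leak changed in practice: with weights in hand, anyone can run, fine-tune and study such a model locally with Open Source tooling like Hugging Face’s transformers library. A sketch, with a hypothetical model identifier standing in for any LLaMA-derived checkpoint one is licensed to download:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "some-org/some-open-llm" is a placeholder: substitute any open-weights
# checkpoint available on a model hub you have access to.
model_id = "some-org/some-open-llm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open weights mean that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```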
Two issues not mentioned in the article are worth discussing. First, we should ask why Meta would take such a counterintuitive step. Such action also challenges the now pervasive view of a unified and monolithic “platform economy” dominated by private monopolies, intellectual or not. In my book, the step results from intense competition in the sector. Almost 98 percent of Meta’s revenues come from advertising; if that model is under attack, then action must be taken. Second, Meta is well aware of the risks of such a decision, so LLaMA’s code was released under GPL 3, while the weights carried a research-only license forbidding the commercialization of derivative products. As a result, many of the Open Source LLMs on the now long list available carry similar restrictions, with a few exceptions.
FOSS has indeed come a long way since the guerrilla days of the 1990s. Now, however, it seems to be playing a very different role, and unlike in the past, it will probably not be a stellar one. I agree that Open Source LLMs will never be at the leading edge, as Linux once was. But that does not mean we should ignore them. I believe Open Source LLMs can play the same role Linux played for me when we were trying to help connect the Global South to the Internet. Indeed, one does not need to lead from the start of the marathon, especially when starting late. Today, the stakes are much higher, and developing countries should be ready to join the race now – not because they want to win, but because they need to close those pervasive socio-economic and political gaps.
In that sense, they are running a very different marathon.
Raúl