Today, I would like to address a topic that affects us all in some way: is AI going to replace software engineers? This is not just a diffuse fear; I read and hear it from different actors across the tech industry. For instance, in March 2024, NVIDIA CEO Jensen Huang said that kids should stop learning programming because, in the future, people will describe their problem in natural language and AI will generate the code. AWS CEO Matt Garman told employees that most developers could stop coding soon, as AI will do most of that job and they can focus on other tasks.
These kinds of discussions are not new; in fact, we have seen them with many other changes in the past. No-code and low-code solutions were supposed to automate company workflows and reduce the need for custom software, software that generates floor plans was going to make architects obsolete, and do you remember the end of traditional banking thanks to blockchain? In all these cases, the prediction was that technological progress would make discipline X superfluous – just as AI is now supposed to replace software engineers.
What is Artificial Intelligence?
AI has its roots in the 1940s and 1950s, when Alan Turing introduced the idea of machines potentially thinking in his 1950 paper “Computing Machinery and Intelligence”. The first systems solved problems by applying logic and hand-crafted rules, but they were very limited, and an “AI Winter” period (1970s-1980s) followed. Later, from the 1980s up to the 2000s, Machine Learning and Neural Networks helped evolve applications in data mining, speech recognition and other expert systems. However, they were still very limited. Beginning in the 2010s, a major breakthrough took place in the field of “Deep Learning” – a subset of Machine Learning that uses Neural Networks with multiple layers to process vast amounts of data. For instance, in 2016 – for the first time in history and earlier than predicted – Google’s AlphaGo beat world champion Lee Sedol at Go, a traditional board game considered the most complex of all known strategy games in the world.
Beginning around the mid-2010s, the era of Large Language Models (LLMs) such as ChatGPT began. LLMs are a specific type of AI model designed to process and generate natural language by leveraging vast amounts of text data. To put this in context: a person who lives to be 80 years old and does nothing but read books their entire life would get through around 30,000 of them. An LLM – such as ChatGPT – is trained on the equivalent of about 570,000 books. So basically, an LLM has a lot of knowledge, can estimate the likelihood of subsequent words (or tokens) and generates results in the form of natural language.
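To make this “guessing the next word” concrete, here is a deliberately tiny sketch in Python – a counting-based toy model over a made-up corpus, nothing like a real LLM – that estimates which word is likely to follow another:

```python
from collections import Counter, defaultdict

# Toy next-word model: count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probabilities(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A real LLM does the same thing in spirit, but with a neural network over billions of parameters and a vocabulary of tens of thousands of tokens instead of a counting table.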
Why AI cannot replace humans
Take a moment to think about how LLMs work: a complex mathematical model predicts the next word – a token – that should follow the ones so far, as an answer to the prompt. The crucial word in this sentence is “predicts”. LLM results are not reproducible and often stay superficial. Run the same prompt twice and you will usually get two different answers. This contradicts what we traditionally understand as “computing” and makes LLMs less suitable for automation (in software engineering).
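One reason for this non-reproducibility is that LLMs typically do not always pick the single most likely token, but sample from the predicted distribution, often controlled by a “temperature” parameter. A minimal sketch in Python, with made-up vocabulary and scores:

```python
import math
import random

# Made-up next-token scores for illustration only.
logits = {"reliable": 2.0, "useful": 1.6, "unpredictable": 1.4}

def sample(logits, temperature=1.0):
    # Softmax over temperature-scaled scores, then draw one token.
    scaled = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(scaled.values())
    r = random.random() * total
    for token, weight in scaled.items():
        r -= weight
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

# The same "prompt" gives different answers on every run.
print([sample(logits) for _ in range(5)])
```

With the temperature near zero, the model would almost always pick the top token; at higher temperatures, the same prompt yields different continuations on every run.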
Furthermore, AI systems in general are only as good as the input we provide them. For instance, Tay, a machine-learning-based Twitter chatbot launched by Microsoft in 2016, started to tweet racist and sexist statements because it learned from users who deliberately fed it harmful content.
COMPAS, another algorithm, was supposed to help the US justice system predict the likelihood of a convicted person reoffending. However, research showed that the algorithm systematically categorized black defendants as higher risk than white defendants, although there were no corresponding differences in actual recidivism rates. Likewise, recruiting tools at big tech companies systematically disadvantaged women, and face recognition tools misclassified people with dark skin far more often than others.
Another point, which is only indirectly related, is the thirst for energy of AI systems. Reports say AI already uses as much energy as a small city and will likely double that by 2026. Other reports say that training GPT-3 required around 5.4 million liters of water.
AI only scratches the surface
AI systems are error-prone. Human supervision is required to monitor the results and adjust them if needed. But apart from that, there is another point we need to consider: today’s AI systems are all trained on public data. This allows AI to handle general tasks, but for specialized use cases – such as companies with proprietary data – additional training on custom, domain-specific data is necessary.
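What such additional training can look like varies; one common route is fine-tuning an open model on internal text. The following is a rough sketch using the Hugging Face libraries – the model choice, the placeholder file company_docs.txt and the hyperparameters are assumptions for illustration, not a recipe:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# A small open model stands in for whatever base model a company picks.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token

# "company_docs.txt" is a placeholder for internal, domain-specific data.
dataset = load_dataset("text", data_files={"train": "company_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the adapted model now reflects the private corpus
```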
Public data does not even scratch the surface of our modern and complex world. To get optimal results, as much data as possible would have to be accessible – including private data, company secrets and things like contracts. To get optimal results across an industry, companies would even have to feed their data equally into the same model. Even though there are approaches to federate this learning process, my personal opinion is that this will not happen: neither people nor companies are interested in making their data public.
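For completeness, the core idea behind such federation approaches (known as federated averaging, or FedAvg) is that each party trains locally and shares only model weights, never raw data; a coordinator then averages the weights. A minimal sketch with illustrative numbers:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    # Average the clients' model weights, weighted by dataset size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three companies with differently sized private datasets; each vector
# stands in for a locally trained model's parameters.
weights = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])]
sizes = [1000, 3000, 2000]

print(federated_average(weights, sizes))  # the merged "global" model
```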
Final Thoughts
The current events around AI are very important and more than just one step forward. But AI is not what it was made out to be over the last two years: remember when Elon Musk warned of the end of humankind (while founding an AI company in parallel), when some countries launched AI ministries, or when companies wrapped everything in “AI”? I think those times are over now.
Managers such as Jensen Huang or Matt Garman dream of a world in which they can replace expensive software engineers with AI – no investment, maximum profit. In reality, history has shown us that things do not simply disappear, but change their nature. AI will likely change the way developers write code – in fact, it already has. Many developers use tools like ChatGPT to generate boilerplate code, get new concepts explained and validate their own code. As AI takes over such chores, we spend less time on them, increase our productivity and free up time for other things.
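As an illustration of that boilerplate workflow, here is a minimal sketch using the OpenAI Python client; the model name and the prompt are illustrative assumptions, and the generated code still needs human review:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model for routine boilerplate instead of typing it by hand.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model works here
    messages=[{
        "role": "user",
        "content": "Write a Python dataclass for an Invoice with id, "
                   "customer, line items and a total() method.",
    }],
)

print(response.choices[0].message.content)  # review before committing!
```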
This is how it will progress. We do not know exactly how, when and why – but progress cannot be stopped. We can only adapt.