“Everything that can be invented has been invented.”
This famous quote is attributed to Charles H. Duell, Commissioner of the US Patent Office at the turn of the twentieth century. However, research suggests he never actually said it. Ironically, I feel the same way more than a hundred years later.
Even though the statement is obviously not true, that feeling struck me when I started thinking about the potential of using LLMs specifically as a computing unit. In theory, an LLM could replace all custom-written software: you type the idea into a prompt, and it outputs the result. All that's left to do is design a wrapper around an AI engine.
Instead of:
Code → Compile → Output
You could:
Prompt → AI Engine → Output
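As a minimal sketch of this idea, the whole "program" becomes a prompt handed to an engine. The engine below is a stub standing in for a real LLM API (the names `run_prompt` and `stub_engine` are illustrative, not from any library):

```python
from typing import Callable

# The "AI engine" is just any function from prompt text to output text.
# In practice this would call a real LLM API; a stub stands in for it here.
Engine = Callable[[str], str]

def run_prompt(prompt: str, engine: Engine) -> str:
    """Prompt -> AI Engine -> Output: the prompt is the whole program."""
    return engine(prompt)

# A stub engine so the sketch is self-contained and runnable.
def stub_engine(prompt: str) -> str:
    if "sum of 2 and 3" in prompt:
        return "5"
    return "(model output)"

result = run_prompt("Return the sum of 2 and 3.", stub_engine)
print(result)  # → 5
```

Swapping `stub_engine` for a call to a hosted model is the only change needed; the "compile" step disappears entirely.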
Hence, the value captured by writing software by hand will decrease, and the art of translating business logic into code will no longer be necessary. Instead, you will just describe the logic in plain English (or any other natural language), and the AI will output the corresponding code or result.
This is fascinating in terms of productivity: it can speed up the implementation of new ideas. Architecture, logic, and code all take time to craft. Now one can produce many software programs simply by specifying them, resulting in a tremendous increase in software-based solutions.
However, you need to be very specific about what you want to accomplish. Lay out all the context: the language, the framework, the tests, and any other relevant details. Don't spare anything, and plan ahead before any code is written; that planning can take as long as, or even longer than, writing the code itself. It's almost like doing behavior-driven development without a framework.
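To make "lay out all the context" concrete, here is one hypothetical way to assemble such a fully specified, BDD-flavored prompt. The helper name, fields, and template are my own illustration, not a standard:

```python
# A hypothetical helper that assembles a fully specified prompt.
# The field names and layout are illustrative, not a standard format.
def build_prompt(task: str, language: str, framework: str, tests: list[str]) -> str:
    spec = [
        f"Task: {task}",
        f"Language: {language}",
        f"Framework: {framework}",
        "Acceptance tests (behavior-driven style):",
    ]
    spec += [f"  - {t}" for t in tests]
    return "\n".join(spec)

prompt = build_prompt(
    task="Implement an HTTP endpoint that returns the current time as JSON",
    language="Python",
    framework="Flask",
    tests=[
        "GET /time returns status 200",
        "The response body contains an ISO-8601 'time' field",
    ],
)
print(prompt)
```

The point is that every decision the AI would otherwise guess at (language, framework, observable behavior) is stated up front, which is exactly the planning work that can rival writing the code itself.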
Even though LLMs are powerful enough to produce sound results from a prompt, they should be used as co-pilots to reduce repetitive tasks. Keep in mind that outsourcing your whole thought process can weaken your mind, like using a motorized wheelchair when you could walk: your legs grow weaker. Another point is that machines have no "skin in the game." They lack the basic survival instinct that forces us to learn as quickly as possible out of fear of death, so they suffer no consequences from a wrong result. Yes, you can correct the output, but the machine never risks anything, nor does it have any sense of how large an impact a single decision can have.
To close, here is an interesting take from ThoughtWorks' Technology Radar Volume 28:
“…the next generation of AI will take on chores to relieve technology workers, including developers, by replacing tedious tasks that require knowledge (but not wisdom).”
In conclusion, whenever something revolutionary comes along, a flood of anxiety follows, and with it, misinterpretation. LLMs are a great tool that can push countless areas forward. However, we need to understand that just because a model communicates well with us, that doesn't mean it understands what it is saying.
“AI has knowledge but not wisdom.”