Found in 1 comment on Hacker News
prohobo · 2025-05-24 · Original thread
Agreed with most of this except the last point. You are never going to make a foundational model yourself, although you may contribute to one. Those foundational models are the product, yes, but to use an analogy: foundational models are like state-of-the-art 3D renderers in games. You still need to build the game. A single renderer gets used or licensed by many games.

Even the basic chat UI is a structure built around a foundational model; the model itself is stateless and has no capability to maintain a chat thread. The model takes context and outputs a response, every time.
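To make that concrete, here is a minimal sketch of the scaffolding a chat UI provides. Everything here is illustrative: `fake_model` is a hypothetical stand-in for a real completion API call, and the message format is invented for the demo. The point is that the application resends the accumulated history on every turn, because the model remembers nothing between calls.

```python
def fake_model(context: str) -> str:
    # Stand-in for a real model call: a pure function from context to text.
    return f"(reply conditioned on {len(context)} chars of context)"

class ChatThread:
    """The 'chat' abstraction lives here, in the application, not in the model."""

    def __init__(self, system_prompt: str):
        self.messages = [("system", system_prompt)]

    def send(self, user_text: str) -> str:
        self.messages.append(("user", user_text))
        # Flatten and resend the WHOLE thread each time; the model itself
        # keeps no state between calls.
        context = "\n".join(f"{role}: {text}" for role, text in self.messages)
        reply = fake_model(context)
        self.messages.append(("assistant", reply))
        return reply

thread = ChatThread("You are a helpful assistant.")
thread.send("Hello!")
thread.send("What did I just say?")  # answerable only because history was resent
print(len(thread.messages))  # → 5 (system + 2 user + 2 assistant turns)
```

Real chat products add more on top (context-window truncation, summarization of old turns, retrieval), but it is all the same idea: the application curates what the stateless model sees.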

For more complex processes, you need to carefully curate what context to give the model and when. There are many applications where you can say "oh, ChatGPT can analyze your business data and tell you how to optimize different processes", but good luck actually doing that. It requires complex prompts and sequences of LLM calls (or other ML models), mixed with well-defined tools that enable the AI to return a useful result.
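The orchestration described above can be sketched as a simple agent loop. This is a hedged illustration, not any particular framework's API: `plan_step` is a hypothetical stand-in for an LLM call that decides the next action (real systems parse tool calls out of model output), and the `revenue_by_quarter` tool returns canned demo data.

```python
from typing import Callable

# Well-defined tools the model is allowed to invoke.
TOOLS: dict[str, Callable[[str], str]] = {
    "revenue_by_quarter": lambda year: "Q1: 120k, Q2: 95k",  # canned demo data
}

def plan_step(context: str) -> dict:
    # Stand-in for an LLM call. A real model would read the context and
    # choose the next action; here we hard-code one tool call, then finish.
    if "TOOL RESULT" not in context:
        return {"action": "tool", "name": "revenue_by_quarter", "arg": "2024"}
    return {"action": "final", "text": "Q2 revenue dropped; investigate churn."}

def run_agent(question: str, max_steps: int = 5) -> str:
    context = f"QUESTION: {question}"
    for _ in range(max_steps):
        step = plan_step(context)
        if step["action"] == "final":
            return step["text"]
        # Execute the requested tool and feed the result back into the
        # context for the next model call -- the "sequence of LLM calls"
        # mixed with tools described above.
        result = TOOLS[step["name"]](step["arg"])
        context += f"\nTOOL RESULT ({step['name']}): {result}"
    return "step budget exhausted"

print(run_agent("How is revenue trending?"))
```

The hard engineering is in what this sketch glosses over: deciding which tools exist, what context each call sees, and how to validate the model's output before acting on it.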

This forms the basis of AI engineering - which is different from developing AI models - and this is what most software engineers will be doing in the next 5-10 years. This isn't some kind of hype that will die down as soon as the money gets spent, a la crypto. People will create agents that automate many processes, even within software development itself. This kind of utility is a no-brainer for anyone running a business, and hits deeply in consumer markets as well. Much of what OpenAI is currently working on is building agents around their own models to break into consumer markets.

I recommend that anyone interested in this read this book: https://www.amazon.com/AI-Engineering-Building-Applications-...