Drew Breunig divides AI use cases into Gods, Interns, and Cogs:
- Gods: Super-intelligent artificial entities that do things autonomously.
- Interns: Supervised copilots that collaborate with experts, focusing on grunt work.
- Cogs: Functions optimized to perform a single task extremely well, usually as part of a pipeline or interface.
Each category comes with different costs:
- Gods may cost billions or trillions of dollars to develop. The feasibility of building them is unclear, as they don’t exist yet.
- Interns are free or cheap to use. The cost of building them can range from negligible (e.g., creating a custom GPT via prompting) to millions of dollars (e.g., Bloomberg spent millions building a specialised model on its finance data).
- Cogs are free or cheap to build and run, and they are reliable enough to work without supervision.
This categorisation is useful when considering startup and product ideas, as well as your own use of AI:
- Don’t build AI Gods, unless you have access to ungodly amounts of capital. This may be obvious to most people, but I still see pitches for model-centric startups, and God-building model companies are the focus of much AI hype.
- Build with AI Interns, as they can significantly increase your productivity. Interns include writing assistants, meeting transcribers, and programming copilots. Ignoring the wealth of AI interns is foolish for individuals and companies alike.
- Don’t build complex AI Interns, unless you have a use case that justifies the costs and risks. For example, Bloomberg’s multi-million dollar model was outperformed by GPT-4 within months of the model’s release, and it’s unclear whether it ended up powering any features.
- Build with and build AI Cogs, but ensure you can manage non-deterministic components in production (see the sketch after this list). This includes anything from traditional machine learning to the many AI tasks that have become commodified in recent years, such as object recognition and text summarisation.
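To make the point about managing non-determinism concrete, here is a minimal Python sketch of one common pattern: wrapping a cog with output validation, retries, and a deterministic fallback. The `summarise` function is a hypothetical stand-in for a real model call, and the validation shown is deliberately simplistic.

```python
import random

def summarise(text: str) -> str:
    """Hypothetical cog: a stand-in for a model call whose output varies between runs."""
    if random.random() < 0.2:  # simulate an occasional unusable response
        return ""
    return text.split(".")[0] + "."

def run_cog(text: str, max_retries: int = 3) -> str:
    """Wrap the cog with validation, retries, and a deterministic fallback."""
    for _ in range(max_retries):
        result = summarise(text)
        if result.strip():  # basic output check; real validation is task-specific
            return result
    return text[:100]  # deterministic fallback so the pipeline never hard-fails

print(run_cog("Cogs perform a single task well. They sit inside pipelines."))
```

The same pattern applies whether the cog is a classic ML model or a hosted endpoint: validate every output before it flows downstream, and decide in advance what happens when validation keeps failing.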
Above all, start by defining the problem and assessing the impact rather than making AI use your goal. Focusing on problems and solution impact is robust to hype cycles. I’ve considered this focus to be the hardest problem in data science since at least 2015. Amazon data scientist Dzidas Martinaitis has recently captured a similar sentiment in his flowchart for data science projects. Similarly, Douglas Gray and Evan Shellshear have found that data science and AI projects typically fail due to issues with strategy and process, rather than tech and people shortfalls.
Ignore this at your own risk.
Acknowledgement: This post was produced with the help of AI interns. One version of Gemini produced the cover image, while another made helpful suggestions on earlier drafts.