The Limits of AI Pattern Recognition

October 30, 2025

Modern AI excels at pattern recognition. Like the human mind, machine learning algorithms—including regression, decision trees, and neural networks—make sense of complex, high-dimensional data. We’re at a point now where machine pattern recognition rivals that of humans. Seventy-five years ago, Alan Turing asked a simple question: “Can machines think?” Turing’s question forces us to examine what machines actually do. Computers have already surpassed humans at storing and retrieving information and at calculating problems within known mathematics. The sophisticated algorithms that engineers are writing today can look a lot like reasoning, understanding, or even thought. AI technology is incredibly powerful, but it has some fundamental limits.

When I was an undergraduate at the University of Minnesota Duluth taking linear regression, my teacher told us something important: “Don’t extrapolate beyond the range of your data.” She was right then, and it’s still true today. Statisticians have known this for decades. In ML terms, this is the training distribution problem: you can interpolate within your data, but you can’t reliably extrapolate beyond it. More intelligence, in the form of more sophisticated algorithms, doesn’t remove this fundamental constraint. The stock market is a cautionary tale of why extrapolation doesn’t work. In 1998, no economic model predicted the dotcom crash. In 2006, models did not predict the impending housing bubble, mortgage crisis, and stock market collapse. Those models were built by very smart people, but they failed catastrophically when used to predict the future—i.e., outside of the training distribution. The problem wasn’t a lack of intelligence. The problem was structural.
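To make the interpolation-versus-extrapolation point concrete, here is a minimal sketch, not taken from any real economic model: a linear model is fit to data generated from an assumed quadratic relationship over an assumed range, and its error is measured inside and outside that range. The specific functions and numbers are illustrative only.

```python
# Toy illustration of interpolation vs. extrapolation.
# The quadratic data-generating process and the linear model are assumptions
# chosen purely to show the failure mode; they are not from the post.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Training data: x drawn from [0, 5], y roughly quadratic in x plus noise.
x_train = rng.uniform(0, 5, size=200)
y_train = 0.5 * x_train**2 + rng.normal(scale=0.5, size=x_train.size)

model = LinearRegression().fit(x_train.reshape(-1, 1), y_train)

def rmse(x):
    """Root-mean-square error of the fitted line against the true quadratic."""
    pred = model.predict(x.reshape(-1, 1))
    return np.sqrt(np.mean((pred - 0.5 * x**2) ** 2))

x_inside = np.linspace(0, 5, 100)    # within the training range: interpolation
x_outside = np.linspace(5, 15, 100)  # beyond the training range: extrapolation

print(f"error inside the training range:  {rmse(x_inside):.2f}")
print(f"error outside the training range: {rmse(x_outside):.2f}")  # far larger
```

The model isn’t “dumb”; it simply has no information about what happens beyond the data it saw, and no amount of extra fitting power changes that.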

These failures point to a broader pattern. Intelligence—whether human or artificial—faces different constraints depending on the type of problem we’re trying to solve. Here are three categories of problems where additional intelligence produces diminishing returns.

Category 1: Well-defined objective function. (Protein folding) AlphaFold was trained on known protein structures, and its accuracy can be verified against those known proteins. This provides a clear objective function—a measure of success. Does the predicted structure match experimental validation? This closed feedback loop allows researchers to refine the model. Once trained, the model can interpolate within the space of known protein structures to predict the structures of unknown proteins. This is where AI excels: bounded optimization problems with clear metrics.
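The power of a computable “did it work?” signal can be shown with a toy closed loop. The sketch below is a generic hill-climbing example over made-up validation data and a made-up scoring function; it is emphatically not AlphaFold’s architecture or training procedure, only an illustration of optimizing against a clear metric.

```python
# Schematic of a closed feedback loop: a clear, computable objective lets
# candidate models be scored and refined automatically.
# Toy example only -- assumed data, assumed model, assumed metric.
import random

# Hypothetical validation set: inputs paired with experimentally known answers.
validation = [(x, 3.0 * x + 2.0) for x in range(10)]

def objective(params):
    """Mean squared error against the known answers -- the success metric."""
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in validation) / len(validation)

params, best = (0.0, 0.0), float("inf")
for _ in range(20000):
    # Propose a small random change and keep it only if the metric improves.
    candidate = (params[0] + random.gauss(0, 0.1), params[1] + random.gauss(0, 0.1))
    score = objective(candidate)
    if score < best:
        params, best = candidate, score

print(params, best)  # drifts toward (3.0, 2.0) because success is measurable
```

When the metric is this explicit, more compute and better search translate directly into better answers; the hard part is having such a metric at all.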

Category 2: The constraint of time and testing. Intelligence can generate theories, but validating them requires real-world experiments—and these often take decades. When competing theories exist (such as string theory versus loop quantum gravity), added intelligence can elaborate on each, but it can't replace the data needed to choose between them. Here are two examples. The Higgs boson was predicted in 1964. Testing it required particle accelerator technology to improve. In 1994, CERN approved a plan to build the world's largest particle accelerator, and after 18 years of construction and preparation, the boson was confirmed in 2012, 48 years after its prediction. As a second example, gravitational waves were predicted by Einstein's theory of general relativity in 1916. The LIGO project was eventually designed and built to test this prediction; it relies on some of the most precise machinery ever constructed, capable of detecting changes smaller than one-thousandth the diameter of a proton. Gravitational waves were confirmed in 2015, 99 years after their prediction. Artificial intelligence may help scientists zero in on the right theories to test and may help design better experiments, but the theories themselves are getting more complex, so the lag between prediction and verification is likely to remain long.

Category 3: Contested objectives. (Climate policy) The scientific consensus is clear: humans are causing climate change. But society as a whole still disagrees on the correct direction for climate policy. Valid questions—how much economic disruption is acceptable, what obligations current generations have to future ones, how to trade off development against conservation—are answered by different people in different ways. Intelligence, whether at our current levels or augmented with modern AI, can’t resolve fundamental value conflicts. Artificial intelligence will likely lead to much improved climate monitoring and climate modeling. But even perfect climate modeling won’t tell us what we should do. The constraint here is human coordination and competing preferences, not knowledge.

Artificial intelligence will continue to help us better understand our world and will drive advancements in many fields. For bounded problems with clear metrics, these advances will be transformative. For problems requiring extrapolation or physical testing, they will be limited by time and experimental constraints. Even if we solve these problems, we may hit other limits—power consumption, computational constraints, or bottlenecks we haven’t yet identified. For problems involving human values, artificial intelligence will be nearly irrelevant. Intelligence is necessary, but not sufficient, for solving most meaningful problems. Humans remain essential—not because we’re smarter, but because we generate new data points, build experiments, and navigate value conflicts. The real test of human intelligence isn’t building more powerful AI—it’s understanding where the boundaries lie and staying within the lines.

— Justin Eberhardt
