The AI Pareto Paradox: More computing power – diminishing AI impact? (Diplo)

For the past few years, the tech world has been locked in a high-stakes arms race for raw computing power. The prevailing logic held that if we simply threw more NVIDIA GPUs at Large Language Models (LLMs), intelligence would scale indefinitely. But as we approach 2026, we are hitting a ‘glass ceiling’ of silicon and sliding into the AI Pareto trap, an inversion of the familiar 80/20 principle: roughly 80% of global AI investment is allocated to hardware and computing power, yet this expenditure yields only about 20% of the actual improvement in model performance. We see this in the plateauing of major models such as GPT-5: despite exponential increases in chip capacity, gains in precision and reasoning are becoming marginal, and in some cases newer models even lose their edge to their predecessors (for instance, we experienced more hallucinations with GPT-5 than with GPT-4). The truth is becoming unavoidable: the next frontier of AI isn’t in the server room. It’s in the office, the classroom, and the human mind.
