Wharton Business School professor Ethan Mollick: Solve the AI hallucination problem with an “organizational design” mindset

ChainNews ABMedia

AI hallucinations remain one of the most frustrating problems with large language models (LLMs), but Wharton professor Ethan Mollick offered an intriguing reframing on X: humans have spent hundreds of years developing mature mechanisms for producing reliable outputs from unreliable sources. That set of mechanisms is called “organizational structures,” and we can apply similar methods to AI. The post earned 329 likes, 35 reposts, and 44 replies, sparking an in-depth discussion of how to respond pragmatically to AI hallucinations.

What is the “organizational structures” analogy?

Mollick’s core argument points to a fact that is often overlooked: humans have never been perfectly reliable sources of information. Historically, whether in accounting records, medical diagnoses, or legal rulings, human output has always carried a risk of error. Civilization nevertheless functions because we have developed a whole set of “organizational structures” to manage these risks.

In essence, these organizational structures are a sophisticated set of error-interception machines: through division of labor, hierarchical review, cross-validation, and institutionalized processes, they transform individual unreliability into system-level reliability. Mollick argues that instead of obsessing over building an AI that never makes mistakes, we should change the approach: just as we do with human employees, we should set up a systematic quality-control framework for AI.

Concrete applications: reviews, tests, and cross-checks

In the follow-up discussion the post sparked, Mollick and other participants explored several concrete methods that can be borrowed directly from organizational management. First come “reviews”: much as a supervisor signs off on work in a company or peer reviewers vet a paper, another AI model or a human expert systematically checks an LLM’s output.
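As a concrete illustration of the review pattern, consider the sketch below. It is a minimal sketch under assumptions, not anything Mollick prescribes: one model drafts an answer and a second model audits it, with a single revision pass if the reviewer objects. The call_model helper is a hypothetical placeholder for whichever LLM client you actually use.

```python
# Hypothetical sketch of the "review" pattern: one model drafts,
# a second, independently prompted model audits the draft.
# call_model() is a placeholder, not a real library API.

def call_model(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its text reply."""
    raise NotImplementedError("wire this up to your LLM provider")


def generate_with_review(task: str,
                         worker: str = "model-a",
                         reviewer: str = "model-b") -> str:
    """Draft with one model, audit with another, revise once if needed."""
    draft = call_model(worker, task)
    verdict = call_model(
        reviewer,
        "You are a strict reviewer. Check this answer for factual errors "
        f"and unsupported claims.\n\nTask: {task}\n\nAnswer: {draft}\n\n"
        "Reply APPROVE if it is sound; otherwise list the problems.",
    )
    if verdict.strip().upper().startswith("APPROVE"):
        return draft
    # One revision pass, feeding the reviewer's objections back to the worker.
    return call_model(
        worker,
        f"Task: {task}\n\nYour earlier answer: {draft}\n\n"
        f"A reviewer raised these issues:\n{verdict}\n\nRevise the answer.",
    )
```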

Second come “tests,” analogous to unit tests and quality-assurance processes in software development: every AI output is held to verifiable standards. Third come “cross-checks,” in which multiple independent AI models or information sources answer the same question and the results are compared for consistency, much like the checks and balances among departments within an organization.
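A cross-check can be sketched just as simply. In the toy version below, the models callables are hypothetical stand-ins for independent LLMs, and the naive string comparison is only plausible for short factual answers; an answer is accepted only when a quorum agrees, and otherwise the question is escalated.

```python
# Hypothetical sketch of the "cross-check" pattern: pose the same question
# to several independent models and accept an answer only on agreement.
from collections import Counter
from typing import Callable, Optional


def cross_check(question: str,
                models: list[Callable[[str], str]],
                quorum: int = 2) -> Optional[str]:
    """Return the most common answer if at least `quorum` models gave it,
    else None (i.e. escalate to a human). Exact string matching is naive
    and only suits short, factual answers."""
    answers = [m(question).strip().lower() for m in models]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes >= quorum else None


# Stub models standing in for real LLM calls:
stubs = [lambda q: "Paris", lambda q: " paris ", lambda q: "Lyon"]
print(cross_check("Capital of France?", stubs))  # -> "paris"
```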

The shared logic behind these methods is to stop relying on the perfection of any single node and instead lower the overall error rate through system design. This aligns closely with the “Swiss Cheese Model” in modern quality-management theory: each layer of protection has holes, but once several layers are stacked, the chance that an error passes through all of them drops sharply.
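The arithmetic behind that intuition is worth spelling out. With purely illustrative numbers (they do not come from the post), three independent checks that each miss 20% of errors let only 0.8% slip through:

```python
# Illustrative numbers only: if each independent layer misses 20% of errors,
# the residual error rate is the product of the per-layer miss rates.
miss_rates = [0.2, 0.2, 0.2]
leak = 1.0
for rate in miss_rates:
    leak *= rate
print(f"chance an error passes every layer: {leak:.1%}")  # 0.8%
```

The caveat, as in the original safety literature, is independence: if the layers share a blind spot, the holes line up and the multiplication no longer applies.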

Implications for enterprise AI deployment

Mollick’s framing is especially instructive for companies rolling out AI. Faced with hallucinations, many companies fall into one of two extremes: they either avoid AI entirely for fear of mistakes, or they trust AI outputs too much and skip verification. The organizational-design mindset offers a middle path: acknowledge that AI will make mistakes, but keep those mistakes within an acceptable range through institutional design.

Specifically, companies can establish an “AI quality management process”: treat AI as an “employee” in the organization, equip it with review mechanisms, define clear boundaries of responsibility, build an anomaly detection system, and retain human oversight at key decision points. This approach is not only more practical, but also more consistent with the management logic companies already understand. For the AI industry, Mollick’s perspective reminds us that the answer to solving AI hallucinations may not lie solely in the technical layer, but also in rethinking the organizational structure of human-AI collaboration.
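To make that concrete, here is a minimal sketch of what the routing logic of such a process might look like. Every function name is hypothetical, and the placeholders stand for whatever checks a given company actually builds: automated tests run first, then a review, with humans as the backstop at high-stakes decision points.

```python
# Hypothetical sketch of an "AI quality management process":
# automated tests, then a review, then human sign-off where stakes are high.

def passes_automated_tests(output: str) -> bool:
    """Placeholder: schema checks, required fields, citation lookups, etc."""
    raise NotImplementedError


def reviewer_approves(output: str) -> bool:
    """Placeholder: a second model or a rule set audits the output."""
    raise NotImplementedError


def handle_ai_output(output: str, high_stakes: bool) -> str:
    """Route an AI 'employee's' work through the quality pipeline."""
    if not passes_automated_tests(output):
        return "rejected: failed automated checks"
    if not reviewer_approves(output):
        return "escalated: routed to a human reviewer"
    if high_stakes:
        return "pending: human sign-off required at this decision point"
    return "accepted"
```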

