
Amazon Is Mitigating AI Hallucinations Through This Mathematical Method



On Feb. 5, Amazon’s cloud-computing unit detailed an established mathematical method it intends to put to work reducing hallucinations in generative AI.

Amazon didn’t clarify whether the new technique will be used on Amazon.com; instead, according to The Wall Street Journal, the company wants to use it to get more customers to trust Amazon Web Services’ generative AI offerings.

Automated reasoning differs from the reasoning approach that has recently become popular among frontier models such as Gemini 2.0. While that style of reasoning slows down a model’s responses to produce more thorough answers, Amazon’s automated reasoning relies on mathematical proofs to guarantee that the AI produces a given result.

How does automated reasoning work?

Put simply, automated reasoning works by defining certain statements as inarguable truths. From there, it “verifies” conclusions using chains of logic such as, per Amazon’s example, “if cats are mammals and mammals live on land, cats live on land.”
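
To make the mechanics concrete, below is a minimal sketch of that cats example encoded for the open-source Z3 theorem prover (installable via pip install z3-solver). This illustrates the general technique only; it is not AWS’s internal tooling, and the variable names and proof-by-refutation setup are assumptions made for the sake of the example.

```python
# A minimal sketch of automated reasoning over Amazon's cats example,
# using the open-source Z3 theorem prover. Illustrative only; this is
# not AWS's actual implementation or API.
from z3 import Bool, Implies, And, Solver, Not, unsat

is_cat = Bool("is_cat")
is_mammal = Bool("is_mammal")
lives_on_land = Bool("lives_on_land")

# Ground truths: the statements the system treats as inarguable.
axioms = And(
    Implies(is_cat, is_mammal),         # cats are mammals
    Implies(is_mammal, lives_on_land),  # mammals live on land
)

# The claim to verify: cats live on land.
claim = Implies(is_cat, lives_on_land)

# Proof by refutation: if "axioms AND NOT claim" is unsatisfiable,
# the claim follows from the axioms in every possible case.
solver = Solver()
solver.add(axioms, Not(claim))
print("proved" if solver.check() == unsat else "not proved")  # -> proved
```

Because the result is a proof rather than a statistical prediction, the conclusion holds for every case that satisfies the axioms, which is what lets a system certify an output instead of merely estimating its likelihood.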

AWS Vice President and Distinguished Scientist Byron Cook told The Wall Street Journal that automated reasoning stems from symbolic AI, a branch of mathematics with roots in 2,000 years of research. Unlike the prediction-based approach of many machine learning and generative AI systems, symbolic AI is rule-based. Amazon has been hiring from the relatively small pool of mathematicians fluent in the field.

Amazon has already seen success in drawing more customers to its cloud business by deploying Automated Reasoning Checks, a tool built on automated reasoning’s mathematical proofs. The company hopes the same approach can win over CIOs who may not trust AI-generated answers.

Automated reasoning is already used in AWS products including CodeGuru Reviewer, Inspector Classic’s Network Reachability feature, IAM Access Analyzer, the Virtual Private Cloud Reachability Analyzer, and Amazon Bedrock Guardrails, the safeguards feature for enterprise generative AI applications. Elsewhere, as Amazon illustrated, electronics design engineers might use automated reasoning to define terms and confirm that a specific hardware design meets specifications.

Automated reasoning can’t erase all gen AI hallucinations

Automated reasoning does have limitations; for instance, it can’t be used to make “predictions or generalizations,” as Amazon has noted. For example, a system running entirely on automated reasoning couldn’t extrapolate from examples to argue, incorrectly, that “all mammals live on land.” Automated reasoning is best suited to cases in which the data follows strictly defined rules, such as important company policies.
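
As a hedged illustration of that “strictly defined rules” case, the sketch below encodes a hypothetical refund policy as logic and checks a chatbot-style claim against it with Z3. The policy, its 30-day threshold, and all names are invented for this example and aren’t drawn from any AWS product.

```python
# Checking a generated claim against a hypothetical company policy.
# The policy and variable names are invented for illustration.
from z3 import Int, Bool, Implies, And, Not, Solver, unsat

days_since_purchase = Int("days_since_purchase")
item_opened = Bool("item_opened")
refund_approved = Bool("refund_approved")

# Policy rule: a refund may be approved only if the item is unopened
# and was purchased at most 30 days ago.
policy = Implies(refund_approved,
                 And(Not(item_opened), days_since_purchase <= 30))

# A claim a chatbot might hallucinate: "your opened item qualifies."
claim = And(item_opened, refund_approved)

solver = Solver()
solver.add(policy, claim)
# unsat means no scenario satisfies both, so the claim provably
# contradicts the policy and can be rejected before reaching the user.
print("contradicts policy" if solver.check() == unsat else "consistent")
```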

Automated reasoning is just one method of reducing generative AI hallucinations; retrieval-augmented generation offers an alternative way to double-check AI-generated content.
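
For contrast, here is a minimal sketch of the retrieval-augmented generation approach: fetch the documents most relevant to a question and instruct the model to answer only from them. The keyword-overlap scoring and prompt wording below are simplified assumptions, not any particular vendor’s implementation.

```python
# A toy retrieval-augmented generation pipeline. Real systems use
# vector embeddings for retrieval; keyword overlap stands in here
# to keep the sketch self-contained.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(query_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Ask the model to answer only from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return ("Answer using only the context below; reply 'unknown' if "
            "the answer is not present.\n"
            f"Context:\n{context}\nQuestion: {query}")

docs = [
    "Refunds require an unopened item returned within 30 days.",
    "Shipping is free on orders over $25.",
]
print(build_grounded_prompt("Can I get a refund after 45 days?", docs))
```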

