Amazon introduces new system to counter AI-generated "hallucinations"

Amazon Web Services (AWS), the cloud computing arm of Amazon, introduced a new tool on Tuesday aimed at addressing AI hallucinations, instances in which an AI model produces unreliable outputs.

The new service, known as Automated Reasoning checks, works by validating a model's responses against information supplied by customers to ensure accuracy. In a press release, AWS described the tool as the "first" and "only" safeguard against hallucinations.

Accessible via AWS' Bedrock model hosting service, the tool aims to analyze how a model reaches its conclusions and evaluate the correctness of its answers.

Customers can upload information to create a reliable ground truth, and the tool generates rules that can be refined and applied to a model, according to AWS.

As a model produces responses, the tool verifies them, and if a potential hallucination is detected, it references the established ground truth for the appropriate answer. This correct answer is presented alongside the likely error, allowing customers to assess the extent of the model's inaccuracies.
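
For illustration only, the sketch below shows how a ground-truth check of this general shape might be wired into application code. It is not the Automated Reasoning checks API; the rule format, function names, and sample data are assumptions made for the example.

```python
# Hypothetical sketch of a ground-truth verification flow like the one described
# above. The Rule format, function names, and data are illustrative assumptions,
# not the actual AWS Automated Reasoning checks interface.

from dataclasses import dataclass

@dataclass
class Rule:
    """A fact derived from customer-supplied documents (the ground truth)."""
    question: str
    expected_answer: str

def check_response(question: str, model_answer: str, rules: list[Rule]) -> dict:
    """Compare a model's answer with the ground truth and flag a likely hallucination."""
    for rule in rules:
        if rule.question == question and rule.expected_answer.lower() not in model_answer.lower():
            # Present the likely error alongside the answer backed by the ground truth,
            # so a reviewer can judge how far off the model was.
            return {
                "flagged": True,
                "model_answer": model_answer,
                "ground_truth_answer": rule.expected_answer,
            }
    return {"flagged": False, "model_answer": model_answer}

# Example usage with made-up data.
rules = [Rule(question="What is the refund window?", expected_answer="30 days")]
print(check_response("What is the refund window?",
                     "Refunds are accepted within 90 days.", rules))
# -> flagged as a likely hallucination, with "30 days" shown as the ground-truth answer
```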

AWS said that PwC is already using Automated Reasoning checks to build AI assistants for its clients.

"With the launch of these new capabilities, we are innovating on behalf of customers to solve some of the top challenges that the entire industry is facing when moving generative AI applications to production," stated Swami Sivasubramanian, VP of AI and data at AWS.

AWS asserts that its tool incorporates "logically accurate" and "verifiable reasoning" to reach its conclusions. However, as noted in a report by TechCrunch, the company has not provided data to demonstrate its reliability.

AI models hallucinate because they are statistical systems: they identify patterns in their training data and predict the data that should come next based on prior examples. They do not provide definitive answers but rather predict how a question should be answered, within some margin of error, according to the report.
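
As a rough illustration of that point, the toy snippet below samples a "next token" from a made-up probability distribution. Every name and number in it is invented; it simply shows why a purely statistical predictor can occasionally return a confident but wrong answer.

```python
# Toy illustration: a language model does not look up facts, it samples the next
# token from a learned probability distribution, so the same prompt can yield
# different (and sometimes wrong) continuations. The vocabulary and probabilities
# here are made up for illustration.

import random

next_token_probs = {"Paris": 0.86, "Lyon": 0.09, "Berlin": 0.05}  # hypothetical model output

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token according to the model's predicted distribution."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Most samples are correct, but a small fraction will be confidently wrong,
# which is the statistical root of hallucinations.
samples = [sample_next_token(next_token_probs) for _ in range(1000)]
print({token: samples.count(token) for token in next_token_probs})
```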

Earlier this summer, Microsoft launched its Correction feature, which highlights AI-generated text that may contain factual inaccuracies. Google has also introduced a tool in Vertex AI, its AI development platform, to help customers "ground" models using data from third-party sources, their own datasets, or Google Search.

Emily Johnson contributed to this report for TROIB News