
Microsoft claims its new AI correction feature can fix hallucinations. Does it work?

by Marko Florentino


Microsoft claims it has a new capability that detects and corrects false or misleading statements from AI.


Microsoft unveiled a new artificial intelligence (AI) feature this week that it says will help to correct models’ false statements.

The new “Correction” capability will identify AI output inaccuracies and fix them, according to the technology giant.

So-called AI hallucinations will be corrected in real time “before users of generative AI applications encounter them,” Microsoft said, with a spokesperson calling it a “new first-of-its-kind capability”.

The feature works by scanning a response and highlighting the inaccurate part. It can then generate an explanation of why the segment is wrong and use generative AI to rewrite the section to ensure “that the rewritten content better aligns with connected data sources,” a Microsoft spokesperson said.

It’s a part of Microsoft’s Azure AI Content Safety software interface, which can also now be embedded on devices.
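For developers, the “detect, explain, rewrite” workflow described above roughly corresponds to the groundedness-detection capability in Azure AI Content Safety. The sketch below is illustrative only, not Microsoft’s official sample: the API version, request fields (including the correction flag) and the response shape are assumptions based on the preview documentation and may differ from the shipping service.

```python
# Rough sketch of calling a groundedness-detection endpoint with correction enabled.
# Endpoint path, API version, field names and response shape are assumptions, not
# confirmed details from the article or official Microsoft samples.
import os
import requests

ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
API_KEY = os.environ["CONTENT_SAFETY_KEY"]

def check_and_correct(answer: str, sources: list[str]) -> dict:
    """Ask the service whether `answer` is grounded in `sources` and, if it is not,
    request a rewritten version aligned with those sources."""
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"
    payload = {
        "domain": "Generic",
        "task": "Summarization",
        "text": answer,               # the model output to verify
        "groundingSources": sources,  # the "connected data sources" the article mentions
        "correction": True,           # assumed flag asking the service to rewrite ungrounded spans
    }
    resp = requests.post(url, json=payload,
                         headers={"Ocp-Apim-Subscription-Key": API_KEY})
    resp.raise_for_status()
    return resp.json()  # expected to flag ungrounded spans and, if any, propose corrected text

if __name__ == "__main__":
    result = check_and_correct(
        answer="The warranty covers accidental damage for five years.",
        sources=["The warranty covers manufacturing defects for two years only."],
    )
    print(result)
```

In this pattern the application, not the model, supplies the reference documents, so the service can only flag and rewrite statements that contradict those sources; it cannot verify claims for which no grounding source is provided.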

Why does AI hallucinate?

AI models are trained on extensive datasets to make predictions, but they can also “hallucinate,” which means they generate incorrect or false statements. This can be due to incomplete or biased training data.

Jesse Kommandeur, a strategic analyst at the Hague Centre for Strategic Studies, compares it to baking a cake without the full recipe – you guess, based on prior experience, what might work. Sometimes the cake comes out well, but other times it doesn’t.

“The AI is trying to ‘bake’ the final output (like a text or decision) based on incomplete information (‘recipes’) it has learned from,” Kommandeur said in an email.

There have been many high-profile examples of AI chatbots providing false or misleading answers, from lawyers submitting fake legal cases after using an AI model to Google’s AI summaries providing misleading and inaccurate responses earlier this year.

An analysis by the company Vectara last year found that AI models hallucinated between 3 and 27 per cent of the time depending on the tool. Meanwhile, non-profit Democracy Reporting International said ahead of the European elections that none of the most popular chatbots provided “reliably trustworthy” answers to election-related queries.

Could this new tool fix hallucinations?

Generative AI “doesn’t really reflect and plan and think. It just responds sequentially to inputs… and we’ve seen the limitations of that,” said Vasant Dhar, a professor at New York University’s Stern School of Business and Center for Data Science in the US.

“It’s one thing to say [the new correction capability] will reduce hallucinations. It probably will, but it’s really impossible to get it to eliminate them altogether with the current architecture,” he added.

Ideally, Dhar added, a company would want to be able to claim it reduces a certain percentage of hallucinations.

“That would require a huge amount of data on known hallucinations and testing to see if this little prompt engineering method actually reduces them. That’s actually a very tall order, which is why they haven’t made any kind of quantitative claim about how much it reduces hallucinations”.

Kommandeur looked at a paper Microsoft confirmed it had published about the correction feature and said that while it “looks promising and chooses a methodology I haven’t seen before, it’s likely that the technology is still evolving and may have its limitations”.


‘Incremental improvements’

Microsoft says hallucinations have held back AI models in high-stakes fields such as medicine, as well as their broader deployment.

“All of these technologies including Google Search are technologies where these companies just continue to make incremental improvements in the product,” said Dhar.

“That’s kind of the mode once you have the main product ready, then you keep improving it,” he said.

“From my perspective, in the long term, investment in AI can become a liability if the models keep hallucinating, especially if these errors keep leading to misinformation, flawed decision-making etc,” said Kommandeur.


“In the short term, however, I think the [large language models] LLMs add so much value to the daily lives for a lot of people in terms of efficiency, that the hallucinations are something we seem to take for granted,” he said.


