AI Hallucinations Fixable but Tests Needed to Avoid Chatbot Mishaps, Says Ex-Google Researcher
- AI hallucinations will likely be solvable within a year, according to ex-Google researcher Raza Habib, but some fabrication may be necessary for creativity.
- Habib believes models already have the knowledge to avoid hallucinating; the challenge is preserving that knowledge during training.
- Air Canada's chatbot issues were "completely avoidable" through better testing and guardrails, according to Habib.
- The chatbot gave misleading advice about bereavement fares and refunds.
- Air Canada maintains the chatbot did not use AI, relying instead on older technology that predates capabilities like ChatGPT.