To overcome these obstacles, many companies have incorporated Generative AI into their decision making. Generative AI (such as ChatGPT and Bard) dominates media and social media, with promising deployments in HR hiring, software code generation, drug development, health diagnosis, stock picking and investment, fraud detection, text and content generation, poetry composition, and more.
The very nature of statistical word generation prevents the makers of these Large Language Model (LLM) applications from building an effective safety net around the information rendered. One response may be 100% correct and true, while the next may be totally false and untrue, or anywhere in between. These LLMs often hallucinate while generating text in an authoritative tone. In many instances, the reference sources provided by Generative AI do not exist.
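The statistical nature of this process can be sketched in a few lines. The toy distributions below are purely hypothetical (no real model assigns these probabilities); they only illustrate that an LLM picks each next word by sampling, so a true continuation and an untrue one can emerge from exactly the same mechanism, with the same confident fluency.

```python
import random

# Hypothetical next-token probabilities, for illustration only.
# Note the untrue continuation ("Atlantis") sits beside a true one ("France").
NEXT_TOKEN_PROBS = {
    "The": {"capital": 0.6, "CEO": 0.4},
    "capital": {"of": 1.0},
    "of": {"France": 0.5, "Atlantis": 0.5},
    "France": {"is": 1.0},
    "Atlantis": {"is": 1.0},
    "is": {"Paris.": 0.7, "Poseidonia.": 0.3},
}

def generate(start, max_tokens=6, seed=None):
    """Sample one token at a time: fluent output, no guarantee of truth."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not dist:  # no known continuation: stop generating
            break
        words, probs = zip(*dist.items())
        tokens.append(rng.choices(words, weights=probs)[0])
    return " ".join(tokens)
```

Depending on the random draw, the same call can return "The capital of France is Paris." or "The capital of Atlantis is Poseidonia." Nothing in the sampling loop distinguishes fact from fabrication, which is why a safety net cannot be bolted on after the fact.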
Example:
- Lawyer fined for filing bogus case law created by ChatGPT, more ...
- Bard creates a bogus management team for a security company, more ...
- With the right prompts, Google's Bard can easily jump its guardrails and generate misinformation on 8 out of 10 topics, a research group finds, more ...
Generative AI presents a great deal of information, but often without reference sources or context.
Even when it does provide a reference source, validating the information still takes time.