Natural language was created for human communication. Human intelligence is good at understanding the context and semantics of written narratives, but it falls short when doing so at scale, where fallacies, conjecture, and misleading statements slip through. The problem worsens when Generative AI is used to summarize a narrative that informs important decisions. Generative AI does not understand context based on semantics; instead, it derives token patterns from text and calls that context. When a Generative AI-generated summary is treated as a faithful summary of natural language documents, it becomes a source of misinformation.
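A toy sketch can make the token-pattern point concrete. The crude whitespace tokenizer below is a hypothetical stand-in for a real model's tokenizer, chosen only for illustration: two sentences with opposite meanings can share nearly all of their tokens, so a system that reasons over token patterns alone has little surface signal to tell them apart.

```python
# Hypothetical illustration: a model sees token sequences, not meanings.
def tokenize(text: str) -> list[str]:
    # Crude whitespace tokenizer standing in for a real subword tokenizer.
    return text.lower().rstrip(".").split()

a = tokenize("The contract was signed by both parties.")
b = tokenize("The contract was not signed by both parties.")

# Measure how many unique tokens the two sentences share.
shared = set(a) & set(b)
overlap = len(shared) / len(set(a) | set(b))
print(f"Token overlap: {overlap:.0%}")  # high overlap, opposite meanings
```

The sentences differ by a single token ("not"), yet that token inverts the semantics entirely, which is exactly the kind of distinction that pattern similarity does not capture.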
Economically, misunderstanding the context and semantics of critical documents such as legal works, contracts, statements of work, sales transcripts, meeting minutes, opinions, earnings transcripts, or business intelligence reports leads to poor decisions and unwanted outcomes.