Microsoft's warning about becoming obsolete within a year is a stark reminder of the challenges the tech industry faces. Tech stocks are volatile and trending downward. The primary concern is that excessive investment in AI could trigger a Value-at-Risk (VaR) shock, a worry compounded by the market's struggle to gauge the hyperscalers' return on investment and the impact of AI capacity build-outs across industries. The situation is complex, and it is not only about the technology itself but also about the human element.

Start with the technology. So-called 'hallucinations' are not a quirk of AI mimicking human responses; they are a by-product of optimizing for fluency in the absence of any global checking mechanism. Nothing in the generation process verifies output against the world, so errors slip through, and whatever coherence the model achieves is 'post hoc' and only local. The storytelling loop, in which the model collects relevant facts and weaves them into a coherent narrative, is a feature, not a bug: it is precisely what makes LLMs useful. But the same mechanism produces hallucinations, outputs that are locally coherent yet not globally accurate.

The challenge is that current mitigations only address surface-level behavior, not the core engine. A genuine fix would require access to ground truth, explicit world models, epistemic reasoning layers, or slower generation, each with its own trade-offs. The philosophical implication is that LLMs are rhetorical engines rather than epistemic engines: they model how knowledge is talked about, not how it is established. That distinction matters most for expert users, since local coherence can be dangerous in high-stakes domains. Whether LLMs can be fully 'fixed' without changing the paradigm remains an open debate; the likeliest answer lies in hybrid systems that combine LLMs with symbolic reasoning, databases, simulators, and proof systems.
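The hybrid idea can be made concrete with a minimal sketch of a generate-then-verify loop. Everything here is illustrative, not a real API: the model is stubbed out, and a plain dictionary stands in for the external ground-truth store (a database, simulator, or proof system) that the text argues LLMs lack on their own.

```python
# Hypothetical generate-then-verify loop: the model produces fluent
# claims, and a separate external store checks them. Names and data
# are illustrative assumptions, not a real system.

# Stand-in for an external ground-truth store.
FACTS = {
    "capital_of_france": "Paris",
    "boiling_point_water_c": "100",
}

def fake_llm(prompt: str) -> list[tuple[str, str]]:
    """Stub for a model: returns (claim_key, claim_value) pairs.
    One claim is fluent but wrong, mimicking a hallucination."""
    return [
        ("capital_of_france", "Paris"),   # locally and globally coherent
        ("boiling_point_water_c", "90"),  # locally coherent, globally wrong
    ]

def verify(claims: list[tuple[str, str]]) -> list[tuple[str, str, str]]:
    """Check each claim against the external store rather than
    trusting the model's own (post hoc, local) coherence."""
    report = []
    for key, value in claims:
        status = "verified" if FACTS.get(key) == value else "flagged"
        report.append((key, value, status))
    return report

report = verify(fake_llm("State some facts."))
```

The point of the sketch is the division of labor: fluency stays with the generator, while the epistemic check lives in a separate component, which is the shape the text attributes to hybrid systems.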
Ultimately, the human element remains the critical factor: AI's post hoc coherence mirrors our own subjectivity and personal narratives, raising questions about the labor market and about AI's value proposition in senior roles.