Abstract
Using qualitative reasoning and epistemic graphs, we introduce a simulation-based framework for modeling epistemic divergence in human-generative artificial intelligence (GenAI) interaction. Human and GenAI agents each maintain a symbolic framework of beliefs and causal/correlational knowledge (e.g., chain-of-thought reasoning), which guides their decisions toward shared goals. Through simulation, we show how misaligned beliefs and knowledge lead to persistent action divergence, signaling epistemic risk. By formalizing these belief structures as graphs, we provide a transparent method for diagnosing misalignment. Our results suggest the value of epistemic modeling for improving interpretability and safety in collaborative artificial intelligence (AI) systems.
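The core idea of the abstract can be illustrated with a minimal sketch: two agents each hold a belief graph (propositions linked by causal edges), derive an action from it, and persistent disagreement between the derived actions flags epistemic divergence. All names, graph contents, and the traversal rule below are hypothetical illustrations, not the paper's actual formalism.

```python
# Hypothetical sketch of epistemic divergence between two agents.
# Each belief graph maps a proposition to the propositions (or actions)
# it causally supports; nodes prefixed "act_" are terminal actions.

beliefs_human = {
    "sensor_reliable": ["trust_reading"],
    "trust_reading": ["act_proceed"],
}
beliefs_genai = {
    "sensor_reliable": ["trust_reading"],
    "trust_reading": ["act_halt"],  # misaligned causal link
}

def choose_action(graph, start="sensor_reliable"):
    """Follow causal edges from a starting belief until an action node."""
    node = start
    while not node.startswith("act_"):
        node = graph[node][0]
    return node

def diverged(g1, g2):
    """Flag epistemic risk when the two graphs yield different actions."""
    return choose_action(g1) != choose_action(g2)

print(diverged(beliefs_human, beliefs_genai))  # True: the agents disagree
```

Because both graphs are explicit symbolic structures, the mismatched edge (`trust_reading → act_halt` vs. `act_proceed`) can be located directly, which is the transparency the abstract attributes to graph-based belief modeling.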
| Original language | English |
|---|---|
| Publication status | Published - 16 Aug 2025 |
| Event | 38th International Workshop on Qualitative Reasoning, Montreal, Canada (16 Aug 2025) |
Conference
| Conference | 38th International Workshop on Qualitative Reasoning |
|---|---|
| Country/Territory | Canada |
| Period | 16/08/25 → 16/08/25 |