Abstract: Modern Bayesian decision theories, which build on the foundations of classical decision theory, are widely applied in the design of AI systems. While classical decision theory is a mathematically elegant framework, it relies on highly idealized assumptions, such as logical omniscience. These assumptions pose challenges when implementing decision theory in AI systems, which operate under computational, physical, and mathematical constraints. As a result, issues of reliability, explainability, fairness, and bias in AI design, all widely discussed in the literature, may arise. In this talk, I will highlight these challenges for classical Bayesian decision theory and use a case study to explore how decision theory can be made more realistic.

Speaker Bio: Dr. LIU Yang earned his PhD in philosophy under the supervision of Professor Haim Gaifman at Columbia University. Following his doctoral studies, he joined the Faculty of Philosophy at the University of Cambridge as a junior research fellow. Shortly after its establishment, he became a senior research fellow at the Cambridge-based Leverhulme Centre for the Future of Intelligence, which is dedicated to ensuring AI serves as a force for good. Later, he brought his research closer to home, joining The Hong Kong University of Science and Technology (UST), where he was elected as a fellow of the Institute for Advanced Study. Dr. Liu’s research interests include logic, decision theory, and the philosophy of AI.
