P2: Disagreement
Most real-world collective decisions start with disagreement, i.e., differences in beliefs or credences among group members. The overall research question addressed by this project is whether and how a group can choose to preserve disagreement as long as doing so is rational (economically speaking, welfare-enhancing) and to reduce disagreement when it becomes harmful. Put differently, what are the epistemic and institutional prerequisites for groups to optimally “manage” disagreement?
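To fix ideas, one standard formal device for the “reduce disagreement” side of this question is opinion pooling. The following is a minimal sketch, not the project’s method; the individual credences and the equal weights are illustrative assumptions.

```python
# Minimal sketch: linear opinion pooling as one formal device for
# reducing disagreement among group members' credences.
# Credences and weights below are illustrative assumptions only.

def linear_pool(credences, weights):
    """Weighted average of individual credences in a proposition."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * c for w, c in zip(weights, credences))

def spread(credences):
    """Crude measure of remaining disagreement: max minus min credence."""
    return max(credences) - min(credences)

# Three group members disagree about a proposition.
credences = [0.2, 0.5, 0.9]
weights = [1 / 3] * 3  # equal weights, i.e., treating members as epistemic peers

pooled = linear_pool(credences, weights)
print(f"individual credences: {credences}, spread: {spread(credences):.2f}")
print(f"pooled group credence: {pooled:.2f}")  # consensus removes the spread
```

Whether collapsing the spread in this way is welfare-enhancing, or instead destroys valuable diversity of opinion, is precisely the question at issue.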
Artificial intelligence raises novel challenges concerning disagreement. Under what conditions, if any, should one regard machine-learning algorithms as epistemic peers? A related question is how the epistemic standards (e.g., consistency and correctness) against which the belief systems of human epistemic subjects are measured relate to the standards applied to corresponding opinions generated by artificial intelligence.
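One way to make the comparison of standards concrete is to score human and machine-generated credences by one and the same rule, e.g., the Brier score. The sketch below does this with made-up forecasts and outcomes; treating comparable scores as evidence of peerhood is, of course, itself a contestable assumption.

```python
# Minimal sketch: applying one and the same epistemic standard (here the
# Brier score, a measure of probabilistic accuracy) to a human's and an
# ML model's credences. All forecasts and outcomes below are made up.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes        = [1, 0, 1, 1, 0]             # what actually happened
human_credences = [0.8, 0.3, 0.6, 0.7, 0.4]   # a human forecaster's credences
model_credences = [0.9, 0.1, 0.7, 0.6, 0.2]   # an ML model's output probabilities

print(f"human Brier score: {brier_score(human_credences, outcomes):.3f}")
print(f"model Brier score: {brier_score(model_credences, outcomes):.3f}")
# Comparable scores under a shared standard are one (defeasible) proxy
# for treating the model as an epistemic peer.
```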
In politics, disagreement is often regarded as a resource rather than a liability, but it is unclear in which contexts ‘stubbornness pays’, i.e., when keeping non-standard beliefs and decision alternatives alive is beneficial. Under conditions of polarization, even withdrawing the assumption of epistemic peerhood among citizens may seem warranted. One key issue is when citizens should reasonably defer to expert opinion on policy questions, which appears to create a difficulty for public-reason-based accounts.
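The question of when ‘stubbornness pays’ can be given a formal reading in bounded-confidence opinion dynamics of the Hegselmann–Krause type. The sketch below is illustrative only: the confidence threshold, the initial opinions, and the choice of which agents are stubborn are arbitrary modelling assumptions.

```python
# Illustrative sketch: Hegselmann-Krause-style bounded-confidence dynamics
# with a few "stubborn" agents who never update. The threshold, initial
# opinions, and number of rounds are arbitrary choices for illustration.

def step(opinions, stubborn, eps):
    """Each non-stubborn agent averages the opinions within distance eps."""
    new = []
    for i, x in enumerate(opinions):
        if i in stubborn:
            new.append(x)  # stubborn agents keep their opinion alive
            continue
        peers = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(peers) / len(peers))
    return new

opinions = [0.05, 0.1, 0.4, 0.5, 0.6, 0.9, 0.95]
stubborn = {0, 6}   # the two endpoint agents never update
eps = 0.3           # confidence threshold: whose views count as "close enough"

for _ in range(20):
    opinions = step(opinions, stubborn, eps)

print([round(x, 2) for x in opinions])
# Without stubborn agents the population tends toward consensus clusters;
# with them, non-standard positions persist and keep pulling on moderates.
```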
Objectives
- To investigate strategies for immunizing group decision procedures against the influence of propaganda (one candidate strategy is sketched after this list).
- To explore the implications of disagreement with AI agents.
- To evaluate the conditions under which withdrawing the assumption of civic peerhood is warranted.
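To give a flavor of the first objective, the sketch below shows one candidate immunization strategy: robust aggregation via a trimmed mean, which limits the influence of coordinated extreme inputs. The injected ‘propaganda’ credences and the trimming fraction are assumptions made purely for illustration.

```python
# Illustrative sketch for the first objective: a trimmed mean as one
# candidate strategy for immunizing an aggregation procedure against
# manipulated inputs. The honest credences, the injected "propaganda"
# values, and the trimming fraction are assumptions for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

def trimmed_mean(xs, trim=0.2):
    """Drop the most extreme `trim` fraction from each tail, then average."""
    k = int(len(xs) * trim)
    xs = sorted(xs)
    kept = xs[k:len(xs) - k] if k > 0 else xs
    return mean(kept)

honest = [0.45, 0.5, 0.52, 0.48, 0.55, 0.5, 0.47, 0.53]
propaganda = [0.99, 0.99]          # coordinated extreme inputs
reported = honest + propaganda

print(f"plain mean (manipulable):   {mean(reported):.3f}")
print(f"trimmed mean (more robust): {trimmed_mean(reported):.3f}")
print(f"honest baseline:            {mean(honest):.3f}")
```

The plain mean is pulled toward the injected inputs, while the trimmed mean stays close to the honest baseline; the trade-off is that trimming also discounts sincere but extreme minority views, which connects this objective back to the question of when disagreement is worth preserving.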