About the project:
Current investments in artificial intelligence (AI) are massive. There is optimism but also concern. Still, our knowledge of how we got here remains weak. This project aims to rewrite the history of AI by studying conceptions of mistakes. In the history of AI, mistakes have served as indicators of both trivial problems and advanced reasoning. They have been cast as distinctive of machines or of humans. Should we try to eliminate them or, conversely, try to imitate them?
This study argues that the various conceptualizations of mistakes – as objects of investigation – provide a privileged track for understanding AI not only as an engineering endeavor but also as a criticized undertaking. Hence, the project turns to both promoters and critics, and thereby engages a variety of source materials: from coding designs and program architectures to political pamphlets and philosophical treatises.
The investigation is designed as three case studies. The first turns to experiments with self-correcting systems in the early post-war US. The second deals with the so-called expert systems of the 1960s and 1970s. The third centers on developments in Sweden toward the 1980s within academia, government, and the military, including critique thereof. The project adds knowledge to two research areas: critical AI studies and the study of mistakes in the history of technology. Considering the prioritized position of AI in current public and private spending, it is imperative that we better understand its past.
Funded by the Swedish Research Council (VR); active 2022–2025.