TRAMES. A Journal of the Humanities and Social Sciences
ISSN 1736-7514 (Electronic)
ISSN 1406-0922 (Print)
Impact Factor (2022): 0.2
SHOULD WE TRUST ARTIFICIAL INTELLIGENCE?; pp. 499–522
PDF | https://doi.org/10.3176/tr.2019.4.07

Author
Margit Sutrop
Abstract

Trust is believed to be a cornerstone of artificial intelligence (AI). In April 2019 the European Commission's High-Level Expert Group on AI adopted the Ethics Guidelines for Trustworthy AI, stressing that human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology. Trustworthy AI is defined as AI that is ethical, lawful, and robust. Three things strike me about the EC Guidelines. Firstly, although building trust in AI seems to be a shared aim, the Guidelines do not explicate what trust is or how it can be built and maintained. Secondly, the Guidelines ignore the distinction, widely drawn in the philosophical literature, between trust and reliance. Thirdly, it is not clear how the values with which AI must align have been selected, or what would happen if those values came into conflict. In this paper, I shall provide a conceptual analysis of trust in contrast to reliance and ask when it is warranted to talk about trust in AI and trustworthy AI. I shall show how trust and risk are related, and what benefits and risks are associated with narrow and general AI. Finally, I shall point out that metaphorical talk about ethically aligned AI glosses over the real disagreements we have about ethical values.

References

Al-Rodhan, N. (2015) “The many ethical implications of emerging technologies”. Scientific American, March 13. Available online at <http://www.scientificamerican.com/article/the-many-ethical-implications-of-emerging-technologies/>. Accessed on 10 November 2019.

Amodei, D., C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané (2016) “Concrete problems in AI safety”. ArXiv, 25 July, v2. Available online at <https://arxiv.org/abs/1606.06565>. Accessed on 10 November 2019.

Baier, A. (1986) “Trust and antitrust”. Ethics 96, 231–260.
https://doi.org/10.1086/292745

Banja, J. (2019) “Welcoming the ‘intel-ethicist’”. Hastings Center Report 49, 1, 33–36.
https://doi.org/10.1002/hast.976

Bauer, W. A. (2018) “Virtuous vs. utilitarian artificial moral agents”. AI & Society.
https://doi.org/10.1007/s00146-018-0871-3

Beck, U. (1992) Risk society: towards a new modernity. Trans. Mark Ritter. London: Sage.

Beck, U. (2000) “Risk society revisited: theory, politics, and research programmes”. In B. Adam, U. Beck, and J. Van Loon, eds. The risk society and beyond: critical issues for social theory, 211–227. London: Sage.
https://doi.org/10.4135/9781446219539.n12

Beck, U. (2016) The metamorphosis of the world. London: Polity Press.

Bessi, A. and E. Ferrara (2016) “Social bots distort the 2016 U.S. Presidential election online discussion”. First Monday 21, 11. Available online at <https://firstmonday.org/ojs/index.php/fm/article/view/7090/5653>. Accessed on 10 November 2019.
https://doi.org/10.5210/fm.v21i11.7090

Bien, N., P. Rajpurkar, R. L. Ball, J. Irvin, A. Park, E. Jones, et al. (2018) “Deep-learning-assisted diagnosis for knee magnetic resonance imaging: development and retrospective validation of MRNet”. PLoS Medicine 15, 11, e1002699.
https://doi.org/10.1371/journal.pmed.1002699

Boddington, P. (2017) Towards a code of ethics for artificial intelligence. Cham: Springer.
https://doi.org/10.1007/978-3-319-60648-4

Bostrom, N. (2014) Superintelligence: paths, dangers, strategies. Oxford: Oxford University Press.

Bostrom, N. and E. Yudkowsky (2014) “The ethics of artificial intelligence”. In K. Frankish and W. M. Ramsey, eds. Cambridge handbook of artificial intelligence, 316–334. Cambridge: Cambridge University Press.
https://doi.org/10.1017/CBO9781139046855.020

Brundage, M., S. Avin et al. (2018) The malicious use of artificial intelligence: forecasting, prevention, and mitigation. Available online at <https://maliciousaireport.com/>. Accessed on 10 November 2019.

Bryson, J. J. (2010) “Why robot nannies probably won’t do much psychological damage”. Interaction Studies 11, 2, 196–200.
https://doi.org/10.1075/is.11.2.03bry

Bryson, J. (2018) “No one should trust artificial intelligence”. Science & Technology: Innovation, Governance, Technology 11, 14. Available online at <http://ourworld.unu.edu/en/no-one-should-trust-artificial-intelligence>. Accessed on 10 November 2019.

Buch, V. H., I. Ahmed, and M. Maruthappu (2018) “Artificial intelligence in medicine: current trends and future possibilities”. British Journal of General Practice 68, 668, 143–144.
https://doi.org/10.3399/bjgp18X695213

Chadwick, R. and K. Berg (2001) “Solidarity and equity: new ethical frameworks for genetic databases”. Nature Reviews Genetics 2, 318–321.
https://doi.org/10.1038/35066094

Chadwick, R. (2011) “The communitarian turn: myth or reality?”. Cambridge Quarterly of Healthcare Ethics 20, 4, 546–553.
https://doi.org/10.1017/S0963180111000284

Coeckelbergh, M. (2009) “Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents”. AI & Society 24, 2, 181–189.
https://doi.org/10.1007/s00146-009-0208-3

Coeckelbergh, M. (2012) “Can we trust robots?”. Ethics and Information Technology 14, 1, 53–60.
https://doi.org/10.1007/s10676-011-9279-1

Chalmers, D. (2010) “The singularity: a philosophical analysis”. Journal of Consciousness Studies 17, 9–10, 7–65.

Dafoe, A. (2018) AI governance: a research agenda. University of Oxford. Available online at <http://www.fhi.ox.ac.uk/govaiagenda>. Accessed on 10 November 2019.

Etzioni, A. and O. Etzioni (2018) “Incorporating ethics into artificial intelligence”. In A. Etzioni. Happiness is the wrong metric: a liberal communitarian response to populism, 235–252. (Library of Public Policy and Public Administration, 11.) Cham: Springer. Available online at <https://www.springer.com/gp/book/9783319696225>. Accessed on 10 November 2019.
https://doi.org/10.1007/978-3-319-69623-2_15

EU Commission (2019a) A definition of AI: main capabilities and disciplines. Available online at <https://www.aepd.es/media/docs/ai-definition.pdf>. Accessed on 10 November 2019.

EU Commission (2019b) Ethics guidelines for trustworthy AI. Available online at <https://ec.europa.eu/futurium/en/ai-alliance-consultation>. Accessed on 10 November 2019.

Farquhar, S., J. Halstead, O. Cotton-Barratt, S. Schubert, H. Belfield, and A. Snyder-Beattie (2017) Existential risk diplomacy and governance. Global Priorities Project. Available online at <https://www.fhi.ox.ac.uk/wp-content/uploads/Existential-Risks-2017-01-23.pdf>. Accessed on 10 November 2019.

Gladden, M. E. (2014) “The social robot as ‘charismatic leader’: a phenomenology of human submission to nonhuman power”. Frontiers in Artificial Intelligence and Applications 273, 329–339.
https://doi.org/10.3233/978-1-61499-480-0-329

Grace, K., J. Salvatier, A. Dafoe, B. Zhang, and O. Evans (2018) “When will AI exceed human performance? Evidence from AI experts”. ArXiv. Available online at <https://arxiv.org/abs/1705.08807>. Accessed on 10 November 2019.
https://doi.org/10.1613/jair.1.11222

Gregory, A. (2012) “Changing direction on direction of fit”. Ethical Theory and Moral Practice 15, 603–614.
https://doi.org/10.1007/s10677-012-9355-6

Habermas, J. [1962] (1991) The structural transformation of the public sphere. Trans. Thomas Burger. Cambridge, MA: MIT Press.

Hardin, R. (1996) “Trustworthiness”. Ethics 107, 26–42.
https://doi.org/10.1086/233695

Hardin, R. (2002) Trust and trustworthiness. New York: Russell Sage Foundation.

Hawking, S. (2018) Brief answers to big questions. New York: Bantam Books.

Hengstler, M., E. Enkel, and S. Duelli (2016) “Applied artificial intelligence and trust – the case of autonomous vehicles and medical assistance devices”. Technological Forecasting & Social Change 105, 105–120.
https://doi.org/10.1016/j.techfore.2015.12.014

Holton, R. (1994) “Deciding to trust, coming to believe”. Australasian Journal of Philosophy 72, 63–76.
https://doi.org/10.1080/00048409412345881

Howard, D. and I. Muntean (2017) “Artificial moral cognition: moral functionalism and autonomous moral agency”. In T. M. Powers, ed. Philosophy and computing, 121–160. (Philosophical studies series, 128.) New York: Springer.
https://doi.org/10.1007/978-3-319-61043-6_7

Hwang, T. and L. Rosen (2017) Harder, better, faster, stronger: international law and the future of online PsyOps. (ComProp Working Paper, 1.) Available online at <http://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2017/02/Comprop-Working-Paper-Hwang-and-Rosen.pdf>. Accessed on 10 November 2019.

Iphofen, R. and M. Kritikos (2019) “Regulating artificial intelligence and robotics: ethics by design in a digital society”. Contemporary Social Science, 1–15.
https://doi.org/10.1080/21582041.2018.1563803

Jones, K. (2012) “Trustworthiness”. Ethics 123, 1, 61–85.
https://doi.org/10.1086/667838

Keren, A. (2014) “Trust and belief: a preemptive reasons account”. Synthese 191, 2593–2615.
https://doi.org/10.1007/s11229-014-0416-3

Kuipers, B. (2018) “How can we trust a robot?”. Communications of the ACM 61, 3, 86–95.
https://doi.org/10.1145/3173087

Kurzweil, R. (2005) The singularity is near. New York: Viking.

Lagerspetz, O. (1998) Trust: the tacit demand. Dordrecht: Kluwer Academic Publishers.
https://doi.org/10.1007/978-94-015-8986-4

Lee, J. D. and K. A. See (2004) “Trust in automation: designing for appropriate reliance”. Human Factors 46, 1, 50–80.
https://doi.org/10.1518/hfes.46.1.50_30392

Lee, J.-G., K. J. Kim, S. Lee, and D.-H. Shin (2015) “Can autonomous vehicles be safe and trustworthy? Effects of appearance and autonomy of unmanned driving systems”. International Journal of Human-Computer Interaction 31, 682–691.
https://doi.org/10.1080/10447318.2015.1070547

Li, X., T. J. Hess, and J. S. Valacich (2008) “Why do we trust new technology? A study of initial trust formation with organizational information systems”. Journal of Strategic Information Systems 17, 39–71.
https://doi.org/10.1016/j.jsis.2008.01.001

Lucas, G. M., J. Gratch, A. King, and L.-P. Morency (2014) “It’s only a computer: virtual humans increase willingness to disclose”. Computers in Human Behavior 37, 94–100.
https://doi.org/10.1016/j.chb.2014.04.043

Luhmann, N. (1979) Trust and power. Toronto: Wiley.

McLeod, C. (2015) “Trust”. In Edward N. Zalta, ed. The Stanford encyclopedia of philosophy. Available online at <https://plato.stanford.edu/archives/fall2015/entries/trust/>. Accessed on 10 November 2019.

Mittelstadt, B. D., P. Allo, M. Taddeo, S. Wachter, and L. Floridi (2016) “The ethics of algorithms: mapping the debate”. Big Data & Society 3, 1–21.
https://doi.org/10.1177/2053951716679679

Müller, V. and N. Bostrom (2016) “Future progress in artificial intelligence: a survey of expert opinion”. In V. Müller, ed. Fundamental issues of artificial intelligence, 553–571. (Synthese Library, 376.) Cham: Springer.
https://doi.org/10.1007/978-3-319-26485-1_33

O’Neill, O. (2018) “Linking trust to trustworthiness”. International Journal of Philosophical Studies 26, 1, 293–300.
https://doi.org/10.1080/09672559.2018.1454637

Pieters, W. (2011) “Explanation and trust: what to tell the user in security and AI?” Ethics and Information Technology 13, 53–64.
https://doi.org/10.1007/s10676-010-9253-3

Potter, N. N. (2002) How can I be trusted? A virtue theory of trustworthiness. Lanham, MD: Rowman & Littlefield.

Prinzing, M. (2017) “Friendly superintelligent AI: all you need is love”. In V. Müller, ed. The philosophy & theory of artificial intelligence, 288–301. Berlin: Springer.
https://doi.org/10.1007/978-3-319-96448-5_31

Riedl, M. and B. Harrison (2016) “Using stories to teach human values to artificial agents”. In The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence: AI, Ethics, and Society, February 12–13, 2016, Phoenix, Arizona. (Technical Report, WS-16-02.)

Russell, S. (2017a) “Provably beneficial artificial intelligence”. In The next step: exponential life. BBVA OpenMind. Available online at <https://people.eecs.berkeley.edu/~russell/papers/russell-bbvabook17-pbai.pdf>. Accessed on 10 November 2019.

Russell, S., D. Dewey, and M. Tegmark (2015) “Research priorities for robust and beneficial artificial intelligence”. AI Magazine 36, 4, 94–105.
https://doi.org/10.1609/aimag.v36i4.2577

Russell, S., S. Hauert, R. Altman, and M. Veloso (2015) “Robotics: ethics of artificial intelligence”. Nature 521, 7553, 415–418.
https://doi.org/10.1038/521415a

Sample, I. (2017) “Ban on killer robots urgently needed, say scientists”. The Guardian, 13 November. Available online at <https://www.theguardian.com/science/2017/nov/13/ban-on-killer-robots-urgently-needed-say-scientists>. Accessed on 10 November 2019.

Seibt, J., R. Hakli, and M. Nørskov, eds. (2014) Sociable robots and the future of social relations. (Frontiers in Artificial Intelligence and Applications, 273.) IOS Press. Available online at <https://www.iospress.nl/book/sociable-robots-and-the-future-of-social-relations/>. Accessed on 10 November 2019.

Sethumadhavan, A. (2019) “Trust in artificial intelligence”. Ergonomics in Design 27, 2, April 1.
https://doi.org/10.1177/1064804618818592

Sharkey, N. and A. Sharkey (2010) “The crying shame of robot nannies: an ethical appraisal”. Interaction Studies 11, 2, 161–190.
https://doi.org/10.1075/is.11.2.01sha

Simpson, T. W. (2012) “What is trust?” Pacific Philosophical Quarterly 93, 550–569.
https://doi.org/10.1111/j.1468-0114.2012.01438.x

Slaughterbots (2017) Arms-control advocacy video. Directed by S. Sugg, produced by M. Nelson, and written by M. Wood. Available online at <https://www.youtube.com/watch?v=9CO6M2HsoIA>. Accessed on 11 November 2019.

Soares, N. and B. Fallenstein (2014) Aligning superintelligence with human interests: a technical research agenda. (Technical Report, 2014-8.) Machine Intelligence Research Institute.

Soares, N. (2015) The value learning problem. (Technical Report, 2015-4.) Machine Intelligence Research Institute.

Solomon, R. and F. Flores (2001) Building trust in business, politics, relationships, and life. Oxford: Oxford University Press.

Strout, J. (2014) “Practical implications of mind uploading”. In R. Blackford and D. Broderick, eds. Intelligence unbound: the future of uploaded and machine minds, 201–211. Wiley.
https://doi.org/10.1002/9781118736302.ch13

Suarez-Serrato, P., M. E. Roberts, C. Davis, and F. Menczer (2016) “On the influence of social bots in online protests: preliminary findings of a Mexican case study”. ArXiv. Available online at <https://arxiv.org/abs/1609.08239>. Accessed on 10 November 2019.

Sutrop, M. (2007) “Trust”. In M. Häyry, R. Chadwick, V. Arnason, and G. Arnason, eds. The ethics and governance of human genetic databases, 190–198. Cambridge: Cambridge University Press.
https://doi.org/10.1017/CBO9780511611087.022

Sutrop, M. (2010) “Ethical issues in governing biometric technologies”. In A. Kumar and D. Zhang, eds. Ethics and policy of biometrics, 102–114. Heidelberg: Springer-Verlag.
https://doi.org/10.1007/978-3-642-12595-9_14

Sutrop, M. (2011a) “Changing ethical frameworks: from individual rights to the common good?”. Cambridge Quarterly of Healthcare Ethics 20, 4, 533–545.
https://doi.org/10.1017/S0963180111000272

Sutrop, M. (2011b) “How to avoid a dichotomy between autonomy and beneficence: from liberalism to communitarianism and beyond”. Journal of Internal Medicine 269, 4, 375–379.
https://doi.org/10.1111/j.1365-2796.2011.02349_2.x

Sutrop, M. and K. Laas-Mikko (2012) “From identity verification to behaviour prediction: ethical implications of second-generation biometrics”. Review of Policy Research 29, 1, 22–36.
https://doi.org/10.1111/j.1541-1338.2011.00536.x

Sutrop, M. (2015) “Can values be taught? The myth of value-free education”. Trames 19, 2, 189–202.
https://doi.org/10.3176/tr.2015.2.06

Taddeo, M. (2010a) “Modelling trust in artificial agents: a first step towards the analysis of e-trust”. Minds and Machines 20, 2, 243–257.
https://doi.org/10.1007/s11023-010-9201-3

Taddeo, M. (2010b) “Trust in technology: a distinctive and a problematic relation”. Knowledge, Technology and Policy 23, 3–4, 283–286.
https://doi.org/10.1007/s12130-010-9113-9

Taddeo, M. and L. Floridi (2011) “The case for e-trust”. Ethics and Information Technology 13, 1, 1–3.
https://doi.org/10.1007/s10676-010-9263-1

Tegmark, M. (2017) Life 3.0: being human in the age of artificial intelligence. London: Allen Lane.

Terrasse, M., M. Gorin, and D. Sisti (2019) “Social media, e-health, and medical ethics”. Hastings Center Report 49, 1, 24–33.
https://doi.org/10.1002/hast.975

Vakkuri, V. and P. Abrahamsson (2018) “The key concepts of ethics of artificial intelligence”. IEEE International Conference on Engineering, Technology and Innovation, 17.06.–19.06.2019, Sophia Antipolis.

Wallach, W. and C. Allen (2009) Moral machines: teaching robots right from wrong. Oxford: Oxford University Press.
https://doi.org/10.1093/acprof:oso/9780195374049.001.0001

Winfield, A. F. and M. Jirotka (2018) “Ethical governance is essential to building trust in robotics and artificial intelligence systems”. Philosophical Transactions of the Royal Society A 376, 20180085.
https://doi.org/10.1098/rsta.2018.0085

Wright, S. (2010) “Trust and trustworthiness”. Philosophia 38, 615–627.
https://doi.org/10.1007/s11406-009-9218-0

Yu, H., Z. Shen et al. (2018) “Building ethics into artificial intelligence”. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), 13–19 July 2018, Stockholm. Available online at <https://www.ijcai.org/proceedings/2018/0779.pdf>. Accessed on 10 November 2019.
https://doi.org/10.24963/ijcai.2018/779

Yudkowsky, E. (2004) Coherent extrapolated volition. San Francisco, CA: The Singularity Institute. Available online at <https://intelligence.org/files/CEV.pdf>. Accessed on 10 November 2019.

Złotowski, J., K. Yogeeswaran, and C. Bartneck (2017) “Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources”. International Journal of Human-Computer Studies 100, 48–54.
https://doi.org/10.1016/j.ijhcs.2016.12.008
