
The Tragedy of Algorithm Aversion (Die Tragik der Algorithm Aversion)

Ibrahim Filiz, Jan René Judek, Marco Lorenz, Markus Spiwoks

sofia Diskussionsbeiträge 2021, No. 2 https://doi.org/10.46850/sofia.9783941627888

Algorithms already handle many tasks more reliably than human experts. Nevertheless, some economic agents display a dismissive attitude towards algorithms (algorithm aversion). In some decision-making situations an error can have serious consequences; in others it cannot. In a framing experiment, we examine the relationship between the significance of the decision-making situation on the one hand and the frequency of algorithm aversion on the other. The results show that algorithm aversion occurs more often the more serious the possible consequences of a decision are. Precisely in particularly important decisions, algorithm aversion therefore reduces the probability of success. This can be described as the tragedy of algorithm aversion.

