The ethical challenges of regulating algorithms

Start - End
2019 - 2022 (ongoing)
Department(s)
Department of Philosophy and Moral Sciences

Abstract

Since Google decided in 2001 to start using data logs to generate predictions about users’ click-throughs, and thus about the relevance of certain advertisements for a user, the collection and analysis of consumers’ behavioural data has become an essential part of the commercial strategies of all sorts of businesses. (Naughton 2019) As a result, we are now witnessing an exponential growth in the development of technologies and methods to ‘harvest’ such data. There is a constant flurry of new smartphone applications and digital platforms in which users give permission to ‘collect, use, transfer, sell and disclose non-personal information for any purpose’ and to use this information for building ‘market research products and services’. (Chen 2017)

The collection, aggregation and analysis of data – personal and non-personal, but mainly data that can tell us something about human behaviour – has become a for-profit business model. As Jean-François Lyotard predicted, information has become a commercial asset that is produced in order to be sold. (Lyotard 2004 [1979]: 4–5, cited in Prainsack 2019, 10) According to Shoshana Zuboff (2015, 2019), the current developments even mark the beginning of a new form of ‘information capitalism’ that ‘aims to predict and modify human behaviour as a means to produce revenue and market control’. (Zuboff 2015, 75) The reason this new model has become so successful so quickly is the strong pull it exerts on its users: many of us depend on all sorts of ‘smart’ applications and platforms to organise our daily lives, social interactions, work, education, health and self-care. (AlgorithmWatch and Bertelsmann Stiftung 2019; Zuboff 2019) As a result, consumers and users of these services have shown little reluctance to consent to the ‘invasion of their privacy’ when the services they get in return offer them a direct reward. (Athey, Catalini, and Tucker 2017)

These developments have drawn significant scholarly attention from various disciplines. In the ethics literature, a significant part of the debate over ‘big data’ revolves around questions of informed consent, privacy, anonymisation and data protection. (Mittelstadt and Floridi 2017) In this context, various problems have been identified: when the aim is for data to be aggregated and reused in search of unforeseen connections between data points, ‘consent cannot be informed’ (Mittelstadt and Floridi 2017, 454–55), and the scope of the data that is collected has been qualified as a privacy issue, especially as data subjects do not expect their data to be analysed outside the (often highly personal) specific contexts in which they are created. (Mittelstadt and Floridi 2017, 459) Much research focuses on the protection of the individual against abuses and harms resulting from (unauthorized) uses of their personal data (Oostveen 2018), but it has also become clear that the consent model – which relies on individuals ‘to decide for themselves how to weigh the costs and benefits of the collection, use, or disclosure of their information’ – has reached its limits. (Solove 2012, 1880) On social media platforms, up to 98% of users do not even pay attention to what they are agreeing to (Obar and Oeldorf-Hirsch 2018), and by individualizing the responsibility to control one’s data, current privacy protection regulations seem in fact to facilitate this disregard. (Hull 2015)

At the same time, several potentially harmful effects of the use of algorithm-based decision processes have been identified. Concerns have been raised that algorithm-based decision-making tools may capture and even reinforce existing discriminatory biases (Cohen and Graver 2017; Vedder and Naudts 2017; Redden 2018), as algorithms are now used not only to determine the advertisements we get to see on the websites we visit, but also to decide who gets a loan, to calculate insurance premiums, and even to decide whether people should be hired or fired. (Martin 2018) Furthermore, some algorithms in use are labelled ‘black boxes’, as they base decisions on ‘complex rules that challenge and confound human capabilities for action and comprehension’ and have the capacity to ‘define or modify decision-making rules autonomously’ based on the input of data and the recognition of patterns. (Mittelstadt et al. 2016, 3) In addition to problems with basing decisions on inconclusive or misguided data and evidence, this has resulted in debates about the transparency, interpretability and validation of algorithms, and about whether providers of algorithmic systems should be obliged to explain the basis of their decisions. (Wachter, Mittelstadt, and Floridi 2017) However, as even disclosure does not ensure comprehensibility (Mittelstadt et al. 2016; de Laat 2017), preliminary proposals suggest addressing these problems by offering counterfactual explanations (answers to ‘what if’ questions). (Wachter, Mittelstadt, and Russell 2018)
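
To make the notion of a counterfactual explanation concrete, the following minimal Python sketch searches for the smallest change to a single input feature that would have flipped an automated decision, in the spirit of Wachter, Mittelstadt, and Russell (2018). The decision rule, feature names and numbers are hypothetical illustrations, not any real credit model.

```python
# A minimal sketch of a counterfactual explanation. The decision rule,
# feature names and numbers are hypothetical, not any real credit model.

def approve_loan(income: float, debt: float) -> bool:
    """Hypothetical black-box decision rule."""
    return income * 0.3 - debt * 0.5 > 5_000


def counterfactual_income(income: float, debt: float,
                          step: float = 500.0, limit: float = 1_000_000.0):
    """Find the smallest income (searched in fixed steps) at which
    the decision would have flipped; None if no flip is found."""
    candidate = income
    while candidate <= limit:
        if approve_loan(candidate, debt):
            return candidate
        candidate += step
    return None


applicant = {"income": 30_000.0, "debt": 20_000.0}
if not approve_loan(**applicant):
    needed = counterfactual_income(**applicant)
    print(f"Denied. If your income had been {needed:,.0f} instead of "
          f"{applicant['income']:,.0f}, you would have been approved.")
```

An explanation of this form (‘if your income had been X, you would have been approved’) can be given to the data subject without disclosing the model itself, which is why it has been proposed as a way around the comprehensibility problem.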

Objectives

As the collection and processing of data escape the control of individual data subjects and algorithm-based decision mechanisms infiltrate our lives, there is an urgent need to evaluate the use and regulation of these technologies from an ethical perspective. The main aim of this research project is to advance insights into the problematic aspects of the use of behavioural data and algorithm-based decision mechanisms, and to shed light on the difficulties that arise in addressing these ethical concerns. To achieve this objective, the project will focus on a distinct form of data-driven algorithmic decision-making: the use of personalized pricing algorithms, which adjust prices based on consumers’ profiles and on the algorithm’s prediction of the top price a customer would be willing to pay for a product.

Work packages

1. Micro-targeting consumers: personalized pricing, algorithms, and their distributive effects

‘Personalized pricing’ is a distinct form of data-driven algorithmic decision-making that adjusts prices based on consumers’ profiles. This type of dynamic pricing algorithm uses behavioural data to predict the top price a customer would be willing to pay for a product. Based on data collected about customers’ social media profiles, browsing history, payment methods and past purchases, retailers can vary prices between customers, for example by targeting wealthy customers with an above-average price, making them believe they have bought an above-average quality product, while offering bargain chasers an elusive deal. (‘Complete Guide to Dynamic Pricing’ 2016; Brodmerkel 2017) In a 2018 study with ‘mystery shoppers’, the European Commission’s DG Justice found that 61% of the 160 e-commerce websites included in the investigation already used pricing algorithms to present customers with personalized offers (CMA 2018, 38), and the introduction of such tactics in physical stores may be closer than we think: in 2016, Amazon presented its prototype of the grocery store of the future, Amazon Go, a store without checkouts, where sensors and cameras detect each item shoppers put into their basket and an app automatically bills them as they leave the store. (Brodmerkel 2017)
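
To illustrate the mechanism at issue, the following minimal Python sketch shows how such a pricing rule can operate: a model predicts a customer’s willingness to pay from profile data, and the quoted price is steered toward that prediction. The features and weights are invented for illustration and are not taken from any real retailer.

```python
# A minimal sketch of a personalized pricing rule: predict a customer's
# willingness to pay (WTP) from profile data, then quote a price steered
# toward that prediction. Features and weights are hypothetical.

BASE_PRICE = 100.0  # the product's list price
COST = 60.0         # the retailer's floor: never sell below cost

def predict_willingness_to_pay(profile: dict) -> float:
    """Hypothetical model: 'wealthy' signals raise the predicted
    ceiling, 'bargain hunter' signals lower it."""
    wtp = BASE_PRICE
    wtp *= 1.0 + 0.2 * profile.get("premium_device", 0)    # e.g. high-end phone
    wtp *= 1.0 + 0.1 * profile.get("affluent_postcode", 0)
    wtp *= 1.0 - 0.2 * profile.get("bargain_hunter", 0)    # e.g. came via a coupon site
    return wtp

def personalized_price(profile: dict) -> float:
    """Quote the predicted top price, but never below cost."""
    return max(COST, round(predict_willingness_to_pay(profile), 2))

print(personalized_price({"premium_device": 1, "affluent_postcode": 1}))  # 132.0
print(personalized_price({"bargain_hunter": 1}))                          # 80.0
```

Even this toy rule exhibits the two features the ethical debate turns on: the quoted price depends on who is asking, and the customer has no way of seeing that it does.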

In contrast to the scholarly attention to political micro-targeting and the widespread conviction that there is something problematic about targeted political advertisements and voter manipulation based on data-driven voter research (Zuiderveen Borgesius et al. 2018; Gizzi 2018; Singer 2018), it is often assumed that ‘there is nothing wrong’ with using data to micro-target consumers with personalized ads for consumer products or services. (Cooper 2018) Likewise, for businesses, differentiating their products and prices has long been an essential marketing strategy in the fight for market share and profit. On the other hand, considerations of equality, fairness and transparency immediately raise questions about a mechanism that aims to mislead and discriminate for profit.

Question 1: What are the ethical issues relating to ‘personalized pricing’?

Question 2: What does it mean for a ‘personalized pricing’ algorithmic decision-making system to be ‘fair’ or ‘non-discriminatory’?

Methodology 1&2: I will first further explore recent developments in the application and regulation of ‘personalized pricing’ practices. Next, I will conduct a ‘systematic review of reasons’ of the relevant ethical and philosophical scholarly literature on the discriminatory and distributive effects of the use of algorithms. To critically assess the arguments identified, and to identify potential arguments that are not mentioned in the published literature, I will examine the coherence of the identified reasons with relevant theoretical approaches in political philosophy with respect to discrimination, procedural fairness and fairness of outcomes. To conclude this work package, I will apply the identified ethical and philosophical arguments to the case of personalized pricing and draw conclusions.

2. From privacy ‘self-management’ to group privacy

It has been acknowledged that the solution for privacy concerns in the context of big data might not lie with the individual. (Solove 2012; Mittelstadt and Floridi 2017; Baruh and Popescu 2017) Especially for data that are de-identified and aggregated, traditional privacy protection and consent models fall short in providing answers. What is more, the effects of the processing of data often surpass the individual: perfectly anonymised data sets still allow profiling practices that can result in group-level harms, for example when gender, age, or geographical, socioeconomic, ethnic, health-related or other characteristics revealed by the data are used as parameters in algorithm-based decisions. (Zwitter 2014; Taylor, Floridi, and van der Sloot 2017) Furthermore, it is difficult for individuals to ‘opt out’ and prevent their data from being collected and used for marketing profiling or micro-targeting, as the use of digital tools has become normalized (Montgomery, Chester, and Kopp 2018), and not using such tools or platforms is often directly ‘punished’ with incomplete access to information, limitations on the availability of goods and services, and even additional costs – for example, when not ‘signing up’ means missing out on advantageous offers and prices.
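
The following minimal Python sketch, with invented records and numbers, illustrates why anonymisation alone does not prevent such group-level harms: none of the records identifies an individual, yet grouping them by postcode is enough to derive a parameter that an algorithm can then apply against every future applicant from that postcode, whether or not their own data ever entered the data set.

```python
# A minimal sketch of a group-level harm surviving anonymisation.
# All records and numbers are invented for illustration.

from collections import defaultdict

anonymous_records = [
    {"postcode": "9000", "defaulted": 1},
    {"postcode": "9000", "defaulted": 0},
    {"postcode": "9000", "defaulted": 1},
    {"postcode": "1000", "defaulted": 0},
    {"postcode": "1000", "defaulted": 0},
]

totals = defaultdict(lambda: [0, 0])  # postcode -> [defaults, records]
for record in anonymous_records:
    totals[record["postcode"]][0] += record["defaulted"]
    totals[record["postcode"]][1] += 1

group_risk = {pc: d / n for pc, (d, n) in totals.items()}

def offer_credit(postcode: str) -> bool:
    """Applicants inherit their group's score, whether or not their
    own data was ever in the data set."""
    return group_risk.get(postcode, 0.0) < 0.5

print(group_risk)            # {'9000': 0.666..., '1000': 0.0}
print(offer_credit("9000"))  # False: refused on group membership alone
```

This is why this work package turns from individual consent to collective and group-level approaches to privacy.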

Question 3: What solutions to privacy concerns in the context of big data exist beyond ‘privacy self-management’?

Question 4: What are the ethical merits of collective and group approaches to privacy?

Methodology 3&4: I will first identify solutions ‘beyond consent’ and ‘privacy self-management’ that have been proposed in the scholarly literature and in relevant policy documents from various stakeholders, such as businesses that engage in data processing and algorithmic profiling, and NGOs and governmental organisations that are engaged in the protection of privacy. A key point of departure will be the preliminary insights of Taylor, Floridi and colleagues (2017) with respect to group privacy. Particular attention will be given to insights developed in research ethics and medical ethics scholarship with respect to alternative models of consent and privacy protection that seek to transfer (part of) the responsibility for protecting individuals or groups against abuses and harms resulting from data processing and algorithm-based decision-making to collective mechanisms. To critically assess the arguments and models identified, and to identify strengths and weaknesses that are not mentioned in the published literature, I will examine whether the identified approaches can be reconciled with relevant theoretical approaches in political philosophy with respect to the philosophical foundations of privacy.

3. Regulating algorithms: the challenge of ownership

Considerations of privacy and fairness both indicate that algorithmic decision outcomes must to some extent be transparent and verifiable. (Mittelstadt et al. 2016; Dreyer and Schulz 2019; Manheim and Kaplan 2018) In addition, as many algorithms are ‘black boxes’ or at least ‘opaque’, it has been suggested that algorithm-based decision mechanisms should be tested for undesired outcomes and validated; even the ‘ethical certification’ of algorithms has been proposed to ensure that their use does not result in discriminatory outcomes. (Mittelstadt 2017) However, the transparency, interpretability and validation of algorithms can be hindered by data and algorithms being kept secret and proprietary. (Cohen et al. 2014; Price 2016) For example, in 2014 the German Federal Supreme Court ruled that people who have been assessed by an algorithm-based decision mechanism (in the context of credit approval procedures) are not entitled to know how the evaluation of their future behaviour has been calculated, holding that an ‘abstract method of score value calculation need not be communicated’ and that such information, in addition to other data, is protected as a trade secret. (Lischka, Klingel, and Bertelsmann Stiftung 2017, 34) Hence, an important element in the debate about regulating the use of algorithms is the question of ownership of algorithm-based prediction methods and decision mechanisms, and the need to balance the interest in transparency with the commercial protection of these assets.

Question 5: In light of the ethical issues relating to the use of algorithm-based prediction methods and decision mechanisms, is it ethically justifiable that algorithms can be protected as trade secrets, and what would be equitable conditions for mandating the disclosure and audit of algorithms?

Methodology 5: To address this question, I will first analyze and review the relevant ethical scholarly literature on the right to obtain meaningful information about automated decision-making processes (included in the EU General Data Protection Regulation) to identify the ethical arguments that have been used for and against disclosing valuable commercial information, and the conditions and criteria that have been set in this context. To identify potential arguments that are not mentioned in this specific body of literature, I will review the relevant legal and philosophical literature on the protection of trade secrets and the mandatory disclosure of valuable commercial information for public interest purposes. Finally, I will investigate the applicability of the identified ethical and philosophical arguments to the protection of algorithms and draw conclusions.

Methodology

This research project will employ several methodological approaches:
(1) To identify relevant ethical reasons and arguments, I will conduct a ‘systematic review of reasons’, a recently developed innovative methodology. (Strech and Sofaer 2012) This specially adapted version of a PRISMA-type literature review (Moher et al. 2009) takes into account the particular nature of literature that is ‘reasons-based’, such as ethical and legal literature. I will search all relevant databases (e.g. Web of Science, HeinOnline, JSTOR, Westlaw International) to collect and select publications in English, Dutch and French that mention relevant reasons and arguments, search for relevant books (e.g. through Google Scholar) and policy reports, and examine the bibliographies of included publications to identify other relevant ones.
(2) Having assembled the fullest possible picture of the various reasons and arguments, I will subject these reasons to an ethical analysis by carefully examining their coherence with background theories in moral and political philosophy. I will use this method to critically assess the arguments identified in the systematic review of reasons and to identify potential arguments that are not mentioned in the published literature.
(3) For each work package of the project, I will actively seek to promote a critical dialogue by developing an international network of scholars working on these issues. Frequent participation in relevant conferences will be instrumental in this regard, as will research visits. Subject to additional funding, I will undertake a research visit to the Oxford Internet Institute, University of Oxford (UK), hosted by Prof. Luciano Floridi. Moreover, as technological developments and legal regulation are an important point of reference for this project, I will also ensure an ongoing exchange of ideas within a broader community of scholars from the various areas that are specifically relevant to this research project.

Planning

Year 1: Work package 1, questions 1&2
Year 2: Work package 2, questions 3&4
Year 3: Finalization of work package 2, question 4 + Work package 3, question 5

Publications and science communication

For each of the work packages, the aim is to publish research results in one or more international peer-reviewed academic journals (in the disciplines of applied ethics, philosophy, or law). In addition, as the topics of this research project are relevant to the general public, I will share its findings through blog posts and opinion pieces in magazines and newspapers. Furthermore, I aim to disseminate my findings to policy makers, legislators and politicians through active participation in policy conferences and public debates.

References

AlgorithmWatch, and Bertelsmann Stiftung. 2019. ‘Automating Society: Taking Stock of Automated Decision-Making in the EU’. https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society....
Athey, Susan, Christian Catalini, and Catherine Tucker. 2017. ‘The Digital Privacy Paradox: Small Money, Small Costs, Small Talk’. Working Paper 23488. National Bureau of Economic Research. https://doi.org/10.3386/w23488.
Baruh, Lemi, and Mihaela Popescu. 2017. ‘Big Data Analytics and the Limits of Privacy Self-Management’. New Media & Society 19 (4): 579–96. https://doi.org/10.1177/1461444815614001.
Brodmerkel, Sven. 2017. ‘Retailers Using Artificial Intelligence to Work out Top Price You’ll Pay’. ABC News, 27 June 2017. https://www.abc.net.au/news/2017-06-27/dynamic-pricing-retailers-using-a....
Chen, Brian X. 2017. ‘How to Protect Your Privacy as More Apps Harvest Your Data’. The New York Times, 22 December 2017, sec. Technology. https://www.nytimes.com/2017/05/03/technology/personaltech/how-to-protec....
CMA. 2018. ‘Pricing Algorithms: Economic Working Paper on the Use of Algorithms to Facilitate Collusion and Personalised Pricing’. CMA94. London.
Cohen, I. Glenn, and Harry S. Graver. 2017. ‘Cops, Docs, and Code: A Dialogue Between Big Data in Health Care and Predictive Policing’. UC Davis Law Review 51: 437.
‘Complete Guide to Dynamic Pricing’. 2016. Cleverism. 11 May 2016. https://www.cleverism.com/complete-guide-dynamic-pricing/.
Cooper, Olly. 2018. ‘Micro-Targeting: The Good, the Bad and the Unethical’. Cambridge Network. 17 August 2018. https://www.cambridgenetwork.co.uk/news/micro-targeting-the-good-the-bad....
Dreyer, Stephan, and Wolfgang Schulz. 2019. ‘Monitoring Algorithmic Systems Will Need More than the EU’s GDPR’. Ethics of Algorithms. 24 January 2019. https://ethicsofalgorithms.org/2019/01/24/monitoring-algorithmic-systems....
Gizzi, Dan. 2018. ‘The Ethics of Political Micro-Targeting’. Data Driven Investor (blog). 3 December 2018. https://medium.com/datadriveninvestor/the-ethics-of-political-micro-targ....
Hull, Gordon. 2015. ‘Successful Failure: What Foucault Can Teach Us about Privacy Self-Management in a World of Facebook and Big Data’. Ethics and Information Technology 17 (2): 89–101. https://doi.org/10.1007/s10676-015-9363-z.
Laat, Paul B. de. 2017. ‘Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?’ Philosophy & Technology, November, 1–17. https://doi.org/10.1007/s13347-017-0293-z.
Lischka, Konrad, Anita Klingel, and Bertelsmann Stiftung. 2017. ‘When Machines Judge People’. Discussion Paper Ethics of Algorithms. https://doi.org/10.11586/2017031.
Manheim, Karl M., and Lyric Kaplan. 2018. ‘Artificial Intelligence: Risks to Privacy and Democracy’. Yale Journal of Law & Technology, October. https://papers.ssrn.com/abstract=3273016.
Martin, Kirsten. 2018. ‘Ethical Implications and Accountability of Algorithms’. Journal of Business Ethics, June. https://doi.org/10.1007/s10551-018-3921-3.
Mittelstadt, Brent Daniel. 2017. ‘Ethical Aspects of Automated Algorithm-Based Medical Decision Making’. Conference presentation, Ghent, Belgium.
Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. 2016. ‘The Ethics of Algorithms: Mapping the Debate’. Big Data & Society 3 (2): 1–21. https://doi.org/10.1177/2053951716679679.
Mittelstadt, Brent Daniel, and Luciano Floridi. 2017. ‘The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts’. In The Ethics of Biomedical Big Data, edited by Brent Daniel Mittelstadt and Luciano Floridi, 445–80. Springer.
Moher, David, Alessandro Liberati, Jennifer Tetzlaff, Douglas G. Altman, and PRISMA Group. 2009. ‘Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement’. Annals of Internal Medicine 151 (4): 264–69, W64.
Montgomery, Kathryn, Jeff Chester, and Katharina Kopp. 2018. ‘Health Wearables: Ensuring Fairness, Preventing Discrimination, and Promoting Equity in an Emerging Internet-of-Things Environment’. Journal of Information Policy 8: 34–77. https://doi.org/10.5325/jinfopoli.8.2018.0034.
Singer, Natasha. 2018. ‘“Weaponized Ad Technology”: Facebook’s Moneymaker Gets a Critical Eye’. The New York Times, 16 August 2018. https://www.nytimes.com/2018/08/16/technology/facebook-microtargeting-ad....
Naughton, John. 2019. ‘“The Goal Is to Automate Us”: Welcome to the Age of Surveillance Capitalism’. The Guardian, 20 January 2019. https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-o....
Obar, Jonathan A., and Anne Oeldorf-Hirsch. 2018. ‘The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services’. Information, Communication & Society, July, 1–20. https://doi.org/10.1080/1369118X.2018.1486870.
Oostveen, Manon. 2018. Protecting Individuals against the Negative Impact of Big Data: Potential and Limitations of the Privacy and Data Protection Law Approach. Information Law Series (INFO), volume 42. Alphen aan den Rijn, The Netherlands: Wolters Kluwer.
Prainsack, Barbara. 2019. ‘Data Donation: How to Resist the iLeviathan’. In The Ethics of Medical Data Donation, edited by Jenny Krutzinna and Luciano Floridi, 137:9–22. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-04363-6_2.
Redden, Joanna. 2018. ‘The Harm That Data Do’. Scientific American, 1 November 2018. https://www.scientificamerican.com/article/the-harm-that-data-do/.
Solove, Daniel J. 2012. ‘Introduction: Privacy Self-Management and the Consent Dilemma Symposium: Privacy and Technology’. Harvard Law Review 126: 1880–1903. https://heinonline.org/HOL/P?h=hein.journals/hlr126&i=1910.
Strech, Daniel, and Neema Sofaer. 2012. ‘How to Write a Systematic Review of Reasons’. Journal of Medical Ethics 38 (2): 121–26. https://doi.org/10.1136/medethics-2011-100096.
Taylor, Linnet, Luciano Floridi, and Bart van der Sloot, eds. 2017. Group Privacy: New Challenges of Data Technologies. Philosophical Studies Series, volume 126. Switzerland: Springer.
Vedder, Anton, and Laurens Naudts. 2017. ‘Accountability for the Use of Algorithms in a Big Data Environment’. International Review of Law, Computers & Technology 31 (2): 206–224. https://doi.org/10.1080/13600869.2017.1298547.
Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. 2017. ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’. International Data Privacy Law 7 (2): 76–99. https://doi.org/10.1093/idpl/ipx005.
Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2018. ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’. Harvard Journal of Law & Technology, November. https://doi.org/10.2139/ssrn.3063289.
Zuboff, Shoshana. 2015. ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization’. Journal of Information Technology 30 (1): 75–89. https://doi.org/10.1057/jit.2015.5.
———. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.
Zuiderveen Borgesius, Frederik J., Judith Möller, Sanne Kruikemeier, Ronan Ó Fathaigh, Kristina Irion, Tom Dobber, Balazs Bodo, and Claes De Vreese. 2018. ‘Online Political Microtargeting: Promises and Threats for Democracy’. Utrecht Law Review 14 (1): 82. https://doi.org/10.18352/ulr.420.
Zwitter, Andrej. 2014. ‘Big Data Ethics’. Big Data & Society 1 (2). https://doi.org/10.1177/2053951714559253.


Researchers

Supervisor(s)

Postdoctoral staff member(s)