PhDs and post-doctorates

Post-doctorate of Valentine Crosset (2020-2022)

Valentine Crosset

Valentine Crosset is a postdoctoral researcher at the médialab and has been funded by the Good In Tech Chair since November 2020. Her current research deals with content moderation on digital platforms, with a particular focus on controversies and the normative expectations of Internet users.
Valentine Crosset holds a doctorate in criminology from the University of Montreal. Her thesis focused on the online visibility of radical groups, at the crossroads of STS, the sociology of activism and the sociology of visibility. During her doctoral studies, she worked on various digital projects: at the International Center for Comparative Criminology, on a research project on the far right and the digital world (2014-2019); on the project "Empowerment of judicial actors through cyberjustice" (AJC) led by the Cyberjustice Laboratory of the University of Montreal (2019-2020); and within the Montreal Declaration for the Responsible Development of Artificial Intelligence (2018-2019). She has recently published articles in Critique Internationale and New Media & Society.

Research work Axis 1: Data, algorithms and society


Content moderation and responsible governance of digital freedoms


In many ways, we are living through a period of turbulent structural transformation of the digital space. In recent years, social media platforms have been at the heart of an important debate on content moderation. Many concerns have been raised about the spread of hate speech, jihadist propaganda and false information on digital platforms. In this context, the regulation of the web giants has often been presented as an essential response to guarantee democratic balance and geopolitical stability within our societies. Faced with pressure from governments and civil society, platforms have been forced to expand their rules and policies, enlarge their moderation teams, outsource moderation operations and add new algorithmic detection technologies based on Artificial Intelligence (AI). Despite this strengthening of moderation, public debate remains deeply divided between those who criticize the platforms for their inaction and passivity and those who see moderation as a genuine attack on freedom of expression.
This turning point in the governance of freedom of expression invites us to question the status of the "private" regulation of digital content. It necessarily raises the question: what internet do we want? How can we ensure a responsible governance of the major digital players that reconciles freedom of expression with the protection of the public? Considering the public problem of digital freedoms as a collective enterprise of knowledge production, this research project proposes an analytical framework for examining these questions from the point of view of regulators and public actors, but also of internet users. Empirical knowledge of audience perceptions regarding the status of content and platform regulation is still anecdotal and requires further analysis, given the current debate and the proliferation of content moderation laws.

Rather than limiting critical reflection to who should regulate digital content and platforms and how, the project seeks to explore governance "in the making": its boundaries and complexities, its controversies and debates, its practices and normative expectations, taking into account users' experience and point of view. By pursuing this type of questioning, the project will seek to identify and explore possible forms of democratic regulation of content on the Internet, so that it remains a space for debate, engagement and freedom.

The research takes place in three phases:

1. Mapping moderation policies

A first level of the project will consist of mapping and summarizing the various content moderation and platform regulation policies. Particular attention will be paid to the different procedures implemented depending on the nature of the content published (terrorism, nudity, hate speech, disinformation). This is an essential preliminary step for understanding the current landscape of content moderation and platform regulation. The aim is to explore the institutional, political and legal structures underlying online content and to develop a critical analysis of them.
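To give an idea of how this mapping could be made systematic, the sketch below (in Python) shows one possible structure for cataloguing policies. All field names and the example entry are hypothetical illustrations, not an actual platform policy.

```python
from dataclasses import dataclass

@dataclass
class ModerationPolicy:
    """One platform rule, as recorded during the mapping phase."""
    platform: str           # e.g. "YouTube", "Facebook", "Twitter"
    content_category: str   # "terrorism", "nudity", "hate speech", "disinformation"
    procedure: str          # removal, labeling, downranking, account suspension...
    detection: str          # "user reports", "human review", "AI classifier", or a mix
    appeal_available: bool  # can the author contest the decision?
    source_url: str = ""    # link to the published policy text

# Hypothetical example entry; real entries would be transcribed from published policies.
catalogue = [
    ModerationPolicy(
        platform="YouTube",
        content_category="disinformation",
        procedure="removal or labeling depending on severity",
        detection="AI classifier followed by human review",
        appeal_available=True,
    ),
]

# A first comparative view: procedures by platform and content category.
for p in catalogue:
    print(f"{p.platform} / {p.content_category}: {p.procedure} (appeal: {p.appeal_available})")
```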

2. Digital user survey

We will first investigate user reports, focusing on routine reporting activities and building a directory of problematic content flagged by Internet users. This first part of the survey will allow us to observe how Internet users draw the limits of what they wish to see or avoid on social networks. Secondly, we will explore the complaints made by users about the moderation operations carried out by digital platforms. Our corpus will consist primarily of content published on social networks. It will allow us to map the state of social criticism on the issues of freedom of expression and moderation, and to treat it as a starting point for the politicization of the problem.
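As an illustration of how such a corpus could be organized for analysis, here is a minimal sketch; the record fields and example values are hypothetical placeholders, not actual collected data.

```python
from collections import Counter

# Hypothetical record format for one item in the reporting corpus.
corpus = [
    {
        "platform": "Twitter",
        "reason": "hate speech",            # category invoked by the reporting user
        "user_comment": "targets a minority group",
        "moderation_outcome": "no action",  # what the platform eventually did
    },
    # ... in practice, thousands of records collected online
]

# First descriptive question: which limits do users most often try to enforce?
print(Counter(r["reason"] for r in corpus).most_common())

# Second question: how often do reports and complaints end without action?
print(Counter(r["moderation_outcome"] for r in corpus))
```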

3. Collective experimentation with moderation

The third aspect of the project will be an exploratory investigation through interviews and participatory workshops. This exploratory survey will be conducted with five target audiences: Internet user communities identified through our online survey; associations for the defense of freedoms; administrative authorities and public bodies; digital players; and specialized lawyers and magistrates. Participants will be invited to experiment with problematic content, to discuss cases that have been moderated by the platforms and to imagine moderation devices. These experiments will have two main aims: on the one hand, to feed the digital survey by collecting the values and normative expectations of the publics concerned; on the other, to co-construct a responsible governance of digital freedoms.

PhD of Ahmad Haidar (in progress)

Ahmad Haidar

Ahmad is a researcher at the LITEM laboratory of the University of Paris Saclay, funded by the Good In Tech Chair since November 2020. His research focuses on the net societal contribution of Artificial Intelligence algorithms through an econometric model, supervised by Prof. Christine Balagué. Ahmad holds a master's degree in economics and business management from the Lebanese University; his master's thesis, "The impact of digital transformation and artificial intelligence on economic growth", covered nine countries over the period 2000-2017. He also worked as a research assistant, designing a database and preparing data for a project at the American University of Beirut, and interned at the Central Bank of Lebanon and other Alpha-group banks. Through his academic and professional work, Ahmad came to appreciate the power of data and its impact on well-being; his current focus extends to the power of algorithms.



Research work Axis 1: Data, algorithms and society, and Axis 2: Digital Corporate Responsibility

Impact of Artificial Intelligence on Society

 
Recent years have seen an acceleration in the development of technologies that can learn by themselves and deliver increasingly relevant results: Artificial Intelligence (AI). With this growing competence, machine learning is increasingly employed in high-stakes, real-world socio-technical contexts. It is no secret that people and machines are becoming partners in many aspects of life, from Google Maps and Siri to self-driving cars. Some people can no longer imagine life without these technologies; businesses read this mentality as growing societal demand for innovative technologies, and thus as new lines of business to maximize their profit. For this reason, we can expect an increase in investment vehicles focused on ethical tech, such as electric cars, alongside business models designed to be fully compliant with the GDPR.

This AI era could help solve significant problems in medicine and healthcare, energy usage, transportation and other domains. Yet this advancement also brings harmful consequences for society when intelligent algorithms are misused: manipulation of presidential elections, discrimination in areas such as hiring, data abuse, cybersecurity threats, privacy violations, job losses, and so on.
This intersection of machine learning with society has led this dissertation to raise questions and concerns about the risks and advantages of machine learning. Only a small segment of research effort is devoted to measuring this relationship in all its aspects. Hence, this dissertation will study the various dimensions of responsible machine learning that are most engaged with society by creating an econometric model. The measures are drawn from four major declarations and regulations that address the consequences of these technological advances: the Montreal Declaration for a Responsible Development of Artificial Intelligence (2018); the EU High-Level Expert Group on Artificial Intelligence (2018); the OECD Council Recommendation on Artificial Intelligence (2019); and the UNESCO Recommendation on the Ethics of Artificial Intelligence, Open Dialogue on AI Ethics (ODAI) (2020). These guidelines and recommendations aim to promote and foster trustworthy AI. The resulting framework of Responsible AI has five main principles, each with several dimensions:

1. Respect for Autonomy: human agency and solidarity.
2. Prevention of Harm: privacy and data governance, societal and environmental well-being, robustness, and prudence.
3. Fairness: accountability, equity, diversity and inclusion, and environmental well-being.
4. Explicability: accountability, transparency, interoperability, and data governance.
5. Democratic Participation.

These principles, attached to their associated actions, will be treated as input data to measure their influence on society, on individuals and on tech companies' well-being.
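As a first step toward operationalizing these principles as model inputs, the framework can be written down as a simple mapping from principles to dimensions. The sketch below (in Python) illustrates this; the 0-to-1 scoring scheme is purely hypothetical and is not part of the declarations themselves.

```python
# The five principles and their dimensions, as listed above.
RESPONSIBLE_AI_FRAMEWORK = {
    "Respect for Autonomy": ["human agency", "solidarity"],
    "Prevention of Harm": ["privacy and data governance",
                           "societal and environmental well-being",
                           "robustness", "prudence"],
    "Fairness": ["accountability", "equity",
                 "diversity and inclusion", "environmental well-being"],
    "Explicability": ["accountability", "transparency",
                      "interoperability", "data governance"],
    # No sub-dimensions are listed in the source; treated as its own dimension.
    "Democratic Participation": ["democratic participation"],
}

def principle_score(dimension_scores, principle):
    """Average the (hypothetical) 0-1 scores of a principle's dimensions."""
    dims = RESPONSIBLE_AI_FRAMEWORK[principle]
    return sum(dimension_scores.get(d, 0.0) for d in dims) / len(dims)

# Hypothetical dimension scores for one company, e.g. from a coded audit.
scores = {"human agency": 0.8, "solidarity": 0.6}
print(principle_score(scores, "Respect for Autonomy"))  # -> 0.7
```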

The research will take place in three phases:

1. Mapping of AI regulations and guidelines

The first level of the dissertation will consist of a comprehensive overview of previous studies, theoretical and practical, on the impact of artificial intelligence on society. The literature review will describe the negative effects of machine learning and the reasons behind the regulations devised for these "superpower" algorithms. Particular attention will be paid to the various international frameworks: the Montreal Declaration, the EU High-Level Expert Group on Artificial Intelligence, the OECD Legal Instruments on Artificial Intelligence, and the UNESCO Recommendation on the Ethics of Artificial Intelligence, Open Dialogue on AI Ethics (ODAI). This is an essential preliminary step in understanding the current landscape of the net societal contribution of AI, with a view to constructing an econometric model.

2. Methodology and data modeling

The primary objective of this dissertation is to better understand the impact of AI on society. In contrast to quantitative research, which focuses on measurement and validation, qualitative research helps address exploratory questions by allowing researchers to ask "why" and "how". Qualitative research is therefore appropriate for the objectives of this study. The study employs a qualitative approach based on Grounded Theory, involving semi-structured and digital interviews with AI experts from industry and academia. As such, we will work with primary and recent data to build our model.
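By way of illustration only, a country-year panel specification of the kind this phase is meant to produce might look as follows; the variable names and the toy data are hypothetical placeholders, not results of the dissertation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per country and year, with an aggregate
# responsible-AI index built from principle scores as sketched above.
df = pd.DataFrame({
    "country":        ["A"] * 3 + ["B"] * 3 + ["C"] * 3,
    "year":           [2015, 2016, 2017] * 3,
    "wellbeing":      [0.62, 0.64, 0.65, 0.55, 0.56, 0.58, 0.70, 0.72, 0.73],
    "responsible_ai": [0.40, 0.44, 0.48, 0.30, 0.32, 0.33, 0.52, 0.56, 0.60],
})

# Two-way fixed effects: country and year dummies absorb stable country
# traits and common shocks; the coefficient on responsible_ai is the
# association of interest.
model = smf.ols("wellbeing ~ responsible_ai + C(country) + C(year)", data=df).fit()
print(model.params)
```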

3. Experimentation in firms

The third phase will consist of running the model experimentally in various startups interested in responsible AI investment, drawing out the main results and examining how they serve the dissertation's objective. The broader aim is to motivate tech businesses to pursue dual goals: maximizing profit while remaining socially responsible.

Post-doctorate of Alexis Vouillon (2021-2022)

Alexis Vouillon



Research work

The function of Data Protection Officer (DPO) is established by the General Data Protection Regulation (GDPR), adopted in April 2016 and in force since May 2018. The appointment of a DPO is mandatory in private companies and in public bodies required to create files and process personal data. Responsible for many tasks connected with the core mission of ensuring compliance with data protection rules, the DPO is undoubtedly the face of personal data regulation within organizations.
 
Who are the DPOs? What are their trajectories? How do they build their skills, expertise and professional ethos? Under what conditions do they carry out their missions within organizations? How, concretely, do they carry out these missions on a daily basis? Are their skills suited to these missions, and should they evolve? How and by whom are they assessed? To what extent are they likely to influence and shape the way the economy of personal data is structured? The questions raised by the emergence of the figure of the DPO are numerous.
 
The preferred perspective in this research is that of the sociology of work and professional groups (Demazière and Gadéa, 2009), together with Human Resources Management approaches to skills and skills development. It invites us to explore the formation of the DPO group from the angle of professionalization: construction of expertise and codification of skills, role of training and professional associations, sociability, ethical standards, autonomy and legitimacy, legal definition of the status and its negotiation within organizations, and so on. A survey conducted in 2019 by the General Delegation for Employment and Vocational Training among 1,265 DPOs reveals the diversity of profiles as measured by original area of expertise: 31% are lawyers, 34% IT specialists, and 34% come from other fields. Dispersed and potentially isolated in their organizations, backed by very heterogeneous expertise, how do DPOs construct their professional identity?
 
This perspective from the sociology of professions can be supplemented and articulated with two other sociological approaches. First, on the side of the sociology of law, the emergence of new "legal intermediaries" endowed with expanded prerogatives and means invites us to extend analyses of the "endogeneity" of law and economic activities (Edelman and Suchman, 1997; Bessy, Delpeuch and Pélisse, 2011). Given the ambiguity of legal rules, their application is necessarily guided by the interpretation given to them by actors, in particular those occupying a distinctive position at the interface of the economy, technology and law. Lenglet (2012), for example, studied financial-market compliance officers, whose role is to authorize or prohibit certain transactions by market operators. Do DPOs establish themselves in organizations as compliance officers for data management, or even for IT ethics? Second, from the perspective of digital economic sociology, DPOs are responsible for ensuring, within their organization, the protection of personal data, and therefore for supervising the enrichment and valorization activities to which such data are likely to be subject. "Data" is thus constituted as an economic asset endowed with certain properties: lifespan, ability to circulate within and outside organizations, ability to be combined (Beauvisage and Mellet, 2020). What role do DPOs play in these operations? To what extent do their decisions, their tools and the control they exercise contribute to the economic valuation of personal data?
 
These questions call for empirical surveys, qualitative and/or quantitative. Entry into the field will be through training programs (with facilitated access via the partner programs: the specialized Master's "Data Protection Management" at IMT Business School and the "Data Protection Officer" certificate at Sciences Po Paris), professional associations and professional networking platforms. The survey may focus on a particular sector or be open to a wide range of sectors and types of organizations. It will result in a research report, scientific publications, and exchanges and dissemination, in particular with the partner programs.
 
Bibliographic references

Beauvisage, T., & Mellet, K. (2020). Datassets: Assetizing and marketizing personal data. In Birch, K., & Muniesa, F. (eds.), Assetization: Turning Things into Assets in Technoscientific Capitalism. MIT Press.
Bessy, C., Delpeuch, T., & Pélisse, J. (2011). Droit et régulations des activités économiques : perspectives sociologiques et institutionnalistes. LGDJ.
Demazière, D., & Gadéa, C. (eds.) (2009). Sociologie des groupes professionnels. Acquis récents et nouveaux défis. La Découverte.
Edelman, L. B., & Suchman, M. C. (1997). The legal environments of organizations. Annual Review of Sociology, 23(1), 479-515.
Lenglet, M. (2012). Ambivalence and ambiguity: The interpretive role of compliance officers. In Finance: The Discreet Regulator. Palgrave Macmillan.

Doctorate of Jean-Marie John-Mathews (2019-2021)

Jean-Marie John-Mathews

Jean-Marie John-Mathews is a data scientist and doctoral student at IMT-BS working on algorithmic ethics, specifically the impacts of so-called "ethical by design" algorithms in artificial intelligence. He also teaches quantitative methods for the social sciences at Sciences Po and mathematics and probability at PSL University. He previously worked as a data scientist in consulting and then in industry, after training in mathematics, economics and philosophy.


The global development of digital uses and services over more than twenty years has led to a massive production of digital data by individuals, whether on websites, blogs, forums and social networks, or via connected objects worn on the body (smartphones, wearables), at home (smart home) or in the city (smart city). In this context, the collection and use of these data have become major challenges for researchers, companies and states alike. Algorithms and supervised or unsupervised learning methods, through machine learning and deep learning, are now present in many digital services, whether search engines, social networks, recommendation systems, online advertising, chatbots or robots. Algorithms are thus increasingly present intermediaries in the interactions between companies and consumers, and between individuals themselves. More generally, they also mediate our understanding of the economic, political and social environment. This phenomenon is a source of concern and debate, which in recent years has given rise to several government reports and missions and to the development, in several countries, of an interdisciplinary research stream on the need to develop algorithms that are ethical by design.

The first goal of this thesis is to clarify the concepts surrounding the ethics of algorithms by placing them in their context of use and of technical development. The second is to evaluate the impact on society of "ethical by design" methods for algorithms. Questions concerning technical tools for certifying ethical criteria, still at the research stage, will also be explored. This impact assessment leads us to think about a governance model for companies, as well as a regulatory model for public authorities, in order to respond to the various ethical issues.


This thesis was defended in December 2021, and Jean-Marie John-Mathews received two thesis prizes in Management Sciences in 2022: the ANDESE Prize and the Chancellery Prize.


Post-doctorate of Zeling Zhong (2020-2021)

Zeling Zhong

Zeling Zhong completed a thesis at Institut Mines-Télécom Business School (LITEM laboratory, University of Paris Saclay) on the appropriation of connected objects. She carried out her post-doctorate with the Good In Tech Chair.


Much research has been published in recent years on the negative impacts of algorithmic systems on individuals and society: Epstein (2015) showed the effects of search engine manipulation on voting intentions; Bakshy et al. (2015) revealed in the journal Science the phenomenon of filter bubbles on Facebook and the limits they place on access to a diversity of opinion; Lambrecht and Tucker (2019) highlighted gender discrimination effects in the algorithmic delivery of job ads; Obermeyer et al. (2019), in Science, showed discrimination against Black populations in health algorithms widely used in the United States for access to care. Several books have also highlighted the growing inequalities and threats to democracy arising from the development of algorithmic systems (Eubanks, 2017; O'Neil, 2016). In the same vein, a number of initiatives and reports have been published (Montreal Declaration, 2018; European Commission HLEG report, 2019), defending the need to develop artificial intelligence and algorithmic systems that respect certain principles (justice, autonomy, beneficence and non-maleficence), generate trust among users, and are non-discriminatory. Our research project is positioned within this international current of thought on the ethics and responsibility of technologies toward individuals and society.
Although the recent literature is rich in publications on the ethics of algorithms, these are concentrated in computer science, sociology, philosophy and law. Computer science publications often adopt a "responsible by design" vision of technology development, which assumes that the solution lies in code and programming, chiefly by making algorithms more explainable, accountable, non-discriminatory and unbiased. This gave rise to several streams of research on the concepts of "fairness, accountability, transparency" and to the creation of the interdisciplinary FAT conference (Fairness, Accountability, Transparency). By contrast, the study in the human and social sciences of how individuals perceive artificial intelligence algorithms has received little attention, as has the process of appropriation of algorithmic systems, a subject little treated in the literature.
Our research therefore aims to dig deeper into this subject. We will seek to better understand and model the appropriation of algorithmic systems by individuals, in particular by mobilizing a number of concepts from the literature. Several publications focus on the perception of algorithmic decisions, but the results diverge: some show that individuals prefer algorithmic decisions, others human decisions. Lee (2018), for example, demonstrates that the perception of algorithmic decisions depends on the type of task performed by the algorithm (mechanical vs. human tasks): for mechanical tasks, human and algorithmic decisions are perceived at the same level of justice and trust and evoke the same emotions; for human tasks, algorithmic decisions are perceived as less fair and less trustworthy, while evoking more negative emotions than human decisions. Logg, Minson and Moore (2019) show, on the contrary, that individuals prefer algorithms to human judgment, which they call the algorithm appreciation effect. Other authors have examined other variables. Ekstrand et al. (2014), for example, studied how users evaluate recommender systems, showing that user satisfaction predicts their final selection. Bucher (2017) highlights the concept of the algorithmic imaginary and the way individuals experience and think about algorithms.

Another stream relevant to our research project concerns the appropriation of technologies by users. This concept was modeled in the thesis of Zeling Zhong (2019), leader of this research project, who studied the appropriation of a specific technology (connected objects). We therefore aim to study the appropriation of algorithmic systems by their users, drawing both on the research of Zeling Zhong's thesis on appropriation and on previous work on individuals' perceptions of algorithms, in order to develop a quantitative model of appropriation. In particular, perceptions of opacity, potential bias or discriminatory effects, and breaches of privacy will be studied in order to underline the need to develop "responsible by design" algorithms.
The methodology will consist, in a first step, of developing a theoretical model of the appropriation of algorithmic systems on the basis of the literature, then of testing this model on real data from users of virtual voice assistants (such as Google Home or Alexa), whose services are based on artificial intelligence learning algorithms.
A quantitative questionnaire will be developed to measure each variable of the model; fieldwork will then be carried out with an access-panel company in order to reach a sufficient number of virtual voice assistant users and collect their answers to the questionnaire.
Finally, a PLS (Partial Least Squares) model will be used to validate the various hypotheses formulated in the theoretical model. A rough sketch of this estimation step is given below.
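The sketch uses scikit-learn's PLSRegression on simulated questionnaire data; the items, sample size and coefficients are all hypothetical. A full structural model with latent variables would typically be estimated with dedicated PLS path-modeling software rather than plain PLS regression, so this is only a stand-in for the general idea.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Simulated questionnaire: 200 respondents, six 7-point Likert items
# measuring perceived opacity, bias and privacy breach (two items each).
X = rng.integers(1, 8, size=(200, 6)).astype(float)

# Simulated outcome: an appropriation score; the negative weights encode the
# hypothesis that these negative perceptions hinder appropriation.
y = X @ np.array([-0.3, -0.2, -0.4, -0.1, -0.5, -0.2]) + rng.normal(0, 1, 200)

pls = PLSRegression(n_components=2)  # two latent components
pls.fit(X, y)
print("In-sample R^2:", round(pls.score(X, y), 3))
print("Item weights on component 1:", pls.x_weights_[:, 0].round(2))
```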