Content moderation and responsible governance of digital freedoms

Post-doctorate of Valentine Crosset

Within the framework of axis 4, Governance of innovation and responsible technologies, the Good in Tech Chair is initiating a research-experimentation project on content moderation in the age of digital platforms.

In many ways, we are living through a period of turbulent structural transformation of the digital space. In recent years, social media platforms have been at the heart of a major debate on content moderation. Many concerns have been raised about the dissemination of hate speech, jihadist propaganda and false information on digital platforms. In this context, regulating the web giants has often been presented as an essential response to guarantee democratic balance and geopolitical stability within our societies. Under pressure from public authorities and civil society, platforms have been forced to expand their rules and policies, enlarge their moderation teams, outsource moderation operations and deploy new algorithmic detection systems based on Artificial Intelligence (AI). Despite this strengthening of moderation, public debate remains deeply divided between those who criticize the platforms for their inaction and passivity and those who see their moderation as a genuine attack on freedom of expression.

This turning point in the governance of freedom of expression invites us to question the status of “private” regulation of digital content. It necessarily raises the question: “What internet do we want?” How can we ensure responsible governance of the major digital players that reconciles freedom of expression with protection of the public? Considering the public problem of digital freedoms as a collective enterprise of knowledge production, this research project proposes an analytical framework for examining these questions from the point of view of regulators and public actors, but also of Internet users. Empirical knowledge of public perceptions of content and platform regulation is still anecdotal and requires further analysis, given the current debate and the proliferation of laws on content moderation.

Rather than limiting critical thinking to the question of who must regulate digital content and platforms, and how, the project seeks to explore governance “in the making”: its boundaries and complexities, its controversies and debates, its normative practices and expectations, taking into account the experience and point of view of users. Through this line of questioning, the project will seek to identify and explore possible forms of democratic regulation of content on the Internet, so that it remains a space for debate, engagement and freedom.

The research will take place in three phases:

1. Mapping of moderation policies

A first level of the project will consist of mapping and synthesizing the various policies for content moderation and platform regulation. Particular attention will be paid to the various procedures implemented depending on the nature of the content published (terrorism, nudity, hate speech, disinformation). This is an essential preliminary step in understanding the current landscape of content moderation and platform regulation. The aim is to explore the institutional, political and legal structures underlying online content and to subject them to critical analysis.

2. Digital user survey

We will first investigate user reports, focusing on routine reporting activities by building a directory of problematic content expressed and reported by Internet users. This first part of the survey will allow us to observe how Internet users set the limits of what they want to see or avoid on social networks. Second, we will explore the complaints made by users about moderation operations carried out by digital platforms. Our database will consist a priori of content published on social networks. It will allow us to map the state of social criticism on issues of freedom of expression and moderation, and to treat it as a starting point for the politicization of the problem.
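To make the idea of a directory of reported content concrete, here is a minimal sketch of what one record of such a database might look like. The field names and the grouping helper are purely illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record format for the directory of problematic content
# reported by Internet users; every field name is an assumption.
@dataclass
class UserReport:
    platform: str            # e.g. "Twitter", "Facebook"
    content_type: str        # e.g. "hate speech", "disinformation"
    report_date: date
    user_complaint: str      # the Internet user's own wording
    moderated: bool = False  # whether the platform acted on the report

def by_content_type(reports):
    """Group reports by reported category, to observe which limits
    Internet users draw most often on social networks."""
    groups = {}
    for r in reports:
        groups.setdefault(r.content_type, []).append(r)
    return groups
```

Grouping by reported category would be a natural first step toward the "state of social criticism" the survey aims to map.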

3. Collective experimentation with moderation

The third phase will consist of an exploratory study based on interviews and participatory workshops, conducted with five target audiences: Internet user communities identified through our online survey; associations for the defense of freedoms; administrative authorities and public bodies; digital players; and specialized lawyers and magistrates. Participants will be encouraged to experiment with problematic content, to debate cases that have been moderated by the platforms and to imagine moderation mechanisms. These experiments will have two main aims: on the one hand, they will feed the digital survey by collecting the normative values and expectations of the audiences concerned; on the other, they will make it possible to co-construct responsible governance of digital freedoms.


Valentine Crosset is a postdoctoral researcher at médialab and funded by the Good In Tech Chair since November 2020. Her current research deals with the moderation of content on digital platforms, with a particular interest in controversies and the normative expectations of Internet users.

Valentine Crosset holds a doctorate in criminology from the University of Montreal. Her thesis focused on the online visibility of radical groups, at the intersection of STS, the sociology of activism and the sociology of visibility. During her doctoral studies, she worked on various digital projects. She worked in particular within the International Center for Comparative Criminology on a research project on the far right and digital technology (2014-2019), as well as on the project "Empowerment of judicial actors through cyberjustice" (AJC) set up by the Cyberjustice Laboratory of the University of Montreal (2019-2020). Finally, she worked within the Montreal Declaration for a Responsible Development of Artificial Intelligence (2018-2019). She recently published articles in the journals Critique internationale and New Media & Society.

Impact of responsible technologies in artificial intelligence
PhD of Jean-Marie John-Mathews

Within the framework of axis 2, Development of responsible technologies, the Good in Tech Chair is initiating a research project aimed at the development of responsible technologies in artificial intelligence.

The global development of digital uses and services over more than twenty years has led to a massive production of digital data by individuals, whether on websites, blogs, forums and social networks, or via connected objects carried on the person (smartphones, wearables), at home (smart home) or in the city (smart city). In this context, the collection and use of these data have become major challenges for researchers, companies and States alike. Algorithms and supervised or unsupervised learning methods, through machine learning and deep learning, are today embedded in many digital services, whether search engines, social networks, recommendation systems, online advertising, chatbots or robots. Algorithms are thus increasingly present as intermediaries in interactions between companies and consumers, and between individuals themselves. More generally, they also shape our understanding of the economic, political and social environment. This phenomenon is a source of concern and debate, which in recent years has given rise to several government reports and missions and, in several countries, to an interdisciplinary research stream on the need to develop ethical algorithms by design.

The primary goal of this thesis is to clarify the concepts surrounding the ethics of algorithms by placing them in their context of use and of technical development. The second is to assess the impact of ethics-by-design methods for algorithms on companies. Issues concerning technical tools for the certification of ethical criteria, still at the research stage, will also be explored. This impact assessment invites us to think of a governance model for companies, as well as a regulatory model for public authorities, in order to respond to the various ethical issues.


Jean-Marie John-Mathews is a data scientist and doctoral student at IMTBS in algorithmic ethics on the impacts of so-called “ethical by design” algorithms in artificial intelligence. He is also a lecturer at Sciences Po in quantitative methods for the social sciences and at PSL University in mathematics and probability. In the past, he worked as a data scientist in consulting and then industry after training in mathematics, economics and philosophy.

Impact of Artificial Intelligence on Society
PhD of Ahmad Haidar

In recent years, there has been an acceleration in the development of new technologies that can learn by themselves and provide ever more relevant results, i.e., Artificial Intelligence (AI). With this increasing competence, machine learning is increasingly employed in real-world socio-technical contexts of high consequence. It is no secret that people and machines are becoming partners in many aspects of life, from Google Maps and Siri to self-driving cars. Some cannot even imagine life without these technologies; businesses read this mentality as an increase in society's demand for innovative technologies, and open new product lines to maximize their profit. For this reason, we will see growth in investment vehicles focused on ethical tech, such as electric cars, as well as business models adapted to be highly compliant with the GDPR.

This AI era could help solve significant problems in medicine and healthcare, energy usage, transportation, and other domains. Yet this advancement also carries harmful consequences for society when intelligent algorithms are used to manipulate the voting system of a presidential election, or give rise to discrimination in fields such as hiring, as well as data abuse, cybersecurity threats, privacy violations, job losses, etc.
This intersection of machine learning with society has fueled this dissertation to raise questions and concerns about the risks and advantages of machine learning. Only a small segment of research effort is devoted to measuring this relationship in all its aspects. This dissertation will therefore study the various aspects of responsible machine learning that are most engaged with society by creating an econometric model. Its measures are drawn from four major declarations and regulations addressing the consequences of these technological advances: the Montreal Declaration for a Responsible Development of Artificial Intelligence (2018); the EU High-Level Expert Group on Artificial Intelligence (2018); the OECD Council Recommendation on Artificial Intelligence (2019); and the UNESCO Recommendation on the Ethics of Artificial Intelligence, Open Dialogue on AI Ethics (ODAI) (2020). These guidelines and recommendations aim to promote and foster trustworthy AI. The framework of Responsible AI has five main principles, each with several dimensions. First, Respect for Autonomy includes human agency and solidarity. Second, Prevention of Harm includes privacy and data governance, societal and environmental well-being, robustness, and prudence. Third, Fairness covers accountability, equity, diversity and inclusion, and environmental well-being. Fourth, Explicability also involves accountability, besides transparency, interoperability, and data governance. The last principle is Democratic Participation. These principles, together with the actions attached to them, will be treated as input data to measure their influence on society, on individuals and on tech companies' well-being.
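The five-principle framework above can be pictured as a simple data structure before it becomes model input. The sketch below is an illustrative assumption, not the dissertation's actual encoding: the dimension scores, the 0-1 scale and the averaging helper are all hypothetical.

```python
# The five Responsible AI principles and their dimensions, as listed
# in the text (Democratic Participation has no dimensions listed).
RESPONSIBLE_AI_FRAMEWORK = {
    "Respect for Autonomy": ["human agency", "solidarity"],
    "Prevention of Harm": ["privacy and data governance",
                           "societal and environmental well-being",
                           "robustness", "prudence"],
    "Fairness": ["accountability", "equity",
                 "diversity and inclusion", "environmental well-being"],
    "Explicability": ["accountability", "transparency",
                      "interoperability", "data governance"],
    "Democratic Participation": [],
}

def score_organization(assessments):
    """Average hypothetical 0-1 dimension scores into one score per
    principle; dimensions not assessed default to 0.0."""
    scores = {}
    for principle, dims in RESPONSIBLE_AI_FRAMEWORK.items():
        if not dims:  # principle scored directly when it has no dimensions
            scores[principle] = assessments.get(principle, 0.0)
        else:
            scores[principle] = sum(assessments.get(d, 0.0) for d in dims) / len(dims)
    return scores
```

Per-principle scores of this kind are one plausible shape for the "input data" the econometric model would consume.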

The research will take place in three phases:

1. Literature review and mapping of AI regulations
The first level of the dissertation will consist of a broad review of previous studies, both theoretical and practical, on the impact of artificial intelligence on society. A literature review will describe the negative effects of machine learning and the reasons behind the regulations created for these “superpower” algorithms. Particular attention will be paid to the various international frameworks: the Montreal Declaration, the EU High-Level Expert Group on Artificial Intelligence, the OECD Legal Instruments on Artificial Intelligence, and the UNESCO Recommendation on the Ethics of Artificial Intelligence, Open Dialogue on AI Ethics (ODAI). This is an essential preliminary step in understanding the current landscape of the net societal contribution of AI, and in constructing the econometric model.

2. Methodology and data modeling
The primary objective of this dissertation is to better understand the impact of AI on society. In contrast to quantitative research, which focuses on measurement and validation, qualitative research helps address exploratory questions by allowing researchers to ask “why” and “how”. Qualitative research is therefore appropriate for the objectives of this study, which employs an approach based on Grounded Theory involving semi-structured and digital interviews with experts from the AI field, in both industry and academia. As such, we will work with primary and recent data to create our model.

3. Experimentation with the model
The third aspect is to run several experiments with the model in various startups interested in responsible AI investment, to draw conclusions from the main results and examine how they serve the dissertation's objective, and to motivate tech-sector businesses to pursue dual goals: maximizing profit while remaining socially responsible.


Ahmad is a researcher at the LITEM laboratory at Paris Saclay University, funded by the Good In Tech Chair since November 2020. His research concerns the net societal contribution of Artificial Intelligence algorithms through an econometric model, supervised by Prof. Christine Balague. Ahmad holds an MS degree in Economics & Business Management from the Lebanese University. His master's thesis, “The Impact of Digital Transformation and Artificial Intelligence on Economic Growth,” covered nine countries over the period 2000-2017. He also worked as a research assistant designing a database and preparing data for an American University of Beirut project, and as a trainee at the Central Bank of Lebanon and other Alpha banks. During his academic and professional work, Ahmad realized the power of data and its impact on well-being; his focus has now been enlarged to cover the significant power of algorithms.

The DPO profession
Post-doctorate of Alexis Vouillon

Within the framework of axis 4, Governance of innovation and responsible technologies, the Good in Tech Chair is initiating a research-experimentation project on the DPO profession.

The DPO function was established by the General Data Protection Regulation (GDPR), adopted in April 2016 and in force since May 2018. Designating a DPO is mandatory for private companies and public bodies that compile files and process personal data. Entrusted with numerous tasks tied to the core mission of ensuring compliance with data protection rules, the DPO is unquestionably the face of personal data regulation within organizations.
Who are the DPOs? What are their trajectories? How do they build their skills, their expertise and their professional ethos? Under what conditions do they carry out their missions within organizations? How, concretely, do they perform these missions on a daily basis? Are their skills suited to these missions, and should they evolve? How and by whom are they evaluated? To what extent are they likely to influence and shape the way the personal data economy is structured? The questions raised by the emergence of the DPO figure are numerous.
The perspective favored in this research is that of the sociology of work and professional groups (Demazière and Gadéa, 2009), together with Human Resource Management approaches to skills and skills development. It invites exploration of the formation of the DPO group from the angle of professionalization: construction of expertise and codification of skills, the role of training programs and professional associations, sociability, ethical norms, autonomy and legitimacy, legal definition of the status and its negotiation within organizations, etc. A 2019 survey of 1,265 DPOs conducted by the Délégation Générale à l'Emploi et à la Formation Professionnelle reveals the diversity of profiles as measured by their original field of expertise: 31% are lawyers, 34% IT specialists, and 34% come from other fields. Dispersed and potentially isolated in their organizations, relying on very heterogeneous expertise, how do DPOs build their professional identity?
This sociology-of-professions perspective can be complemented by, and articulated with, two other sociological approaches. First, from the sociology of law, the emergence of new “legal intermediaries” endowed with expanded prerogatives and resources invites further analysis of the “endogeneity” of law and economic activities (Edelman and Suchman, 1997; Bessy, Delpeuch and Pélisse, 2011). Given the ambiguity of legal rules, their application is necessarily guided by the interpretation given by actors, in particular those occupying a special position at the interface of the economy, technology and law. Lenglet (2012), for example, studied compliance officers, whose role is to authorize or prohibit certain transactions by operators in financial markets. Are DPOs establishing themselves within organizations as compliance officers for data management, or even for IT? Second, from the perspective of the economic sociology of digital technology, DPOs are responsible for ensuring, within their organization, the protection of personal data, and thus for framing the enrichment and valuation activities to which such data may be subject. “Data” are thereby constituted as economic assets endowed with certain properties: lifespan, capacity to circulate within and outside organizations, capacity to be combined (Beauvisage and Mellet, 2020). What role do DPOs play in these operations? To what extent do their decisions, their tools and the control they exercise contribute to the economic valuation of personal data?
These questions call for empirical surveys, qualitative and/or quantitative. Entry into the field will take place through training programs (field access facilitated by partner programs: the specialized Master “Data Protection Management” at IMT Business School and the “Data Protection Officer” certificate at Sciences Po Paris), professional associations and professional networking platforms. The survey may focus on a particular domain or remain open to a wide range of sectors and types of organizations. It will result in a research report, scientific publications, and exchanges and dissemination activities, in particular with the associated training programs.
References

Beauvisage, T., & Mellet, K. (2020). Datassets: Assetizing and marketizing personal data. In Birch, K., & Muniesa, F. (eds.), Assetization: Turning Things into Assets in Technoscientific Capitalism. MIT Press.
Bessy, C., Delpeuch, T., & Pélisse, J. (2011). Droit et régulations des activités économiques : perspectives sociologiques et institutionnalistes. LGDJ.
Demazière, D., & Gadéa, C. (eds.) (2009). Sociologie des groupes professionnels. Acquis récents et nouveaux défis. La Découverte, Paris.
Edelman, L. B., & Suchman, M. C. (1997). The legal environments of organizations. Annual Review of Sociology, 23(1), 479-515.
Lenglet, M. (2012). Ambivalence and ambiguity: The interpretive role of compliance officers. In Finance: The Discreet Regulator. Palgrave Macmillan, London.