Content moderation and responsible governance of digital freedoms
Valentine Crosset, postdoctoral researcher
Within the framework of Axis 4, "Governance of innovation and responsible technologies", the Good in Tech Chair is initiating a research-experimentation project on content moderation in the age of digital platforms.
In many ways, the digital space is today undergoing a turbulent structural transformation. In recent years, social media platforms have been at the heart of a major debate on content moderation. Many concerns have been raised about the dissemination of hate speech, jihadist propaganda and false information on digital platforms. In this context, regulating the web giants has often been presented as an essential response to guarantee democratic balance and geopolitical stability within our societies. Under pressure from public authorities and civil society, platforms have been forced to expand their rules and policies, enlarge their moderation teams, outsource moderation operations, and deploy new algorithmic detection systems based on artificial intelligence (AI). Despite this strengthening of moderation, public debate remains deeply divided between those who criticize the platforms for their inaction and passivity and those who see their interventions as a genuine attack on freedom of expression.
This turning point in the governance of freedom of expression invites us to question the status of "private" regulation of digital content. It necessarily raises the question: what Internet do we want? How can we ensure responsible governance of the major digital players that reconciles freedom of expression with protection of the public? Considering the public problem of digital freedoms as a collective enterprise of knowledge production, this research project proposes an analytical framework for examining these questions from the point of view of regulators and public actors, but also of Internet users. Given the current debate and the proliferation of laws on content moderation, empirical knowledge of public perceptions of content and platform regulation is still anecdotal and requires further analysis.
Rather than limiting critical reflection to who should regulate digital content and platforms, and how, the project seeks to explore governance "in the making": its boundaries and complexities, its controversies and debates, its normative practices and expectations, taking into account the experience and point of view of users. Through this line of questioning, the project will seek to identify and explore possible forms of democratic regulation of content on the Internet, so that it remains a space for debate, engagement and freedom.
The research will take place in three phases:
1. Mapping of moderation policies
A first level of the project will consist of mapping and synthesizing the various policies for content moderation and platform regulation. Particular attention will be paid to the procedures implemented depending on the nature of the content published (terrorism, nudity, hate speech, disinformation). This is an essential preliminary step in understanding the current landscape of content moderation and platform regulation. The aim of this objective is to explore the institutional, political and legal structures underlying online content and to subject them to critical analysis.
2. Digital user survey
We will first investigate user reports, focusing on routine reporting activities by building a directory of problematic content flagged by Internet users. This first part of the survey will allow us to observe how Internet users draw the limits of what they want to see or avoid on social networks. Secondly, we will explore the complaints made by users about moderation operations carried out by digital platforms. Our database will consist a priori of content published on social networks. It will allow us to map the state of social criticism on the issues of freedom of expression and moderation, and to treat it as a starting point for a politicization of the problem.
3. Collective experimentation with moderation
The third aspect of the project will be an exploratory study based on interviews and participatory workshops, conducted with five target audiences: Internet user communities identified through our online survey; associations for the defense of civil liberties; administrative authorities and public bodies; digital players; and specialized lawyers and magistrates. Participants will be encouraged to experiment with problematic content, to debate cases that have been moderated by the platforms, and to imagine moderation mechanisms. These experiments will serve two main aims: on the one hand, to feed the digital survey by collecting the normative values and expectations of the audiences concerned; on the other, to make it possible to co-construct responsible governance of digital freedoms.
Valentine Crosset is a postdoctoral researcher at the médialab, funded by the Good In Tech Chair since November 2020. Her current research deals with content moderation on digital platforms, with a particular interest in controversies and the normative expectations of Internet users.
Valentine Crosset holds a doctorate in criminology from the University of Montreal. Her thesis focused on the online visibility of radical groups, at the intersection of STS, the sociology of activism and the sociology of visibility. During her doctoral studies, she worked on various digital projects. She worked in particular within the International Centre for Comparative Criminology on a research project on the far right and digital technology (2014-2019), and on the project "Empowerment of judicial actors through cyberjustice" (AJC) set up by the Cyberjustice Laboratory of the University of Montreal (2019-2020). Finally, she worked with the Montreal Declaration for a Responsible Development of Artificial Intelligence (2018-2019). She recently published articles in the journals Critique internationale and New Media & Society.
Impact of responsible technologies in artificial intelligence
PhD project of Jean-Marie John-Mathews
Within the framework of Axis 2, "Development of responsible technologies", the Good in Tech Chair is initiating a research project on the development of responsible technologies in artificial intelligence.
The global development of digital uses and services over more than twenty years has led to the massive production of digital data by individuals, whether on websites, blogs, forums and social networks, or via connected objects carried on the person (smartphones, wearables), at home (smart home) or in the city (smart city). In this context, the collection and use of these data have become major challenges for researchers, companies and states alike. Algorithms and supervised or unsupervised learning methods, through machine learning and deep learning, are today embedded in many digital services, whether search engines, social networks, recommendation systems, online advertising, chatbots or robots. Algorithms are thus increasingly present as intermediaries in interactions between companies and consumers, and between individuals themselves. More generally, they also mediate our understanding of the economic, political and social environment. This phenomenon is a source of concern and debate, which has given rise in recent years to several government reports and missions, and to the development in several countries of an interdisciplinary research stream on the need for ethical-by-design algorithms.
The primary goal of this thesis is to clarify the concepts around the ethics of algorithms by placing them in their context of use and of technical development. The second is to assess the impact on companies of "ethical by design" methods for designing algorithms. Questions around technical tools for certifying ethical criteria, still at the research stage, will also be explored. This impact assessment leads us to think about a governance model for companies, as well as a regulatory model for public authorities, in order to respond to the various ethical issues.
Jean-Marie John-Mathews is a data scientist and doctoral student at IMT-BS working on algorithmic ethics, in particular the impacts of so-called "ethical by design" algorithms in artificial intelligence. He is also a lecturer at Sciences Po in quantitative methods for the social sciences and at PSL University in mathematics and probability. He previously worked as a data scientist in consulting and then in industry, after training in mathematics, economics and philosophy.
Impact of Artificial Intelligence on Society
PhD project of Ahmad Haidar
Meanwhile, the development of new technologies that can learn by themselves and deliver ever more relevant results, i.e. artificial intelligence (AI), is accelerating. With this increasing competence, machine learning is being employed more and more in high-stakes, real-world socio-technical contexts. It is no secret that people and machines are becoming partners in many aspects of life, from Google Maps and Siri to self-driving cars. Some people cannot even imagine life without these technologies; businesses read this attitude as growing societal demand for innovative technologies and open new product lines to maximize their profit. For this reason, we will see an increase in investment vehicles focused on ethical tech, such as electric cars, as well as the adaptation of business models to be fully compliant with the GDPR.
This AI era could help solve significant problems in medicine and healthcare, energy usage, transportation, and other domains. Yet this advancement also carries harmful consequences for society when intelligent algorithms are misused: manipulation of presidential elections, discrimination in areas such as hiring, data abuse, cybersecurity threats, privacy violations, job losses, and so on.
This intersection of machine learning with society has led this dissertation to raise questions about the risks and advantages of machine learning. Only a small segment of research effort has been devoted to measuring this relationship in all its aspects. Hence, this dissertation will study the various aspects of responsible machine learning as it engages with society, by creating an econometric model. The measures are drawn from four major declarations and regulations addressing the consequences of these technological advances: the Montreal Declaration for a Responsible Development of Artificial Intelligence (2018); the EU High-Level Expert Group on Artificial Intelligence (2018); the OECD Recommendation of the Council on Artificial Intelligence (2019); and the UNESCO Recommendation on the Ethics of Artificial Intelligence, together with the Open Dialogue on AI Ethics (ODAI) (2020). These guidelines and recommendations aim to promote and foster trustworthy AI. The resulting framework of responsible AI has five main principles, each with several dimensions. First, respect for autonomy includes human agency and solidarity. Second, prevention of harm includes privacy and data governance, societal and environmental well-being, robustness, and prudence. Third, fairness covers accountability, equity, diversity and inclusion, and environmental well-being. Fourth, explicability also involves accountability, besides transparency, interpretability, and data governance. The last principle is democratic participation. These principles, together with their associated actions, will be treated as input data to measure their influence on society, individuals, and tech companies' well-being.
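As a purely illustrative sketch of what "principles as input data" for an econometric model could look like, the toy regression below uses synthetic data: the five principle scores, the outcome variable, and all coefficients are hypothetical and are not drawn from the dissertation's actual model or data.

```python
import numpy as np

# Hypothetical dataset: each row is a company, each column a score (0-1)
# for one of the five responsible-AI principles named in the text.
rng = np.random.default_rng(0)
n = 200
principles = ["autonomy", "harm_prevention", "fairness",
              "explicability", "democratic_participation"]
X = rng.uniform(0, 1, size=(n, len(principles)))

# Synthetic "societal well-being" outcome with made-up true coefficients.
true_beta = np.array([0.5, 1.2, 0.8, 0.6, 0.3])
y = 2.0 + X @ true_beta + rng.normal(0, 0.1, size=n)

# Ordinary least squares: prepend an intercept column, then solve.
X1 = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

for name, b in zip(["intercept"] + principles, beta_hat):
    print(f"{name}: {b:.2f}")
```

An actual econometric specification would of course depend on how each principle's dimensions are operationalized and on the data collected in the qualitative phase; this sketch only shows the general shape of regressing a well-being indicator on per-principle scores.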
The research will take place in three phases:
1. Mapping of AI regulations and literature review
The first level of the dissertation will consist of a broad overview of previous studies, both theoretical and practical, of the impact of artificial intelligence on society. A literature review will describe the negative effects of machine learning and the reasons behind the regulations devised for these "superpower" algorithms. Particular attention will be paid to the various international frameworks: the Montreal Declaration; the EU High-Level Expert Group on Artificial Intelligence; the OECD Legal Instruments on Artificial Intelligence; and the UNESCO Recommendation on the Ethics of Artificial Intelligence, together with the Open Dialogue on AI Ethics (ODAI). This is an essential preliminary step in understanding the current landscape of the net societal contribution of AI, as a basis for constructing an econometric model.
2. Methodology and data modeling
The primary objective of this dissertation is to better understand the impact of AI on society. In contrast to quantitative research, which focuses on measurement and validation, qualitative research helps address exploratory "why" and "how" questions, and is therefore appropriate for the objectives of this study. The study employs a qualitative approach based on grounded theory, involving semi-structured and digital interviews with AI experts from industries interested in this topic and from academia. As such, we will work with primary and recent data to build our model.
3. Experimentation and validation of the model
The third aspect is to carry out several experiments by running the model in various start-up firms interested in responsible AI investment, and to draw together the main results to see how they serve the dissertation's objective. The aim is to motivate tech-sector businesses to pursue dual goals: maximizing profit while remaining socially responsible.
Ahmad is a researcher at the LITEM laboratory at Paris-Saclay University, funded by the Good In Tech Chair since November 2020. His research, supervised by Prof. Christine Balagué, concerns the net societal contribution of artificial intelligence algorithms through an econometric model. Ahmad holds an MS degree in economics and business management from the Lebanese University. His master's thesis, "The Impact of Digital Transformation and Artificial Intelligence on Economic Growth", covered nine countries over the period 2000-2017. He also worked as a research assistant designing a database and preparing data for an American University of Beirut project, and as a trainee at the Central Bank of Lebanon and other Alpha banks. Through his academic and professional work, Ahmad realized the power of data and its impact on well-being. His current focus has broadened to cover the significant power of algorithms.