In 2020, Eurogroup Consulting published a study on organizational ethics in the digital age: towards responsible artificial intelligence. Two years on, the changing societal context and the evolution of technologies and uses continue to raise ethical questions about AI for organizations. Eurogroup Consulting has therefore enriched and updated its findings and recommendations, addressing questions such as:
- How can a regulatory framework be transformed into a competitive advantage for organizations?
- Why design ethical AI natively?
- How should AI ethics issues be integrated into organizations' strategies?
SUMMARY OF CHANGES OVER 2 YEARS
More mature organizations overall

Under pressure from user demands and increasing regulations, ethical issues are better known. Initiatives are emerging to certify algorithms.
The players involved in the design and development of algorithmic solutions are better trained and more aware of their role and of the issues at stake. Top management is more conscious of the opportunities and risks linked to ethical bias.
Communication is becoming more proactive: organizations recognize the need to reassure both users and regulators that ethical issues are being taken into account.
Nevertheless, the approach to AI ethics still seems to be guided by risk control, not value creation. Organizations still need to prove that their solutions respect a framework of ethical constraints, and demonstrate that self-regulation can be sufficient to limit the risks of drift.
Our recommendations are still valid
- Ethical issues are better known, but they are not always integrated as sustainable and essential elements of an organization's strategy with regard to artificial intelligence, or more generally to its digital or even business development. Governance bodies responsible for expressing an opinion on the subject remain rare, which limits its visibility as a strategic subject.
- The design of algorithmic solutions takes greater account of ethical needs. Nevertheless, this approach is rarely applied on an ongoing basis throughout the solution's lifecycle, especially once projects have been deployed.
- Finally, communication and mobilization of the organization's ecosystem are much more mature than in 2020, as shown by the sharp rise in publications on the subject and the many initiatives launched to promote ethics and to develop a framework for the ethical certification of algorithms. Nevertheless, this information still has to percolate through organizations to be translated into everyday practice.
REGULATION STILL LITTLE PERCEIVED AS A SOURCE OF VALUE
While AI regulation raises concerns among the designers and users of these solutions, it can also be a source of value. The example of the General Data Protection Regulation (GDPR) can be read from this angle.
The example of the GDPR: regulation as value creation
Value creation can be studied from a number of angles: compliance with regulatory constraints, inclusion in digital strategy, reliability of data processing, trust in public opinion, perception of value by the company, etc.
Following the entry into force of the European regulation, organizations began the compliance process, comprising three parts:
- Compliance with regulatory constraints,
- Inclusion in digital strategy,
- And reliable data processing.
Today, many organizations seem to have acculturated to the GDPR. Indeed, the changes made to achieve compliance are widely documented, and companies are making progress on the three aspects mentioned above. It is these three dimensions of transformation that create value for organizations.
Regulation cannot rely on the legal framework alone
The law protects individuals and society from certain behaviors deemed harmful. However, the law is limited in its ability to anticipate the impact of new digital technologies. Faced with the ethical risks of artificial intelligence, the law alone cannot ensure the protection of individuals. In addition to repressive legislation, organizations need to develop self-regulation in their use of AI. Their in-depth knowledge of their sector and of the possible biases of the forms of AI they use makes them a major asset.
In addition to the intrinsic value of ethics as such, organizations also have an interest in these issues, in order to:
- Create value through socially accepted technologies and increase user/customer confidence.
- Promote employee acceptance: internally, introducing AI within an ethical and transparent framework, designed to complement human work rather than supplant it, helps reassure employees and minimize the risk of resistance to change.

AN ETHICAL NEED REINFORCED BY GROWING DIGITAL DEPENDENCY

The health crisis has led to an increase in digital usage. As a result, the amount of data produced has increased considerably, and data processing algorithms have become more sophisticated. These developments have led to an increasingly personalized range of products and services, and to the emergence of new services.
These new uses lead to an increase in the amount of exploitable data, and consequently to new opportunities, but they also present new ethical risks. These risks must be taken into account, particularly in the context of discussions and actions concerning the growing challenges of digital sovereignty in France and Europe.
INSTITUTIONAL INITIATIVES TO THINK ABOUT ETHICAL AI AND REGULATE ITS USE
AI cannot take off without ethical reflection on its limits and safeguards. States and public institutions have seized on the subject to define this framework for use.
The European Commission's initiative
In April 2021, the European Commission submitted a draft European regulation on AI. Its purpose is to define a European vision of ethical AI. The Commission adopts a risk assessment and risk reduction approach.
The Commission has identified areas of application whose risks are deemed unacceptable, particularly with regard to security and individual rights, as well as high-risk areas such as energy, transport and justice, for which restrictions on the use of AI can be imposed.
For areas where risks are considered limited, a transparency obligation is recommended, particularly in terms of user information. First and foremost, the Commission's initiative reflects a growing awareness of the strategic and inescapable nature of artificial intelligence in the future development of societies and economies, and should therefore be encouraged. However, this development must not come at the expense of the Union's common values.
France's national AI strategy
In November 2021, France presented its national strategy for artificial intelligence. It aims to make France an industry leader and, to this end, lays the foundations for the long-term development of the AI ecosystem at every stage: R&D, applications, dissemination, support and deployment management. This development is conceived from the outset within a framework that can only be ethical. France is thus committed to "making efforts to build the responsible and trustworthy AI of tomorrow", a sine qua non condition for the appropriation of the technologies by users.
Four objectives have been identified for France's action in this area:

- Supporting the emergence of an international consensus on the benefits and risks of AI;
- Promoting the development of a model and principles based on trusted AI at national, European and international level;
- Launching concrete initiatives putting AI at the service of humanity;
- Facilitating the emergence of a consensus and common rules at European level.
More concretely, in December 2019 France set up a digital ethics steering committee within the Comité Consultatif National d'Éthique (French national ethics advisory board), tasked with taking a global view of the ethical issues surrounding digital technology, and artificial intelligence in particular. In the sensitive field of defense, a "Comité d'éthique de la défense" (defense ethics committee) has been set up at the French Ministry of the Armed Forces to lead reflection on issues relating to the development of weaponry. This committee has issued an opinion on lethal autonomous weapon systems (SALA in French).
These examples illustrate the growing awareness of public authorities and their desire to reconcile two issues:
- Encouraging the development of AI, essential for competitiveness and innovation in tomorrow's world;
- Guaranteeing that this development will not work against citizens and that AI will not threaten individual freedoms.
Public institutions must therefore find the right compromise between freeing up energy for innovation, and thus the competitiveness and dynamism of players in the AI ecosystem, and the political, social and legal acceptability of this boom. This approach distinguishes itself from other initiatives by identifying the areas most at risk and by opting for guidelines that can be applied sector by sector.
Aurélie Simard, doctoral student in management sciences, Université Paris-Saclay
"Responsible innovation", a relevant framework for organizations
Why "innovation"?
Consider AI from the perspective of one of its creators, Marvin Lee Minsky, who defined it as the "construction of computer programs that perform tasks that are, for the time being, more satisfactorily accomplished by human beings". AI could then be considered an innovation, itself defined as the introduction of new ideas, methods or things.
Why "responsible"?
According to a philosophical current known as consequentialism, ethics could be synonymous with "responsible" in that it involves carrying out one's actions with care to limit undesirable impacts on the environmental and social resources mobilized. Responsible innovation, a field well established in management science for the past ten years, could thus inspire organizations to make a resolute commitment to AI that is ethical from the outset: "ethical AI". An organization that innovates responsibly makes sure it has the means to honor four key principles (Stilgoe et al., 2013):
- Anticipation: examining the expected and unexpected consequences of innovation.
- Reflexivity: awareness of the values, biases and social norms that tacitly or explicitly shape innovation.
- Reactivity: adaptation to the emerging effects of innovations and the contexts in which they are deployed.
- Inclusiveness: involvement of stakeholders, including the public, in the development process.
It's up to organizations to seize the opportunity to create value
Aware of this regulatory push, private players working on AI are trying to take the lead and promote self-regulation. These private initiatives certainly testify to an awareness of the imperative need to guarantee ethical AI, but also to the pressure represented by these draft regulations.
One of the fears of the private ecosystem is that an overly restrictive legal framework will be imposed on them, thus weighing on their ability to innovate in a highly competitive field. The steps taken therefore seem to be aimed more at avoiding the introduction of overly restrictive regulations for their activities, by sending positive signals to public authorities about their ability to self-regulate and set acceptable ethical standards themselves, than at producing value through ethics.
Indeed, ethics represents an inescapable source of value for artificial intelligence, insofar as it constitutes a potentially differentiating factor in consumer choice.
AI-producing and AI-using organizations are therefore at a pivotal moment: as the regulatory push takes shape, the intrinsic value represented by ethical AI diminishes. It's no longer a question of providing additional value to its algorithms through ethics, which is differentiating on a market, but simply of complying with regulations.
The movement is thus from a logic of value production to one of compliance. With this in mind, we are convinced that organizations need to take a more in-depth look at the ethical implications of artificial intelligence, to transform it into a value proposition, or risk seeing this value disappear in favor of mere compliance with regulations.
To capture this value, they must set up an artificial intelligence ethics management system. This is the whole point of taking AI ethics into account across the entire organization, supported by labels dedicated to these issues.
Alexandre Martinelli, CEO and co-founder of La Javaness
La Javaness commits to a more sober, ethical and inclusive digital world
This summer you were awarded the level 2 Responsible Digital label. Why did you commit your company to such an initiative?
At European level, controlling the spread of AI is a prerequisite for collectively ensuring our resilience to health, ecological and geopolitical crises and, in the shorter term, for holding our own in international competition, in order to free up the resources that will enable us to finance our social model tomorrow and beyond.
At the same time, digital technology is a serious "cost center" for the environment, responsible for 4% of greenhouse gas emissions today and potentially 8% by 2025. Against this backdrop of profound change, La Javaness is also turning a corner, that of the age of reason: after more than six years of innovation with data and AI, we are entering an industrialization phase. The experience we have built up and our ongoing R&D efforts (20,000 person-days invested in our AI base) make us today a key player in the development of AI at the scale of very large regulated structures such as the AMF, RTE or Pôle emploi. It therefore seemed very important to me, at this stage in our history, to take precise stock of the robustness of our organization and our level of commitment, in order to pursue our ambition of becoming a trusted AI partner to major organizations. The approach proposed by INR seemed to me the most successful.
Finally, what are we doing at La Javaness to be a "digitally responsible" organization?
Our responsible digital program is based on three pillars. The first concerns our value proposition as an independent French AI player: "To put trusted AI at the service of competitiveness in Europe and the major challenges of our time, enabling our customers to strengthen their strategic autonomy, while ensuring respect for the resources committed and avoiding the risks of drift".
When we work with Pôle Emploi to help people return to work in the long term, or when we help RTE expand its AI capabilities to forecast tomorrow's energy networks, we are obviously mobilized to meet our contractual commitments. But as an innovation partner, we are also driven by their public-interest mission. These choices carry meaning and a shared interest, not only for all our employees, but also in the strong ties we forge with our customers and partners.
The second concerns the solutions we develop for our customers in a "responsible by design" way. In particular, we are investing in applied research on algorithms that consume very little data. This means less energy consumption, greater protection of personal data, greater security and also greater efficiency. Indeed, access to data is often complicated at our customers' sites. More generally, we involve all our development, design and business teams in eco-design approaches. A third axis is what we call our AI factory: the aim is to develop accelerators, i.e. reusable technological, functional and methodological components to secure the scaling-up of AI. For example, in the course of our experience, we have developed an in-house data annotation tool (an essential step in the use of algorithmic models) which limits the amount of input data required. We have just made it available to the open-source community to help spread virtuous practices.
Our third pillar concerns the quality and value of our relationship with our stakeholders (customers, employees and suppliers) to grow together on these issues.