May 5, 2024

AI Ethical Frameworks Critique

Are these Frameworks fulfilling their purpose?

by Daniel Guala

AI ethical frameworks have been established and released by various companies, governments, supranational bodies, and public organisations. Why are they all so aligned? Why did AI ethics emerge? Why were these frameworks created? And what concrete actions are being taken in the development of artificial intelligence?

The Context

The changes we have been seeing in recent years make up what is already known as the “fourth industrial revolution”. This revolution is characterised by continuous developments in fields such as technology, health, physics, and biology, and it is changing the world as we know it. AI is one of the driving forces of this revolution, and several tools are being built upon it to contribute to this progress, to the extent that this technology is being called “the steam engine of the fourth industrial revolution”.

At the end of 2022 we met ChatGPT, and there is no doubt that there is a before and an after this event. 2023 was the year in which the development and adoption of LLMs (Large Language Models) exploded. This type of AI, together with other generative AIs for images, audio, video, code, and more, is the most widespread and the most used by the general public. AIs are now a technological product and, as such, Big Tech companies are leading their development, innovation, and offering. Big Tech therefore holds much of the current and future power in this area.

All the dilemmas and debates we addressed in the previous post had AI as the main character. Public opinion around the application and use of this technology can make or cost these companies billions, and can win or lose power for those who rule.

It is in this context that the Ethics of Artificial Intelligence arises.

What is AI Ethics?

Ethics can be defined as the branch of philosophy that studies the values and principles that guide human behaviour, reflecting on moral issues and seeking to determine whether actions are right or wrong.

Artificial intelligence ethics is the set of principles and values that apply to the design, development, deployment, use, and regulation of Artificial Intelligence systems.

Image by Daniel Guala.

Why do AI ethics matter?

Philosophical currents, ethical principles, individual reflections, and opinions on specific issues spark debates that shape the values of a society. Societal consensus then guides legal codification and legislative processes, transforming ethics into law. This has always been the true importance of ethics and philosophy.

How, and to what extent, something like technology is legislated is determined by the perception of the population and by the demands, or passivity, we show towards our rulers, although this dynamic is in decline: it is now the rulers who initiate legislative processes according to their own interests.

There are other disruptive technologies that are just as important, yet no international organisation or corporation places as much emphasis on legislating them. We will address this in another post 😉.

If AI ethics justifies and reassures public opinion about the intentions, actions, and consequences of the AI activities of companies and states, citizens will question those activities to a lesser extent.

AI Ethical Frameworks

If you are not familiar with these artificial intelligence ethical frameworks, we will answer some questions about them to shed some light.

What is an AI Ethical Framework?

An Ethical Framework in AI is the set of moral principles and values that guide the responsible development and use of this technology.

By analogy, AI ethical frameworks are the compass that guides us toward Responsible AI (RAI).

Responsible AI Governance is the decisions we make as we navigate using that compass, trying to translate Ethical principles into action.

What Ethical Frameworks exist?

Big Tech companies such as Microsoft, IBM, Google, Meta, and Amazon, along with other international companies, have developed their own AI ethical frameworks, in which they share how they, their partners, customers, and users can apply ethical principles and practices in the design, development, and even the use of AI.

The White House, the EU, the UN, and most governments (Spain’s National Artificial Intelligence Strategy, ENIA in Spanish, does so in its strategic axis 6) have also taken the same path. In November 2021, UNESCO established the first global standard on the ethics of AI, the “Recommendation on the Ethics of Artificial Intelligence”, adopted by all 193 member states.

Guided by the experience of that standard, the same organisation, in collaboration with the Patrick J. McGovern Foundation, The Alan Turing Institute, and the ITU (International Telecommunication Union), created the Global AI Ethics and Governance Observatory in February 2024: a platform that aims to provide resources such as reliable data, knowledge, models, expert insights, and good practices on AI and the ethics of AI governance, in order to guide policy and decision making by policymakers, regulators, academics, the private sector, and civil society.

There is even an ISO standard, ISO/IEC 42001, the first international management system standard for the safe and reliable development and deployment of artificial intelligence.

ISO 42001 and AI ethical frameworks contribute to several goals of the 2030 Agenda. Screenshot from iso.org.

AI Ethical Principles

All these frameworks include, organised and classified in similar ways, the following principles:

  • Responsible Development: Creation of AI systems considering their ethical, social, and environmental impacts, as well as the definition of responsibilities throughout the entire AI development cycle (design, data collection, training, testing, production release, improvements, etc.).
  • Transparency: The processes by which AI systems collect and analyse data should be open and understandable. This principle also encompasses practices such as labelling AI-generated content.
  • Explainability: The ability of AI systems to provide understandable and accessible explanations of how they make decisions or perform actions so that users trust and understand the AI.
  • Trustworthiness: The ability of an AI to respond with reliable data and to give the same answers (or as close as possible) to the same questions. This concept is closely related to the transparency and explainability principles.
  • Sustainability: AI development and use practices that seek to minimise environmental impact.
  • Privacy: Protection of users’ personal and sensitive information through privacy practices and compliance with data protection regulations.
  • Diversity: Heterogeneity of AI training data so that responses reflect a broader perspective. It also refers to the inclusion of different views and profiles of people in the planning, development, and deployment of AIs.
  • Bias Minimisation: Attempts to reduce unfair biases or prejudices in AI systems by correcting biases in data and algorithms to ensure fair and equitable decisions.
Ethical principles for AI. Image from harvard.edu.

Reflections and Arguments against Artificial Intelligence Ethics

In this fourth industrial revolution, there seems to be no disagreement about the need for ethics in AI: Big Tech companies, states, supranational bodies, and international organisations have reflected on the values and moral principles that guide our behaviour in order to face the ethical problems AI poses. They have already defined responsible development, and established which aspects will be transparent to users and what to explain so that we trust the AIs they develop. They give us arguments that justify the sustainability of their practices, and they decide how diverse and how biased these tools may be.

We are being told that all areas related to AI are already ethical.

Let’s think for a moment. Are they trying to chew these issues over for us, to keep us from reflecting, taking positions, and, more importantly, taking action for ourselves? What if an individual’s morals and ethics run against those set by these organisations? Are we allowing Big Tech, states, supranational bodies, and international organisations to enter the realm of individual morality? Are they trying to stop us from questioning their practices and demanding more from them by telling us that they are already ethical? Isn’t this a form of ethical coercion and persuasion?

Researchers like Corinne Cath, in Governing artificial intelligence: ethical, legal and technical opportunities and challenges (2018), have asked: “First, who sets the agenda for AI governance? Second, what cultural logic is instantiated by that agenda, and, third, who benefits from it? Answering these questions is important because it highlights the risks of letting industry drive the agenda and reveals blind spots in current research efforts.”

Frameworks do not question companies’ activities

There are voices, such as Luke Munn’s, stating in the article The uselessness of AI ethics (2022) that AI ethics is useless for several reasons, which we will see in this and the following sections.

One of these reasons is that AI ethics is “subsumed in corporate logics and incentives”. The values and principles that are given in AI ethics frameworks are closely aligned with corporate values, without questioning “the business culture, revenue models, or incentive mechanisms that continuously push these products into the markets”.

Conveniently, the values presented in the frameworks do not create any path that could make their creators lose any money.

Many organisations and companies choose for their ethical frameworks the values and principles that best fit and justify their current activities, so that they do not have to make any major changes in practice and become automagically ‘Ethical’ AI Companies.

This practice is also known as “Ethics Shopping”.

It is an experts’ thing

There is an assumption that ethical design is done only by experts, such as technologists, AI companies, prominent academics in ethics, policymakers, and governments.

This assumption limits the participation of all the other equally important actors in the evolution of this technology, such as end-users and citizens.

Reducing users and consumers to mere “stakeholders” renders invisible and numbs their fundamental role in the development of AI (a role we discussed in the section Why do AI ethics matter?), and it increases the power, weight, and responsibility of technology-related companies and professionals.

“AI that is built without the people it affects risks being an AI that affects them negatively.”

Image by Daniel Guala.

Universality

The ethical principles of the frameworks assume that concerns and needs about the positive and negative impacts of artificial intelligence are the same across cultures, social groups, and countries, and that they can be addressed with a single set of objective measures.

This assumption rests on the idea that ethical implications concerning artificial intelligence remain consistent across cultural, social, and political contexts, despite the fundamental differences in beliefs and values systems among various regions and communities worldwide.

Is there one framework to fit all of them? Should all cultures have an AI Ethical Framework adapted to their context? Or one AI with different alignments per culture? Should there be one AI politically aligned with the left wing and the other for the right wing? One Christian AI, one Muslim, one Jewish, one Buddhist? One for South Americans and one for Russians? One for the young and one for the old?

At a glance, an AI that satisfies all needs and aligns with all the interests of every social subgroup is not feasible.

Alignment

Although AI ethical frameworks take a universalist view of the challenges this technology presents, paradoxically most AIs in large-scale use have guardrails and guidelines (not to say censorship) closely associated with a handful of US companies that share several traits: their popular culture, the laws they obey, progressive American ideology, and the goal of benefiting rather than harming their corporate owners, among others.

The frameworks developed by these companies, as well as the models they make available to the public, therefore have the biases of the companies offering these AI services. For the same reason, these AI systems are more willing to accept and perform certain actions and reject others based on American morals and culture.

There are practices that align with some governments’ interests and are accepted by the population in some regions but not in others. For example, facial recognition and social scoring are unacceptable in American and European moral systems but widely accepted in China.

Open-Source Uncensored AI

There is one solution that could address the universality issue and the alignment problem while better satisfying several of the principles presented in the ethical frameworks: open-source uncensored AIs.

Instead of large companies and organisations applying the ethical frameworks they develop for themselves and the alignment they deem appropriate, with our own open-source uncensored AI no imposed guardrails or alignments exist: each person could apply their own code of ethics and align the model to their own needs.

Being able to adjust an uncensored model with additional data relevant to the task at hand (fine-tuning) is what gives us these possibilities.

This individual granularity in the alignment with users seems to be possible, for the time being, only through open-source and uncensored AIs.
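
To make this concrete, here is a minimal sketch of personal fine-tuning using LoRA adapters with the Hugging Face transformers, peft, and datasets libraries. The base model id and the my_ethics_corpus.jsonl file are illustrative assumptions, not a prescribed setup: the corpus stands in for whatever examples encode your own code of ethics.

```python
# A minimal LoRA fine-tuning sketch. Assumptions: transformers, peft, datasets,
# and torch are installed; "my_ethics_corpus.jsonl" is a hypothetical local file
# of {"text": ...} examples reflecting the user's own code of ethics.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # any open base model works here

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach small trainable LoRA adapters; the base weights stay frozen,
# so personal alignment is cheap to train, store, and share.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

dataset = load_dataset("json", data_files="my_ethics_corpus.jsonl", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-aligned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The design point is that the adapters, not the base model, carry the personal alignment, so each user can keep, swap, or share their own.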

Eric Hartford, the creator of Dolphin-mixtral-8x7b (an uncensored AI model), comments in his article Uncensored Models (2023):

My knife cuts what I want. Why should the open-source AI running on my computer get to decide for itself when it wants to answer my question? This is about ownership and control. If I ask my model a question, I want an answer; I do not want it arguing with me.
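
As an illustration of that ownership, here is a minimal sketch of running an open model entirely on your own machine with the Hugging Face transformers library. The repo id below is an assumption pointing at one of Hartford’s dolphin releases; substitute any open model your hardware can hold.

```python
# A minimal local-inference sketch. Assumptions: transformers, torch, and
# accelerate are installed, and the repo id below exists on Hugging Face;
# swap in a smaller open model if hardware is limited.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "cognitivecomputations/dolphin-2.5-mixtral-8x7b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Explain in two sentences why running models locally favours user control."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation happens entirely on local hardware: no external provider
# sees the prompt, applies its guardrails, or logs the answer.
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```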

Mitko Vasilev (CTO) comments the following in a LinkedIn post about alignment, privacy, and user independence when using AI models:

Privacy and Independence: This setup offers 100% privacy, unlimited tokens, and no dependency on external providers.

Open source allows powerful AI tools to be made more accessible and aligned with individual users. It’s a step towards democratizing AI technology, ensuring that everyone can own and control their AI experience.

Make sure you own your AI. Even if you pay, AI in the cloud is not aligned with you; it’s aligned with the company that owns it.

The democratization of AI that Mitko mentions is an expression commonly used to refer to the intention of making this technology accessible to a broad spectrum of people, regardless of their background or resources. This has also always been one of the advantages of freely available source code.

Many people claim that releasing powerful AIs to the general public without any restrictions or censorship is dangerous, because many could use them for unethical, dishonest, or harmful purposes, posing threats to other people or even to humanity itself.

But following these same fears, should we also lock down or censor the code of other technologies just because of the misuse that could be made of them? Has any technology been aligned or censored in the past?

If we consider closing the code of AIs for fear of making them accessible to “bad guys” who might misuse them, there is some bad news: the bad guys will access them anyway and will carry out their misdeeds one way or another.

Where would AI be right now if researchers, developers, and testers didn’t have access to source code? Or without open-source AI platforms like Hugging Face 🤗?

If we compare open-source and closed-source AIs, we can see that an open AI complies, by definition, more closely with the principle of transparency, while a closed AI will always comply to a lesser degree. Transparency increases further when models are trained on openly accessible datasets, and the more transparent an AI is, the more trustworthy it becomes for users.

Closed-source AI, on the other hand, operates behind closed doors, with no transparency about how its models are developed and trained.

It leaves users with a single option: having faith in the good intentions of the companies providing the AI services, rather than knowledge of the hidden processes and training data of the systems they are using.

In terms of security, the many-eyed scrutiny of a community-driven AI is more likely to identify threats and issues, and it enables independent verification of them.

Image from techtarget.com.

AI Ethics has no teeth

The AI principles are well-intentioned, but their effectiveness has yet to be demonstrated.

There are no tangible sanctions for violating the ethical frameworks that have been created: they may establish normative ideals, but they have no means of enforcing compliance with their values and principles.

The article AI ethics should not remain toothless! A call to bring back the teeth of ethics (2020) by Anaïs Rességuier argues that ethics in artificial intelligence is largely ineffective because it is treated as a softer version of the law. This view of ethics as a soft version of the law is problematic, as it tries to use ethics for a purpose it was not designed for. As we mentioned before, norms and laws are the end goal, and ethics is the starting point and the path towards them.

Some claim that important academic institutions promoted and supported the idea that Big Tech companies could manage and control themselves in the use of artificial intelligence as long as they adhered to their ethical codes. But, according to Rodrigo Ochigame in The Invention of “Ethical AI” (2019), the recommendations that academia and research institutions have made on AI so far are “aligned consistently with the corporate agenda”. This has essentially allowed big companies to keep developing these technologies at full speed and to legitimise their activity, in “a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies”.

If public opinion believes these frameworks ensure that the companies and organisations that create them are doing everything according to the ethics they have defined, no one will look closer and question whether they are doing anything wrong. There won’t be any great public debate calling on the authorities to keep an eye on companies or to hold them accountable for dubious activities related to AI development.

Past experiences

The job of saying “no” or “slow down” within organisations that have no interest in hearing exactly that is not an easy one.

In this regard, the news that Microsoft fired the “ethics and society” team in its artificial intelligence division has not gone unnoticed. Earlier, at the end of 2020, Google fired AI ethics researcher Timnit Gebru after she co-authored a paper with “inconvenient truths” about large language models, which led other senior managers in the department to leave the company, further reducing Google’s credibility on responsible AI issues at the time.

Several years and several events have passed since then, but those early years were key to the development of many of the AIs we use today, and one cannot ignore the fact that they have done it before. What can stop them from doing it again?

“I am…inevitable…”

These frameworks assume that AI and its implementation across almost all facets of life will inevitably happen, and that we citizens can only react to its consequences. The ethical debate is therefore about how to design appropriately, and never about whether systems should be built in the first place.

The only ethical path is to “build better”: maximising the benefits of AI, minimising its negative impacts, and educating the public about the role of AI in their lives. “Not building” is not an option.

Image by Daniel Guala.

AI ethical principles are isolated from development

AI frameworks are also criticised for their principles being isolated from practical implementation in AI design and development within the companies and organisations that drive them.

In other words, companies and international organisations appear to apply ethical principles in their processes without actually putting them into practice.

Those who largely advocate that these principles should guide and govern the practices of engineers fail to ensure that developers, AI teams, and data engineers apply ethical principles to the code they develop.

The study Does ACM’s Code of Ethics Change Ethical Decision Making in Software Development? (2018) by McNamara, Smith, and Murphy-Hill tested the code of ethics of the ACM (Association for Computing Machinery) with 63 software engineering students and 105 professional software developers, and found that “explicitly instructing participants to consider the ACM code of ethics in their decision-making had no observed effect compared to a control group”.

The conclusions of Ethically Aligned Design of Autonomous Systems: Industry Viewpoint and an Empirical Study (2020) by Ville Vakkuri state that “developers consider ethics important in principle. However, they consider ethics as a construct impractical and distant from the issues they face in their work. There is thus a clear gap between research and practice in the area as the developers are not aware of the academic discourse on the ethics of AI. The key finding of this study was that none of the case companies utilized any tools or methodologies to implement AI ethics.”

The results of these studies show that it is unusual for moral principles to be applied to the code or to the data on which AIs are trained.

AI Ethics Washing

If someone claims to include ethical practices in the design, development, and public release of an AI system but fails to include or implement those principles, we are dealing with ethics washing.

Is it possible to implement ethical measures in AI development?

It is precisely on this gap between theory and practice that new research and efforts are being focused, proposing governance models such as the methodology set out in Operationalising AI ethics: barriers, enablers and next steps, and initiatives such as Ethics as a Service (EaaS), by analogy with cloud computing offerings such as PaaS (Platform as a Service), IaaS (Infrastructure as a Service), and SaaS (Software as a Service). The article on EaaS notes that research ethics and medical ethics (where the Hippocratic Oath has existed since around 500 BC) have always used a combination of ethical laws, policies, practices, and procedures. This combined approach has allowed these branches of applied ethics to strike a good balance between being too strict and too flexible, and between being too centralised and too decentralised. It therefore seems reasonable to think that ethics in artificial intelligence would benefit from a similarly customisable approach.

Image by evilaicartoons.com

Conclusion

Does this mean that AI Ethical Frameworks are bad?

No. They must be understood within a business or organisational environment in which products are developed and services are offered to large numbers of people. Just as business ethics has been implemented and worked on for years, AI ethics should be encompassed within it.

It is therefore up to individuals to actively ask themselves whether these ethical principles are the ones we want to follow and why we should follow them. AI users should reflect on what to demand from the large organisations that provide AIs and proactively raise those demands, or, alternatively, try uncensored open-source AIs and fine-tune them for their specific needs.

In law a man is guilty when he violates the rights of others. In ethics he is guilty if he only thinks of doing so. — Immanuel Kant