Are these Frameworks fulfilling their purpose?
AI ethical frameworks have been established and released by various companies, governments, supranational organisations, and public bodies. Why are they all so closely aligned? Why did AI ethics rise to prominence? Why were these frameworks created? And what concrete actions are being taken in the development of artificial intelligence?
All the changes we have seen in recent years make up what is already known as the “fourth industrial revolution”. This revolution is characterised by continuous developments in fields such as technology, health, physics, and biology, and it is changing the world as we know it. AI is one of the driving forces of this revolution, and many tools are being built upon it to contribute to this progress, to the extent that the technology has been called “the steam engine of the fourth industrial revolution”.
At the end of 2022 we met ChatGPT, and there is no doubt that it marked a before and after. 2023 was the year in which the development and adoption of LLMs (Large Language Models) exploded. This type of AI, together with other generative AIs for images, audio, video, code, and more, is the most widespread and the most used by the general public. AI is now a technological product and, as such, Big Tech companies lead its development, innovation, and commercial offering. Big Tech therefore holds much of the current and future power in this area.
All the dilemmas and debates we addressed in the previous post had AI as the main character. Public opinion on the application and use of this technology can earn or cost these companies billions, and can win or lose power for those who govern.
It is in this context that the Ethics of Artificial Intelligence arises.
Ethics can be defined as the branch of philosophy that studies the values and principles that guide human behaviour, reflecting on moral issues and seeking to determine whether actions are right or wrong.
Artificial intelligence ethics is the set of principles and values that apply to the design, development, deployment, use, and regulation of Artificial Intelligence systems.
Philosophical currents, ethical principles, and individual reflections and opinions on specific issues spark the debates that shape a society’s values. Societal consensus then guides legal codification, transforming ethics into law. This has always been the true importance of ethics and philosophy.
How and to what extent something like technology is legislated is determined by the perception of the population and by the demands, or passivity, we show towards our rulers (although this dynamic is in decline, and it is now often the rulers who initiate legislative processes according to their own interests).
There are other disruptive technologies that are just as important, yet no international organisation or corporation places as much emphasis on legislating them. We will address this in another post 😉.
If AI ethics justifies and reassures public opinion about the intentions, actions, and consequences of the AI activities of companies and states, citizens will question those activities to a lesser extent.
If you are not familiar with these artificial intelligence ethical frameworks, we will answer some questions about them to shed some light.
An Ethical Framework in AI is the set of moral principles and values that guide the responsible development and use of this technology.
By analogy, AI ethical frameworks are the compass that guides us toward Responsible AI (RAI).
Responsible AI governance is the set of decisions we make as we navigate using that compass, translating ethical principles into action.
Big Tech companies such as Microsoft, IBM, Google, Meta, and Amazon, along with other international companies, have developed their own AI ethical frameworks, in which they share how they, their partners, customers, and users can apply ethical principles and practices in the design, development, and even the use of AI.
The White House, the EU, the UN, and most governments (Spain, for example, in strategic axis 6 of its National Artificial Intelligence Strategy, ENIA in Spanish) have taken the same path. In November 2021, UNESCO established the first global standard on the ethics of AI: the “Recommendation on the Ethics of Artificial Intelligence”, adopted by all 193 member states.
Guided by the experience of that standard, and in collaboration with the Patrick J. McGovern Foundation, The Alan Turing Institute, and the ITU (International Telecommunication Union), the same organisation created the Global AI Ethics and Governance Observatory in February 2024: a platform that aims to provide resources such as reliable data, knowledge, models, expert insights, and good practices on AI and the ethics of AI governance, in order to guide policy and decision making by policymakers, regulators, academics, the private sector, and civil society.
There is even a published ISO standard, ISO/IEC 42001, the first international management system standard for the safe and reliable development and deployment of Artificial Intelligence.
All these frameworks include, organised and classified in similar ways, a common core of principles: transparency, explainability, fairness, privacy, accountability, and sustainability.
In this fourth industrial revolution there seems to be no disagreement about the need for ethics in AI: Big Tech, states, supranational organisations, and international bodies have all reflected on the values and moral principles that should guide our behaviour in the face of AI-related ethical problems. They have already defined what responsible development means, established which aspects will be transparent to users, and decided what to explain so that we trust the AIs they develop. They give us arguments that justify the sustainability of their practices, and they decide how diverse, and how biased, these tools may be.
We are being told that all areas related to AI are already ethical.
Let’s think for a moment. Are they trying to pre-digest these issues to prevent us from reflecting, taking positions and, more importantly, taking action for ourselves? What if an individual’s morals and ethics go against those set by these organisations? Are we allowing Big Tech, states, supranational organisations, and international bodies to enter the realm of individual morality? Are they trying to stop us from questioning their practices and demanding more of them by telling us that they are already ethical? Isn’t this a form of ethical coercion and persuasion?
Researchers like Corinne Cath, in Governing Artificial Intelligence: ethical, legal and technical opportunities and challenges (2018), have asked: “First, who sets the agenda for AI governance? Second, what cultural logic is instantiated by that agenda, and, third, who benefits from it? Answering these questions is important because it highlights the risks of letting industry drive the agenda and reveals blind spots in current research efforts.”
There are voices such as Luke Munn, who argues in his article The uselessness of AI Ethics (2022) that AI ethics is useless for several reasons, which we will look at in this and the following sections.
One of these reasons is that AI ethics is “subsumed in corporate logics and incentives”. The values and principles given in AI ethics frameworks are closely aligned with corporate values, without questioning “the business culture, revenue models, or incentive mechanisms that continuously push these products into the markets”.
Conveniently, the values presented in the frameworks never open a path that could cost their creators money.
Many organisations and companies choose for their ethical frameworks the values and principles that best fit and justify their current activities, so that they do not have to make any major changes in practice and become, automagically, “ethical” AI companies.
This practice is also known as “ethics shopping”.
There is an assumption that ethical design is done only by experts, such as technologists, AI companies, prominent academics in ethics, policymakers, and governments.
This assumption limits the participation of all the other equally important actors in the evolution of this technology, such as end-users and citizens.
Reducing users and consumers to mere “stakeholders” makes their fundamental role in the development of AI invisible and dulls it (a role we discussed in the section “Why do AI ethics matter?”), and it increases the power, weight, and responsibility of technology companies and professionals.
“AI that is built without the people it affects risks being an AI that affects them negatively.”
The ethical principles of the frameworks assume that concerns and needs about the positive and negative impacts of artificial intelligence are the same across cultures, social groups, and countries, and that they can be addressed by applying a single set of objective measures.
This assumption rests on the idea that the ethical implications of artificial intelligence remain consistent across cultural, social, and political contexts, despite fundamental differences in the belief and value systems of regions and communities around the world.
Is there one framework to fit them all? Should every culture have an AI ethical framework adapted to its context? Or one AI with a different alignment per culture? Should there be one AI politically aligned with the left and another with the right? One Christian AI, one Muslim, one Jewish, one Buddhist? One for South Americans and one for Russians? One for the young and one for the old?
At a glance, an AI that satisfies every need and aligns with the interests of every social subgroup is not feasible.
Although AI ethical frameworks take a universalist view of the challenges presented by this technology, paradoxically most AIs in large-scale use have guardrails and guidelines (not to say censorship) closely associated with a handful of US companies that have several traits in common: popular culture, the laws they obey, progressive American ideology, and the goal of benefiting rather than harming their corporate owners, among others.
The frameworks these companies develop, as well as the models they make available to the public, therefore carry the biases of the companies offering these AI services. For the same reason, these AI systems are more willing to accept and perform certain actions, and to reject others, based on American morals and culture.
Some practices are aligned with certain governments’ interests and accepted by the population in some regions but not in others. For example, facial recognition and social scoring are unacceptable under American or European moral systems but widely accepted in China.
There is one solution that could address the universality issue and the alignment problem while better meeting several of the principles set out in the ethical frameworks: open-source, uncensored AIs.
Instead of large companies and organisations applying the ethical frameworks they develop for themselves, and the alignment they deem appropriate, with our own open-source, uncensored AI there are no imposed guardrails or alignments: each person can apply their own code of ethics and align the model to their own needs.
Being able to adjust a model with additional data relevant to the task at hand (fine-tuning), starting from an uncensored AI, gives us exactly this possibility.
This individual granularity of alignment seems possible, for the time being, only through open-source, uncensored AIs.
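As a minimal sketch of what this could look like in practice (not a prescription from any of the frameworks discussed here), the following example fine-tunes an open-weights model with LoRA adapters using the Hugging Face transformers, peft, and datasets libraries. The model id is just one illustrative open model, and my_values_dataset.jsonl is a hypothetical file of prompt–answer pairs reflecting your own values:

```python
# Minimal LoRA fine-tuning sketch: adapting an open-weights model to your
# own data. Model id and dataset file are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # any open-weights model works
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Attach small trainable LoRA adapters instead of updating all the weights.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Your own data: a JSONL file where each record has a "text" field containing
# a prompt plus the answer you consider appropriate.
data = load_dataset("json", data_files="my_values_dataset.jsonl")["train"]
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-aligned-model",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The point is not these specific libraries but the degree of control: the data, the training objective, and the resulting weights all stay in the user’s hands.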
Eric Hartford, the creator of Dolphin-mixtral-8x7b (an uncensored AI model), writes in his article Uncensored Models (2023):
My knife cuts what I want. Why should the open-source AI running on my computer get to decide for itself when it wants to answer my question? This is about ownership and control. If I ask my model a question, I want an answer; I do not want it arguing with me.
Mitko Vasilev (CTO) comments as follows in a LinkedIn post about alignment, privacy, and user independence when using AI models:
Privacy and Independence: This setup offers 100% privacy, unlimited tokens, and no dependency on external providers.
Open source makes powerful AI tools more accessible and aligned with individual users. It’s a step towards democratizing AI technology, ensuring that everyone can own and control their AI experience.
Make sure you own your AI. Even if you pay, AI in the cloud is not aligned with you; it’s aligned with the company that owns it.
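To make this ownership concrete, here is a minimal sketch of running an open-weights model entirely on your own machine with the Hugging Face transformers library; the model id is illustrative, and any open model you can download locally would do:

```python
# Local inference sketch: once the weights are downloaded, nothing leaves
# your machine; no cloud provider sits between you and the model.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative open model
    device_map="auto",
)
print(chat("Why does local inference preserve privacy?",
           max_new_tokens=200)[0]["generated_text"])
```

Once the weights are on disk, inference requires no external API: prompts never leave the machine, and no provider can revoke access or change the model’s behaviour.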
The democratization of AI that Mitko mentions is an expression commonly used to refer to the intention of making this technology accessible to a broad spectrum of people, regardless of their background or resources. This has also always been one of the advantages of freely available source code.
Many people claim that releasing powerful AIs to the general public without restrictions or censorship is dangerous, because many could use them for unethical, dishonest, or harmful purposes, posing threats to other people or even to humanity itself.
But following these same fears, should we also lock down or censor the code of other technologies just because of the misuse that could be made of them? Has any technology been aligned or censored in the past?
If we think about closing the code of AIs for fear of making it accessible to “bad guys” who could misuse it, there is some bad news: the bad guys will access it anyway and do their misdeeds one way or another.
Where would AI be right now if researchers, developers, and testers didn’t have access to source code? Or without open-source AI platforms like Hugging Face 🤗?
If we compare open-source and closed-source AIs, we can observe that an open AI complies, by definition, more closely with the principle of transparency, while a closed AI will always comply to a lesser degree. The degree of transparency increases further when models are trained on openly accessible datasets. The more transparent the AI, the more trustworthy it becomes for users.
Closed-source AI, on the other hand, operates behind closed doors, with no transparency about the development or training data of its models.
It leaves users with the sole option of having faith in the good intentions of the companies providing AI services, rather than actual knowledge of the hidden processes and training data of the systems they are using.
In terms of security, the many-eyed scrutiny of a community-driven AI is more likely to identify threats and issues, and it enables their independent verification.
The AI principles are well intentioned, but their effectiveness has yet to be demonstrated.
There are no tangible sanctions for violating the ethical frameworks that have been created; they may establish normative ideals, but they have no means of enforcing compliance with their values and principles.
The article AI ethics should not remain toothless! A call to bring back the teeth of ethics (2020) by Anaïs Rességuier argues that ethics in artificial intelligence is largely ineffective because it is treated as a softer version of the law. This “ethics as soft law” view is problematic because it uses ethics for a purpose it was not designed for. As we mentioned before, norms and laws are the end point; ethics is the starting point and the path towards them.
Some claim that important academic institutions promoted and supported the idea that big tech companies could manage and control themselves in their use of artificial intelligence as long as they adhered to their ethical codes. But, according to Rodrigo Ochigame in The Invention of “Ethical AI” (2019), the recommendations on AI made so far by academia and research institutions are “aligned consistently with the corporate agenda”. This has essentially allowed big companies to keep developing these technologies at full speed and to legitimise their activity, in “a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies”.
If public opinion believes these frameworks ensure that the companies and organisations that create them are doing everything according to the ethics they define, no one will look closer and question whether they are doing anything wrong. There will be no great public debate calling on the authorities to keep an eye on companies or to hold them accountable for dubious activities related to AI development.
The job of saying “no” or “slow down” within these organisations, which have no interest in hearing exactly that, is not an easy one.
In this regard, the news that Microsoft fired the “ethics and society” team in its artificial intelligence division has not gone unnoticed. Earlier, at the end of 2020, Google fired AI ethics researcher Timnit Gebru after she co-authored a paper with “inconvenient truths” about large language models, which led other senior managers in the department to leave the company, further damaging Google’s credibility on responsible AI issues at the time.
Several years and several events have passed since then, but those early years were key to the development of many of the AIs we use today, and one cannot ignore the fact that if they have done it before, what can stop them from doing it again?
These Frameworks assume AI and its implementation across almost all facets of life will inevitably happen, and we citizens can only react to its consequences. Therefore, the ethical debate is about how to design appropriately, and never about whether systems should be built in the first place.
The only ethical path is to “build better”, maximising the benefits of AI and minimising its negative impacts, while educating the public about the role of AI in their lives. “Not building” is not an option.
AI frameworks are also criticised because their principles are isolated from practical implementation in AI design and development within the companies and organisations that promote them.
In other words, companies and international organisations appear to apply ethical principles in their processes without actually putting them into practice.
Those who advocate that these principles should guide and govern engineering practice largely fail to ensure that developers, AI teams, and data engineers actually apply them to the code they produce.
The study Does ACM’s Code of Ethics Change Ethical Decision Making in Software Development? (2018) by McNamara, Smith, and Murphy-Hill, conducted with 63 software engineering students and 105 professional software developers, found that “explicitly instructing participants to consider the ACM code of ethics in their decision-making had no observed effect compared to a control group”.
In the conclusions of Ethically Aligned Design of Autonomous Systems: Industry Viewpoint and an Empirical Study (2020), Ville Vakkuri states that “developers consider ethics important in principle. However, they consider ethics as a construct impractical and distant from the issues they face in their work. There is thus a clear gap between research and practice in the area as the developers are not aware of the academic discourse on the ethics of AI. The key finding of this study was that none of the case companies utilized any tools or methodologies to implement AI ethics.”
The results of these studies show that it is unusual for moral principles to be applied to the code or to the data on which AIs are trained.
If someone who claims to include ethical practices in the design, development, and public release of an AI system fails to implement those principles, we are dealing with ethics washing.
It is precisely on this gap between theory and practice that new research and efforts are focusing, proposing governance models such as the methodology set out in Operationalising AI ethics: barriers, enablers and next steps, and initiatives such as Ethics as a Service (EaaS), by analogy with cloud computing offerings such as PaaS (Platform as a Service), IaaS (Infrastructure as a Service), and SaaS (Software as a Service). The article on EaaS points out that research ethics and medical ethics (where the Hippocratic Oath has existed since around the 5th century BC) have always used a combination of ethical laws, policies, practices, and procedures. This combined approach has allowed these branches of applied ethics to strike a balance between being too strict and too flexible, and between being too centralised and too decentralised. It therefore seems reasonable to think that ethics in artificial intelligence would benefit from a similarly customisable approach.
Does this mean that AI Ethical Frameworks are bad?
No. They must be understood within a business or organisational environment in which products are developed and services are offered to large numbers of people. In the same way that business ethics has been implemented and refined over the years, AI ethics should be encompassed within it.
It is therefore up to individuals to actively ask themselves whether these ethical principles are the ones we want to follow, and why we should follow them. AI users should reflect on what to demand of the large organisations that provide AIs and raise those demands proactively or, alternatively, try uncensored open-source AIs and fine-tune them to their specific needs.
“In law a man is guilty when he violates the rights of others. In ethics he is guilty if he only thinks of doing so.” – Immanuel Kant
Cloud Architect at Paradigma Digital