
A new Code of Conduct for the Spanish Legal Profession

May 14, 2020

Nielson Sánchez Stewart, Ph.D.

On May 9th, 2019, after more than three years of work, the General Council of Spanish Lawyers (Consejo General de la Abogacía Española, CGAE), the body that coordinates the activity of the 83 Spanish Bar Associations, approved a new Code of Conduct to take effect nationwide.

It was a necessary step to update the regulation of the legal profession. That the profession is undergoing profound change has not gone unnoticed, and the current changes are perhaps the most important in its long history. The considerable increase in the number of practitioners, the sometimes insufficient preparation of those who begin to practise, the growing proportion of women entering the legal profession, and the removal of the barriers that traditionally prevented or hindered practice outside national borders are, among other factors, causes of this transformation. These changes, together with the convergence of counselling and advocacy with the activities of other professionals of different views, backgrounds and training, made it necessary to carry out a profound revision of the norms that regulate the profession, norms that for years remained unchanged and were handed down from generation to generation in the privacy of law firms.

The explicit codification of the rules began several decades ago. The former Assembly of Deans of the Council, in a session held on May 28th and 29th, 1987, approved the so-called "Deontological Standards of Spanish Law". At another Assembly of Deans, on June 29th, 1995, some modifications were introduced and the body of rules was renamed the "Code of Deontological Conduct", the first of the Spanish legal profession.

The codification of these so-called "standards of behaviour" has been contested: critics doubt their legitimacy, argue that they violate the necessary independence of the profession, and claim they have limited effectiveness. These criticisms assume that the standards are not authentic legal norms but merely moral or social recommendations. That is not the case, as the Constitutional Court has already declared, classifying them as strictly legal. It is true that they have an ethical or moral inspiration, but that is common to other legal provisions, such as the Penal Code, to go no further.

The organization of the profession has changed, and it became necessary to adapt the rules of behaviour accordingly. Many of the 1987 norms were becoming prematurely obsolete, and in 2001 a new General Statute of the Spanish Legal Profession was approved, repealing that of 1982. Many venerable institutions disappeared with the consolidation of the rule of law and the new conception of the Lawyer: no longer merely an auxiliary to justice but one of its main actors, integrated into society to provide a service of public interest under free competition.

In 2002 a new Code of Conduct was approved, replacing that of 1987; it has remained in force until now. In the almost twenty years that have since elapsed, however, the need arose to adapt it to the times.

The CCBE, constituted as the highest representative body of the legal profession before the institutions of the European Union, had approved a "Code of Ethics" at its plenary session in Strasbourg on October 28th, 1988, modified at a session held in Lyon (France) on November 28th, 1998. The purpose of this Code was to establish rules of conduct for cross-border professional practice, and it was "assumed" by the CGAE. In accordance with paragraph 1.3 of the Code, its objective was to define "common rules applicable to all Lawyers of the European Union and the European Economic Area in their cross-border activity, whichever Bar Association they belong to".

This Code – although in force in Spain upon its assumption by the General Council – has a scope of application restricted (ratione materiae, in its own expression) to cross-border activities.

The codification process has not yet been finalised, and a phenomenon is now underway that runs counter to the trend of confining ethics to infra-national spheres. Within the European Union, work is currently being done on drafting common Codes of Ethics that will be applicable, at least, in countries whose legal systems share a common root, notably Italy, France, Portugal and Spain. This goodwill should be used not only to restate general principles that are already sufficiently settled, but also to formulate new ones, taking advantage of the conjunction of international experiences that can serve as a basis for comparative study.

In parallel, the CCBE periodically reviews the rules applicable to cross-border practice, which, while still exceptional, will inexorably increase as the European Union and the European Economic Area expand and international traffic increases.
Attempts to create a single Code of Ethics applicable, ideally, to all Lawyers in the world, or at least to all Lawyers practising within the Union, face the natural scruples of countries that have maintained traditional criteria and have resisted revising them in favour of a common heritage. A good example is professional secrecy, which in some countries, Spain among them, has a public origin closely linked to the special function the Lawyer performs in society, both as defence counsel, guarantor of that fundamental right, and as legal adviser, whose activity falls within the right to confidentiality. In other countries, by contrast, professional secrecy is nothing more than an obligation arising from the convention, contract or agreement that governs the relationship between Lawyer and client. Another example is the remuneration system, with acceptance or rejection of the quota litis agreement, fee sharing and payments for attracting clients. Despite this resistance, it is inevitable that in a globalized world general solutions will be sought, resolving intelligently and generously the difficulties that arise today. Consider the confidentiality of correspondence between Lawyers: in some countries, such as Spain, it is absolute; in others, such as the United Kingdom, it applies only if the correspondence is expressly marked as confidential; and in others, such as Italy, it applies unless the correspondence indicates otherwise. In times when correspondence circulates daily across borders, such a paradoxical situation cannot last long.

The synergies of Lawyers in neighbouring countries must be harnessed to advance jointly in the formulation of updated norms, and not merely to insist on restating principles that are already accepted.

Work on this new Code of Conduct began in 2016 within the Deontology Commission of the General Council of Spanish Lawyers. The alternatives of approving a totally new text or introducing certain modifications to the existing one were considered; the second was chosen so as not to lose the valuable body of administrative precedents and judicial decisions.

In drafting the new Code, account was taken of the then-current General Statute of the Spanish Legal Profession and of the draft Statute already approved by the Council but pending ratification by the authorities. Although that draft is not yet a binding norm, since the concurrence of the administration is required, it does reflect the feeling of the profession, and it is hoped that, once the difficulties experienced in Spain in recent years have been overcome, it will not take too long to be finally approved.

The existence of a Code of Conduct, as long as it does not restrict competition and is established for the benefit of consumers of services and society in general, is not objectionable to the competition authorities. This was stated by the former Competition Court in its famous "Report on the free exercise of professions. Proposal to adapt the regulations on collegiate professions to the free competition regime in force in Spain" of June 1992, which gave rise to an important modification of the Law on Professional Associations.

As early as May 2003, the European Commission announced the preparation of a proposal for a Directive on services in the internal market, to be released before the end of that year. Paragraph 39 of the proposed Directive, known as the Bolkestein Directive, encourages Member States to adopt uniform codes of conduct and includes an important passage: "Member States, in collaboration with the Commission, should be expected to encourage stakeholders to draw up codes of conduct at Community level, especially with the aim of promoting the quality of services, taking into account the peculiarities of each profession. The codes of conduct must be in accordance with Community law, and especially with competition law. They cannot be contrary to the binding legal provisions on ethics and professional conduct that are in force in the Member States."
The evolution of standards of conduct will continue, because they will have to be adapted to new times characterized by a legal profession different from the traditional one. Extrajudicial and preventive activity is ever more important; the scope of work is constantly expanding and encompasses territories governed not only by different laws but by different ethical standards; and disputes between individuals have in many cases ceased to be the fundamental field of work, since today public administrations have a hand in everything. Relations with other professionals who carry out activities similar, and sometimes identical, to those of the legal profession, notably in tax law, urban planning law and labour law, are becoming more frequent every day.

The practice of the profession has also changed in that it is ever more common for a Lawyer to work not as an independent professional but as an employee of another firm, of a company, or of a private individual outside the profession, becoming, in effect, a lawyer for a single client. At the same time, organizations of collective representation or with multiple interests, such as consumer groups, unions and public bodies, increasingly provide legal advice through a professional. The relationship with these third parties, who are not exactly clients but rather members of the group to which the services are provided, must be subject to ethical regulation.

On the other hand, the areas of legal advice shared by the legal profession and other professionals have led to the emergence of so-called multidisciplinary or multiprofessional law firms, which should be subject to specific regulation.

The deontological norm, legal in nature as has already been stressed, is of obligatory compliance, and its violation carries a sanction. There is no doubt about this in Spain. There are, however, certain peculiarities that characterize it and that have been the subject of judicial debate and analysis by the Courts.

There is no single law regulating the legal profession in our country. The rules are spread across many scattered texts, some of which do not have the rank of law. The statute that regulates the Bar Associations does not contain a table of infringements or a list of sanctions, but attributes to these bodies, within their territorial scope, the function of "… exercising disciplinary authority in the professional and collegiate order".

It has therefore been argued that the ethical standards contained in the Codes violate the principle of legality, because they do not in themselves define typified conducts and because they do not meet the requirement of publication in an official journal. Legality, typicality and publicity are conditions applicable to punitive norms and norms restricting rights, and it has been said that the lack of official publication would deprive the standards of binding force and of authentic legal status. The Constitutional Court has indicated, however, that there is a relationship of special subjection between members and their Bar Association, which is precisely what allows the requirement of strict statutory reservation to be relaxed. Relaxed, but not dispensed with: the sanctioning regime must still have a legal basis, even if the infringements and sanctions are not defined in detail in the law. It is therefore possible, by virtue of this special relationship of subjection, assumed by the member when applying for admission to the profession, for infringements and sanctions not to be defined by law, as long as the law refers them to a norm of lower rank.

Thus the universally accepted principle of nullum crimen, nulla poena sine lege is respected today. The requirement of typicality, that the conduct be precisely defined in the norm, is satisfied in Spain through the so-called "predictability" of the norm in the face of the lex certa requirement, since that requirement is not violated by defining punishable acts, omissions or conduct "by means of undetermined legal concepts, as long as their concretion is reasonably feasible by virtue of logical, technical or experience-based criteria and allows the nature and essential characteristics of the conduct constituting the typified infringement to be foreseen with sufficient certainty".

The new Code is not the culmination of ideal standards for regulating the profession. It is an update designed to address various current phenomena, such as payment for attracting clients, fee sharing with third parties outside the profession, the limitations imposed on professional secrecy, the advertising of services, substitution in representation and the relationships that arise between the Lawyer who provided the advice and defence and the one who takes it over, the second opinion, the obligation to account for funds received, and many others.

We have already started work on its update.

AI and Ethics

May 31, 2019

Chris Rees

The intervention below was delivered at the AEA-EAL conference "AI Beyond Hype" in Edinburgh on May 31, 2019.

Prof Timo Minssen has discussed a number of ethical issues related to AI in a medical context:
• What happens when AI makes a mistake and things go wrong?
• How existing medical regulations will cope with AI
• The tension between patient data privacy and the need for AI to learn from large scale patient data
• The effect of proprietary systems
• Whether AI systems should have legal personalities
• The effect on employment and the potential for unequal wealth distribution
• Bias and discrimination
As he noted, some, indeed many of these issues have wider ramifications than just the medical field. I shall look at some of the same issues, and one or two others, in a broader, societal context. Most of the ethical issues that arise with the implementation of AI systems are not unique to AI. I recently spoke in a debate at the WCIT in London with Professor Richard Harvey, Gresham Professor of IT and Professor of AI at the University of East Anglia on the motion, “There is no such thing as AI ethics, just ethics”. It became apparent in the debate that there are ethical issues which are specific to AI or which AI renders materially more acute. Today I want to look at the main ethical issues that arise in relation to AI, and reflect on the extent to which they are legal issues too.

I shall discuss six ethical topics:

1. Bias
2. Explainability
3. Harmlessness
4. Responsibility for harm
5. The effect on employment and on society
6. AIs impersonating humans

I shall also discuss the question, Does ethics matter in AI? I shall not talk about the trolley problem, because it is boring, old hat, not a real problem even for automated vehicles and was clearly set out by Thomas Aquinas in the 13th century (the doctrine of Double Effect). Nor shall I discuss the ethics of Artificial General Intelligence because I don’t think it’s going to happen.

1. Bias

Bias is definitely not unique to AI. AI systems are biased because we are biased. Bias in AI systems arises from two main causes: unwitting bias in the minds of the engineers who design and build the systems, and bias in the data.
The vast majority of these engineers in Europe and the USA are young, white males. There is nothing wrong with that in itself, but they may not even be aware that they are designing systems that work really well for white males and significantly less well for black females. Famously, Joy Buolamwini, a very sharp young black researcher at the MIT Media Lab, found that facial recognition software would not recognise her as a human being until she put on a white mask (around the same time, Google's photo service was notoriously labelling black people as gorillas). So she constructed an experiment with about 300 photos each of light-skinned males, light-skinned females, dark-skinned males and dark-skinned females. The system was close to 100% accurate for the first set, light-skinned males, in the 90% range for the second and third, and an appalling 66% for the last, dark-skinned females.
AI systems learn from the datasets they are trained on, so any biases in the training dataset will be "learned" by the AI. Biases are embedded in datasets because we ourselves are biased in many ways, some of which we notice and others we may not be aware of. The systems then go on learning from the datasets they operate on, which carry embedded biases of their own. The larger the dataset, the greater the incidence of bias.
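An audit of the kind just described, checking a classifier's accuracy separately for each demographic group rather than in aggregate, is straightforward to express in code. The sketch below is a minimal illustration with invented data; it is not Buolamwini's actual benchmark or results:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately per demographic group.

    `records` is a list of (group, predicted_label, true_label) tuples,
    a toy stand-in for a labelled audit dataset.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Invented data: a classifier that is right 3/3 times for one group
# but only 2/3 times for another.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "male", "female"),
    ("darker-skinned female", "female", "female"),
]
rates = accuracy_by_group(records)
print(rates)
```

Reporting a single aggregate accuracy for this toy classifier (5 of 6, about 83%) would hide exactly the disparity that the per-group breakdown exposes.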
Does this matter? Not in all applications. In machine translation, you are interested in the quality of translation into the target language. Google Translate copes well with "time flies like an arrow", offering "le temps file comme une flèche", but not so well with "fruit flies like a banana": "les mouches des fruits comme une banane". Gender bias can creep in here too. Turkish has genderless pronouns, yet Google and other automatic translation engines translate "o bir mühendis" as "he is an engineer", "o bir doktor" as "he is a doctor", "o bir hemşire" as "she is a nurse", and "o bir aşçı" as "she is a cook". This is offensive rather than critical.
But bias certainly does matter in many ways. As many in this audience will be aware, judges in several American states use an AI system called COMPAS to determine whether to grant bail to alleged offenders, and in Wisconsin to help decide the length of a sentence. The system relies on a number of indicators, which do not include race. However, it does take into account where the offender or alleged offender lives, and given the racial distribution of populations in American cities, geography becomes a proxy for race. So a black accused who, given his record, may well not re-offend is more likely to be denied bail than a white man with a comparable record. COMPAS, developed by Northpointe (now Equivant), is proprietary, and Equivant will not divulge how it works. Perhaps they cannot. This is surely unethical.
In the case of Wisconsin v. Loomis, the defendant Eric Loomis was found guilty for his role in a drive-by shooting. Pre-trial, Loomis answered a series of questions that were then entered into COMPAS. The trial judge gave Loomis a long sentence partly because of the "high risk" score the risk-assessment tool assigned him. Loomis challenged his sentence on the ground that he had not been allowed to examine the algorithm. The state supreme court ruled against him, reasoning that knowledge of the algorithm's output was a sufficient level of transparency. A legal matter or an ethical one too? Both, in my view.
This leads to consideration of another important ethical issue, which has legal consequences as well as more general ones, for instance in finance.

2. Explainability

The most popular forms of AI today are based on deep learning via artificial neural networks. It is a characteristic of such systems that they cannot explain how they reach their decisions; nor can their designers or developers. This is known as the "black box" problem, and since the systems continue to learn once trained, the problem persists. It is a principle of the common law that a judge must explain his decisions. If he is relying on a black-box system to make or support a decision, he cannot fulfil that obligation. A legal and an ethical problem.
A comparable problem arises if a bank or mortgage company relies on a black-box AI to decide whether to grant you a loan or a mortgage. If it denies your application and cannot tell you why, you cannot adjust your application to improve your chance of success, for instance by increasing your deposit, because you would not know whether this would meet the system's objections. Many human beings and human institutions refuse to divulge their reasons, but these AI systems cannot do so: a uniquely AI-specific ethical problem. Work is going on to solve it and make such systems transparent, for instance at MIT Lincoln Laboratory, but so far no general solution is available as far as I know.
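One line of work on this problem is model-agnostic probing: since a black box can be queried even if it cannot be inspected, one can perturb each input and observe how the output moves, in the spirit of LIME-style explanation methods. The sketch below uses an invented, deliberately simple loan-scoring function as a stand-in for the opaque model; the weights and variable names are illustrative assumptions, not any real lender's criteria:

```python
def opaque_score(income, deposit, debts):
    """Stand-in for a black-box model: callable, but assume we cannot
    read its internals. The weights here are invented for illustration."""
    return 2.0 * deposit + 1.0 * income - 3.0 * debts

def sensitivity(model, applicant, step=1.0):
    """Estimate each input's local effect on the score by nudging it.

    Returns, for every feature, the change in the model's output when
    that feature alone is increased by `step`.
    """
    base = model(**applicant)
    effects = {}
    for name, value in applicant.items():
        perturbed = dict(applicant, **{name: value + step})
        effects[name] = model(**perturbed) - base
    return effects

applicant = {"income": 40.0, "deposit": 10.0, "debts": 5.0}
print(sensitivity(opaque_score, applicant))
```

For a genuinely non-linear black box the effects would differ from applicant to applicant, which is why such explanations are local: they describe the model's behaviour near one input ("your deposit mattered most"), not globally.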

3. Harmlessness

In 1942 Isaac Asimov published his laws of robotics, later collected in I, Robot. The first was: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." AIs are tools, and any tool can be used for good or ill: a knife can cut bread or stab someone, and AI can be used for the benefit of mankind or to hurt people. The use of AI-driven drones (let alone stray ones wandering around Gatwick airport) as Lethal Autonomous Weapons Systems (LAWS) is already controversial. Many at the UN argue for them to be banned like chemical and biological weapons; some nations, including the UK, argue against. Undoubtedly all the major powers are developing such systems, and a number are probably deploying them already. Could a LAWS abort a mission if its target had entered a hospital or hidden in a group of children? A drone operated by a human could be, and would be, aborted in such circumstances; I doubt whether an AI system could make such a sophisticated judgement. Similarly, would an AI-guided drone be able to decide whether to crash-land in a populated or a less populated area? Possibly, but I doubt it. Unethical? Yes, but ethics is not always the first consideration in military thinking. Definitely an issue in international law.
Secondly, facial recognition, an AI technology, can be and is being used by repressive, authoritarian regimes to facilitate persecution. The most egregious example in the world is the widely publicised use of facial recognition technology by the Chinese government to identify, arrest and incarcerate over a million Uighurs in Xinjiang in Western China for no misdemeanour other than being Uighurs and Muslims. Totally unethical. We were there in 2016 and it pains us to witness it. The one positive note in this sorry tale is that the Trump administration is considering blacklisting five Chinese companies (including Megvii, Zhejiang Dahua Technology Co., and Hangzhou Hikvision Digital Technology Co.) that supply these systems to the Chinese government.
More generally, the ethical issues raised by the use of facial recognition technology are gaining wide notice. It is useful and convenient that my PC recognises my face so that I do not need a password to access the machine; more significantly, the technology has enabled police to identify and arrest elusive criminals, including the suspect in the mass shooting at an Annapolis, MD newspaper office last June. But San Francisco has banned its use by the police and public authorities, and Ed Bridges is suing South Wales Police over its use of the technology without his permission. He will argue that it is an unlawful violation of his privacy and of his rights to free expression and protest, and that it breaches data protection and equality laws.
AI can also be used for criminal purposes: to take over an autonomous vehicle and turn it into a weapon, to attack infrastructure such as telephone networks and power grids, and to gather the data needed to increase the effectiveness and frequency of spear phishing. There are many comparable concerns in the IoT. Obviously unethical, indeed criminal, and the risks need to be assessed in such situations. What this concern emphasises is the crucial importance of cybersecurity in relation to AI, well covered in an excellent report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published in February 2018 by a group of scholars who had met at Oxford University the previous February. (1)
They identified that the growing use of AI systems would lead to changes in the landscape of threats:

Expansion of existing threats. The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labour, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.

Introduction of new threats. New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders. So AIs attacking AIs.

Change to the typical character of threats. We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems.

In response to this changing threat landscape they made four high-level recommendations:

1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. We should actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

4. Responsibility for harm

Timo identified the issues that arise when the use of AI in a medical context goes wrong. Of course this issue is not limited to the medical field, and it is of undoubted concern to the lawyers at this conference. It may be argued that it is a legal rather than an ethical issue, but in my view it is both. It is most commonly cited in relation to autonomous vehicles (AVs), which, like doctors and human drivers, can kill people. As far as AVs are concerned, the UK Parliament passed a law last July, the Automated and Electric Vehicles Act 2018, which has yet to be implemented. It assigns clear responsibility to the insurer when an AV causes harm or death, assuming the AV was insured; if not, responsibility lies with the owner. This should ensure that injured parties are compensated relatively quickly, while investigation of the root cause can take its time. The insurer is entitled under the Act to have recourse later to the manufacturer or developer of a failing component, sub-assembly or software module. The Act also makes provision for, among other things, failure by the owner to install safety-critical software updates issued by the manufacturer, and unauthorised modifications to the software by the user. It does not cover the situation where an AV has been hacked and taken over by a hostile party, as I discussed earlier. Conceptually, however, the Act addresses the absence of a driver who can be held to account. The ethical and legal issues of liability in other domains nevertheless remain open as far as I know, though the EU Commission is working on potential updates to the Product Liability Directive to bring more recent developments like AI within its ambit. Whether and how this will affect us depends, of course, somewhat on Brexit.

5. Effect on employment and the distribution of the benefits of AI

Will AI create new jobs or destroy old ones? Undoubtedly both. What are the ethical implications? Like the industrial revolutions before it, going back to the first, the so-called fourth industrial revolution will first destroy jobs, replacing human operators with faster, cheaper, more efficient machines that do not take holidays, demand a pay rise or fret about Brexit. In due course it will create demand for new functions to be performed by humans that we cannot now even imagine. The problem is the sequence, and the unemployment it will cause in the interim. We cannot estimate the scale of either phenomenon. Some of AI's most ardent advocates say there is nothing to worry about; this is wrong. At the other extreme, some have exaggerated the dangers and predicted an employment Armageddon; this is unhelpful. We should be concerned. To displace people from work without helping them to find new work is unethical and harms society.
What is to be done? Retraining is key. There are jobs that AIs cannot touch; user interface design is an example. Who should fund the retraining: government, corporations, individuals? Probably some combination. There are some excellent examples of corporate endeavours in this area. AT&T's Future Ready initiative is a $1 billion, web-based, multiyear effort that includes online courses; collaborations with Coursera, Udacity and leading universities; and a career centre that allows employees to identify and train for the kinds of jobs the company needs today and will need. By 2020 AT&T will have re-educated 100,000 employees for new jobs with cutting-edge skills. BT has a similar programme, and the government-sponsored Institute of Coding, supported by BT, IBM, Cisco, Microsoft and others as well as 25 universities and my own institute, the BCS, is a very positive initiative in this direction. We cannot sit idly by.
There is another economic and societal ethical issue relating to AI – the risk that the benefits will accrue disproportionately to a privileged few while the costs fall on those lower down the social scale. There is no quick fix for this risk but it is one that policy makers need to have in mind.

6. Is AI impersonating a human unethical?

Famously, Alan Turing devised the Imitation Game, now known as the Turing test: can a machine convince a human being that it is another human being? Until recently no machine had passed the Turing test, though many human beings have failed it. However, at the Google developer conference last year, Sundar Pichai, the CEO, demonstrated Duplex, an AI that convincingly called a beauty parlour and a restaurant to make a hair appointment and a table booking respectively. Neither receptionist realised they were talking to a machine; it was that realistic. This technology is now live and available from Google. Although the audience at the conference applauded the demonstration, the reaction on social media was that it was unethical: failing to disclose that the machine is a machine is unethical. As Karmen Turk has said, the EU High-Level Expert Group has stated that this contravenes one of its principles.

7. Does ethics matter in AI?

Yes, profoundly. If the public concludes that AI, or a given use of AI, is unethical, AI will lose public trust. There are many examples of this happening, with or without scientific justification; GM foods and the Boeing 737 Max are obvious examples. To quote the EU AI High-Level Expert Group: "Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems. Without AI systems – and the human beings behind them – being demonstrably worthy of trust, unwanted consequences may ensue and their uptake might be hindered, preventing the realisation of the potentially vast social and economic benefits that they can bring." They wrote, "To help Europe realise those benefits, our vision is to ensure and scale Trustworthy AI."

The Group identified four ethical principles to which AI should adhere. These are:
(i) Respect for human autonomy
(ii) Prevention of harm
(iii) Fairness
(iv) Explicability
They noted that while many legal obligations reflect ethical principles, adherence to ethical principles goes beyond formal compliance with existing laws. This is a cogent argument.

Ethics is often seen as a bunch of “don’ts”. Certainly, there are unethical threats that need to be countered. But ethics should be seen as a positive. We want to buy from companies that we perceive as ethical. People want to work for ethical employers. Academics want to conduct research in an ethical way. Being ethical should be and increasingly is seen as an important element of an organisation’s stance and strategy. And ethical considerations need to be front of mind in the development and use of AI at every stage, from conception through design and build to deployment.
To conclude, I have considered six ethical issues in relation to AI: bias, explainability, harmlessness, responsibility for harm, the effect on employment and society, and AIs impersonating humans. I have argued that ethics really matters if AI is to be trusted and if the benefits of AI are to accrue to society.
How does this affect the law? Self-evidently any law, including regulation, should be based on ethical principles. Drafting good regulation is difficult in such a fast-moving domain; excessive regulation runs the risk of inhibiting innovation. But the EU has shown itself to be adept at it and should continue to set the standard.
In this context it is encouraging to note that 42 countries have just signed an OECD accord to support a global governance framework for AI (2), the "Recommendation of the Council on Artificial Intelligence – Principles for Responsible Stewardship of Trustworthy AI". It has no force of law, but it is a good start if the signatories act on it.
So for the lawyers here today the message is clear: we must consider ethics at every stage, whether advising clients or developing or deploying AI in our practices or in the administration of justice, and that consideration must extend from the Senior Partner or Chief Executive all the way down the organisation. In all matters relating to AI, ethics must come first.


(1) https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf
(2) https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
