Culture Eats Strategy for Breakfast
Implicit in the title of this blog, and in the legendary phrase coined by Peter Drucker, is the suggestion that culture dictates the behaviours of businesses to a much greater degree than company strategy. This maxim is highly relevant to current questions surrounding ethical AI, and helps explain why ethics is, and should be seen as, important to businesses involved with the technology. So, why is this, and what's sparked the sudden interest in AI ethics?
Well, in short, it seems that Western culture has begun to develop a certain ethical standard concerning emerging technologies: a standard that advocates for AI to be used in accordance with foundational ethical principles that help to mitigate the inevitable risks and blunders of new technologies. To build on this answer, in this article we’ll explore the relationship between AI and ethics, outline the key principles central to the debate, and explain why it is important to embed ethics into the creation of these technologies.
With a bit of luck, this will encourage technologists and executives to begin thinking ethically about both their product and their business, and in doing so produce benefits for the company, the community and the world more generally.
The Proliferation of AI
To say that we, as a society, are in a whirlwind of technological upheaval would be an understatement. AI is being adopted across every professional industry and job function, from Human Resources in the Automotive industry to Risk in the Healthcare industry. To illustrate the extent of this upheaval: in 2021, AI accounted for 40% of the work output in Service Operations within the Financial Services industry. Put simply, AI is here, and it is not going away. The consequence is that the spotlight is firmly fixed on the companies and organisations that wield AI, and its beam is very unlikely to dim. Developing within this spotlight is a newfound public expectation concerning the ways in which AI is created and used. More than ever, people are concerned about the benefits and safety of applying these technologies in the real world. And given that under 50% of surveyed consumers said that they trusted their interactions with AI systems, these expectations are not yet being properly met.
Irresponsible Tech
The UK Post Office scandal is a prime example of why consumers distrust businesses and their technology. The scandal emerged from evidence of errors in the Post Office’s accounting software, Horizon, which was used to track the organisation’s finances. Due to these errors, over 700 branch managers were wrongly convicted of theft, with some serving prison time and many facing personal ruin. It took 19 years for these errors to be acknowledged, convictions overturned, and compensation paid. What is especially important about this example is the Post Office’s failure to accept that the software could be responsible for the financial shortfalls that led to the wrongful convictions. One statement in particular, given by a judge at the Court of Appeal, clearly captures this failure of accountability: “Defendants were prosecuted, convicted and sentenced on the basis that the Horizon data must be correct [italics added]”.
For many years, the Post Office publicly rejected the idea that the technology was at fault, and as such placed the burden of proof on the accused. Thankfully, after significant trialling, it was proved that the software was responsible for the financial shortfalls and that the postmasters were, in fact, innocent. Alongside massive reputational damage and an ongoing investigation, the consequence of this scandal for the Post Office was a whopping £57.5m pay-out in damages to the wrongfully accused. While, 20 years on, the scandal is not yet over, it continues to serve as a clear example of what not to do as a business when things go wrong with your technology. Sometimes it really is the technology that is responsible for the problems discovered, and this is something the company should acknowledge and the public should be made aware of.
Understanding Ethics
Thankfully, in the present day, many business leaders are aware of what is expected of them, with over 79% of CEOs stating that they are now prepared to act on issues relating to AI ethics. Given this, it’ll be useful to briefly consider what is meant by ethics, and to outline how an understanding and appreciation of its function can support the positive growth of the businesses that deploy AI. The word ‘ethics’ derives from the Greek term ‘ethos’, which roughly translates to ‘character’ or ‘nature’ in English. In Ancient Greek philosophy, the term was frequently discussed in relation to the ‘good’, in an attempt to identify what it means to be a good human being. And so, while there are many ways to answer the question of what is meant by ethics, one response is to say that, at its core, ethics is a framework for identifying how we should live. It functions as a tool that we use to map, set and strive towards the ideal of any situation.
Understanding AI Ethics
In the case of AI ethics, the currently mapped and set ideal for how to act is represented by several foundational principles: transparency, accountability, safety, fairness and privacy. These are the principles driving change in the development of AI systems, and they are doing so to good effect. In practice, the principles provide the public with clarity on how AI functions and behaves, and they also encourage companies to hold their hands up when things go wrong. It’s important to note that while the listed principles are the most commonly referenced in the AI space, they aren’t the only ones relevant to the debate. Discussion will continue on whether other principles can better inform the ethics of AI, and in doing so reorientate the developmental trajectory of the technology. In any case, it is these principles that represent the current state of AI ethics, and more importantly, they represent the ethical standards that have been set by society. The public expects companies to uphold these principles, and that is why embedding ethics into AI development is so vital.
The Wrap Up
In short, ethics is a must for any business. If a company wants to place itself within the societal system, and in doing so reap the commercial rewards, it needs to acknowledge the cultural expectations currently in operation. In the context of the AI sector, this means embedding ethics into the production and use of the technology. One way to be sure of this is to create an ethical framework that demonstrates a genuine attempt to meet these cultural expectations, and in doing so supports the efforts being made to make AI and emerging technologies a set of safe and useful tools.