
Ethical Strategy for Businesses in AI

Updated: Apr 7, 2023

Theory and Practice

In my short yet intensive four-year study of Philosophy, I've found that one of the biggest dangers with the subject, and especially with ethics, is the potential for things to stay within the theoretical. After all, the purpose of Philosophy is to gain insights into the nature and inner workings of our reality, and to use those insights to improve ourselves and the world around us.

Nowhere is this process of putting philosophical insight into practice more needed than in the field of AI, where the risks of its use by businesses are becoming clearer every day. Thankfully, there is an abundance of insight and advice on how to mitigate these risks; all businesses need to do is put it into practice. The way to do this? Create an ethical strategy for your company: a strategy that steers your business away from the danger zones that surround the technology and provides a safe route towards the right decisions, for yourself, for your customers, and for the world.

Not convinced? Here are two reasons that should do the trick.


(1) Good for its own sake

Firstly, if we want to be ethical people - people who do good things - then the work we do, and the businesses we are part of, must also strive to be ethical. They need to be businesses that add positive value to the world, and that do so with genuine appreciation and consideration for the livelihood and humanity of their customers and end-users.


This is especially the case with AI, as many AI systems emulate human cognitive processes in sophisticated ways. Because of this, if such systems aren't built to function ethically, then the businesses and people who strive to be ethical will start at a disadvantage. Given this, it's vital that the AI systems we create are built with, and built from, our desire to be good.

The creation of an ethical strategy is a manifestation of this moral intent. It indicates that the business truly understands and appreciates the power of AI, and acknowledges the personal and collective responsibility that comes with it. In sum, AI can be good, but for it to be good we must plan, build and use it with this in mind.


(2) Ruin

If the ethical reason alone isn't convincing, then how about commercial ruin? The reason for this ruin, as alluded to in my previous blog, is that the ethicality of AI is no longer a fringe issue, nor an added extra to a company's good business acumen. Over the last few years, we have witnessed various AI systems produce biased and unethical outputs; the algorithms behind PATTERN, Google Photos and PredPol are all testament to this. These issues have occurred to such a degree that society is now acutely aware of the moral implications of using AI, and a failure to account for this awareness will be treated by the public as a fundamental problem. In short, trust needs to be restored.


One example of an organisation that has already suffered for neglecting the ethicality of AI is Clearview AI, a facial recognition company whose technology has been used by thousands of government and law enforcement agencies. Clearview AI was recently fined $9.4 million in the UK for illegally building a database of billions of images scraped from social media and the wider internet. The database allowed Clearview's customers to match their own photos against it and so identify the people photographed. All of this was done without the consent of the individuals concerned, and was therefore a clear and startling breach of privacy.

Today, a search for Clearview AI returns dozens of stories on the scandal, ranging from "Facial recognition: Italian SA fines Clearview AI EUR 20 million" to "Get out of our face, Clearview! – Privacy International". In addition to the $9.4 million fine, the company was banned from selling its database to private businesses or individuals, and has been required to delete much of the data it held on residents of certain countries.


Despite the public and governmental outcry, it isn't clear how Clearview responded to the charges brought against it, nor how it dealt with the fallout. At best, we can take Clearview's press release about its new "Consent Based Product" - published around the time of the scandal - as an implicit acknowledgement that it has work to do on the ethicality of its product. Clearly, then, there is incredible peril attached to failing to use AI ethically, and companies will pay not only financially but also socially for their moral mishaps.


The defence against ruin

In contrast to Clearview AI, Microsoft's response to the controversy surrounding its conversational AI, Tay, can serve as an example of how a company should act after being caught in the middle of an ethical AI disaster. Tay was Microsoft's admirable attempt at a chatbot with conversational intelligence. To test and develop Tay, Microsoft placed the chatbot on Twitter, where it could refine its chatty disposition and provide Twitter users with entertaining conversations.

Unfortunately, what resulted was the transformation of Tay from pleasant AI to repulsive keyboard warrior. As a result of a "coordinated attack" by a group of online users, the AI was tricked into learning and re-expressing wildly offensive and reprehensible beliefs, ranging from the racist to the misogynistic. Microsoft dealt with the issue quickly and effectively, removing Tay from the platform immediately and analysing what went wrong. It wasn't that Tay hadn't been trained to avoid expressing such content; there were simply gaps in that training, and people took advantage of them.


Importantly, in its response to Tay's behaviour, Microsoft was able to fall back on its key AI values of transparency and accountability. What's more, the Tay scandal occurred in 2016, when Responsible AI - as a movement - had nowhere near the support it enjoys today. If Microsoft could have an effective ethical strategy for managing AI in place more than five years ago, then businesses that fail to have one today are unlikely to be shown the same forgiveness.


In sum

If you take anything away from this blog, it should be this: if a business has an ethical strategy for its use of AI, then the inevitable mistakes it makes come with an implicit acknowledgement that it tried to minimise the risks. If a business doesn't have one, then those same inevitable mistakes come with an explicit acknowledgement that it didn't try. For businesses and end-users alike, one outcome is far preferable to the other.

Ultimately, the complexity of AI makes mistakes inevitable. What matters is that businesses have the humility and foresight to plan for this truth. To do so, create an ethical strategy - one focusing on the people, the process, and the technology - and in doing so provide a roadmap for making the right decisions.









