Thinking on the ethics of AI

[Image: Artificial Intelligence. From www.vpnsrus.com, used with permission.]

Artificial Intelligence has been on my mind a lot recently as it invades medicine and the rest of society too. The Australian government is putting effort into developing ethical guidelines around AI, and I have been puzzling over what those should look like.

Central to that are concerns around privacy, which are hardly new but are more and more at the center of technologies that move faster than the regulators can catch up. If we hand over personal information, do we know how it is going to be used? Re-used? Re-sold? Should we be entitled to expect that every use of that information is related to the reason we handed it over in the first place? I think so.

If our personal information has value, then who owns that value? When we get a service for free, it’s often been said that we are the product. But perhaps it’s fair to expect that the provider of that ‘free’ service lets us know who the end users of our information are, and perhaps we should have an option to opt out.

Does the right to be forgotten help you? Much has been made of an expectation that online services shouldn’t hold our information forever – they should allow us to be forgotten when we want to be. Embarrassing selfie from uni days – forgotten! Social media post espousing views you are no longer proud of – forgotten! I fell off my chair when I discovered that Google knew exactly where I was, day in and day out, for several years. But it’s now – forgotten! Perhaps even more critical to the AI debate is the right to correct the information held about you, and to know exactly what information is on file.

So, given all the information being collected, what should we expect before Artificial Intelligence systems start making decisions about us? I’ve got: Fairness, Contestability, Transparency, Privacy and Compliance with the Law. From a society-wide viewpoint it’s interesting to think about net benefit – or as Google famously puts it: “don’t be evil.”

It’s the last of these that is engaging the little bit of my brain that is interested in philosophy, because it really does stir up some ethical thought. If you develop a cool AI that saves you and your customers time and money, then – cool, well done. But what if there are losers in that process? Perhaps some of your customers can’t use it, or get odd, unjust or incorrect responses. Overall you and your customers are happy, but some people get shafted. Do we accept a utilitarian-type conclusion that, overall, things worked out well? Or do we demand a social-justice-type approach where we go the extra yards to look after everyone? Does the answer depend on the service at hand? Or on who the provider is? Generally we’d expect our governments to do work that is inclusive, just and defensible (do you know about the Aussie RoboDebt debacle?). Other firms may be held to a lesser standard, but probably they shouldn’t be. If your bank refuses your loan application, then it seems fair that you should be able to ask ‘why?’ and get an answer that makes sense, is legally based and is correct.
