Ethics in AI: The Hidden Cost
The dark side that no one talks about
Thank you for taking the time to read this, and I hope you walk away with a new perspective on how we use everyday technology. If you enjoyed this post, please do share it with your network.
Artificial Intelligence, or AI, is the latest technology being used to capitalise on the general consumer.
The goal of AI is to carry out functions or activities that a human would do, but arguably with fewer mistakes and much quicker decisions - hence the term 'Artificial Intelligence'. In a literal sense, you are artificially adding intelligence to an otherwise inanimate machine, which can then find patterns or produce outputs that would normally take a human a long time.
This is the thing we need to remember about AI - it's not doing anything that humans, in theory, can't do; it just does it significantly more efficiently. It has many practical uses, from medicine and robotics to more consumer-friendly voice assistants. It is near impossible and very impractical to avoid AI decisions in our daily lives. AI makes many life-changing decisions for people on a daily basis, and as a result there have recently been calls for regulation of AI.
AI, in my opinion, is for the greater good in the long term. It's enabling the human race to achieve so much in so little time, and I can only imagine what else we will unlock with it in the future. However, developing AI requires large investment and very intelligent people working together. The main bodies funding this are the big players like Google, Amazon, Microsoft, Apple and Facebook (among others). This isn't necessarily bad, assuming they can be trusted to self-regulate. Recently, Google fired a very high-profile AI researcher, Timnit Gebru, after her paper looked into the ethics of AI and suggested that Google was not carrying out its AI work in the most ethical manner. This naturally raised many, many questions within the AI community.
In this post, I'll be exploring one of the areas of Timnit Gebru's paper: the environmental costs of AI.
I'd like to note that I, obviously, am not an expert in AI and have nowhere near the experience in the field of Timnit Gebru or her peers, but it's something I think everyone should have some knowledge of; after all, it does and will literally dictate how you live your life in the developed world. In this series I will try to shed some light on the environmental challenges of AI and give you a starting point should you wish to delve deeper into the subject.
So let's jump in.
We use AI literally everyday
What do you do if you want to find out more about the song 'Despacito'? Chances are, you will take out your phone, type your query into a search engine (which statistics say is most likely Google), and search for what you are looking for. If you search 'Despacito', you will be shown the music video, a link to the lyrics and maybe the Wikipedia page, as well as (assuming this is a Google search) other common questions, such as 'Why is Despacito so famous?'.
I think we really take for granted that when we search for something, 99.999999% of the time we get exactly what we want. The thing that enables this to happen is AI. The fact that you can search 'Despacito' and the song comes up, that it knows you are most likely looking for the song, and that it suggests related questions, is all the result of AI. But this stuff also requires power.
Did you know that when you search 'Despacito' on Google, you activate 6-8 different data centres? These data centres (physical buildings where people store and run their applications), which run products like Google Search, consume about 2% of the whole world's electricity usage. 2%. For a bit more context, in 2014, US data centres alone used the same amount of electricity as 6.4 million American homes - that's about 5% of all American homes.
Now I'm not here to make you feel guilty about the dumb Google searches you do in incognito. My point is that it takes a substantial amount of energy to run the data centres that let us use these sophisticated AI algorithms. What requires even more energy is training the algorithms themselves. To really appreciate why so much energy is used, we need to understand what we mean by 'training AI'. What is actually happening when someone says that they 'trained' AI?
What do we mean by 'training AI'?
When we talk about AI, we are talking about giving a combination of algorithms some inputs (searching 'Despacito') and it giving an output (Google search results). An algorithm is a set of rules, generally part of a mathematical model, for a computer to follow.
So how do we come up with these rules?
This is best explained if we use a more tangible example, so I will use the concept of creating an AI model that can predict house prices. To simplify the concept, the way an AI model is created is as follows:
1. You have a problem - You want to know, given a certain set of features (like whether the house has a driveway, how many rooms it has, etc.), how much a house is worth
2. You gather the data - You collect as much information as you can about houses that were sold and all the features they had
3. You train the model - You slap the inputs from your dataset (the house features) into relevant algorithms multiple times, until the output (house price) closely matches what you would expect in real life from the data you already have
4. Your model is complete - You can now use it to predict house prices
Obviously this is very simplified, but that's the general gist.
But at the end of the day, an AI model goes through those 4 steps I've outlined. Steps 2 and 3 are the most time-consuming parts, and also the ones that consume the most energy.
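To make those 4 steps a bit more concrete, here's a tiny sketch in Python with completely made-up house data. This is nothing like how a real model would be built - it's just to show the flow from problem to data to training to prediction:

```python
# Step 1 - the problem: predict a house price from its features.

# Step 2 - gather the data: houses we already know the sale price of.
# Each entry: (has_driveway, bedrooms, has_garden) -> price in £
dataset = [
    ((1, 2, 1), 100_000),
    ((0, 3, 0), 120_000),
    ((1, 4, 1), 180_000),
    ((0, 1, 0), 60_000),
]

# Step 3 - "train": here we just fit a crude price-per-bedroom rate
# by averaging over the houses we have data for.
rate = sum(price / features[1] for features, price in dataset) / len(dataset)

# Step 4 - the model is complete: use it to predict a new house's price.
def predict(features):
    return rate * features[1]  # bedrooms only, to keep it simple

print(f"Predicted price: £{predict((1, 3, 1)):,.0f}")
```

A real model would use all the features and a proper fitting algorithm, but the shape of the process is the same.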
Training is a long and repetitive process
Let's look at stage 3 a little closer. When you approach a door, you push it open with a strength you instinctively choose. If you push too hard, you smash the door open, and you're a little embarrassed. Silly you. You think, "Next time, I'll push a little less hard". Conversely, if the door is actually heavier than you expected, you push harder, and the next time you come to that door, you know you need to push harder. This is literally what happens in AI 'training'. You trained yourself to open that specific door. Now let's look at the algorithm example.
Say you had a house with a driveway, 2 bedrooms and a garden, and that house is worth £100,000. You tell the algorithm, 'Hey, here are 3 inputs: it has a driveway, 2 bedrooms and a garden, and whatever maths you're doing, the output should be £100,000'. At first, the algorithm will do some more or less random calculations, and let's say it comes out with £80,000. At this point, the algorithm looks at the number it was supposed to produce, £100,000, and works out how far off it was. Then it goes, 'Oh wait! I need to change this bit of my calculation so I'll be closer next time'.
You repeat this process around 10,000 times. Each time, your AI model does some calculations and gives out a value, checks how wrong it was, and then tries again. Each time, you use some computational energy to do the calculations. For a simple model like this one, with 3 inputs and 1 output, these 10,000 iterations are done near instantly, so it's no big deal. The electricity used is negligible.
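The guess-check-adjust loop above can be sketched in a few lines of Python. The numbers are made up, and the update rule is a bare-bones version of what's known as gradient descent - real training is far more involved, but the repetition is the same:

```python
# One made-up house: (driveway, bedrooms, garden) -> should output £100,000.
features = [1.0, 2.0, 1.0]
target = 100_000.0

weights = [1.0, 1.0, 1.0]  # the algorithm's arbitrary starting guess
learning_rate = 0.01

for step in range(10_000):
    # Guess: a weighted sum of the inputs.
    guess = sum(w * x for w, x in zip(weights, features))
    # Check: how far off was the guess?
    error = guess - target
    # Adjust: nudge each weight so the next guess is less wrong.
    weights = [w - learning_rate * error * x for w, x in zip(weights, features)]

final = sum(w * x for w, x in zip(weights, features))
print(f"Final guess: £{final:,.0f}")  # converges on £100,000
```

Every pass through that loop costs a little computation - trivial here, but not when the model has millions of inputs.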
But a search engine requires way more inputs than that to know what you're talking about. About 110 million or so. These algorithms are trained on multiple specialised chips for days, and they are typically trained multiple times!
So what can we do?
Training one of the largest AI models used by the big tech companies can emit as much carbon as five cars over their whole lifetimes. This is a lot.
We can't escape AI - it's impractical at this stage. But what can we do to help us become more environmentally friendly?
A standardised metric needs to be devised to measure the power usage of algorithms
At the moment, there isn't a metric to help researchers determine which algorithms/models use more power than others to train.
Most things have measurable metrics. For example, food has calories. You can look at any food and determine its energy content in calories, as well as a breakdown of how much protein, carbohydrates and fat it has. This makes it easier, as a consumer, to determine whether a food choice fits our lifestyle.
Effort will be required to baseline and standardise the metrics used to judge how much energy is required to train an algorithm. This will be harder than you might think, as every country has its own electrical grid infrastructure and generates electricity in different ways - but that doesn't mean we shouldn't try!
When a new model is suggested for wider use, it will have many parameters that can be changed to help achieve a desired output. If a researcher knew from the start how long it takes to train a model, as well as how sensitive a specific parameter is (that is, how much the parameter contributes to the output value), then it would be easier to make a judgement on the environmental impact.
For example, if you know a model takes 24 hours to train, and the parameter you want to change isn't very sensitive - say the accuracy will increase by only 0.05% - will it be worth it? Does your specific project really need that extra 0.05% accuracy?
In many cases, you might decide that, no, this isn't worth the energy required to retrain the model given the output you want. Information like this would be the first step towards helping researchers become more environmentally conscious.
The reporting of training time and parameter sensitivity will need to be standardised. Training a model on a faster computer reduces training time, so a default baseline specification will need to be agreed on for reporting times, so that researchers know the reported times are based on the same computer specification. A similar exercise will be needed for parameter sensitivity.
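To show the sort of arithmetic a standardised metric would enable, here's a rough sketch. Every number in it is hypothetical, purely for illustration:

```python
def training_energy_kwh(hours, chip_watts, num_chips):
    """Energy used by one training run, in kilowatt-hours."""
    return hours * chip_watts * num_chips / 1000

# Hypothetical figures: a 24-hour run on 8 chips drawing 300 W each.
one_run = training_energy_kwh(hours=24, chip_watts=300, num_chips=8)

accuracy_gain = 0.05  # percentage points, as in the example above
print(f"{one_run:.1f} kWh for a {accuracy_gain}% accuracy gain - worth it?")
```

With numbers like these reported against an agreed baseline, a researcher could weigh the energy cost of one more training run against the accuracy it buys.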
Researchers should prioritise computationally efficient hardware and algorithms
Some computer chips and algorithms are fundamentally more efficient than others. For various design reasons, they get the job done faster and with less energy.
Models are normally trained on graphics cards, which are designed to draw things on a screen. These cards are very good at crunching numbers quickly, but their primary purpose is drawing pictures, not training algorithms. Researchers should be encouraged to use more specialised and efficient hardware to train their models, as this will reduce energy use.
Google uses a chip called the Tensor Processing Unit (TPU), which is designed specifically for machine learning and AI training. So efficient hardware is out there, but more should be developed.
Use countries with cleaner energy production
This is probably the most obvious one of them all. If you train your model on a server in a country that uses renewable energy to run its data centres, like Finland, then you are doing far less damage to the environment. It is also worth favouring a colder country (also Finland) - a large share of the energy goes into cooling the servers so they can operate.
AI is becoming something exclusively for the rich
While researching this article, the recurring theme I came across was the role of humans in this process - a topic also raised by researchers. At every stage of the AI supply chain, there is some sort of exploitation of humans: from the forced labour in mining cobalt for the chips and processors (which also gives off greenhouse gases), to the electricity burnt throughout the world to power the data centres (emitting large amounts of carbon and damaging local air), to the harvesting and mining of everyday users' personal data (violating our privacy).
In addition, we’ve reached a point where, if you want to do academic research in AI, you need a large amount of resources just to get started. This is making it harder for academics around the world to get into AI research. Research is being dictated by a tech industry that has a vested interest in the technology.
Having different perspectives on AI from all walks of life is vital to the development and advancement of the field. Students and academics all around the world should have the resources to produce paper-worthy research in AI, not just industry researchers.
With AI so prevalent in everything in life, I think it is about time to start looking at the supply chain and ask ourselves if we are ok with the final product.
I sure am not ok with it. And we better speak up before it’s too late.
If you have a better idea than I do, if I’ve missed out anything or you think I am talking absolute rubbish, feel free to reach out either by commenting on the post, or by emailing me on firstname.lastname@example.org
If you enjoyed this post, subscribe to Tanvir Talks, where once a month I publish a newsletter breaking down the big questions asked in tech into digestible chunks for you, the average consumer.