How does social media manipulate us? - Part 2
A dive into Cambridge Analytica
Thank you for taking the time to read this, and I hope you walk away with a new perspective on how we use everyday technology. If you enjoyed this post, please do share it with your network.
In my last blog post I wrote about how social media and Big Tech can categorise you using five traits: Openness to experiences, Conscientiousness, Extroversion, Agreeableness and Neuroticism - together known as the 'OCEAN' model. If this is new to you, I recommend reading my previous post before this one, as otherwise you will be lost for a large part of the post.
So now we know how we are categorised - but what does it actually mean to be manipulated? How is it done? For the rest of this post, I will use Cambridge Analytica, and the activities that led to their scandal, as an example.
Again, like my last post, grab a coffee or something, as this will be a long read.
How did Cambridge Analytica create their profiling model?
Cambridge Analytica collected information on a huge group of users through a quiz called 'My Personality', where users answered some questions and received an overview of what they were like. The results were statements telling them where they fell within the OCEAN model of traits, and they had the option to share their Facebook user profile with the researchers.
By doing so, the users gave the researchers access to their whole Facebook activity: what they liked, when they liked it, who they were friends with, what they commented on, etc. The researchers then made connections between this activity and the users' personality traits - and because so many people shared their Facebook profiles to take the test, Cambridge Analytica's model became very good at categorising you from your Facebook activity alone.
The model was so good, in fact, that from an average of just 68 likes it could predict:
The colour of someone's skin with 95% accuracy
Their sexual orientation with 88% accuracy
Whether they affiliated with the Democratic or Republican party with 85% accuracy
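To make the mechanism less abstract, here is a toy sketch of how likes can be scored against personality traits. Everything in it is invented for illustration - the page names, the weights and the trait scores are hypothetical - whereas the real model would have learned weights like these from millions of quiz results paired with Facebook activity.

```python
# Hypothetical learned weights: how strongly liking a page signals a trait.
# Positive values push the trait score up, negative values push it down.
LIKE_WEIGHTS = {
    "Skydiving Club":  {"openness": 0.8, "neuroticism": -0.3},
    "Knitting Weekly": {"openness": -0.4, "conscientiousness": 0.5},
    "Debate Society":  {"extroversion": 0.6, "agreeableness": -0.2},
}

def score_traits(likes):
    """Sum the weight of every liked page, per OCEAN trait."""
    traits = {"openness": 0.0, "conscientiousness": 0.0,
              "extroversion": 0.0, "agreeableness": 0.0, "neuroticism": 0.0}
    for page in likes:
        for trait, weight in LIKE_WEIGHTS.get(page, {}).items():
            traits[trait] += weight
    return traits

profile = score_traits(["Skydiving Club", "Debate Society"])
# 'openness' and 'extroversion' come out highest for this user
```

Each extra like adds a little more signal, which is why accuracy climbs as the like count grows towards that average of 68.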
How many things have you liked or interacted with (comments, retweets, etc.) on Facebook, Instagram, Twitter or any other social media app? Is it more than 68? If so, chances are they have a very good idea of who you are, how you think and how you behave.
Many of us have public social media profiles. Imagine you had a model that could scan public profiles, look at their likes, profile the owners and then store all those profiles in a database. What would you do with a searchable database like that?
Cambridge Analytica had access to public and private accounts (of those who shared their account activity), which is a significant amount of information that they could have used to profile people. So how did they use this?
Case Study: Ted Cruz’s presidential primary campaign
Instead of guessing and speculating, I'm going to break down how they used this information, based on their own video in which they admit to all the crazy things they did.
US Senator Ted Cruz employed Cambridge Analytica to help him become more popular and win votes in his presidential primary campaign. At the time, not many people knew or cared about Senator Cruz, but with Cambridge Analytica's help he eventually came second in the 2016 presidential primaries, behind Donald Trump. He has even been open to running for President in 2024 thanks to his lasting popularity.
To do this, the campaign used three things: Data Analytics, Behavioural Science and Addressable Ad Tech. Let's dive into each one.
Data Analytics
As already mentioned, Cambridge Analytica had access to a significant amount of data, and this data was analysed to find the most effective messaging.
Demographics - Factual Information
The most obvious factors that affect your behaviour are demographics - the factual information: age, gender, ethnicity, religion, location, income and so on. This is the stuff commonly associated with surveying people and understanding or predicting what someone might do.
For example, when you are designing a product, you might say: 'I want to make shoes for women in their late 20s who earn six figures.' These are widely accepted factors for segmenting people and narrowing down an audience. They are generally easier to get hold of, and as an end user they are easy to grasp - if a product designer told you that you saw an ad because they were targeting women in their 20s, you would have no trouble following their thought process.
Psychographics - Lifestyle Attitudes
The next factor used in the data analysis is psychographics: your attitudes towards things, your lifestyle choices - what products you buy, what car you drive, how often you go travelling. Must you have the latest phone every year, or do you just want something that works, as cheap as possible? Do you wear exclusively designer clothes? Do you treat yourself to a fancy meal once a month?
You can learn a large amount about an individual from this information alone. Have you had a pay rise? What changes did you make when you got it - a nicer place? A nicer car? Do you shop in more expensive places? Are you buying the fancier milk rather than the bog-standard one? These are all individual lifestyle choices, and while this information may now be tracked and collected through social media, it has been collected for years. It can be bought from data brokers who scrape data from websites and other sources - and sometimes the shops themselves are the collectors. What do supermarkets like Tesco get from their loyalty scheme, the Tesco Clubcard? Anyone who's shopped in Tesco knows that the Clubcard is pushed hard.
Although you might be a bit creeped out by the thought that your lifestyle habits can be deduced from how you spend your money (your bank knows you very well), as a consumer you can at least draw the parallels between marketing's conclusions and your own spending habits.
Personality - The OCEAN Model
The final aspect Cambridge Analytica used in their data analytics model is the behavioural factor: your personality information - where you sit within the OCEAN model. This is the information social media sites can work out about you without you realising, sometimes even deducing things about you that you yourself do not know. Because the model and the data collection are so abstract, and because you may not be aware of some of your own behaviours, this is the factor that makes people the most uncomfortable.
But using all three aspects, Cambridge Analytica had a very good idea of who you were, where you lived, what your personality was and how you wanted to live your life. All of this information was fed into a huge AI model to determine how you could be influenced.
Many people don’t appreciate what is meant by Big Data, but it essentially means having access to so much data that you can train an AI model to be fairly accurate.
Imagine you, your family and your friends. Think about all the online activity each person does across social media sites per day - the activity that you see, and the activity that you don't see because it never makes it to your algorithmically curated feed. Now think of all your family's friends and your friends' families. Now expand to your neighbourhood or local area. How many people is that? How much data is that? Now think about your city or town. How about if you include the neighbouring town? County, state, country? Now imagine that activity over one week. One month. One year. Multiple years. That's a lot of information.
Behavioural Science
Once they had all this information, Cambridge Analytica used behavioural science to craft messages specific to your personality, which they would then send out as ads. To understand what that means, let's look at the first example Cambridge Analytica use in the video where they brag about their methodology.
Look at the message below:
There are two signs. The first says 'Public beach ends here', followed by a statement that the land beyond is private property. The second says 'Warning: shark sighted - keep out', in red and yellow, with a silhouette of a shark.
Both messages intend to get the same response from the reader: do not use the beach. However, the first is an informational message - it lets you know that you, a member of the public, do not have the right to use the beach because it is privately owned. Depending on your personality, you may adhere to the sign, or you may choose to ignore it and use the beach anyway.
The second message, however, is more aggressive and strikes fear into you - it is a behavioural message that threatens your sense of safety. There is a literal threat that may deter you from using the beach.
Both messages aim to stop members of the public from using the beach, and therefore have the same intention, but each will be more effective for a particular crowd. For a more conscientious person, the first sign might work, as they are likely to follow rules, law and order. For a more neurotic individual, fear is a more effective communication technique, and the second sign would be better at deterring them from using the beach.
Let’s look at another example:
The example above shows two ads:
The first advert says the message ‘The second amendment isn’t just a right. It’s an insurance policy. Defend the right to bear arms’. There is an image of someone breaking through some glass.
The second advert says the message ‘From father to son, Since the birth of our nation. Defend the second amendment’. There is an image of a father and son in the fields with guns.
Both messages want to encourage you to defend gun ownership. However, depending on who you are and where you sit on the OCEAN model, one message's nuance will resonate with you more than the other.
If you are a highly neurotic and conscientious person, the first message will be the most effective on you. It is rational and fear-based. The idea of potentially being attacked - a negative life experience - makes you pay attention to the message. If you don't have a gun, you won't be able to defend yourself from a burglary, and therefore you will be subject to more negative events in your life. It makes sense for you to have the right to defend yourself.
If you have a low openness to experiences and you're highly agreeable, you would resonate with the second message. It targets years of tradition (the low openness to experiences aspect) and society or community (the highly agreeable aspect). It implies that since the US was founded, fathers have always taught their sons to bear arms. There is a sense of community and history in the message.
These are two very specific examples, and you may feel that neither message would work on you - quite possibly not! They are targeted at very specific people. Cambridge Analytica would have had thousands of specific messages and ads designed and written to target every type of personality - including yours.
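The matching step itself is mechanically simple. Here is a minimal sketch of picking an ad variant based on a personality profile - the trait scores, the selection rule (show the variant written for the user's strongest trait) and the paraphrased ad copy are all simplifications invented for this example.

```python
# Hypothetical ad variants, each written to appeal to one OCEAN trait.
AD_VARIANTS = {
    "neuroticism": "It's not just a right. It's an insurance policy.",
    "agreeableness": "From father to son, since the birth of our nation.",
}

def pick_ad(traits):
    """Show the variant written for the user's most pronounced trait."""
    dominant = max(traits, key=traits.get)
    return AD_VARIANTS.get(dominant, "generic message")

# A profile the model scored as highly neurotic gets the fear-based copy.
anxious_user = {"neuroticism": 0.9, "agreeableness": 0.2}
chosen = pick_ad(anxious_user)
```

With thousands of variants instead of two, every personality type gets the version of the argument most likely to land with them.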
Addressable Ad Tech
The final piece of the puzzle is addressable technology. I have previously written about this in detail, so I won't go into the nitty-gritty here. In short, addressable technology is the ability to hyper-target adverts.
Historically, communications were limited to demographics, but with psychographics and personality now in the mix, you can target very specific groups of people and then automatically show them the specific ad.
Say Cambridge Analytica know the race in Texas is close and want to show you, a Texas resident, an ad promoting a policy Ted Cruz is campaigning on. They have the information to know which appeals you personally are most susceptible to. They can use algorithms to automatically bid for an ad space in your newsfeed for the next time you open Instagram, and then, as you scroll, insert an ad based on your demographics, personality and psychographics. The ad will be very specific to you, and you would be none the wiser.
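The bidding side can be sketched too. Everything below is invented for illustration - the `persuadability` score (a hypothetical output of the profiling model) and the bid formula are simplifications; real ad exchanges run full auctions between many bidders in milliseconds.

```python
def bid_for_slot(profile, base_bid=1.00):
    """Bid more for the users the model thinks the ad will actually sway."""
    persuadability = profile.get("persuadability", 0.0)  # model output, 0 to 1
    return round(base_bid * (1 + persuadability), 2)

# A persuadable swing voter is worth paying more to reach than a voter
# whose mind is already made up.
swing_voter = {"state": "TX", "persuadability": 0.85}
loyal_voter = {"state": "TX", "persuadability": 0.10}

swing_bid = bid_for_slot(swing_voter)  # higher bid
loyal_bid = bid_for_slot(loyal_voter)  # lower bid
```

The point is that the targeting decision and the price paid for your attention are both automated, profile by profile, with no human in the loop.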
Is nuanced messaging a bad thing?
How many times have you tailored a message to persuade someone to do something? Have you ever asked your parents for the same thing in two different ways? How about suggesting a restaurant for dinner with friends? To one friend you might advertise the unlimited drink refills, to another how cheap the place is, and to a third how close the restaurant is to their home.
Any advert you see on social media and the wider internet has very specific wording aimed at tapping into your personality traits to grab your attention and make you act. This stuff works. Spotify has developed a method to work out your personality based on how you organise your music, and has recently bought two of the biggest podcast ad distribution platforms.
We tailor our messages to our audience all the time without even realising it - so is what Cambridge Analytica did, or what Big Tech does, really that bad?
(If you are curious about where you stand in the OCEAN model, here is a fun test to take - the my personality 100-item test.)
If you have a better idea than I do, if I’ve missed out anything or you think I am talking absolute rubbish, feel free to reach out either by commenting on the post, or by emailing me on firstname.lastname@example.org
If you enjoyed this post, subscribe to Tanvir Talks, where once a month I publish a newsletter breaking down the big questions in tech into digestible chunks for you, the average consumer. I also have a podcast where I do the same thing!