
Why did sales of parsnips suddenly soar in Eeklo, Belgium, in the last months of 2018? The answer lies in targeted advertisements on Facebook. That’s right: with a few clicks, you can tell Facebook which of its almost 1.5 billion users worldwide will see your ad. Including the 21,000 inhabitants of Eeklo in East Flanders. Or, even more precisely: women in Eeklo who like to watch the TV programme Dagelijkse Kost and are interested in healthy food.
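Conceptually, such a campaign boils down to a small targeting specification that a user profile either matches or doesn’t. The sketch below is purely illustrative Python (the field names are assumptions for illustration, not actual Facebook Marketing API identifiers) showing how the Eeklo criteria could be expressed:

```python
# Illustrative sketch of an ad-targeting specification in the spirit of
# the Eeklo parsnip campaign. All field names are hypothetical.

def matches(profile, spec):
    """Return True if a user profile satisfies every targeting criterion."""
    return (
        profile["city"] == spec["city"]
        and profile["gender"] == spec["gender"]
        and spec["interests"].issubset(profile["interests"])
    )

targeting_spec = {
    "city": "Eeklo",
    "gender": "female",
    "interests": {"Dagelijkse Kost", "healthy food"},
}

user = {
    "city": "Eeklo",
    "gender": "female",
    "interests": {"Dagelijkse Kost", "healthy food", "gardening"},
}

print(matches(user, targeting_spec))  # True for this profile
```

Any profile that fails even one criterion simply never sees the ad, which is what makes this kind of targeting invisible to everyone outside the selected group.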

With this stunt, the makers of the Belgian TV programme Facebook and I wanted to demonstrate the platform’s impact on the privacy of its users. They wanted to test whether it was possible to influence people with advertisements. The ads they made were fake: pseudoscience touting a supposed slimming effect of parsnips. It worked, because in three local supermarkets in the town sales increased by 16, 300 and 436 percent respectively.

This is a ‘playful’ example that shows how susceptible people are to influence via Facebook. Such influencing is certainly not a new phenomenon, but how far can you go? As a parsnip farmer, can you lie to your potential customers? That surely cannot be the intention, but you can always say sorry – entirely in the style of Google – when people find out you don’t quite follow the rules.

When is it manipulation?

And are you doing something wrong if you target ads – without dubious health claims – at a specific group? Aren’t you simply doing marketing? Where is the boundary? When does steering turn into manipulation?

Judith Möller specialises in communication science and is doing PhD research on personalised communication at the University of Amsterdam. According to her, the chance of manipulation increases because of all the personal data that is collected about us: “Look at Cambridge Analytica, which used personal data to identify people’s personality traits. With those, they predicted what kinds of messages those people could be persuaded with. We have seen the effect this has had in England and the United States, and that is something we should all be concerned about.”

She says there is no doubt that people are being influenced. But exactly how that works, science cannot (yet) say. “A political advertisement alone doesn’t convince people. The process is much more complex. It depends on the person, the situation and the context in which you receive a message.”

She herself prefers to receive information in the morning and is more likely to accept something if this is substantiated by many statistics. “That’s how it works for me, but it’s different for everyone. This is only a small part of the process,” explains Möller.

Politics as usual or manipulation?

A political message is always coloured; you can agree with such a message, or argue why you disagree. According to political philosopher Bart Engelen, affiliated with Tilburg University, it becomes problematic when the political message bypasses rational thinking. Engelen: “Manipulation skips our rational capacities by using certain colours or images that our brains are known to react to. It no longer has anything to do with arguments. Moreover, people often do not know that it is happening; it happens behind their backs. Certainly in a democracy this is dangerous: you are being steered in one direction without realising it. It becomes completely reprehensible when someone deliberately spreads false information in order to achieve their goal.”

Then there is the enormous amount of personal data that Facebook, or parties like Cambridge Analytica, use to find out how you are wired. Engelen thinks people are not protected enough against this. “In real life, I can try to ‘press’ all kinds of buttons with my girlfriend to get her to do what I want – apart from the fact that I would be a terrible partner – but that only gets you so far. Online, ads are designed so that if one doesn’t work, they simply keep trying until the right ‘button’ is found. All this is possible because of the amount of personal data that companies hold. The general public is not sufficiently aware of what is being done with that data.”

Computer science & Fake News

Computer science can help here, says Claude Castelluccia, head of the privacy group at Inria, the French national research institute for computer science and automation. Castelluccia worked as a security expert for about twenty years. Together with other scientists, he developed a model that makes the world of online advertising far more transparent for internet users, so they can see which ad brokers are tracking their data and choose to limit that tracking. “With these kinds of tools, you get more insight and knowledge. That’s a good weapon against manipulation, but it doesn’t solve the problem,” says Castelluccia. He too sees plenty of ‘grey’ areas on the internet. Besides being constantly exposed to advertisements and lures, internet users are also badgered into agreeing to things they don’t really want. Castelluccia: “Look at what Instagram does: you get two options in the privacy settings, Yes or Not Now. No is not listed. Such a message comes back every time until you go crazy and click Yes just to get rid of it. That’s on the edge, but is it real manipulation?”
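The ‘Yes or Not Now’ pattern Castelluccia describes can be captured in a few lines: because a permanent ‘No’ is missing, every session simply re-asks the question until the user gives in. A minimal sketch (hypothetical, not Instagram’s actual code):

```python
# Minimal sketch of the "Yes / Not Now" dark pattern: there is no
# permanent "No", so the prompt reappears until the user gives in.

def consent_prompt(user_choices):
    """Re-ask until the user answers 'yes'; 'not now' only postpones."""
    asked = 0
    for choice in user_choices:  # one choice per session/visit
        asked += 1
        if choice == "yes":
            return {"consented": True, "times_asked": asked}
        # 'not now' is the only alternative – the question will return
    return {"consented": False, "times_asked": asked}

# A user who declines twice and then gives in out of fatigue:
print(consent_prompt(["not now", "not now", "yes"]))
# {'consented': True, 'times_asked': 3}
```

The design choice that makes this ‘on the edge’ is that declining costs the user effort on every visit, while agreeing costs effort only once.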

Calling a spade a spade is, according to Fanny Hidvegi, where much of the effort has gone lately. Hidvegi works for Access Now, an organisation that defends people’s digital rights all over the world, and she is one of 52 experts who advise the European Union on rules, laws and ethical issues around the implementation of artificial intelligence. “To really achieve a good solution you need a very clear view of what the problem is. That is why we must stop using the term fake news. Research shows that people have very different ideas about what it means: some call bad journalism fake news, others label satire that way, and paid journalism, advertisements and incorrect information all turn up under the term as well. In the EU context we now talk about disinformation.”

Code of practice

Last September, the European Union agreed on a code of practice with social networks, tech companies and advertisers to combat disinformation; the first evaluation reports were published a few days ago. According to Hidvegi, these are good first steps, but large advertisers should also join in, because they too are part of this data ecosystem. “It also helps that people are becoming more aware of targeting. Look at Who Targets Me?, which gives users more insight into how targeting works,” explains Hidvegi.

Who Targets Me? is a browser extension that shows you, based on your Facebook use, what data led to you being shown a particular political advertisement. The aim of the initiative is to give people more insight into the ways they are ‘targeted’ in a political campaign and what that targeting is based on. Hidvegi argues for more such initiatives: “They make people more alert to the dangers.”