The UK government has pulled its taxpayer-funded adverts from YouTube after they appeared next to videos featuring extremist content, joining several other high-profile brands including L’Oréal and Marks & Spencer.
Google has since been summoned to the cabinet office “to explain how it will deliver the high quality of service government demands on behalf of the taxpayer”.
“Digital advertising is a cost-effective way for the government to engage millions of people in vital campaigns such as military recruitment and blood donation,” said a government spokeswoman.
“Google is responsible for ensuring the high standards applied to government advertising are adhered to and that adverts do not appear alongside inappropriate content. We have placed a temporary restriction on our YouTube advertising pending reassurances from Google that government messages can be delivered in a safe and appropriate way.”
Google has promised to change its procedures and has issued an apology in a blog post by Chief Business Officer Philipp Schindler entitled “Expanded safeguards for advertisers”. Schindler champions the role advertising has played in paving the way for small businesses and individuals to benefit from Google’s free platform, while also recognising the company’s responsibility to look after those advertisers.
“Thousands of sites are added every day to our ad network, and more than 400 hours of video are uploaded to YouTube every minute. We have a responsibility to protect this vibrant, creative world—from emerging creators to established publishers—even when we don’t always agree with the views being expressed,” writes Schindler.
“But we also have a responsibility to our advertisers who help these publishers and creators thrive. We have strict policies that define where Google ads should appear, and in the vast majority of cases, our policies and tools work as intended. But at times we don’t get it right.”
On behalf of Google, Schindler admits that there were “a number of cases where brands’ ads appeared on content that was not aligned with their values”, conceding that this is “unacceptable”. The company has promised to make it easier for advertisers to exclude their adverts from specific channels and videos, and to raise the bar for the default standard of content that adverts appear alongside.
For many, however, this is not enough. Following the recent terror attack in Westminster, there have been renewed calls for the social media giants Facebook, Google and Twitter to work with police to flag potential terrorist threats and remove extremist content.
The acting chief of the Metropolitan Police, Craig Mackey, who witnessed the savage attack on PC Keith Palmer, said that the incident and those like it across Europe are a “wake-up call” to the technology industry to put its “house in order”, and that the ethical statements released by the social media powerhouses “had to mean something”.
Mackey told the London assembly’s police and crime committee there was a “truly enormous” amount of digital information involved in terror investigations.
“Some of that would be in secure applications; some of that would be in a variety of formats that are more easy to analyse and work with,” he said. “We work hard with the industry to highlight some of the challenges of these very secure applications.
“It’s a challenge when you’re dealing with companies that are global by their very nature because they don’t always operate under the same legal framework as us. But it is something we continually push for.”
Mackey said there was a team within the Met’s special operations that took down material hosted in places the force could access.
Germany, which already has some of the world’s toughest laws against hate speech, has taken this a step further, unveiling a new hate speech bill in an attempt to make social media companies comply.
German Justice Minister Heiko Maas says that these organisations are not doing enough to take down hate rhetoric and dangerous ‘fake news’. The new bill follows a rise in xenophobic and racist posts aimed at migrants and refugees, as well as a recent report of ten attacks a day on migrants in Germany last year.
“We have to increase the pressure on social networks,” said Maas. “Too little illegal content is deleted, it’s not deleted quickly enough and it looks like the operators of social networks aren’t taking their users seriously enough.”
The bill states that social networks must delete or block “obviously illegal” content within 24 hours of it being flagged, or face hefty fines of up to 50 million euros.
Citing a study commissioned by his ministry, Maas says that Twitter and Facebook currently delete only 1 percent and 39 percent, respectively, of content flagged as illegal by their users. Google’s YouTube, he says, is exemplary, deleting 90 percent of flagged illegal content.
Critics of the bill have asked why the government is trying to put liability in the hands of private companies instead of enforcing existing laws. The EU digital commissioner, commenting on the bill, said that there is already legislation to deal with misinformation and bigoted posts.
The trade-off faced by the social media industry continues as companies tighten their rules while attempting to maintain policies of free speech. They have also faced a backlash from the LGBT community over the unwarranted removal of LGBT content by automatic filtering systems.
Google’s acceptance of responsibility represents a step forward in an industry better known for invoking its status as a platform to disclaim control over user-contributed content. The companies still avoid responsibility for policing users, but the old excuse that building algorithms and AI to tackle inappropriate content is too difficult will not hold for much longer, particularly given engineers’ recent success in cracking down on copyright violations. The question remains whether this move will instigate an industry-wide shift.