It didn’t take long. Just 12 hours after the London Bridge attack, when terrorists killed eight people on a Saturday night last June, Theresa May stepped onto the tarmac of Downing Street and claimed the giants of Silicon Valley should share responsibility for the rise of extremism.
“We cannot allow this ideology the safe space it needs to breed – yet that is precisely what the internet, and the big companies that provide internet-based services provide,” she said.
Something had changed since the PM’s response to the Manchester attack, a few days earlier.
She had suddenly placed the Californian social media companies at the centre of the debate about how to beat extremism.
Her new rhetoric appeared to categorise organisations like Facebook and Google as both partners and potential enemies of the state in the war against extremism.
On Thursday in Davos, seven months on, the prime minister returned to the topic.
In a speech, she said: “Technology companies still need to go further in stepping up to their responsibilities for dealing with harmful and illegal online activity.
"These companies have some of the best brains in the world. They must focus their brightest and best on meeting these fundamental social responsibilities."
But has the PM identified the real enemy, or simply found a distraction from a failure of the government’s first duty, to protect citizens from harm?
In November I was invited into Facebook’s headquarters near San Francisco to see the other side of this debate.
The social network says 99% of the Islamic State or Al Qaeda-related content it removes is taken down before it is flagged by its users, often helped by artificial intelligence.
Monika Bickert, one of the company’s top execs and a former federal prosecutor, revealed Facebook had proactively offered support to British authorities following the terror attacks in the UK last year.
Some counter-terrorism sources are less critical of the internet companies than Downing Street and the Home Office.
Assessing the assistance offered in major investigations, a senior officer recently told me: "Some are bad, more are good, and the good ones are getting better."
This is hardly the picture of malign companies hiding in the shadows that the PM’s rhetoric seems to portray.
But these contrasting outlooks are at the heart of the biggest clash in modern counter terrorism: who is responsible for finding extremist content online?
The social media companies tend to see themselves as ‘platforms’ or ‘communities’ merely enabling a conversation, but not necessarily responsible for what’s said.
Governments are more likely to see them as ‘publishers’ or ‘broadcasters’, just like ITV, which should be entirely accountable for the content it hosts.
The worlds of terror and tech have changed fast since Theresa May became home secretary, when Facebook was just six years old.
Countering terrorism increasingly relies on spotting material that exists on private platforms.
It’s frustrating, perhaps infuriating for investigators.
But make no mistake: despite Theresa May’s complaints, stopping terrorism in the UK is not Mark Zuckerberg’s responsibility, but hers.