Facebook is working with the Metropolitan Police to improve the social network's ability to detect live streams of terrorist attacks and potentially alert officers to an attack sooner.
Facebook will provide officers at the Met's firearms training centres with body cameras, in a bid to help its artificial intelligence identify videos of real-life first-person shootings more quickly and accurately.
Facebook came under fire in March for the spread of a live-streamed video of the shootings at two mosques in Christchurch, New Zealand, which left 51 people dead.
The video was viewed fewer than 200 times during its live broadcast and was watched about 4,000 times in total before being removed.
Facebook largely relies on AI to spot violating content and remove it as quickly as possible, but in the case of the Christchurch attack it says it simply did not have enough first-person footage of violent events for the system to match against. It has therefore approached the Met to gather more of the imagery needed to train its machine learning tools.
The global effort is part of a wider clampdown on real-world harm manifesting on social media, with the Home Office sharing the footage with other technology companies so they can develop similar detection tools.
Facebook says it has banned more than 200 white supremacist organisations from its platform and, over the past two years, removed more than 26 million pieces of content related to global terrorist groups such as Isis and al-Qaeda.
However, the company warns that it must stay ahead of bad actors who will continue to try new tactics.
In May, the social network - along with Amazon, Google, Microsoft and Twitter - agreed to a nine-point plan of action, known as the Christchurch Call to Action, following a meeting of world leaders and tech firms in Paris.