The European Commission has given internet companies including Google, Facebook and Twitter two months to demonstrate progress in taking down extremist content, or face official legislation.
The commission's recommendation identified a set of measures that it says would "stem the uploading and sharing of terrorist propaganda online".
These included a one-hour rule for referrals: "Considering that terrorist content is particularly harmful in the first hours of its appearance online, companies should as a general rule remove such content within one hour of its flagging by law enforcement authorities and Europol".
To achieve this, the European Union's governing body expects these companies to put dedicated mechanisms in place for submitting and following up on referrals from competent authorities, as well as from Europol's Internet Referral Unit.
The commission also expects internet companies to use proactive measures, including automated ways of detecting and removing or disabling terrorist content to stop it from reappearing once it has been removed.
It also expects these companies to put the necessary safeguards in place, including a human review step before content is removed, to avoid taking down material that is not illegal.
The commission warned the online platforms that it would "closely monitor" their actions to determine if additional steps, including legislation, were required.
A spokesperson for Facebook said: "We share the goal of the European Commission to fight all forms of illegal content. There is no place for hate speech or content that promotes violence or terrorism on Facebook. As the latest figures show, we have already made good progress removing various forms of illegal content. We continue to work hard to remove hate speech and terrorist content while making sure that Facebook remains a platform for all ideas."
When contacted for comment, Twitter referred to a statement from Edima, the European trade association representing online platforms and other innovative businesses. The association outlined its "dismay" that the commission did not choose to engage in "crucial dialogues and fact-finding discussions" before issuing the recommendation.
The news follows Bank of America's recent announcement that it intends to hire a "brand safety officer", whose sole task will be to ensure the company's advertising does not appear alongside undesirable content online.
A version of this article first appeared on Third Sector's sister publication Campaign.