WASHINGTON - Recent efforts by the governments of Australia and New Zealand to tackle online extremism have renewed the debate over the threat of radicalization on the internet, with some analysts seeing new opportunities for joint action by states and tech giants.
Australian officials earlier this week enacted what they are calling the world's first law to curb online extremism, as authorities ordered five websites to remove extremist content or face prosecution. The offending websites are all based outside Australia, the country's eSafety commission told the Financial Times. The commission is charged with investigating and removing such content.
In neighboring New Zealand, a self-avowed white supremacist in March opened fire at two mosques and gunned down 51 people while livestreaming his actions on Facebook. On Monday, Twitter CEO Jack Dorsey met with Prime Minister Jacinda Ardern in Wellington to discuss what his company can do to help eliminate violent extremist content on its platform.
The meeting was part of Ardern's efforts through the Christchurch Call, a pledge by 18 countries and eight technology companies, made in Paris on May 15, to collaborate to eradicate violent extremist content from the internet.
"It is in fact the prime minister of New Zealand and the Australian movement in the parliament who have stepped up to do something a little more sharp and more defined," said Farah Pandith, former U.S. special representative to the Muslim communities.
Pandith authored the recent book "How We Win: How Cutting-Edge Entrepreneurs, Political Visionaries, Enlightened Business Leaders and Social Media Mavens Can Defeat the Extremist Threat."
"It is too early to tell whether or not those kinds of action are going to make a difference in the rate and the impact of spreading content that radicalizes. But it's going to be a very important space to watch," Pandith told VOA, adding that governments and tech companies have a long way to go in ending this evolving threat.
"As I look at where we are today, 18 years after September 11, and the morphing of the online threat and the more severe and dangerous threat landscape we are in," she said, "it is critical that private sector companies, not just technology companies, look at what they need to do to help defeat the extremist ideology and the capacity for them to spread hate and extremism around the world."
Violent extremist content has become a major concern for governments in recent years as violent ideological groups try to use social media platforms to spread propaganda. That threat became more significant when Islamic State (IS), which emerged in mid-2014, began to use the internet to establish a "virtual caliphate" to lure thousands of supporters and inspire several deadly attacks around the world.
In the past, social media giants - particularly Facebook, Twitter and YouTube - have taken several measures to identify and remove millions of pieces of extremist propaganda.
Facebook reported that it had removed more than 3 million pieces of IS and al-Qaida propaganda in the third quarter of 2018 alone.
Within the first 24 hours of the New Zealand shooting, Facebook said, it removed more than 1.2 million videos of the attack at upload and another 300,000 copies after they were posted.
But governments say the companies need to do more to crack down on extremist content.
In a statement in June at the Group of 20 summit in Osaka, Japan, world leaders pressed social media companies to improve how they root out terrorism and violent content on the internet.
"The internet must not be a safe haven for terrorists to recruit, incite or prepare terrorist acts," the world leaders wrote in their statement, pushing the tech companies to, among other measures, develop technologies that prevent extremist content online.
Some analysts argue that more cooperation between states and tech companies is crucial to combating violent extremist content, particularly as the threat crosses national borders.
Laura Pham, an expert with the New York-based Counter Extremism Project, argued that European countries in particular have made significant progress by enacting transnational laws that target online extremist content.
The European Union in late 2015 established its Internet Forum (EUIF), which aims to bring together EU governments and other stakeholders, such as Europol, and technology companies to counter hate speech and terrorist content.
The EU in mid-2016 moved to establish a Code of Conduct on Countering Illegal Hate Speech Online, whose original signatories were Facebook, Microsoft, Twitter and YouTube. An EU assessment of the code in February showed that tech companies were reviewing 89% of flagged content within 24 hours and removing 72% of the content deemed to be illegal hate speech, compared with rates of 40% and 28%, respectively, when the code was first launched in 2016.
"These efforts show that the EU as a whole in parliament will not stand for the continued proliferation and the spread of extremist and terrorist material online. We will probably see more action from member states and from individual states, but there is a clear public understanding of the potential public safety and security concerns that come with proliferating terrorist material online," Pham told VOA.
Meanwhile, as countries continue their efforts with tech companies to address violent content online, potential risks to free speech and privacy will remain at the core of the debate, said Maura Conway, a professor of international security at Dublin City University in Ireland.
"The role of the internet and social media, in particular, in the case of violent extremism and terrorism was not something that internet companies wished to countenance early in their development, but is certainly an area that they now acknowledge is one in which workable solutions need to be found," Conway told VOA.
Despite those concerns, Conway said a workable solution eventually will need to be found to prevent further exploitation of the internet by hate groups to spread their ideology.