
Cyber Security - Vulnerability of Democracy


2021 is just a few days old, but we are already deep inside a world where the political news and developments around us leave many speechless and even more of us unsure where all of this is heading. And those who hope and pray for a better life are even more scared of what may come their way.

Lucky are those of us who are "apolitical", one could believe. But the reality is that no one on this planet can afford any longer to be non-political or 'apolitical'.

We can no longer afford to be apolitical, because the core foundations of democracy, free speech and human rights are under threat. Those foundations are also about the content we create and share, because the internet has become the environment we live in; its misuse is therefore a clear and present danger to our society and our communities, since the internet is also the source of the "knowledge" we share with one another. Many of us use and depend on it to steer our daily lives, for pleasure, business or education. Moreover, we will become even more dependent on internet technology with every day to come, unless we choose to live in the "wide open" of rural, hard-to-reach areas where we keep everything away from us, including the internet and any information that comes with it.

We also need to take into consideration the alarming fact that social media manipulation campaigns now operate in almost 100 countries, with a clear tendency to grow, and that they have been fully industrialized by media outlets, PR firms and political parties (*).

If we look at the world wide web, one key question becomes more and more critical to ask in the right context, namely:

- Who owns the internet, and how can we ensure that the internet (and social media) will not become the biggest enemy of peace and of our societies?

Networks or Networth?  

To resolve this challenge, we first need to look at the 'structure' of the internet. We need to understand who owns and controls that structure, the infrastructure through which content is created and spread, whether news or hate speech, and the actions that can then cause destruction, just as we can see in the developments unfolding in the USofA at this very moment, still in motion.

We all need to become more aware of how we move in and use the internet (and social media alike), and how actively we have to defend it. The internet entails not only our future and the way we communicate with each other but also how we (should) respect each other, and it becomes a vivid and present danger to all of us when misused with the intent to harm or to create divisiveness.

The internet has huge power to influence our actions, our mental sanity, in fact our humanity, so it is not just about our thinking and beliefs. Right now it looks as if those who are willing to distribute lies and false claims are out to destroy what we have achieved so far in our democratic environments: the right to speak and, in an ideal scenario, to accept controversial counter-opinions without resorting to outright lies. Unfortunately, there is interest and a strong will among many ruthless, dark powers to dissolve our freedom in every way and form: by manipulating public discontent and fear while successfully mixing disinformation with voter anxiety to smear opponents and those with other opinions. The internet is more than just social media, of course, but both channels can do huge harm to all of us, turn lies into hatred against peace and destroy our society from within!

The thoughts below take all of this into consideration and explain the 'technical uncertainties and legal liabilities', basically how the DNA of the internet currently fails the higher purpose and good, and how and why it urgently has to be restructured. The examples below spotlight just a tiny bit of the bigger picture, a picture so big that we can hardly see it in its entire scope. It is truly important to question and understand the structure and policies the internet operates on. We need to understand who controls what and when, and how vulnerable all these "players" really are. And we need to make our voices heard that we care; that we need a change in our defence strategy, a better "grip" and the legal ability to address how the internet is influencing our thoughts, our lives, our future. To make this clear, let's start with a retrospective view of some (violent) incidents (in the USofA) that have been grave to our understanding and to our perception in general.

Part 1

The power of pictures and storytelling is much more than just content!

On Saturday, August 3, 2019, a gunman opened fire in a Walmart in El Paso, Texas, killing 22 people and wounding 27 before he was taken into custody by police. As news of the attack spread, so did a white supremacist manifesto, allegedly written by the shooter and uploaded hours before the shooting to an anonymous forum, 8chan. This document was archived and reproduced, amplified on other message boards and social media, and eventually reported in the press. This was the third mass shooting linked to the extremist haven 8chan in six months, and followed the same pattern as the synagogue shooting in Poway, California, in April and the Christchurch, New Zealand, mosque shootings in March: post a racist screed to 8chan; attack a targeted population; and influence national debates about race and nation.

What will it take to break this circuit, where white supremacists see that violence is rewarded with amplification and infamy? While the answer is not straightforward, there are technical and ethical actions available. And those need a new form of united international jurisdiction!

After the white supremacist car attack in Charlottesville, Virginia, in 2017, platform companies such as Facebook, Twitter and YouTube began to awaken to the fact that platforms are more than just a reservoir of content. Platforms are part of the battleground over our hearts and minds, and they must coordinate to stop the amplification of white supremacy or political repression across their various services. Advocacy groups, such as Change the Terms, are a good first step in guiding platform companies to standardize and enforce content moderation policies, e.g. on hate speech.

But what happens to content that does not originate on major platforms? How should websites with extremist and white supremacist content be held to account, at the same time that social media platforms are weaponized to amplify hateful content?

Following the El Paso attack, the original founder of 8chan repeatedly stated that he believed the site should be shut down, but he is no longer the owner! The long-term failure to moderate 8chan led to a culture that encouraged mass violence, harassment and other depraved behaviour. Coupled with a deep commitment to anonymity, the current owner of 8chan resists moderation on principle, and tracing content back to its original posters is nearly impossible. The more heinous the content, the more it circulates. 8chan, among other extremist websites, also contributed to the organization of the Unite the Right rally in Charlottesville, where Heather Heyer was murdered in the 2017 car attack and where many others were injured.

In the wake of Charlottesville, corporations grappled with the role they played in supporting white supremacists organizing online. After the attack in Charlottesville and another later in Pittsburgh in October 2018, in which a gunman opened fire on the Tree of Life synagogue, there was a wave of deplatforming and corporate denial of service, spanning cloud service companies, domain registrars, app stores and payment servicers. While some debate the cause and consequences of deplatforming specific far-right individuals on social media platforms, we need to know more about how to remove and limit the spread of extremist and white supremacist websites. And we need a proper, international body that has the power and valid jurisdiction that enables us to do so.

Researchers also want to understand the responsibility of technology corporations that act as the infrastructure allowing extremists to connect to one another and to incite violence. Corporate decision making is now serving as large-scale content moderation in times of crisis, but is corporate denial of service a sustainable way to mitigate white supremacists organizing online?

On August 5, 2019, one day after two mass shootings rocked the nation, Cloudflare, a content delivery network, announced the termination of service for 8chan via a blog post written by CEO Matthew Prince (2019). The decision came after years of pressure from activists. “Cloudflare is not a government,” writes Prince, stating that his company’s success in the space “does not give us the political legitimacy to make determinations on what content is good and bad”.

Yet, due to insufficient research and policy about moderating the unmoderatable and the spreading of extremist ideology, we are left with open questions about where content moderation should occur online.

Figure 1: Content Moderation in the Tech Stack
Source: Author.

Whose Truth Is It Anyway?

When discussions of content moderation take a turn for the technical, we tend to hear a lot of jargon about “the tech stack” (Figure 1). It is important to understand how the design of technology also shows us where the power lies. And it is in fact also important, and helpful if understood, that there are various vulnerabilities in how content can be cracked or modified, and in how cyber attacks are "performed", with little chance of tracing the originator.
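
To keep the levels discussed below straight, here is a minimal sketch that simply enumerates the stack as this article walks through it; the labels are reconstructed from the text itself, and the numbering follows Figure 1, not any formal standard.

```python
# The seven moderation levels discussed in this article, as a simple
# data structure. Labels are paraphrased from the prose below.
TECH_STACK = {
    1: "Individual website policies (forum rules, user warnings and bans)",
    2: "Platform terms of service (social media, search engines, apps)",
    3: "Cloud service providers (hosting)",
    4: "Content delivery networks (CDNs, DDoS protection)",
    5: "Domain registrars",
    6: "Internet service providers (ISPs)",
    7: "Governments and institutions (blacklists, access blocking)",
}

for level, actor in TECH_STACK.items():
    print(f"Level {level}: {actor}")
```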

Most debates about content moderation revolve around individual websites’ policies for appropriate participation (level 1) and about major platforms’ terms of service (level 2). For example, on level 1, a message board dedicated to hobbies or the user’s favourite TV show may have a policy against spamming ads or bringing up political topics. If users don’t follow the rules, they might get a warning or have their account banned.
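
As an illustration of level 1, here is a minimal sketch of the kind of house-rule enforcement a hobby forum might run; the banned phrases, warning threshold and function names are hypothetical, not any real board's implementation.

```python
# A toy level-1 moderator: two hypothetical house rules (no ads,
# no politics), warn first, then ban repeat offenders.
BANNED_PHRASES = ("buy now", "politics")  # hypothetical rules
MAX_WARNINGS = 2

warnings = {}  # user -> number of warnings issued so far

def moderate(user, post):
    """Return 'ok', 'warning' or 'banned' for a submitted post."""
    if not any(phrase in post.lower() for phrase in BANNED_PHRASES):
        return "ok"
    warnings[user] = warnings.get(user, 0) + 1
    # Escalate: repeat offenders lose their account.
    return "banned" if warnings[user] > MAX_WARNINGS else "warning"

print(moderate("alice", "Finished my new model kit today!"))  # ok
print(moderate("bob", "Buy now: cheap watches"))              # warning
```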

On level 2, there is a lot of debate about how major corporations shape the availability and discoverability of information. While platforms, search engines and apps have policies against harassment, hate and incitement to violence, it is difficult to enforce these policies given the enormous scale of user-generated content. In Sarah Roberts’s new book Behind the Screen: Content Moderation in the Shadows of Social Media (2019), she documents how this new labour force is tasked with removing horrendous violence and pornography daily, while being undervalued despite their key roles working behind the scenes at the major technology corporations. Because of the commercial content moderation carried out by these workers, 8chan and other extremist sites cannot depend on social media to distribute their wares.

For cloud service providers on level 3, content moderation occurs in cases where sites are hosting stolen or illegal content. Websites with fraught content, such as 8chan, will often mask or hide the location of their servers to avoid losing hosts. Nevertheless, actions by cloud service companies post-Charlottesville did destabilize the ability of the so-called alt-right to regroup quickly.

On level 4 of the tech stack, content delivery networks (CDNs) help match user requests with local servers to reduce network strain and speed up websites. CDNs additionally provide protection from malicious access attempts, such as distributed denial-of-service (DDoS) attacks that overwhelm a server with fake traffic. Without the protection of CDNs such as Cloudflare or Microsoft’s Azure, websites are vulnerable to political or profit-driven attacks, such as a 2018 attempt to overwhelm GitHub or a 2016 incident against several US banks. Cloudflare, the CDN supporting 8chan, responded by refusing to continue service to 8chan. Despite attempts to come back online, 8chan has not been able to find a new CDN at this time.
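
To make the DDoS-protection role concrete, here is a minimal sketch of the request throttling a CDN edge can apply, assuming a simple token bucket per client IP; real CDNs such as Cloudflare combine this with caching, traffic scrubbing and anomaly detection, so this illustrates the principle only, and all names and numbers are hypothetical.

```python
# A toy token-bucket rate limiter, as a CDN edge might apply per client.
RATE = 5.0    # tokens refilled per second
BURST = 10.0  # bucket capacity (maximum burst of requests)

buckets = {}  # ip -> (tokens_left, time_of_last_request)

def allow(ip, now):
    """Admit a request only if the client's token bucket is not empty."""
    tokens, last = buckets.get(ip, (BURST, now))
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill over elapsed time
    if tokens < 1.0:
        buckets[ip] = (tokens, now)
        return False  # bucket empty: drop the request before it reaches the origin
    buckets[ip] = (tokens - 1.0, now)
    return True

# A burst of 50 simultaneous requests from one address: only the first
# BURST of them get through; the flood never reaches the origin server.
admitted = sum(allow("203.0.113.9", now=0.0) for _ in range(50))
print(f"{admitted} of 50 requests admitted")
```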

In the aftermath of Charlottesville, Google froze the domain of a neo-Nazi site that had organized the event, and GoDaddy also refused services. In response to the El Paso attack, another company took action: Tucows, the domain registrar for 8chan, severed ties with the website. It is rare to see content decisions on level 5 of the tech stack, except in cases of trademark infringement, blacklisting by a malware firm or government order.
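
What registrar-level action means in practice can be sketched in a few lines: once a domain's registration is suspended, its name stops resolving, so the site becomes unreachable by name even if its servers keep running. The second domain below is hypothetical; the reserved ".invalid" suffix guarantees it never resolves.

```python
import socket

def is_resolvable(domain):
    """Return True if the domain name still resolves to an IP address."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

# example.com is a reserved, always-registered test domain;
# the second name stands in for a registrar-suspended site.
for domain in ("example.com", "suspended-site.invalid"):
    status = "resolves" if is_resolvable(domain) else "does not resolve"
    print(domain, status)
```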

Generally speaking, cloud services, CDNs and domain registrars are considered the backbone of the internet, and sites on the open web rely on their stability, both as infrastructure and as politically neutral services.

Level 6 is a different story. Internet service providers (ISPs) allow access to the open web and platforms, but these companies are in constant litigious relations with consumers and the state. ISPs have been known to selectively control access and throttle bandwidth to content profitable for them, as seen in the ongoing net neutrality fight. Fact is: while the Federal Communications Commission is overwhelmed by lobbying from the very corporations that provide the infrastructure for our communication systems, US federal and local governments remain unequipped to handle white supremacist violence, democratic threats from abroad, the regulation of tech giants or the spread of ransomware attacks in cities around the country.

However, while most ISPs do block piracy websites, at this stage we have not seen USofA ISPs take down or block access to extremist or white supremacist content. Other countries, for example, Germany, are a different case entirely as they do not allow hate speech or the sale of white supremacist paraphernalia.

You can see the video here => https://twitter.com/i/status/1191865527448133634

Lastly, on level 7, some governments have blacklisted websites and ordered domain registrars to remove them. Institutions and businesses can also block access to websites based on content; a library, for example, will block all manner of websites for reasons of safety and security. (In the case of 8chan, while POTUS 45 (!) has called for law enforcement to work with companies to “red flag” posts and accounts, predictive policing has major drawbacks.)
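
As a minimal sketch of such institutional blocking, assume a plain blocklist of domain names of the kind a library or school might maintain; the entries and the helper name below are hypothetical.

```python
# A toy domain blocklist filter, as an institution's gateway might run.
BLOCKLIST = {"blocked-site.invalid", "another-blocked.invalid"}

def allow_request(hostname):
    """Deny any request whose hostname or parent domain is blocklisted."""
    parts = hostname.lower().split(".")
    # Check the full name and every parent domain, so blocking
    # "blocked-site.invalid" also blocks "cdn.blocked-site.invalid".
    return not any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(allow_request("news.example.com"))          # True  -> allowed
print(allow_request("cdn.blocked-site.invalid"))  # False -> blocked
```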

At every level of the tech stack, corporations are placed in positions to make value judgments regarding the legitimacy of content, including who should have access, and when and how. In the case of 8chan and the rash of premeditated violence, it is not enough to wait for a service provider, such as Cloudflare, to determine when a line is crossed. Unfortunately, in this moment, a corporate denial of service is the only option for dismantling extremist and white supremacist communication infrastructure.

The wave of violence has shown technology companies that communication and coordination flow in tandem. Now that technology corporations are implicated in acts of massive violence by providing and protecting forums for hate speech, CEOs are called upon to stand on their ethical principles, not just their terms of service. For those concerned about the abusability of their products, now is the time for definitive action. As Malkia Cyril (2017) of Media Justice argues, “The open internet is democracy’s antidote to authoritarianism.” Corporations simply cannot turn their backs on the communities caught in the crosshairs of their technology. Beyond reacting to white supremacist violence, corporations need to incorporate the concerns of targeted communities and design technology that produces the web we want.


It's Never Too Late to Do the Right Thing

Regulation to curb hateful content online cannot begin and end with platform governance. Platforms are part of a larger online media ecosystem, in which the biggest platforms not only contribute to the spread of hateful content but are themselves an important vector of attack, increasingly so as white supremacists weaponize platforms to distribute racist manifestos. It is imperative that corporate policies be consistent with regulation on hate speech across many countries - and our world. Otherwise, corporate governance will continue to be not merely haphazard but potentially endangering for those who advocate for the removal of hateful content online. In effect, defaulting to the regulation of the country with the most stringent laws on hate speech, such as Germany, is the best pathway forward for content moderation, until such time as a global governance strategy is in place. And to bring such global governance into place, let's agree on one simple fact, so important to world peace: put people first!

NOTE:

(*) The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Scientific sources can be named, upon request. As one example:

https://comprop.oii.ox.ac.uk/research/posts/industrialized-disinformation/

Dear Friends, Business Partners, Supporters and Visitors:

Regardless of how YOU have come to this site, hopefully you will find this read 'interesting' enough to come back again.

Stay tuned for Part 2, coming soon: The Spread of Surveillance Tech in Africa - A Security Concern beyond Checks and Balances.
