The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

Is Data Privacy A Privilege? The Racial Implications Of Technology Based Tools In Government

from the algorithmic-bias-and-privacy dept

While we often read about (and have most likely experienced ourselves) public outrage over personal data pulled from websites like Facebook, the news often fails to highlight the staggering amounts of personal data collected by our governments, both directly and indirectly. Outside the traditional Fourth Amendment framework for constitutional searches and seizures, personally identifiable information (PII) – information that can potentially be used to identify an individual – is collected when we submit tax returns, apply for government assistance programs, or interact with government social media accounts.

Technology has not only expanded governments’ capacity to collect and retain our data, but has also transformed the ways in which that data is used. It is now common for entities to collect metadata – data that summarizes and provides information about other data (for example, the author of a file or the date and time the file was last edited). The NSA, for instance, collected metadata from over 500 million call detail records during 2017, much of which it did not have the legal authority to collect. Governments now even purchase huge amounts of data from third-party tech companies.
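To make “metadata” concrete: even without opening a file, its descriptive data is trivially readable. Below is a minimal Python sketch, using only the standard library and a hypothetical file path, that pulls the sort of file metadata described above:

```python
# A minimal sketch of how easily file metadata can be read.
# "report.docx" is a hypothetical path; any file on disk would work.
import os
from datetime import datetime, timezone

info = os.stat("report.docx")

print("size in bytes:", info.st_size)
print("owner user id:", info.st_uid)  # numeric owner, on Unix-like systems
print("last modified:", datetime.fromtimestamp(info.st_mtime, tz=timezone.utc))
```

None of this touches the file’s contents – which is exactly the distinction at play when agencies characterize call records as “just metadata.”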

The implementation of artificial intelligence tools throughout the government sector has also influenced what these entities do with our data. Governments aiming to “reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data” have implemented tools like algorithmic decision making in both criminal and civil contexts. Algorithms can be effective tools in remedying government inefficiencies, and idealistic champions believe that artificial intelligence can strip out subjective human emotion to reach a logical and “fairer” outcome. Data collected by governments plays a central role in developing these tools: individual data is aggregated into data sets, which are then used to drive algorithmic decision making.
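To illustrate that aggregation step, here is a small, hypothetical Python sketch – all field names and records are invented for illustration – showing how individual records might be rolled up into the kind of data set an algorithmic decision-making tool consumes:

```python
# Hypothetical sketch: aggregating individual records into a data set
# for an algorithmic decision-making tool. All fields are invented.
from collections import defaultdict

records = [
    {"person_id": 1, "zip": "11201", "flagged_late": True},
    {"person_id": 2, "zip": "11201", "flagged_late": False},
    {"person_id": 3, "zip": "11215", "flagged_late": True},
]

# Roll individuals up into per-neighborhood features. Any bias in who
# got flagged in the first place flows straight into the aggregate.
totals = defaultdict(lambda: {"people": 0, "flagged": 0})
for r in records:
    totals[r["zip"]]["people"] += 1
    totals[r["zip"]]["flagged"] += int(r["flagged_late"])

dataset = [{"zip": z, "flag_rate": t["flagged"] / t["people"]}
           for z, t in totals.items()]
print(dataset)  # this aggregate, not the individuals, is what a model sees
```

The point of the sketch is that aggregation does not launder bias out of the data; whatever skew exists in the individual records is baked into the features a model is built on.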

With all this data, what steps do governments take to protect the information they collect from their citizens?

Currently, there are real and valid concerns that governments fail to take the steps necessary to protect and secure data. Take, for instance, the ever-increasing number of data breaches in densely populated cities like New York and Atlanta. In 2018, the city of Atlanta was subjected to a major ransomware attack by an Iran-based group of hackers that shut down major city systems and led to outages affecting “applications customers use to pay bills or access court related information,” as Richard Cox, the city’s Chief of Operations at the time, put it. Notably, the city had been heavily criticized for its subpar IT and cybersecurity infrastructure and its apathetic attitude toward fixing known vulnerabilities.

While the city claimed there was little evidence that the attack had compromised any of its citizens’ data, this assertion seems unrealistic given the scope and duration of the attack and the number of systems that were compromised.

Race, Algorithms and Data Privacy

As a current law student, I have given much thought over the last few years to the role of technology as the “great equalizer.” For decades, technology proponents have advocated for its increased use in the government sector by highlighting its ability to level the playing field and provide opportunities for success to all, regardless of race, gender or income.

However, having gained familiarity with the legal and criminal justice systems, I have begun to see that human racial and gender biases, coupled with government officials’ failure to understand or question technological tools like artificial intelligence, often lead to inequitable results. Further, government funding for technological tools often goes to police and prosecution rather than to the defense and protection of vulnerable communities.

There is a real threat that algorithms will not achieve the intended goals of objectivity and fairness, but will instead perpetuate the inequalities and biases that already exist within our societies. Artificial intelligence has enabled governments to cultivate “big data” and has thus added another tool to their arsenals of surveillance technology. “Advances in computational science have created the ability to capture, collect, and combine everyone’s digital trails and analyze them in ever finer detail.” Through the weaponization of big data, governments can even more easily identify, control, and oppress marginalized groups of people within a society.

As our country confronts the decades of systemic racism inherent in our political and societal systems, privacy must be part of the conversation and the reform. I believe that data privacy today is treated as a privilege rather than a right, and that this privilege is often reserved for white, middle- and upper-class citizens. The complex, confusing and lengthy nature of privacy policies requires not only some familiarity with data privacy and with what governments and companies do with data, but also the time, energy and resources to read through the entire document. If the receipt of vital benefits were contingent on my acceptance of a government website privacy policy, I have no doubt that I would accept the terms regardless of how unfavorable they were to me.

The very notion of a right to privacy in the United States is derived, historically, from white, male, upper-class values. In 1890, Samuel D. Warren and Louis Brandeis (a future Supreme Court Justice) penned their famous and often-quoted article “The Right to Privacy” in the Harvard Law Review. The article was, in fact, a response to the discomfort that accompanied their high-society lives, as the invention of the camera meant that their parties were now captured and displayed prominently in newspapers and tabloid publications.

These men did not intend to include the general population when creating this new right to privacy; they aimed to safeguard their own interests. They were not looking to protect the privacy of the most vulnerable populations, but to make sure that the local tabloid didn’t publish any drunk or incriminating photos from the prior night’s party. Even the traditional conception of privacy, which uses physical space and the home to illustrate the public versus private divide, is a biased and elitist concept. Should someone lose their right to privacy, then, simply because they do not have a home?

In the criminal justice system, how do we know that courts and governments are devoting adequate resources to securing the records and data of individuals in prison or in court? Large portions of budgets are spent on prosecutorial tools, and racial biases appear to keep governments from devoting monetary resources to protecting minorities’ data and privacy as they move through the criminal justice system. Governments reveal little about whether they notify prisoners and defendants when data is compromised, which makes clear that these systems must be scrutinized moving forward.

Moving Forward

How, then, do we address the race and inequity issues surrounding data privacy and hold our governments accountable? Personally, I think we need to start with better data privacy legislation. Currently, California is the only state with a tangible data privacy law (the California Consumer Privacy Act), and protections like it should be expanded to the federal level. Limits must be placed on how long governments can hold onto data and on what can be done with the data collected, and proper protocols for data destruction must be established. There is a dire need for cybersecurity legislation that places the burden on government entities to develop protections that exceed the bare minimum.

The few pieces of cybersecurity legislation that do exist tend to be reactive rather than proactive, and often rely on ambiguous terms like “reasonable cybersecurity features,” which ultimately give companies and entities more room to claim they did what was reasonable under the circumstances at the time. Judges and lawyers need to be held accountable for data protection as well. Because technology is so deeply integrated into the court system and the practice of law itself, ethical and professional codes of conduct should require judges and supporting court staff to actively work to protect the data in their care.

We also need better education in schools regarding the importance of data privacy and what governments and companies do with our personally identifiable information. Countries throughout the European Union have developed robust school programs that teach digital privacy and digital skills. Programs like Poland’s “Your data – your concern” enable young people to understand and take ownership of their privacy rather than blindly clicking “Accept” on a privacy policy. To address economic and racial inequalities, non-profit groups should also aim to integrate these courses into public programming, adult education curricula, and prison educational programs.

Finally, and most importantly, we need to place limits on, and reconsider, the technological tools that both local and federal governments are using, and confront the racial biases inherent in those tools. Because technology can be weaponized to continue oppression, I question whether governments should implement these solutions before addressing the underlying systemic racism that already exists within our societies. It is important to remember that algorithms, and the outcomes they generate – especially in the context of government – reflect existing biases and prejudices in our society. Governments have so far shown little willingness to accept responsibility for the biases present in these algorithms, or to strive to protect data regardless of race, gender and income level.

For example, a study of an algorithm used to predict a criminal defendant’s likelihood of reoffending found an 80% error rate in its predictions of violent recidivism. Problematically, these errors fell far more heavily on minority groups than on white defendants: the study determined that the algorithm incorrectly flagged Black defendants as future re-offenders at almost double the rate at which it incorrectly flagged white defendants. Because recidivism scores factor into sentencing and bail determinations, these algorithms disastrously impact minorities’ livelihoods by subjecting them to harsher punishment and more time in prison; individuals lose valuable time and are unable to work or support their families and communities. Until women and minorities have more of a presence in both government and programming, and can use their diverse perspectives to ensure that algorithms do not encode bias, these technological tools will continue to oppress.
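The disparity described above is straightforward to express numerically. The sketch below uses invented, rounded counts – not the study’s actual data – chosen only to mirror the “almost double” pattern, and shows how a false positive rate (the share of people who did not reoffend but were labeled high risk) is computed per group:

```python
# Hypothetical, rounded counts chosen to mirror the disparity described
# above; these are NOT the study's actual numbers.
groups = {
    # group: (did not reoffend but labeled high risk,
    #         did not reoffend and labeled low risk)
    "Black defendants": (450, 550),
    "white defendants": (230, 770),
}

for group, (false_pos, true_neg) in groups.items():
    # False positive rate: wrongly-labeled share of the non-reoffenders.
    fpr = false_pos / (false_pos + true_neg)
    print(f"{group}: false positive rate = {fpr:.0%}")
```

Run on these made-up counts, the same error metric lands at roughly 45% for one group and 23% for the other – the “almost double” disparity, made visible in two lines of arithmetic.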

We must now ask whether we have succeeded in creating an environment in which these tools can be implemented to help more than they harm. While I think these tools currently cause more harm than good, I am hopeful that as our country begins to address and remedy its underlying systemic racism, we can build government systems that implement these tools safely and in ways that benefit everyone.

Chynna Foucek is a rising third year student at Brooklyn Law School, where she focuses on Intellectual Property, Cybersecurity and Data Privacy law.

Comments on “Is Data Privacy A Privilege? The Racial Implications Of Technology Based Tools In Government”

Upstream (profile) says:

Biased programmers, using biased data sets, will create biased programs, which will then yield biased results. It is just a variation on the old ‘garbage in, garbage out’ (GIGO) problem. You must cure the root of this problem before the flowers will be non-poisonous.

Nothing should be allowed to be contingent on a ‘we can violate your privacy’ policy, lest everything be contingent on one.

Maybe getting hacked should constitute prima facie evidence that ‘reasonable cybersecurity features’ were not in place? Maybe it should at least shift the burden to the hackee to demonstrate that proper, and reasonably effective, measures were in place. Of course, as time goes by, what is considered proper and reasonable will be constantly evolving. Anyone holding PII should be expected to keep up.

While I believe the right to privacy is a natural human right, any specific notions put forth by upper-class white males (or anyone else, for that matter), that they want applied to themselves and their own privacy, should also be broadly applicable to everyone.

Encryption, currently under heavy attack, should be the default, and should be required, everywhere, for everything, whenever possible. Are there even any situations where data encryption would not be possible?

Navi (profile) says:

tech developments

Recent tech developments such as artificial intelligence (AI) and blockchain are increasingly being used by governments to improve the efficiency of the services they offer. For example, blockchain technologies can allow governments to keep important and vital records protected and confidential within a secure ledger.
