Aggregated News

Congress Held 10 Hours of Hearings on Facebook. What’s Next?

eff.org - Sat, 14/04/2018 - 10:02

After grilling Mark Zuckerberg for ten hours this past week, the big question facing Congress is, “What’s next?” The wide-ranging hearings covered everything from “fake news” to election integrity to the Cambridge Analytica scandal that spurred the hearings in the first place. Zuckerberg’s testimony did not give us much new information, but did underline what we already knew going in: Facebook’s surveillance-based, advertising-powered business model creates real problems for its users’ privacy rights.

But some of those problems can be fixed. As Congress considers what to do next, here are some of our suggestions.

DO Ask For Independent Audits

Facebook mentioned cooperating with FTC audits, but it's not clear whether Facebook is allowing independent auditors to inspect the data. If we allow Facebook to control the outside world's visibility into its data collection practices, we can never be sure whether Facebook is actually living up to its own assertions. Facebook, along with other large tech companies that handle massive amounts of user data, should allow truly independent researchers to regularly audit their systems. Users should not have to take the company's word on how their data is being collected, stored, and used.

DO Consider The Impact On Future Social Media Platforms

Tech giants come and go, and that is a good thing. In the mid-1990s, for example, it was hard to imagine a world where Microsoft was not the dominant force in the tech world. In the early 2000s, AOL email addresses and Instant Messenger were ubiquitous. Today, social media is dominated by a few platforms, but they too can be deposed. We need to make sure new regulations don’t forestall that possibility. If Congress decides to “do something” to address the problems it sees with Facebook, it’s worth considering how legislative proposals might help or hinder potential competitors.

For example, without Section 230 of the Communications Decency Act of 1996, Facebook could not have moved out of Mark Zuckerberg's dorm room in 2004. Conversely, heavy-handed requirements, particularly requirements tied to specific kinds of technology (i.e., tech mandates), could stifle competition and innovation. Used without care, they could actually give even more power to today's tech giants by ensuring that no new competitor could ever get started.

As a massive global company, Facebook has the resources to comply with anything Congress throws at it. But smaller competitors may not.

DO Watch Out For Unintended Effects On Speech

Several Senators and Representatives asked questions about how Facebook decided to remove content from its platform, accusing Facebook of bias and political censorship. Facebook has also been in the news recently for removing accounts and pages linked to Russian bots attempting to undermine American political discourse.

Creating a more transparent and neutral platform may sound like a worthy goal, but if Congress is going to write legislation, it should ensure that transparency and user control provisions don’t accidentally undermine online speech. For example, any disclosure laws must take care to protect user anonymity.

Additionally, the right to control your data should not turn into an unfettered right to control what others say about you—as so-called "right to be forgotten" approaches can often become. If true facts, especially facts that could have public importance, have been published by a third party, requiring their removal may mean impinging on others’ rights to free speech and access to information. A free and open Internet must be built on respect for the rights of all users.

DON’T Allow Big Tech To Tell Congress How To Regulate

Several times during his testimony, Mr. Zuckerberg called for privacy regulations both for ISPs and for platforms. While we agree privacy protections are important for both of these types of businesses, they shouldn’t be conflated. The rules we need for ISPs may be significantly different from those needed for platforms. In any event, Congress shouldn’t allow the tech giants to write their own rules given their strong incentives to favor the needs of shareholders over those of the public.

For example, it will be interesting to see how Facebook implements the EU's General Data Protection Regulation (GDPR) for its non-European users. But if Congress tries to implement something similar here, we should all be watching to make sure Big Tech doesn't gut the most important provisions.

DON’T Treat Social Media The Same As Traditional Media

The foundation of a functional democracy is the ability to communicate freely with one another and with our elected officials. Like television and radio before it, social media is now a crucial vehicle for that civic discussion. However, the rules that govern traditional media cannot be the same rules that govern social media. While that may seem obvious to some, Sen. Ted Cruz has already called, incorrectly, for the fairness doctrine to apply to digital communications platforms.

Additionally, Congress should not be taken in by the assertion that AI filters on social media platforms will magically fix all discourse problems. Overbroad censorship is inevitable, and marginalized groups will be the ones most affected. The ability of the public to freely communicate with each other, without government interference, was so important to the country's founders that not only did they put the right to free speech and a free press at the top of the list of Constitutional amendments, they also included, in the Constitution itself, an independent agency to facilitate ordinary communication: the U.S. Postal Service. We have to be able to talk to each other, and Congress should be careful to protect that essential cornerstone of democracy.

DO Talk to Technologists, Engineers, and Internet Lawyers

We’ve seen lots of jokes about the Senate hearings sounding like tech support talking to your grandparents about how to fix their Facebook. It’s not a surprise that many Senators don’t know the technical ways that Facebook works – and that’s actually okay. Participating in a large and complicated branch of government requires a different set of skills than running a technology company, and those skills don’t necessarily overlap with writing or understanding code. The country’s lawmakers didn’t have to be mechanics to legislate basic vehicle safety, nor did they have to be indigent widows to create the Social Security Administration.

What they do have to do is talk to some experts. Congress should be looking to a wide variety of technologists, engineers, and lawyers with deep experience in tech law and policy for advice on any proposals. As Rep. Chaffetz put it in a very different context, it's time to bring in the nerds.

Bottom Line

Congress needs to get this right. Balancing our right to privacy with our rights to communicate and innovate may be hard, but it’s a task worth doing right.


Large ISPs that Orchestrated the Repeal of the Open Internet Order Ask California’s Legislature to Stand Down and Just Let Them Win Already

eff.org - Sat, 14/04/2018 - 08:01

The fight to protect Internet freedom is coming to California this month as the Senate Energy and Utilities Committee (April 17) and Senate Judiciary Committee (April 24) have scheduled hearings and votes on Senator Wiener's S.B. 822, comprehensive legislation that would use the tools available to the state of California to promote net neutrality. As these critical dates approach, the large ISPs have filed their opposition (see attached), and it is worth looking at what they say in the context of what they have been doing in D.C. and in the courts. It is also important to see what they are not saying to California Senators.

Parties that Decimated Federal Law are Decrying States Acting in Response

While opponents of S.B. 822 profess to prefer a federal solution, they have never really supported network neutrality at the federal level either. In fact, they spent more than $26 million to support the FCC’s effort to repeal network neutrality and are likely spending millions in California right now to sustain their victory. The money spent helps explain how the FCC reached a decision opposed by roughly 8 out of 10 Americans across the political spectrum.

The ultimate resolution to protecting network neutrality across the country is going to include restoring the 2015 Open Internet Order's protections. That can happen in three ways: the FCC loses in court, the FCC reverses course, or, most likely, Congress passes a new law. Each of these scenarios is likely years in the making, and, in a matter of weeks, the so-called "Restoring Internet Freedom Order" will take effect. That leaves a very long gap of time for companies like Comcast and AT&T to strike exclusive deals with dominant Internet companies like Facebook to begin prioritizing their services and to ensure that no future small Internet competitors can compete with and replace them (it was not that long ago that Facebook supported AT&T's antitrust-violating merger with T-Mobile).

ISPs Oppose Net Neutrality Because They Want It to Be Legal for Them to Charge More for Access Under Paid Prioritization

The large ISPs pretend they support network neutrality by proclaiming their support for a law banning blocking and throttling. What they consistently leave out of all of their letters is their desire to legalize paid prioritization: the ability to pick winners and losers based on how much those parties can pay the ISP. This is an especially serious problem considering that the high-speed access market gives more than half of all Americans only one choice of provider. Notably, Comcast abandoned its pledge not to engage in paid prioritization the moment the FCC began its process to repeal network neutrality protections, and no major ISP has ever fully committed to refraining from sorting the Internet by who can pay them more. They are already relying on their allies in Congress to promote their goal of charging more for Internet access simply because they have the leverage to demand more money.

Making paid prioritization legal gives Comcast, AT&T, and Verizon full control over deciding which Internet products and services get preferential treatment, and that control has enormous value. In fact, a recent study by Adobe found that close to half of Internet users will simply switch to a different service if one loads slowly, with up to 85 percent switching if the slow-loading service is video. The power to harm online services by slowing them down in favor of services willing to pay extra is the central danger to a free and open Internet, particularly as large ISPs are now vertically integrated with content companies. The temptation to self-deal and favor their own content to the detriment of alternatives is so extraordinary that it is the central antitrust claim in the Department of Justice's lawsuit against AT&T's merger with Time Warner. As the Department aptly stated, AT&T with control over shows like HBO has "the incentive and ability to use…that control as a weapon to hinder competition." This is also why zero-rating (also addressed by S.B. 822) is a problem, in the context of companies like AT&T exempting their own product (DirecTV) from their data caps and distorting the market.

The Biggest Myth ISPs Perpetrate on Sacramento: That There Is No Network Neutrality Problem, and That Repealing Network Neutrality Is a Return to the Status Quo

The worst talking point goes to US Telecom, which is effectively AT&T and Verizon, for saying we have never had a network neutrality problem. The history of net neutrality is full of violations by ISPs. It is almost humorous that a very old talking point used by companies like AT&T more than ten years ago finds new life at the state level. It is as if the Republican-led FCC that sanctioned Comcast for throttling BitTorrent were a figment of our imagination, or as if AT&T never blocked Skype, Google Voice, and FaceTime (let alone zero-rated its own DirecTV product, which the FCC expressed concerns about until Chairman Ajit Pai was sworn into office).

What the FCC did in 2017 will likely go down as the worst Internet policy decision in history, precisely because it was such a radical departure. Despite the fact that the ISP market is more concentrated than ever, and that even the Trump Administration's Department of Justice worries about ISPs exerting power to harm competition, this FCC concluded that it was proper to absolve itself of responsibility. There is nothing normal about that decision when compared to previous decades, in which successive FCCs regularly promoted network neutrality and took action against ISPs that violated it. And after years of litigating and losing against ISPs while trying to promote network neutrality under Title I of the Communications Act, it is completely insincere to argue that returning ISPs to Title I status is going back to FCC regulation as intended.

If all of this nonsense large ISPs like AT&T, Comcast, and Verizon are pushing at your elected state officials in Sacramento has you upset, then you need to take action and make sure your voice is heard as SB 822 comes to a vote.

Take Action

Tell California's State Senators to Stand up for Net Neutrality


Facebook Doesn't Need To Listen Through Your Microphone To Serve You Creepy Ads

eff.org - Sat, 14/04/2018 - 06:04

In ten total hours of testimony in front of the Senate and the House this week, Mark Zuckerberg was able to produce only one seemingly straightforward, privacy-protective answer. When Sen. Gary Peters asked Zuckerberg if Facebook listens to users through their cell phone microphones in order to collect information with which to serve them ads, Zuckerberg confidently said, “No.”

What he left out, however, is that Facebook doesn’t listen to users through their phone microphones because it doesn’t have to. Facebook actually uses even more invasive, invisible surveillance and analysis methods, which give it enough information about you to produce uncanny advertisements all the same.


Suspicions that Facebook listens to its users’ conversations have been swirling for years, prompting statements of denial from Facebook leadership and former employees. Facebook does request microphone permissions to handle any videos you post, as well as to identify music or TV shows when you use the “Listening to” status feature. But technical investigations have confirmed that you can be confident the Facebook app is not surreptitiously turning on your phone mic and listening in on your conversations.

But how does Facebook know to serve you an ad for a specific product right after you talk about it? What explains seeing ads for things you have never searched for or communicated about online? The list of explanations is long. Instead of listening to your conversations through your phone, Facebook tracks you in less obvious ways: what you do and share on its own services, the other websites and apps you visit, your location over time, and the contacts and connections in your network.

Tracking and analysis methods like these power not only those too-on-the-nose ads, but also invasive "People You May Know" recommendations.
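
To make one of those methods concrete, consider how cross-site tracking via an embedded pixel works in general. The sketch below is a generic, hypothetical illustration, not Facebook's actual code or infrastructure: a tiny invisible image embedded on many unrelated sites makes each visitor's browser call back to a single tracker, whose cookie then ties those visits together into one browsing profile.

# Hypothetical sketch of a cross-site tracking pixel server. This is a
# generic illustration of the technique, not Facebook's actual code.
# Any page that embeds an invisible one-pixel image pointing at
# http://localhost:8000/px?site=<that-page's-name> makes the visitor's
# browser call this server, carrying the tracker's cookie with it.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import uuid

visits = {}  # tracking-cookie id -> list of sites where this browser was seen

class TrackingPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        # The embedding page identifies itself in the query string.
        query = parse_qs(urlparse(self.path).query)
        site = query.get("site", ["unknown"])[0]
        # Reuse the browser's existing tracking cookie, or mint a new id.
        cookie = self.headers.get("Cookie", "")
        uid = cookie[len("uid="):] if cookie.startswith("uid=") else str(uuid.uuid4())
        visits.setdefault(uid, []).append(site)
        print(f"browser {uid[:8]}... seen on: {visits[uid]}")
        # An empty 204 reply is enough; the "image" never needs to render.
        self.send_response(204)
        self.send_header("Set-Cookie", f"uid={uid}")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackingPixel).serve_forever()

Scale that idea up to millions of embedded buttons, pixels, and app SDKs, and the resulting profiles are more than enough to explain an eerily well-timed ad.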

Users are onto this. If you have ever been creeped out by an ad for a product popping up right after you were talking out loud about it, your fear and even paranoia are warranted—just not for the exact reasons you might think. No matter how Facebook achieves its frighteningly accurate ads and suggestions, the end result is the same: an uncomfortable, privacy-invasive user experience.

But Zuckerberg’s testimony this week and other recent statements have made it clear that he is not listening to users’ legitimate feedback and concerns here. Putting words into the mouths of millions of users, Zuckerberg said during his testimony that Facebook users prefer a “relevant” ad experience—that is, a highly targeted one:

What we found is that even though some people don’t like ads, people really don’t like ads that aren’t relevant. And while there is some discomfort for sure with using information in making ads more relevant, the overwhelming feedback that we get from our community is that people would rather have us show relevant content there than not.

If that were the case, Congress would not have called Facebook’s CEO to testify on privacy concerns. And recent polls confirm that, while some users like targeted ads, the majority of users do not consider targeted ads “better” than traditional forms of advertising, and 63% would like to see less of them.

Zuckerberg condescendingly called the idea that Facebook is listening in via phone mics a “conspiracy theory.” But users are confused because Facebook has so far refused to be more up-front about how the company collects and analyzes their information. This lack of transparency about what is really going on behind the Facebook curtain is what can lead users to jump to technically inaccurate—but emotionally on-point—explanations for creepy ad phenomena.


Christina L. Pierce

freepress.net - Sat, 14/04/2018 - 04:54
Operations Manager

Christina coordinates all things operations, specifically IT, office management and special projects. Before joining Free Press, Christina served as an AmeriCorps member with City Year at a high school in Southeast D.C. After her year of service she became the operations manager for the site. She holds a B.A. from Illinois State University in psychology and criminal justice. Outside of work, Christina can be found binge-watching life hacks and hair tutorials, crockpotting, and traveling.


Building the “Great Collective Organism of the Mind” at The John Perry Barlow Symposium

eff.org - Sat, 14/04/2018 - 01:42

Individuals from the furthest corners of cyberspace gathered Saturday to celebrate EFF co-founder John Perry Barlow and discuss his ideas, life, and leadership.

The John Perry Barlow Symposium, graciously hosted by the Internet Archive in San Francisco, brought together a collection of Barlow’s favorite thinkers and friends to discuss his ideas in fields as diverse as fighting mass surveillance, opposing censorship online, and copyright, in a bittersweet event that appropriately honored his legacy of Internet activism and defending freedom online.

Thanks to the magic of fair use, you can relive the Symposium any time by visiting the Internet Archive. Video begins at 48:00.

[Embedded video from the Internet Archive: https://archive.org/embed/youtube-Oaci9vlg_Sc. Privacy info: this embed will serve content from archive.org.]

After a touching opening from Anna Barlow, John Perry Barlow’s daughter, EFF Executive Director Cindy Cohn kicked off the speaker portion of the event:

“To me, what Barlow did for the Internet was to articulate, more and more beautifully than almost anyone, that this new network had the possibility of connecting all of us. He saw that the Internet would not be just a geeky hobby or toy like ham radios, or only a military or academic thing, which is what most folks who knew about it believed.  Starting from the Deadheads who used it to gather, he saw it as a new lifeblood for humans who longed for connection, but had been separated.”

EFF Executive Director Cindy Cohn.

While the man himself may not have been present, Barlow’s connection—and influence—was palpable throughout the Symposium, with a dozen distinguished speakers and hundreds in attendance conversing, delivering remarks, and offering up questions about the past, the present, the future, and Barlow’s impact on all of it. The first speaker (and EFF’s co-founder along with Barlow), Mitch Kapor, told the audience: “I can feel his generous and optimistic spirit right here in the room today inspiring all of us.”

EFF co-founder Mitch Kapor with Pam Samuelson.

Barlow’s genius, said Kapor, was that in 1990, while most Internet usage was research- and military-based, he “absolutely nailed the Internet’s essential character and what was going to happen.”

Samuelson and Barlow speak with Bruce Lehman, head of the USPTO in 1996.

Pam Samuelson, Distinguished Professor of Law and Information at the University of California, Berkeley, pointed out that Barlow’s 1994 treatise on copyright in the age of the Internet, The Economy of Ideas, has been cited a whopping 742 times in legal literature. But he didn’t just give lawyers an article to cite—Barlow helped the world understand that copyright had a civil liberty dimension and galvanized people to become copyright activists at a time when traditional notions of information access would be shaken to their core.

Freedom of the Press Foundation's Trevor Timm.

Trevor Timm described Barlow as “the guiding light” and “the organizational powerhouse” of the Freedom of the Press Foundation, which he co-founded with Barlow in 2012. On the day the organization launched, Timm recalled, Barlow wrote: “When a government becomes invisible, it becomes unaccountable. To expose its lies, errors, and illegal acts is not treason, it is a moral responsibility. Leaks become the lifeblood of the Republic.” His hope was that the organization would inspire a new generation of whistleblowers—and the next speaker, Edward Snowden, made clear he’d achieved this goal, telling the audience: “He raised a message, sounded an alarm, that I think we all heard. He did not save the world, none of us can—but maybe he started the movement that will.”

Whistleblower Edward Snowden talks about Barlow's impact.

The speakers answered questions on Facebook privacy, their disagreements with Barlow (of which there were many, ranging from the role of government overall to whether copyright was alive or dead), and what comes next in our understanding of the web. Cory Doctorow, EFF Special Advisor and emcee of the Symposium alongside Cindy Cohn, answered this in “Barlovian” fashion: “We could sit here and try to spin scenarios until the cows come home and not get anything done, or we can roll up our sleeves and do something.”

EFF’s former Executive Director (and current director of the Tor Project) Shari Steele began the second panel, discussing Barlow’s deeply-held belief in the First Amendment, insistence on hearing opposing viewpoints, and interest in bringing together diverse opinions: “That’s how he thrived...He was always encouraging people to talk to each other—to have conversations where you normally maybe wouldn’t have thought this was somebody you would have something in common with. He was fascinating, dynamic, and helped us create an Internet that has all sorts of fascinating and dynamic speech in it.”

Shari Steele, John Gilmore, and Joi Ito.

John Gilmore, EFF Co-founder and Board Member, invoked French philosopher and anthropologist Teilhard de Chardin, whose ideas Barlow specifically referenced in his writings. Barlow’s interest in mind-altering experiences, like taking LSD, said Gilmore, wasn’t just related to his love of the Internet: it came from the exact same place, an interest in creating the “great collective organism of mind” that Barlow hoped we might one day become.

Steven Levy, author and editor at large at Wired.

Author Steven Levy, the writer of Hackers, thought that though Barlow may be well known as a writer of lyrics for the Grateful Dead, he will possibly be even better known for his words about the digital revolution. In his view, Barlow was a terrific writer and a master storyteller "capable of pulling off a quadruple-axel level of nonfiction difficulty." His gift was to be able to not only "explain what was happening to the out-of-it Mr. Joneses of the world, but to encapsulate what was happening, to celebrate it, and to warn against its dangers in a way that would enlighten even the...people who knew the digital world—and to do it in a way that the reading was a pure pleasure."

Joi Ito, Director of the MIT Media Lab.

Joi Ito, Director of the MIT Media Lab, described Barlow’s sense of humor and optimism—the same “you see when you talk to the Dalai Lama.” Today’s dark moments for the Internet aren’t the end, he said, and reminded everyone that Barlow had an elegant way of bringing these elements together with activism and resolve. His deep sense of humor came “from knowing how terrible the world is, but still being connected to true nature.” Ito also touched upon Barlow's groundbreaking essay A Declaration of the Independence of Cyberspace as a crucial "battle cry for us to rally around," taking the budding cyberpunk movement and helping it become a socio-political one.

The second panel fielded questions on encryption, Barlow’s uncanny ability to show up in the weirdest places, and how we can inspire the next generation of Barlows. Echoing EFF’s mission of bringing together lawyers, technologists, and activists, Joi Ito said that we will need engineers, lawyers, and social scientists to come together to redesign technology and change law, and also change society—and that one of Barlow’s amazing abilities was that he could talk to, and influence, all of these people.

Twenty-seven years later, EFF continues to work at the bleeding edge of technology to protect the rights of users on issues as diverse as net neutrality and artificial intelligence, opposing censorship, and fighting mass surveillance.

Amelia Barlow reads from the 25 Principles for Adult Behavior.

Amelia Barlow, John Perry's daughter, thanked the "vast web" of infinitely interesting and radical human beings around the world whom he cared about and who cared about him. "Never before have you been able to draw more immediately and completely upon him—and I want you to feel that," she said, before reading his now-famous 25 Principles for Adult Behavior.

Anna Barlow reflects on her father's life.

As Anna Barlow said in her opening remarks, Barlow’s adventures didn’t stop in his later years—they just started coming to him. Some of the most brilliant thinkers in the world showed that this will remain true even while his physical presence is missed. Perhaps the Symposium was one step towards creating the “great collective organism of mind” that Barlow hoped to see us all become. And at the very least, Anna said, he doesn’t have to be bummed about missing parties anymore—because now he can go to all of them.

Cory Doctorow gives parting words on honoring Barlow.

Cory Doctorow closed the Symposium with a request:

“This week—sit down and have the conversation with someone who’s already primed to understand the importance of technology and its relationship to human flourishing and liberty. And then I want you to go varsity. And I want you to have that conversation with someone non-technical, someone who doesn’t understand how technology could be a force for good, but is maybe becoming keenly aware of how technology could be a force for wickedness.

And ensure that they are guarded against the security syllogism. Ensure that they understand too that we need not just to understand that technology can give us problems, but we must work for ways in which technology can solve our problems too.

And if you do those things you will honor the spirit of John Perry Barlow in a profound way that will carry on from this room and honor our friend who we lost so early, and who did so much for us.”

Join EFF

Donate in honor of John Perry Barlow


How You Can Stop Sinclair's Apocalyptic Zombie Journalism from Coming to Philly

freepress.net - Sat, 14/04/2018 - 00:41
By Will Bunch. Source: http://www.philly.com/philly/blogs/attytood/stop-sinclair-broadcast-group-philadelpia-phl17-tribune-merger-20180412.html (published Fri, 04/13/2018). Issue: Media Consolidation.

The problem with Sinclair isn't its conservative bias, but its soul-crushing impact on local news.

Related Content: /news/updates/unplug-trump-tv

D.C. Court: Accessing Public Information is Not a Computer Crime

eff.org - Fri, 13/04/2018 - 08:17

Good news for anyone who uses the Internet as a source of information: A district court in Washington, D.C. has ruled that using automated tools to access publicly available information on the open web is not a computer crime—even when a website bans automated access in its terms of service. The court ruled that the notoriously vague and outdated Computer Fraud and Abuse Act (CFAA)—a 1986 statute meant to target malicious computer break-ins—does not make it a crime to access information in a manner that the website doesn’t like if you are otherwise entitled to access that same information.

The case, Sandvig v. Sessions, involves a First Amendment challenge to the CFAA’s overbroad and imprecise language. The plaintiffs are a group of discrimination researchers, computer scientists, and journalists who want to use automated access tools to investigate companies’ online practices and conduct audit testing. The problem: the automated web browsing tools they want to use (commonly called “web scrapers”) are prohibited by the targeted websites’ terms of service, and the CFAA has been interpreted by some courts as making violations of terms of service a crime. The CFAA is a serious criminal law, so the plaintiffs have refrained from using automated tools out of an understandable fear of prosecution. Instead, they decided to go to court. With the help of the ACLU, the plaintiffs have argued that the CFAA has chilled their constitutionally protected research and journalism.

The CFAA makes it illegal to access a computer connected to the Internet "without authorization," but the statute doesn't tell us what "authorization" or "without authorization" means. Even though it was passed in the 1980s to punish computer intrusions, it has metastasized in some jurisdictions into a tool for companies and websites to enforce their computer use policies, like terms of service (which no one reads). Violating a computer use policy should by no stretch of the imagination count as a felony.

In today’s networked world, where we all regularly connect to and use computers owned by others, this pre-Internet law is causing serious problems. It’s not only chilled discrimination researchers and journalists, but it has also chilled security researchers, whose work is necessary to keep us all safe. It is also threatening the open web, as big companies try to use the law as a tool to block competitors from accessing publicly available data on their sites. Accessing publicly available information on the web should never be a crime. As law professor Orin Kerr has explained, publicly posting information on the web and then telling someone they are not authorized to access it is “like publishing a newspaper but then forbidding someone to read it.”

Luckily, Judge John Bates recognized the critical role that the Internet plays in facilitating freedom of expression—and that a broad reading of the CFAA “threatens to burden a great deal of expressive activity, even on publicly accessible websites.” The First Amendment protects not only the right to speak, but also the right to receive information, and the court held that the fact “[t]hat plaintiffs wish to scrape data from websites rather than manually record information does not change the analysis.” According to the court:

"Scraping is merely a technological advance that makes information collection easier; it is not meaningfully different from using a tape recorder instead of taking written notes, or using the panorama function on a smartphone instead of taking a series of photos from different positions.”

Judge Bates did not strike down the law as unconstitutional, but he did rule that the statute must be interpreted narrowly to avoid running afoul of the First Amendment. Judge Bates also said that a narrow construction was the most common sense reading of the statute and its legislative history.

Judge Bates is the second judge in the past year to recognize that a broad interpretation of the CFAA will negatively impact open access to information on the web. Last year, Judge Edward Chen found that a “broad interpretation of the CFAA invoked by LinkedIn, if adopted, could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.”

The government argued that the plaintiffs did not have standing to pursue the case, in part because there was no “plausible threat” that the government was going to prosecute them for their work. But as the judge pointed out, the government has attempted to prosecute “harmless ToS violations” in the past. 

The web is the largest, ever-growing data source on the planet. It is a critical resource for journalists, academics, businesses, and ordinary individuals alike. Meaningful access sometimes requires the assistance of technology to automate and expedite an otherwise tedious process of accessing, collecting and analyzing public information. Using technology to expedite access to publicly available information shouldn’t be a crime—and we’re glad to see another court recognize that.

Related Cases: hiQ v. LinkedIn

Community Voices: Patrice Funderburg

freepress.net - Fri, 13/04/2018 - 06:49
A community activist and advocate, Patrice heads a firm she founded that specializes in transformational change through education, exposure and engagement. (Published Thu, 04/12/2018. Issue: Local Journalism.)

From the start, the News Voices project has been about finding ways to include communities in the conversation about the future of local journalism.

In towns and cities across New Jersey, North Carolina and elsewhere, we’ve been privileged to meet incredible people doing important work at the local level. Those conversations have been invaluable to our understanding of how people outside of newsrooms view local journalism, how news coverage impacts their communities and work, and most importantly what’s working and what needs to change.

The relationships we develop and the subsequent power we build are what make our work possible. To highlight that, we're launching a running series where we'll talk to our community allies about the work they do and the state of local journalism where they live.

We hope this series allows others to learn about the inspirational work we witness every day, and lifts up perspectives about local journalism from people not often heard from or sought after.

Because of her effortless ability to care, build relationships and uplift the voices of people experiencing marginalization, I knew I needed to highlight the work of Patrice Funderburg.

I met Patrice in 2016 when I was working behind the counter of a local coffee shop in Charlotte, North Carolina. She was in the early stages of facilitating what’s become an ongoing community-engagement program that uses Michelle Alexander's book The New Jim Crow: Mass Incarceration in the Age of Colorblindness to reflect and discuss the path from slavery to mass incarceration and to examine participants’ experiences of oppression and/or privilege.

What I remember most about my first encounters with Patrice is her kind and genuine welcome: She was patient as I got backed up delivering drinks to workshop participants, and she encouraged me to step in and participate in the New Jim Crow series if I got the chance.

Patrice works with Educate To Engage, a firm she founded that specializes in transformational change through education, exposure and engagement. As an organizer on Free Press’ News Voices: North Carolina project, I’ve worked with Patrice by listening in on sessions of the New Jim Crow series, organizing together to combat the rise of mass incarceration, and collaborating on a few News Voices events.

How would you describe the work you do?

I would describe my work as a sort of dysfunctional marriage where corporate responsibility and cultural organizing are in therapy working toward a better marriage rooted in collective oneness.

Who’s part of that collective? Who do you work with in Charlotte?

As a woman of color-owned and North Carolina Historically Underutilized Businesses-certified business consultant, I work with people in organizations to align strategy, vision and mission with sustainable measured outcomes.

As a community activist and advocate, I support the professional development of people directly impacted by incarceration by helping them broaden their pathways to community leadership. I also lead a free six-week community-engagement series designed to build and grow transformational change in our city.

Lastly, I work with currently incarcerated women as a mentor and board member with two local nonprofit organizations.

It sounds like you’re working hard to make transformational change in many different ways. What’s been the role of local news, information and journalism in your work?

Local news justifies my work. It’s a gateway to understanding what gets prioritized as newsworthy — and what doesn’t — in our community.

Information is any additional content that can be used to influence readers or viewers to ‘believe’ what is being reported, and that strengthens my work.

Lastly, journalism is ‘how’ news is shared, which includes language, messaging and how narratives are developed and delivered across a variety of different audiences. Journalism broadens the scope of my work across multiple media channels.

And how would you describe your own experiences with local news, information and journalism?

Local news is what amplified the Educate To Engage community-engagement series to greater public awareness in July 2016. Then in 2017, both Creative Loafing and The Charlotte Observer ran stories about our work.

Access to online information allowed me to research content that strengthened my ability to create and build a growing consulting practice, and journalism has been the primary medium by which my work has gained additional exposure and momentum over the past two years.

Could you talk more about what that momentum has looked like?

My guess is that although other media has emerged in our city, The Charlotte Observer is still the most widely circulated news publication in Charlotte, and the readership demographic is still predominantly older and White. This was reflected in the overwhelming response during the summer of 2016 and it carried forward as the series grew.

However, when we moved the location of the series from just outside of Uptown near Central Piedmont Community College to deeper into West Charlotte, attendance dropped and the demographic changed, and that has been interesting to watch. I’ve seen participation shift from older, mostly White women from south of the city to a younger, more diverse group of people, including some, but not many, people directly impacted by incarceration.

On a broader level, what does local journalism do that benefits your community in Charlotte? And is there anything it gets wrong?

I think local news/media/journalism is effective at creating awareness of high-level issues that impact my community — assuming by community, you mean communities of color.

However, because our primary local news source in Charlotte skews language and messaging toward a specific demographic, the impact oftentimes hurts my community in ways that perpetuate false narratives about people outside of the readership demographic.

That piece around hurt is key. How can local journalism in Charlotte mitigate that harm? Are there any specific asks that you have?

Resist media pressure to propagandize suffering in ways that perpetuate pathologies about people of color and other marginalized communities. The way, for example, local media reported on the Uprising of September 2016 painted a picture of suffering communities in ways that hurt people who were grieving and already tired of being unseen and not heard. When journalists deliver news in ways that humanize some and dehumanize others, systems of oppression thrive.

[There’s] an opportunity for journalists to challenge societal hierarchies and paint a more realistic picture that gives people an opportunity to make more informed decisions about the world in which they live and what actions they choose to take in disrupting these hierarchies in equitable ways, or not.

So, if there were one thing you wish local news covered more or knew more about, what would it be?

The role of power in the stories being reported. Who defines it? Who has it? Who doesn’t? Who is subjugated to it? What is its impact? Etc.

And finally, how can anyone reading this support the work you’re doing?

  1. Hire me as a consultant. To be honest, the Educate To Engage LLC consulting practice is a work in progress. Transformational change is fundamentally a time-consuming process, and frankly, I wasn’t focused on the ‘revenue generating’ side of the house until this year.  My vision with consulting is to provide organizational development, project management, and training and facilitation services to companies and organizations, while providing individual coaching to people in organizations.

  2. Help mobilize resources to support our work.

  3. Support the campaign to #RenameKeithFamilyYMCA. (The Keith Family are notorious prison profiteers who have a Charlotte YMCA named after them.)

For more information about Patrice, feel free to email her at info@edu2engage.org.

Author: Alicia Bell

New Hampshire Court: First Amendment Protects Criticism of “Patent Troll”

eff.org - Fri, 13/04/2018 - 03:50

A New Hampshire state court has dismissed a defamation suit filed by a patent owner unhappy that it had been called a “patent troll.” The court ruled [PDF] that the phrase “patent troll” and other rhetorical characterizations are not the type of factual statements that can be the basis of a defamation claim. While this is a fairly routine application of defamation law and the First Amendment, it is an important reminder that patent assertion entities – or “trolls” – are not shielded from criticism. Regardless of your view about the patent system, this is a victory for freedom of expression.

The case began back in December 2016 when patent assertion entity Automated Transactions, LLC (“ATL”) and inventor David Barcelou filed a complaint [PDF] in New Hampshire Superior Court against 13 defendants, including banking associations, banks, law firms, lawyers, and a publisher. ATL and Barcelou claimed that all of the defendants criticized ATL's litigation in a way that was defamatory. The court describes the claims as follows:

The statements the plaintiffs allege are defamatory may be separated into two categories. The first consists of instances in which a defendant referred to a plaintiff as a “patent troll.” The second is composed of characterizations of the plaintiffs’ conduct as a “shakedown,” “extortion,” or “blackmail.”

These statements were made in a variety of contexts. For example, ATL complained that the Credit Union National Association submitted testimony to the Senate Committee on the Judiciary [PDF] that referred to ATL as a “troll” and suggested that its business “might look like extortion.” The plaintiffs also complained about an article in Crain’s New York Business that referred to Barcelou as a “patent troll.” The complaint alleges that the article included a photo of a troll that “paints Mr. Barcelou in a disparaging light, and is defamatory.”

ATL had filed over 50 lawsuits against a variety of banks and credit unions claiming that their ATMs infringed ATL's patents. ATL also sent many demand letters. Some in the banking industry complained that these suits and demands lacked merit. There was some support for this view. For example, in one case, the Federal Circuit ruled that several of ATL's asserted patent claims were invalid and that the defendants did not infringe. The defendants did not infringe because the patents were all directed to ATMs connected to the Internet, and it was “undisputed” that the defendants' products “are not connected to the Internet and cannot be accessed over the Internet.”

Given the scale of ATL’s litigation, it is not surprising that it faced some criticism. Yet, the company responded to that criticism with a defamation suit. Fortunately, the court found the challenged statements to be protected opinion. Justice Brian T. Tucker explained:

[E]ach defendant used “patent troll” to characterize entities, including ATL, which engage in patent litigation tactics it viewed as abusive. And in each instance the defendant disclosed the facts that supported its description and made ATL, in the defendant's mind, a patent troll. As such, to the extent the defendants accused the plaintiffs of being a “patent troll,” it was an opinion and not actionable. 

The court went on to explain that “patent troll” is a term without a precise meaning that “doesn’t enable the reader or hearer to know whether the label is true or false.” The court notes that the term could encompass a broad range of activity (which some might see as beneficial, while others see it as harmful).

The court also ruled that challenged statements such as “shakedown” and comparisons to “blackmail” were non-actionable “rhetorical hyperbole.” This is consistent with a long line of cases finding such language to be protected. Indeed, this is why John Oliver can call coal magnate Robert Murray a “geriatric Dr. Evil” and tell him to “eat shit.” As the ACLU has put it, you can’t sue people for being mean to you. Strongly expressed opinions, whether you find them childish or hilariously apt (or both), are part of living in a free society.

Justice Tucker’s ruling is a comprehensive victory for the defendants and free speech. ATL and Barcelou believe they are noble actors seeking to vindicate property rights. The defendants believed that ATL’s conduct made it an abusive patent troll. The First Amendment allows both opinions to be expressed.


Day of Action: Help California Pass a Gold Standard Net Neutrality Bill

eff.org - Fri, 13/04/2018 - 03:49

In December of 2017, contrary to the will of millions of Americans, the FCC made the decision to abandon net neutrality protections. On the first day of business in the California state legislature, State Sen. Scott Wiener introduced a bill that would bring back those protections and more for Californians.

S.B. 822 would make getting state money or using state resources contingent on the ISP adhering to net neutrality principles. This includes the practices the FCC banned in the 2015 Open Internet Order—blocking, throttling, and paid prioritization—and picks up where the FCC left off by also tackling the practice of zero rating. This bill is a gold standard of net neutrality legislation and its passage would give California the strongest protections in the country.

Naturally, big ISPs like Comcast, AT&T, and Spectrum (née Time Warner Cable) don’t want to see this pass. That’s why we’re rallying in support of this bill before its hearings in front of the members of the state senate Utilities and Energy Committee and Judiciary Committee.

Californians: use the tool below to send tweets to the members of these committees to tell them to secure a free and open Internet for your state.

Take Action

Tell California's State Senators to Stand up for Net Neutrality


Zuckerberg Grilled by Same Lawmakers Who Repealed Online Privacy Protections

freepress.net - Fri, 13/04/2018 - 03:37
By Mike Ludwig, Truthout: http://www.truth-out.org/news/item/44142-zuckerberg-grilled-by-same-lawmakers-w… (published Thu, 04/12/2018). Issue: Privacy.

This time last year, Republicans in Congress were rushing to pass legislation repealing the FCC’s online privacy protections.

Related Content: /news/press-releases/house-republicans-vote-destroy-fccs-online-privacy-protect… Spokesperson: Timothy Karr

No, Section 230 Does Not Require Platforms to Be “Neutral”

eff.org - Fri, 13/04/2018 - 03:05

One jaw-dropping moment during the Senate’s hearing on Tuesday came when Sen. Ted Cruz asked Facebook CEO Mark Zuckerberg, “Does Facebook consider itself a neutral public forum?” Unsatisfied by Zuckerberg’s response that Facebook is a “platform for all ideas,” Sen. Cruz continued, “Are you a First Amendment speaker expressing your views, or are you a neutral public forum allowing everyone to speak?”


After more back-and-forth, Sen. Cruz said, “The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum. Do you consider yourself a neutral public forum, or are you engaged in political speech, which is your right under the First Amendment?” It was a baffling question. Sen. Cruz seemed to be suggesting, incorrectly, that Facebook had to make a choice between enjoying protections for free speech under the First Amendment and enjoying the additional protections that Section 230 offers online platforms.

Online platforms are within their First Amendment rights to moderate the content on their services however they like, and Section 230 additionally shields them from many types of liability for their users' speech. It's not one or the other. It's both.

Indeed, one of the reasons why Congress first passed Section 230 was to enable online platforms to engage in good-faith community moderation without fear of taking on undue liability for their users' posts. In two important early cases over Internet speech, courts allowed civil defamation claims against Prodigy but not against CompuServe; since Prodigy deleted some messages for “offensiveness” and “bad taste,” a court reasoned, it could be treated as a publisher and held liable for its users' posts. Former Rep. Chris Cox recalls reading about the Prodigy opinion on an airplane and thinking that it was “surpassingly stupid.” That revelation led to Cox and then-Rep. Ron Wyden introducing the Internet Freedom and Family Empowerment Act, which would later become Section 230.

The misconception that platforms can somehow lose Section 230 protections for moderating users’ posts has gotten a lot of airtime lately—even serving as the flawed premise of a recent Wired cover story. It’s puzzling that Sen. Cruz would misrepresent one of the most important laws protecting online speech—particularly just a few days after he and his Senate colleagues voted nearly unanimously to undermine that law. (For the record, it’s also puzzling that Zuckerberg claimed not to be familiar with Section 230 when Facebook was one of the largest Internet companies lobbying to undermine it.)

The context of Sen. Cruz’s line of questioning offers some insight into why he misrepresented Section 230: like several Republican members of Congress in both hearings, Sen. Cruz was raising concerns about Facebook allegedly removing posts that represented conservative points of view more often than liberal ones.

There are many good reasons to be concerned about politically motivated takedowns of legitimate online speech. Around the world, the groups silenced on Facebook and other platforms are often those that are marginalized in other areas of public life too.

It’s foolish to suggest that web platforms should lose their Section 230 protections for failing to align their moderation policies to an imaginary standard of political neutrality. Trying to legislate such a “neutrality” requirement for online platforms—besides being unworkable—would be unconstitutional under the First Amendment. In practice, creating additional hoops for platforms to jump through in order to maintain their Section 230 protections would almost certainly result in fewer opportunities to share controversial opinions online, not more: under Section 230, platforms devoted to niche interests and minority views can thrive.

What’s needed to ensure that a variety of views have a place on social media isn’t creating more legal exceptions to Section 230. Rather, companies should institute reasonable, transparent moderation policies. Platforms shouldn’t over-rely on automated filtering and unintentionally silence legitimate speech and communities in the process. And platforms should add features to give users themselves—not platform owners or third parties—more control over what types of posts they see.

When Congress passed SESTA/FOSTA, members made the mistake of thinking that they could tackle a real-world problem by shifting more civil and criminal liability to online platforms. When members of Congress recite myths about how Section 230 works, it demonstrates a frightening lack of seriousness about protecting our right to speak and gather online.


User Privacy Isn't Solely a Facebook Issue

eff.org - Fri, 13/04/2018 - 02:26

During Congressional hearings about Facebook’s data practices in the wake of the Cambridge Analytica fiasco, Mark Zuckerberg drew an important distinction between what we expect from our Internet service providers (ISPs, such as Comcast or Verizon) as opposed to platforms like Facebook that operate over the Internet.

Put simply, an ISP is a service you pay to access the Internet. Once you get online, you run into a whole series of edge providers. Some, like Netflix, also charge you for access to their services. Others, like Facebook and Google, are platforms that you use without paying, which support themselves using ads. There’s a whole spectrum of services that make up Internet use, but the thing they all have in common is that they are gathering data when you use them. How they use it can differ widely.

The divide between ISPs and edge providers is most obvious in the context of the net neutrality debate. Platforms, by and large, want as many people accessing the Internet as possible, as easily as possible. ISPs want to charge customers as much as possible for that access and also want to start double-dipping by charging platforms a fee when you visit their websites, as protection money, so the ISP doesn’t throttle or ‘de-prioritize’ your connection.

Zuckerberg brought up that difference a couple of times during the hearings. He mentioned how he had no ISP choice when he founded Facebook in college and that paid prioritization would have hobbled his new company. Whatever you think of Facebook, it’s not good for the Internet to have ISPs deciding what platforms are allowed to exist and succeed.

The distinction is also apparent in the privacy context. Your ISP is your conduit to everything you do online, so it has the opportunity to be even more invasive of your privacy than Facebook. You can protect yourself with VPNs and HTTPS, but the ISP still has a privileged position and is likely to be able to put together a pretty complete picture of most subscribers’ online habits.
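
To make that privileged position concrete, here is a minimal sketch (the hostname is a placeholder): with HTTPS, the contents of your traffic are encrypted, but the destination generally is not. The ISP sees your DNS lookups and the server name sent during the TLS handshake, so it still learns which sites you visit, when, and how much data you exchange; newer protections like encrypted DNS can narrow, but not eliminate, that visibility.

import socket
import ssl

# The hostname is a placeholder. Even over HTTPS it is visible to the ISP
# twice: in the (typically unencrypted) DNS lookup, and in the Server Name
# Indication (SNI) field sent in the clear during the TLS handshake.
hostname = "example.com"

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as raw_sock:
    # server_hostname below is transmitted unencrypted in the ClientHello.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls:
        # The request path and response body, by contrast, are encrypted;
        # an on-path observer sees only destination, timing, and sizes.
        tls.sendall(b"GET /some-private-page HTTP/1.1\r\n"
                    b"Host: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200).decode("utf-8", errors="replace"))

A VPN moves this visibility from the ISP to the VPN operator rather than eliminating it, which is why the ISP's structural position matters for regulation.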

That privileged position means that protecting your privacy vis-à-vis an ISP is a different issue than protecting it with respect to online platforms. Besides, you’re already paying your ISP for services; the idea that you’re willingly trading your privacy in exchange for a service does not apply.

ISPs, however, have attempted to muddy the waters to avoid regulation by insisting that Congress come up with a "one size fits all" approach to online privacy.

The issue was illustrated during the hearing on Tuesday, when Senator Roger Wicker of Mississippi posed this question:

I understand with regard to suggested rules or suggested legislation, there are at least two schools of thought out there.

One would be the ISPs, the Internet service providers, who are advocating for privacy protections for consumers that apply to all online entities equally across the entire Internet ecosystem.

Now, Facebook is an edge provider on the other hand. It is my understanding that many edge providers, such as Facebook, may not support that effort, because edge providers have different business models than the ISPs and should not be considered like services.

So, do you think we need consistent privacy protections for consumers across the entire Internet ecosystem that are based on the type of consumer information being collected, used or shared, regardless of the entity doing the collecting, reusing or sharing?

ISPs are not truly advocating for privacy protections. When AT&T takes out a full-page ad in major newspapers about an “Internet Bill of Rights,” it’s not users they are seeking to protect. It’s the profits they can make from things like paid prioritization and monetizing your data. ISPs already make money by charging users; they want to double-dip by charging platforms, and triple-dip by using data for advertising, much the way Facebook does. But unlike Facebook, ISPs don’t rely on ads for their entire revenue stream.

ISPs’ goal is a federal law that prohibits some activities but leaves them the tactics that make the most money, while preventing states from passing more stringent protections.

Both Facebook and ISPs present privacy concerns, but while Facebook is in the spotlight for its practices right now, we should not let ISPs off the hook for this.

No Escape From ISP Practices

As hard as it may be to escape Facebook, ISPs have an even tighter hold on their customers.

Most Americans don’t have a choice when it comes to high-speed Internet, as Zuckerberg mentioned in his testimony. There are a lot of historical reasons for this, but one simple one is that it’s expensive to break into a new ISP market, particularly when the incumbent can temporarily lower prices in that neighborhood and pay for it by jacking up prices in areas where it faces no competition. Beyond that, big ISPs have divided up the nation geographically to avoid competing with one another.

Another factor is that large ISPs benefit from the regulatory landscape at the expense of small, upstart ISPs that might otherwise challenge them. For instance, ISPs did have privacy regulations applied to them, but lobbied Congress and successfully got them repealed. The end of those regulations helped cement large ISP power and block competition. Small ISPs may want to offer a service with privacy protections to users, but the market is already so uneven that they can barely compete. The market can’t provide customers with alternatives that protect privacy, and so regulation of the large ISPs is necessary.

In theory, you can leave Facebook and use Twitter or Snapchat, or a noncommercial platform like Mastodon. In practice, the company’s user base is so large that it’s able to keep users simply because it’s where friends and family already are. Zuckerberg was also asked to name Facebook’s competition, and the closest he could claim was that there are other services that overlap with some of the things Facebook offers.

Badly written laws passed in reaction to Cambridge Analytica could end up solidifying Facebook’s dominance, since only a company with its resources could comply. Protecting the privacy of Internet users is critically important, but a law that squashed competition to Facebook would only harm that cause in the long run.

There are a number of things that can be done to make platforms like Facebook accountable for their privacy practices. Letting users truly delete the data these platforms collect, take their data with them when they leave, and understand and customize the privacy policies that apply to them would go a long way. There are a whole host of things—practical, useful things—that can be done without creating laws that only a company the size of Facebook can afford to follow.

In his answer to Wicker’s question, Zuckerberg said:

I would differentiate between ISPs, which I consider to be the pipes of the Internet, and the platforms like Facebook or Google or Twitter, YouTube that are the apps or platforms on top of that.

I think in general, the expectations that people have of the pipes are somewhat different from the platforms. So there might be areas where there needs to be more regulation in one and less in the other, but I think that there are going to be other places where there needs to be more regulation of the other type.

Zuckerberg wasn’t totally wrong when he said this. ISPs cannot be escaped, collect huge amounts of data by virtue of being your conduit to the Internet, and do not need to monetize that data to survive. Subscription edge providers also do not need to monetize data to make money, but still collect some: Netflix tracks what people watch and for how long, for example. And then there are ad-supported platforms, where user data is the basis of their business model.

There are all sorts of ways our privacy is impacted by what happens online. It’s vital that all companies make their policies transparent and that users have many options to choose from, so that they can pick the trade-offs they are comfortable with.


Despite What Zuckerberg’s Testimony May Imply, AI Cannot Save Us

eff.org - Thu, 12/04/2018 - 09:00

Yesterday and today, Mark Zuckerberg finally testified before the Senate and House, facing Congress for the first time to discuss data privacy in the wake of the Cambridge Analytica scandal. As we predicted, Congress didn’t stick to Cambridge Analytica. Congress also grilled Zuckerberg on content moderation—i.e., private censorship—and it’s clear from his answers that Facebook is putting all of its eggs in the “Artificial Intelligence” basket.

But automated content filtering inevitably results in over-censorship. If we’re not careful, the recent outrage over Facebook could result in automated censors that make the Internet less free, not more.

Facebook Has an “AI Tool” For Everything—But Says Nothing about Transparency or Accountability

Zuckerberg’s most common response to any question about content moderation was an appeal to magical “AI tools” that his team would deploy to solve any and all problems facing the platform. These AI tools would be used to identify troll accounts, election interference, fake news, terrorism content, hate speech, and racist advertisements—things Facebook and other content platforms already have a lot of trouble reliably flagging today, with thousands of human content moderators. Although Zuckerberg mentioned hiring thousands more content reviewers in the short term, it remains uncertain whether human review will keep an integral role in Facebook’s content moderation system over the long term.

Most sizable automated moderation systems in use today rely on some form of keyword tagging, followed by human moderators. Our most advanced automated systems are far from being able to perform the functions of a human moderator accurately, efficiently, or at scale. Even the research isn’t there yet—especially not with regard to nuances of human communication like sarcasm and irony. Beyond AI tools’ immaturity, an effective system would have to adapt to regional linguistic slang and differing cultural norms as well as local regulations. In his testimony, Zuckerberg admitted Facebook still needs to hire more Burmese language speakers to moderate the type of hate speech that may have played a role in promoting genocide in Myanmar. “Hate speech is very language-specific,” he admitted. “It’s hard to do [moderation] without people who speak the local language.”
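To make that design concrete, here is a toy sketch, in Python rather than any platform’s real code, of keyword tagging feeding a human review queue; the flagged terms and queue are illustrative placeholders, not anyone’s actual moderation pipeline:

    # A toy sketch (ours, not any platform's actual pipeline) of
    # keyword tagging followed by human review. Terms are placeholders.
    FLAGGED_TERMS = {"badword1", "badword2"}

    review_queue = []  # posts held for a human moderator

    def triage(post: str) -> str:
        """Auto-publish clean posts; hold keyword matches for review."""
        words = set(post.lower().split())
        if words & FLAGGED_TERMS:
            review_queue.append(post)
            return "held for human review"
        return "published"

    # The weakness: keywords carry no context. A news report quoting
    # hate speech trips the same filter as the hate speech itself,
    # while sarcasm and coded language sail through untouched.
    print(triage("Report: official recorded saying badword1 at rally"))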

An adequate automated content moderation system would have to adapt with time as our social norms evolve and change, and as the definition of “offensive content” changes with them. This means processing and understanding social and cultural context, how they evolve over time, and how they vary between geographies. AI research has yet to produce meaningful datasets and evaluation metrics for this kind of nuanced contextual understanding.

But beyond the practical difficulties associated with automated content tagging, automated decision-making also brings up numerous ethical issues. Decision-making software tends to reflect the prejudices of its creators and, of course, the biases embedded in its data. Google’s state-of-the-art Perspective API for ranking comment toxicity, released in 2017, originally gave the sentence “I am a black woman” an absurd 85% chance of being perceived as “toxic”.
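For readers curious what querying such a system looks like, here is a minimal sketch of a Perspective API toxicity request, based on the endpoint and response shape as publicly documented around this time; the API key is a placeholder, and the details may have changed since:

    # Minimal sketch of a Perspective API toxicity query.
    # API key is a placeholder; endpoint/response as documented circa 2018.
    import requests

    API_KEY = "YOUR_API_KEY"
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           f"comments:analyze?key={API_KEY}")

    def toxicity(text: str) -> float:
        """Return Perspective's estimated probability that text is 'toxic'."""
        body = {
            "comment": {"text": text},
            "languages": ["en"],
            "requestedAttributes": {"TOXICITY": {}},
        }
        resp = requests.post(URL, json=body, timeout=10)
        resp.raise_for_status()
        scores = resp.json()["attributeScores"]
        return scores["TOXICITY"]["summaryScore"]["value"]

    # Identity terms alone can drive the score up, which is the bias
    # described above.
    print(toxicity("I am a black woman"))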

Given that they are likely to make mistakes, how can we hold Facebook’s algorithms accountable for their decisions? As research in natural language processing shifts towards deep learning and the training and use of neural networks, algorithmic transparency in this field becomes increasingly difficult—yet also increasingly important. These issues of algorithmic transparency, accountability, data bias, and creator bias are particularly critical for Facebook, a massive global company whose employees speak only a fraction of the languages its user base does.

Zuckerberg doesn’t have any good answers for us. He referred Congress to an “AI ethics” team at Facebook but didn’t disclose any processes or details. As with most of Congress’s difficult and substantive questions, he’ll have his team follow up.

"Policing the Ecosystem”

Zuckerberg promised Congress that Facebook would take “a more active view in policing the ecosystem,” but he failed to make meaningful commitments regarding the transparency or accountability of new content moderation policies. He also failed to address the problems that come hand-in-hand with overbroad content moderation, including one of the most significant problems: how it creates a new lever for online censorship that will impact marginalized communities, journalists who report on sensitive topics, and dissidents in countries with oppressive regimes.

Let’s look at some examples of overzealous censorship on Facebook. In the past year, high-profile journalists in Palestine, Vietnam, and Egypt have encountered a significant rise in content takedowns and account suspensions, with little explanation offered beyond a generic “Community Standards” letter. Civil discourse about racism and harassment is often tagged as “hate speech” and censored. Reports of human rights violations in Syria and against Rohingya Muslims in Myanmar, for example, were taken down—despite the fact that this is essential journalistic content about matters of significant global public concern.

These examples are just the tip of the iceberg: high-profile journalists, human-rights activists, and other legitimate content creators are regularly censored—sometimes at the request of governments—as a result of aggressive content moderation policies.

Congress’ focus on online content moderation follows a global trend of regulators and governments around the world putting tremendous pressure on platforms like Facebook to somehow police their content, without entirely understanding that the detection of “unwanted” content, even with “AI tools,” is a massively difficult technical challenge and an open research question.

Current regulation of copyrighted content already pushes platforms like YouTube to employ over-eager filtering in order to avoid liability. Further content regulations on things that are even more nuanced and harder to detect than copyright infringement—like hate speech and fake news—would be disastrous for free speech on the Internet. This has already started with the recent passage of bills like SESTA and FOSTA.

We need more transparency.

Existing content moderation policies and processes are almost entirely opaque. How do platform content reviewers decide what is or is not acceptable speech, offensive content, falsified information, or relevant news? Who sets, controls, and provides modifications to these guidelines?

As Facebook is pressured to scale up its policing and push more work onto statistical algorithms, we need to make sure we have more visibility into how these potentially problematic decisions are made, and the sources of data collected to train these powerful algorithms.

We can’t hide from the inevitable fact that offensive content is posted on Facebook without being immediately flagged and taken down. That’s just the way the Internet works. There’s no way to reduce that time to zero—not with AI, not with human moderators—without drastically over-censoring free speech on the Internet.


Facebook, This Is Not What “Complete User Control” Looks Like

eff.org - Thu, 12/04/2018 - 08:49

If you watched even a bit of Mark Zuckerberg’s ten hours of congressional testimony over the past two days, then you probably heard him proudly explain how users have “complete control” via “inline” privacy controls over everything they share on the platform. Zuckerberg’s language here misses the critical distinction between the information a person actively shares, and the information that Facebook takes from users without their knowledge or consent.

Zuckerberg’s insistence that users have “complete control” neatly overlooks all the ways that users unwittingly “share” information with Facebook.

Of course, there are the things you actively choose to share, like photos or status updates, and those indeed come with settings to limit their audience. That is the kind of sharing that Zuckerberg seemed to be addressing in many of his answers to Congressmembers’ questions.

But that’s just the tip of the iceberg. Below the surface are Facebook’s often-invisible methods for collecting and generating information on users without their knowledge or consent, from logging the sites people visit across the web via embedded buttons and tracking pixels, to generating data, like facial recognition templates, that users never handed over at all.

Users don’t share this information with Facebook. It’s been actively—and silently—taken from them.

This stands in stark contrast to Zuckerberg’s claim, while on the record with reporters last week, that “the vast majority of data that Facebook knows about you is because you chose to share it.” And he doubled down on this talking point in his testimony to both the Senate and the House, using it to dodge questions about the full breadth of Facebook’s data collection.


Zuckerberg’s insistence that users have complete control is a smokescreen. Many members of Congress wanted to know not just how users can control what their friends and friends-of-friends see. They wanted to know how to control what third-party apps, advertisers, and Facebook itself are able to collect, store, and analyze. This goes far beyond what users can see on their pages and newsfeeds.

Facebook’s ethos of connection and growth at all costs cannot coexist with users' privacy rights. Facebook operates by collecting, storing, and making it easy to find unprecedented amounts of user data. Until that changes in a meaningful way, the privacy concerns that spurred these hearings are here to stay.


Broadcasting Hate: How Trump Used the FCC to Punish the Poor

freepress.net - Thu, 12/04/2018 - 08:08
By Erin Shields and Lucia Martinez, originally published by Colorlines (Wed, 04/11/2018): https://www.colorlines.com/articles/broadcasting-hate-how-trump-used-fcc-punish… Issues: Internet Access, Net Neutrality.

Erin Shields of the Center for Media Justice and Lucia Martinez of Free Press break down what they call the Trump administration’s “war on the poor.”


Solutions for a Stalled NAFTA: Stop Pushing So Hard on IP, and Release the Text

eff.org - Thu, 12/04/2018 - 07:04

The deadline for concluding a modernized North American Free Trade Agreement (NAFTA), originally scheduled for last year, has continued to slip. An eighth and final formal round of negotiations was cancelled last week, and earlier optimistic plans for the parties to announce an “agreement in principle” at the Summit of the Americas in Peru this Friday, April 13, have since been abandoned.

An over-optimistic negotiation schedule isn’t the only problem here. The other is that the United States Trade Representative (USTR) is pushing a hard line on topics such as intellectual property that neither of the other negotiating parties finds remotely palatable. As a result, although advances have been made in some other chapters, reports suggest that virtually the whole of the agreement’s IP chapter remains up in the air.

In October 2016, as the Trans-Pacific Partnership (TPP) was beginning to falter, Steve Metalitz of the International Intellectual Property Alliance (IIPA) remarked with surprising frankness that “We may well have reached the high water mark of linking IP and trade.” Since then, more evidence has emerged that he was correct. One example is the suspension of most of the intellectual property chapter from the TPP when it became the 11-country Comprehensive and Progressive Trans-Pacific Partnership Agreement (CPTPP). Another is Europe’s backdown from its demands for a twenty-year copyright term extension in the Mercosur-EU trade agreement. Other U.S. trading partners have also been expressing more critical views about the downsides of excessively long minimum copyright terms, and most surprising of all, so have representatives of copyright holders.

The USTR could continue to press its hard line on intellectual property for round after round, in the hope that Canada and Mexico would eventually capitulate. Or it could easily remove one huge obstacle to the successful conclusion of NAFTA simply by dropping these tough demands, including its demand for extension of the copyright term, and concentrating instead on issues of more importance to the farming and manufacturing sectors.

Transparency is Another Key to the Smoother Conclusion of NAFTA

The low public support for the TPP, which ultimately led to the United States’ withdrawal from the agreement, has been attributed in part to the lack of transparency of the negotiations. Ahead of the commencement of the NAFTA negotiations, 52 members of Congress wrote to the USTR asking that the negotiations be made more open and transparent than the TPP had been. EFF wrote a similar letter.

Yet at the end of the official negotiating rounds, NAFTA is even less transparent and inclusive than the TPP had been. Not a single text proposal or consolidated draft has been released (or even leaked) to the public. The USTR has not yet appointed a Transparency Officer under the new administration, despite this being required under the Bipartisan Congressional Trade Priorities and Accountability Act of 2015. And precisely zero engagement events have been arranged for stakeholders to brief negotiators during the NAFTA negotiations, despite such events having been a common practice during the negotiation of the TPP.

This month the Congressional Progressive Caucus released its Fair Trade Agenda [PDF], which recommends:

For the remainder of the NAFTA renegotiations, the Trump Administration should make draft proposals publicly available and should solicit Congressional and public input before finalizing the proposals. Negotiating texts also must be made publicly available after each negotiating round with the opportunity for public comment, so Congress can provide input in the process and so the American people can evaluate whether their interests are being advanced.

If these suggestions seem extreme, they're really not. Similar recommendations were part of the Promoting Transparency in Trade Act that was reintroduced into Congress last July, but which has languished in committee since then. Europe has already adopted rules requiring its text proposals in trade negotiations to be released to the public, and the United Kingdom is considering going a step further, by requiring consolidated texts also to be released within ten days of each negotiation round. 

Although better transparency in NAFTA would be a way of gaining public trust in the agreement, it's understandable why the USTR takes refuge in secrecy. Keeping controversial provisions out of sight and mind of the public while they are being negotiated—for example, tough secondary liability rules on Internet platforms may be under negotiation—spares the USTR from having to defend these to the public at the same time as it attempts to sell them to our trading partners. But the problem with waiting until the provisions have been agreed before releasing them to the public is that by that stage, it is practically impossible to improve them, or if necessary, to walk them back.

When it comes to the point that even copyright holder representatives are arguing against the USTR's hard line on copyright, and when its transparency practices are falling out of step with those of our major trading partners, it's time for the USTR to consider whether a course change is required. We think that trade agreements don't have to be contentious: they could even be positive for users and innovators, if done right. But the longer the negotiations drag on without any sign that questions of transparency or IP overreach are being addressed, the harder it is to maintain this optimism.


The U.S. CLOUD Act and the EU: A Privacy Protection Race to the Bottom

eff.org - Wed, 11/04/2018 - 14:29

U.S. President Donald Trump’s $1.3 trillion government spending bill, signed March 23rd, offered 2,323 pages of budgeting on issues ranging from domestic drug policy to defense. The last-minute rush to fund the U.S. government through this all-or-nothing “omnibus” presented legislators with a golden opportunity to insert policies that would escape deep public scrutiny. Case in point: the Clarifying Lawful Overseas Use of Data (CLOUD) Act, whose broad ramifications for undermining global privacy should not be underestimated, was snuck into the final pages of the bill before the vote.

Between the U.S. CLOUD Act and new European Union (EU) efforts to dismantle international rules for cross-border law enforcement investigations, the United States and EU are racing against one another towards an unfortunate finish line: weaker privacy protections around the globe.

The U.S. CLOUD Act allows the U.S. President to enter into “executive agreements” with qualifying foreign governments in order to directly access data held by U.S. technology companies at a lower standard than required by the Constitution of the United States. To qualify, foreign governments would need to be certified by the U.S. Attorney General, and meet certain human rights standards set in the act. Those qualifying governments will have the ability to bypass the legal safeguards of the Mutual Legal Assistance Treaty (MLAT) regime.

In addition, U.S. law enforcement agencies (from local police to federal agents) can now compel U.S. and foreign technology[1] companies to disclose communications data of U.S. and foreign users regardless of where that data is physically stored, potentially bypassing other countries’ privacy and data protection laws. Permitting the U.S. access to data that can be located anywhere sets a dangerous precedent for other countries, which are likely to demand similar access to data held in the United States. Such an expansion of U.S. law enforcement power breaks the principle of territoriality, a core component of international law, and will produce a domino effect of information requests that overstep responding countries’ privacy safeguards.

Leaked documents obtained by the media network EURACTIV revealed the European Commission’s plans to launch two proposals on April 17th: a regulation on access to and preservation of electronic data held by companies, mirroring the CLOUD Act’s self-serving agenda, and a directive “to appoint a legal representative within the [EU] bloc,” a provision presumably similar to Article 27 of the GDPR.

According to EURACTIV, the regulation would grant EU member states the power to circumvent the responding countries’ privacy laws in fulfilling information requests. If passed, countries could demand data from technology companies within 10 days or, in the case of an “imminent threat to life or physical integrity of a person or to a critical infrastructure,” within just six hours. Such demands would apply to Internet companies such as Google, social networks like Facebook, Instagram, and Twitter, as well as cloud technology providers, domain name registries, registrars, and “digital marketplaces” that allow consumers and/or traders to conclude peer-to-peer transactions.

The directive, as reported by EURACTIV, would force any company collecting data in the EU to appoint a legal representative in the EU bloc to address law enforcement data requests. This demand would be particularly onerous for companies that do not even have an office in the EU, let alone store their data there. Requiring all companies to maintain an EU legal representative would stifle innovation by further stacking the deck in favor of tech giants that have the resources to comply.

Prior to the announcement of the U.S. CLOUD Act, the European Commission had already begun a process to improve access to electronic evidence within EU member states. In June 2017, the European Commission presented to EU Justice Ministers a set of options to improve cross-border access to e-evidence. Ministers then asked the Commission to come forward with concrete legislative proposals. A public consultation held from August to October 2017 gave some hints of the EU’s intention to adopt legislation that would enable far-reaching information demands on companies located not only within, but also outside, the European Union.

In a statement on how the European Union can “improve” cross-border access to data, Věra Jourová, European Commissioner for Justice, Consumers and Gender Equality, said:

"Our current investigation tools are not fit for the way the digital world works … These tools still work within the limits of the principle of territoriality, which is at odds with the cross-border nature of e-services and data flows. As a result investigators' work is slowed down when dealing with cybercrime, terrorism and other forms of criminal activities, even where such crimes are not cross-border in nature. This is why we launched an expert consultation in 2016."

However, the EU proposals—coupled with the U.S. CLOUD Act—signal a potentially dangerous and uncoordinated race to the bottom. The principle of territoriality has provided an important mechanism for maintaining privacy standards in a world where data is increasingly available from multiple sources operating in multiple locations around the globe. Territorial protections for privacy were being litigated before the U.S. Supreme Court in United States v. Microsoft; until the CLOUD Act passed, U.S. officials could not ignore local privacy safeguards when seeking access to data hosted in a foreign state. (Just last week, the U.S. Department of Justice submitted a motion to the court to declare the case “moot,” according to a recent report by The Irish Times.)

Similarly, EU law must currently respect U.S. privacy safeguards when seeking to access content stored by companies in the United States. Both initiatives are willing to jettison the principle of territoriality and the foreign privacy safeguards that accompany it: the U.S. CLOUD Act allows U.S. law enforcement to ignore EU privacy protections, while the EU proposals, if passed, ignore U.S. privacy protections regarding access to content stored in the United States. However, neither would be pleased with the reciprocal impact of a world without territorial privacy.

Indeed, Commissioner Jourová has already decried deficiencies in the United States’ approach, stating on Twitter that she wants to see “the EU and the U.S. have compatible rules for obtaining evidence stored on servers located in another country, in order to solve serious crimes. Unfortunately, the U.S. Congress has adopted the CLOUD Act in a fast-track procedure.”

It remains to be seen whether EU and U.S. based lawmakers or courts will accept the European Commission’s attempts to bypass EU and U.S. privacy safeguards. Our friends from European Digital Rights (EDRi) have warned against such proposals in the EU.

EDRi’s Senior Policy Advisor, Maryant Fernández, told EFF:

"If the Commission does not change its mind prior to publication of its proposals on April 17, it would be proposing dangerous short cuts to access people's data directly from companies, turning companies into judicial authorities."

 The irony is that such unilateral moves to ignore foreign privacy standards are hardly necessary. While practical challenges currently exist in cross-border access to data, these challenges relate primarily to a lack of efficiency and clarity in the prevailing MLAT regime. This deficiency can be easily addressed through: 

  • The express codification of a dual privacy regime that meets the standards of both the requesting and the host state. Dual data privacy protection will help ensure that as nations seek to harmonize their respective privacy standards, they do so on the basis of the highest privacy standards. Absent a dual privacy protection rule, nations may be tempted to harmonize at the lowest common denominator, and
  • Improved training for law enforcement to draft requests that meet such standards, and other practical measures.

Now is the time for improving MLATs. The EU must ensure a level of predictability, accountability, and procedural safeguards that is at least equal to the level that currently exists. Moreover, the EU does not have to follow the U.S. down the same path of privacy abandonment. Instead, EU institutions and Member States have the opportunity to champion logical solutions that help law enforcement access digital evidence while still protecting privacy and maintaining respect for the sovereignty of other nations. Until we know more, we must wait. But as soon as these proposals are made public, EFF will evaluate them and, where necessary, fight for better privacy rights in Europe and around the world.

[1] U.S. extraterritorial warrants could apply to foreign companies; the U.S. just has to find a sufficient jurisdictional nexus to send an order. Telegram, for example, though not a U.S. company, serves customers in the U.S. and can be subject to an order.


A New Welcome to Privacy Badger and How We Got Here

eff.org - Wed, 11/04/2018 - 11:00

The latest update to Privacy Badger brings a new onboarding process and other improvements. The new onboarding process will make Privacy Badger easier to use and understand. These latest changes are just some of the many improvements EFF has made to the project, with more to come!

Install Privacy Badger

Join EFF and millions of users in the fight to regain your privacy rights!

Privacy Badger was created with the objective of protecting users from third-party tracking across the web—all users. To do this, Privacy Badger needed a couple of key features:

  • The ability to catch sneaky trackers while, wherever possible, not breaking your browsing experience.
  • Simplicity, so that it is easy to use and understand.


For the first goal, Privacy Badger uses heuristics, meaning it observes and learns who is tracking you rather than relying on a manually maintained list of trackers. Even if a third-party tracker is obscure or brand new, Privacy Badger will see it. Once your Privacy Badger sees the same tracker three times, it will block that tracker, so you don’t have to wait for someone to update a list. It’s also a matter of trust—Privacy Badger blocks based on observed behavior, not on a third-party-controlled list that might be sold to advertisers.
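As an illustration, the core of such a learn-then-block heuristic can be sketched in a few lines. This is a simplification in Python, not Privacy Badger’s actual JavaScript, and it reads “sees the tracker three times” as three distinct sites, which is one reasonable interpretation:

    # A minimal sketch of a learn-then-block tracker heuristic.
    # Names and threshold are illustrative, not Privacy Badger's code.
    from collections import defaultdict

    BLOCK_THRESHOLD = 3  # sightings before a tracker gets blocked

    # tracker domain -> set of first-party sites where it was seen tracking
    sightings = defaultdict(set)

    def record_tracking(tracker_domain: str, site: str) -> None:
        """Call when a third party is observed tracking (e.g. setting a
        uniquely identifying cookie) on a first-party site."""
        sightings[tracker_domain].add(site)

    def should_block(tracker_domain: str) -> bool:
        """Block once the tracker has been seen on enough distinct sites."""
        return len(sightings[tracker_domain]) >= BLOCK_THRESHOLD

    record_tracking("tracker.example", "news.example")
    record_tracking("tracker.example", "shop.example")
    record_tracking("tracker.example", "blog.example")
    assert should_block("tracker.example")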

Second, we try to make Privacy Badger simple and informative. Your Privacy Badger learns on its own and displays a badge showing how many trackers it has seen. If it breaks a website’s functionality, you can quickly disable Privacy Badger on that site.

When you install Privacy Badger, it doesn’t block anything immediately, because it needs to learn. This behavior is unusual, so many new users’ first reaction is that Privacy Badger doesn’t work. We explained this in our FAQ and onboarding pages, and we’ve now improved those pages to make it clearer for everyone.

To fix this, we made the new onboarding simple: it points out the essentials of how Privacy Badger works, what to do when something breaks, and what it means to join the team of millions of Badgers. It’s also easy to follow on mobile if you are testing our beta for Firefox on Android.


We hope that these changes will help us achieve a tracker-blocking extension that is dead-simple for anyone and everyone to use. And soon we’ll have an improved site and FAQ to guide you through Privacy Badger’s more advanced settings and functions.

Why did we make this change, and what else changed?

We listen to our users a lot. We read all their feedback, check GitHub, and review error reports. We also observe how people interact with Privacy Badger and ask questions directly of people on the street, in cabs, at cafes, anywhere.


We’re looking to see if they have any issues installing our extension, if they understand how it works, and if they can fix something when it breaks.
Instead of focusing on shipping tons of features, we are focused on making Privacy Badger install-and-forget-simple, so that when we install it on our relatives’ computers we don’t get a call saying we broke the Internet. All this while still protecting their privacy.

Sometimes it’s big visual changes like the onboarding process, and sometimes it’s simple things like moving an advanced option a few levels deeper so that only curious users will find it—users who will probably also read our FAQ in more detail.

You might think this is not important, but we observed that when people open the options, they interpret the first thing they see as "I'm supposed to play with this." If you have a complicated feature there, in an application where your target is every kind of user, you are asking for trouble and confusion.

We still have a lot of things to come, but we hope this update sheds some light on how to use Privacy Badger, and makes it easier to use. We are always listening to your feedback, and are really excited to work to protect your privacy online and that of your family and friends.


Too Big to Let Others Fail Us: How Mark Zuckerberg Blamed Facebook's Problems On Openness

eff.org - Wed, 11/04/2018 - 09:12

Facebook’s first reactions to the Cambridge Analytica headlines looked very different from the now-contrite promises Mark Zuckerberg made to the U.S. Congress this week. Look closer, though, and you’ll see a theme running through it all. The message coming from Facebook’s leadership is not about how it has failed its users. Instead it’s about how users—and especially developers—have failed Facebook, and how Facebook needs to step in to take exclusive control over your data.

You may remember Facebook’s initial response, which was to say that whatever Cambridge Analytica had gotten away with, the problem was already solved. As Paul Grewal, Facebook’s deputy general counsel, wrote in that first statement, “In the past five years, we have made significant improvements in our ability to detect and prevent violations by app developers.” Most significantly, he added, in 2014 Facebook “made an update to ensure that each person decides what information they want to share about themselves, including their friend list. This is just one of the many ways we give people the tools to control their experience.”

By the time Zuckerberg had reached Washington, D.C., however, the emphasis was less about users controlling their experience, and more about Facebook’s responsibility to make sure those outside Facebook—namely developers and users—were doing the right thing.

A week after Grewal's statement, Zuckerberg made his first public comments about Cambridge Analytica. "I was maybe too idealistic on the side of data portability,” he told Recode's Kara Swisher and Kurt Wagner. Going forward, preventing future privacy scandals "[is] going to be solved by restricting the amount of data that developers can have access to."

In both versions, the fault supposedly lay with third parties, not with Facebook. But Mark Zuckerberg has made it clear that he now takes a “broader view” of Facebook's responsibility. As he said in his Tuesday testimony to Congress:

We didn't take a broad enough view of our responsibility... It's not enough to just connect people, we have to make sure that those connections are positive. It's not enough to just give people a voice, we need to make sure that people aren't using it to harm other people or to spread misinformation. And it's not enough to just give people control over their information, we need to make sure that the developers they share it with protect their information, too. Across the board, we have a responsibility to not just build tools, but to make sure that they're used for good.

By far the most substantial step Facebook has taken in this direction has been to further limit how third parties can access the data it holds on its users.

Facebook began removing access to its APIs across its platforms earlier this month, from Instagram to Facebook search.

But that move just causes a new form of collateral damage. Shutting down third-party access and APIs doesn’t just mean restricting creepy data snatchers like Cambridge Analytica. APIs are the ways that machines ingest what we read in our web browsers and Facebook’s official apps. Even there, we’re only operating one hop from a bot: your Web browser is an automated client, tapping into Facebook’s data feeds using Facebook’s own web code. Facebook’s apps are just automated programs that talk to Facebook with a larger set of the interfaces that it permits others to use. Even if you’re just hitting “refresh” on the feed over lunch, it’s all automated in the end.
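To see how thin the line between a “bot” and your browser really is, here is a minimal sketch of the kind of authenticated request a third-party tool, or Facebook’s own app, makes on a user’s behalf; the access token is a placeholder, and the Graph API version shown is the one current around this time:

    # Minimal sketch: the same kind of HTTP request a browser or official
    # app makes under the hood. The token is a hypothetical placeholder.
    import requests

    ACCESS_TOKEN = "YOUR_USER_ACCESS_TOKEN"

    resp = requests.get(
        "https://graph.facebook.com/v2.12/me",
        params={"fields": "id,name", "access_token": ACCESS_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. {"id": "...", "name": "..."}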


By locking down its APIs, Facebook is giving people less power over their information and reserving that power for itself. In effect, Facebook is narrowing and eliminating the ways that legitimate users can engage with their own data. And that means that we're set to become even more dependent on Facebook's decisions about what to do with that information.

What is worse, we've seen a wave of similar rhetoric elsewhere. Twitter announced (then backed down from) a rapid shutdown of an API that creators outside the company depended upon for implementing alternative Twitter clients.

In response to a court case brought by academics to establish their right to examine services like Facebook for discrimination, Bloomberg View columnist Noah Feldman claimed last week that permitting such automated scanning would undermine Facebook’s ability to protect the privacy of its users. We’d emphatically disagree. But you can expect more columns like that if Facebook decides to defend your data by doubling down on laws like the CFAA.

Here’s a concrete example. Suppose you’d like to #deletefacebook and move to another service—and you’d like to take your friends with you. That may mean you’d like to persuade them to move to a new, better service with you. Or you might just want to keep the memories of your friends and your interactions with them intact as you move on from Facebook. You wouldn’t want to leave Facebook without your conversations and favorite threads. You wouldn’t want to find that you could take your own photographs, but not pictures of you with your family and colleagues. You’d like all that data as takeout. That’s what data portability means.

But is that personal data really yours, asks the new “responsible” Facebook? Facebook’s old model implied that it was: that’s why the Graph API allowed you to deputize apps or third parties to examine all the data you could see yourself. Facebook’s new model is that what you know about your friends is not yours to examine or extract how you’d like. Facebook’s APIs now prevent you from accessing that information in an automated way. Facebook is also locking down other ways you might extract it—like searching for email addresses, or looking through old messages.

As the APIs shut down, the only way to access much of this data becomes the "archive download," a huge, sometimes incomplete slab of all your data that is unwieldy for users or their tools to parse or do anything with.
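To appreciate how unwieldy that is, here is a rough sketch of what a user’s own tool has to do just to enumerate conversations in such a download; it assumes a JSON-format export with a messages/ folder, which not every archive version provides, and the file name is a placeholder:

    # A rough sketch of pulling conversation summaries out of an archive
    # download. Assumes a JSON export with a messages/ folder; real
    # archives vary by version and by the format chosen at export time.
    import json
    import zipfile
    from pathlib import PurePosixPath

    ARCHIVE = "facebook-yourname.zip"  # hypothetical file name

    with zipfile.ZipFile(ARCHIVE) as zf:
        for name in zf.namelist():
            path = PurePosixPath(name)
            if "messages" in path.parts and path.suffix == ".json":
                thread = json.loads(zf.read(name))
                title = thread.get("title", "(unknown thread)")
                count = len(thread.get("messages", []))
                print(f"{title}: {count} messages")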

This isn’t necessarily a purely self-serving step by Facebook (though anyone who has tried to delete their account before, and had their friends’ pictures deployed by Facebook in poignant pleas to remain, might well be skeptical of the company’s motives). It merely represents the emerging model of the modern social network’s new responsibilities as the biggest hoarder of private information on the planet.

The thinking goes: There are bad people out there, and our customers may not understand privacy options well enough to prevent this huge morass of information being misused. So we need to be the sole keepers of it. We're too big to let others fail.

But that’s entirely the wrong way of seeing how we got here. The problem cannot be that people are sucking Facebook’s data out of Facebook’s systems. The problem is that Facebook sucked that data from us, subject to vague assurances that a single Silicon Valley company could defend billions of people’s private information en masse. Now, to better protect it from bad people, they want to lock us out of it, and prevent us from taking it with us.

This new model of “responsibility” fits neatly with some lawmakers’ vision of how such problems might get fixed. Fake news on Facebook? Then Facebook should take more responsibility to identify and remove false information. Privacy violations on the social media giants? Then the social media giants should police their networks, to better conform with what the government thinks should be said on them. In America, that “fix thyself” rule may well look like self-regulation: you drag Mr. Zuckerberg in front of a committee, and make him fix the mess he’s caused. In Europe, it looks like shadow regulation, like the EU’s Code of Conduct on Hate Speech, or even the GDPR. Data is to be held in giant silos by big companies, and the government will punish them if they handle it incorrectly.


In both cases, we are unwittingly hard-wiring the existence of our present day social media giants into the infrastructure of society. Zuckerberg is the sheriff of what happens between friends and family, because he's best placed to manage that. Twitter's Jack Dorsey decides what civil discourse is, because that happens on his platform, and he is now the responsible adult. And you can't take your data out from these businesses, because smaller, newer companies can't prove their trustworthiness to Zuckerberg or Dorsey or Congress or the European Commission. And you personally certainly can't be trusted by any of them to use your own data the way you see fit, especially if you use third-party tools to achieve that.

The next step in this hard-wiring is to crack down on all data access in ways “unauthorized” by the companies holding it. That means sharpening the teeth of the notorious Computer Fraud and Abuse Act, and more misconceived laws like Georgia’s new computer crime bill. An alliance of the concerns of politicians and social media giants could well make that possible. As we saw with SESTA/FOSTA, established tech companies can be persuaded to support such laws, even when they unwind the openness and freedom that let those companies prosper in the first place. That is the shift that Zuckerberg is signaling in his offers to take responsibility in Washington.

But there is an alternative: We could empower Internet users, not big Internet companies, to decide what they want to do with their private data. It would mean answering some tough questions about how we get consent for data that is personal and private to more than one person—like conversations, photographs, my knowledge of you and yours of me—and how individuals can seek redress when those they trust, whether it’s Facebook or a small developer, break that trust.

But beginning that long-overdue conversation now is surely going to end with a better result than making the existing business leaders of Silicon Valley the newly deputized sheriffs of our data, with shiny badges from Washington, and an impregnable new jail in which to hold it.
