In August, an entity calling itself the “Shadow Brokers” took the security world by surprise by publishing what appears to be a portion of the NSA’s hacking toolset. Government investigators now believe that the Shadow Brokers stole the cache of powerful NSA network exploitation tools from a computer located outside of the NSA’s network, where the tools had been left accidentally, according to Reuters. A new detail, published for the first time in yesterday’s Reuters report, is that the NSA learned about the accidental exposure at or near the time it happened. The exploits, which showed up on the Shadow Brokers’ site last month, target widely used networking products from Cisco and Fortinet and rely on significant, previously unknown vulnerabilities, or “zero days,” in these products. The government has not officially confirmed that the files originated with the NSA, but The Intercept used documents provided by Edward Snowden to demonstrate links between the NSA and the Equation Group, which produced the exploits.
The Reuters story provides a partial answer to the most important question about the Shadow Brokers leak: why did the NSA seemingly withhold its knowledge of the Cisco and Fortinet zero days, among others, from the vendors? According to unnamed government sources investigating the matter, an NSA employee or contractor mistakenly left the exploits on a remote computer about three years ago, and the NSA learned about that mistake soon after. Because the agency was aware that the exploits had been exposed and were therefore vulnerable to theft by outsiders, it “tuned its sensors to detect use of any of the tools by other parties, especially foreign adversaries with strong cyber espionage operations, such as China and Russia.” Apparently finding no such evidence, the NSA sat on the underlying vulnerabilities until the Shadow Brokers posted them publicly.
But the NSA’s overconfidence should disturb us, as security researcher Nicholas Weaver points out. The “sensors” mentioned by Reuters are likely a non-technical reference to monitoring of the Internet backbone by the NSA under such authorities as Section 702 and Executive Order 12333, which could act as a form of Network Intrusion Detection System (NIDS). (The Department of Homeland Security also operates an NIDS called Einstein specifically to monitor government networks.) But Weaver explains that at least some of the exploits, including those that affected Cisco and Fortinet products, appear not to lend themselves to detection by outside monitoring since they operate within a target’s internal network. In other words, the NSA’s confidence that its surveillance tools weren’t being used by other actors might have been seriously misplaced.
The NSA’s decision not to disclose the Cisco and Fortinet vulnerabilities becomes even more questionable in light of the fact that some of the specific products affected appear on the Department of Defense’s Unified Capabilities (UC) Approved Products List (APL), which identifies equipment that can be used in DoD networks.1
Under National Security Directive 42 [.pdf], NSA is tasked with securing “National Security Systems” against compromise or exploitation, a mission which was traditionally housed within the Information Assurance Directorate (IAD). The NSA is currently in the process of combining the “defensive” IAD with its “offensive” intelligence-gathering divisions, but high-level officials charged with information assurance have acknowledged the NSA’s defensive mission is more important than ever. Regardless of whether the mission of protecting National Security Systems is interpreted broadly or narrowly, the NSA’s failure to remedy defects in products used widely across the IT sector and apparently by the government, and even the DoD itself, is difficult to defend.
Above all, the Shadow Brokers story highlights the need for oversight of the government’s use of zero days. Right now, the decision whether to retain or disclose a vulnerability is theoretically governed by the Vulnerabilities Equities Process (VEP), a once-secret policy that EFF obtained in redacted form via a Freedom of Information Act lawsuit. But because the VEP isn’t binding on the government, as far as we can tell, it’s toothless. While we don’t know the exact considerations employed by the government in reaching a decision to withhold a zero day, several of the high-level considerations described by White House Cybersecurity Coordinator Michael Daniel in a blog post about the VEP seem highly relevant:
- How much is the vulnerable system used in the core Internet infrastructure, in other critical infrastructure systems, in the U.S. economy, and/or in national security systems?
- Does the vulnerability, if left unpatched, impose significant risk?
- How much harm could an adversary nation or criminal group do with knowledge of this vulnerability?
- How likely is it that we would know if someone else was exploiting it?
Even if NSA initially believed the specific vulnerabilities at issue in this case wouldn’t be discovered by others, its knowledge that the exploits had been left exposed should have changed that calculus. And if NSA knew specifically that the exploits had been stolen, it’s hard to think of a rationale where disclosure would still be outweighed by other considerations. Coincidentally, the NSA seems to have lost control of the Shadow Brokers exploits in 2013, during a fallow period for the VEP. Although the VEP was written in 2010, Michael Daniel told Wired that it was not “implemented to the full degree that it should have been” and was only “reinvigorated” in 2014.
We think lawmakers should be concerned with this story, and we encourage them to ask the NSA to explain exactly what happened. We think the government should be far more transparent about its vulnerabilities policy. A start would be releasing a current version of the VEP without redacting the decisionmaking process, the criteria considered, and the list of agencies that participate, as well as an accounting of how many vulnerabilities the government retains and for how long. After that, we urgently need to have a debate about the proper weighting of disclosure versus retention of vulnerabilities, and we should ensure that any policy that implements this decision is more than just a vague blog post or a document that lacks all “vigor.”
- 1. We have chosen not to link directly to the APL here for technical reasons. The Department of Defense uses its own Certificate Authority (CA) to authenticate its websites, and that CA is not trusted by browsers. We provide the URL here as a convenience, but strongly recommend against adding additional CAs to your browser: https://aplits.disa.mil/processAPList.action
There’s a bill making its way through Congress that would protect consumers’ freedom of speech by limiting unfair form contracts. The Consumer Review Fairness Act (H.R. 5111), introduced by Leonard Lance (R-NJ) and cosponsored by several representatives, would address two shameful practices: contracts that bar customers from sharing negative reviews of products and services online, and contracts that attempt to assign the copyright in customers’ reviews to the businesses themselves (who then file copyright takedown notices to have negative reviews removed). The CRFA is an important bill, and it addresses a major problem, but it contains one loophole that could undermine its ability to protect people who write online reviews.
An earlier version of the bill was introduced in both houses of Congress last year under the name Consumer Review Freedom Act (S. 2044, H.R. 2110). EFF applauded the bill when it was introduced. As we argued then, when a customer has no reasonable opportunity to negotiate a contract and its terms are overwhelmingly stacked against the customer, the contract shouldn’t be enforceable. We noted that these contracts usually fail in court, but that that hasn’t stopped businesses from using them. We also pointed out a few problems with the CRFA. Most of them have been addressed in the new bill, but the most disconcerting one remains.
If a company claims that a review is not “otherwise lawful” (for example, because it allegedly defames the company), then the law may permit the company to claim that it owns the copyright in the review and have it removed as copyright infringement, thus creating a shortcut for having speech removed. We don’t think this is what Congress intended, and we hope it’s not too late to remove the two offending words.
Imagine that I’m a vendor offering you a contract for a service. My contract includes a clause saying that you assign me the copyright in any review you write of my service. Under the CRFA, that clause would be invalid and my including it in the contract would be against the law. But if my contract says you assign me the copyright in any unlawful review you write, I could argue that that contract is valid under the CRFA.
We’re concerned that businesses could effectively use this language to bypass the traditional protections for allegedly illegal speech and instead rely on the censorship tools available to copyright owners. Filing a DMCA takedown notice is both easier and faster than convincing a judge that a piece of online speech is defamatory, especially because sending a DMCA takedown doesn’t require you to prove anything. A business could claim to be the copyright owner and get a review taken down without ever testing its claims in court.
Furthermore, transforming a different possible speech violation into a copyright infringement case brings the possibility of astronomical statutory damages: penalties with no relation to any actual harm done by the alleged infringer. Lawmakers should think twice before opening a loophole that businesses could use to disguise other speech complaints as copyright infringement complaints.
We wholeheartedly support the CRFA’s intentions. Anti-review contracts are an attack on customers’ freedom of speech and it’s gratifying to see lawmakers stand up to defend consumers. We hope that before the CRFA becomes law, Congress closes the dangerous loophole.
Law Enforcement, Courts Need to Better Understand IP Addresses, Stop Misuse
If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional.
Yet EFF has found that police and courts are regularly conducting and approving raids based on a similarly unreliable type of digital evidence: Internet Protocol (IP) address information.
In a whitepaper released today, EFF challenges law enforcement and courts’ reliance on IP addresses, without more, to identify the location of crimes and the individuals responsible. While IP addresses can be a useful piece of an investigation, authorities need to properly evaluate the information, and more importantly, corroborate it, before IP address information can be used to support police raids, arrests, and other dangerous police operations.
IP addresses were designed to route traffic on the Internet, not to serve as identifiers for other purposes. As the paper explains, IP address information isn’t like a physical address or a license plate that can pinpoint an exact location or identify a particular person. Put simply: there is no uniform way to systematically map physical locations to IP addresses, and no phone book for looking up the users of particular IP addresses.
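The mismatch is easy to demonstrate with Python’s standard-library ipaddress module. This is a minimal illustration, not drawn from any case; the addresses below are hypothetical examples.

```python
import ipaddress

# Two different households can simultaneously use the exact same
# private address behind their own NAT routers. The address alone
# says nothing about who is using it or where.
home_a = ipaddress.ip_address("192.168.1.10")
home_b = ipaddress.ip_address("192.168.1.10")
assert home_a == home_b and home_a.is_private

# Conversely, a single public-facing address can front an entire
# network of devices. With carrier-grade NAT, ISPs route many
# subscribers through shared address space (RFC 6598):
cgn_pool = ipaddress.ip_network("100.64.0.0/10")
print(cgn_pool.num_addresses)  # → 4194304
```

In other words, an IP address identifies a point of network attachment, not a person, which is exactly why the whitepaper argues it needs corroboration before it can support a warrant.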
Law enforcement’s over-reliance on the technology is a product of police and courts not understanding the limitations of both IP addresses and the tools used to link the IP address with a person or a physical location. And the police too often compound that problem by relying on metaphors in warrant applications that liken IP addresses to physical addresses or license plates, signaling far more confidence in the information than it merits.
Recent events demonstrate the problem: A story in Fusion documents how residents of a farm in the geographic center of America are subjected to countless police raids and criminal suspicion, even though they’ve done nothing wrong. A story in Seattle’s newspaper The Stranger described how police raided the home and computers of a Seattle privacy activist operating a Tor exit relay because they mistakenly believed the home contained child pornography. And these are just two stories that found their way into the media.
These ill-informed raids jeopardize public safety and violate individuals’ privacy rights. They also waste police time and resources chasing people who are innocent of the crimes being investigated.
The whitepaper calls on police and courts to recalibrate their assumptions about IP address information, especially when it is used to identify a particular location to be searched or individual to be arrested. EFF suggests that IP address information be treated, in the words of the Supreme Court, as more like “casual rumor circulating in the underworld,” or an unreliable informant. The Constitution requires further investigation and corroboration of rumors and anonymous tips before police can rely upon them to establish probable cause authorizing warrants to search homes or arrest individuals. The same should be true of IP address information.
The paper also explains why the technology’s limitations can make it unreliable and how the Supreme Court’s rules around unreliable information provided by anonymous informants should apply to IP address information in warrant applications. The paper concludes with two lists of questions to ask and concrete steps to take: one for police and one for judges. The goal is to better protect the public so that the misuse of IP address information doesn’t lead to a miscarriage of justice.
We hope the whitepaper can serve as a resource for law enforcement and courts while also triggering a broader conversation about IP address information’s use in criminal investigations. In the coming months, EFF hopes to discuss these concerns with law enforcement and courts with the goal of preventing unwarranted privacy invasions and violations of the Fourth Amendment. We also hope that our discussions will result in better law enforcement investigations that do not waste scarce police resources. If you are a law enforcement agency or court interested in this issue, please contact email@example.com.
The World Wide Web Consortium has embarked upon an ill-advised project to standardize Digital Rights Management (DRM) for video at the behest of companies like Netflix. In so doing, it is, for the first time, making a standard whose implementations will be covered by anti-circumvention laws like Section 1201 of the DMCA, which makes it a potential felony to reveal defects in products without the manufacturer's permission.
This is especially worrisome because the W3C's aspiration for the new version of HTML is that it will replace apps as the user-interface for the Internet of Things, making all sorts of potentially compromising (and even lethal) bugs difficult to report without serious legal liability.
The EFF has proposed that W3C members should be required to promise not to use the DMCA and laws like it this way; this has had support from other multistakeholder groups, like the Open Source Initiative, which has said that the W3C work will not qualify as an "open standard" if it doesn't do something to prevent DMCA abuse.
Now, another important body, WHATWG, has joined the chorus calling on the W3C to prevent their technical work from becoming a legal weapon. WHATWG is a breakaway web standards body, backed by all the major browser vendors, and much of the W3C's standardization process consists of snapshotting WHATWG's documents and putting the W3C's stamp of approval on them.
In an op-ed on the WHATWG blog, Ian "Hixie" Hickson (who formerly oversaw HTML5 for the W3C, and now edits the HTML spec for WHATWG, while working for Google) calls on the W3C to adopt the rules protecting security research, saying "We can ill afford a chilling effect on Web browser security research. Browsers are continually attacked. Everyone who uses the Web uses a browser, and everyone would therefore be vulnerable if security research on browsers were to stop."
Hixie's letter is co-signed by fellow WHATWGers Simon Pieters from Opera, and Anne van Kesteren from Mozilla.
The charter for the W3C's DRM working group runs out in eight days and will have to be renewed. Some 20 W3C members have pledged to block any further renewal unless the W3C executive requires the group to solve this problem before finishing its work. The last time this happened, the executive dismissed these objections, but the numbers have swelled and now include prominent disabled rights groups like the UK Royal National Institute for Blind People and Media Access Australia, as well as a browser vendor, Brave.
A who's who of security researchers, including the W3C's own invited experts, have signed an open letter asking the W3C to ensure that control over disclosure of vulnerabilities in web browsers isn't given to the companies whom these disclosures might potentially embarrass.
From Hixie's post:
Much has been written on how DRM is bad for users because it prevents fair use, on how it is technically impossible to ever actually implement, on how it's actually a tool for controlling distributors, a purpose for which it is working well (as opposed to being to prevent copyright violations, a purpose for which it isn't working at all), and on how it is literally an anti-accessibility technology (it is designed to make content less accessible, to prevent users from using the content as they see fit, even preventing them from using the content in ways that are otherwise legally permissible, e.g. in the US, for parody or criticism). Much has also been written about the W3C's hypocrisy in supporting DRM, and on how it is a betrayal to all Web users. It is clear that the W3C allowing DRM technologies to be developed at the W3C is just a naked ploy for the W3C to get more (paying) member companies to join. These issues all remain. Let's ignore them for the rest of this post, though.
One of the other problems with DRM is that, since it can't work technically, DRM supporters have managed to get the laws in many jurisdictions changed to make it illegal to even attempt to break DRM. For example, in the US, there's the DMCA clauses 17 U.S.C. § 1201 and 1203: "No person shall circumvent a technological measure that effectively controls access to a work protected under this title", and "Any person injured by a violation of section 1201 or 1202 may bring a civil action in an appropriate United States district court for such violation".
This has led to a chilling effect in the security research community, with scientists avoiding studying anything that might relate to a DRM scheme, lest they be sued. The more technology embeds DRM, therefore, the less secure our technology stack will be, with each DRM-impacted layer getting fewer and fewer eyeballs looking for problems.
We can ill afford a chilling effect on Web browser security research. Browsers are continually attacked. Everyone who uses the Web uses a browser, and everyone would therefore be vulnerable if security research on browsers were to stop.
Since EME introduces DRM to browsers, it introduces this risk.
A proposal was made to avoid this problem. It would simply require each company working on the EME specification to sign an agreement that they would not sue security researchers studying EME. The W3C already requires that members sign a similar agreement relating to patents, so this is a simple extension. Such an agreement wouldn't prevent members from suing for copyright infringement, it wouldn't reduce the influence of content producers over content distributors; all it does is attempt to address this even more critical issue that would lead to a reduction in security research on browsers.
The W3C is refusing to require this. We call on the W3C to change their mind on this. The security of the Web technology stack is critical to the health of the Web as a whole.
- Ian Hickson, Simon Pieters, Anne van Kesteren
Excerpt copyright (c) 2016 The WHATWG Contributors. Reproduced under the MIT License.
It’s an old legal adage: bad facts make bad law. And the bad facts present in the Playpen prosecutions—the alleged possession and distribution of child porn, coupled with technology unfamiliar to many judges—have resulted in a number of troubling decisions concerning the Fourth Amendment’s protections in the digital age.
As we discussed in our previous post, courts have struggled to apply traditional rules limiting government searches—specifically, the Fourth Amendment, the Constitution's primary protection against governmental invasions of privacy—to the technology at issue in this case, in some cases finding that the Fourth Amendment offers no protection from government hacking at all. That's a serious problem.
In this post, we’ll do three things: explain the Fourth Amendment “events” (that is, the types of searches and seizures) that take place when the government uses malware; explain how some of the courts considering this issue have gone astray (and how some have gotten it right); and describe what all this means for our digital rights.

Hacks, searches, seizures, and the Fourth Amendment
The Fourth Amendment generally prohibits warrantless law enforcement searches and seizures. A Fourth Amendment “search” occurs when the government intrudes on an area or information in which a person has a reasonable expectation of privacy. A “seizure” occurs when the government substantially interferes with a person's property or their liberty.
As we’ve spelled out in an amicus brief filed in a number of the Playpen prosecutions, when the government hacks into a user’s computer, a series of significant Fourth Amendment searches and seizures occur:
Each use [of the government’s malware] caused three Fourth Amendment events to occur: (1) a seizure of the user’s computer; (2) a search of the private areas of that computer; and (3) a seizure of private information from the computer.
First, the government’s malware “seized” the user’s computer. More specifically, the execution of the government’s code on a user’s device “meaningful[ly] interfered” with the intended operation of the software: it turned a user’s computer into a tool for law enforcement surveillance. By hacking into the user’s device, the government exercised “dominion and control” over the device. And that type of interference and control over a device constitutes a “seizure” for Fourth Amendment purposes.
Next, the government’s code “searched” the device to locate certain specific information from the computer: the MAC address, the operating system running on the computer, and other identifying information. In this instance, where the search occurred is central to the Fourth Amendment analysis: here, the search was carried out on a user’s personal computer, likely located inside their home. Given the wealth of sensitive information on a computer and the historical constitutional protections normally afforded peoples' homes, a personal computer located within the home represents the fundamental core of the Fourth Amendment’s protections.
Finally, the government conducted a “seizure” when its malware copied and sent the information obtained from the user’s device over the internet and back to the FBI. (As an aside, it was sent unencrypted—but more on that in a later blog post about the evidentiary issues arising from these cases.) For its part, the government doesn’t even contest that the copying of this information is a seizure: it described that information as the “information to be seized” in the warrant.
Law enforcement deploying malware against a user in this way should, from a constitutional perspective, be understood the same way as if the search were carried out in the physical world: a police officer physically taking a computer away, looking through it for identifying information, and writing down the information the officer finds for later use.

Fourth Amendment principles meet digital dissonance
In the physical world, courts would have no problem recognizing the Fourth Amendment consequences of law enforcement physically seizing and searching a computer. Yet, the Playpen cases, and the relatively unfamiliar technology at issue in them, have complicated the application of settled Fourth Amendment law.
Some courts have held that the Fourth Amendment was not implicated by the government’s malware, incorrectly focusing on the information obtained from the search—critically, the IP address—and not how and where the searches and seizures occurred. Those courts have relied on a separate line of cases that held that, when the government obtains an IP address from an ISP or other third party, the user lacks a reasonable expectation of privacy in the IP address, precisely because it was in the hands of a third party.
Even if we agreed with that precedent (generally, we don’t), it has no application to the Playpen cases. The government didn't obtain the IP address and other information from a third party: it got it directly from searching and seizing the user’s device. As one court correctly held:
The government is not permitted to conduct a warrantless search of a place in which a defendant has a reasonable expectation of privacy simply because it intends to seize property for which the defendant does not have a reasonable expectation of privacy. For example, if [the defendant] had written his IP address down on a piece of paper and placed it on his desk in his home, the government would not be permitted to conduct a warrantless search of his home to obtain that IP address. The same is true here.
As we wrote before, one court went so far as to say that the defendant had no reasonable expectation of privacy—and, thus, no Fourth Amendment protection—in a personal computer, located within a private home, because it was connected to the Internet. Personal computers inside the home should receive the greatest Fourth Amendment protection, not none at all, so it was deeply concerning to see a judge reach that conclusion.
Essentially, that court held that software vulnerabilities are akin to broken blinds in a person’s house, which allow the government to peer in and see illegal activity—an investigative technique that, although creepy, does not require a warrant. The court held that “Government actors who take advantage of an easily broken system to peer into a user’s computer” are essentially peering in through the digital equivalent of broken blinds.
Setting aside the difference between looking in a window from the street and actively hacking a computer, tying the protections of the Fourth Amendment to the relative strength of security measures sets a dangerous precedent. Many (if not most) physical security features, like a lock on a door, are easily defeated, yet no court would conclude that the government can warrantlessly search a home, simply because the lock could be picked.

What these decisions mean for the law of government hacking
There’s cause for concern about these decisions, but it’s not quite time to panic.
The legal rules that could ultimately flow from decisions like those described above—that the government may warrantlessly search an electronic device so long as it is only obtaining information that, in other contexts, has been disclosed to a third party; or that the government’s ability to warrantlessly search devices is checked only by its technological capacity to do so—are very bad for privacy, to say the least.
Fortunately, the decisions so far have all been at the district court level. That means that although another court might consider the decision persuasive, the decisions do not establish legal rules that other courts or the government must follow. It will be critically important to watch these cases on appeal, though. Decisions of the federal courts of appeals and the Supreme Court are binding on other courts and the government, so the rules the Playpen cases generate on appeal will create lasting legal rules.
Nevertheless, the cases are still creating a body of troubling decisions in an area that, until now, was relatively lightly covered in the federal courts, creating a kind of bedrock layer of precedent for government hacking. Before the Playpen prosecutions, only a handful of decisions involving government hacking existed; when these cases are all said and done, there may be a hundred. That makes it all the more critical that we get these cases right—and set the right limits on government hacking—at the outset.

Related Cases: The Playpen Cases: Mass Hacking by U.S. Law Enforcement
From cell-site simulators in New York to facial recognition devices in San Diego, law enforcement surveillance technologies are spreading across the country like an infectious disease. It’s almost epidemiological: one police department will adopt a new, invasive tool, and then the next and the next, often with little or no opportunity for the citizens to weigh in on what’s needed or appropriate for their communities. Sometimes even elected officials and judges have no idea how technologies are being used by the police under their supervision.
2016 is the year we start to turn it around. In California, we helped pass legislation to require transparency and public hearings on technologies such as cell-site simulators and automated license plate readers before they can be adopted by cities and counties. And earlier this year, the County of Santa Clara passed a groundbreaking ordinance limiting how and when law enforcement can adopt new surveillance technologies.
Today, EFF joins the ACLU and a diverse coalition of civil liberties organizations in launching the new campaign for Community Control Over Police Surveillance. This nationwide effort will pass ordinances on the local level that ensure that all affected communities will have a voice in deciding whether police may acquire a new surveillance tool. Without this reform, such decisions too often are made only by local law enforcement officials seeking to acquire the latest, shiny tools; by the federal government seeking to spread “anti-terrorism” funds and its own military-grade tech; and by the vendors aggressively marketing these devices to police departments.
The #TakeCTRL movement seeks to pass ordinances similar to that adopted by Santa Clara in 11 key and politically diverse municipalities across the country: New York City; Washington, D.C.; Richmond, Virginia; Miami Beach and Pensacola, Florida; Hattiesburg, Mississippi; Muskegon, Michigan; Madison and Milwaukee, Wisconsin; Seattle, Washington; and Palo Alto, California.
While each ordinance will be tailored to the needs of each municipality, all will be grounded in these eight critical principles:
1) Surveillance technologies should not be funded, acquired, or used without prior express city council approval.
2) Local communities should play a significant and meaningful role in determining if and how surveillance technologies are funded, acquired, or used.
3) The process for considering the use of surveillance technologies should be transparent and well-informed.
4) The use of surveillance technologies should not be approved generally; approvals, if provided, should be for specific technologies and specific, limited uses.
5) Surveillance technologies should not be funded, acquired, or used without addressing their potential impact on civil rights and civil liberties.
6) Surveillance technologies should not be funded, acquired, or used without considering their financial impact.
7) To verify legal compliance, surveillance technology use and deployment data should be reported publicly on an annual basis.
8) City council approval should be required for all surveillance technologies and uses; there should be no “grandfathering” for technologies currently in use.
This movement is supported by a wide variety of national groups, including EFF, the ACLU, Bill of Rights Defense Committee/Defending Dissent Foundation, Demand Progress, Million Hoodies Movement for Justice, NAACP, National Network of Arab American Communities, Restore the Fourth, South Asian Americans Leading Together, and the Tenth Amendment Center.
This effort is crucial for society at large, but it is especially important to marginalized and disadvantaged communities. As the ACLU articulates:
The increasing, secret use of surveillance technologies by local police, especially against communities of color and other unjustly targeted and politically unpopular groups, is creating oppressive, stigmatizing environments in which every community member is treated like a prospective criminal. The overuse of surveillance technologies has turned many non-white and poor neighborhoods into fishbowls, and some into virtual prisons, where their residents’ public behavior is monitored and scrutinized 24 hours a day.
The ACLU has put together the ultimate resource guide for the Community Control Over Police Surveillance at communityctrl.com, where you can learn more about the principles, the technologies, the targeted cities, and how you can get involved. We also encourage you to learn from the work EFF is doing on these issues through our Street-Level Surveillance hub.
This effort may not be the ultimate antidote to the plague of invasive police tech, but we believe that it will help build up the antibodies to ensure that our communities become resistant to unchecked surveillance.
Baycloud Systems has become the latest company to join the EFF’s Do Not Track (DNT) coalition, which opposes the tracking of users without their consent. Baycloud designs systems to help companies and users monitor and manage tracking cookies. Based in the UK, it provides thousands of sites across Europe with tools for compliance with European Union (EU) data protection laws.
In contrast to the U.S., with its scant legislative privacy protection and weak self-regulatory system, EU data protection law requires companies that collect user data to provide a legal basis for using it, the most important of which is user consent. And this requirement has real teeth: under the new General Data Protection Regulation, companies will soon face serious fines of up to 2 or 4 percent (depending on the violation) of annual worldwide turnover.
EU rules also require user consent before a site sets cookies, and public disclosure of information as to their purpose (such as feature functionality or behavioral profiling). Although the cookie rules have been applied unevenly and have not stopped tracking, the principle requiring user consent is sound.
But what are users consenting to? Companies often hide ridiculously wide claims of consent in their terms and conditions, knowing that hardly anyone will read or understand them. The consequences of consenting to tracking should be made clear and offer the user an informed choice, as our partner Medium does when you log in:
Medium's login interface offering clear information for DNT users.
Baycloud has developed a browser extension for Chrome, Bouncer, to give users more power over how they use DNT. Once set, the browser sends the DNT signal to every site visited. Bouncer monitors the DNT interaction with the webserver, shows the user what cookies are being set and checks if the site complies with DNT. If a site does not respect the DNT signal, and wants to run wild with your private information, Bouncer also blocks tracking cookies.
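Under the hood, the DNT mechanism is straightforward: a browser with the preference enabled attaches a "DNT: 1" header to every request, and a compliant server consults that header before setting any tracking cookies. As a rough illustration only (this is not Bouncer's actual code, and the helper and cookie names are hypothetical), a server-side check might look like this:

```python
def should_track(headers):
    """Return True only if the visitor has NOT asked not to be tracked.

    `headers` is a plain dict of HTTP request headers; a visitor who
    enabled Do Not Track sends the header "DNT: 1".
    (Illustrative sketch only, not Bouncer's actual logic.)
    """
    # Header names are case-insensitive in HTTP, so normalize first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") != "1"


def response_cookies(headers):
    """Decide which cookies to set for this request (hypothetical names)."""
    cookies = ["session=abc123"]          # functional cookie, always fine
    if should_track(headers):
        cookies.append("tracker=xyz789")  # analytics cookie, skipped under DNT
    return cookies
```

A DNT-hostile site simply ignores the header, which is why a client-side tool like Bouncer falls back to blocking the tracking cookies itself.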
Bouncer also implements the standard World Wide Web Consortium (W3C) interface so that sites can record in the browser whether users have consented to being tracked. A control panel enables users to edit their consent settings for individual sites. Users may exempt sites because they believe their data won't be abused, or because they are willing to trade data in exchange for services.
Bouncer DNT Check interface
Sadly, the W3C document has too many loopholes to deliver adequate protection for users. Ad companies, for example, can decide how much data collection is 'reasonably necessary and proportionate' for billing and auditing ad payments, or for monitoring how often ads are shown to specific users. Behind this jargon hides a back door for tracking: these exceptions would permit companies to keep a record of a user's browsing habits. Such principles are too vague and flimsy to serve as an acceptable standard for the Web. That's why EFF has built a coalition behind its own policy.
But the W3C has done important work on the technical aspects of the DNT signal, which will provide the machinery for whatever policy finally wins out. The ability to selectively manage and fine-tune consent is important for adoption by publishers, who can then hope to persuade users of their bona fides or value. Baycloud has been a leading contributor to that work, and we're thrilled to have them on our side in the campaign to fix the problem of online tracking.
After successfully defending MuckRock’s First Amendment right to host public records on its website earlier this summer, EFF filed documents in court on Monday seeking to end the last lawsuit brought against it in Seattle.
The lawsuit was one of three filed by companies against MuckRock, one of its users, and the city of Seattle after the user filed a public records request in April seeking information about the city’s smart utility meter program, including documentation of the technology’s security.
The lawsuits were all aimed at preventing disclosure of records the companies claimed contained trade secrets. In one of the cases, a company obtained a court order requiring MuckRock to de-publish two documents from its website that the city had previously released. A court quickly reversed that clear violation of MuckRock’s First Amendment rights and MuckRock put the public records back online.
After the dust settled, companies in two of the lawsuits agreed to dismiss MuckRock. This occurred after EFF explained that the website is an online platform that hosts its users' public records requests and any documents they receive. As such, MuckRock did not actually request the records subject to the lawsuits; it merely facilitated and hosted its user's request.
MuckRock thus has no particular interest in the lawsuits because the underlying dispute is about whether certain documents contained trade secrets that must be redacted or withheld under Washington state’s public records law.
The company in the third case, however, has refused to dismiss MuckRock. This is particularly curious because MuckRock currently does not host any documents from the company, Elster Solutions, LLC, that are subject to the public records request.
EFF’s motion asks the federal court hearing the suit to dismiss MuckRock for two reasons.
First, the motion argues that Elster has failed to allege that MuckRock has done anything wrong that would make it subject to the lawsuit.
Second, the court cannot entertain any claims against MuckRock because it is immune from suit under 47 U.S.C. § 230, a provision of the Communications Decency Act (often referred to as Section 230).
Section 230 provides broad protections for online platforms such as MuckRock, shielding them from liability based on the activities of users who post content to their websites. Given that broad immunity, MuckRock cannot be sued for hosting public records sought by one of its users regardless of whether they contain trade secrets.
We are hopeful that the court will dismiss MuckRock from the suit and allow it to focus on maintaining and improving its online public records platform. EFF also thanks our local counsel, Venkat Balasubramani of FOCAL PLLC, for his assistance with the motion.
Facebook’s recent censorship of the iconic AP photograph of nine-year-old Kim Phúc fleeing naked from a napalm bombing has once again brought the issue of commercial content moderation to the fore. Although Facebook has since apologized for taking the photo down from the page of Norwegian publication Aftenposten, the social media giant continues to defend the policy that allowed the takedown to happen in the first place.
The policy in question is a near-blanket ban on nudity. Although the company has carved out some exceptions to the policy—for example, for “photographs of paintings, sculptures, and other art that depicts nude figures”—and admits that their policies can “sometimes be more blunt than we would like and restrict content shared for legitimate purposes,” in practice the ban on nudity has a widespread effect on the ability of its users to exercise their freedom of expression on the platform.
In a statement, Reporters Without Borders called on Facebook to “add respect for the journalistic values of photos to these rules.” But it’s not just journalists who are affected by Facebook’s nudity ban. While it may seem particularly egregious when the policy is applied to journalistic content, its effect on ordinary users—from Aboriginal rights activists to breastfeeding moms to Danish parliamentarians who like to photograph mermaid statues—is no less damaging to the principles of free expression. If we argue that Facebook should make exceptions for journalism, then we are ultimately placing Facebook in the troubling position of deciding who is or isn’t a legitimate journalist, across the entire world.
Reporters Without Borders also called on the company to “ensure that their rules are never more severe than national legislations.” Indeed, while it is now largely accepted that social media companies take down content in response to requests from governments, the idea that these companies should temper their rules to be more in line with the liberal policies of other governments—to keep up nudity that violates no local regulation, and is inoffensive by the societal standards of many countries outside the United States—has not yet entered the public discussion.
Despite recent statements and certain exceptions, Facebook certainly doesn’t see nude imagery as a component of freedom of expression. In a letter to the Norwegian prime minister in which she apologized for the recent gaffe, the company’s COO, Sheryl Sandberg, wrote that “sometimes … the global and historical importance of a photo like ‘Terror of War’ outweighs the importance of keeping nudity off Facebook”. What Facebook hasn’t explained, however, is why it’s so important to keep nudity off the platform.
The company’s Community Standards state that the display of nudity is restricted “because some audiences within our global community may be sensitive to this type of content - particularly because of their cultural background or age.” Facebook’s concern for this unnamed set of users rings hollow; the fear of being blocked by conservative authoritarian governments is more likely the real impetus behind the policy.
As a company, nothing obliges Facebook to adhere to the principles of freedom of expression. The company has the right to convey, or remove, whatever content it chooses. But a near-blanket ban on nudity certainly contradicts the company’s mission of making the world more open and connected.
So what should Facebook do? Short of getting rid of the policy altogether, there are several simple changes the company could make that would place it more in line with both its own mission and the spirit of free expression.
First, Facebook could stop conflating nudity with sexuality, and sexuality with pornography by making changes to their user reporting mechanism. Currently, when users attempt to report such content, their first option reads: “This is nudity or pornography,” with “sexual arousal,” “sexual acts” and “people soliciting sex” as examples listed below. This creates a blurry line between non-sexual nudity (which is legal and uncontroversial in a number of jurisdictions in which the company operates) and sexual content.
Facebook's reporting mechanism conflates mere nudity with sexuality
Another option would be to apply content warnings. Facebook already employs such warnings for graphic violence (a subject that provokes greater concern in much of northern Europe than nude imagery) and could easily extend them to nudity as well. The company could also institute different guidelines for public and private content, allowing nudity on friends-only feeds, for instance.
Facebook could also consider whether its ban on female nipples—but not male ones—is a just policy. A number of countries and regions throughout the world have equalized policies toward toplessness, but Facebook’s policy remains regressive, and discriminatory. Furthermore, it often affects transgender users, an already vulnerable population.
Finally, Facebook could reconsider the punitive bans it places on users who violate the policy. Currently, users who violate the policy first have their content taken down, while a second violation typically results in a 24-hour ban—the same length of time meted out for seemingly more egregious policy violations.
All of these changes would help mitigate the confusion, concern, and accusations of censorship that incidents like the Kim Phúc takedown provoke. But if Facebook wants to avoid being seen as the world’s arbitrary and prudish censor, the company should perhaps spend more time thinking about—and articulating—why a ban on nudity is so important in the first place.
Has your content been taken down, or your account suspended, on a social media platform? Report your experience now on Onlinecensorship.org, a project of EFF and Visualizing Impact which aims to find out how social media companies’ policies affect global expression.
If you have the power to censor other people’s speech, special interests will try to co-opt that power for their own purposes. That’s a lesson the Motion Picture Association of America is learning this year. And it’s one that Internet intermediaries, and the special interests who want to regulate them, need to keep in mind.
MPAA, which represents six major movie studios, also runs the private entity that assigns movie ratings in the U.S. While it’s a voluntary system with no formal connection to government, MPAA’s “Classification and Ratings Administration” wields remarkable power. That’s because most movie theaters, along with retail giants like Wal-Mart and Target, won’t show or sell feature films that lack an MPAA rating. And a rating of “R” or “NC-17” can drastically limit the audiences who are allowed to view or buy a movie.
Power creates its own temptation. MPAA itself has been accused of rating independent films more harshly than those produced by MPAA’s own member studios. And this year, a class action lawsuit seeks to force MPAA to use its ratings system to eliminate tobacco imagery from children’s films. The lawsuit, Forsyth v. MPAA, claims that MPAA has a special legal duty to avoid harm to children, and because of that duty, MPAA should be required to give an “R” rating to every film that contains smoking or other tobacco use.
MPAA has responded by moving to dismiss the suit under California’s Anti-SLAPP law. The group argued that its movie ratings are a form of speech protected by the First Amendment. It denied having any legal duty to protect children from images of smoking. And MPAA argued—sensibly—that Mr. Forsyth’s claims are a slippery slope:
[Plaintiff] is trying to use the tort system to require [MPAA] to implement his policy goals. If Plaintiff’s claims were permitted to proceed, there would be no end to claims invoking [MPAA’s] purported duty to disregard its own opinions and instead to implement a given advocacy group’s preferred social policy in assigning ratings.
* * *
Plaintiff’s theory . . . has no logical stopping point. The rule would require [MPAA] to give an R rating to movies that depict any conduct that advocacy groups think unhealthy—for example, movies that depict alcohol use, gambling, contact sports, bullying, consumption of soda or fatty foods, or high-speed driving.
MPAA is right. The First Amendment generally prohibits using legal processes to regulate the opinions expressed by others, no matter how noble the purpose. In fact, the slippery slope of censorship is one of the primary reasons why courts and legislatures can almost never regulate speech based on its content: if one form of “harmful” speech is banned or limited, it’s hard to avoid banning or limiting speech on every subject that some powerful interest finds harmful. We expect that MPAA will prevail in this lawsuit.
But there’s an irony to MPAA’s position in this lawsuit, because at the same time it fights to protect the ratings board against co-opting by special interests, the trade association is also trying to co-opt other powerful private gatekeepers of speech into advancing MPAA’s own special interest: copyright enforcement. Internet intermediaries like webhosts, domain name registrars, search engines, and third-party platforms are, like MPAA’s ratings board, private organizations that stand between speakers and their audiences. Their roles give them power to suppress speech, by making it harder for audiences to access, or even making entire sites disappear from the Internet.
Power, once again, creates temptation. This year, MPAA made agreements with two domain name registries, Donuts and Radix, which control new top-level Internet domains such as .movie, .online, and .site. Both registries agreed to receive accusations from MPAA that particular websites are engaged in copyright infringement, and to consider taking away those websites’ domain names. MPAA, along with other representatives of major entertainment companies, has also been pushing ICANN, the group that oversees the domain name system, to mandate this new copyright enforcement regime worldwide.
There are many problems with this initiative, which we’ll be exploring in the coming weeks. But one lesson that MPAA should have learned this year is that once one special interest obtains power to block the channels of communication, others will come knocking. Many powerful interests want the power to edit the Internet, from corporations and wealthy individuals who want to suppress criticism to repressive governments seeking to quash dissent. Some may even have widely supported (though controversial) social goals, like stopping “hate speech,” blasphemy, or pornography. Like the plaintiff in the Forsyth case, all of these folks want these private companies and systems “to perform a different function . . . one [they] make no claim to serve.” Just as MPAA is right to worry that the Forsyth case could open the door to more control of the ratings board by various special interests, new copyright enforcement systems will quickly become enforcement systems for all kinds of speech that a corporation or government declares to be dangerous.
This has already happened in the copyright realm: Major ISPs in the United Kingdom are now required to block their customers from reaching entire websites that are deemed to be copyright infringers, using a system that was originally set up to block child pornography.
You might object that copyright is a law, while preventing smoking is merely a policy goal. But just as MPAA has no legal duty to promote a zero-tolerance message about smoking, intermediaries have no legal duty to police the Internet for copyright infringement, or to prevent their users from infringing. And just as laws on “hate speech,” blasphemy, and sedition vary widely between countries, copyright is not the same everywhere. Depictions of tobacco use are themselves subject to strict “plain packaging” laws in some countries. The more intermediaries on the global Internet are co-opted into regulating content, the more pressure they will face to apply the standards of the most censorious countries and organizations.
In the coming weeks, we’ll be exploring how speech on the Internet is being controlled by private agreements, and how Internet users can demand accountability and transparency in these Shadow Regulations. For now, even if the Forsyth case is quickly thrown out of court, it should serve as a cautionary tale: build a system that can regulate the speech of others, and the censors will beat a path to your door.
In December 2014, the FBI received a tip from a foreign law enforcement agency that a Tor Hidden Service site called “Playpen” was hosting child pornography. That tip would ultimately lead to the largest known hacking operation in U.S. law enforcement history.
The Playpen investigation—driven by the FBI’s hacking campaign—resulted in hundreds of criminal prosecutions that are currently working their way through the federal courts. The issues in these cases are technical and the alleged crimes are distasteful. As a result, relatively little attention has been paid to the significant legal questions these cases raise.
But make no mistake: these cases are laying the foundation for the future expansion of law enforcement hacking in domestic criminal investigations, and the precedent these cases create is likely to impact the digital privacy rights of Internet users for years to come. In a series of blog posts in the coming days and weeks, we'll explain what the legal issues are and why these cases matter to Internet users the world over.
More in this series: Some Fourth Amendment Basics and Law Enforcement Hacking
So how did the Playpen investigation unfold? The tip the FBI received pointed out that Playpen was misconfigured, and its actual IP address was publicly available and appeared to resolve to a location within the U.S. After some additional investigation, the FBI obtained a search warrant and seized the server hosting the site. But the FBI didn’t just shut it down. Instead, the FBI operated the site for nearly two weeks, allowing thousands of images of child pornography to be downloaded (a federal crime, which carries steep penalties). That decision, alone, has spurred serious debate.
But it’s what happened next that could end up having a lasting impact on our digital rights.
While the FBI was running Playpen, it began sending malware to visitors of the site, exploiting (we believe) a vulnerability in Firefox bundled in the Tor browser. The government, in an effort to downplay the intrusiveness of its technique, euphemistically calls the malware it used a “NIT”—short for “Network Investigative Technique.” The NIT copied certain identifying information from a user’s computer and sent it back to the FBI in Alexandria, Virginia. Over a thousand computers, located around the world, were searched in this way.
As far as we are aware, this is the most extensive use of malware a U.S. law enforcement agency has ever employed in a domestic criminal investigation. And, to top it all off, all of the hacking was done on the basis of a single warrant. (You can see our FAQ here for a bit more information about the investigation.)
As it stands now, the government has arrested and charged hundreds of suspects as a result of the investigation. Now defendants are pushing back, challenging the tenuous legal basis for the FBI’s warrant and its refusal to disclose exactly how its malware operated. Some courts have upheld the FBI’s actions in dangerous decisions that, if ultimately upheld, threaten to undermine individuals’ constitutional privacy protections in personal computers.
The federal courts have never dealt with a set of cases like this—both in terms of the volume of prosecutions arising from a single, identical set of facts and the legal and technical issues involved. For the past few months, we’ve been working to help educate judges and attorneys about the important issues at stake in these prosecutions. And to emphasize one thing: these cases are important. Not just for those accused, but for all of us.
There are very few rules that currently govern law enforcement hacking, and the decisions being generated in these cases will likely shape those rules for years to come. These cases raise serious questions related to the Fourth Amendment, Rule 41 (an important rule of criminal procedure, which the Department of Justice is in the process of trying to change), and the government’s obligation to disclose information to criminal defendants, including information about vulnerabilities in widely used software products. We’ll tackle each of these issues, and others, in our series of blog posts designed to explain why the FBI’s takedown of Playpen matters for all of us.
Related Cases: The Playpen Cases: Mass Hacking by U.S. Law Enforcement
The Texas Department of Criminal Justice (TDCJ) sent shockwaves through the prisoner rights community in April when it announced a new policy forbidding inmates from participating in social media. The memo, distributed in English and Spanish within prisons, read:
[O]ffenders are prohibited from maintaining active social media accounts for the purposes of soliciting, updating, or engaging others, through a third party or otherwise.
Since the inception of social media, activists have used it to draw attention to incarceration issues, such as spreading the word about prison conditions or calling for sentencing reform. Supporters use social media to promote innocence and clemency campaigns and to fundraise for appeals. Social media has also allowed the family and friends of inmates to share updates and moral support on a more personal level. Often these accounts involve posting content and artwork generated by the inmates themselves.
The wording of the new TDCJ rule was vague and chillingly broad, and the community was unsure how it would be applied. Some were afraid that continuing their social media advocacy would result in their loved ones ending up in solitary confinement for supposedly breaking the new rule.
Two such advocates reached out to EFF: Esther Große and Carrie Christensen. These women work with a high-profile inmate, Kenneth Foster, to try to secure his release and reform Texas’ so-called “Law of Parties,” which allows the state to assign capital punishment to accessories to a murder, even if they didn’t actually commit the act. Foster was facing the death penalty under this rule, but hours before his scheduled execution in 2007, Gov. Rick Perry commuted his sentence to life imprisonment. Ever since, Foster has engaged in political activism from behind bars through his writing and poetry.
Esther and Carrie had been running various social media accounts to support Foster. They maintained editorial control of these accounts and posted his writing. But they voluntarily suspended these accounts after the new TDCJ rule was announced for fear of the impact on Foster. EFF communicated (.pdf) with TDCJ on their behalf to establish better clarity on what will and will not be permitted under the policy. Based on the information we and others (.pdf) received from TDCJ, we can now share lessons we’ve gleaned for operating a social media campaign regarding an inmate.
We need to preface these tips with some caveats. First, this blog post is in no way legal advice. Following these tips may not keep you or an inmate out of trouble. The policy is still in its early application, and we do not know how TDCJ will react on a case-by-case situation, especially when they have yet to fully articulate the process in writing.
Furthermore, these tips only apply to Texas inmates; although they may be useful for consideration when engaging in social media related to inmates in other states, we can’t say with any certainty how other corrections departments will react.
Finally, while we are offering these tips, we do not believe philosophically that TDCJ’s policy is fair or respectful to free speech. It’s frustrating that we even have to write this post in the first place.
#1 Consider the Risks
Below we offer our interpretations of how TDCJ will apply its new rule based on correspondence and telephone exchanges with the agency’s attorneys. However, we are not wholly confident that these rules will be applied consistently and fairly. It is unfortunately not uncommon for corrections officers to leverage vague rules to punish inmates in retaliation for criticism or political activity.
#2 Pages, Not Profiles or Accounts
TDCJ seems most concerned with preventing inmates from maintaining accounts or profiles that act as if they directly belong to the inmate. For example, TDCJ is more likely to crack down on accounts registered to the inmate’s email account or that include posts that are exclusively first person in nature. This is especially true of accounts that an inmate can access or update through a contraband cell phone.
TDCJ has indicated it is opposed to supporters registering an account for an inmate, but not when supporters create pages about the inmate. In Facebook, this could mean creating a “page” or “group” that is a subset of the supporter’s own account, rather than an independent account for the inmate. Facebook has indicated that it won’t remove groups or pages created about inmates. However, Facebook treats operating an account on behalf of an inmate as a violation of their Terms of Service, which prohibits third parties from accessing another person’s account.
On other social media, such as Twitter, Instagram, or Tumblr, the difference between an account for an inmate and account about an inmate can be less clear. One way a supporter can indicate that an account does not belong to an inmate is by adjusting the title. Some examples: “Free J. Smith,” “The J. Smith Project,” “Friends of J. Smith,” “J. Smith Support Network,” or a title that doesn’t directly reference an inmate at all.
One further thing to note is that TDCJ has stated it will not apply these rules to websites or blogs. So, setting up an independent site that is not on a social network could be an option.
#3 Maintain Editorial Control
TDCJ does not outright prohibit supporters on the outside from posting writing or artwork generated by inmates. However, TDCJ says it does oppose third parties acting as direct pass-throughs who post anything and everything an inmate requests.
TDCJ has indicated that it will not take action when supporters are the ultimate deciders of what is and is not posted to social media about prisoners. This means, most importantly, that supporters must exercise final editorial control and make independent decisions about which essays and messages by inmates are appropriate for the supporters’ social media page. Regularly editing submissions from an inmate, say for punctuation or clarity, just as a newspaper editor would, may also help to establish that a supporter is exercising their own right to free speech rather than acting as a scrivener service for the inmate.
Supporters could also add a clear disclosure to the “About” section that the account is maintained by free-world users. In addition, when posting a communication from an inmate, adding “Letter from J. Smith:” to the beginning or “— J. Smith” to the end would signify to TDCJ and to everyone else that it’s not the inmate speaking directly, but rather something that the supporter received and chose to publish.
#4 Same Rules As Snail Mail
TDCJ already has in place a strict set of rules for what is appropriate for inmate correspondence. You can find these policies in Chapter 3 of TDCJ’s Offender Orientation Handbook.
Supporters should not use social media to contact victims, witnesses, or anyone on the inmate’s “Negative Mailing List” on behalf of an inmate. TDCJ is also looking for anything that could be considered a security threat, such as escape plans, contraband smuggling, or outside criminal enterprises. Some supporters choose to send inmates printouts of social media comments. TDCJ will likely treat Internet printouts containing prohibited content sent to inmates in the same way they treat other mail that the inmate would be prohibited from receiving.
We need to reiterate that there is no guarantee that following these guidelines will keep a user in the clear. However, if you find that your account has been suspended, that an inmate you’re working with has been punished under this rule, or that you have run into any other complication, please contact firstname.lastname@example.org so we may investigate further. If you find content removed, please submit what information you have to onlinecensorship.org.
Last month we submitted comments to Customs and Border Protection (CBP), an agency within the U.S. Department of Homeland Security, opposing its proposal to gather social media handles from foreign visitors from Visa Waiver Program (VWP) countries. CBP recently provided its preliminary responses (“Supporting Statement”) to several of our arguments (CBP also extended the comment deadline to September 30). But CBP has not adequately addressed the points we made.
1) We argued that the proposal would be ineffective at protecting homeland security, because would-be terrorists seeking to enter the U.S. under the VWP are unlikely to voluntarily provide social media handles that link to incriminating posts that are publicly available. In its Supporting Statement, CBP said:
Extensive research by DHS and our interagency partners has determined that these additional data elements will increase the ability to stop these travelers before they attempt to travel to the United States.
It may help detect potential threats because experience has shown that criminals and terrorists, whether intentionally or not, have provided previously unavailable information via social media that identified their true intentions.
But CBP has not shared its purported “extensive research” or provided any details about its asserted “experience.”
Before adopting a new policy with significant privacy and free speech implications, a federal agency should provide the public with the evidence supporting the agency’s claims of efficacy. CBP has failed to do so here.
2) We argued that the proposal would violate the privacy and freedom of speech of innocent travelers and their American associates. We also made the point that given the confusing wording of the proposed language (“Information associated with your online presence—Provider/Platform—Social media identifier”), travelers may over-share and turn over not just their handles, but also their passwords.
CBP said, “If an applicant chooses to answer this question, DHS will have timely visibility of the publicly available information on those platforms, consistent with the privacy settings the applicant has set on the platforms.”
Yet the agency notably did not say what the government would do if a traveler does provide log-in information that would enable access to private online content.
3) In arguing that the proposal would violate privacy and freedom of speech, we also explained that the proposal is vague and overbroad because it contains no definitions or limitations as to what counts as a “social media identifier,” which may lead VWP visitors to share a variety of online accounts that reveal highly personal details about them.
CBP said, “A social media identifier is any name, or ‘handle’, used by the individual on platforms including, but not limited to, Facebook, Twitter, LinkedIn, and Instagram. Applicants are able to volunteer up to 10 identifiers.”
Yet the agency did not say that it would provide such explanatory text on the online ESTA application or paper I-94W form.
4) In arguing that the proposal would violate privacy and freedom of speech, we expressed concern that the government might use the social media information it gathered in unknown and future non-travel contexts to the detriment of VWP travelers and their American associates.
CBP admitted that this will occur:
ESTA information may be shared with other agencies that have a need to know the information to carry out their national security, law enforcement, immigration, or other homeland security functions. Any and all information sharing with agencies outside DHS will abide by existing memoranda of understanding between the agencies and be consistent with applicable statutory and regulatory requirements.
This exacerbates the chilling effect we discussed in our comments. Innocent VWP travelers may engage in self-censorship and cut back on their social media activity—even in their home countries—out of fear of being misjudged by the U.S. government, or of putting their friends and loved ones at risk.
5) We argued that the proposal would have constitutional implications for the American associates of VWP visitors, or for American travelers themselves if the program were extended to request their social media handles or include invasive searches at the border of mobile devices and “apps” or other means of accessing cloud content.
In response to constitutional concerns, CBP merely listed a statute and regulation and said, “These authorities apply to the collection of social media identifiers.”
The agency failed to acknowledge that the Constitution has supremacy over any legislative rule.
6) We argued that the proposal would spur other countries to demand the same information from American travelers, which would put Americans at risk overseas.
CBP said, “All sovereign countries are within their authority to impose travel regulations and entry requirements. DHS does not dictate the rules and regulations of other countries. DHS has added additional fields to the ESTA application over the last two years and has not seen other countries reciprocate in the questions asked to U.S. visitors.”
Yet the agency failed to recognize that seeking social media handles, including from people who have legitimate reasons for being pseudonymous online yet publicly vocal, is particularly intrusive and so may incite certain foreign governments to demand the same information from American travelers.
7) We argued that the proposal contains no standards by which CBP would evaluate public social media posts, to ensure that posts would not be taken out of context and innocent individuals would not be misjudged.
CBP said, “Highly trained CBP personnel will independently research publicly available social media information and will be able to recognize factors such as context. CBP will make case-by-case determinations based on the totality of the circumstances.”
Yet this provides little guidance for the public or even the government agents tasked with evaluating social media posts. CBP did not address our specific concern about ideological exclusion, where someone such as an academic may not pose a security risk but has views critical of American foreign policy.
As we said in our comments, we do not doubt that CBP and DHS are sincerely motivated to protect homeland security. However, the proposal to collect social media handles has serious flaws—and the government has failed to adequately address them.

Related Cases: United States v. Saboonchi
It's now or never for the Trans-Pacific Partnership (TPP). It's almost certain that if the TPP can't pass in its present form during the lame duck session of Congress, before the new President takes office, it won't pass at all.
You may also have heard that a lame duck vote on the TPP is off the table—but that's false. In fact, the administration's pressure for such a vote to take place following the election has never been greater. Officials held a new round of meetings just last week with business interests to encourage them to sell the flawed agreement to an increasingly skeptical public and Congress. So you shouldn't believe for a moment that the TPP can't still pass within the next few months. It can.
At this late stage, literally every single vote against the TPP counts, and the most effective way to influence how your representative votes on the TPP is to call them by telephone. Making a phone call is more stressful and troublesome than signing an online petition—we get it, really! But we wouldn't ask you to do it if it weren't important, and it's never been more important for you to make your voice heard about the TPP than it is right now.
We've created a form where you can enter your phone number and be automatically called back to speak with your representative. We're asking you to deliver them a short and simple message—tell them to vote no on the Trans-Pacific Partnership.
If you have time during the call, you can also mention why you want the representative to vote no—because the TPP would lock the United States into its current broken copyright rules and empower foreign corporations to sue us if we reform them. Vital components of U.S. law such as fair use and net neutrality are missing or non-binding in the TPP. But even if you don't have time to go into that much detail, that's OK—the most important message is to vote no to the TPP.
EFF isn't the only organization urging you to call your representatives about the TPP today; we are joining with dozens of other public interest groups from all sectors of American society, including labor unions such as the AFL-CIO; consumer groups such as Public Citizen; progressive organizations like CREDO Action, MoveOn, and Daily Kos; and other digital rights groups like Fight for the Future. Together, we plan to keep phones on the Hill ringing non-stop with a loud and clear message against the TPP.
If you don't want to see the TPP pass into law before the next President takes office, click that big red button below and you'll be well on your way to helping us kill the deal once and for all.
Edward Snowden’s 2013 release of once-secret documents about U.S. intelligence surveillance of the private communications of Americans and non-U.S. persons focused much-needed attention on the problem of how to control the burgeoning U.S. surveillance-industrial complex.
But while the USA Freedom Act began to limit national security surveillance to some extent—and we hope for more limits given that FISA Section 702 is scheduled to expire in December 2017—it did little to address the underlying problem of excessive executive branch secrecy. We remain largely in the dark about many of the facts of surveillance conducted by the U.S. government and its foreign intelligence community allies.
Clearly, Congress cannot fulfill its vital role in the constitutional scheme of separation of powers without clear knowledge about intelligence activities. The House Permanent Select Committee on Intelligence (HPSCI) was created in 1977 to exert meaningful oversight over the intelligence community in the wake of revelations of wide-scale abuses and violations of law.1 HPSCI and its Senate counterpart, the Senate Select Committee on Intelligence, were intended to consolidate review of intelligence matters, inform the entire Congress of intelligence activities and hold public hearings to inform the broader public.
Since 9/11, however, it’s been obvious that these special intelligence committees have not effectively overseen the NSA, FBI, and other members of the intelligence community.
A recent Guardian article aptly summed up the problem:
The intelligence committee is a culture all to itself. Unlike most congressional panels, the vast majority of its work occurs in secret. It is term-limited, so members’ expertise levels vary. The byzantine rules of classification restrict staffers who might bolster legislators’ knowledge from accessing significant relevant information. Membership, particularly on committee leadership, instantly boosts the profile of ambitious legislators, who become de facto surrogates for intelligence officials on cable news. All of this means that a committee tasked as the primary avenue for imposing accountability on secret agencies faces the ever-present risk of becoming captive to the agencies they oversee.
Today, we join with Demand Progress, R Street Institute, and FreedomWorks in a white paper calling on the House of Representatives to reinvigorate its commitment to provide a meaningful check on executive-branch surveillance and reform how it conducts oversight over intelligence matters. This paper is complemented by a letter from 33 organizations endorsing stronger oversight of the intelligence community.
When the House convenes for the 115th Congress in January, it should update its rules to enhance opportunities for oversight by HPSCI members, by members of other committees with related jurisdiction, and by all other representatives. The House also should establish a select committee to review how it engages in oversight.
EFF has previously called on Congress [.pdf] to engage in a thorough review of intelligence community activities based on the model set by the Seventies-era “Church Committee,” and we continue to urge such large-scale review.
But internal reform of House rules that currently make it exceedingly difficult for the vast majority of our elected representatives to know and understand what the intelligence community is doing can also pave the path to meaningful reform.
For example, most HPSCI members lack a personal staffer for their HPSCI work, which requires a security clearance, and instead depend on committee staff—hired by and answerable to the HPSCI chair. Yet when eight HPSCI members sought funding ($125,000) to allow a staffer from each member's personal office to obtain sufficient clearance to assist with intelligence oversight, that request went nowhere, as far as we know.
Under our constitution of separated powers, access to information is essential to democratic accountability. We were fortunate that Snowden leaked significant information about surveillance abuses, but oversight by whistleblower revelations is not a sustainable strategy. Without reform of excessive secrecy and overclassification, the window of transparency provided by the Snowden troves will close as the government creates new surveillance programs protected by secrecy and invulnerable to Congressional oversight.
- 1. For examples of the abuses, see the Report of the Select Committee to Study Government Operations with Respect to Intelligence Activities, also known as the "Church Committee" report, S Rept. 94-755 (1976), detailing assassination plots against foreign leaders, surveillance of domestic political activities and much more, available at http://www.intelligence.senate.gov/churchcommittee.html; see also the Pike Committee Report, available in print format from the Library of Congress at http://www.worldcat.org/title/cia-the-pike-report/oclc/3707054.
EFF is pleased to announce the addition of Camille Ochoa to the Activism team as Coordinator of Grassroots Advocacy. Camille has already been with EFF on the Operations team for over a year, and has recently worked with the Activism team to call attention to California's flawed gang databases as well as problematic police surveillance. She is now moving to the Activism team full-time to drive forward EFF's Electronic Frontier Alliance.
I asked Camille a few questions to learn more about what led her to EFF and her vision for EFF's outreach efforts.
Why did you decide to work at EFF?
When I came to EFF about a year ago, I didn't necessarily need a job. I was living contentedly in San Diego, impacting and shaping the next generation of activists (i.e. raising my kids). When I was presented the opportunity to apply to such a socially significant organization, though, I had to jump on it. I read about the work they were doing to combat the spread of the surveillance state and hold those in power to account, and I wanted in! My family dropped everything and moved 500 miles north to support me in making this new dream a reality.
I have spent the past year in a part-time position on the Operations team, which afforded me an amazing view of the organization as a living, thriving entity. It is through my exposure to EFF's work in that capacity that I became better equipped to branch out and offer my talents to the Activism team.
What are you most looking forward to working on as Coordinator of Grassroots Advocacy?
I get really excited when I think about the goals we have for the growth of the Electronic Frontier Alliance. It necessitates me proactively reaching out to groups and individuals who may not have already had us on their radar, and that is something I've spent the past year yearning to see more of. I'm glad I get to be involved with something that's so experimental and fluid - to help build something that's so fresh.
What is the Electronic Frontier Alliance?
I'm glad you asked! I think of it as something like a networking hub where student groups, community groups, and hacker spaces from around the country can connect, collaborate, and be inspired. For example, a university-based group that is focused on expanding the presence of women in tech can hear about the work that a hacker/maker group is doing 30 miles away. Perhaps those two entities can host a community workshop together on the importance of open source culture, or one group can expose the other to some new outreach materials and methods.
We're not looking solely at privacy activists to participate here (though they are more than welcome!). One of the purposes of the EFA is to expose as many people as possible to EFF's work and mission, and that means reaching out to those who don't already have us on their speed dial. As long as the groups espouse the 5 simple principles of the EFA, they can apply to be a member.
What roles do outreach and diversity play in EFF's mission to defend digital rights?
I am happy to say that outreach and diversity play a huge role in EFF's future. There's a very organic awareness happening here - every employee has "bought in" to a vision of EFF's community as more diverse and expansive than it has been in the past. We have so much love and respect for our long-time supporters, and we're all eager to expand that base in a way that keeps us relevant to this dynamic, evolving American society. EFF's work has always benefited people from all walks of life, but perhaps only a small percentage were actually aware of that.
What advice do you have for people or groups who want to get involved?
I would encourage everyone who cares about civil liberties issues to, first of all, venture into your communities and meet your neighbors! Many people care about these things and you may find really innovative ways to connect with like-minded people around you. Some examples that spring to mind are digital rights-themed book clubs or a movie screening followed by a group discussion on the abuse of surveillance technology.
Groups that are already up and running can apply to be part of the EFA here. We do monthly video conferences where we check in and get aligned on each other's events or campaigns. You'd be amazed how much energy people all over the country are generating on a grassroots level.
Let's talk football. Who's your team?
The Raiders are due, man! (Unless they move to Vegas.)
The FCC is about to make a decision about whether third-party companies can market their own alternatives to the set-top boxes provided by cable companies. Under the proposed rules, instead of using the box from Comcast, you could buy your own from a variety of different manufacturers. It could even have features that Comcast wouldn’t dream of, like letting you sync your favorite shows onto your mobile phone or search across multiple free TV, pay TV, and amateur video sites.
We’ve been closely following the “Unlock the Box” proposal since it was first introduced in February, but its history goes back much further. Congress first authorized the FCC to enact rules bringing competition to the set-top box market 20 years ago, as a part of the Telecommunications Act of 1996. We’re so close to finally unlocking the box, but pay TV providers and big content companies have been throwing out every distracting argument they can to stop it.
This week, the FCC commissioners will testify before the Senate Commerce Committee. This might be our last chance. Let’s use this opportunity to send the commissioners a clear message: consumers should drive the set-top box market, not media conglomerates.
When people have talked about Unlock the Box, it’s mainly been about how the rule would stimulate competition. It’s a basic principle of economics that when companies have to compete for your money, the product improves. That’s why we have antitrust laws preventing companies from maintaining a monopoly through unfair means. If your cable company has to compete with other set-top box manufacturers, then they’ll have to create a better product.
This isn’t just about healthy competition, though. It’s about much more. It’s about how much control we give big content owners over our technology. It’s about whether media giants can use copyright to hold our tech back.
Just as they did thirty years ago in the fight over VCRs, media conglomerates are trying to dominate the discussion. Cable companies and big movie and television studios have been arguing that the FCC’s proposal takes too much power out of the hands of content companies. Implicit in their argument is the assertion that copyright should let them control how, where, and when you watch TV, and what hardware you use to do it.
That’s not how copyright works, and it’s easy to see why. Imagine if a cable network tried to require that viewers watch its programs on a 42-inch television, or if a book publisher made you sign an agreement that you can only use a certain brand of light bulb to see its books. By design, copyright grants rights holders a specific and limited set of rights to their works—it does not give them the right to attach unlimited strings to others’ use of those works.
The fight over set-top boxes isn’t just about stimulating competition to bring higher quality products to market (as important as that is)—it’s about your basic rights as a consumer. It’s crucial that we send the message to the FCC commissioners: copyright doesn’t let content companies unfairly control technology.
Update (September 20, 2016): Thanks to everyone who participated in this campaign. You sent over 1000 tweets to Commerce Committee members, making it clear that their constituents care about bringing fair competition to the set-top box market.
Chairman Tom Wheeler has circulated revised rules to his fellow commissioners, but they have not yet been made public. A fact sheet published by the FCC indicates that under the new proposal, pay TV providers will be required to provide apps that consumers can use to watch their programming on third-party set-top boxes. It appears that this approach won’t give box manufacturers the same ability to customize and enhance the TV viewing experience that the original proposal would have.
The decision now lies in the FCC commissioners’ hands. We will be watching closely to see what happens.

Related Cases: FCC Set-Top Box Rulemaking
Imagine: you're a programmer who loves to code. You're studying at college, but you're also working as a freelance web developer. In what spare time you have, you polish and release your best work under an open source license, for the world to use. Your father has grown sick and may be dying, and so you take a short break to travel back to the country of your birth to visit him.
After the long flight, you take a walk along the streets of the capital — perhaps to shake off your jetlag. Two men approach you, and begin aggressively questioning you. You're confused. Are they police officers? Without warning, they grab you by the arm, handcuff you, and force you into an unmarked sedan. You are thrown into solitary confinement, and held there for months, out of contact with the outside world. You are tortured. You are told that you are a criminal mastermind behind a network of evil websites. If you confess, they say, you will be released. You confess. They show your confession on national television. Your mother has a heart attack when the confessions are shown. You are sentenced to death. Your father dies as you await your execution.
That is the horrific story of Saeed Malekpour, a Canadian resident and programmer who was seized by Iran's Revolutionary Guard in 2008. Following international pressure, Saeed's death sentence has been commuted to life imprisonment, but he still remains in jail, thousands of miles away from his adopted country of Canada, trapped in Iran's notorious Evin prison. His case remains one of the most disturbing in our Offline list of technologists imprisoned for their work.
There is absolutely no evidence that Saeed had any connection to the pornographic websites his captors accused him of running. It is Saeed and his family's belief that he was selected for arrest because his name was on an open source image-uploading utility used by the site.
Saeed was also chosen for political reasons. At the time of his arrest, Iran's hardliners were working to exert control over the Net and its creators. Arresting a coder living in the West and accusing them of being a foreign spy running a Persian language porn network was intended to paint the Net as a channel for corrupt Western influence—and to demonstrate that no-one, not even coders living in a foreign country, could escape punishment.

Justin Trudeau Can Restore Saeed's Freedom
Saeed's freedom has always depended on the global attention his case has received. Every improvement in his condition has come as a result of international pressure. Now is the perfect moment to press for his release and return. Canada's new government, led by Justin Trudeau, has begun negotiating with Iran to normalize their diplomatic and economic ties. Making such an agreement conditional on Saeed's release would allow him to return home to his family in Canada.
Unfortunately, the Trudeau administration has barely mentioned Saeed's case since gaining power. While Canada's foreign minister, Stéphane Dion, has said that he will work to free other Canadian prisoners, he told Saeed's sister that he has "limited" ability to intervene in Saeed's case because Saeed is an Iranian citizen.
This is not true: when Saeed, who is a Canadian permanent resident, faced a death sentence, Canada's Parliament voted unanimously to hold the Iranian authorities accountable. Saeed has been listed as a concern of Canada numerous times in previous ministerial communications. In the past, Canada has adopted Saeed's plight as eagerly as Saeed had adopted Canada as his home country.
That's why we're asking Net users and creators around the world to write to Trudeau and Dion now, to ask them to fight for Malekpour's freedom. Just as Iran needs to hear Canada's voice in support of Saeed Malekpour, the Canadian government needs to know that the world has not forgotten Saeed.
Please take two minutes to join us in mailing Canada's leaders and ask them to include Saeed's case in their negotiations with Iran.
Saeed was punished for being a programmer, willing to share his work with the rest of the Net. Now the Internet has a chance to save Saeed.
In a case that threatens to cause turmoil for thousands if not millions of websites, the Court of Justice of the European Union decided today that a website that merely links to material that infringes copyright can itself be found guilty of copyright infringement, provided only that the operator knew or could reasonably have known that the material was infringing. Worse, the operator will be presumed to know this if the links are provided for "the pursuit of financial gain".
The case, GS Media BV v. Sanoma, concerned a Dutch news website, GeenStijl, that linked to leaked pre-publication photos from Playboy magazine, as well as publishing a thumbnail of one of them. The photos were hosted not by GeenStijl itself but at first by an Australian image hosting website, then later by Imageshack, and subsequently still other web hosts, with GeenStijl updating the links as the copyright owner had the photos taken down from one image host after another.
The court's press release [PDF] spins this decision in such a positive light that much reporting on the case, including that by Reuters, gets it wrong, and assumes that only for-profit websites are affected by the decision. To be clear, that's not the case. Even a non-profit website or individual who links to infringing content can be liable for infringing copyright if they knew that the material was infringing, for example after receiving notice of this from the copyright holder. And anyway, the definition of "financial gain" is broad enough to encompass any website, like GeenStijl, that runs ads.
This terrible ruling is hard to fathom given that the court accepted "that hyperlinks contribute to [the Internet's] sound operation as well as to the exchange of opinions and information in that network", and that "it may be difficult, in particular for individuals who wish to post such links, to ascertain whether [a] website to which those links are expected to lead, provides access to works [that] the copyright holders … have consented to … posting on the internet". Nevertheless, that's exactly what the judgment effectively requires website operators to do, if they are to avoid the risk of being found to have knowingly linked to infringing content.
There are also many times when knowingly linking to something that is infringing is entirely legitimate. For example, a post calling out a plagiarized news article might link to the original article and to the plagiarized one, so that readers can compare and judge for themselves. According to this judgment, the author of that post could themselves be liable for copyright infringement for linking to the plagiarized article—madness.
This judgment is a gift to copyright holders, who now have a vastly expanded array of targets against which to bring copyright infringement lawsuits. The result will be that websites operating in Europe will be much more reticent to allow external hyperlinks, and may even remove historical material that contains such links, in fear of punishing liability.var mytubes = new Array(1); mytubes = '%3Ciframe width=%22560%22 height=%22315%22 src=%22https://www.youtube-nocookie.com/embed/YNNNPD4cwAg?rel=0?autoplay=1%22 frameborder=%220%22 allowfullscreen=%22%22%3E%3C/iframe%3E';
Share this: Join EFF
Update: This hearing will be held at 9:00 am. In an order issued Friday, the court rescheduled arguments in the case for 9:00 am.
Washington, D.C.—On Monday, September 12, Electronic Frontier Foundation (EFF) Legal Director Corynne McSherry will urge a federal court to confirm that the public has a right to access and share the laws, regulations, and standards that govern us and cannot be blocked by overbroad copyright claims.
The court in Washington, D.C., is hearing arguments in two cases against EFF client Public.Resource.Org, an open records advocacy website. In these suits, several industry groups claim they own copyrights on written standards for building safety and educational testing they helped develop, and can deny or limit public access to them even after the standards have become part of the law. Standards like these that are legal requirements—such as the National Electrical Code—are available only in paper form in Washington, D.C., in expensive printed books, or through a paywall. By posting these documents online, Public.Resource.Org seeks to make these legal requirements more available to the public that must abide by them. The industry groups allege the postings infringe their copyright, even though the standards have been incorporated into government regulations and, therefore, must be free for anyone to view, share, and discuss.
McSherry and co-counsel Andrew Bridges at Fenwick & West will argue at the hearing that our laws belong to all of us and private organizations shouldn’t be allowed to abuse copyright to control who can read, excerpt, or share them. They will be assisted by EFF Senior Staff Attorney Mitch Stoltz and Fenwick & West Associate Matthew Becker.
Hearing in ASTM v. Public.Resource.org and AERA v. Public.Resource.org
EFF Legal Director Corynne McSherry
Monday, September 12, 9:00 am
Courtroom 2, 2nd Floor
U.S. District Court for the District of Columbia
333 Constitution Ave. N.W.
Washington, D.C. 20001