Your smartphone, navigation system, fitness device, and more know where you are most of the time. Law enforcement should need a warrant to access the information these technologies track.
Lawmakers have a chance to create warrant requirements for the sensitive location information collected by your devices.
Sen. Ron Wyden and Reps. Jason Chaffetz and John Conyers reintroduced the Geolocation Privacy and Surveillance Act (H.R. 1062) earlier this week. Congress should quickly move this bill and protect consumers’ privacy from warrantless searches.
Currently, law enforcement must obtain a warrant before using its own GPS device to track individuals—such as by attaching a GPS unit to a suspect’s car—under the Supreme Court’s 2012 ruling in U.S. v. Jones. But that kind of court oversight is missing when law enforcement goes to a third-party company for location information, or when it uses devices that mimic cellphone towers and siphon off users’ location information.
Tell Congress to put in place basic and necessary privacy protections for the sensitive location information collected by the devices in your pockets, in your car, and on your wrist.
If a patent troll threatens your company, can you go to your nearest federal court and ask for a ruling that the patent is invalid or that you aren’t infringing it? According to the Federal Circuit (the court that hears all patent appeals), the answer is usually no. The court has a special rule for patent owners: demand letters cannot create jurisdiction. EFF, together with Public Knowledge, recently filed a friend-of-the-court brief asking for this rule to be overturned. In a decision this week, however, the Federal Circuit reached the right result for the accused infringer in the case but left its bad law largely in place.
In Xilinx v. Papst Licensing, German patent troll Papst accused Xilinx of infringing a patent relating to memory tests in electronics. Papst sent Xilinx a couple of letters and visited the company at its offices in California to demand payment of a license fee. Xilinx then filed a lawsuit in the Northern District of California asking the court to rule that the patent was invalid and that it did not infringe. The district court dismissed the case. On appeal, the Federal Circuit was asked to determine whether the California district court could exercise personal jurisdiction over Papst.
At EFF we’ve long complained about unfair rules in patent cases that give patent owners almost complete control over where disputes are litigated. The Federal Circuit has developed two strands of jurisprudence that, in tandem, have led to this result. First, in a 1990 case called VE Holding, the Federal Circuit held that companies that sell products nationwide can be sued for patent infringement in any federal court in the country. (The Supreme Court is set to decide whether this holding should be overruled.)
Second, in a case called Red Wing Shoe, the Federal Circuit ruled that companies who receive patent demand letters from trolls can’t sue them in their home district to get a determination that the patent is invalid or not infringed. As others have noted, the Federal Circuit has “gone to great lengths to deny jurisdiction over patentees sending demand letters from afar.”
As a practical matter, VE Holding and Red Wing Shoe operate as a one-two punch that gives patent owners almost complete control over where patent disputes can be litigated. This means that a productive company threatened by a troll may have no choice but to litigate in a distant and expensive forum, such as the Eastern District of Texas, where local rules systematically favor patent owners over patent defendants.
In our amicus brief, we argued that the Federal Circuit should hear the case en banc and overrule Red Wing Shoe. But the court did not go so far. Instead, it relied on the physical visit of Papst employees to Xilinx’s offices to justify jurisdiction in the forum. This allowed the court to distinguish other cases where it held that demand letters are never enough to establish jurisdiction. So while this is a good result for Xilinx, it won’t help most targets of patent trolls.
But the rule in Red Wing Shoe is wrong and should be overruled. Indeed, it is part of a long pattern of Federal Circuit decisions that create rigid rules favoring patent owners. While we suspect it would not survive review by the Supreme Court, that question will have to wait for another case.
Last week, a federal court in Seattle issued a ruling in Microsoft’s ongoing challenge to the law that lets courts impose indefinite gag orders on Internet companies when they receive requests for information about their customers. Judge James Robart—he of recent Washington v. Trump fame—allowed Microsoft’s claim that the gags violate the First Amendment to proceed, denying the government’s motion to dismiss that claim. It’s an important ruling, with implications for a range of government secrecy provisions, including national security letters (NSLs). Unfortunately, the court also dismissed Microsoft’s Fourth Amendment claim on behalf of its users.
When tech companies can’t tell users that the government is knocking
Before looking at the substance of Judge Robart’s ruling, it’s worth remembering why EFF thinks Microsoft’s lawsuit is important. In fact, we’d go so far as to say that challenging gag orders imposed alongside government data requests is one of the key digital civil liberties issues of our time. That’s true for at least two reasons:
First, there has been a sea change in where we keep our sensitive personal information—the “papers and effects” protected by the Fourth Amendment and the records of First Amendment-protected speech and associations. Just twenty or thirty years ago, most or all of this information would have been found in people’s homes. In order to get at your information—whether by breaking down your door or serving you with a grand jury subpoena—the government usually couldn’t help tipping you off. These days, private information is more likely to be stored in Microsoft Office 365 or with another third-party provider than in a home office. In that case, you won’t know the government is interested in your information unless you hear from the government or the third-party provider. But the government isn’t always required to notify the targets of data requests, and it routinely gags providers from notifying their users. The long-standing default—notice that the government is after your information—has in just a short time effectively flipped to no notice.
Second, gags distort the public’s understanding of government surveillance and correspondingly place far more responsibility on providers. The statutory provision at issue in Microsoft’s lawsuit, 18 U.S.C. § 2705, applies in criminal cases. This statute allows the government to gag service providers if a court finds that informing the user will result in one of several enumerated harms—death or injury to a particular person, destruction of evidence, witness tampering, and so on. But as Microsoft’s complaint explains, Section 2705 gag orders accompany at least half of the data demands the company receives, and courts often grant them without explicit findings of potential harm. In many cases, they also do so without setting a date for the gag to dissolve. The result is a de facto permanent gag order. That’s an abuse of what is intended as a limited power, granted to the government to protect specific, sensitive investigations.
Unless a provider takes extraordinary steps—like filing a facial constitutional challenge as Microsoft did—it’s likely that the public won’t be aware of this abuse. This intensifies the role that providers play as trustees of our data. That’s why EFF tracks both transparency reports and user notification as part of our annual Who Has Your Back report. We don’t just rely on companies to keep our data secure, we also need them to stand up to the government on our behalf. It’s a point often missed by those who dismiss companies’ growing commitments to privacy as empty marketing. If not Microsoft, Apple, Google, Facebook and all the others, then who?
The ruling: first-party prior restraints and third-party Fourth Amendment rights
Despite the importance of these issues, the government argued that Microsoft’s challenge should be bounced out of court at the preliminary motion to dismiss stage. On the First Amendment claim, at least, the court disagreed. Microsoft’s basic argument will be familiar if you’ve followed EFF’s NSL cases: when the government prevents you from speaking in advance, it’s known as a prior restraint. Under the First Amendment, prior restraints must meet “exacting scrutiny” and are rarely constitutional. Here, the court found that Microsoft had more than adequately alleged that Section 2705 does not meet this exacting scrutiny because it does not require courts to time-limit gags to situations where they are actually necessary based on the facts of the case.
This is nearly identical to one of the issues in EFF’s NSL cases—NSLs similarly allow the FBI to gag service providers indefinitely.1 However, NSLs are even more egregious in several ways: the FBI can issue them without any involvement by a court at all, and it need not even claim that one of the specified harms will actually result without an NSL gag. We hope the Ninth Circuit will consider our NSL clients’ arguments about their First Amendment rights as thoroughly as Judge Robart did here.
Finally, the court reached an unsatisfying conclusion about Microsoft’s attempt to raise its users’ Fourth Amendment rights. As EFF explained in our amicus brief earlier in the case, notice of a search is a core part of the Fourth Amendment’s protections. When Microsoft is precluded from notifying users, it is the only party with knowledge of the search and therefore should be able to raise its users’ Fourth Amendment rights. Nevertheless, the court found that Fourth Amendment rights are inherently personal and cannot be raised by a third party, leading it to dismiss Microsoft’s claim. We think that’s wrong on the law, and we hope Microsoft will consider seeking leave to appeal. Meanwhile, we’ll watch as the case progresses on Microsoft’s First Amendment claim.
- 1. Judge Robart’s order wrongly states that NSLs are time-limited.
This Friday, EFF lawyers and other experts from the field will lead a conversation about constitutional law at the Internet Archive. The event is open to the public, totally free, and will stream live on Facebook for anybody who can't make it in person.
Come learn about censorship, surveillance, digital search and seizure, and more. Plus, if you can be there in person, there will be a potluck emphasizing apple pie.
Donations are welcome but not required. Details below.
When: Friday, February 17th 5:30pm-9pm (program 6-8)
- Cindy Cohn – Executive Director of EFF
- Corynne McSherry – Legal Director of EFF
- Stephanie Lacambra – Staff Attorney at EFF
- Victoria Baranetsky – First Look Media Technology Legal Fellow for the Reporter’s Committee for Freedom of the Press
- Geoff King – Lecturer at UC Berkeley, and Non-Residential Fellow at Stanford Center for Internet and Society
- Bill Fernholz – Lecturer In Residence at Berkeley Law
A group of Mexican nutrition policymakers and public health workers has been the latest target of government malware attacks. According to the New York Times, several public health advocates were targeted with spyware developed by NSO Group, a surveillance software company that sells its products exclusively to governments. The targets were all vocal proponents of Mexico’s 2014 soda tax—a regulation that the soda industry saw as a threat to its commercial interests in Mexico.
It's no secret that Mexico has a deeply rooted culture of secrecy surrounding surveillance. The Mexican digital rights NGO Red en Defensa de los Derechos Digitales has been raising awareness about the lack of control over communications surveillance in the country and advocating for surveillance law that complies with human rights standards. Today, EFF joins more than 40 organizations in expressing our concern about the use of highly intrusive software against these public health advocates and demanding that the Mexican government identify and punish those responsible for conducting illegal surveillance in Mexico.
Here is the text of the letter:
On July 11, an investigation by the Citizen Lab at the University of Toronto’s Munk School of Global Affairs and the New York Times revealed evidence that Dr. Simon Barquera, a researcher at Mexico’s National Institute of Public Health; Alejandro Calvillo, Director of El Poder del Consumidor; and Luis Manuel Encarnación, Coordinator of the ContraPESO Coalition, were targeted by attacks aimed at infecting their mobile devices with surveillance malware sold exclusively to governments by the company NSO Group.
According to the evidence, the attacks are related to the targets’ activities in defense of public health, particularly advocating for a soda tax and criticizing deficient food labeling regulation. In light of these revelations, the signatory national and international civil society organizations:
1. Condemn the illegal surveillance revealed, and express our solidarity with the academic institutions and civil society organizations targeted by these attacks.
2. Express our concern about the Mexican government’s use of highly intrusive software such as the Pegasus malware commercialized by NSO Group, particularly against researchers and civil society organizations. This type of surveillance malware, which exploits unknown (zero-day) security vulnerabilities in commercial software and products to obtain absolute control of a device, severely compromises the right to privacy, especially when there are no legal controls or democratic oversight of state surveillance.
3. Demand that the government of Mexico stop the threats and surveillance against researchers and civil society organizations, and call for an immediate investigation to identify and punish the officials responsible for illegal surveillance in Mexico.
4. Call on international organizations, governments around the world, and the international community as a whole to investigate the activities of NSO Group and other companies that sell surveillance capabilities to Mexico, a country with a record of human rights abuses.
5. Express our special concern regarding this new instance of harassment against researchers and health activists whose work affects the interests of the food and beverage industries. We call on the industry to clarify any involvement in or knowledge of the revealed surveillance and to publicly reject any act of intimidation against human rights defenders.
Asociación Nacional de la Prensa de Bolivia (ANP)
Asociación para el Progreso de las Comunicaciones (APC)
Asociación por los Derechos Civiles (ADC)
Association of Caribbean Media Workers
Australian Privacy Foundation
Centro Nacional de Comunicación Social AC (Cencos)
Centro de Estudios Constitucionales y en Derechos Humanos de Rosario
Centro de Reportes Informativos Sobre Guatemala (CERIGUA)
Comisión Mexicana de Defensa y Promoción de los Derechos Humanos, A.C. (CMPDH)
Electronic Frontier Foundation (EFF)
Espacio Público, Venezuela
Fundación para la Libertad de Prensa (FLIP)
Fundar, Centro de Análisis e Investigación
Intercambio Internacional por la Libertad de Expresión (IFEX-ALC)
Instituto de Liderazgo Simone de Beauvoir (ILSB)
Instituto de Prensa y Libertad de Expresión (IPLEX)
Instituto Prensa y Sociedad (IPYS)
Organización Fraternal Negra Hondureña (OFRANEH)
Patient Privacy Rights
Public Knowledge
Red en Defensa de los Derechos Digitales (R3D)
Renata Aquino Ribeiro, Researcher E.I. Collective
Reporteros Sin Fronteras
SonTusDatos Artículo 12, A.C.
Sursiendo, Comunicación y Cultura Digital (Chiapas, MX)
Usuarios Digitales, Ecuador
Washington Office on Latin America (WOLA)
Specifically targeting black children for unlawful DNA collection is a gross abuse of technology by law enforcement. But it’s exactly what the San Diego Police Department is doing, according to a lawsuit just filed by the ACLU Foundation of San Diego & Imperial Counties on behalf of one of the families affected. SDPD’s actions, as alleged in the complaint, illustrate the severe and very real threats to privacy, civil liberties, and civil rights presented by granting law enforcement access to our DNA. SDPD must stop its discriminatory abuse of DNA collection technology.
According to the ACLU’s complaint, on March 30, 2016, police officers stopped five African American minors as they were walking through a park in southeast San Diego. There was no legal basis for the stop. As an officer admitted at a hearing in June 2016, they stopped the boys simply because they were black and wearing blue on what the officers believed to be a gang “holiday.”
Despite having no valid basis for the stop, and having determined that none of the boys had any gang affiliation or criminal record, the officers handcuffed at least some of the boys and searched all of their pockets. They found nothing but still proceeded to search the bag of one of the boys—P.D., a plaintiff in the ACLU’s case. (It’s standard to use minors’ initials, rather than their full names, in court documents.) The officers found an unloaded revolver, which was lawfully registered to the father of one of the boys, and arrested P.D.
The officers told the other four boys that they could go free after submitting to a mouth swab. The officers had them sign a consent form, by which they “voluntarily” agreed to provide their DNA to the police for inclusion in SDPD’s local DNA database. The officers then swabbed their cheeks and let them go.
P.D. was then told to sign the form as well. After he signed, the officers swabbed his cheek and transported him to the police department. The San Diego District Attorney filed numerous charges against P.D., but they were all dropped as a result of the illegal stop. The court did not, however, order the police to destroy either P.D.’s DNA sample or the DNA profile generated via his sample. The ACLU seeks destruction of the sample and profile, along with a permanent injunction "forbidding SDPD officers from obtaining DNA from minors without a judicial order, warrant, or parental consent."
The Police Did Not Get Meaningful, Voluntary Consent For These Highly Invasive DNA Searches
There are a few huge problems with SDPD’s actions here. One is that the officers apparently didn’t explain to the boys what either signing the form or swabbing their cheeks meant—i.e., that they were asking the boys to both waive their Fourth Amendment rights and turn over highly sensitive genetic material. The officers wanted the boys to consent to the seizure of their DNA because consent is an exception to the Fourth Amendment’s prohibition on unreasonable searches and seizures. But a person can’t meaningfully consent to a DNA search without fully understanding the serious privacy invasion that accompanies a perhaps seemingly innocuous mouth swab. DNA can reveal an extraordinary amount of private information about a person, including familial relationships, medical history, predisposition for disease, and possibly even behavioral tendencies and sexual orientation. And DNA samples collected via mouth swabs are used to create DNA profiles, which are added—in most cases permanently—into law enforcement databases used for solving crimes.
Furthermore, for consent to be valid, it must be voluntary—and not motivated by threats, promises, pressure, or any other form of coercion. Here, the boys were in handcuffs, and the officers made it clear that they could go free once they signed the form and submitted to the mouth swab. This presents both an implied threat of arrest for failure to cooperate and an implied promise of “leniency” in return for cooperation—two distinct types of coercion. California courts have recognized that threats and promises have more of a coercive effect on children than on adults, making SDPD’s abuse of the consent exception in this case all the more appalling.
And as the Voice of San Diego reports, this isn't the first time the ACLU has sued SDPD over unlawful DNA collection. In 2013, SDPD paid $35,000 to settle a lawsuit involving a 2011 incident where officers improperly collected DNA without cause from five family members of a parolee.
SDPD's Policy Flouts Protections Built Into California’s DNA Collection Law
SDPD’s policy on obtaining DNA from kids specifically provides for the use of these so-called “consent” searches. The terms of the policy, obtained via a Public Records Act request by the Voice of San Diego, are problematic on their own. For example, the policy fails to require parental notification prior to seeking a child’s consent. But what’s even more problematic is that SDPD’s policy seems to intentionally sidestep the minimal protections the California legislature built into California’s DNA collection law, Cal. Penal Code § 296. California’s law specifies that DNA can be collected from juveniles only in very narrow—and serious—circumstances: after they’ve been convicted of or pleaded guilty to a felony, or if they are required to register as a sex offender or to take part in a court-mandated sex offender treatment program. And there’s a reason California law limits the situations in which law enforcement can collect DNA from minors—DNA collection involves a serious invasion of privacy. SDPD’s actions are in direct conflict with the protections for children built into the law.
SDPD’s policy acknowledges the limits in Section 296, but it gets around these limits by keeping the DNA profiles collected via its “consent” searches in a local database, rather than adding them into the statewide DNA database. As the policy points out, Section 296 only governs DNA seized for inclusion in the statewide database. So, as the Voice of San Diego puts it, "the San Diego Police Department has found a way around state law." SDPD’s apparent efforts to flout limitations designed to protect children are deeply troubling.
Targeting Black Children For DNA Collection Is a Gross Abuse of Power
The complaint’s allegations regarding SDPD’s coercive tactics to collect DNA from these children are astounding. But what's even uglier is that, based on the ACLU’s allegations, the collection here was racially motivated. Law enforcement believes these databases will help solve crimes, and underlying the effort to target African American minors for inclusion in San Diego's local DNA database seems to be the biased assumption that these children are criminals—that they either have committed or will in the future commit some crime. So per the ACLU’s allegations, SDPD is not only abusing its power, but doing so in a racially discriminatory way.
We applaud the ACLU Foundation of San Diego & Imperial Counties and Voice of San Diego for shedding light on SDPD’s abuse of DNA collection technology, and we’ll be following this case closely.
California’s DNA collection law does allow pre-conviction DNA collection from adults who are charged with a felony offense—a provision that we’ve argued violates the Fourth Amendment—but it does not permit the same for juveniles.
Publishers Still Fighting to Bury Universities, Libraries in Fees for Making Fair Use of Academic Excerpts
On behalf of three national library associations, EFF today urged a federal appeals court for the second time to protect librarians’ and students’ rights to make fair use of excerpts from academic books and research.
Nearly a decade ago, three of the largest academic publishers in the world— backed by the Association of American Publishers (AAP) trade group— sued Georgia State University (GSU) for copyright infringement, insisting that GSU owed licensing fees for the use of excerpts of academic works in its electronic reserve system. Such systems are commonly used to help students save money; rather than forcing students to buy a whole book when they only need a short excerpt from it, professors will place the excerpts “on reserve” for students to access. GSU argued that posting excerpts in the e-reserve systems was a “fair use” of the material, thus not subject to licensing fees. GSU also changed its e-reserve policy to ensure its practices were consistent with a set of fair use best practices that were developed pursuant to a broad consensus among libraries and other stakeholders. The practices are widely used, and were even praised by the AAP itself.
But that was not enough to satisfy the publishers. Rather than declare victory, they’ve doggedly pursued their claims. It seems the publishers will not be content until universities and libraries agree to further decimate their budgets. As we explain in our brief, that outcome would undermine the fundamental purposes of copyright, not to mention the public interest and the interests of the authors of the works in question. The excerpts are from academic works whose authors are not looking to get rich on licensing fees. They are motivated, instead, by a desire to contribute to the greater store of knowledge, and by the benefits accrued to their professional reputation when other scholars read, and cite, their published work. They care about recognition, not royalties.
Moreover, the fair use analysis is supposed to consider whether the practice at issue will cause material harm to an actual or potential market. But there’s no real market for digital excerpts that the libraries’ practices could harm. Indeed, as GSU explained in its brief, “[m]any professors testified that they would not have used any excerpt if students were required to pay a licensing fee.” And even if such a market existed, most libraries likely couldn’t afford to be part of it. In light of rising costs and shrinking resources, “academic libraries simply do not have the budget to participate in any ‘new’ licensing market” without diverting funds away from other areas—like those used to add new works to their collections.
Copyright is supposed to help foster the creation of new works. Requiring university libraries to devote even more of their budgets to licensing fees will have the opposite effect. We hope the court agrees.
Imagine if someone, after reading something you wrote online that they didn’t agree with, decided to forge racist and anti-Semitic emails under your name. This appears to be what happened to J. Alex Halderman, a computer security researcher and professor of computer science at the University of Michigan. Halderman is one of many election security experts—along with EFF, of course—who has advocated for auditing the results of the 2016 presidential election. The recent attempts to smear his name in retaliation for standing up for election integrity are a threat to online free speech.
Halderman, who is a frequent collaborator and sometimes client of EFF, published a piece on Medium in November 2016 arguing that we should perform recounts in three states—Wisconsin, Michigan, and Pennsylvania—to ensure that the election had not been “hacked.” To be clear, despite a report in New York Magazine, Halderman never stated that there was hard evidence that the election results had in fact been electronically manipulated. He just stated that we should check to be sure:
The only way to know whether a cyberattack changed the result is to closely examine the available physical evidence—paper ballots and voting equipment in critical states like Wisconsin, Michigan, and Pennsylvania.
Concern over a “hacked election” isn’t unfounded. In 2014, the pro-Russia hacking collective CyberBerkut attempted to sabotage Ukraine’s vote-counting infrastructure just prior to a presidential election. This is just one example. With these threats out there, auditing should be basic election hygiene. As computer security expert Poorvi Vora of George Washington University says, “Brush your teeth. Eat your spinach. Audit your elections.” Halderman specifically calls in his post for risk-limiting audits, a statistical method we’ve also advocated for that involves randomly selecting a certain number of paper ballots for manual recount. And it’s something we should be doing after every election. It’s a no-brainer.
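To make the idea concrete, here is a toy Python sketch of a simplified BRAVO-style ballot-polling audit for a two-candidate race. This is our own illustration under stated assumptions (a clean list of paper ballots, a reported winner share, and a chosen risk limit), not code from any election office; real risk-limiting audits handle multi-candidate contests, invalid ballots, and chain-of-custody issues that this sketch ignores.

```python
import random

def ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.05, seed=1):
    """Simplified BRAVO-style ballot-polling audit for a two-candidate race.

    Paper ballots are drawn uniformly at random and a sequential likelihood
    ratio is updated after each draw. If the ratio reaches 1/risk_limit,
    the reported outcome is statistically confirmed; if the sample runs out
    first, the audit escalates to a full hand count.
    """
    assert reported_winner_share > 0.5, "a reported winner is assumed"
    rng = random.Random(seed)
    order = rng.sample(range(len(ballots)), len(ballots))  # random draw order
    ratio, threshold = 1.0, 1.0 / risk_limit
    for drawn, i in enumerate(order, start=1):
        if ballots[i] == "winner":
            ratio *= reported_winner_share / 0.5       # ballot supports the winner
        else:
            ratio *= (1 - reported_winner_share) / 0.5  # ballot supports the loser
        if ratio >= threshold:
            return "confirmed", drawn
    return "full hand count", len(ballots)

# A mock contest: 60,000 ballots, with a reported 55% share for the winner.
ballots = ["winner"] * 33_000 + ["loser"] * 27_000
result, examined = ballot_polling_audit(ballots, reported_winner_share=0.55)
print(result, examined)
```

The point of the sketch is the one Halderman and Vora make: when the reported outcome is correct, the audit typically confirms it after examining only a small random sample, so routine auditing is cheap relative to a full recount.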
Someone, however, does not agree. On February 7, about two and a half months after Halderman’s post, someone sent racist and anti-Semitic emails to University of Michigan engineering students purporting to be from Halderman. According to the AP, the emails had subject lines like “African American Student Diversity” and “Jewish Student Diversity,” and two of the emails contained the phrase “Heil Trump.”
This type of smear campaign is unsophisticated and easy to pull off. The smear artist(s) here didn’t break into Halderman's email account. They simply created a “spoofed” email header, which made the messages appear to have originated from Halderman rather than their actual source. This ploy is all too common in phishing campaigns, as it can trick Internet users into providing sensitive information or clicking on malicious links. Read: this could happen to anyone.
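To see why spoofing is so easy, consider this toy Python sketch using entirely made-up headers and domains (none of these names or addresses are from the actual incident). The From: line of an email is ordinary sender-controlled text; only out-of-band signals, such as the receiving server's recorded SPF result, hint that a message was forged.

```python
from email import message_from_string
from email.utils import parseaddr

# A fabricated example message: the From: header claims a university
# professor, but the receiving server recorded an SPF failure for the
# host that actually delivered the message.
RAW_MESSAGE = """\
Authentication-Results: mx.university.example; spf=fail smtp.mailfrom=mailer.attacker.example
From: Professor Example <professor@university.example>
To: students@university.example
Subject: Forged announcement

This message was not really sent by the professor.
"""

def claimed_sender_domain(msg):
    # parseaddr extracts the address from "Display Name <addr>" form.
    # Nothing stops a sender from writing any address they like here.
    return parseaddr(msg["From"])[1].rsplit("@", 1)[-1]

def looks_spoofed(msg):
    """Crude heuristic: treat a recorded SPF failure as a red flag.
    Real mail filters weigh many more signals (DKIM, DMARC alignment,
    sending history), so this is illustrative only."""
    return "spf=fail" in msg.get("Authentication-Results", "")

msg = message_from_string(RAW_MESSAGE)
print(claimed_sender_domain(msg))  # what recipients see: university.example
print(looks_spoofed(msg))          # True: SPF failed for the real sender
```

Recipients who only glance at the From: line see the impersonated name, which is exactly why spoofed headers work so well in both smear campaigns and phishing.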
Luckily, the spoof here was quickly revealed, and we doubt that many of the recipients—students at the university, where Halderman is well-known and liked—were deceived. But it did still result in a 40-student protest outside the home of the university’s president.
Halderman has called these attempts to smear his name “cowardly action,” and we agree. But we’re also concerned. The threat of being the target of a smear campaign could chill the speech of others who want to speak out on the need for ensuring the integrity of our election system—an increasingly critical topic. Such efforts to chill speech threaten the very nature of the Internet as we know it—a place for open, robust, and diverse discourse.
We expect the University of Michigan to take a strong stand against this type of retaliation targeting a member of its community. And the rest of us should take a stand in support of Halderman, not only for his efforts to move the debate on election integrity forward, but also to make sure that such ugly, dastardly, and quite frankly lame attempts to smear people don’t become a more widely used method for chilling speech.
The San Jose City Council is considering a proposal to install over 39,000 “smart streetlights.” A pilot program is already underway. These smart streetlights are not themselves a surveillance technology. But they have ports on top that, in the future, could accommodate surveillance technology, such as video cameras and microphones.
EFF and our allies sent a letter to the San Jose City Council urging them to adopt an ordinance to ensure democratic control of all of that community’s surveillance technology decisions—including whether to plug spy cameras into the ports of smart streetlights.

What Are Smart Cities?
Under “smart cities” programs like the one in San Jose, many municipalities across the country are building technology infrastructures in public places that collect data in order to save energy, reduce traffic congestion, and advance other governmental goals. Some of these programs may improve urban life, and EFF does not oppose smart cities per se.
But we have a word for government use of technology to document how identifiable people are living their lives in public spaces: surveillance. And we strongly oppose the web of street-level surveillance that is rapidly spreading across our urban landscapes. It invades privacy, chills free speech, and disparately burdens communities of color and poor people.
There is an inherent risk of mission creep from smart cities programs to surveillance. For example, cameras installed for the benevolent purpose of traffic management might later be used to track individuals as they attend a protest, visit a doctor, or go to church.

Democratic Control of Spy Tech
To prevent this mission creep, communities must adopt laws ensuring democratic decision-making and oversight of surveillance technology. All too often, police chiefs and other agency executives unilaterally decide to install new spying tools. Instead, these decisions must be made by elected city councils after robust public debate in which all members of the community have their voices heard. Communities will reject some proposed surveillance tools, and require strong privacy safeguards for others.
Our letter to the San Jose City Council urges them to adopt such an ordinance. Our allies on this letter are the ACLU (Santa Clara Valley Chapter), Asian Americans Advancing Justice, the Coalition for Justice and Accountability, the Council on American-Islamic Relations (San Francisco Bay Area Office), the Center for Employment Training, the Japanese American Citizens League (San Jose, Sequoia, and Silicon Valley Chapters), the Nihonmachi Outreach Committee, the Peninsula Peace and Justice Center, and The Utility Reform Network.
Privacy By Design
“Privacy by design” is an equally necessary means to ensure that smart cities do not devolve into surveillance programs. Privacy by design means that technology manufacturers and municipal purchasers must work together at all stages of product development to build privacy safeguards into smart cities technologies. It is not enough to bolt privacy safeguards onto completed tools at the last minute.
Privacy by design has substantive and procedural components. Substantive protections include limits on initial collection of personal information; encryption and other security measures to control access to that information; and strong policies restraining use and disclosure of that information.
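To make the substantive side concrete, here is a minimal Python sketch of data minimization at the point of collection. The sensor scenario, field names, and hashing scheme are our own illustrative assumptions, not a specification from any vendor or city:

```python
import hashlib
from datetime import datetime

def minimize_reading(raw_device_id: str, timestamp: datetime) -> dict:
    """Reduce a hypothetical street-level sensor reading to the minimum
    needed for traffic counting: the raw identifier is discarded before
    storage, and the timestamp is coarsened to the hour."""
    # A truncated one-way hash is only pseudonymous, not anonymous --
    # real deployments need stronger measures (e.g., dropping IDs entirely).
    digest = hashlib.sha256(raw_device_id.encode()).hexdigest()[:8]
    return {
        "bucket": digest,  # coarse token; the raw ID is never stored
        "hour": timestamp.replace(minute=0, second=0, microsecond=0),
    }

reading = minimize_reading("AA:BB:CC:DD:EE:FF", datetime(2017, 2, 10, 14, 37, 22))
print(reading["hour"])  # prints: 2017-02-10 14:00:00
```

The point of the pattern is that the identifying detail is gone before any record exists, so no later policy lapse or breach can expose it.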
A critical procedural measure is for cities to employ their own privacy officers. With the great power of smart cities tools comes the great responsibility to competently manage them. A privacy officer must have expertise in the technological, legal, and policy issues presented by these powerful tools. Absent such in-house expertise, cities may inadvertently create privacy problems, or unduly defer to the privacy judgments of vendors, which will not always have the same privacy goals as cities.
Next Steps
Now is the time for San Jose to ensure that its smart streetlights do not become another tool of street-level surveillance. To do so, San Jose must adopt an ordinance ensuring democratic control of decisions about surveillance tools. It must also practice privacy by design. Otherwise, residents may find that the new "smart" technologies designed to improve their lives have instead become tools of government spying.
Share this: Join EFF
It’s well documented that the FBI is keen on adopting new technologies that intrude on our civil liberties. The FBI’s enthusiasm for technology, however, doesn’t extend to tools that make it easier for the public to understand what the agency is up to—despite such transparency being mandated by law.
The FBI recently announced that it’s removing the ability for the public to send Freedom of Information Act (FOIA) requests to the agency via email. Instead, the FBI will now only accept requests sent through snail mail, fax, or a poorly designed and extremely limited website.
The FBI’s decision to abandon email—a free and ubiquitous method of communication—as a means of sending in FOIA requests will make sending requests to the agency far more difficult. The decision will thus undoubtedly thwart both transparency and accountability, and the FBI must be well aware of this. In a world in which thermostats and toasters are increasingly connected to the Internet, the FBI's rejection of emailed FOIA requests is a slap in the face to transparency. The FBI's decision is all the more galling given that other agencies are currently embracing technologies that both help people making FOIA requests and help the agencies more efficiently and effectively process them.
What's more, the FBI’s alternative solution—its new “E-FOIA” website—is no solution at all. The website places a 3,000-character limit on requests and has technical barriers that prevent automated FOIA requests. These constraints significantly limit the amount of information people can seek via a single request and needlessly slow down the process.
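By contrast, the barrier to filing by email is essentially zero: a request can be assembled and sent with a few lines of code. Here is a minimal, illustrative Python sketch using only the standard library; the addresses and request text are placeholders, not real FOIA contacts:

```python
from email.message import EmailMessage

# Placeholder text -- a real request would describe the records sought.
REQUEST_BODY = """\
Dear FOIA Officer,

Under the Freedom of Information Act, 5 U.S.C. 552, I request copies of
all records concerning [describe the records sought].

Sincerely,
[Requester name and contact information]
"""

def build_foia_request(to_addr: str, from_addr: str, body: str) -> EmailMessage:
    """Assemble a FOIA request as an ordinary email message."""
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["From"] = from_addr
    msg["Subject"] = "Freedom of Information Act Request"
    msg.set_content(body)
    return msg

# Hypothetical addresses; actually sending would take one smtplib.SMTP call.
msg = build_foia_request("foia@example.gov", "requester@example.org", REQUEST_BODY)
print(msg["Subject"])  # prints: Freedom of Information Act Request
```

Nothing in this channel caps a request at 3,000 characters or blocks sending requests in bulk—which is precisely what makes abandoning it such a loss for transparency.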
Perhaps the biggest problem is the website’s terms of service, which place limits on the types of requests that can be filed digitally. They suggest the website will not accept FOIA requests seeking records about FBI operations, activities, or communications. Not only does this make no sense from a technical standpoint, it runs directly counter to the very purpose of FOIA: ensuring that the public can learn about an agency’s operations and activities.
EFF is grateful to Sen. Ron Wyden (D-Or.), who sent a letter (pdf) to the FBI on Friday highlighting many of the concerns we have about the FBI’s abandonment of email and its reliance on a problematic website. We look forward to the FBI’s response.
The FBI's recent announcement makes one thing clear: Congress should—and easily could—update FOIA to require all federal agencies, including the FBI, to accept FOIA requests via email. In the digital world we live in, this is a no-brainer. EFF has been calling for this simple fix, along with a host of other changes, for some time, and we remain committed to supporting legislative efforts that increase government transparency.
Share this: Join EFF
Boston—An FBI search warrant used to hack into thousands of computers around the world was unconstitutional, the Electronic Frontier Foundation (EFF) told a federal appeals court today in a case about a controversial criminal investigation that resulted in the largest known government hacking campaign in domestic law enforcement history.
The Constitution requires law enforcement officers seeking a search warrant to show specific evidence of a possible crime, and tie that evidence to specific persons and places they want to search. These fundamental rules protect people from invasions of privacy and police fishing expeditions.
But the government violated those rules while investigating “Playpen,” a child pornography website operating as a Tor hidden service. During the investigation, the FBI secretly seized servers running the website and, in a controversial decision, continued to operate it for two weeks rather than shut it down, allowing thousands of images to be downloaded. While running the site, the bureau began to hack its visitors, sending malware that it called a “Network Investigative Technique” (NIT) to visitors’ computers. The malware was then used to identify users of the site. Ultimately, the FBI hacked into 8,000 devices located in 120 countries around the world. All of this hacking was done on the basis of a single warrant. The FBI charged hundreds of suspects who visited the website, several of whom are challenging the validity of the warrant.
In a filing today in one such case, U.S. v. Levin, EFF and the American Civil Liberties Union of Massachusetts urged the U.S. Court of Appeals for the First Circuit to rule that the warrant is invalid and the searches it authorized unconstitutional because the warrant lacked specifics about who was subject to search and what locations and specific devices should be searched. Because it was running the website, the government was already in possession of information about visitors and their computers. Rather than taking the necessary steps to obtain narrow search warrants using that specific information, the FBI instead sought a single, general warrant to authorize its massive hacking operation. The breadth of that warrant violated the Fourth Amendment.
“No one questions the need for the FBI to investigate serious crimes like child pornography. But even serious crimes can’t justify throwing out our basic constitutional principles. Here, on the basis of a single warrant, the FBI searched 8,000 computers located all over the world. If the FBI tried to get a single warrant to search 8,000 houses, such a request would unquestionably be denied. We can’t let unfamiliar technology and unsavory crimes lead to an erosion of everyone’s Fourth Amendment rights,” said EFF Senior Staff Attorney Mark Rumold.
EFF filed a brief in January in a similar case in the Eighth Circuit Court of Appeals, and will be filing briefs in Playpen cases in the Third and Tenth Circuits in March. Some trial courts have upheld the FBI’s actions in dangerous decisions that, if ultimately upheld, threaten to undermine individuals’ constitutional privacy protections over information on personal computers.
“These cases will be cited for the future expansion of law enforcement hacking in domestic criminal investigations, and the precedent is likely to impact the digital privacy rights of all Internet users for years to come,” said Andrew Crocker, EFF Staff Attorney. “Recent changes to federal rules for issuing warrants may allow the government to hack into thousands of devices at a time. These devices can belong not just to suspected criminals but also to victims of botnets and other hacking crimes. For that reason, courts need to send a very clear message that vague search warrants that lack the required specifics about who and what is to be searched won’t be upheld.”
Share this: Join EFF
Now more than ever, it is apparent that U.S. Customs and Border Protection (CBP) and its parent agency, the Department of Homeland Security (DHS), are embarking on a broad campaign to invade the digital lives of innocent individuals.
The new DHS secretary, John Kelly, told a congressional committee this week that the department may soon demand login information (usernames and passwords) for social media accounts from foreign visa applicants—at least those subject to the controversial executive order on terrorism and immigration—and those who don’t comply will be denied entry into the United States. This effort to access both public and private communications and associations is the latest move by a department that is overreaching its border security authority.
In December 2016, DHS began asking another subset of foreign visitors, those from Visa Waiver Countries, for their social media handles. DHS defended itself by stating that not only would compliance be voluntary, the government only wanted to access publicly viewable social media posts: “If an applicant chooses to answer this question, DHS will have timely visibility of the publicly available information on those platforms, consistent with the privacy settings the applicant has set on the platforms.”
As we wrote last fall in comments to DHS, even seeking the ability to view the public social media posts of international travelers implicates the universal human rights of free speech and privacy, and—importantly—the comparable constitutional rights of their American associates. Our objections are still salient given that DHS may soon mandate access to both public and private social media content and contacts of another group of foreign visitors.
Moreover, as a practical matter, such vetting is unlikely to weed out terrorists as they would surely scrub their social media accounts prior to seeking entry into the U.S.
Such border security overreach doesn’t stop there.
There have been several reports recently of CBP agents demanding access to social media information and digital devices of both American citizens and legal permanent residents. Most disturbing are the invasive searches of Americans’ cell phones, where CBP has been accessing social media apps that may reveal private posts and relationships, as well as emails, text messages, browsing history, contact lists, photos—whatever is accessible via the phone.
Such border searches of Americans’ digital devices and cloud content are unconstitutional absent individualized suspicion, specifically, a probable cause warrant. In light of the DHS secretary’s statements this week, we fear that DHS may soon take the next step down this invasive path and demand the login information for American travelers’ online accounts so that the government can peruse private, highly personal information without relying on access to a mobile device.
These policies and practices of DHS/CBP must be chronicled and opposed.
Share this: Join EFF
EFF had high hopes that the Domain Name Association's Healthy Domains Initiative (HDI) wouldn't be just another secretive industry deal between rightsholders and domain name intermediaries. Toward that end, we and other civil society organizations worked in good faith on many fronts to make sure HDI protected Internet users as well.
Those efforts seem to have failed. Yesterday, the Domain Name Association (DNA), a relatively new association of domain registries and registrars, suddenly launched a proposal for "Registry/Registrar Healthy Practices" on a surprised world, calling on domain name companies to dive headlong into a new role as private arbiters of online speech. This ill-conceived proposal is the very epitome of Shadow Regulation. There was no forewarning about the release of this proposal on the HDI mailing list; indeed, the last update posted there was on June 9, 2016, reporting "some good progress," and promising that any HDI best practice document "will be shared broadly to this group for additional feedback." That never happened, and neither were any updates posted to HDI's blog.
While yesterday's announcement claims that "civil society" was part of a "year-long process of consultation" leading to this document, it doesn't say which groups participated, or how they were selected. In any purported effort to develop a set of community-based principles, a failure to proactively reach out to affected stakeholders, especially if they have already expressed interest, exposes the effort as a sham. "Inclusion" is one of the three key criteria that EFF developed in explaining how fair processes can lead to better outcomes, and this means making sure that all stakeholders who are affected by Internet policies have the opportunity to be heard. The onus here lies on the organization that aims to develop those policies, and in this the DNA has clearly failed.
Copyright Censorship Through Compulsory Private Arbitration
So, what did HDI propose in its Registry/Registrar Healthy Practices [PDF]? The Practices divide into four categories, quite different from one another: Addressing Online Security Abuse, Complaint Handling for “Rogue” Pharmacies, Enhancing Child Abuse Mitigation Systems, and Voluntary Third Party Handling of Copyright Infringement Cases. We will focus for now on the last of these, because it is the newest and most overreaching voluntary enforcement mechanism described in the Practices.
The HDI recommends the construction of "a voluntary framework for copyright infringement disputes, so copyright holders could use a more efficient and cost-effective system for clear cases of copyright abuse other than going to court." This would involve forcing everyone who registers a domain name to consent to an alternative dispute resolution (ADR) process for any copyright claim that is made against their website. This process, labelled ADRP, would be modeled after the Uniform Dispute Resolution Policy (UDRP), an ADR process for disputes between domain name owners and trademark holders, in which the latter can claim that a domain name infringes its trademark rights and have the domain transferred to their control.
This is a terrible proposal, for a number of reasons. First and foremost, a domain name owner who contracts with a registrar is doing so only for the domain name of their website or Internet service. The content that happens to be posted within that website or service has nothing to do with the domain name registrar, and frankly, is none of its business. If a website is hosting unlawful content, then it is the website host, not the domain registrar, who needs to take responsibility for that, and only to the extent of fulfilling its obligations under the DMCA or its foreign equivalents.
Second, it seems too likely that any voluntary, private dispute resolution system paid for by the complaining parties will be captured by copyright holders and become a privatized version of the failed Internet censorship bills SOPA and PIPA. While the HDI gives lip service to the need to "ensure due process for respondents," if the process by which the HDI Practices themselves were developed is any guide, we cannot trust that this would be the case. If any proof is needed of this, we only need to look at the ADRP's predecessor and namesake, the UDRP, a systemically biased process that has been used to censor domains used for legitimate purposes such as criticism, and domains that are generic English words. Extending this broken process beyond domain names themselves to cover the contents of websites would make this censorship exponentially worse.
Donuts Are Not Healthy
Special interests who seek power to control others' speech on the Internet often cloak their desires in the rhetoric of "multistakeholder" standards development. HDI's use of terms like "process of consultation," "best practices," and "network of industry partners" fits this mold. But buzzwords don't actually give legitimacy to a proposal, nor substitute for meaningful input from everyone it will affect.
The HDI proposal was written by a group of domain name companies. They include Donuts Inc., a registry operator that controls over 200 of the new top-level domains, like .email, .guru, and .movie. Donuts has taken many steps that serve the interests of major corporate trademark and copyright holders over those of other Internet users. These include a private agreement with the Motion Picture Association of America to suspend domain names on request based on accusations of copyright infringement, and a "Domain Protected Marks List Plus" that gives brand owners the power to stop others from using common words and phrases in domain names--a degree of control that they don't get from either ICANN procedures or trademark law.
The "Healthy Practices" proposal continues that solicitude towards corporate rightsholders over other Internet users. This proposal begs the question: healthy for whom?
If past is prologue, we can expect to see heaps of praise for this proposal from the same special interests it was designed to serve, and from their allies in government who use Shadow Regulations like this one to avoid democratic accountability for unpopular, anti-user policies. But no talk of "self-regulation" nor "best practices" can transform an industry's private wishlist into legitimate governance of the Internet, or an acceptable path for other Internet companies to follow.
Share this: Join EFF
Bad news for Internet users. In his first few days in office, FCC Chairman Ajit Pai has shelved the Commission’s investigation into Internet companies’ zero-rating practices and whether they violate the Commission's Open Internet Order.
As recently as January, the FCC was rebuking AT&T (PDF) for seemingly prioritizing its own DirecTV content over that of its competitors. Now, Pai has made it clear that the FCC doesn’t plan to move forward with the investigation.
Simply put, zero-rating is the practice of ISPs and mobile providers choosing not to count certain content toward users’ data limits, often in exchange for capping the speeds at which customers can access that content. Most major mobile providers in the U.S. offer some form of zero-rated service today, like T-Mobile’s Binge On program for zero-rated streaming and Verizon and AT&T’s FreeBee Data program. Facebook, Wikimedia, and Google have their own zero-rated apps, too. While they are currently focused on emerging mobile markets in developing countries, this recent development from the FCC may open the domestic market to them in new ways.
EFF doesn’t flat-out oppose all zero-rating. But in current practice, it often has the consequence (intended or not) of giving ISPs unfair control over the content their customers access and, ultimately, stifling competition. When the ISP has sole control over what content sources are eligible for zero-rating, it becomes a de facto Internet gatekeeper: its choices around free bandwidth can bias its customers’ Internet usage toward certain sites and services. That can make it prohibitively difficult for new, innovative services to get off the ground. For example, entrepreneurs trying to promote a new video streaming site will face hurdles to widespread adoption of their service if users have unmetered access to existing competitors like YouTube and Netflix.
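The arithmetic behind this gatekeeping effect is simple enough to sketch. The following Python example is illustrative only; the host names, data cap, and usage figures are hypothetical:

```python
# Illustrative only: how zero-rating skews the cost of a user's choices.
ZERO_RATED = {"carrier-video.example"}  # hypothetical carrier-favored service

def metered_usage(sessions: list[tuple[str, int]]) -> int:
    """Sum the megabytes that count against a user's data cap;
    traffic to zero-rated hosts costs the user nothing."""
    return sum(mb for host, mb in sessions if host not in ZERO_RATED)

# Both services streamed the same 900 MB, but only the newcomer's
# traffic counts against the cap.
sessions = [("carrier-video.example", 900), ("startup-video.example", 900)]
print(metered_usage(sessions))  # prints: 900
```

Identical usage, very different prices: a user on a 2 GB plan can watch the zero-rated service without limit while the competing startup burns through nearly half the cap.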
This problem gets particularly dodgy when the mobile provider owns the zero-rated content source, as is the case with AT&T and DirecTV. In the course of its now-shuttered zero-rating investigation, the FCC asked AT&T to prove that it treated DirecTV and other video services the same. AT&T claimed that it did, but the FCC found evidence that AT&T’s practices were obstructing competition and harming users:
The limited information we have obtained to date… tends to support a conclusion… that AT&T offers Sponsored Data to third party content providers at terms and conditions that are effectively less favorable than those it offers to its affiliate, DIRECTV. Such arrangements likely obstruct competition for video programming services delivered over mobile Internet platforms and harm consumers by inhibiting unaffiliated edge providers’ ability to provide such service to AT&T’s wireless subscribers. (Emphasis added.)
According to Pai, “These free-data plans have proven to be popular among consumers, particularly low-income Americans.” But that’s a red herring. That a service is popular doesn’t mean that rules protecting users’ freedoms shouldn’t apply to it. If anything, zero-rating’s supposed popularity among low-income users is another reason to make sure that it doesn’t further curb users’ Internet experience and funnel vulnerable users towards certain content.
On top of that, users have different preferences and habits, and do not necessarily agree on the optimal content and services to zero-rate. Instead of expanding carriers’ discretion over the content their customers can or cannot easily access, EFF would like to see edge providers given a clear path to being included in zero-rating plans, one that doesn’t favor established players. And ultimately, users themselves should be empowered to decide what content gets zero-rated.
Share this: Join EFF
President Donald Trump’s nominee to lead the country’s law enforcement has cleared the Senate.
The Senate voted 52-47 on Wednesday to confirm Sen. Jeff Sessions, whose record on civil liberties issues—including digital rights—has drawn fire from Democratic lawmakers and public interest groups.
EFF has expressed concerns about Sessions’ record on surveillance, encryption, and freedom of the press. Those concerns intensified during his confirmation process.
Throughout his confirmation hearing in front of the Senate Judiciary Committee and his written responses to additional questions from lawmakers, Sessions made a number of troubling statements. He said he would support legislation to enable a privacy-invasive Rapid DNA system. He refused to definitively commit not to put journalists in jail for doing their job. He dodged questions about Justice Department policies on Stingrays, and wouldn't commit to publish guidelines on how federal law enforcement uses government hacking. He called it “critical” that law enforcement be able to “overcome” encryption.
His Senate record on surveillance is also disturbing. Sessions helped to derail reform to the Electronic Communications Privacy Act in the Senate. He also opposed the USA FREEDOM Act, a set of moderate reforms to the NSA’s mass collection of information about Americans’ domestic phone calls. In 2015, he went so far as to pen an alarmist op-ed against the bill, in which he claimed that the bulk phone records collection was “subject to extraordinary oversight” and warned the bill “would make it vastly more difficult for the NSA to stop a terrorist than it is to stop a tax cheat.”
During the hearing, USA FREEDOM sponsor Sen. Patrick Leahy pressed Sessions on whether he is committed to enforcing the surveillance reform law. Sessions responded that the prohibition on bulk collection “appears to be” the governing statute for U.S. government surveillance. His qualified answer raises concerns. And while he pledged to follow that law, he couldn’t confirm it prohibited bulk collection of domestic phone records in all cases. (It does.)
In a marathon, all-night debate in opposition to Sessions, Senate Democrats pointed to his track record on surveillance and privacy as a source of concern.
Montana Democrat Sen. Jon Tester pointed to Sessions’ repeated votes in favor of “the most intrusive aspects of the Patriot Act.” He asked, “Will he fight on behalf of government officials that listen into our phone calls or scroll through our emails or preserve our Snapchats?”
Washington Democrat Sen. Maria Cantwell said she is concerned by Sessions’ support for “President [George W.] Bush’s warrantless wiretapping and surveillance programs,” and his support for backdoor access to encrypted technologies. “I do have concerns that the president’s nominee…may not stand up to the President of the United States in making sure that the civil liberties of Americans are protected.”
Now that he has been confirmed, EFF and other civil liberties advocates will work to hold him accountable as Attorney General and block any attempts by him or anyone else to broaden the government surveillance powers that threaten our basic privacy rights.
Share this: Join EFF
As a school librarian at a small K-12 district in Illinois, Angela K. is at the center of a battle of extremes in educational technology and student privacy.
On one side, her district is careful and privacy-conscious when it comes to technology, with key administrators who take extreme caution with ID numbers, logins, and any other potentially identifying information required to use online services. On the other side, the district has enough technology “cheerleaders” driving adoption forward that now students as young as second grade are using Google’s G Suite for Education.
In search of a middle ground that serves students, Angela is asking hard, fundamental questions. “We can use technology to do this, but should we? Is it giving us the same results as something non-technological?” Angela asked. “We need to see the big picture. How do we take advantage of these tools while keeping information private and being aware of what we might be giving away?”
School librarians are uniquely positioned to navigate this middle ground and advocate for privacy, both within the school library itself and in larger school- or district-wide conversations about technology. Often, school librarians are the only staff members trained as educators, privacy specialists, and technologists, bringing not only the skills but a professional mandate to lead their communities in digital privacy and intellectual freedom. On top of that, librarians have trusted relationships across the student privacy stakeholder chain, from working directly with students to training teachers to negotiating with technology vendors.
Following the money
Part of any school librarian’s job is making purchasing decisions with digital vendors for library catalogs, electronic databases, e-books, and more. That means that school librarians like Angela are trained to work with ed tech providers and think critically about their services.
“I am always asking, ‘Where is this company making their money?’” Angela said. “That’s often the key to what’s going on with the student information they collect.”
School librarians know the questions to ask a vendor. Angela listed some of the questions she tends to ask: What student data is the vendor collecting? How and when is it anonymized, if at all? What does the vendor do with student data? How long is it retained? Is authentication required to use a certain software or service, and, if so, how are students’ usernames and passwords generated?
In reality, though, librarians are not always involved in contract negotiations. “More and more tech tools are being adopted either top-down through admin, who don’t always think about privacy in a nuanced way, or directly through teachers, who approach it on a more pedagogical level,” Angela said. “We need people at the table who are trained to ask questions about student privacy. Right now, these questions often don’t get asked until a product is implemented—and at that point, it’s too late.”
Teaching privacy
Angela wants to see more direct education around privacy concepts and expectations, and not just for students. Teachers and other staff in her district would benefit from more thorough training, as well.
“As a librarian, I believe in the great things technology can offer,” she said, “but I think we need to do a better job educating students, teachers, and administrators on reasons for privacy.”
For students, Angela’s district provides the digital literacy education mandated by Illinois’s Internet Safety Act. However, compartmentalized curricula are not enough to transform the way students interact with technology; it has to be reinforced across subjects throughout the school year.
“We used to be able to reinforce it every time library staff worked with students throughout the year,” Angela said, “but now staff is too thin.”
Teachers also need training to understand the risks of the technology they are offering to students.
“For younger teachers, it’s hard to be simultaneously skeptical and enthusiastic about new educational technologies,” Angela said. “They are really alert to public records considerations and FERPA laws, but they also come out of education programs so heavily trained in using data to improve educational experiences.”
In the absence of more thorough professional training, Angela sees teachers and administrators overwhelmed with the task of considering privacy in their teaching. “Sometimes educators default to not using any technology at all because they don’t have the time or resources to teach their kids about appropriate use. Or, teachers will use it all and not think about privacy,” she said. “When people don’t know about their options, there can be this desperate feeling that there’s nothing we can do to protect our privacy.”
Angela fears that, without better privacy education and awareness, students' intellectual freedom will suffer. “If students don’t expect privacy, if they accept that a company or a teacher or ‘big brother’ is always watching, then they won’t be creative anymore.”
A need for caution moving forward
Coming from librarianship’s tradition of facilitating the spread of information while also safeguarding users’ privacy and intellectual freedom, Angela is committed to adopting and applying ed tech while also preserving student privacy.
“I am cautious in a realistic way. After all, I’m a tools user. I know I need a library catalog, for example. I know I need electronic databases. Technologies are a necessary utility, not something we can walk away from.”
As ed tech use increases, school librarians like Angela have an opportunity to show that there is no need to compromise privacy for newer or more high-tech educational resources.
“Too many people in education have no expectation of privacy, or think it’s worth it to hand over our students’ personal information for ed tech services that are free. But we don’t have to give up privacy to get the resources we need to do good education.”
Share this: Join EFF
Reps. Blake Farenthold (R-Texas) and Jared Polis (D-Colo.) just reintroduced their You Own Devices Act (YODA), a bill that aims to help you reclaim some of your ownership rights in the software-enabled devices you buy.
We first wrote about YODA when it was originally introduced back in 2014. The bill would go a ways toward curbing abusive End User License Agreements (EULAs) by making sure companies can’t use restrictions on the software within your device to keep you from selling, leasing, or giving away the device when you’re done with it. The bill would override EULAs that purport to limit your ability to transfer ownership of the device (and its software) and would make sure that whoever ends up with your device has the same access to security and bug fixes that you would have had.
Making sure that you can sell and transfer your old devices isn’t just good for you – it’s good for everyone else as well. Resale markets for consumer products help improve access to affordable technology and provide a valuable resource for innovators [PDF].
We’re pleased to see some members of Congress tackling this issue, though there’s still a long way to go to make sure that outdated and unconstitutional copyright laws, like Section 1201, don’t keep you from controlling your own media and devices.
The House passed the Email Privacy Act (H.R. 387) yesterday, bringing us one step closer to requiring a warrant before law enforcement can access private communications and documents stored online with companies such as Google, Facebook, and Dropbox. But the fight is just beginning.
We’ve long called for pro-privacy reforms to the 1986 Electronic Communications Privacy Act (ECPA), the outdated law that provides little protection for “cloud” content stored by third-party service providers. H.R. 387 would codify the Sixth Circuit’s ruling in U.S. v. Warshak, which held that the Fourth Amendment demands that the government first obtain a warrant based on probable cause before accessing emails stored with cloud service providers. While imperfect—the House-passed bill doesn’t require the government to notify users when it obtains their data from companies like Google—the reforms in the Email Privacy Act are a necessary step in the right direction.
EFF and more than 60 other privacy advocates, tech companies, and industry groups wrote to lawmakers asking them to approve the Email Privacy Act.
Now the Senate needs to take up the measure and send it to the president’s desk without weakening it. Despite the fact that the House voted 419-0 to pass the Email Privacy Act last year, it stalled in the upper chamber, where senators attempted to saddle the incredibly popular bill with additional provisions that would have harmed Internet users’ privacy.
Such “poison pill” amendments included one that would have expanded the already problematic Computer Fraud and Abuse Act, one that would have allowed the FBI to get more user information with National Security Letters, and amendments that could have made it easier for law enforcement to abuse the exemption in the law that grants access to user data in the case of emergencies without judicial oversight.
Senators need to be vigilant about fending off these kinds of amendments when the Email Privacy Act is considered in the Senate this time around.
The House’s unanimous vote on the Email Privacy Act last year and yesterday’s voice vote demonstrate bipartisan agreement that the emails in your inbox should have the same privacy protections as the papers in your desk drawer. We urge the Senate to swiftly pass H.R. 387 to protect online content with a probable cause warrant.
The Copyright Alert System has called it quits, but questions remain about what, if anything, will replace it. Known also as the “six strikes” program, the Copyright Alert System (CAS) was a private agreement between several large Internet service providers (ISPs) and big media and entertainment companies, with government support. The agreement allowed the media and entertainment companies to monitor those ISPs' subscribers' peer-to-peer network traffic for potential copyright infringement, and imposed penalties on subscribers accused of infringing. Penalties ranged from “educational” notices, to throttling subscribers' connection speeds and, in some cases, temporarily restricting subscribers’ web access.
From the beginning, the Copyright Alert System presented problems for ordinary Internet users. The agreement creating the CAS was negotiated without the opportunity for public input. As is often the result with such secretive private agreements, users’ interests weren’t sufficiently protected when the program finally came into effect. For example, because the program treated accusations of infringement as conclusive, and because its appeals process was costly and offered unnecessarily limited defenses, the CAS failed to adequately protect users who were wrongfully accused of infringement. Further, the program included surveillance by copyright owners of Internet subscribers’ peer-to-peer network activity, a level of monitoring that many found invasive of their online privacy. Even the program’s educational materials were biased. And throughout its operation, the program struggled to provide enough transparency into how it was impacting Internet users.
But the CAS wasn’t nearly as bad as it could have been. For example, while the media companies could join swarms to track users’ activity on peer-to-peer networks, the ISPs themselves were not required to monitor their subscribers’ activity by using deep packet inspection (DPI), a much more invasive tactic. And ISPs were not required, under the terms of the agreement, to cut off subscribers’ Internet access after repeat allegations of infringement. Lastly, the program had an advisory board that did include consumer advocates (a measure we believed to be inadequate).
EFF had serious concerns with the program from the start, and we welcome its retirement. But we’re not celebrating just yet. The statement from the Center for Copyright Information (the organization that administered the CAS) announcing the program’s retirement states:
While this particular program is ending, the parties remain committed to voluntary and cooperative efforts to address these issues.
As we’ve said before, a big problem with these private agreements is that they frequently leave Internet users without a seat at the negotiating table, and with little or no recourse when the companies involved violate users’ privacy or silence users’ online speech. And when government actors pressure companies to come to terms, these agreements can easily become the “de-facto” law of the Internet – with none of the potential for democratic accountability that accompanies actual laws. If companies and governments are committed to protecting Internet users in future voluntary agreements, we’ve provided a simple set of criteria for how those agreements can be done well.
While there are as yet no details as to why the CAS closed up shop, or what could be coming next, the MPAA’s statements following the announcement are far from reassuring. The MPAA’s general counsel stated that he believed the program didn’t do enough to punish people the media companies decided were “repeat infringers”:
[the CAS] was simply not set up to deal with the hard-core repeat infringer problem. Ultimately, these persistent infringers must be addressed by ISPs under their 'repeat infringer' policies as provided in the Digital Millennium Copyright Act.
This statement comes on the heels of another industry attempt to turn ISPs into draconian copyright enforcers in the BMG Rights Management v. Cox Communications case. Copyright holders in that case argued that ISPs like Cox (which was not part of the CAS) should cut off subscribers’ Internet access on the basis of mere allegations of infringement.
We hope the CAS is not being abandoned simply so big media and entertainment companies can try to impose something worse. Whatever happens, we’ll be on the lookout for threats to Internet users from future Shadow Regulations like the CAS.