For years, EFF has been helping concerned technology users contact Congress. The EFF community stopped SOPA, we beat back privacy-invasive cybersecurity proposals, we are championing software patent reform, and now we’re demanding real NSA reform—not a fake fix.
Here's How To Jump In and Help
- Read the instructions for contributing.
- Check out the GitHub repo.
- Ask questions on IRC: #opencongress on irc.freenode.net.
But we’re at an impasse. Our community has grown significantly in the last few years, and every day we’re confronted with more reasons that users need to be speaking to lawmakers. But no one has a good system for contacting Congress.
Right now, EFF pays a for-profit company using proprietary software so that our friends and members can stop Congress from enacting dumb laws that hurt the Internet.
This rubs us the wrong way. At EFF, we like to practice what we preach, but our third-party action center suffers from proprietary licensing and limited configurability. When we find bugs, we can’t always fix them ourselves or hack around the problem.
It shouldn’t be this way. We shouldn’t have to compromise our principles just so that our friends and members can speak out about important issues. We shouldn’t have to sacrifice security, customizability, or freedom when engaging in political activism.
We can build something new. And better.
For the last few months, EFF and our partners at the Sunlight Foundation have been working on a way to revolutionize how everyday people contact Congress. The resource we're building with Sunlight is in the public domain, released under CC0, and makes it easy to contact members of Congress using online forms. The new action tool we're creating will be free software, so anyone can hack on and improve it. That means it will be customizable—the community can improve it and hold it to the high level of security that should be the standard for all infrastructure projects and tools for change. And it won’t just be for EFF: anybody can customize this system to contact Congress.
Thanks to our partners at Taskforce.is and the Sunlight Foundation, we’ve got a prototype of the new system ready.
Now, we need your help.
Calling all techs.
We’ve finished the basic backend for the new tool for contacting Congress, but now we need tech volunteers to help us complete the project.
Here’s the challenge: Each member of Congress has a special form that their own constituents can use to contact them. Each form is different: some require a CAPTCHA, some require a title, some require you to choose a topic from a dropdown list. Our new action center will let you connect directly to these Congressional forms for your elected officials whenever you want to submit a letter about an issue you care about. However, we need to write code to handle each member of Congress’s unique form.
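Under the hood, each member's form can be described declaratively and replayed by a generic submitter. The sketch below is illustrative only: the field names, schema, and member ID are hypothetical, not the project's actual format (the real repo defines its own YAML schema).

```python
# Illustrative sketch only: a declarative description of one member's contact
# form, replayed by a generic submitter. The schema, field names, and member
# ID below are hypothetical, not the project's real format.
FORM_SPECS = {
    "A000000": {  # hypothetical bioguide-style member ID
        "url": "https://example.house.gov/contact",
        "steps": [
            {"fill_in": "name", "value": "$NAME"},
            {"fill_in": "email", "value": "$EMAIL"},
            {"select": "topic", "value": "$TOPIC"},   # dropdowns vary per member
            {"fill_in": "message", "value": "$MESSAGE"},
            {"click_on": "submit"},
        ],
    },
}

def render_steps(spec, values):
    """Substitute a constituent's values into one member's form steps."""
    rendered = []
    for step in spec["steps"]:
        step = dict(step)
        if "value" in step and step["value"].startswith("$"):
            step["value"] = values[step["value"][1:]]
        rendered.append(step)
    return rendered

steps = render_steps(FORM_SPECS["A000000"],
                     {"NAME": "Jane Doe", "EMAIL": "jane@example.com",
                      "TOPIC": "Privacy", "MESSAGE": "Please support reform."})
```

Because each member's quirks live in data rather than code, a volunteer fixing one broken form never has to touch the submitter itself.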
To that end, we need volunteers to test the forms of each of the 500+ members of Congress. We’ve created a simple bookmarklet that you can install in your browser; then you visit our action center hub and test different members of Congress. It’s easy to use, and it takes 4-10 minutes to test a Congressional form and make sure it works.
How many volunteers do you need?
We’re looking for between 10 and 30 people who can commit time to this project. We’re hoping to find several people who can spend 4-5 hours on this, plus about 10 people who will be willing to spend one or two days on the project.
How technical do I need to be?
People contact EFF frequently with offers to help. “I want to help you,” they tell us. “I want to contribute more than just money. What can I do?”
This is it. We really need this system to work so that our voices can be heard in the halls of Congress. And we can only be successful if folks like you (yes you) step up and donate a few hours to help us finish this off.
There’s no tool currently available that would do what we want to do using secure, free software. With a system like this in place, EFF’s efficacy in advocating for your rights can increase dramatically.
We can’t do this without the support and engagement of our best supporters. Want to get involved? Email firstname.lastname@example.org.
It’s not hard and we’ll show you how.
We created these instructions (including video) on how to get started.
Most importantly, we’re available on IRC pretty much all the time. If you bump into problems, just let us know and we’ll try to troubleshoot. Find us on #opencongress on irc.freenode.net.
Ready to get involved? Send an email to email@example.com for more information or to sign up.
You can also check out the GitHub repo: https://github.com/unitedstates/contact-congress/
We want to show you some love.
The main reason to take part in this is because you want to help EFF and the Sunlight Foundation, and you believe that the world is a better place when everyday people can contact Congress simply and easily.
Nonetheless, we want to shower you with mountains of amazing swag to thank you for your help.
Here are the prize bundles for volunteers who make:
40+ commits to the project on GitHub:
- Our undying gratitude
- An EFF hat
150+ commits to the project on GitHub:
- Our undying gratitude
- 1 year EFF membership -- for yourself, as a gift for a friend, or in memory of someone who inspired you.
- An EFF hat
- An EFF sticker pack
- An EFF shirt
300+ commits to the project on GitHub:
- Our undying gratitude
- 1 year EFF membership -- for yourself, as a gift for a friend, or in memory of someone who inspired you.
- The famous EFF NSA Hoodie
- An EFF hat
- An EFF sticker pack
- An EFF shirt
- Free entry to any EFF-hosted party (typically, this is our Pioneer Awards and our birthday party, both of which are in San Francisco. Note that the DefCon party is hosted for EFF by someone else, so we cannot guarantee entry to that.)
- A public profile on the EFF website, under a soon-to-be-created ‘tech volunteers’ section.
We really need you. Please email firstname.lastname@example.org to let us know if you can help out.
Updates to the email privacy law called the Electronic Communications Privacy Act (ECPA) are long overdue. It's common sense that emails and other online private messages (like Twitter direct messages) are protected by the Fourth Amendment. But for a long time, the Department of Justice (DOJ) argued ECPA allowed it to circumvent the Fourth Amendment and access much of your email without a warrant. Thankfully, last year it finally gave up on that stance.
But now it appears that the Securities and Exchange Commission (SEC), the civil agency in charge of protecting investors and ensuring orderly markets, may be doing the same exact thing: it is trying to use ECPA to force service providers to hand over email without a warrant, in direct violation of the Fourth Amendment.
EFF and the Digital Due Process Coalition, a diverse coalition of privacy advocates and major companies, are fighting hard to push a common sense reform to ECPA. The law, passed in the 1980s before the existence of webmail, has been used to argue that emails older than 180 days may be accessed without a warrant based on probable cause. Instead, the agencies send a mere subpoena, which means that the agency does not have to involve a judge or show that the emails will provide evidence of a crime.
Contrary to the position taken by the DOJ, the courts, the public at large, and EFF, the SEC asserted last week that it can obtain emails with simple subpoenas issued under ECPA. The Chair of the SEC, Mary Jo White, tried to reassure Rep. Kevin Yoder that the SEC's "built-in privacy protections" make it OK. Unfortunately, Chair White wouldn't explain exactly what those "privacy protections" are. Rep. Yoder, the sponsor of HR 1852, the Email Privacy Act—a bill with over 200 cosponsors that updates ECPA—was rightfully dubious and tried, to no avail, to get the Chair to explain why the SEC thinks it can use ECPA to get around the Fourth Amendment.
The fact that your emails are stored electronically shouldn't mean they have any less protection than if they were printed on your desk. Many other agencies disagree with the SEC's approach and recognize that the Fourth Amendment covers all private communications—whether paper or electronic. It's time for the SEC to update its practices so that it's in line with the courts, public opinion, and other agencies.
It's also time for the White House to send a clear message to all of its executive agencies. Remember, the SEC consists of five presidentially appointed commissioners. Since November, the White House has failed to respond to a White House Petition demanding ECPA reform. The White House must pronounce loud and clear that it supports HR 1852, The Email Privacy Act, and that government agencies like the SEC should not be using ECPA as a run-around to the Fourth Amendment.
Many courts, including the Sixth Circuit in United States v. Warshak, have already ruled that emails and other private communications are protected by the Fourth Amendment. Members of Congress, including Senators Patrick Leahy and Ron Wyden and Representatives Kevin Yoder, Tom Graves, and Jared Polis, are pushing common sense reforms to ECPA like HR 1852, the Email Privacy Act. The bills are slowly making their way through Congress, but we can speed them up. Tell your Representative now to support HR 1852 so that we don't leave email privacy laws stuck in the 1980s.
Related Issues: Privacy
Let’s just imagine we could transport an Internet-connected laptop back to the 1790s, when the United States was in its infancy. The technology would no doubt knock the founders out of their buckle-top boots, but once the original patriots got over the initial shock and novelty (and cleared up Wikipedia controversies, hosted an AMA, and boggled over Dogecoin), the sense of marvel would give way to alarm as they realized how electronic communications could be exploited by a tyrant, such as the one from which they had just freed themselves.
As America’s first unofficial chief technologist, Benjamin Franklin would be the first to recognize the danger and take to trolling the message boards with his famous sentiment: Those who would trade liberty for safety deserve neither. (And he’d probably troll under a fake handle, using Tor, since the patriots understood that some truths are best told with anonymity.)
Today the Tea Party movement aspires to continue the legacy of the founders by championing the rights guaranteed by the Constitution and Bill of Rights. Never afraid of controversy, Tea Party activists and elected leaders are fighting against mass surveillance in the courts and in the halls of state legislatures and Congress.
Each year on April 15, Americans pay taxes that keep the government running. It’s a time for reflecting upon whether that money is funding a government for the people, or a government that is undermining the people, supposedly for their own good. After a watershed year of newly disclosed information about the National Security Agency, the Tea Party has plenty to protest about.
How the Founders Fought Mass Surveillance
Mass surveillance was not part of the original social contract—the terms of service, if you will—between Americans and their government. Untargeted surveillance is one reason we have an independent country today.
Under the Crown’s rule, English officials used writs of assistance to indiscriminately “enter and go into any house, shop, cellar, warehouse, or room or other place and, in case of resistance, to break open doors, chests, trunks, and other package there” in order to find tax evaders. Early patriot writers, such as James Otis Jr. and John Dickinson, railed against these general warrants, and it was this issue, among other oppressive conditions, that inspired the Declaration of Independence and the Fourth Amendment.
James Madison drafted clear language guaranteeing the rights of Americans, and it bears reading again in full:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
Centuries later, the principle still applies, whether we’re talking about emails or your mobile phone. As the Tea Party activists at FreedomWorks told us when we consulted them for this post: the Fourth Amendment does not stop at technology’s door.
(For a more in-depth historical review, check out former EFF legal intern David Snyder's essay, "The NSA's 'General Warrants': How the Founding Fathers Fought an 18th Century Version of the President's Illegal Domestic Spying.")
Tea Party vs. Big Brother
The Tea Party movement is closely associated with the right to bear arms, religious rights, and tax freedom. But, as Brian Brady, a prolific Tea Party activist in San Diego County whom we also consulted, said: the movement must embrace the Constitution as a whole. Threats to privacy, he says, are also threats to freedom of speech, religion, and association. Property rights mean nothing if the government can search your home or computer without probable cause.
In other words, mass surveillance is a manifestation of big government.
Tea Party activists don’t shy away from confrontations that may put them at odds with other groups (particularly on the left), but no one can deny that on the subject of mass surveillance, the movement is on the frontlines protecting every American’s rights.
TechFreedom and gun-rights groups, such as the CalGuns Foundation and the Franklin Armory (named after Ben), have joined unlikely allies such as Greenpeace and People for the American Way to sue the NSA. Represented by EFF, the plaintiffs argue that collecting phone metadata (your number, who you called, when and for how long you spoke) chills these groups’ ability to associate freely, as guaranteed by the First Amendment as well as the Fourth Amendment. FreedomWorks and Sen. Rand Paul have also filed a class action lawsuit against the NSA on similar grounds.
Conservative attorney and founder of Judicial Watch Larry Klayman was the first plaintiff to challenge the program's unconstitutionality. So far, his lawsuit in Washington, D.C. has been successful. In December, the federal judge in the case wrote, “I cannot imagine a more ‘indiscriminate’ and ‘arbitrary invasion’ than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying it and analyzing it without judicial approval.”
Tea Party-affiliated lawmakers have also been pushing back against mass surveillance with a variety of bipartisan legislative reforms; Rep. Justin Amash, for example, came within a few votes of cutting the NSA’s telephone metadata program funding with a budget amendment last July. State legislators who align with the Tea Party have also sponsored bills across the country condemning the NSA, from California State Sen. Joel Anderson’s successful resolution calling for an end to the call records program to Michigan Rep. Tom McMillin’s call for the Department of Justice to prosecute Director of National Intelligence James Clapper for misleading Congress.
Tax, Spend and Surveil
Reason magazine has an excellent essay about IRS and privacy, outlining how the IRS obtains, scours and fails to secure personal data collected from taxpayers, while tax-reform advocate Grover Norquist wrote a worthwhile op-ed in The Daily Caller today about how the IRS exploits the outdated Electronic Communications Privacy Act. But it’s also important to consider that the taxes the government collects ultimately fund the surveillance state. “No taxation without representation” was the rallying cry of the American revolution, and yet here we are today, with the NSA conducting surveillance without adequate checks and balances. Members of Congress complain that they haven’t been properly briefed on the NSA’s programs and judicial approval of these programs is conducted by a secret court that only hears the government’s side of the story. On the local level, law enforcement agencies are adopting new surveillance technologies such as automatic license plate readers, facial recognition and Stingrays with little public input or other oversight.
On the whole, maintaining the mass surveillance state is expensive. There are 17 (that’s right, 17) different federal agencies that are part of the “intelligence community,” each of them involved in various, interconnected forms of surveillance. Some would say there is little concrete evidence of how it has made us safer, but there’s plenty of concrete evidence of how much it has cost. The bottom line? We’re paying the government to unreasonably intrude on our lives. The budget for intelligence in 2013 was $52.6 billion. Of that, $10.8 billion went to the NSA. That’s approximately $167 per person in the United States.
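That per-person figure is simple division (the roughly 315 million population is our assumption for 2013):

```python
# Back-of-the-envelope check of the per-person figure.
# The ~315 million U.S. population is an assumption for 2013.
intel_budget = 52.6e9   # total 2013 intelligence budget, in dollars
nsa_budget = 10.8e9     # the NSA's share of that budget
population = 315e6

per_person = intel_budget / population
print(round(per_person))  # ≈ 167
```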
For a prime example of the wasteful spending, one need only read Sen. Tom Coburn’s report, “Safety at Any Price,” which outlined the inappropriate spending done under the Department of Homeland Security’s grant program (such as paying for “first responders to attend a HALO Counterterrorism Summit at a California island spa resort featuring a simulated zombie apocalypse”). This followed on the heels of a harsh bipartisan Senate report criticizing the extreme waste at fusion centers around the country. Federal funds were used to purchase big screen TVs, decked-out SUVs, and miniature cameras. To make matters worse, the report found that fusion centers violated civil liberties and produced little information of any use.
Mass surveillance is a symptom of uncontrolled government overreach. The question is what’s the cure?
Defending Privacy is a Patriotic Duty
While every single person has cause to be alarmed by surveillance, those who criticize government policies have particular reason to be concerned. Those who have new, or not yet popular ideas (or, in the case of the Tea Party, old and popular ideas in resurgence) are often targets of overreaching surveillance. It’s not a partisan issue; it’s a constitutional issue.
Activism is most effective when it happens at the personal, local, and national levels, and the Tea Party has proven it knows how to make a ruckus, whether it’s on a personal blog or outside the White House. America needs the Tea Party to keep applying that patriotic passion to NSA reform.
We have also just created a new collection of resources for grassroots activists, including tips on how to organize public events and use the media to spread the word about your issues, as well as a collection of one-page informational sheets that make it easy to explain these issues. And above all, speak out. Help us stop bills that attempt to legalize mass surveillance and join us in demanding real reform.
Stopping mass surveillance—it’s what the first patriots did, and it’s what today’s patriots are doing right now.
Related Issues: Privacy, NSA Spying
After seven years of litigation, the basic contours of the Digital Millennium Copyright Act (DMCA) safe harbors should be pretty well established. Unfortunately, a new front may have opened up in a case called Gardner v. CafePress, thanks to a mistaken and dangerous misreading of Section 512.
With the invaluable assistance of Venkat Balasubramani, EFF, joined by the Center for Democracy and Technology, the Computer & Communications Industry Association, and Public Knowledge, has filed an amicus brief in that case. In our brief, we explain our deep concerns about how that recent ruling could have profound consequences for user-generated content sites.
CafePress is a platform that allows users to set up online shops to sell custom physical goods like clothing and stationery. The lawsuit was filed by photographer Steven Gardner, whose wildlife images were included on a user's sales page. CafePress had asked the court to resolve the case as a matter of law (also called summary judgment) because it believed it was clearly protected by the DMCA's safe harbors. The court denied that request, concluding that it could not be sure that CafePress was protected by the DMCA.
Our brief explains why that was a dangerous decision for online speech and innovation. We focus on two issues in particular: (1) the court’s interpretation of the term “service provider”; and (2) the court’s suggestion that image metadata might qualify as a “standard technical measure” under the DMCA—which would mean CafePress's automated stripping of metadata from photos would jeopardize the availability of safe harbor protections. The court could have resolved these arguments in CafePress’s favor as a matter of law. By forcing the parties to go trial on these issues, the court may undermine the purpose of the DMCA safe harbors.
On the first point, it appears that the court conflated CafePress’s online and offline activities as a website and as a producer of physical goods, and adopted a cramped definition of “service provider” that has long since been rejected by numerous courts.
On the second point, the court clearly misunderstood the definition of a “standard technical measure.” This point is pretty technical, but it has serious implications because service providers are required to comply with “standard technical measures” in order to enjoy the legal protections of the DMCA safe harbors.
A standard technical measure, in the sense of DMCA § 512(i), is one that is “used by copyright owners to identify or protect copyrighted works” and “has been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process;” is “available to any person on reasonable and nondiscriminatory terms;” and does not “impose substantial costs on service providers or substantial burdens on their systems or networks.”
However, no broad consensus has ever emerged as to any such measure, with respect to metadata or any other technical artifact. In fact, with respect to metadata, industry practices show there is no such consensus: service providers commonly strip metadata from uploaded images. Without a consensus standard, there can be no "technical measure" that a website is required to honor.
And a good thing too. From our brief:
Casting doubt on the practice of removing metadata may also put users at risk. ... Stripping metadata from uploaded images helps protect users’ privacy and security, and should not be discouraged.
But even though there is no broad industry consensus to treat image metadata as a "standard technical measure" for copyright enforcement, the court seems to have made metadata removal a ticket to trial. That's bad news.
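The stripping at issue is usually done by rewriting the file without its metadata segments. As a minimal illustration (a sketch, not any provider's actual pipeline): EXIF data lives in a JPEG's APP1 segment, which can be dropped while walking the file's segment list.

```python
import struct

def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 (EXIF) segments from a JPEG, copying everything else.

    Minimal sketch: assumes a well-formed file and stops parsing at the
    start-of-scan marker, after which raw image data follows.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(jpeg[:2])       # keep the SOI marker
    i = 2
    while i < len(jpeg) - 1:
        if jpeg[i] != 0xFF:
            out += jpeg[i:]          # unexpected bytes; copy the rest verbatim
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:           # start-of-scan: image data follows
            out += jpeg[i:]
            break
        (length,) = struct.unpack(">H", jpeg[i + 2:i + 4])
        if marker != 0xE1:           # keep every segment except APP1/EXIF
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Real services typically go further, re-encoding the image entirely, which removes all ancillary segments at once.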
Heads up: this case has flown under the radar, but a wrong decision on these points could end up shrinking the effective contours of DMCA safe harbors. Online service providers have a very strong incentive to stay inside those boundaries: the staggering quantity of user-generated content uploaded combined with ridiculously large statutory damages and litigation costs mean any risk of ambiguity is serious.
Service providers need well-established legal safe harbors, because those safe harbors create the space within which new platforms can develop and thrive. That’s good for user speech, and good for online innovation. We hope the court agrees.
Files: cafepress_amicus_curiae_brief.pdf
Related Issues: Fair Use and Intellectual Property: Defending the Balance, DMCA
The DC Circuit Court of Appeals heard argument today in AF Holdings v. Does 1-1058, one of the few mass copyright cases to reach an appellate court, and the first to specifically raise the fundamental procedural problems that tilt the playing field firmly against the Doe Defendants. The appeal was brought by several internet service providers (Verizon, Comcast, AT&T and affiliates), with amicus support from EFF, the ACLU, the ACLU of the Nation's Capital, Public Citizen, and Public Knowledge. On the other side: notorious copyright troll Prenda Law.
Copyright trolls like Prenda want to be able to sue thousands of people at once in the same court – even if those defendants have no connection to the venue or each other. The troll asks the court to let it quickly collect hundreds of customer names from ISPs. It then shakes those people down for settlements. These Doe defendants have a strong incentive to pay nuisance settlements rather than travel to a distant forum to defend themselves. The copyright troll business model relies on this unbalanced playing field.
In this case, Prenda sued 1058 Does (anonymous defendants identified only by an IP address) in federal district court in the District of Columbia. It then issued subpoenas demanding that ISPs identify the names of these customers. The ISPs objected to this request arguing that most of the IP addresses were associated with computers located outside of the court's jurisdiction. The ISPs and EFF also showed that Prenda could have used simple geolocation tools to determine the same thing. And we explained that joining together 1000+ subscribers in one lawsuit was fundamentally unfair and improper under the rules governing when defendants can be sued together (known as ‘joinder’).
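The geolocation check is straightforward to sketch. In practice it would query a GeoIP database (MaxMind's, for example); here a plain dict stands in for that lookup, and the addresses are documentation IPs, not real Does:

```python
# Sketch of the jurisdiction check the ISPs describe. A real implementation
# would query a GeoIP database (e.g. MaxMind's); a plain dict stands in for
# that lookup here, and the IP addresses are illustrative only.
def within_venue(ip_addresses, geo_lookup, venue):
    """Return only the IPs that geolocate to the court's venue."""
    return [ip for ip in ip_addresses if geo_lookup.get(ip) == venue]

lookup = {"192.0.2.1": "DC", "192.0.2.2": "CA", "192.0.2.3": "TX"}
dc_does = within_venue(["192.0.2.1", "192.0.2.2", "192.0.2.3"], lookup, "DC")
# Most Does would drop out of the case before a single subpoena issued.
```

Geolocation is imprecise at the street level, but at the state or district level it is more than accurate enough to show that most of 1,058 Does were outside the court's reach.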
Unfortunately, the district court did not agree, holding that any consideration of joinder and jurisdiction was "premature." In other words, the court can't consider whether the process is unfair unless and until a Doe comes to the court to raise the issue. By then, of course, it is too late; the subscribers will have already received threatening letters and, in many cases, be reluctant to take on the burden of defending themselves in a far away location.
We believe this ruling was fundamentally wrong. As we've said many times, plaintiffs have every right to go to court to enforce their rights. But they must play by the same litigation rules that everyone else has to follow. To get early discovery, plaintiffs must have a good-faith belief that jurisdiction and joinder are proper. Given the evidence presented to the district court, there is no way Prenda could have formed this good faith belief. So its demand for customer information should have been denied.
The ISPs appealed the district court’s troubling ruling. At the hearing today, the appellate court was particularly interested in the issue of joinder. The court seemed immediately skeptical of the notion of suing 1000 people at once, but wondered if it might be acceptable to join together 20 BitTorrent users who had joined the same swarm to acquire the same work. The ISPs and amici said generally no, because the plaintiff can't know whether a given Doe 1 acquired anything from a given Doe 2 – in other words, they aren't necessarily part of the same "transaction or occurrence." We analogized a BitTorrent swarm to a casino poker table: over the course of a weekend, a week, or a month, players may come and go, adding to and subtracting from the pot, but the players on day one are unlikely to be related to the players on day four, or day 30.
The ISPs and amici also stressed the issue of burden. While the ISPs were focused on the burden they faced in responding to the subpoenas, EFF directed the court's attention to the fundamental burden on the IP subscribers, noting that the subscribers identified as a result of a subpoena aren't necessarily going to be responsible for any unauthorized activity. An IP address, we explained, only tells you the name on the bill, not who is using the account. In this context, it is crucial that courts attend to the burden on the Does, as well as the ISPs.
The court had a number of questions regarding jurisdiction, and directed many of them to counsel for AF Holdings, Paul Duffy. At root, the court seemed to want to know why AF Holdings had not used geolocation tools to help determine where its targets might be located, and why it had not dropped its effort to pursue many of them when the ISPs explained that the Does just weren't in the court's jurisdiction. Finally, the court had some questions about AF Holdings' litigation tactics, including the shenanigans that have been widely reported elsewhere.
It is difficult to predict how a court will rule based only on a hearing. But we are encouraged that the judges asked the important and thoughtful questions, and clearly understood both the context and implications of their decision. Many district courts have now concluded that the copyright troll business model is fundamentally unfair, and have taken steps to ensure the judicial process is not abused to foster a shakedown scheme. Let's hope they will soon be joined by the DC Circuit Court of Appeals.
Related Issues: Fair Use and Intellectual Property: Defending the Balance, Copyright Trolls
Related Cases: AF Holdings v. Does
New documents released by the FBI show that the Bureau is well on its way toward its goal of a fully operational face recognition database by this summer.
EFF received these records in response to our Freedom of Information Act lawsuit for information on Next Generation Identification (NGI)—the FBI’s massive biometric database that may hold records on as much as one third of the U.S. population. The facial recognition component of this database poses real threats to privacy for all Americans.
What is NGI?
NGI builds on the FBI’s legacy fingerprint database—which already contains well over 100 million individual records—and has been designed to include multiple forms of biometric data, including palm prints and iris scans in addition to fingerprints and face recognition data. NGI combines all these forms of data in each individual’s file, linking them to personal and biographic data like name, home address, ID number, immigration status, age, race, etc. This immense database is shared with other federal agencies and with the approximately 18,000 tribal, state and local law enforcement agencies across the United States.
The records we received show that the face recognition component of NGI may include as many as 52 million face images by 2015. By 2012, NGI already contained 13.6 million images representing between 7 and 8 million individuals, and by the middle of 2013, the size of the database increased to 16 million images. The new records reveal that the database will be capable of processing 55,000 direct photo enrollments daily and of conducting tens of thousands of searches every day.
NGI Will Include Non-Criminal as well as Criminal Photos
One of our biggest concerns about NGI has been the fact that it will include non-criminal as well as criminal face images. We now know that the FBI projects that by 2015, the database will include 4.3 million images taken for non-criminal purposes.
Currently, if you apply for any type of job that requires fingerprinting or a background check, your prints are sent to and stored by the FBI in its civil print database. However, the FBI has never before collected a photograph along with those prints. This is changing with NGI. Now an employer could require you to provide a “mug shot” photo along with your fingerprints. If that’s the case, then the FBI will store both your face print and your fingerprints along with your biographic data.
In the past, the FBI has never linked the criminal and non-criminal fingerprint databases. This has meant that any search of the criminal print database (such as to identify a suspect or a latent print at a crime scene) would not touch the non-criminal database. This will also change with NGI. Now every record—whether criminal or not—will have a “Universal Control Number” (UCN), and every search will be run against all records in the database. This means that even if you have never been arrested for a crime, if your employer requires you to submit a photo as part of your background check, your face image could be searched—and you could be implicated as a criminal suspect—just by virtue of having that image in the non-criminal file.
Many States Are Already Participating in NGI
The records detail the many states and law enforcement agencies the FBI has already been working with to build out its database of images (see map below). By 2012, nearly half of U.S. states had at least expressed an interest in participating in the NGI pilot program, and several of those states had already shared their entire criminal mug shot database with the FBI. The FBI hopes to bring all states online with NGI by this year.
The FBI worked particularly closely with Oregon through a special project called “Face Report Card.” The goal of the project was to determine and provide feedback on the quality of the images that states already have in their databases. Through Face Report Card, examiners reviewed 14,408 of Oregon’s face images and found significant problems with image resolution, lighting, background and interference. Examiners also found that the median resolution of images was “well-below” the recommended resolution of .75 megapixels (in comparison, newer iPhone cameras are capable of 8 megapixel resolution).
FBI Disclaims Responsibility for Accuracy
At such a low resolution, it is hard to imagine that identification will be accurate.1 However, the FBI has disclaimed responsibility for accuracy, stating that “[t]he candidate list is an investigative lead not an identification.”
Because the system is designed to provide a ranked list of candidates, the FBI states NGI never actually makes a “positive identification,” and “therefore, there is no false positive rate.” In fact, the FBI only ensures that “the candidate will be returned in the top 50 candidates” 85 percent of the time “when the true candidate exists in the gallery.”
It is unclear what happens when the “true candidate” does not exist in the gallery—does NGI still return possible matches? Could those people then be subject to criminal investigation for no other reason than that a computer thought their face was mathematically similar to a suspect’s? This doesn’t seem to matter much to the FBI—the Bureau notes that because “this is an investigative search and caveats will be prevalent on the return detailing that the [non-FBI] agency is responsible for determining the identity of the subject, there should be NO legal issues.”
Nearly 1 Million Images Will Come from Unexplained Sources
One of the most curious things to come out of these records is the fact that NGI may include up to 1 million face images in two categories that are not explained anywhere in the documents. According to the FBI, by 2015, NGI may include:
- 46 million criminal images
- 4.3 million civil images
- 215,000 images from the Repository for Individuals of Special Concern (RISC)
- 750,000 images from a "Special Population Cognizant" (SPC) category
- 215,000 images from "New Repositories"
However, the FBI does not define either the “Special Population Cognizant” database or the "new repositories" category. This is a problem because we do not know what rules govern these categories, where the data comes from, how the images are gathered, who has access to them, or whose privacy is impacted.
A 2007 FBI document available on the web describes SPC as “a service provided to Other Federal Organizations (OFOs), or other agencies with special needs by agreement with the FBI” and notes that “[t]hese SPC Files can be specific to a particular case or subject set (e.g., gang or terrorist related), or can be generic agency files consisting of employee records.” If these SPC files and the images in the "new repositories" category are assigned a Universal Control Number along with the rest of the NGI records, then these likely non-criminal records would also be subject to invasive criminal searches.
Government Contractor Responsible for NGI has built some of the Largest Face Recognition Databases in the World
The company responsible for building NGI’s facial recognition component—MorphoTrust (formerly L-1 Identity Solutions)—is also the company that has built the face recognition systems used by approximately 35 state DMVs and many commercial businesses.2 MorphoTrust built and maintains the face recognition systems for the Department of State, which has the “largest facial recognition system deployed in the world” with more than 244 million records,3 and for the Department of Defense, which shares its records with the FBI.
The FBI failed to release records discussing whether MorphoTrust uses a standard (likely proprietary) algorithm for its face templates. If it does, it is quite possible that the face templates at each of these disparate agencies could be shared across agencies—raising again the issue that the photograph you thought you were taking just to get a passport or driver’s license is then searched every time the government is investigating a crime. The FBI seems to be leaning in this direction: an FBI employee email notes that the “best requirements for sending an image in the FR system” include “obtain[ing] DMV version of photo whenever possible.”
Why Should We Care About NGI?
There are several reasons to be concerned about this massive expansion of governmental face recognition data collection. First, as noted above, NGI will allow law enforcement at all levels to search non-criminal and criminal face records at the same time. This means you could become a suspect in a criminal case merely because you applied for a job that required you to submit a photo with your background check.
Second, the FBI and Congress have thus far failed to enact meaningful restrictions on what types of data can be submitted to the system, who can access the data, and how the data can be used. For example, although the FBI has said in these documents that it will not allow non-mug shot photos such as images from social networking sites to be saved to the system, there are no legal or even written FBI policy restrictions in place to prevent this from occurring. As we have stated before, the Privacy Impact Assessment for NGI’s face recognition component hasn’t been updated since 2008, well before the current database was even in development. It therefore cannot address all the privacy issues implicated by NGI.
Finally, even though the FBI claims that its ranked candidate list prevents the problem of false positives (someone being falsely identified), this is not the case. A system that only purports to provide the true candidate in the top 50 candidates 85 percent of the time will return a lot of images of the wrong people. We know from researchers that the risk of false positives increases as the size of the dataset increases—and, at 52 million images, the FBI’s face recognition database is a very large dataset. This means that many people will be presented as suspects for crimes they didn’t commit. This is not how our system of justice was designed, and it is not a system Americans should tacitly consent to moving toward.
For more on our concerns about the increased role of face recognition in criminal and civil contexts, read Jennifer Lynch’s 2012 Senate Testimony. We will continue to monitor the FBI’s expansion of NGI.
Here are the documents:
- 1. In fact, another document notes that “since the trend for the quality of data received by the customer is lower and lower quality, specific research and development plans for low quality submission accuracy improvement is highly desirable.”
- 2. MorphoTrust’s parent company, Safran Morpho, describes itself as “[t]he world leader in biometric systems,” and is largely responsible for implementing India’s Aadhaar project, which will ultimately collect biometric data from nearly 1.2 billion people.
- 3. One could argue that Facebook’s is larger. Facebook states that its users have uploaded more than 250 billion photos. However, Facebook never performs face recognition searches on that entire 250 billion photo database.
Friday, April 4th was 404 Day - a day meant to call attention to Internet censorship in public schools and libraries in the United States. This censorship is the result of a well-meaning but misguided law, the Children's Internet Protection Act (CIPA), which ties federal funding for public schools and libraries to requirements to filter child pornography and content that is obscene or "harmful to minors." Unfortunately, bad and secretive filtering technology and over-aggressive filtering implementations result in the filtering of constitutionally-protected speech, among other problems.
The day centered around a digital teach-in for an in-depth discussion of the issues, featuring: Deborah Caldwell-Stone, Director of Intellectual Freedom at the American Library Association; Chris Peterson from MIT's Center for Civic Media and the National Coalition Against Censorship; and Sarah Houghton, blogger and Director of the San Rafael Public Library in Northern California.
They addressed such issues as the cost and efficacy of these filters, the lack of transparency around what is filtered, and how you can ask your librarian to turn them off. The video, above, is a fantastic resource for beginning to understand problems CIPA creates.
Concurrently, a discussion raged on Twitter around the hashtag #404day, as users, including Senator Ron Wyden, asked questions and shared their own experiences with filtering software in libraries and schools.
When the screen reads “404 Error – Not Found” we need to recognize that one of the things which is not being found is the values of libraries.
— Ron Wyden (@RonWyden) April 4, 2014
Many of those participating in the online discussion noted the futility of filtering, described how they had learned to circumvent filters at early ages, and brought up how the filters disproportionately affect low-income communities and those who rely on public computer access.
Throughout the day, librarians, researchers, teachers, and even a student blogged about how CIPA hinders their work, stifles speech, and runs counter to the ideals of public libraries. From an explainer about the censorship reporting tool Herdict to the experiences of a researcher unable to access material she needed to the manifesto of high school librarian preferring trust and education to blocking, the posts illustrated the personal and social harms of censorship under CIPA.
We're thrilled about the discussion the day engendered and thankful to our partners at the National Coalition Against Censorship and the Center for Civic Media at MIT, the teach-in participants, and all those who joined in blogging or tweeting throughout the day. The next time you get a 404 error at the library, we hope you think about why it's there and ask your librarian whether it's because of filtering and to turn the filtering off if it is.
"Australia is ready for, and needs, a fair use exception now." These were the unambiguous words of the Australian Law Reform Commission's report investigating how to modernize the country's copyright laws. Specifically, the Commission called for a fair use doctrine that resembles that of the U.S., with the same four-factor balancing test.
So then you might expect that when George Brandis, Australia's Attorney General, makes his first official trip to the United States—the one he concluded just days ago—he would take the opportunity to meet with American experts on fair use. They could discuss the areas where the law has proven flexible in accommodating unforeseen uses, how the balance between specificity and flexibility is continuously struck, and what he might hope to bring back to his home country.
You might be disappointed to learn, then, that despite the straightforwardness of the Commission's recommendation, Brandis has pointedly refused to explore the idea of fair use in Australia. Though he received the Commission's report in November, he waited until February to publish it—and even then, only alongside his own misguided proposal: that Australia should establish a three-strikes-style graduated response program.
Along those lines, instead of meeting with copyright scholars and fair use experts on this week's trip to Washington, DC, Brandis met with the executive director of the Center for Copyright Information—the organization behind the U.S. graduated response system known as "Six Strikes," or the Copyright Alert System.
In terms of evidence-based policy making, this is a failure. For one thing, he needn't come all the way to the U.S. to find out how graduated response programs work (or don't). Australian copyright scholar Rebecca Giblin has conducted an exhaustive study on the effect of these programs and found "remarkably little evidence" that they were effective in reducing infringement, increasing legitimate markets, or improving access to knowledge and culture.
But more broadly, the fact that the rest of Brandis's agenda consisted of meetings with senior officials at intelligence agencies like the NSA, FBI, and CIA, raises major red flags for user privacy. And indeed, politicians in Australia have recently re-introduced mandatory data retention proposals for Internet service providers, after similar proposals suffered defeat just last year. Perhaps unsurprisingly, these proposals have the backing of Attorney General Brandis, who has repeatedly defended NSA spying during parliamentary question time.
Brandis may consider increased surveillance to be a two-for-one special: take some visible action to look strong on national security and at the same time appease the legacy content industries that want to make Internet companies snoop on their users.
Australians should demand better. Reforms that would empower users, like fair use, merit serious consideration. An obsessive devotion to mass surveillance at home and abroad does not.
San Francisco - The Electronic Frontier Foundation (EFF) is urging a federal appeals court to reconsider its decision to order Google to take down the controversial "Innocence of Muslims" video while a copyright lawsuit—based on a claim that the Copyright Office itself has rejected—is pending. As EFF explains, the decision sets a dangerous precedent that could have disastrous consequences for free speech.
"Innocence of Muslims" sparked protests worldwide in the fall of 2012. For a time, its anti-Islamic content was even linked to the violent attack on an American diplomatic compound in Benghazi, Libya, although that was later refuted. An actress named Cindy Lee Garcia, after being tricked into appearing in the film for just five seconds, claimed she held a copyright in that performance. She sued Google for copyright infringement and asked the court to order Google to take the video offline. The district court refused, noting that it could not restrain speech based on nothing more than a highly debatable copyright claim. On appeal, a three-judge panel of the United States Court of Appeals for the Ninth Circuit agreed that the copyright claim was not strong, but nonetheless ordered Google to take down all copies of the video. It even issued a gag order, preventing Google from talking about the controversial decision for a full week.
"This video is a matter of extreme public concern–the center of a roiling, global debate," EFF Intellectual Property Director Corynne McSherry said. "The injunction in place now means we can still talk about the video–but we can't see what we are actually talking about. While the injunction stretched the First Amendment beyond its intent, the gag order snapped it in half. It delayed the public and the press from discovering this unprecedented copyright decision, and prevented others from challenging the ruling."
In an amicus brief filed today, EFF argues that the full appeals court must reconsider the earlier decision in order to protect free speech in the debate over the film and also to safeguard the future of free expression online.
"This decision means that any number of creative contributors–from actors to makeup artists to set designers–could be entitled to royalties and even control over the distribution of works they were paid to contribute to," said EFF Staff Attorney Nate Cardozo. "Such a rule would stifle creative expression for big studios and amateur filmmakers alike. While we can understand Garcia's desire to distance herself from this film, copyright law is not designed to address the harm she suffered by suppressing the global debate on a matter of public concern."
The American Civil Liberties Union, Public Knowledge, the Center for Democracy and Technology, New Media Rights, the American Library Association, the Association of College and Research Libraries, and the Association of Research Libraries joined EFF in this brief.
For the full amicus brief:
For more on Garcia v. Google:
Washington, DC - The Electronic Frontier Foundation (EFF) will ask a federal appeals court at a hearing on Monday, April 14, to prevent a notorious copyright troll from obtaining the identities of more than 1,000 Internet users.
Speaking on behalf of EFF, the American Civil Liberties Union, the ACLU of the Nation's Capital, Public Citizen and Public Knowledge, EFF Intellectual Property Director Corynne McSherry will urge the Court of Appeals for the District of Columbia to reverse a district court decision that allowed the plaintiff to seek identifying information for thousands of "John Does" without complying with basic procedural rules.
The coalition of public interest groups filed an amicus brief in May 2013 in support of several Internet service providers that are resisting subpoenas for user records. Representatives for those providers will offer the principal argument. However, the court took the unusual step of allowing amici to appear and argue as well.
AF Holdings, the plaintiff in the case, is seeking the identities of individuals that it claims may have illegally downloaded a copyrighted adult film. The case is one of hundreds being pursued around the country that follow the same pattern, which judges have described as "essentially an extortion scheme." A copyright troll looks for IP addresses that may have been used to download films (usually adult films) via BitTorrent, files a single lawsuit against thousands of "John Doe" defendants based on those IP addresses, then seeks to subpoena the ISPs for the contact information of the account holders associated with those IP addresses. The troll then uses that information to contact the account holders and threatens expensive litigation if they do not settle promptly. Faced with the prospect of hiring an attorney and litigating the issue, often in a distant court, most subscribers—including those who may have done nothing wrong—will choose to settle rather than fight.
AF Holdings is linked to Prenda Law, a firm that is facing allegations that it used stolen identities and fictitious signatures on key legal documents and made other false statements to the courts. AF Holdings will have an opportunity to address the court but has so far not designated a representative for the hearing.
WHAT: Oral Argument in AF Holdings v. Does
WHO: Corynne McSherry, Intellectual Property Director, EFF
Benjamin Fox, Partner, Morrison & Foerster LLP, counsel for ISPs
WHERE: U.S. Court of Appeals for the District of Columbia Circuit
625 Indiana Ave NW, Washington, DC 20004
WHEN: Monday, April 14, 2014, 9:30 A.M. EDT
For more information on our case, including the amicus brief: https://www.eff.org/cases/af-holdings-v-does
San Francisco - A federal appeals court overturned the conviction of Andrew "weev" Auernheimer, the computer researcher who was charged with violating the Computer Fraud and Abuse Act (CFAA) after he exposed a massive security flaw in AT&T's website.
Auernheimer was represented on appeal by the Electronic Frontier Foundation (EFF), Professor Orin Kerr of George Washington University, and attorneys Marcia Hofmann and Tor Ekeland. In an opinion issued this morning by the U.S. Court of Appeals for the Third Circuit, Judge Michael Chagares wrote that the government should not have charged Auernheimer in New Jersey, which had no direct connection to AT&T or Auernheimer.
"We're thrilled that the Third Circuit reversed Mr. Auernheimer's conviction," EFF Staff Attorney Hanni Fakhoury said. "This prosecution presented real threats to security research. Hopefully this decision will reassure that community."
In 2010, Auernheimer's co-defendant, Daniel Spitler, discovered that AT&T had configured its servers to make the email addresses of iPad owners publicly available on the Internet. Spitler wrote a script and collected roughly 114,000 email addresses as a result of the security flaw. Auernheimer then distributed the list of email addresses to media organizations as proof of the vulnerability, ultimately forcing AT&T to acknowledge and fix the security problem.
Federal prosecutors charged Auernheimer and Spitler with identity theft and conspiracy to violate the CFAA in New Jersey federal court. Spitler accepted a plea deal, while Auernheimer unsuccessfully fought the charges in a jury trial. Auernheimer began serving a 41-month prison sentence in March 2013.
On appeal, Auernheimer's defense team argued that accessing a publicly available website does not constitute unauthorized access to a computer under the CFAA. They also argued that Auernheimer should not have been charged in New Jersey. At the time they were obtaining email addresses, Auernheimer was in Arkansas, Spitler was in California and AT&T's servers were in Georgia and Texas.
The court agreed with Auernheimer that charging the case in New Jersey was improper, reversed his conviction, and ordered him released from prison. Although it did not directly address whether accessing information on a publicly available website violates the CFAA, the court suggested that there may have been no CFAA violation, since no code-based restrictions on access had been circumvented.
"Today's decision is important beyond weev's specific case," added Fakhoury. "The court made clear that the location of a criminal defendant remains an important constitutional limitation, even in today's Internet age."
For the opinion: https://www.eff.org/document/appellate-court-opinion
What is a warrant canary?
A warrant canary is a colloquial term for a regularly published statement that a service provider has not received legal process that it would be prohibited from saying it had received. Once a service provider does receive legal process, the speech prohibition goes into place, and the canary statement is removed.
Warrant canaries are often provided in conjunction with a transparency report, listing the legal process the service provider can publicly say it received over the course of a particular time period. The name refers to the canaries once used to provide warnings in coal mines: the birds would sicken from carbon monoxide before the miners did, warning of the danger.
How might a warrant canary work in practice?
An ISP might issue a semi-annual transparency report stating that it had not received any national security letters in the six-month period. NSLs come with a gag, which purports to prevent the recipient from saying it has received one. (While a federal court has ruled that the NSL gag is unconstitutional, that order is currently stayed pending the government’s appeal.) When the ISP issues a subsequent transparency report without that statement, the reader may infer from the silence that the ISP has now received an NSL.
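The inference is purely mechanical: the reader compares consecutive reports and watches for the statement to disappear. The sketch below illustrates this; the canary wording and report text are hypothetical, not any provider's actual language.

```python
# Hypothetical canary statement -- real providers choose their own wording.
CANARY = "We have not received any national security letters."

def canary_triggered(previous_report: str, current_report: str) -> bool:
    """A canary is 'triggered' when the statement was present in the
    previous transparency report but is absent from the current one;
    the reader infers receipt of gagged process from the silence."""
    return CANARY in previous_report and CANARY not in current_report

# The statement quietly disappears between reports: canary triggered.
print(canary_triggered(f"H1 2013 report. {CANARY}", "H2 2013 report."))  # True
# Still present in both: nothing to infer.
print(canary_triggered(f"H1 report. {CANARY}", f"H2 report. {CANARY}"))  # False
```

Note that the canary only ever conveys one bit of information, and only by omission, which is what makes it a plausible workaround for a gag on affirmative speech.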
Why would an ISP want to publish a warrant canary?
“Sunlight is said to be the best of disinfectants.” – Justice Louis D. Brandeis.
We are in a time of unprecedented public debate over the government’s powers to secretly obtain information about people. The revelations about the massive NSA bulk surveillance program have raised serious questions about whether these powers are necessary, legal and constitutional. Secret surveillance violates not only the privacy interests of the account holder, but the speech interests of ISPs who wish to participate in these public debates.
Why should we care about publicizing secret legal process like national security letters?
As part of the reauthorization of the Patriot Act in 2006, Congress directed the DOJ Inspector General to investigate and report on the FBI’s use of NSLs. In three reports issued in 2007, 2008, and 2010, the IG documented the agency’s systematic and extensive misuse of NSLs.
The reports showed that between 2003 and 2006, the FBI’s intelligence violations included improperly authorized NSLs, factual misstatements in the NSLs, improper requests under NSL statutes, and unauthorized information collection through NSLs. The FBI’s improper practices included requests for information based on First Amendment protected activity.
In December 2013, the President’s Review Group on Intelligence and Communications Technologies recommended public reporting—both by the government and NSL recipients—of the number of requests made, the type of information produced, and the number of individuals whose records have been requested.
As discussed below, NSLs are just one type of gagged legal process. Similar problems persist in other forms of secret process.
Is it legal to publish a warrant canary?
There is no law that prohibits a service provider from reporting all the legal processes that it has not received. The gag order only attaches after the ISP has been served with the gagged legal process. Nor is publishing a warrant canary an obstruction of justice, since the intent is not to harm the judicial process, but rather to engage in a public conversation about the extent of government investigatory powers.
What are some of the gagged legal processes that an ISP might receive?
An ISP may be gagged from stating it has received any one of several types of national security letters, orders from the Foreign Intelligence Surveillance Court (like the Section 215 orders used for the bulk call records program and the Section 702 orders used for the NSA’s PRISM program), or even an ordinary subpoena when accompanied by a gag order pursuant to the Electronic Communication Privacy Act. The government has issued hundreds of thousands of these gagged legal requests, but very few have ever seen the light of day.
What does the government say is permissible for recipients of gagged legal process?
The government allows ISPs to report receipt of gagged legal process in ranges of 1000, starting at 0, for six-month periods. So if an ISP received 654 NSLs, it could report 0-999. If the companies choose to report FISC requests and NSL requests combined, they can use ranges of 250, again starting at 0. For example, Apple reported receiving 0-249 national security requests in the first half of 2013 and AT&T reported 0-999 content FISC orders, 0-999 non-content FISC orders and 2000-2999 NSLs for the same period.
While the government-approved ranges all start at zero, publication of a range indicates that the ISP has received at least one, as otherwise the ISP would have no obligation to follow the government’s formula.
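The band arithmetic described above is simple to state precisely. The sketch below is ours (the function name is hypothetical), using the 1000-wide NSL bands and 250-wide combined bands from the text:

```python
def reporting_band(count: int, width: int) -> str:
    """Map an exact request count to the government-approved reporting
    band: bands start at 0 and are `width` requests wide."""
    low = (count // width) * width
    return f"{low}-{low + width - 1}"

# An ISP that received 654 NSLs reports "0-999" (NSL bands are 1000 wide).
print(reporting_band(654, 1000))   # 0-999
# Combined FISC + NSL reporting uses bands of 250.
print(reporting_band(130, 250))    # 0-249
# AT&T's 2000-2999 NSL figure implies an exact count somewhere in that band.
print(reporting_band(2500, 1000))  # 2000-2999
```

Because every band starts at a multiple of the width, the published range reveals only the rough order of magnitude, never the exact count.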
In contrast to the government-approved ranges, warrant canaries can be much more specific, making it easier to determine what sort of legal process an ISP has been served with.
What’s the legal theory behind warrant canaries?
The First Amendment protects against compelled speech. For example, the Supreme Court held that New Hampshire could not require its citizens to display the motto “Live Free or Die” on their license plates. While the government may be able to compel silence through a gag order, it may not be able to compel an ISP to lie by falsely stating that it has not received legal process when in fact it has.
Have courts upheld compelled speech?
Rarely. In a few instances, the courts have upheld compelled speech in the commercial context, where the government shows that the compelled statements convey important truthful information to consumers. For example, warnings on cigarette packs are a form of compelled commercial speech that have sometimes been upheld, and sometimes struck down, depending on whether the government shows there is a rational basis for the warning.
Have courts upheld compelled false speech?
No, and the cases on compelled speech have tended to rely on truth as a minimum requirement. For example, Planned Parenthood challenged a requirement that physicians tell patients seeking abortions of an increased risk of suicidal ideation. The court found that Planned Parenthood did not meet its burden of showing that the disclosure was untruthful, misleading, or not relevant to the patient’s decision to have an abortion.
Are there any cases upholding warrant canaries?
Not yet. EFF believes that warrant canaries are legal, and the government should not be able to compel a lie. To borrow a phrase from Winston Churchill, no one can guarantee success in litigation, but only deserve it.
What should an ISP do if the warrant canary is triggered?
If an ISP with a warrant canary receives gagged legal process, it should obtain legal counsel and go to a court for a determination that it cannot be required to publish false information. While some ISPs may be tempted to engage in civil disobedience, EFF believes that it is better to present the issue to a court, to help establish a precedent. If you run an ISP with a warrant canary and receive gagged legal process, contact email@example.com if you would like help finding counsel.
How often should an ISP publish the warrant canary?
Various ISPs have published canaries on a wide range of schedules. To allow time to file a case and for the court to rule on the important legal questions, we suggest leaving at least a few months between the transparency report and the time period covered.
Who has issued warrant canaries?
A number of service providers have issued warrant canaries, including:
- Apple (“Apple has never received an order under Section 215 of the USA Patriot Act.”)
- Espionageapp.com (“We have not placed any backdoors into our software and have not received any requests for doing so. Pay close attention to any modifications to the previous sentence, and verify the signature of this "watch zone" by viewing the page source. Our public GPG key can be found using this ID: A884B988”)
- Lookout (“Furthermore, as of the date of this report, Lookout has not received a national security order and we have not been required by a FISA court to keep any secrets that are not in this transparency report.”)
- MagusNet (picture of a warrant canary with the statement, “No Warrants. No Searches, No Seizures [sic] at Magus Net, LLC.”)
- Pinterest. (“National security: 0”)
- Rise Up (“We would like to clearly state that Riseup has never given any user information to any third party.”)
- Rsync.net (“No warrants have ever been served to rsync.net, or rsync.net principals or employees. No searches or seizures of any kind have ever been performed on rsync.net assets . . . .”)
- Tumblr (“As of the date of publication of this report, we have never received a National Security Letter, FISA order, or any other classified request for user information.”)
- Vilain (“THE FBI HAS NOT BEEN HERE (watch very closely for the removal of this sign).”)
- Wickr (“As of the date of this report, Wickr has not been required by a FISA request to keep any secrets that are not in this transparency report as part of a national security order.”)
EFF filed a request today to submit an amicus brief in the U.S. District Court for the Northern District of California, urging the court to let a case entitled Doe v. Cisco Systems go forward against Cisco for its role in contributing to human rights abuses against the Falun Gong religious minority in China. China's record of human rights abuses against the Falun Gong is notorious, including detention, torture, forced conversions, and even deaths. These violations have been well documented by the U.N., the U.S. State Department, and many others around the world, including documentation of China's use of sophisticated surveillance technologies to facilitate this repression.
The central claim in the case is that Cisco purposefully customized its general purpose router technology to allow the Chinese government to identify, track, and detain Falun Gong members. Specifically, the case alleges that Cisco customized technology for anti-Falun Gong purposes including:
- A library of carefully analyzed patterns of Falun Gong Internet activity (or “signatures”) that enable the Chinese government to uniquely identify Falun Gong Internet users;
- Several log/alert systems that provide the Chinese government with real time monitoring and notification based on Falun Gong Internet traffic patterns;
- Applications for storing data profiles on individual Falun Gong practitioners for use during interrogation and “forced conversion” (i.e., torture);
- Applications for storing and sharing videos of “efficient forced conversions” for purposes of training security officers on successful methods;
- Applications for categorizing individual Falun Gong practitioners by their likely susceptibility to different methods of “forced conversion”;
- Highly advanced video and image analyzers that Cisco marketed as the “only product capable of recognizing over 90% of Falun Gong pictorial information;” and
- A nationwide video surveillance system which enabled the Chinese government to identify and detain Falun Gong practitioners.
The suit also alleges that Cisco not only knew that its customizations would be used to repress the Falun Gong, but actively marketed, sold, and supported the technologies toward that purpose. In fact, the case arises in part from the publication several years ago of a presentation in which Cisco confirms that the Golden Shield is helpful to the Chinese government to “Combat Falun Gong Evil Religion and Other Hostilities.” It also alleges that these customizations were actually used to identify and detain the plaintiffs.
People around the world are increasingly concerned about the sale by Western companies of surveillance and other technologies used for repression. Over the past few years, EFF has tracked a pattern around the world (here, here and here) and has suggested "Know Your Customer" standards for technology companies who are selling technologies that can be used in human rights abuses to potentially repressive governments. Many have suggested increased export controls to combat the problem, but the Doe v. Cisco and EFF's Kidane v. Ethiopia cases show that there are other ways to address the very real problem of companies selling the tools of repression as well as the repression that results.
In its brief, EFF suggests a careful liability analysis, expressly noting in this case, and in another case against Cisco from last year, Du Daobin v. Cisco,1 that a tech company could not (and should not) be held accountable when governments misuse general-purpose products for nefarious purposes. Yet the allegations here are that Cisco has done far more than sell standard router technology and services to the Chinese authorities; they are that Cisco has specifically and intentionally customized its technologies and services in order to facilitate well-documented human rights violations against a religious minority. That should be sufficient to allow the case to proceed.
EFF legal intern Hilary Richardson greatly assisted in the writing of EFF's amicus brief. Thanks Hilary!
- 1. The Du Daobin case was dismissed earlier this year and EFF noted the problems with that decision and urged the California court not to follow suit.
Yesterday afternoon, Ars Technica published a story reporting two possible logs of Heartbleed attacks occurring in the wild, months before Monday's public disclosure of the vulnerability. It would be very bad news if these stories were true, indicating that blackhats and/or intelligence agencies may have had a long period when they knew about the attack and could use it at their leisure.
In response to the story, EFF called for further evidence of Heartbleed attacks in the wild prior to Monday. The first thing we learned was that the SeaCat report was a possible false positive; the pattern in their logs looks like it could be caused by ErrataSec's masscan software, and indeed one of the source IPs was ErrataSec.
The second log seems much more troubling. We have spoken to Ars Technica's second source, Terrence Koeman, who reports finding, in ingress packet logs from November 2013, inbound packets that immediately followed the setup and termination of a normal handshake and contained another Client Hello message followed by the TCP payload bytes 18 03 02 00 03 01 40 00. These bytes are a TLS Heartbeat with contradictory length fields, and are the same as those in the widely circulated proof-of-concept exploit.
Koeman's logs had been stored on magnetic tape in a vault. The two source IP addresses for the attack were in the 193.104.110.* range. Interestingly, those two addresses appear to be part of a larger botnet that has been systematically attempting to record most or all of the conversations on Freenode and a number of other IRC networks. This is an activity that makes a little more sense for intelligence agencies than for commercial or lifestyle malware developers.
To reach a firmer conclusion about Heartbleed's history, it would be best for the networking community to try to replicate Koeman's findings. Any network operators who have extensive packet logs can check for malicious heartbeats, which most commonly have a TCP payload of 18 03 02 00 03 01 or 18 03 01 00 03 01 (or perhaps even 18 03 03 00 03 01). We urge any network operators who find this pattern to contact us.
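For operators combing through captures, the search described above can be sketched in a few lines of Python. This is illustrative only: it assumes you have already extracted raw TCP payloads from your packet logs (for example with a pcap library), and it matches the exact byte patterns listed in this post.

```python
# Malicious TLS heartbeat signatures: record type 0x18 (heartbeat),
# TLS version 1.0/1.1/1.2, record length 3, heartbeat type 0x01
# (request) -- the short record whose inner length fields contradict
# it, as in the widely circulated proof-of-concept.
MALICIOUS_HEARTBEATS = [
    bytes.fromhex("180301000301"),  # TLS 1.0
    bytes.fromhex("180302000301"),  # TLS 1.1 (the pattern in Koeman's logs)
    bytes.fromhex("180303000301"),  # TLS 1.2
]

def find_heartbeat_offsets(payload):
    """Yield each offset in a TCP payload where one of the
    suspicious heartbeat signatures begins."""
    for sig in MALICIOUS_HEARTBEATS:
        start = 0
        while (i := payload.find(sig, start)) != -1:
            yield i
            start = i + 1

# The exact bytes reported to Ars Technica:
sample = bytes.fromhex("1803020003014000")
print(list(find_heartbeat_offsets(sample)))  # [0]
```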
Network operators might also keep an eye out for other interesting log entries from 193.104.110.* and the other IPs in the related botnet. Who knows what they might find?
A lot of the narratives around Heartbleed have viewed this bug through a worst-case lens, supposing that it might have been used for some time, and that there might be tricks to obtain private keys somewhat reliably with it. At least the first half of that scenario is starting to look likely.
The Heartbleed SSL vulnerability presents significant concerns for users and major challenges for site operators. This article presents a series of steps server and site owners should carry out as soon as possible to help protect the public. We acknowledge that some steps might not be feasible, important, or even relevant for every site, so the steps are listed both in order of importance and in the order they should be carried out.

1. Update Your Servers
If you haven't yet, update any and all of your systems that use OpenSSL for TLS encrypted communications. This includes most web servers, load balancers, cache servers, mail servers, messaging and chat servers, VPN servers, and file servers, especially those running on Linux, Unix, BSD, Mac OS X, or Cygwin.
The vulnerable OpenSSL version numbers are 1.0.1 through 1.0.1f and 1.0.2-beta1. The flaw is fixed in OpenSSL 1.0.1g. However, some operating systems have introduced the fix to earlier branches of OpenSSL, and may instruct you to install packages with minimum versions such as 1.0.1e-2+deb7u5 (in the case of Debian GNU/Linux).
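As a rough triage aid, a reported version string can be checked against those ranges. This Python sketch classifies upstream version numbers only; as noted above, distributions backport the fix into older version strings, so a match here means "consult your vendor's advisory," not "definitely vulnerable":

```python
import re

def upstream_openssl_vulnerable(version):
    """Check an *upstream* OpenSSL version string against the
    Heartbleed-vulnerable ranges: 1.0.1 through 1.0.1f, plus
    1.0.2-beta1.  Distribution packages with backported fixes
    (e.g. Debian's 1.0.1e-2+deb7u5) still match, so a True result
    means "check your vendor's advisory", not proof of exposure.
    """
    if version.startswith("1.0.2-beta1"):
        return True
    m = re.match(r"1\.0\.1([a-z]?)", version)
    if m:
        # Plain "1.0.1" (empty letter suffix) through "1.0.1f" are vulnerable.
        return m.group(1) <= "f"
    return False

print(upstream_openssl_vulnerable("1.0.1f"))  # True
print(upstream_openssl_vulnerable("1.0.1g"))  # False
```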
If your operating system has not yet released an updated package, download openssl-1.0.1g.tar.gz directly from https://www.openssl.org/source/ and follow the instructions in the INSTALL text file to compile the new version locally.
After installing a fixed version of OpenSSL, be sure to restart all services that depend on it. On your system this might include web and proxy servers such as apache, nginx, pound, and squid, caches such as memcached and redis, databases like mysql and postgres, and mail services like postfix, exim, and dovecot. When in doubt, reboot the entire server if possible.
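If you are unsure which services still need restarting, one well-known Linux technique is to look for processes that still have a deleted copy of the OpenSSL libraries mapped in memory. A hedged Python sketch of that technique (it reads `/proc/*/maps`, so it is Linux-only and needs root to see every process):

```python
import glob

def processes_using_deleted_openssl():
    """Scan /proc/*/maps (Linux) for processes that still map a
    deleted libssl/libcrypto -- i.e. services that were not
    restarted after the OpenSSL package was upgraded."""
    stale = {}
    for maps_path in glob.glob("/proc/[0-9]*/maps"):
        pid = maps_path.split("/")[2]
        try:
            with open(maps_path) as f:
                for line in f:
                    if "(deleted)" in line and ("libssl" in line or "libcrypto" in line):
                        stale[pid] = line.split()[-2]  # the library path
                        break
        except (PermissionError, FileNotFoundError):
            continue  # process exited, or we lack permission to inspect it
    return stale

for pid, lib in processes_using_deleted_openssl().items():
    print(f"PID {pid} still maps deleted {lib} -- restart it")
```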
If you manage systems with custom operating systems like switches and routers, you may need to ask your vendor for a patch directly.
If you haven't updated your systems yet, stop reading and do it now. If this is the only step you can carry out in your environment, you will still have done the most important thing by far.

2. Test Your Servers
It's important to verify that the hole has been closed, especially if you have multiple servers and services to stitch up. The bad news is that this vulnerability is relatively easy to exploit. The good news is that this means there are a few tools available to see if you're safe.
The SSL Server Test from Qualys SSL Labs will let you know if your web server remains vulnerable. If you have servers running on other ports to test, or STARTTLS mail servers, you can try the hb-test.py script. The hbcheck script can help you test an internal network using nmap. Finally, if you have a large number of hostnames to test, my hb-batch.py script might be helpful.
Please note these tests might not be completely reliable, and running them against servers you do not own might not always be considered polite.

3. Be Safer Next Time
This is the worst and biggest security flaw we've seen recently, but it won't be the last. Putting good practices into play for Heartbleed can help you prepare for anything else that might come down the pike next.
One of the strongest protections you can have against TLS vulnerabilities is Perfect Forward Secrecy. This is not simple to configure, and does not yet have global browser support. However, it is the encryption technology that provides the best defense against attacks with the potential to steal your private key and use it to decrypt your traffic.
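You can see what your own server negotiates today using Python's standard `ssl` module. A small sketch (the host name is whatever server you want to test; ephemeral ECDHE/DHE suites are the forward-secret ones):

```python
import socket
import ssl

def negotiated_cipher(host, port=443):
    """Connect to a TLS server and return the name of the cipher
    suite actually negotiated for this client."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, protocol, secret_bits = tls.cipher()
            return name

def offers_forward_secrecy(cipher_name):
    # Ephemeral (EC)DHE key exchange provides forward secrecy; in
    # TLS 1.3 (which post-dates this article) every suite does, and
    # those suite names start with "TLS_".
    return "ECDHE" in cipher_name or "DHE" in cipher_name or cipher_name.startswith("TLS_")

# Example (requires network access):
# print(negotiated_cipher("www.eff.org"))
```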
You should also make sure you're practicing good password discipline. Use a password vault, use strong passwords, change them regularly, and don't reuse them.
Practice least authority for certificates, too. If you don't need to give everyone root access to every server, you probably don't need to give every server a certificate for *.example.com.
Finally, make sure you have a reliable (if not automated) process for providing all of your servers with security updates quickly. After all, the only thing worse than getting pwned by a zero-day vulnerability is getting pwned by a one-day.

4. Consider Rekeying Your Servers
One of the worst things about the Heartbleed vulnerability is that it makes it theoretically possible for an attacker to recover your server's private key. Fortunately, the probability of this being possible on a given server appears quite low. Unfortunately, we can't yet be completely sure if that's true.
Key theft is a terrible attack because it tends to be undetectable by you, the server operator. Worse still is the harsh truth that, unless all your connections are served with Perfect Forward Secrecy, this would allow such an attacker not only to decrypt any newly intercepted traffic but to decode records of past traffic. If you run a server that intelligence agencies are likely to attack, this is a serious problem.
That means you may wish to consider revoking and regenerating your existing SSL certificates using new keys. Doing so will protect against the possibility of passive traffic decryption (if you don't use PFS) and man-in-the-middle attacks with a stolen key.
Because private key compromise via Heartbleed currently appears to be quite rare, this may not need to be a priority except for high-value services (large or sensitive email and messaging systems, software distribution points, banks). Other services may not need to panic and rush to rekey quite so urgently. For most threat scenarios, adopting PFS provides greater overall protection than rekeying, so we will remind you once more to make PFS a priority.
The details of the rekeying process will vary depending on the Certificate Authority you use to generate certificates and/or manage domain names. Some will allow you to regenerate in one step. Some will require you to revoke the old certificate before requesting a new one.1 Most will have a prominent link in their control panels, and many will waive their normal fees right now.
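If your CA accepts an externally generated certificate signing request, you can create the new private key and CSR locally so the key never leaves your server. A minimal sketch wrapping the standard `openssl req` invocation (the subject string and file names here are placeholders; adjust to your CA's requirements):

```python
import subprocess

def generate_key_and_csr(domain, key_path="server.key", csr_path="server.csr"):
    """Generate a fresh 2048-bit RSA private key and a certificate
    signing request locally, so the private key never leaves your
    server.  The -nodes flag writes the key without a passphrase;
    the subject string is a minimal illustrative example.
    """
    subprocess.run(
        ["openssl", "req", "-new", "-newkey", "rsa:2048", "-nodes",
         "-keyout", key_path, "-out", csr_path,
         "-subj", f"/CN={domain}"],
        check=True,
    )

# generate_key_and_csr("www.example.com")
```

Keep the resulting .key file readable only by root, and submit only the .csr to your CA.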
If you are given the option during the certificate regeneration process, it's a good idea to create a .csr file (Certificate Signing Request) and private key locally on your server using the openssl command. It might seem strange to prefer trusting OpenSSL at the moment, but it's still a safer bet than trusting a third party with your private key right off the bat.

5. Consider Changing Passwords
Unlike private key compromises, Heartbleed leakage of recently-used passwords from server processes linked to OpenSSL appears to have been quite common. Unfortunately, this could affect not just your operators and staff but your users.
This means you should perform a risk assessment and determine which categories of passwords on your servers and services may need immediate resets, user-reset-on-next-login, or advisories suggesting resets. Variables in the risk assessment include how quickly you were able to patch your servers after the vulnerability was publicly disclosed at around 17:30 UTC on 2014-04-07,2 the sensitivity and value of potentially accessible accounts, whether accounts had been used recently (meaning their passwords were in RAM), and the probability that random or specific people on the Internet might have found your servers to be interesting targets.
You should determine which passwords are of sufficient value to deserve precautionary resets, and perform these after the steps above, in order to offer the new passwords proper protection. (If you've decided to rekey because of a concern about private key exposure, that is another reason to change users' passwords.)
You should also consider changing CSRF and OAuth authentication tokens, invalidating session cookies, and rotating authentication cookies. These steps can be performed independently of password changes and may be far less disruptive.

6. Update Your Users
Your users have already heard of this scary Internet password thing, and chances are they're concerned about how it affects your site. Let them know what you've done, what you will do, and what the remaining risks are. Don't try to give them a false sense of security. Knowing that you're working on it and reaching out to them at all will work wonders.

7. Turn on Perfect Forward Secrecy
Because you skipped it in step three, didn't you? That's okay. There's still time. We'll wait.
- 1. The infrastructure for revocation of certificates is quite broken, but that's a story for another post.
- 2. According to a recent article in Bloomberg, it appears that the NSA knew about and used the Heartbleed vulnerability for at least two years before its public discovery. It is not currently known whether any other attackers had knowledge of the bug before April 7, 2014.
Recent events have shown us more than ever that the technologies we use and create every day have astonishing implications for our basic, most cherished rights. Tens of thousands more people have joined us in the past year alone—together, we're building a movement. But we need your help.
Today, we at EFF are unveiling new tools for student and community activists to engage in campaigns to defend our digital rights.
We want you to bring the fight to protect online civil liberties to cities, towns, and campuses across the country. We invite you—whether you're a newly minted activist or an experienced community organizer—to join our growing team of driven individuals and organizations actively working to make sure that our rights are not left behind as we develop and adopt new technologies.
There are plenty of ways to take part, no matter how much organizing experience you have.
- Start a group: Talk to friends and community members to gauge who else in your network is interested in digital freedom. Form a group that can discuss the issues and plan ways of advocating for your rights. For some tips on getting started, check out our guide on how to build a coalition on campus and in your community.
- Bring digital rights to an existing group: These issues are everybody's issues, no matter where on the political spectrum you lie. You can work with existing political, civil liberties, activist, and computer-related groups and urge members to take on a digital rights campaign.
- Organize an event: We have plenty of suggestions for events you can throw, from film screenings to rallies, parties to speaker series.
- Let your voice be heard: We are all part of the digital rights movement together, and your voice is as important as ours. Learn how to coordinate with local and national campaigns, and amplify your message by reading our tips on engaging with the press.
While many student groups and local community organizations are working on surveillance reform in light of the recent disclosures about massive government spying, it’s not only the NSA that we’re fighting: we’re demanding open access to publicly funded research; we’re fighting to protect the future of innovation from patent trolls; we’re urging companies and institutions to deploy encryption; we're defending the rights of coders and protecting the free speech rights of bloggers worldwide—the list goes on.
We can’t do this by ourselves. That’s why we’re building a trusted team of activists and organizers across the country to spread the word and build momentum for political reform and technical tools to protect our rights.

Road trip!
EFF is also hitting the road. We're traveling to cities and towns across the country to speak to student groups, meet with community organizers, and host local events to share and broaden our vision of an Internet grounded in creativity, community, and civil rights. In March and April, we’re visiting Boston, Cambridge, New York City, Ames, Des Moines, Washington, D.C., New Haven, and Middletown.
If you’re interested in having someone from EFF come to your event, class, or campus or community group to speak and help you all organize, send an email to firstname.lastname@example.org and join our community organizers mailing list. Let us know what you’re up to, and we’ll let you know when we’re in your area.

Campus activism: All the cool people are doing it
Many activists, lawyers, and technologists will tell you that they got their start as students. That's why we're especially excited to work with students and professors.
You don’t have to be a lawyer or have a college degree to be a strong voice. There’s no prerequisite for setting up a meeting with your elected official, writing an op-ed, or growing a campus organization. All it takes is a vision for change. We’ve seen student activists and innovators drive reform by challenging poorly written policies and developing new technologies that bring us closer to our vision of a networked world that respects our rights and fosters creativity.
Not a student? No worries! If you’re a member of a community that wants to engage more deeply in EFF’s work, you can still join our organizers mailing list. There’s so much to do at the community level, too. If you’re concerned about local law enforcement surveillance hubs, the use of license plate readers, or domestic drones, or if you’re part of a community of artists stifled by oppressive copyright policies, now is the time to raise awareness, build a coalition, and organize to defend our digital rights.
This is only the beginning. When we finally see meaningful reform of our broken intellectual property system and new bills passed that bring our national security programs back within the bounds of the Constitution—and we will—it won’t be due to the effort of a few policy wonks and privacy enthusiasts or a handful of lawyers in Washington, D.C. It will be because millions of people across the world fought for change, demanded meaningful reform, started using privacy enhancing technology, and held their elected officials accountable. Together, we’re going to make history.
We hope to see you digital rights activists out there. Stay tuned. This is going to be huge.
San Francisco - The digitization of medical records is being pitched to the public as a way to revolutionize healthcare. But rapid technological innovation and lagging privacy laws are leaving patients – and their most sensitive information – vulnerable to exposure and abuse, especially in this age of "big data." The Electronic Frontier Foundation (EFF) is launching a new medical privacy project today to identify the emerging issues and to give advocates the information they need to fight for stronger protections for patients.
"You assume that the decision about when to disclose medical data – like if you've had an abortion or have a serious heart condition – is yours and yours alone. But that information may be circulated in the process of paying for and providing treatment, or as part of mandated reporting," said EFF Senior Staff Attorney Lee Tien. "As the American medical establishment moves towards complete digitization of patient records, it's important to take a hard look on what that means for everyone's privacy, and what we should do about it."
EFF's project explores the unsettled areas of medical privacy law and technology, including a primer on how law enforcement might get access to your health information, or how the government might be able to collect it by claiming that it's necessary for national security. There's also a detailed discussion of public health reporting systems and how federal health laws give patients some rights but take others away. EFF will add more topics in the months to come.
"Genetic testing provides a striking example of some of the challenges we face with protecting medical data. Genetic data is uniquely identifiable and can be easily obtained from cells we shed every day," says EFF Activism Director Rainey Reitman. "But we have weak laws protecting this highly sensitive data."
EFF's work on the medical privacy project is supported by a grant from the Consumer Privacy Rights Fund of the Rose Foundation for Communities and the Environment.
Since the SHIELD Act was introduced two years ago, momentum has been building for patent reform in Congress. And when the House overwhelmingly passed the Innovation Act in December, it seemed real legislation might be close at hand. Since then, the Senate has been thrashing out its version of a patent bill. We need to keep up the pressure to make sure that any final deal includes meaningful reforms that will slow the flood of patent troll litigation. With the Senate about to break for recess, the next few days could be crucial.
In other legislative news, the Energy and Commerce Committee in the House held a hearing today regarding abusive patent demand letters. Mark Chandler of Cisco talked about the massive scam perpetrated by troll Innovatio IP Ventures, which sent thousands of misleading demand letters to cafes, hotels, and other end-users of Wi-Fi technology. And Professor Jason Schultz of NYU Law School (and EFF special counsel) explained that demand letters should include basic information such as the specific patent numbers and claims asserted, who owns those patents, and the products or services that allegedly infringe.
With patent trolls blanketing the nation with deceptive and misleading letters—many of which we've documented through our Trolling Effects project—Congress should make sure to include demand letter reform in its legislation. Again, please visit fixpatents.org and demand meaningful patent reform.
EFF has long advocated for websites to support HTTPS instead of plain HTTP to encrypt and authenticate data transmitted on the Internet. However, we learned yesterday of a catastrophic bug, nicknamed "Heartbleed," that has critically threatened the security of some HTTPS sites since 2011. By some estimates, Heartbleed affects 2 out of 3 web servers on the Internet.1
Heartbleed isn't a bug in the design of HTTPS itself but rather the result of a simple programming error in a widely-used piece of software called OpenSSL. It allows an attacker who connects to an HTTPS server running a vulnerable version of OpenSSL to access up to 64KB of private memory space. Doing the attack once can easily cause the server to leak cookies, emails, and passwords. Doing the attack repeatedly in a clever way can potentially leak entire encryption keys, such as the private SSL keys used to protect HTTPS traffic. If an attacker has access to a website's private SSL key, they can run a fake version of the website and/or steal any information that users send, including passwords, private messages, and credit card numbers. Neither users nor website owners can detect this attack as it happens.
It's worth emphasizing that some important services that users access every day were affected by Heartbleed, including Yahoo Mail and LastPass. We weren't immune either, since most EFF servers were running vulnerable versions of OpenSSL. Even the private identity keys used by Tor Hidden Services may have been compromised, potentially putting some journalist organizations' communication with anonymous sources at risk.
Luckily, there's one important mitigation that could actually protect some users from the worst-case scenario: perfect forward secrecy. If a server was configured to support forward secrecy, then a compromise of its private key can't be used to decrypt past communications. In other words, if someone leaks or steals a copy of EFF's private SSL key today, any traffic sent to EFF's website since EFF started supporting forward secrecy is still safe.
Unfortunately, most HTTPS websites still don't support forward secrecy, which means that a large chunk of your past communications with those servers is vulnerable to decryption when private SSL keys are compromised. For example, if someone has been intercepting your HTTPS-encrypted messages to Yahoo for the past several years and then stole a copy of Yahoo's private key yesterday with Heartbleed, they would be able to use it to go back and decrypt the previously-unintelligible recording of your old communications today — if those communications weren't made using a forward-secrecy-enabled connection.
At this moment, forward secrecy is more crucial than ever. Now that the details of Heartbleed are public, anyone can use it against servers that haven't yet patched the OpenSSL bug and changed SSL certificates.2 It can easily take weeks or months for developers to deploy new SSL certificates, and even so, certificate revocation systems are unreliable and poorly-suited to the modern web. In the meantime, any data you send now to affected servers that don't support forward secrecy will be open to eavesdropping and malicious tampering as soon as their SSL private keys are exposed.
In the aftermath of yesterday's events, it's clear that forward secrecy is necessary to protect against unforeseeable threats to SSL private keys. Whether that threat is an existing or future software bug, an insider who steals the key, a secret government demand to enable surveillance, or a new cryptographic breakthrough, the beauty of forward secrecy is that the privacy of today's sessions doesn't depend on keeping information secret tomorrow.3
Although we've patched this bug on EFF's servers and are scrambling to rotate our keys as fast as possible, we're relieved that our potential damage from Heartbleed is lower because we enabled forward secrecy last summer. It's clearly time for other websites to do so as well.
PS: Fortunately, the integrity of HTTPS Everywhere downloads for Firefox and Chrome is not compromised by Heartbleed. That's because, in addition to serving downloads over SSL/TLS, we sign HTTPS Everywhere updates with an offline key to guarantee authenticity even if transport-level security is broken. You can use these instructions to check that your copy of HTTPS Everywhere has the correct update key. In light of Heartbleed, we're glad that the Chrome web store allows extension developers to include their own code signing keys in case Google's SSL certificates are compromised; until the Mozilla Addons Store offers the same protection, we plan to keep hosting HTTPS Everywhere for Firefox on our own servers.
- 1. Given the severity of this bug we urge site operators using versions 1.0.1-1.0.1f of OpenSSL to immediately upgrade to OpenSSL 1.0.1g or recompile OpenSSL with the -DOPENSSL_NO_HEARTBEATS flag; furthermore, any affected server should get a new SSL certificate and rotate any keys that could have been leaked from memory after updating. Please also remember to restart your load balancers after updating.
- 2. Qualys SSL Labs has an excellent online tool to check whether a website is currently vulnerable to Heartbleed attacks. The "Protocol Details" section of the results also shows support for forward secrecy.
- 3. This is assuming that an attacker is not performing active man-in-the-middle attacks during every SSL handshake. However, this threat will likely be mitigated in the future by efforts such as Certificate Transparency.
Today the European Court of Justice declared the EU's Data Retention Directive invalid, holding that the mass collection of Internet data in Europe entailed a "wide-ranging and particularly serious interference with the fundamental rights to respect for private life and to the protection of personal data." The Directive ordered European states to pass laws obliging Internet intermediaries to log records of their users' activity, keep them for up to two years, and provide access to the police and security services. The ECJ joins the United Nations' Human Rights Committee, which last month called upon the United States to refrain from imposing mandatory retention of data by third parties.
The decision is a victory for the human rights activists who have fought hard to have the original Europe-wide law—rushed through the European Parliament in 2005—reconsidered. Digital Rights Ireland, which first launched a lawsuit against the Irish Government over its implementation of the Directive, and AK Vorrat Austria, which organized to reject data retention in Austria, both pursued the issue for many years in the face of concerted opposition from their own governments and officials.
While the decision comprehensively rejects the current directive, some states may put up a fight to keep their laws, while others could take this opportunity to become champions of their citizens' privacy. The Finnish Minister of Communications, Krista Kiuru, has already declared a full review of Finnish law in the light of the decision, saying that "if [Finland] wants to be a model country in privacy issues, Finnish legislation has to respect fundamental rights and the rule of law." The German and Romanian data retention laws have already been declared unlawful by their national constitutional courts. Governments advocating retention, like the UK, may argue that they can still maintain their existing data retention laws, or there may even be an attempt to introduce a whole new data retention directive that would attempt to comply with the ECJ's decision.
However the data retention regime unwinds in Europe, this decision sends an important signal to other countries that are considering the same path as the EU. Brazil's online activists have been fighting hard to keep data retention out of their flagship Internet Bill of Rights, the Marco Civil. The law, which is about to be considered by the Brazilian Senate, would require ISPs to retain personal data for one year, and other service providers to keep logs of private information on their users for six months. New laws requiring mandatory data retention by companies in the United States have also been championed by the Obama administration's Department of Justice, and have been proposed by the White House as a "solution" to the NSA spying scandal. As the ECJ's decision shows, the indiscriminate recording and storage of every aspect of innocent civilians' online lives is a travesty of human rights, no matter where that collected data is housed.