Aggregated News

Loopholes and Flaws in the Student Privacy Pledge - Fri, 21/10/2016 - 03:17

With a new school year underway, concerns about student privacy are at the forefront of parents’ and students’ minds. The Student Privacy Pledge, which recently topped 300 signatories and reached its two-year launch anniversary, is at the center of discussions about how to make sure tech and education companies protect students and their information. A voluntary effort led by the Future of Privacy Forum and the Software & Information Industry Association (SIIA), the Pledge holds the edtech companies that sign it to a set of commitments intended to protect student privacy.

But the Student Privacy Pledge as it stands is flawed. While we praise the Pledge’s effort to create industry norms around privacy, its loopholes prevent it from actually protecting student data.

All in the fine print

The real problems with the Student Privacy Pledge are not in its 12 large, bold commitment statements—which we generally like—but in the fine-print definitions under them.

First, the Pledge’s definition of “student personal information” is enough to call into question the integrity of the entire Pledge. By limiting the definition to data that is “both collected and maintained on an individual level” and “linked to personally identifiable information,” the Pledge seems to permit signatories to collect sensitive and potentially identifying data such as search history, so long as it is not tied to a student’s name. The key problem here is that the term “personally identifiable information” is not defined and is surely meant to be narrowly interpreted, allowing companies to collect and use a significant amount of data outside the strictures of the Pledge. This pool of data potentially available to edtech providers is more revealing than traditional academic records, and can paint a picture of students’ activities and habits that was not available before.

By contrast, the federal definition, found in FERPA and the accompanying regulations, is broad and includes both “direct” and “indirect” identifiers, and any behavioral “metadata” tied to those identifiers. The federal definition also includes “Other information that, alone or in combination, is linked or linkable to a specific student that would allow a reasonable person in the school community, who does not have personal knowledge of the relevant circumstances, to identify the student with reasonable certainty.”

Second, the Pledge’s definition of “school service provider” is limited to providers of applications, online services, or websites that are “designed or marketed” for educational purposes.

A provider of a product that is marketed for and deployed in classrooms, but wasn’t necessarily “designed or marketed” for educational purposes, is outside the Pledge. This excludes providers while they’re providing “general audience” apps, online services and websites. We alleged in our FTC complaint against Google that the Pledge does apply to data collection on “general audience” websites—for example, when that data collection is only possible by virtue of a student using log-in credentials that were generated for educational purposes. However, SIIA, a principal developer of the Pledge, argued to the contrary and said that the Pledge permits providers to collect data on students on general audience websites even if students are using their school accounts.

The Pledge’s definition also does not include providers of devices like laptops and tablets, who are free to collect and use student data contrary to the Pledge.

Definition changes

Simple changes to the definitions of “student personal information” and “school service provider”—to bring them in line with how we generally understand those plain-English terms—would give the Pledge real bite, especially since the Pledge is intended to be legally enforced by the Federal Trade Commission.

While enforcement only applies to companies who choose to sign on, we think that if the Student Privacy Pledge meant what it said, and if signatories actually committed to the practices outlined under the heading “We Commit To”, it would amount to genuine protection for students. But with the definitions as they stand, the Pledge rings hollow.

Notwithstanding the need to improve the definitions, the Pledge could do some good. Unfortunately, the FTC has yet to take action on our complaint alleging that Google violated the Student Privacy Pledge. We urge the Commission to take this matter seriously so that parents and students can trust that when companies promise to do (or not do) something, they will be held accountable.

As the school year continues, the conversation about education technology and student privacy is more important than ever. Tell us about your experience in your own schools and communities by taking our student privacy survey.


U.N. Joins Critique of Proposed CBP Social Media Questions - Wed, 19/10/2016 - 10:53

Having for years enforced a constitutionally offensive border search regime at physical borders and U.S. international airports, Customs and Border Protection (CBP) recently proposed to expand its violations in troubling new ways by prompting travelers from countries on the State Department’s Visa Waiver Program list to provide their “social media identifier.” Mounting criticism recently prompted the agency to commit to some useful limits, but the proposal remains flawed.

Recently joining the ranks of diverse critics is the U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, who wrote to the U.S. Ambassador at the end of September.

EFF submitted several sets of comments expressing our concerns with the proposal, beginning during the initial comment period. After CBP extended the original comment period until the end of September, the agency received comments from thousands of users opposing its ill-considered and counter-productive policy. It issued a preliminary response to those initial comments, to which we replied in a follow-up analysis noting the proposal’s continuing defects. We also joined coalition comments compiled by the Center for Democracy & Technology, as well as a second set of coalition comments organized by the Brennan Center for Justice in response to a DHS notice required by the Privacy Act.

Violating international law

The international community has also grown outspoken. An important new voice joined the debate at the end of September, when David Kaye, the U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, wrote to remind our government that international law protects everyone’s “right to maintain an opinion without interference and to seek, receive and impart information and ideas of all kinds, regardless of frontiers,” in addition to “the right of individuals to be protected…against arbitrary…interference with their privacy and correspondence.”

Mr. Kaye’s letter also reiterates the Necessary and Proportionate principles developed in 2013 by a global coalition of civil society, privacy, and technology experts (including EFF) and endorsed by over 600 organizations and a quarter million individuals around the world. It goes on to challenge CBP’s proposal for half a dozen reasons, including the vagueness that has concerned EFF. In particular, Mr. Kaye notes that:

It is unstated whether (and under what circumstances) officers may request additional information or access to private accounts. It is also unclear whether officers can request or persuade travelers who have left the data field blank to provide information, or whether they would be questioned about why they left the field blank.

Of course, social media profiles can reveal an immense amount of personal details about an individual. Many social media users share sensitive information online intended for friends and family that they would not share with their (or a foreign) government.

Chilling speech and expression

Our allies at Access Now have noted that “A person’s online identifiers are gateways into an enormous amount of their online expression and associations, which can reflect highly sensitive information about that person’s opinions, beliefs, identity, and community.” As my colleague Sophia Cope wrote in August:

[S]ocial media handles...can easily lead the government to information about [a traveler's] political leanings, religious affiliations, reading habits, purchase histories, dating preferences, and sexual orientations, among other things. Moreover, given the highly networked nature of social media, the government would also learn such personal details about travelers’ family members, friends, professional colleagues, and other innocent associates, many of whom may be U.S. citizens and/or residents with constitutional and statutory rights.

Travelers accustomed to political repression in their own countries may, as the U.N. special rapporteur noted, inhibit their own expression to avoid scrutiny during anticipated future travel. So, too, will Americans: knowing that an international friend's decision to answer CBP’s proposed question could compromise our own opportunity for anonymous speech, as well as associations, many Americans—especially those familiar with our country’s history of suppressing dissent—may rationally decide to limit their online speech to avoid controversial topics that might invite scrutiny.

Such constitutionally offensive chilling effects are established and predictable in the face of documented surveillance and even more likely given the troubling history of U.S. federal authorities excluding visitors for ideological reasons. CBP's proposal to ask visitors to disclose their social media handles undermines the Obama administration’s written commitment to reverse this policy in order to allow Americans to hear diverse views.

Undermining the privacy of Americans

CBP and DHS formulated the proposed policy in the face of longstanding criticism of their domestic programs monitoring social media activity, to which executive officials have recently re-committed their agencies. Our comments joining the Brennan Center, in particular, highlight how CBP's current proposal would further impact the rights of Americans: it would enable CBP to map relationships between visitors and their U.S. contacts, and then to share information gleaned from the social media profiles of those U.S. residents with other agencies potentially poised to monitor them.

Collecting data on the social media profiles of international travelers could also exacerbate longstanding domestic concerns in other ways.

Only two years ago, the Supreme Court held in Riley v. California that cell phones are not subject to search incident to arrest absent a judicial warrant. In other words, even when an arrest is justified by probable cause that a person has committed a criminal offense, police must receive permission from a neutral arbiter, supported by a separate showing of probable cause, before searching the arrestee’s cell phone.

Yet at the border, DHS already violates the spirit of Riley in ways that this proposal could intensify. First, CBP has long claimed the power to search any electronic device crossing a U.S. border—including those belonging to U.S. citizens—for any reason at all, even without the individual suspicion long required to pat down a suspect within the U.S. or the judicial warrant required by Riley.

Government lawyers who argued Riley conceded that the power to seize a phone from someone arrested within the U.S. did not justify accessing data—like social media profiles—stored in the cloud (e.g., by tapping on the Facebook app). If, however, CBP collects social media information at U.S. borders from VWP travelers (and through them, their U.S. contacts), it could enable the government to do what in Riley it conceded it could not: access data about Americans stored in the cloud without first gaining a judicial warrant.

Put another way, learning the social media accounts of travelers would expand the government's reach beyond data already gathered from devices and could allow agencies to circumvent legal limits that protect the privacy of Americans within the U.S.

Limits acknowledging some concerns

Our original comments expressed concern that CBP proposed to characterize as optional a question posed in an inherently coercive setting, and that by posing the question so vaguely it invited travelers to reveal private and sensitive information.

Fortunately, CBP issued a statement repudiating a previous draft of its form on which its proposed question appeared as compulsory. The agency said it will make clear that “Providing this information will be voluntary. If an applicant chooses to not fill out or answer questions regarding social media, the ESTA application and I-94W can still be submitted.”

Its most recent statement also commits that "CBP will not violate any social media privacy settings in processing ESTA applications." This pledge is especially important given the agency's established practice of arbitrarily seizing devices at borders and airports, with which the government could conceivably not only access the known social media profiles of travelers but even potentially commandeer them.

On the one hand, we are proud of having helped compel CBP to accept reasonable limits.

Continuing constitutional defects

On the other hand, CBP's proposal remains flawed and continues to suffer from constitutional defects.

Sophisticated travelers may recognize that “information associated with your online presence” such as a “social media identifier” could be limited to a handle or pseudonym used to identify oneself on a particular social network. Some, however, may go further and provide multiple identifiers, or possibly even their passwords, enabling the government to potentially access private content. CBP should clarify how it will treat information provided by travelers and establish strict parameters to prevent misuse.

Moreover, CBP admits that it will share data collected through its new question with other agencies "that have a need to know the information to carry out their national security, law enforcement, immigration, or other homeland security functions." This fails to address the concerns that we and others—including the U.N. Special Rapporteur—have raised about the proposal's chilling effects on expression.

Not only will travelers potentially silence themselves in their home countries to avoid prompting scrutiny when traveling to the U.S., but CBP's proposal may lead Americans to seek fewer international relationships with contacts through whom our own information could be exposed. It could also lead other countries to reciprocally demand personally identifying information from Americans seeking to enter their countries, driving a race to the bottom.

Perhaps most dangerously, the proposal omits any indication of how social media profiles will be evaluated or the process through which a traveler could be identified as a security risk. These standards must be articulated in advance to limit individual discretion and prevent ideological profiling of the sort that has long limited the rights of Americans to hear unpopular views.

Even after CBP recently articulated its limits, the proposal remains flawed. It undermines international law, individual rights, the rights of Americans both to hold and to hear unpopular views, and the Obama administration’s foreign policy to promote freedom of expression.

Having filed our comments alongside thousands of other critics, we hope that concerns from both Americans and the international community will spur the administration to reject CBP’s speech-suppressing proposal. Concerned readers can amplify our concerns by prompting their congressional representatives—especially those on the Senate and House Homeland Security committees—to write their own letters seeking answers from DHS and CBP.


Memo to the DOJ: Facial Recognition’s Threat to Privacy is Worse Than Anyone Thought - Wed, 19/10/2016 - 08:25

Before all of this ever went down
In another place, another town
You were just a face in the crowd
You were just a face in the crowd
Out in the street walking around
A face in the crowd

-Tom Petty

If we don’t speak up now, the days when we can walk around with our heads held high without fear of surveillance are numbered. Federal and local law enforcement across the country are adopting sophisticated facial recognition technologies to identify us on the streets and in social media by matching our faces to massive databases.

We knew the threat was looming. But a brand new report from the Georgetown Law Center for Privacy and Technology indicates the problem is far worse than we could’ve imagined.  The researchers compare the use of facial recognition to a perpetual line-up, where everyday, law-abiding citizens are pulled into law enforcement investigations without their consent and, in many cases, without their knowledge.

The researchers sent more than 100 public records requests to police agencies. Among their findings:

  • One in two American adults has their image in a facial recognition network, impacting more than 117 million people. Law enforcement agencies in at least 26 states use facial recognition in combination with driver’s license and ID photos. Sixteen states grant the FBI access to their DMV databases.
  • At least five large cities, including Los Angeles, Chicago, and Dallas, use or have considered using facial recognition to scan the faces of pedestrians in real time with surveillance cameras.
  • Facial recognition is almost completely unregulated. No states have passed comprehensive laws limiting facial recognition, and only one of 52 agencies surveyed expressly forbids police from using facial recognition to surveil people engaged in political, religious, or other First Amendment protected activities. Very few have taken measures to ensure accuracy of facial recognition results or have audited their systems for abuse.
  • Facial recognition systems have a disproportionate impact on communities of color. One study, whose authors included an FBI researcher, found the technology is less reliable when analyzing African American faces. Because African Americans are already arrested at a disproportionate rate, their mugshots are overrepresented in facial recognition databases. If the technology also has a higher rate of misidentification for people of color, the chance that they will be considered a suspect for a crime they didn't commit grows further (see the sketch after this list). EFF raised many of these issues in our response to the FBI’s plan to exempt its Next Generation Identification biometric database.
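
To make the compounding effect concrete, here is a back-of-the-envelope sketch in Python. The error rates and database sizes below are illustrative assumptions, not figures from the Georgetown report: even at an identical false-match rate, overrepresentation in mugshot databases multiplies the number of innocent “hits” a search returns, and a higher error rate for the overrepresented group compounds the disparity.

    # Illustrative numbers only -- not data from the Georgetown report.
    def expected_false_matches(photos_in_db: int, false_match_rate: float) -> float:
        """Expected number of innocent people wrongly returned by a search."""
        return photos_in_db * false_match_rate

    # Same hypothetical error rate; three times the photos means three
    # times the innocent matches returned by a search.
    print(expected_false_matches(1_000_000, 1e-4))  # 100.0
    print(expected_false_matches(3_000_000, 1e-4))  # 300.0

    # A higher error rate for the overrepresented group compounds it.
    print(expected_false_matches(3_000_000, 2e-4))  # 600.0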

In response to the report, EFF has joined a large coalition of privacy advocates to demand that the U.S. Department of Justice’s Civil Rights Division take two major steps to keep facial recognition in check:

1. Expand ongoing investigations of police practices and include in future investigations an examination of whether the use of surveillance technologies, including face recognition technology, has had a disparate impact on communities of color; and

2. Consult with and advise the FBI to examine whether the use of face recognition has had a disparate impact on communities of color.

The problem isn’t just the police but also an aggressive push by biometric tech vendors who downplay the accuracy issues while marketing the systems as crucial to contemporary policing.  The danger that facial recognition poses to our privacy and civil liberties is real and immediate. While we do give up a small amount of privacy when we walk around in public, we must preserve our ability to blend in as just a face in the crowd.

Read the Georgetown Law report on facial recognition: The Perpetual Line-Up: Unregulated Police Face Recognition in America.

Read the coalition letter to the U.S. Department of Justice Civil Rights Division.

Related Cases: FBI's Next Generation Identification Biometrics Database; FBI Facial Recognition Documents

Inside Intellectual Ventures' Portfolio: Nearly 500 University Patents - Wed, 19/10/2016 - 07:38

Harvard researcher Yarden Katz has just published some fascinating findings on which universities have sold patents to notorious patent-holding company Intellectual Ventures (IV). Of the nearly 30,000 active patents that IV lists publicly, 470 were originally assigned to universities—a total of 61 institutions.

Katz explains how he arrived at these numbers:

How many of IV’s patents came from universities?

To answer this, I have scraped the names of the original assignees for each of the U.S. patents in the portfolio from patent records (see annotated patents list). The analysis shows that nearly 500 of IV’s patents originally belonged to universities, including state schools.
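
For readers who want to attempt a similar analysis, the sketch below shows the kind of lookup involved. It assumes the PatentsView API’s query format and field names, which may differ in practice, and it is not the pipeline Katz used (he scraped patent records directly); treat it purely as an illustration.

    # Hypothetical sketch: fetch assignee organizations for a few patent
    # numbers and flag ones that look university-assigned. The endpoint,
    # query format, and field names are assumptions for illustration.
    import json
    import requests

    def assignees_for(patent_numbers):
        query = {"_or": [{"patent_number": n} for n in patent_numbers]}
        fields = ["patent_number", "assignee_organization"]
        resp = requests.get(
            "https://api.patentsview.org/patents/query",
            params={"q": json.dumps(query), "f": json.dumps(fields)},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("patents", [])

    for patent in assignees_for(["8000000", "9000000"]):  # example numbers
        for a in patent.get("assignees", []):
            org = (a.get("assignee_organization") or "").lower()
            if "university" in org or "regents" in org:
                print(patent["patent_number"], "->", a["assignee_organization"])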

Katz found some other surprises in IV’s portfolio, including nearly 100 patents from the U.S. Navy.

If you know nothing else about patent trolls, you’ve still probably heard the name Intellectual Ventures before. IV is one of the largest patent trolls in the world and has been behind many of the most egregious cases of litigation abuse. Earlier this year, we wrote about IV suing a florist over its patent on crew assignments. For many years, it has tried to cultivate relationships with American universities so it can add their patents to its portfolio.

As we’ve discussed here before, over 100 universities have endorsed a set of principles for university patenting practices. Among other points, it suggests that universities should require that licensees “operate under a business model that encourages commercialization and does not rely primarily on threats of infringement litigation to generate revenue.” Unfortunately, a number of those institutions appear not to be living up to this principle.

From Katz’s post:

Both the University of California and Caltech signed the 2007 statement, yet IV now owns tens of patents from these schools that were filed after 2007. For instance, the IV portfolio includes a Caltech patent filed in 2010 (granted in 2011) and University of California patent filed in 2008 (granted in 2014). Other universities that signed the statement, such as Stanford, Harvard and MIT, did not have patents in the portfolio.

Along with a coalition of users’ rights organizations, EFF recently launched a campaign asking universities to sign a pledge that they won’t sell or license patents to trolls.

When universities sell patents to trolls, it directly undermines the role that they play as engines of innovation: the more patents trolls hold in a certain area of technology, the more dangerous that field is for innovators. The licensing decisions that universities make today will strengthen or sabotage the next generation of inventors. That’s why we encourage everyone to speak out: students, faculty, alumni, parents, and community members. These policies affect all of us.

If you’d like to see universities pledge not to partner with trolls, then take a moment to tell your university. We’ve designed our petition to make it easy to share the results with university leadership. For example, here are all of the signatories affiliated with the University of California-Berkeley. We’re eager to work with local organizers to help you make sure that your institution hears your voice.

We’ve fought patent trolls in the courts and advocated for laws that bring fairness to the patent system. Universities are the next battleground. Together, we can stop the flow of university inventions into the hands of bad actors.

Tell your university: Don’t sell patents to trolls.


Minnesota Sheriff Must Release Emails on Use of Biometric Technology - Tue, 18/10/2016 - 05:33

When EFF launched a campaign last year to encourage the public to help us uncover police use of biometric technology, we weren’t sure what to expect.  Within a few weeks, however, hundreds of people joined us in filing public records requests around the country.  

Ultimately, dozens of local government agencies responded with documents revealing devices capable of digital fingerprinting and facial recognition, while many more reported back—sometimes erroneously—that they hadn’t used this technology at all.  Several, however, either didn’t respond, demanded exorbitant fees, or outright rejected the requests.

EFF has now joined the ACLU of Minnesota in filing an amicus brief [.pdf] in a particularly egregious case now before the Minnesota Court of Appeals, demanding the release of emails regarding the Hennepin County Sheriff’s Office’s facial recognition program.

In August 2015, web engineer and public records researcher Tony Webster filed a request based on EFF’s letter template with Hennepin County, a jurisdiction that includes Minneapolis, host city of the 2018 Super Bowl. He sought emails, contracts, and other records related to the use of technology that can scan and recognize fingerprints, faces, irises, and other forms of biometrics.

Hennepin County resisted the request, so Webster lawyered up. In April, a judge ruled in Webster’s favor. As the Minneapolis Star-Tribune reported:

In an April 22 order, Administrative Law Judge Jim Mortenson described four months of unexplained delays, improperly redacted records, inadequate answers and other behavior by county officials in response to Webster’s request.

The county’s actions violated the Minnesota Government Data Practices Act (MGDPA), Mortenson found. He fined the county $300, the maximum allowed by law; ordered it to pay up to $5,000 in Webster’s attorney’s fees; refund $950 of the filing fee, and pay $1,000 in court costs.

Perhaps most significant, he ordered the county to figure out a way to make its millions of e-mail messages publicly accessible by June 1. 

It was a huge victory for Webster. But Hennepin County appealed, and thus a skirmish over biometric records has become a crucial battleground over the public’s right to access the emails of government officials across the state of Minnesota. 

Biometric technology is an emerging threat to privacy. By biometrics, we mean the physical and behavioral characteristics that make us unique, such as our fingerprints, faces, irises and retinas, tattoos, gaits, and more. Police around the country have begun adopting and testing these systems. The devices are often mobile, such as handheld devices or smart phone apps. 

Some emails that Webster already received show that the Hennepin County Sheriff’s Office is contemplating using facial recognition technology on still images in investigations. Even more concerning, there’s evidence that in the next two years the sheriff intends to use real-time facial recognition to identify people in surveillance camera streams, including those owned by private entities.

The records obtained show that jail inmates had their mugshots enrolled in a system designed by the German firm Cognitec. One particular email showed how the $200,000 system poses a threat to the privacy of individuals not involved in crimes and presents a significant financial burden on taxpayers. As a criminal informational analyst with the Hennepin County Sheriff’s Office wrote:

"[The] system is so good I’ve found possible matches that turned out to be close relatives…It costs a shit-ton … but I love it.”

In our brief, we draw from the wealth of records that EFF and our partners at MuckRock News received through the crowdsourcing campaign to explain why these emails are key to informing the public debate over mobile biometrics.

As EFF Senior Staff Attorney Jennifer Lynch, ACLU of Minnesota Legal Director Teresa Nelson, and the Stinson Leonard Street law firm write in the brief:

Using the documents released in response to these requests, EFF has been able to report on nine agencies using biometric technology in California. The documents revealed that most of the agencies are using digital fingerprinting devices, and many are also using iris, palm, and facial recognition technology, or plan to use them in the future. One of EFF's partner organizations used these same records to map the ties between the biometric contractors mentioned in the documents and firms in the defense and security industries that are deeply embedded in the national security apparatus. EFF is continuing to review records released by other agencies.

The brief also explains how emails often contain some of the most important information:

Emails released to other requesters have been equally revealing. For example, emails released by Miami-Dade County, Florida showed how MorphoTrak, a large biometrics vendor serving forty-two states' DMVs and many federal agencies, underpriced the devices in its invoices but increased the price later. Emails between the Phoenix, Arizona Police Department and its vendor revealed information about the sole-source procurement process. And emails released by the Polk County, Florida Sheriff's Office describe the timeline for installing biometrics devices in squad cars and outline the training process for using the devices. 

EFF hopes the appellate judges recognize that democracy functions best when the public debate is informed by government records and deny Hennepin County’s attempts to shield its emails from scrutiny.

Related Cases: FBI's Next Generation Identification Biometrics Database

Patent Forum Shopping Must End - Fri, 14/10/2016 - 09:03
It's Time for the Supreme Court to End the Venue Loophole

As we’ve detailed on many occasions, forum shopping is rampant in patent litigation. Last year, almost 45% of all patent cases were heard in the Eastern District of Texas, a sparsely populated region of Texas probably better known as the birthplace of George Foreman than for any technological industry. EFF, along with Public Knowledge, has filed an amicus brief in TC Heartland v. Kraft, urging the Supreme Court to hear a case that could end forum shopping in patent cases.

The case is one of statutory interpretation. Prior to 1990, the Supreme Court had long held that in patent cases, the statute found at 28 U.S.C. § 1400 controlled where a case could be filed. However, in 1990, in a case called VE Holding, the Federal Circuit held that a small technical amendment to another venue statute—28 U.S.C. § 1391—effectively overruled this long line of cases. VE Holding, together with another case called Beverly Hills Fan, means that companies that sell products nationwide can be sued in any federal court in the country on charges of patent infringement, regardless of how tenuous the connection to that court. TC Heartland first asked the Court of Appeals for the Federal Circuit to revisit its law, and EFF supported TC Heartland at that court as well. The Federal Circuit declined the invitation.

As discussed in the FTC's recent report on patent assertion entities, 53% of the studied cases were filed in the Eastern District of Texas. 

TC Heartland has asked the Supreme Court to review the law that allows such a high concentration of patent cases in one district. In our brief in support of TC Heartland’s petition for certiorari, we argue that the Federal Circuit’s rule is fundamentally unfair and undermines the very purpose of the venue statutes. The brief also details how the rule has significant, substantive effects on cases that, without correction, will continue to create ongoing harms. We’re not the only ones who feel the Federal Circuit’s law is due for review: TC Heartland has also received support from 32 Internet companies, retailers, and associations, as well as the Washington Legal Foundation.

We hope the Supreme Court takes this opportunity to look closely at patent venue. Failing that, Congress may have to step in. Either way, while neither fix would solve all of the problems with current patent law, ending forum shopping would go a long way toward making at least the court system a lot more fair.

Update: After this post was written, four more amicus briefs were filed urging the Supreme Court to hear the case. The briefs of 56 law professors, former Chief Judge Paul Michel of the Court of Appeals for the Federal Circuit, the American Bankers Association et al., and Dell et al. can be found at the end of this post. 


Where WhatsApp Went Wrong: EFF's Four Biggest Security Concerns - Fri, 14/10/2016 - 03:03

After careful consideration, we have decided to add additional warnings and caveats about using WhatsApp to our Surveillance Self-Defense guide.

No technology is 100 percent secure for every user, and there are always trade-offs among security, usability, and other considerations. In Surveillance Self-Defense (SSD), we aim to highlight reliable technologies while adding caveats to explain how their various strengths and weaknesses affect user privacy and security. In the case of WhatsApp, it is getting harder and harder to adequately explain its pitfalls in a way that is clear, understandable, and actionable for users. This is especially true since WhatsApp’s announcement that it would be changing its user agreement regarding data sharing with the rest of Facebook’s services.

This is unfortunate precisely because of WhatsApp's security strengths. Under the hood, WhatsApp uses the best-of-breed protocol for encrypted messaging: the Signal Protocol. This gives a high assurance that messages between you and your contacts are encrypted such that even WhatsApp can’t read them, that your contacts' identities can be verified, and that even if someone steals your encryption keys and is able to tap your connection, they can’t decrypt messages you’ve already sent. In crypto parlance, these guarantees are termed end-to-end encryption, authenticity, and forward secrecy.
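
To make “forward secrecy” concrete, here is a toy sketch using the Python cryptography package (an illustration of the underlying idea, not WhatsApp’s actual code): each message key is derived from ephemeral key pairs that are discarded after use, so keys stolen later can’t reconstruct old message keys.

    # Toy illustration of forward secrecy (requires the `cryptography`
    # package): derive a one-time message key from an ephemeral exchange.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def message_key(my_private, their_public) -> bytes:
        """Derive a one-time message key from an ephemeral DH exchange."""
        shared = my_private.exchange(their_public)
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"msg-key").derive(shared)

    # Alice and Bob each generate an *ephemeral* key pair for this message.
    alice_eph = X25519PrivateKey.generate()
    bob_eph = X25519PrivateKey.generate()

    k_alice = message_key(alice_eph, bob_eph.public_key())
    k_bob = message_key(bob_eph, alice_eph.public_key())
    assert k_alice == k_bob  # both sides derive the same one-time key

    # The ephemeral private keys are then discarded. Without them the
    # shared secret can't be recomputed, so this key -- and the message
    # it protected -- stays secret even if long-term keys leak later.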

We take no issue with the way this encryption is performed. In fact, we hope that the protocol WhatsApp uses becomes more widespread in the future. Instead, we are concerned about WhatsApp’s security despite the best efforts of the Signal Protocol. Every application includes various components beyond its cryptography: the user interface, the code that interacts with the operating system, the business model behind the whole operation—and secure messaging apps are no exception. It is in this surrounding functionality that we have identified various places where a user can dangerously overestimate WhatsApp’s security.

Below we describe our four greatest concerns in more detail.

Unencrypted backups

WhatsApp provides a mechanism to back messages up to the cloud. In order to back messages up in a way that makes them restorable without a passphrase in the future, these backups need to be stored unencrypted at rest. Upon first install, WhatsApp prompts you to choose how often you wish to back up your messages: daily, weekly, monthly, or never. In SSD, we have advised users to never back up their messages to the cloud, since that would deliver unencrypted copies of your message log to the cloud provider. In order for your communications to be truly secure, any contact you chat with must do the same.

Key change notifications

If the encryption key of a contact changes, a secure messaging app should notify you and prompt you to accept the change. On WhatsApp, however, if your contact changes keys, this fact is hidden away by default. To be notified, users have to search for the setting “Security Notifications” (found under “Security” in the “Account” section of your user settings) and manually switch it on.

Key verification is critical to prevent a Man in the Middle attack, in which a third party pretends to be a contact you know. In this attack scenario, the third party sits in the middle of your connection and convinces your device to send messages to them instead of your contact, all the while decrypting those messages, possibly modifying them, and sending them along to your original, intended recipient. If your contact’s key changes suddenly, this could be an indication that you are being man-in-the-middled (though typically it’s just because your contact has bought a new phone and re-installed the app).
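
A trust-on-first-use check is simple enough to sketch in a few lines. This is an illustration of the behavior a secure messenger should surface by default, not WhatsApp’s implementation:

    # Minimal trust-on-first-use (TOFU) sketch: remember each contact's
    # key fingerprint and warn loudly when it changes.
    import hashlib

    known_fingerprints = {}  # contact id -> fingerprint seen on first contact

    def fingerprint(public_key_bytes: bytes) -> str:
        """A short, comparable digest of a contact's public key."""
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check_contact_key(contact: str, public_key_bytes: bytes) -> bool:
        """Return False (and warn) if a previously seen key has changed."""
        fp = fingerprint(public_key_bytes)
        previous = known_fingerprints.get(contact)
        if previous is None:
            known_fingerprints[contact] = fp  # first contact: trust and record
            return True
        if previous != fp:
            # Could be a reinstalled app on a new phone -- or a man in
            # the middle. Surface it to the user; don't hide it.
            print(f"WARNING: key for {contact} changed; verify out of band!")
            return False
        return True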

Web app

WhatsApp provides an HTTPS-secured web interface for users to send and receive messages. As with all websites, the resources needed to load the application are delivered each and every time you visit that site. So, even if there is support for crypto in the browser, the web application can easily be modified to serve a malicious version of the application upon any given pageload, which is capable of delivering all your messages to a third party. A better, more secure option would be to provide desktop clients in the form of extensions rather than a web interface.

Facebook data sharing

WhatsApp’s recent privacy policy update announced plans to share data with WhatsApp’s parent company Facebook, signalling a concerning shift in WhatsApp’s attitude toward user privacy. In particular, the open-ended, vague language in the updated privacy policy raises questions about exactly what WhatsApp user information is or is not shared with Facebook. WhatsApp has publicly announced plans to share users’ phone numbers and usage data with Facebook for the purpose of serving users more relevant friend recommendations and ads. While existing WhatsApp users are given 30 days to opt out of this change in their Facebook user experience, they cannot opt out of the data sharing itself. This gives Facebook an alarmingly enhanced view of users’ online communications activities, affiliations, and habits.

Ways forward

WhatsApp and Facebook could take some simple steps to restore our confidence in their product.

  • Simplify WhatsApp’s user interface for turning on strong privacy. A slider that would switch on all of the protective options—such as disabling backups, enabling key change notifications, and opting out of aspects of data sharing—would make it far easier for users to take control of their security.

  • Make a public statement about exactly what kinds of data will be shared between WhatsApp and Facebook and how it will be used. WhatsApp needs to take certain future uses of its data permanently off the table by defining what it will—and, just as importantly, will not—do with the user information it collects.

Until such changes are made, we have to warn users to take extra caution when deciding whether and when to communicate using WhatsApp. If you decide to use WhatsApp, see our SSD guides for Android and iOS for more information on how to change your settings to protect your security and privacy.


EFF Goes to Washington to Fight Against the Changes to Rule 41 - Thu, 13/10/2016 - 08:20

Access Now's Nathan White, Demand Progress' Kate Kizer, EFF's Rainey Reitman, and Sen. Ron Wyden

If Congress does nothing, a new policy will take effect in less than two months that will make it easier than ever for the FBI to infiltrate, monitor, copy data from, inject malware into, and otherwise damage computers remotely.

With the threat of the “Rule 41” changes looming, EFF Senior Staff Attorney Nate Cardozo and EFF Activism Director Rainey Reitman recently flew to DC to speak to policymakers about the future of computer security and the ramifications of government hacking. Over the course of just a few days filled with back-to-back appointments, Nate and Rainey briefed dozens of legislative staffers on not only Rule 41, but the larger issue of government hacking and how the system lacks the necessary safeguards for privacy. They also briefed staff on the upcoming expiration of a law used to justify NSA surveillance of the Internet, Section 702 of the FISA Amendments Act.

Nate (pictured right) also participated in a panel discussion for members of the House and their staff, hosted by the Fourth Amendment Advisory Committee on the issue of government hacking and Rule 41. Check out the video.

EFF and our coalition partners have been raising the alarm about the updates to Rule 41 for months, and thousands have joined us in speaking out through our campaign No Global Warrants. The proposed changes to Rule 41 of the Federal Rules of Criminal Procedure—which are set to automatically go into effect on December 1—would make it easier for the government to get a warrant to hack into a computer. Whenever they are investigating a crime and encounter computers using location-obscuring technology, FBI agents would be able to request a warrant from practically any magistrate judge in any district in the United States. This removes one of the key safeguards our judicial system has against forum shopping: namely, that law enforcement must go to judges in the district where crimes actually take place.

These changes would make it easier than ever for FBI agents to seek out prosecution-friendly or technically-naïve judges to sign off on the dangerous hacking warrants. And we know these warrants invite abuse: we’ve already seen cases where a single warrant was used to hack into thousands of computers.

One of the high points of EFF’s trip to DC was delivering letters from more than 35,000 individuals opposing the updates to Rule 41 to Sens. Ron Wyden and Chuck Grassley, alongside our friends at Access Now and Demand Progress.

Accepting the 35,000 petitions, Sen. Wyden warned: “The government is seeking a massive expansion of its hacking and surveillance authority. And these changes will go into effect automatically unless Congress acts by December 1.”

While he noted that this is an incredibly busy time in Congress, it’s nonetheless imperative that our elected officials take up this issue.  “When you’re talking about making it possible for one judge with one warrant to hack hundreds of thousands of computers, that is not a small change,” Sen. Wyden stated. “So we’ve got to get this word out.”

The petition delivery was live-streamed on Facebook. If you’re concerned about the pending updates to Rule 41, please speak out now.

Related Cases: The Playpen Cases: Mass Hacking by U.S. Law Enforcement

5 Questions With EFF’s New Writer Kate Tummarello - Thu, 13/10/2016 - 07:07

If you scroll through EFF’s staff bios, you may notice a trend: we have a lot of reporters who have joined the battle for free speech, privacy, and transparency. Some worked for years in newsrooms or as independent journalists. Others studied and taught at journalism schools or worked directly for journalism advocacy organizations. This kind of experience is often a perfect fit for tech policy advocacy, because reporters have a practical understanding of how important our rights are and how to communicate these issues to the public.  And ultimately, EFF’s work is not unlike journalism: we fight to free information and then we write about it.  

So it is with great admiration that we announce the latest addition to EFF’s activism team: Kate Tummarello, a writer you will be reading a lot from during the heated battles over NSA spying on the horizon. Kate joins us after an impressive tour of duty in Washington, D.C., where she covered tech policy in Congress for news outlets such as Politico, The Hill, and Roll Call. When she was on the beat, we were regularly impressed by the hard questions she’d ask us and members of Congress, and by her ability to translate often arcane concepts to a lay audience.

This time Kate’s on the other end of the interview. I asked her some questions about her background and what she learned covering Capitol Hill.  

How did you get started on the tech policy beat? 

My road to covering tech policy started at my college newspaper. I wanted to join the editorial board as soon as possible when I got to college, but the only opening my freshman year was as a science and technology editor. Knowing very little about the physical sciences, I decided to become a tech person.

After I graduated with a public policy degree, I moved to DC. Within a few months of starting at a newspaper there, the widespread SOPA protests kicked off, and I was lucky enough to be pulled in to help cover that debate. I saw how nontraditional and unpredictable tech policy fights could be, and I decided I wanted to focus on that.

Can you tell us about one of the most thrilling moments in tech policy that you've covered? 

I covered the Hill fight over the USA Freedom Act in 2015, and that was thrilling. Exhausting, but thrilling. I remember standing outside the Senate chamber in the early morning hours the first time the Senate voted on the bill with CSPAN on an earbud in one ear and holding my recorder up to my other ear. Things were changing by the minute, and individual members' votes were genuinely surprising, even to the staffers and lobbyists who were graciously still responding to my late-night emails. 

I also remember first seeing Rep. Justin Amash hanging out around an entrance to the Senate floor (to remind the Senate that he would be there to oppose anything less than USA Freedom if the Senate tried to push something through the already-recessed House). I remember the moment Intelligence Committee Chairman Richard Burr said he didn't believe the Section 215 program was actually shutting down as he tried to escape the throng of reporters and go home. And I remember going home to sleep, knowing that we would be back in a week to continue the fight over the bill. 

What's something that the digital rights community should know about Congress?

I think that from the outside, it's tough to fully appreciate all of the pressures individual members of Congress are under when it comes to any one vote. It's not as simple as, what's the "right" thing to do, and it's definitely not as simple as considering what donors and lobbyists want to happen. They have to factor in pressure coming from their party leadership, their committee leadership, other members, parts of the administration, their voters, business in their district/state, and more.

That obviously doesn't mean each member shouldn't do what he or she believes is right. But it does often mean that to win over any one member on any issue, you need to change the balance of those pressures so when they weigh all of those competing things against each other, there's more compelling them to do the "right" thing.

Now on the flipside—what's something Congress has trouble understanding about digital rights?

I think a lot of the time, the digital rights community gets lumped in with the tech industry. When members are looking at an issue—especially if it's the tech industry versus another industry or another interest—it's easy to assume that tech companies are using the digital rights community to amplify their own lobbying. It's important to recognize that digital rights activists are equipped with the technical and legal knowledge to arrive at policy positions entirely on their own. Sometimes they happen to align with the positions of the tech industry or individual tech companies, and sometimes they don't!

What are you excited to be working on in the next year at EFF?

Obviously next year's Hill fight over Section 702 is going to be a big deal. After watching the debate over USA Freedom, I'm excited to help contribute to the debate this time around!


Upload Filtering Mandate Would Shred European Copyright Safe Harbor - Thu, 13/10/2016 - 03:24

After months of study, European regulators have finally released the full and final proposal on Copyright in the Digital Single Market, and unfortunately it's full of ideas that will hurt users and the platforms on which they rely, in Europe and around the world. We've already written a fair bit about the leaked version of this proposal, but it's worth taking a deeper dive into a particular provision, euphemistically described as “sharing of value.” This provision, Article 13 of the Directive, requires platforms for user-generated content to divert some of their revenue to copyright holders who, the Commission claims, otherwise face a hard time monetizing their content online. We strongly support balanced and sensible mechanisms that help ensure that artists get paid for their work. But this proposal is neither balanced nor sensible.

Article 13 is short enough that its key first paragraph can be reproduced in its entirety here:

Information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users shall, in cooperation with rightholders, take measures to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter or to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers. Those measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate. The service providers shall provide rightholders with adequate information on the functioning and the deployment of the measures, as well as, when relevant, adequate reporting on the recognition and use of the works and other subject-matter.

The essence of this paragraph is that it requires large user-generated content platforms to reach agreements with copyright holders to adopt automated technologies that would scan content that users upload, and either block that content or pay royalties for it.
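
To see how sweeping this is, consider what even the crudest version of a “content recognition technology” looks like. The sketch below uses exact hash matching, which is far simpler than real systems like Content ID (those rely on perceptual fingerprints that survive re-encoding), but the mechanics are the same: every single upload must be checked against a rightsholder-supplied list before publication.

    # Illustrative only: exact-hash matching, much cruder than real
    # content recognition systems, but the mandated mechanics are the
    # same -- every upload is screened before it is published.
    import hashlib

    # Fingerprints supplied by rightsholders (hypothetical entries).
    rightsholder_fingerprints = {
        hashlib.sha256(b"bytes of a registered work").hexdigest():
            "Example Records - Song A",
    }

    def filter_upload(data: bytes):
        """Screen an upload; block or monetize it on a match."""
        match = rightsholder_fingerprints.get(hashlib.sha256(data).hexdigest())
        if match is not None:
            return ("blocked_or_monetized", match)
        return ("published", None)

    print(filter_upload(b"bytes of a registered work"))  # matched
    print(filter_upload(b"original user content"))       # published

Note that even this trivial filter has to sit in the upload path of every user, which is precisely the kind of general monitoring obligation discussed below.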

This is Not Content ID

The automated scanning mandate described above may sound similar to what YouTube's Content ID technology does—but there are some key differences. Perhaps the biggest difference is that whereas Content ID only scans music and video uploads, there is no such limitation in Article 13. As such, the provision anticipates that all other copyrighted works, including text, photographs, and so on, will also have to be scanned and filtered. With one stroke of a pen, this would turn the Internet from a zone of mostly permissionless free expression into a moderated walled garden. Such a general imposition on freedom of expression online is quite unprecedented in a democratic society.

Another difference from Content ID is that many additional parties would be pulled into these cooperative arrangements, both on the platforms' side and the copyright holders' side. On the platforms' side, Article 13 applies to any service provider that hosts “large amounts” of user uploaded content. What are “large amounts”? We have no way of knowing for sure, but it's easy to think of many hundreds of websites that might qualify, including commercial sites such as Tumblr and DeviantArt, but also non-profit and community websites such as Wikipedia and the Internet Archive.

On the copyright holders' side, which rightsholders are platforms required to negotiate with? Article 13 doesn't specify. Unless further regulations or industry agreements fill in this gap, we face the prospect that platforms might have to negotiate with hundreds or even thousands of claimants, all seeking their own share of the platform's revenue. That's the worst case scenario. The best case scenario is that collecting societies will step in as intermediaries, but further cementing their role in the value chain isn't such an attractive proposition either, since most European collecting societies are national monopolies and have been known to abuse their market power.

Incompatibility with European Law and Human Rights

A law that requires Internet platforms to reach “voluntary” agreements with copyright holders is, of course, the essence of Orwellian doublethink, and a hallmark of the kind of policymaking by proxy that we've termed Shadow Regulation. The Commission is likely taking that approach because it knows that it can't directly require Internet platforms to scan content that users upload: an existing law, Article 14 of Directive 2000/31 on electronic commerce (the E-commerce Directive), expressly prohibits any such requirement.

That provision, which is roughly equivalent to the safe harbor provisions in Section 512 of the DMCA, gives conditional immunity to Internet platforms for user-uploaded content, and rules out the imposition of a general obligation to monitor such content. The Court of Justice of the European Union (CJEU) ruled in two separate cases in 2011 and 2012 that this prohibition on general monitoring derives directly from Articles 8 and 11 of the European Charter of Fundamental Rights, which safeguard personal data and freedom of expression and information.

If the European Commission proposed to directly rescind the Article 14 safe harbor, this would be a clear infringement of Europeans' bill of rights. Yet the Commission proposes to get around this through the sham arrangement of forcing companies into private agreements. Convinced? Neither are we. It's clear law that a government can't get around its own human rights obligations by delegating the infringement of those rights to the corporate sector.

A mandate for Internet platforms to scan and filter users' content is exactly the kind of general monitoring obligation that Article 14 prohibits. Rather than face a challenge to the Digital Single Market Directive in the CJEU, it would behoove the European Commission to abandon its attempt to rewrite the E-commerce Directive through Shadow Regulation before the proposal goes any further.

At this stage of the labyrinthine European legislative process, the proposal is out of the European Commission's hands and awaits comment by the other EU institutions, the Council of the EU and the European Parliament. That review will offer an opportunity for users to weigh in, so get ready. EFF will work with our European partners to fight back against this repressive proposal, and we will be asking for your help. Stay tuned.


EFF Celebrates Women in Tech, Today and Every Day - Wed, 12/10/2016 - 08:10

We believe in celebrating women in science, technology, engineering, and mathematics every day, and today – Ada Lovelace Day – is no different. Named after visionary 19th century mathematician Ada Lovelace, today is an opportunity to recognize the achievements of women in the STEM fields.

Lovelace, who is credited with being among the first to recognize the potential of computing, was encouraged to explore the world of mathematics from a young age, at a time when few women were actively involved in the field. She met Charles Babbage at 17, and her notes on his Analytical Engine included what is recognized as the first computer algorithm, as well as ideas about the potential of computational machines.

The day’s founder, Suw Charman-Anderson, said in a recent interview that she was inspired to establish the day of recognition after being frustrated with the dearth of women speakers at tech conferences. She wanted to create an occasion for people around the world to discuss and promote role models so that other women will be inspired to get involved in STEM fields.

“It’s essential for girls to see that they have a future in STEM, and for women to see that they can progress in STEM careers all the way to the top,” she said.

The inaugural Ada Lovelace Day in 2009 saw thousands blog about inspirational women in those fields. Since then, the day has grown to include global events, including a London-based event featuring women speakers discussing their work in engineering, physics, science writing, and more.

To celebrate the day, take part in one of these events around the world, support opportunities for women in the STEM fields, and use the official hashtag #ALD16 to share the stories of women in STEM who inspire you.


Tell the Copyright Office: Copyright Law Shouldn't Punish Research and Repair - Wed, 12/10/2016 - 05:32

After eighteen years, we may finally see real reform to the Digital Millennium Copyright Act’s unconstitutional pro-DRM provisions. But we need your help.

In enacting the “anti-circumvention” provisions of the DMCA, Congress ostensibly intended to stop copyright “pirates” from defeating DRM and other content access or copy restrictions on copyrighted works and to ban the “black box” devices intended for that purpose. In practice, the DMCA anti-circumvention provisions haven’t had much impact on unauthorized sharing of copyrighted content. Instead, they’ve hampered lawful creativity, innovation, competition, security, and privacy.

In the past few years, there’s been a growing movement to reform the law. As locked-down copyrighted software shows up in more and more devices, from phones to refrigerators to tractors, more and more people are realizing how important it is to be able to break those locks, for all kinds of legitimate reasons. If you can’t tinker with it, repair it, or peek under the hood, then you don’t really own it—someone else does, and their interests will take precedence over yours.

It seems the Copyright Office has heard those concerns. As part of an ongoing study, it’s asking for comments (PDF) on whether it should recommend that Congress enact a series of permanent exemptions to the law for several important and useful activities, including security research and repair.

On the one hand, any such recommendation may be too little and too late. Section 1201 is unconstitutional to begin with and should simply be repealed. Short of that, the best way for Congress to fix the law would be to pass Zoe Lofgren’s Unlocking Technology Act, which would protect those who want to break digital locks for noninfringing reasons. The permanent exemptions on the table don’t cover a host of other legitimate activities, like remix videos and documentary films.

On the other hand, this is progress. For almost two decades, EFF and a host of legal clinics and other public interest organizations have been going to the Librarian of Congress to plead for temporary exemptions on behalf of creators, researchers, people with disabilities, and other technology users. We’ve explained why those exemptions are needed and why they won’t harm copyright owners. There is no evidence, not a jot, that any such exemption has led to infringement—but that doesn’t save us from having to march back in every three years to do it all over again. Making a few of those exemptions permanent would let us all focus our energies on expanding the reach of the temporary ones, and working to streamline the process so it is less burdensome for both users and the government.

But that progress won’t be meaningful if the permanent exemptions aren’t truly useful. EFF is drafting comments that we hope will result in a strong and practical set of recommendations. But we need your help. We need to let the Copyright Office—and Congress—know that users want real reform and won’t settle for exemptions that fall short. Please take a moment to sign our petition; we will deliver your signatures to the Copyright Office along with our comment.

Tell the Copyright Office: copyright law shouldn’t punish research and repair.

Share this: Join EFF
Categories: Aggregated News

Is Let’s Encrypt the Largest Certificate Authority on the Web? - Tue, 11/10/2016 - 02:31

By the time you read this, Let’s Encrypt will have issued its 12 millionth certificate; 6 million of those certificates are active and unexpired. With these milestones, Let’s Encrypt now appears to us to be the Internet’s largest certificate authority—but a recent analysis by W3Techs said we were only the third largest. So in this post we investigate: how big is Let’s Encrypt, really?

What are certificate authorities, and how do you measure their size?

Certificate authorities (CAs) issue and maintain digital certificates that help web users and their browsers know they’re actually talking to the site they intended to. This is crucial for secure, HTTPS-encrypted communication, as these certificates verify the association between an HTTPS site and a cryptographic public key. A CA provides the owner of a website with a signed certificate that web visitors can independently verify. The certificate tells a user’s browser software, “If you use this key to set up secure communications with this website, no one can intercept those communications.” Without such an introduction, browsers can succumb to traffic interception, modification, and eavesdropping; even with encryption, users couldn’t be sure they were talking directly to a site rather than to a man-in-the-middle attacker.1
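To make the CA’s role concrete, here is a minimal sketch in Python (standard library only) of how a client can check which CA vouches for a site. The hostname is just an illustration and the printed values will vary; nothing here is specific to any one CA.

    import socket
    import ssl

    def issuing_ca(hostname, port=443):
        """Return the issuer organization and expiry date of a site's certificate."""
        # create_default_context() loads the system's trusted root CAs; the
        # handshake below fails unless one of those CAs vouches for the site.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()  # the validated leaf certificate
        # "issuer" is a tuple of relative distinguished names (key/value pairs).
        issuer = {key: value for rdn in cert["issuer"] for key, value in rdn}
        return issuer.get("organizationName"), cert["notAfter"]

    # Example: see who vouches for letsencrypt.org (output changes over time).
    print(issuing_ca("letsencrypt.org"))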

Let’s Encrypt is a free, automated, open CA founded by EFF, Mozilla, and the University of Michigan, with Cisco and Akamai as founding sponsors. But is it the largest CA? It turns out this could mean several things: issuing the most public certificates, issuing the most active public certificates, protecting the most Internet connections or sites, or any of a host of other possible metrics. In this post we’ll walk through a couple of ways to measure this, and the limitations and caveats that come with those measurements.

The numbers

At present, Let’s Encrypt has 6 million unexpired certificates, covering either 4 million or 10 million domains, depending on how you count.

Source: JC Jones

On this chart, “certificates active,” in orange, represents the number of certificates that have been issued and are not yet expired.2

Some of these unexpired certificates are duplicates, with more than one certificate covering one domain name. Others are the opposite, covering many domain names under one certificate.3 So it's probably more accurate to look at the number of distinct domains covered by unexpired certs. “Fully-qualified domains active” in red shows the number of different names among non-expired certificates. For example, eff.org and www.eff.org are treated as two different names. This metric can overcount sites; while most people would say that eff.org and www.eff.org are the same website, they count as two different names here.

Finally, “Registered domains active” in green counts the number of different registered domain names among non-expired certificates. In this case, eff.org and www.eff.org would be counted as one name. This metric may undercount different sites, because pages under the same registered domain may still be run by different people with different content—for example, different WordPress blogs hosted under wordpress.com.
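The difference between these three metrics is easy to see in miniature. The following hypothetical Python sketch counts a toy list of certificates all three ways; it approximates the registered domain as the last two DNS labels, which real measurements would instead derive from the Public Suffix List.

    # Each certificate is represented by the list of names it covers.
    certs = [
        ["eff.org", "www.eff.org"],   # one certificate covering two FQDNs
        ["www.eff.org"],              # a duplicate certificate for one name
        ["blog.example.org"],
    ]

    # Metric 1: certificates active.
    print(len(certs), "certificates active")                 # 3

    # Metric 2: fully-qualified domains active.
    fqdns = {name for names in certs for name in names}
    print(len(fqdns), "fully-qualified domains active")      # 3

    # Metric 3: registered domains active. Naive approximation: keep the
    # last two labels (wrong for suffixes like .co.uk; real code would
    # consult the Public Suffix List).
    registered = {".".join(name.split(".")[-2:]) for name in fqdns}
    print(len(registered), "registered domains active")      # 2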

Counting by number of certs: 3rd largest, or largest?

Our friends at W3Techs Web Technology Surveys recently released a blog post analyzing the CA market and putting Let’s Encrypt in third place among certificate providers.

When we looked closely at these numbers, however, we found that W3Techs was not looking at the entire certificate market; its analysis took into account only the top 10,000,000 most popular websites (as ranked by Alexa). This is important because Let’s Encrypt (shown below as “IdenTrust,” because of the root Let’s Encrypt uses) is commonly used by smaller, less popular, low-traffic sites rather than big, popular ones. For that reason, W3Techs’ third-place ranking, which relied on the biggest, most popular sites on the web, enormously undercounted Let’s Encrypt’s overall market share. The smaller sites that we primarily serve are exactly the sites that W3Techs (and other analyses of top Alexa-ranked sites) are least likely to count.

CAs in the lower part of the graph tend to serve more low-traffic sites. Source: W3Techs.

By other metrics, Let’s Encrypt is in fact the CA that has issued the most certificates and protected the most sites. This ranking from the Censys project looks at all known certificates that are valid, unexpired, and would be accepted by browsers at the moment of the query:


But this first-place ranking requires some caveats. The rankings above include data from Certificate Transparency, an open-source effort to monitor and audit TLS/SSL certificates. This data may include certificates that were never deployed in practice or are no longer actively in use. So, if someone gets a Let’s Encrypt certificate but then doesn’t actually use it, it still contributes to Let’s Encrypt’s first-place spot in the chart above.

This dataset may also include duplicates. For example, a webmaster new to TLS/SSL certificates may accidentally run a Let’s Encrypt client like Certbot five times in a row and get the same certificate every time. This will show up in the chart above as five different certificates, even though that webmaster is probably only using one. Of course, this is counteracted by the fact that many other certificates cover multiple domains at once.
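One rough way to correct for this kind of duplication, assuming the issuance data is in hand, is to collapse certificates that cover an identical set of names. This is a hypothetical sketch, not how Let’s Encrypt or Censys actually de-duplicates:

    from collections import Counter

    # Hypothetical issuance log: the set of names each certificate covers.
    issued = [
        frozenset({"eff.org", "www.eff.org"}),   # first Certbot run
        frozenset({"eff.org", "www.eff.org"}),   # accidental re-run
        frozenset({"eff.org", "www.eff.org"}),   # accidental re-run
        frozenset({"shop.example.net"}),
    ]

    # Counting distinct name-sets treats re-issuances of identical coverage
    # as a single deployment, a rough lower bound on certificates in use.
    distinct = Counter(issued)
    print(len(issued), "issued;", len(distinct), "distinct name-sets")
    # -> 4 issued; 2 distinct name-sets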

Lastly, there are categories of certificates that aren’t being counted here. This ranking covers only publicly trusted certificates (as opposed to self-signed ones, or those signed by private CAs not trusted by browsers by default) for domain names (as opposed to, for example, S/MIME certificates for email addresses, of which there are a huge and largely unmeasured number).

Valuable contributions regardless of numbers

The biggest caveats in the two rankings above—whether the dataset takes into account less popular sites (which makes the W3Techs numbers very pessimistic about Let’s Encrypt), and the possibility of unused or duplicate certificates (which may make our own numbers optimistic)—illustrate Let’s Encrypt’s valuable contributions to encryption efforts regardless of numbers or rankings.

As the difference between the two datasets shows, Let's Encrypt has been adopted more by smaller sites than by larger ones—often personal blogs or the sites of small businesses and associations. That means that, compared to other CAs, we protect fewer of the most famous and most popular Internet sites, and apparently also a smaller fraction of all web browsing activity. But that's fine by us.

One of the ways Let's Encrypt has been helping to secure the web is by making it easy and affordable for sites that have never had certs before to turn on secure HTTPS connections, and for software systems to start enabling HTTPS automatically and by default. Our free certificates may be more likely to be left unused than expensive certificates, and less expert webmasters may accidentally duplicate certificates—but that’s part of making HTTPS integration available to more webmasters across a range of resource and skill levels. Statistics suggest that most of our growth has come not at the expense of other CAs, but from giving previously unencrypted sites their first-ever certificates.

Rankings also fail to capture the communities that a CA like Let’s Encrypt can serve. A large share of Let’s Encrypt certificates have been issued to major hosting companies and platforms, including Automattic, the web development company behind WordPress.com; Shopify, an e-commerce platform; and OVH, a European ISP. And they are not alone. Dozens of web hosting providers and companies have made the commitment to use Let’s Encrypt to automatically protect their customer sites with HTTPS.

We are committed to supporting more companies and communities who want to make this move. Learn more about Let’s Encrypt and how to use the web’s largest CA here.

  • 1. Note that certificates don’t really solve the related authentication puzzle of knowing that you’re visiting the right domain name and are not being phished, which is in many ways a harder problem.
  • 2. Although it is not shown on this chart, our count of all certificates ever issued, including expired certs, is about 12 million.
  • 3. Let's Encrypt supports up to one hundred domains per certificate.

Share this: Join EFF
Categories: Aggregated News

Unblinking Eyes: The State of Communications Surveillance in Latin America - Tue, 11/10/2016 - 00:00

 New Reports Show How Vague Laws Can Pave the Way for Human Rights Violations

We're proud to announce today's release of “Unblinking Eyes: The State of Communications Surveillance in Latin America,” a project that documents and analyzes surveillance laws and practices in twelve Latin American countries. For over a year, we have worked with partner organizations across Latin America (Red en Defensa de los Derechos Digitales, Fundación Karisma, TEDIC, Hiperderecho, Centro de Estudios en Libertad de Expresión y Acceso a la Información, Derechos Digitales, InternetLab, and Fundación Acceso). Each of them published individual reports documenting the state of communications surveillance in their countries. Then, EFF took that research and other recent papers, and produced a broader report that compares surveillance laws and practices in Argentina, Brazil, Chile, Colombia, El Salvador, Guatemala, Honduras, Peru, Mexico, Nicaragua, Paraguay, and Uruguay. Moreover, together with Derechos Digitales, we published a legal analysis of the 13 Necessary and Proportionate Principles, explaining the legal and conceptual basis of each of the Principles in light of inter-American human rights standards. Finally, we published Who Can Spy On Us?, a visual guide to the state of communications surveillance in Latin America.

On this day, let’s take a minute to reflect on the horrific consequences of unchecked surveillance.

 The Terror Archive

In December 1992, following a hastily-drawn sketch of a map given to him by a whistleblower, the Paraguayan lawyer Martin Almada drove to an obscure police station in the suburb of Lambaré, near Asunción. Behind the police offices, in a run-down office building, he discovered a cache of 700,000 documents, piled nearly to the ceiling. This was the “Terror Archive,” an almost complete record of the interrogations, torture, and surveillance conducted by the Paraguayan military dictatorship of Alfredo Stroessner. The files reported details of “Operation Condor,” a clandestine program among the military dictatorships of Argentina, Chile, Paraguay, Bolivia, Uruguay, and Brazil during the 1970s and 1980s. The military governments of those nations agreed to cooperate in sending teams into other countries to track, monitor, and kill their political opponents. The files listed more than 50,000 deaths and 400,000 political prisoners throughout Argentina, Bolivia, Brazil, Chile, Paraguay, Uruguay, Colombia, Peru, and Venezuela.

Stroessner's secret police used informants, telephoto cameras, and wiretaps to build a paper database on everyone who was viewed as a threat, plus their friends and associates. The Terror Archive shows how far a country's government might sink when unchecked by judicial authorities, public oversight bodies, and the knowledge of the general public.

That was a quarter century ago.

A modern Operation Condor would have far more powerful tools at hand than just ring-binders, cameras, and wiretapped phones. Today's digital surveillance technology leaves the techniques documented in the Terror Archive in the dust.            

Twentieth-century surveillance law contemplates the simple wiretapping of a single phone line, offering no guidance on how to apply its regulations to our growing menagerie of spying capabilities. When new surveillance or cyber-security laws are passed, they are written to paper over existing practice, or to widen existing powers—such as data retention laws that force phone and Internet companies to log and retain even more data for state use. Each of these new powers is a ticking time bomb, waiting for abuse. One way to stop these powers from being turned against the public is to create robust and detailed modern privacy law to constrain their use, an independent judiciary that will enforce those limits, and a public oversight mechanism that allows the general public to know what its country's most secretive government agents are up to in their name.

Unfortunately, legislators and judges within Latin America and beyond have little insight into how existing surveillance law is flawed or how it might be fixed.  To assist in that imposing task, EFF has released “Unblinking Eyes: The State of Communications Surveillance in Latin America.”  

For over a year, we have worked with partner organizations across Latin America (Red en Defensa de los Derechos Digitales, Fundación Karisma, TEDIC, Hiperderecho, Centro de Estudios en Libertad de Expresión y Acceso a la Información, Derechos Digitales, InternetLab, and Fundación Acceso) to shed light on the current state of surveillance in the region, both in law and in practice. We've carefully documented existing laws in 13 countries, and gathered evidence of the misapplication of those laws. Our aim is to understand the legal situation in each country and contrast it with existing human rights standards. For this work, we analyzed publicly available laws and practices in Argentina, Brazil, Chile, Colombia, El Salvador, Guatemala, Honduras, Peru, Mexico, Nicaragua, Paraguay, Uruguay, and the United States, and published individual reports documenting the state of communications surveillance in each of these countries. Then, we took that research and produced a broader report that compares surveillance laws and practices throughout the entire region.

Our project was not limited to legal research, however. We combined our legal and policy work with on-site training throughout the region for digital rights activists, traditional human rights lawyers, investigative journalists, and policy makers. We explained how surveillance technologies work and how governments must apply international human rights standards to their laws and practices in order to appropriately limit those legal powers. We also mixed our legal and policy workshops with technical advice on how our partners in the region can protect themselves against government surveillance.

What have we learned?

Given the deeply rooted culture of secrecy surrounding surveillance, it is hard to judge the extent to which states comply with their own published legal norms. Ensuring that law not only complies with human rights standards but also genuinely governs and describes the state's real-world behavior is an ongoing challenge. Even so, we identified deficiencies that are widespread throughout the region and in need of special and immediate action.

Here are our recommendations:

  • The culture of secrecy surrounding communications surveillance must stop. We need to ensure that civil society, companies, and policy makers understand the importance of transparency in the context of surveillance, and why transparency reporting from the companies and the state is crucial to preventing abuses of power.

  • State officials and civil society must ensure that written norms are translated into consistent practice and that any failure to uphold the law is discovered and remedied. Judicial guidance from impartial, independent, and knowledgeable judges is needed.

  • States should have dedicated communications surveillance laws rather than a jigsaw puzzle of provisions scattered across various pieces of legislation, and these laws should be necessary, proportionate, and adequate.

  • The region should commit to implementing public oversight mechanisms that are carefully matched in resources and authority to those who wield these powers.

  • Individuals need to be granted due process and a right to be notified about a surveillance decision, with enough time and information to challenge that decision or seek other remedies whenever possible; innocent individuals affected by surveillance need avenues for redress.

We need a strong civil society coalition working on these issues. With the help of watchful and informed judges and legislators, we hope that digital technology will be used wisely to protect, not violate, human rights. We must ensure that we build a world where the Terror Archive remains a grim record of past failings, not a low-tech harbinger of an even darker future.

Lastly, the problems encountered in Latin America are not isolated. From the United States to Australia, from France to Argentina, surveillance technology and techniques are on the rise, yet there is limited public awareness about the civil liberties implications of these rapid changes. Activists should keep pushing for the 13 Principles to protect privacy and restrict digital surveillance.

Read our reports, and learn about the state of surveillance in Latin America. Join us to defend our rights and those of the future. Below you can find some key findings for each country.

Share this: Join EFF
Categories: Aggregated News

Ojos que no parpadean (Unblinking Eyes): The State of Surveillance in Latin America - Mon, 10/10/2016 - 23:59

New reports show how imprecise laws can pave the way for human rights violations

We are proud to announce the release of “Ojos que no parpadean: El estado de la vigilancia en América Latina” (“Unblinking Eyes: The State of Surveillance in Latin America”), a project that analyzes surveillance laws and practices in twelve Latin American countries. For more than a year, EFF worked with our partners in Latin America (Red en Defensa de los Derechos Digitales, Fundación Karisma, TEDIC, Centro de Estudios en Libertad de Expresión y Acceso a la Información, Hiperderecho, Derechos Digitales, InternetLab, and Fundación Acceso), documenting and publishing individual reports analyzing the state of surveillance in twelve countries. Then, building on that research, EFF produced a broader report comparing surveillance laws and practices in Argentina, Brazil, Chile, Colombia, El Salvador, Guatemala, Honduras, Peru, Mexico, Nicaragua, Paraguay, and Uruguay. In addition, together with Derechos Digitales, we published reference materials explaining the legal and conceptual basis of each of the 13 Necessary and Proportionate Principles vis-a-vis inter-American human rights standards. Finally, we published “¿Quién puede espiarnos?” (Who Can Spy On Us?), a visual guide to the state of surveillance in the region. On this day, let’s take a minute to reflect on the terrible consequences of unchecked surveillance.

The Terror Archive

In December 1992, following the trail of a rough map given to him by a whistleblower, the Paraguayan lawyer Martín Almada drove to an obscure police station in the neighborhood of Lambaré, near Asunción. Behind the police offices, in a run-down office building, he discovered a cache of 700,000 documents, piled nearly to the ceiling. This was the “Terror Archive,” an almost complete record of the interrogations, torture, and surveillance carried out in Paraguay by the military dictatorship of Alfredo Stroessner. The files revealed details of “Operation Condor,” a clandestine cooperation program among the military dictatorships of Argentina, Chile, Paraguay, Bolivia, Uruguay, and Brazil during the 1970s and 1980s. The military governments of those countries agreed to cooperate in sending teams across borders to track, monitor, and assassinate their political opponents. The files list more than 50,000 deaths and 400,000 political prisoners across Argentina, Bolivia, Brazil, Chile, Paraguay, Uruguay, Colombia, Peru, and Venezuela.

Stroessner’s secret police used informants, telephoto cameras, and wiretaps to build a paper database on everyone who was seen as a threat, along with their friends and associates. The Terror Archive shows how far a country’s government can fall when it is not constrained by impartial and independent judicial authorities, solid legal safeguards, and oversight bodies open to public scrutiny.

That happened a quarter century ago.

A modern Operation Condor would have at hand tools far more powerful than ring binders, cameras, and tapped phones. Modern surveillance technology leaves the techniques documented in the Terror Archive in the dust.

Twentieth-century surveillance laws were designed for the surveillance of a single phone line, with no guidance on how to apply those rules to our growing arsenal of spying capabilities. Whenever new surveillance or cybersecurity laws are passed, they are written to legitimize powers that are already being exercised, or to widen existing powers, as in the case of data retention laws, which force phone and Internet companies to log and retain even more information about the entire population for surveillance purposes.

Each of these new powers is a time bomb waiting to detonate. The only way to prevent these powers from being used against the public interest is to create modern, robust, and detailed privacy laws that restrict their use; an impartial, independent, and informed judiciary that enforces those limits; and public oversight mechanisms that allow the general public to know what the country’s most secretive agencies are doing in their name.

Unfortunately, legislators and judges in Latin America and beyond have a very limited view of the flaws of existing surveillance laws and of how they might be fixed.

To assist in this arduous task, EFF has worked for more than a year with its partners in Latin America (Red en Defensa de los Derechos Digitales, Fundación Karisma, TEDIC, Centro de Estudios en Libertad de Expresión y Acceso a la Información, Hiperderecho, Derechos Digitales, InternetLab, and Fundación Acceso) to shine a light on the current state of surveillance in the region, both in law and in practice. We have carefully documented existing laws in 13 countries and gathered evidence of the misapplication of those same laws. Our aim is to understand the legal situation in each country and compare it with existing human rights standards.

For this work, we analyzed the publicly available laws and practices of Argentina, Brazil, Chile, Colombia, El Salvador, Guatemala, Honduras, Peru, Mexico, Nicaragua, Paraguay, and Uruguay, and published individual reports documenting the state of communications surveillance in each of these countries. Then, building on that research and other recent publications, we produced a broader report comparing surveillance laws and practices throughout the region. Our project was not limited to legal research, however. We combined our legal and policy work with workshops for activists and human rights lawyers, investigative journalists, and public officials. We explained how surveillance technologies work and how governments must apply international human rights standards to their laws and practices in order to properly limit those legal powers. We also mixed our legal and policy workshops with technical advice on how our partners in the region can protect themselves against state surveillance.

What have we learned?

That given the deeply rooted culture of secrecy surrounding surveillance, it is hard to judge the degree to which states comply with their own legal norms. Ensuring that the law not only complies with human rights standards but also genuinely governs and describes the state’s real-world behavior is an ongoing challenge.

Even so, we identified widespread deficiencies in the region that need special and immediate action.

Here are our recommendations:

  • The culture of secrecy surrounding communications surveillance must stop. We need to ensure that civil society, companies, and policy makers understand the importance of transparency in the context of surveillance, and why transparency reporting from companies and the state is crucial to preventing abuses of power.

  • State officials and civil society must ensure that written norms are translated into consistent practice and that any failure to uphold the law is discovered and remedied. Judicial guidance from impartial, independent, and knowledgeable judges is needed, along with independent public oversight that citizens can audit.

  • States should have dedicated communications surveillance laws rather than a jigsaw puzzle of provisions scattered across various laws. These laws must be necessary, proportionate, and adequate.

  • The region should commit to implementing public oversight mechanisms that are carefully matched in resources and authority to those who wield these powers.

  • Individuals need due process to be respected, and the right to be notified of a surveillance decision with enough time and information to challenge that decision or seek other remedies whenever possible; innocent individuals affected by surveillance need avenues for redress.

We need a strong civil society coalition working on these issues, demanding transparency and accountability from the authorities. With the help of watchful, independent, and informed judges and legislators, we hope that digital technology will be used wisely to protect, not violate, human rights. We must make sure we build a world in which the Terror Archive remains a sad record of past failings, not a low-tech harbinger of an even darker future.

Finally, the problems found in Latin America are not isolated. From the United States to Australia, from France to Argentina, surveillance technology and techniques are on the rise, yet there is little public awareness of the civil liberties implications of these rapid changes. Activists must keep pushing for the implementation of the 13 Principles in order to protect privacy and restrict digital surveillance.

Read our reports and learn about the state of surveillance in Latin America. Join us in defending our rights and those of the future.

Share this: Join EFF
Categories: Aggregated News

Comprehensive Legal Reform Needed to Restrain Widespread Surveillance in Latin America - Mon, 10/10/2016 - 23:15
New Reports Show How Vague Laws Can Pave the Way for Human Rights Violations in the Digital Age

San Francisco - The people of Latin America need comprehensive legal reform to protect themselves from unlawful government surveillance, according to a new series of reports published by the Electronic Frontier Foundation (EFF).

The reports apply the “Necessary and Proportionate” Principles to surveillance practices in twelve different countries in Latin America. The Principles—cooperatively written by privacy organizations and advocates worldwide, and launched three years ago at the 24th Session of the United Nations Human Rights Council—act as guidelines for fair and just government surveillance practices to protect the privacy of people around the world.

The reports, released today in partnership with digital rights organizations across the region, conclude that while every Latin American constitution recognizes a right to privacy and data protection, most countries do not implement those rights in a way that fully complies with international human rights standards.

“Current technology allows governments to easily conduct sophisticated and pervasive digital surveillance of ordinary individuals. But just because they can doesn’t mean that they should,” said EFF International Rights Director Katitza Rodríguez. “New surveillance technologies are in widespread use without any specific authorization or human rights protections in place. Too often, these technologies are cell-site simulators—which intercept cell phone signals by imitating cell towers—or malware, software that is used to harm computer users by disrupting computer operation, gathering sensitive information, or gaining access to private computer systems. At the same time, executive regulations authorizing surveillance or mandating data retention are regularly issued without any public discussion or input. Some of those decisions remain secret, including confidential regulations and decrees. All of these activities violate the Necessary and Proportionate Principles for conducting surveillance within the bounds of human rights law.”

The reports, in both Spanish and English, currently cover eight Latin American countries as well as the United States, and include an overall comparative survey for twelve countries in the region, analyzing whether government surveillance is used only when it is prescribed by law, necessary to achieve a legitimate aim, and proportionate to the aim pursued. Overall, secrecy surrounding tactics and prevalence of surveillance is widespread in Latin America, and many countries have yet to develop a culture of transparency reporting by communications providers. Without this transparency, citizens are unable to hold governments accountable for overuse of surveillance technologies.

“The vast amount of digital communications content we create—and the increasing ease with which it can be collected—means that governments are capable of creating profiles of our lives, including things like medical conditions, political viewpoints, and religious affiliations,” said Rodríguez. “Yet laws throughout Latin America and around the world are often vague and ripe for abuse, and there is too much secrecy about what the governments are doing. These reports are part of our long-term work to reform global communications surveillance until it comports with human rights standards.”

For all the reports:

For our visualization:

For more on the Necessary and Proportionate Principles:

Contact: Katitza Rodríguez, International Rights Director
Share this: Join EFF
Categories: Aggregated News

USA FREEDOM Act Requires Government to Declassify Any Order to Yahoo - Sat, 08/10/2016 - 09:08

In the wake of reports this week that the secretive Foreign Intelligence Surveillance Court (FISC) ordered Yahoo to scan all of its users’ email in 2015, there are many unanswered legal and technical questions about the mass surveillance.

But before we can even begin to answer them, there is a more fundamental question: what does the court order say?

We should be able to answer this question. Section 402 of the USA FREEDOM Act, passed in June 2015, specifically requires government officials to “conduct a declassification review of each decision, order, or opinion issued” by the FISC “that includes a significant construction or interpretation of any provision of law.” The Yahoo order would appear to fall squarely within this provision.

Congress passed Section 402 to end decades of secret FISC-created law after learning that the court was interpreting federal statutes and the U.S. Constitution in secret and without the benefit of any other voices to counter arguments by the Executive Branch.

Both the text of Section 402 and statements from members of Congress who authored and supported it make clear that the law places new, affirmative obligations on the government to go back, review decades of secret orders and opinions, and make the significant ones public. This is exactly what Representative John Conyers of Michigan, the ranking member of the House Judiciary Committee, said (video link): USA FREEDOM required the declassification of all significant FISC opinions.

If the reports about the Yahoo order are accurate – including that the company was required to custom-build new software to accomplish the scanning – it’s hard to imagine a better candidate for declassification and disclosure under Section 402. Given the divergent media reports about what the FISC required Yahoo to do, it is crucial for the public to see the order.

So why isn’t the order public?

The biggest reason is that the Department of Justice has refused to comply with the text of the statute and, as far as we know, has not even begun declassification reviews of any significant FISC opinions issued prior to USA FREEDOM’s passage. DOJ attorneys have argued in litigation with EFF that the statute is not retroactive and thus only requires the government to declassify significant opinions issued after June 2015.

Although we don’t know the exact date of the Yahoo order, the stories indicate that the scanning began sometime in spring 2015, which would be a few months prior to passage of USA FREEDOM. Thus, under DOJ’s view, Section 402 wouldn’t apply to the Yahoo order.

But it is irrelevant that the Yahoo order may have been issued earlier in 2015 before the passage of USA FREEDOM. DOJ’s cramped interpretation of the law conflicts with Congress’ explicit command in Section 402 that the government must review “each” significant FISC opinion, declassify and release it. There is no start date in the text to support the DOJ’s reading.

The DOJ’s view also conflicts with that of one of the chief advocates for greater FISC transparency, Oregon Senator Ron Wyden, who on Friday called for the government to release the Yahoo order because it is required to do so under USA FREEDOM.

EFF is fighting the DOJ’s incorrect interpretation of Section 402 in court right now. We hope that the court will dismiss DOJ’s wrongheaded view of the statute and require it to declassify and release all significant opinions the FISC has issued since its inception, including the one issued to Yahoo.

Releasing the Yahoo opinion will help us begin to answer the bigger questions about the Yahoo order and its dubious constitutionality. Releasing all significant FISC opinions will not only comply with what Congress required under USA FREEDOM, it will help us better understand exactly what the FISC has secretly decided about our civil liberties.

Related Cases: Yahoo's Challenge to the Protect America Act in the Foreign Intelligence Court of Review
Share this: Join EFF
Categories: Aggregated News

Briefing Unsealed in Court Battle Over National Security Letters - Sat, 08/10/2016 - 05:36
EFF Argues that NSL Secrecy Violates First Amendment and Chills Debate on Government Surveillance

San Francisco - An appeals court published redacted briefing by the Electronic Frontier Foundation (EFF) today arguing that national security letters (NSLs) and their accompanying gag orders violate the free speech rights of companies who want to keep their users informed about government surveillance.

EFF represents two service providers in challenging the NSL statutes in front of the United States Court of Appeals for the Ninth Circuit. Most of the proceedings have been sealed since the case began five years ago, but some redacted documents have been released after government approval.

“Just this week we’ve seen Open Whisper Systems—the company behind the Signal messaging service—successfully fight a government gag order attached to a subpoena for customer information. Meanwhile, Yahoo is facing criticism for allowing the government wide-ranging access to its users’ communications,” said EFF Staff Attorney Andrew Crocker. “Our clients want to join this conversation, using their own experiences as a basis to talk about what kind of government surveillance is appropriate and what reform is needed—but NSL gags prevent them from doing so. We’re asking the court to strike down this unconstitutional statute so we can have the robust and inclusive debate that this issue deserves.”

The NSL statutes have been highly controversial since their use was expanded under the USA PATRIOT Act. With an NSL, the FBI—on its own, and without court approval—can issue a secret letter to a communications provider, demanding information about its customers. In this case and nearly all others, the NSL is issued in conjunction with a gag order, preventing the companies from notifying users of the demand or discussing the letter at all. Congress changed some parts of the statute in 2015, but retained the basic elements of the gags. In fact, EFF’s clients still cannot identify themselves publicly or share their experiences as part of the debate over government surveillance of technology services.

“Our clients want to be able to issue accurate transparency reports and talk to their customers about how they try to defend users from overreaching government investigations,” Crocker said. “But instead, the FBI instituted indefinite gag orders to shield its demands for information. This is an unconstitutional restriction of our clients’ First Amendment rights.”

For the full redacted brief:

For more on national security letters:

Contact: Andrew Crocker, Staff Attorney
Share this: Join EFF
Categories: Aggregated News

FCC Helped Create the Stingray Problem, Now it Needs to Fix It - Fri, 07/10/2016 - 10:23
It is long overdue for the FCC to address Stingrays' impact on speech, interference with 911 calls, and invasion of privacy.

EFF recently joined with the American Civil Liberties Union in a petition to the Federal Communications Commission (FCC) in support of a complaint filed against the Baltimore Police Department for its illegal use of a surveillance technology, often called a “Stingray,” that spies on our cell phones by simulating a cellular tower. A dozen U.S. Senators, led by Senators Franken, Leahy, and Wyden, have also recently weighed in with the FCC on the need to investigate the issue, along with any disproportionate impacts on communities of color, who are more dependent on wireless broadband as their only means to communicate. We think the time has come for FCC action, as the grave problems of harmful communications interference, disrupted access to emergency 911 services, and invasions of privacy reach beyond just Baltimore and require a national solution. The airwaves are public property that belongs to all of us, and the FCC manages those airwaves on behalf of the public.

What is the FCC's Role in Addressing the Issues? 

Federal law mandates that every commercial device that emits or transmits electromagnetic signals must be approved by the FCC. From the iPhone to your common router, the FCC has reviewed and approved every wireless commercial product in the United States in order to ensure that the airwaves remain usable by avoiding interference that would make transmitting a clear signal impossible. While this may seem fairly top down, it has prevented many instances of harmful interference in the wireless marketplace.

The FCC's involvement with cell site simulators began years ago, when it first approved commercial sales to law enforcement. Documents disclosed under FOIA show that the company that sells Stingrays had local police departments lobby the FCC for approval close to ten years ago. A common claim, repeated verbatim by different departments, was that cell site simulators would create minimal interference, be rarely used, and interact with phones only briefly. However, law enforcement today is using this surveillance equipment in ways that directly contradict those original assertions to the FCC.

We now know, for example, that police departments use them for hours at a time without a warrant, that officers deploy them to track down people suspected of non-violent crimes like harassing phone calls, and that certain devices do in fact cause significant interference to cell service. The combination of the extraordinary power of these surveillance tools (they can scan hundreds of innocent users' cell phones at once) and the lack of FCC regulations has resulted in explosive growth in their deployment. Outside of the baseline statutory prohibitions against "harmful interference" and the requirement of a license to transmit (which is different from an authorization to sell the device), no FCC rules exist that specifically regulate cell site simulators.

Police today violate these basic statutory protections when using cell site simulators, disrupting the cellular service of many innocent people in the process. Based on publicly available information, it appears that some cell site simulators used today by law enforcement jam LTE and 3G services in order to force phones to downgrade to 2G service, where they are easily exploited due to legacy vulnerabilities. A study by the Royal Canadian Mounted Police also found that 911 call access can be blocked 50 percent of the time when a phone interacts with a cell site simulator. Testing these devices requires technical analysis, but cell site simulators are only legally sold to and owned by law enforcement agencies. Therefore, the FCC, with its legal authority and technical expertise, is the agency best suited to determine what is happening in Baltimore and potentially across the entire country as wireless surveillance by law enforcement continues to proliferate.

In the past, the FCC faced a similar issue when dealing with cell phone signal boosters. Third parties developed mini-towers that would augment wireless signals in areas with poor coverage. Carriers complained that these devices were operating in their exclusive space and disrupting their service. Signal boosters presented the same problem we see today with cell site simulators: interference with communication services and 911 access. The FCC's response should be the same now as it was then: the agency studied the problem and took steps to resolve it in a public forum.

FCC Should Mandate Transparency and Judicial Review for Cell Site Simulators

The sale of police surveillance equipment (often in coordination with federal law enforcement officials) has systematically been shielded from public scrutiny. EFF has spent years trying to break through the obfuscation, with some success, but too many secrets remain. It is time for local communities to have more control over their police. The FCC has the authority to require transparency as a condition of usage. For example, it can require local law enforcement departments to register their intent to purchase and deploy a cell site simulator and thereby provide public notice before the fact. In the few instances where local government has been made aware of the intention of local police to purchase surveillance equipment, public debate followed, and local officials and community members properly had a direct say for or against the expansion of police surveillance.

The time also appears ripe to harmonize basic judicial review requirements for state and local police with policies already adopted by federal agencies. In late 2015, the Department of Justice instructed federal law enforcement agencies to obtain a warrant before using a cell site simulator, in recognition of the constitutional privacy rights of citizens. The Department of Homeland Security followed suit with its own policy mandating that Stingray usage required a warrant. The FCC should apply such a policy to state and local law enforcement entities, too, as a condition of using the public airwaves for surveillance equipment. The FCC can protect the public interest by bringing local and state law enforcement actors in line with federal policy designed to protect citizens' constitutional privacy rights against unreasonable searches.

It is possible that cell site simulators simply will not work in today's crowded wireless market and that law enforcement will have to rely directly on carriers for information about telephones after acquiring the appropriate judicial clearance. Simply put, Americans should not be forced to accept degraded services and law enforcement should not be given a blank check to cause harmful interference. The FCC must act on behalf of the public to begin resolving this problem.

Share this: Join EFF
Categories: Aggregated News

Honoring Visionaries at the 25th Annual Pioneer Awards - Fri, 07/10/2016 - 04:57

Since 1992, EFF’s annual Pioneer Awards celebration has honored those who expanded freedom and innovation on what was dubbed the electronic frontier—a bleeding edge of technology intersecting with the rights of users. Today we understand better than ever that digital privacy and free expression are fundamental elements of democracy and human rights around the world.

This year we recognized the work of four leaders in this space: trailblazing digital rights activist Malkia Cyril, tireless international data protection activist Max Schrems, the groundbreaking encryption researchers who authored “Keys Under Doormats," and champions of California’s CalECPA privacy law Senators Mark Leno (D-San Francisco) and Joel Anderson (R-Alpine).

Julia Angwin kicks off the night. Photo by Alex Schoenfeldt.

Award-winning investigative journalist Angwin opened the event with a keynote speech tracing the rise of online tracking and surveillance technologies. Angwin described her career in tech journalism from an early Wall Street Journal article that was “literally about cookies” to her latest work at ProPublica focusing on the effect of racially biased computer programs used by courts to predict who will be a future criminal. The journalist affirmed her commitment to reframe the conversation about privacy rights into a conversation about human rights.

Lee Tien, EFF Senior Staff Attorney and Adams Chair for Internet Rights, presented the first awards of the evening to California Senators Mark Leno and Joel Anderson. The pair introduced CalECPA—the California Electronic Communications Privacy Act—a landmark law that safeguards privacy and free speech rights. It gave California the strongest digital privacy law in the nation and helps prevent abuses before they happen.

CalECPA champs Senators Mark Leno and Joel Anderson. Photo by Alex Schoenfeldt.

San Francisco Democratic Senator Leno quipped that he and fellow winner Republican Senator Joel Anderson were a legislative “odd couple” yet they accomplished something valuable. “San Diego is as far right as San Francisco is far left,” said Leno. “When we get together people notice and pay attention.” Privacy, he says, is a “rare non-partisan issue in Sacramento,” but the fight to get stronger digital privacy protections for Californians and get the California Electronic Communications Privacy Act passed took years. “We won across the board. We are a formidable team,” Leno said.

Senator Anderson added that while he has opposed bills of Leno’s, he was happy to work with his Democratic colleague to get CalECPA passed. He hoped this year’s award wouldn’t be his last: “I still have 2 more years to get more of my own.”

EFF International Director Danny O’Brien presented the next award to Max Schrems. O’Brien pointed out that Schrems was often described in media reports “with a slight tone of dismissal” as an “Austrian law student. How could he possibly believe he was right” in challenging Facebook and the mighty tech industry? Schrems had the audacity to ask companies, “What are you doing to comply with EU data protection law?” Today Schrems can be rightly described in the media as a “noted jurist and successful litigant,” O’Brien said.

Data protection activist Max Schrems. Photo by Alex Schoenfeldt.

Schrems said the Pioneer Award was “really meaningful” to him “coming from EFF.” “We are all interconnected,” said Schrems, and “we can also use the legal system to effect change.” He remembered reports of his case in Europe noting that he was not backed by a sophisticated legal team and had little chance of prevailing. The rest is history.

EFF Legal Director Corynne McSherry presented the next Pioneer Award to Malkia Cyril. McSherry described first meeting Cyril, founder and executive director of the Center for Media Justice, at a rally organized at San Francisco City Hall “in the thick of the fight for net neutrality.” She described how Cyril continued a moving campaign joining communities and “explaining how the fight for net neutrality relates to the fight for racial and economic justice, and vice versa.” McSherry praised Cyril’s voice on varied issues including challenging surveillance and advocating for prisoners’ rights, and for being “equally dedicated to helping others speak.”

Malkia Cyril: “It’s not time to go slow—there are times for that, but that’s not this time.” Photo by Alex Schoenfeldt.

When Cyril got off a plane after a recent vacation, their phone lit up with the latest reports about young black men shot by police. “That’s what my life is like, who else has gotten killed today,” Cyril said. They went on to describe the passion to “dismantle the structure that views my blackness as a crime,” spoke of people of color being “legally enslaved through” gang databases, and criticized high-tech policing, including so-called predictive policing systems, which “succeed in only one thing: systematic discrimination against communities of color. That’s wrong. It’s up to us to make that change.” Cyril called for “dismantling these programs” and “abolishing the surveillance state.”

Cindy Cohn, EFF Executive Director, presented the final award of the evening to the authors of the 2015 paper “Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data And Communications.”

Cohn remarked that the oft-referenced report helped frame the second round of the so-called Crypto Wars. “The report clearly and cleanly lays out what’s at stake for all of us in the FBI’s push for limits on encryption, bridging from the technical problems in a world where forward secrecy and authenticated encryption are increasingly needed to protect us, to the procedural ones in our international, interconnected world where countries with horrible human rights records and local sheriffs will demand the same access as the FBI.”

Susan Landau on behalf of "Keys Under Doormats" authors. Photo by Alex Schoenfeldt.

Susan Landau spoke on behalf of the 15 co-awardees:

  • Harold Abelson, MIT
  • Ross Anderson, University of Cambridge
  • Steven M. Bellovin, Columbia University
  • Josh Benaloh, Microsoft Research
  • Matt Blaze, University of Pennsylvania
  • Whitfield Diffie, who in 1975 helped discover the concept of public-key cryptography
  • John Gilmore, co-founder of the Electronic Frontier Foundation
  • Matthew Green, Johns Hopkins University
  • Peter G. Neumann, SRI International
  • Susan Landau, Worcester Polytechnic Institute
  • Ronald L. Rivest, MIT
  • Jeffrey I. Schiller, Internet Engineering Steering Group Area Director for Security from 1994 to 2003
  • Bruce Schneier, EFF Board member and fellow at the Berkman Center for Internet and Society, Harvard University
  • Michael A. Specter, Ph.D. candidate at MIT
  • Daniel J. Weitzner, MIT; United States Deputy Chief Technology Officer in the White House (2011-2012)

Landau recalled the group's 1997 report responding to the Clipper Chip proposal and why a review of recent law enforcement efforts at exceptional access to private communications was necessary. The team of experts concluded, Landau said at the event, that the kinds of access the FBI demanded would break authenticated encryption and could also break forward secrecy (which prevents a key exposure from enabling the decryption of all previously encrypted materials). The government’s requests would compromise “fundamental security protections.”

Landau said that beginning work on the encryption debate 25 years ago “shaped the rest of my life for good.” She recalled the morning that the Snowden documents were leaked. She received an email from Julia Angwin and fielded questions all day from reporters, too busy to change from her pajamas. Technical expertise and research had become crucial in the debate about government spying.

EFF realized that “real people who were living real lives” cared about freedom and liberty, and recognized that the fight about encryption is, “at its core,” about freedom and liberty.

Landau had a parting message for EFF: “Just keep doing the fabulous work you’re doing.”

The 2016 Pioneer Award winners with EFF Executive Director Cindy Cohn. Photo by Alex Schoenfeldt.

The Pioneer Awards provide an opportunity to look back at who is ensuring that the rights to privacy and free expression can stay in harmony with our changing methods of interaction and communication. We thank every EFF member and supporter for being a part of this growing movement. With each passing year, it becomes more evident that our collective work is fundamental to protecting basic freedoms and we are not yet done.

Special thanks to Airbnb, Dropbox, O'Reilly Media, No Starch Press, Adobe, and Ron Reed for supporting EFF and the 2016 Pioneer Awards ceremony.

Share this: Join EFF
Categories: Aggregated News


