Aggregated News

A Bad Broadband Market Begs for Net Neutrality Protections - Sat, 27/05/2017 - 11:15

Anyone who has spent hours on the phone with their cable company can tell you that in the broadband market, the customer is not always right.

When it comes to Internet access wired into your home, the major ISPs like Comcast, Charter, and Verizon don’t have to play nice because they know that most customers aren’t able to switch to another provider.

Thanks to policies at the federal, state, and local levels, as well as some careful planning by the major ISPs, there is no meaningful competition in the broadband market in most parts of the country. Instead, consumers are stuck with government-backed monopolistic ISPs that can get away with anti-consumer business practices.

Luckily, the FCC has laid down some basic net neutrality protections to keep ISPs from completely controlling what you can do online. The basic idea behind those protections is that your ISP shouldn’t be able to block or slow your access to certain websites or online services. Under the bright-line rules passed by the FCC in 2015, ISPs can’t provide faster or slower access to certain websites and services based on whether those sites and services are willing to pay.

These rules keep the Internet open so that consumers can go where they want online, including to new websites and services that don’t have the deep pockets to pay for fast lanes to reach users.

But those rules are under threat after the new FCC Chairman Ajit Pai started a process earlier this month to roll back the protections approved in 2015. Write to the FCC and tell the agency not to undo the net neutrality rules currently in place.

Those advocating for Pai’s rollback often accuse the FCC of overreaching in 2015 and applying unnecessary regulation on the broadband market. But that argument ignores the unique lack of competition in the broadband market.

According to the FCC’s 2016 data, 51 percent of Americans have access to only one provider of high-speed Internet access. That means slightly more than half of the country has no other option for high-speed Internet if they don’t like something their ISP is doing. Only 38 percent of Americans have access to more than one ISP, and the remaining 10 percent don’t have access to high-speed Internet at all. The map below shows which parts of the country have access to two or more options for broadband, based on the FCC's data.

The areas highlighted on this map have two or more high-speed Internet providers, according to the FCC's 2016 data.

Even in places where there are multiple high-speed Internet providers, the markets are often carefully carved up so that there’s little to no overlap between competitors. Data from 2014—the year that the government stopped updating its National Broadband Map, which marked which ISPs operate where—and earlier clearly shows that ISPs have monopoly or duopoly control over wide swaths of the country.

While there are some naturally occurring reasons for the concentration in the broadband market (it’s expensive and time-consuming to lay the initial groundwork for a broadband network, so incumbents automatically have a big advantage over newcomers to a market), intervention by the federal government has made it worse.

As we wrote in a brief the last time the FCC was sued over its net neutrality rules, the large incumbent ISPs received subsidies to build their networks in the first place, and those ISPs get to access phone companies’ infrastructure at preferential rates set by the government. “Today’s … market is inseparable from the government policies that enabled, and continue to enable, its existence,” we said.

On top of that, state governments often work to protect those incumbents. Several states have laws on the books that create barriers for local governments that want to build broadband infrastructure to give their residents more ISP options.

Even if you’re lucky enough to have more than one option for high-speed Internet where you live, switching ISPs is usually an expensive hassle. You often have to go through the time-consuming process of cancelling your account and returning equipment, as well as pay early-termination and equipment-rental fees, as then-FCC Chairman Tom Wheeler pointed out in 2014.

All of this adds up to the fact that most Americans are stuck with their ISP, meaning ISPs have no incentive to respect their customers’ wishes, including when it comes to net neutrality and treating online content equally. That’s why the FCC shouldn’t roll back its open Internet rules. Tell the FCC to keep its clear, bright-line net neutrality protections in place.

Take Action

Categories: Aggregated News

Six Things Trump's FCC Chairman Doesn't Want You to Know About Net Neutrality - Sat, 27/05/2017 - 00:18
Timothy Karr | May 26, 2017

As the Trump FCC moves forward with a misinformation campaign about the Net Neutrality rules, it’s worth highlighting the six things its chairman doesn’t want you to know.
Categories: Aggregated News

Diego Gomez Finally Cleared of Criminal Charges for Sharing Research - Thu, 25/05/2017 - 08:06

In 2011, Colombian graduate student Diego Gomez shared another student’s Master’s thesis with colleagues over the Internet. After a long legal battle, Diego was able to breathe a sigh of relief today as he was cleared of the criminal charges that he faced for this harmless act of sharing scholarly research.

Since Diego was first brought to trial, thousands of you have shown your support for him via our online petition. The petition’s message is simple: open access should be the international default for scholarly publication.

That’s true, but Diego’s story also demonstrates what can go wrong when nations enact severe penalties for copyright infringement. Even if all academic research were published freely and openly, researchers would still need to use and share copyrighted materials for educational purposes. With severe prison sentences on the line for copyright infringement, freedom of expression and intellectual freedom suffer.

Diego’s story demonstrates what can go wrong when nations enact severe penalties for copyright infringement.

Diego’s story also serves as a cautionary tale of what can happen when copyright law is broadened through international agreements. The law Diego was prosecuted under was enacted as part of a trade agreement with the United States. But as is often the case when trade agreements are used to expand copyright law, the agreement only exported the U.S.’ extreme criminal penalties; it didn’t export our broad fair use provisions. When copyright law becomes more restrictive with no account for freedom of expression, people like Diego suffer.

Diego was lucky to have the tireless support of local NGO Fundación Karisma, and allies around the world such as EFF, who brought global attention to the injustice of the criminal accusations against him. However, the prosecutor in the case has appealed the verdict, leaving Diego with possible liability continuing to hang over his head for an undetermined time to come.

There are also many other silent victims of overzealous copyright enforcement, including those who are constrained from performing useful research, who shut down websites that come under unfair attack, and who shy away from sharing with colleagues for fear of being targeted with civil or criminal charges.

Please join us today in standing up for open access, standing up for fair copyright law, and standing with Diego.

Take Action

Support Open Access Worldwide

Categories: Aggregated News

Book Review: The End of Ownership - Thu, 25/05/2017 - 07:56
In the digital age, a lot depends on whether we actually own our stuff, and who gets to decide that in the first place.

In The End of Ownership: Personal Property in the Digital Age, Aaron Perzanowski and Jason Schultz walk us through a detailed and highly readable explanation of exactly how we’re losing our rights to own and control our media and devices, and what’s at stake for us individually and as a society. The authors carefully trace the technological and legal changes that have, they argue, eroded our rights to do as we please with our stuff. Among these changes are the shift toward cloud distribution and subscription models, expanding copyright and patent laws, Digital Rights Management (DRM), and the use of End User License Agreements (EULAs) to assert that all content is “licensed” rather than “owned.” And Perzanowski and Schultz present compelling evidence that many of us are unaware of what we’re giving up when we “buy” digital goods.

Ownership, as the authors explain, provides a lot of benefits. Most importantly, ownership of our stuff supports our individual autonomy, defined by the authors as our “sense of self-direction, that our behavior reflects our own preferences and choices rather than the dictates of some external authority.” It lets us choose what we do with the stuff that we buy – we can keep it, lend it, resell it, repair it, give it away, or modify it, without seeking anyone’s permission. Those rights have broader implications for society as a whole – when we can resell our stuff, we enable secondary and resale markets that help disseminate knowledge and technology, support intellectual privacy, and promote competition and user innovation. And they’re critical to the ability of libraries and archives to serve their missions – when a library owns the books or media in its collection, it can lend those books and media almost without restriction, and it generally will do so in a way that safeguards the intellectual privacy of its users.

These rights, long established for personal property, are safeguarded in part by copyright law’s “exhaustion doctrine.” As the authors make clear, that doctrine, which holds that some of a copyright holder’s rights to control what happens to a copy are “exhausted” when the holder sells that copy, is a necessary feature of copyright law’s effort to limit the powers granted to copyright holders, so that overbroad restrictions do not undermine the intended benefit to the public as a whole.

Throughout the book, Perzanowski and Schultz present a historical account of rights holders’ attempts to overcome exhaustion and exert more control over what people do with their media and devices. The authors describe book publishers’ hostile, “fearful” response to lending libraries in the 1930s:

…a group of publishers hired PR pioneer Edward Bernays….to fight against used “dollar books” and the practice of book lending. Bernays decided to run a contest to “look for a pejorative word for the book borrower, the wretch who raised hell with book sales and deprived authors of earned royalties.”…Suggested names included “bookweevil,”…”libracide,” “booklooter,” “bookbum,” “culture vulture,” … with the winning entry being “booksneak.”

Publishers weren’t alone: the authors show that both record labels and Hollywood studios fought against the rise of secondary markets for music and home video rental, respectively. Hollywood fought a particularly aggressive battle against the VCR. In the end, the authors note, Hollywood continued to “resist[] the home video market,” at least until it gained more control over the distribution technology.

But while historically, overzealous rights holders may have been stymied to some extent by the law’s limitation of their rights, recent technological changes have made their quest a lot easier.

“In a little more than a decade,” the authors explain, we’ve seen dramatic changes in content distribution, from tangible copies, to digital downloads, to the cloud, and now, increasingly, to subscription services. These technological changes have precipitated corresponding changes in our ability to own the works in our libraries. While, as the authors explain, copyright law has long relied on the existence of a physical copy to draw the lines between rights holders’ and copy owners’ respective rights, “[e]ach of these shifts in distribution technology has taken us another step away from the copy-centric vision at the heart of copyright law.” Unfortunately, the law hasn’t kept up: “Even as copies escape our possession and disappear from our experience, copyright law continues to insist that without them, we only have the rights copyright holders are kind enough to grant us.”

Perzanowski and Schultz point to End User License Agreements (EULAs), with their excessive length, one-sided, take-it-or-leave-it nature, complicated legalese, and relentless insistence that what you buy is only “licensed” to you (not “owned”), as a main culprit behind the decline of ownership.  They provide some pretty standout examples – including EULAs that exceed the lengths of classic works of literature, and those that claim to prevent a startling array of activity. For the authors, these EULAs

. . . create private regulatory schemes that impose all manner of obligations and restrictions, often without meaningful notice, much less assent. And in the process, licenses effectively rewrite the balance between creators and the public that our IP laws are meant to maintain. They are an effort to redefine sales, which transfer ownership to the buyer, as something more like conditional grants of access.

And unfortunately, despite their departure from some of contract law’s core principles, some courts have permitted their enforcement, “so long as the license recites the proper incantations.”

The authors are at their most poetic in their criticism of Digital Rights Management (DRM) and Section 1201 of the DMCA, perhaps the worst scourges of ownership in the book. As they point out, even in the absence of restrictive EULA terms, DRM embeds rights holders’ control directly into our technologies themselves – in our cars, our toys, our insulin pumps and heart monitors. Comparing it to Ray Bradbury’s Fahrenheit 451, they explain:

While not nearly as dramatic as flamethrowers and fighting robot dogs, the unilateral right to enforce such restrictions through DRM exerts many of the types of social control Bradbury feared. Reading, listening, and watching become contingent and surveilled. That system dramatically shifts power and autonomy away from individuals in favor of retailers and rights holders, allowing for enforcement without anything approaching due process.

As Perzanowski and Schultz explain, these shifts aren’t just about our relationship to our stuff. They recalibrate the relationship between rights holders and consumers on a broad scale:

When we say that personal property rights are being eroded or eliminated in the digital marketplace, we mean that rights to use, to control, to keep, and to transfer purchases – physical and digital – are being plucked from the bundle of rights purchasers have historically enjoyed and given instead to IP rights holders. That in turn means that those rights holders are given greater control over how each of us consume media, use our devices, interact with our friends and family, spend our money, and live our lives. Cast in these terms, it is clear that there is a looming conflict between the respective rights of consumers and IP rights holders.

The authors repeatedly remind us that who decides what is owned and what is licensed is crucial – on both the individual and societal scale. When we allow companies to define when we can own our stuff, through EULAs or Digital Rights Management, we shift crucially important decisions about how our society should work away from legislatures, courts, and public processes, to private entities with little incentive to serve our interests. And when we don’t know exactly what we give up when we “buy” digital goods, we’re not making an informed choice. Further, when we opt for mere access over ownership, our choices have broader societal effects. The more we shift to licensing and subscription models, the harder it may become for those who would rather own their stuff to exercise that option – stores close, companies shift distribution models, and some works disappear from the market.

In the end, Perzanowski and Schultz leave us with a thread of hope that we still might see a future for ownership of digital goods. They believe that at least some courts and policy makers, and “[p]erhaps more importantly, readers, listeners, and tinkerers – ordinary people – are expressing their own reluctance to accept ownership as an artifact of some bygone predigital era.” And they provide a set of arguments and reform proposals to marshal in the fight to save ownership before it’s too late. They lay out an array of technological and legal strategies to reduce deceptive practices, curb abusive EULAs, and reform copyright law. The most thoroughly developed of these proposes a legislative restructuring of copyright exhaustion in a flexible, multi-factor format, modeled in part on the United States’ fair use doctrine. It’s a good idea, and it would probably work. But (and the authors acknowledge this) even modest attempts at reform have failed to garner the necessary support in Congress to move forward. A more ambitious proposal like this one seems unlikely, at least in the near term.

Overall, The End of Ownership is a deeply concerning exposition of how we’re losing valuable rights. The questions it raises about whether and how we can preserve the benefits of ownership in the digital age will likely continue to be relevant even as technology, and the law, evolve. Most critically, it asks us to rethink who we want making the decisions that shape how we live our lives. While the book tackles complex issues in law and technology, it does so in a way that’s accessible and interesting for lawyers and laypersons alike. The book’s ample real-world examples of everything from disappearing e-book libraries to tractors, dolls, and medical devices resistant to their owners’ control bring home both the impact of abstract legal doctrines and the urgency of their reform.

To learn about some of EFF’s efforts to protect your rights of ownership and autonomy, visit EFF’s website.

Categories: Aggregated News

Congress’ Imperfect Start to Addressing Vulnerabilities - Thu, 25/05/2017 - 03:18

With the global and debilitating WannaCry ransomware attack dominating the news in recent weeks, it’s increasingly necessary to have a serious policy debate about disclosure and patching of vulnerabilities in hardware and software.

Although WannaCry takes advantage of a complex and collective failure in protecting key computer systems, it’s relevant to ask what the government’s role should be when it learns about new vulnerabilities. At EFF, we’ve been pushing for more transparency around the decisions the government makes to retain vulnerabilities and exploit them for “offensive purposes.”

Now, some members of Congress are taking steps toward addressing these decisions with the proposal of the Protecting Our Ability to Counter Hacking—or PATCH—Act (S.1157). The bill, introduced last week by Sens. Ron Johnson, Cory Gardner, and Brian Schatz and Reps. Blake Farenthold and Ted Lieu, is aimed at strengthening the government’s existing process for deciding whether to disclose previously unknown technological vulnerabilities it finds and uses, called the “Vulnerabilities Equities Process” (VEP).

The PATCH Act seeks to do that by establishing a board of government representatives from the intelligence community as well as more defensive-minded agencies like the Departments of Homeland Security and Commerce. The bill tasks the board with creating a new process to review and, in some cases, disclose vulnerabilities the government learns about.

The PATCH Act is a good first step in shedding some light on the VEP, but, as currently written, it has some shortcomings that would make it ineffective in stopping the kind of security failures that ultimately lead to events like the WannaCry ransomware attack. If lawmakers really want to deal with the dangers of the government holding on to vulnerabilities, the VEP must apply to classified vulnerabilities that have been leaked.

The VEP was established in 2010 by the Obama administration and was intended to require government agencies to collectively weigh the costs and benefits of disclosing these vulnerabilities to outside parties like software vendors instead of holding onto them to use for spying and law enforcement purposes.

Unfortunately, after EFF fought a long FOIA battle to obtain a copy of the written VEP policy document, we learned that it went largely unused. In the meantime, agencies like the NSA and CIA suffered major thefts of their often incredibly powerful tools. In particular, the 2016 Shadow Brokers leak enabled outsiders to later develop the WannaCry ransomware using an NSA tool that the agency likened to “fishing with dynamite.”

Lawmakers should be commended for trying to codify and expand the existing process to ensure that the government is adequately considering these risks, and the PATCH Act is a welcome first step.

But there are two areas in particular where it needs to go further.

First, as described above, the current bill seems to overlook situations where the government loses control of vulnerabilities that it has decided to retain. As we’ve seen with the Shadow Brokers leaks, this is a very real possibility, one which even kept the NSA up at night, according to the Washington Post. Yet the PATCH Act specifically states that a classified vulnerability will not be considered “publicly known” if it has been “inappropriately released to the public.” That means that a stolen NSA tool can be circulating widely among third parties without triggering any sort of mandatory reconsideration of disclosure to a vendor to issue a patch. While it might be argued that other provisions of the bill implicitly account for this scenario, we’d like to see it addressed explicitly.

In addition to overlooking situations like the WannaCry ransomware attack, the bill excludes cases where the government never actually acquires information about a vulnerability and instead contracts with a third party for a “black box” exploit.

For example, in the San Bernardino case, the FBI reportedly paid a contractor a large sum of money to unlock an iPhone without ever learning details of how the exploit worked. Right now, the government apparently believes it can contract around the VEP in this way. This raises concerns about the government’s ability to adequately assess the risks of using these vulnerabilities, which is why a report written by former members of the National Security Council recommended prohibiting non-disclosure agreements with third parties entirely. At the very least, we’d like to see the bill bring more transparency to the use of vulnerabilities even when the government itself doesn’t acquire knowledge of the vulnerability.

We hope to see the bill’s authors address these concerns as it moves forward to ensure that all of the vulnerabilities known to the government are reviewed and, where appropriate, disclosed.

Related Cases: EFF v. NSA, ODNI - Vulnerabilities FOIA
Categories: Aggregated News

TPP Comes Back From the Dead... Or Does It? - Thu, 25/05/2017 - 02:32

Could the Trans-Pacific Partnership (TPP) be coming back from the dead? It is at least a possibility, following the release of a carefully-worded statement last Sunday from an APEC Ministerial meeting in Vietnam. The statement records the agreement of the eleven remaining partners of the TPP, aside from the United States which withdrew in January, to "launch a process to assess options to bring the comprehensive, high quality Agreement into force." This assessment is to be completed by November this year, when a further APEC meeting in Vietnam is to be held.

We do know, however, that not all of the eleven countries are unified in their view about how the agreement could be brought into force. In particular, countries like Malaysia and Vietnam would like to see revisions to the treaty before they could accept a deal without the United States. This is hardly an unreasonable position, since it was the United States that pushed those countries to accept provisions such as an unreasonably long life-plus-70-year copyright term, which is to no other country's benefit.

Other TPP countries, such as Japan and New Zealand, are keen to bring the deal into force without any renegotiation, which could add years of further delay to the treaty's completion. Japan also likely fears losing some of the controversial rules that it had pushed for, such as the ban on software source code audits. The country's Trade Minister, Hiroshige Seko, has been quoted as saying, "No agreement other than TPP goes so far into digital trade, intellectual property and improving customs procedures."

For now, that remains true; many of the TPP's digital rules are indeed extreme and untested. But for how much longer? Industry lobbyists are pushing for the same digital trade rules to be included in Asia's Regional Comprehensive Economic Partnership (RCEP) and in a renegotiated version of the North American Free Trade Agreement (NAFTA). Since RCEP and NAFTA together cover most of the same countries as the TPP, there will be little other rationale for the TPP to exist if lobbyists succeed in replicating its rules in those other deals. 

Free Trade Rules that Benefit Users

It's worth stressing that EFF is not against free trade. If trade agreements could be used to serve users rather than to make their lives more difficult, EFF could accept or even actively support certain trade rules. For example, last week the Re:Create Coalition, of which EFF is a member, issued a statement explaining how the inclusion of fair use in trade agreements would make them more balanced than they are now. The complete statement, issued by Re:Create's Executive Director Joshua Lamel, says:

If NAFTA is renegotiated and if it includes a chapter on copyright, that chapter must have mandatory language on copyright limitations and exceptions, including fair use. The United States cannot export one-sided enforcement provisions of copyright law without their equally important partner under U.S. law: fair use.

The U.S. should also take further steps to open up and demystify its trade policy-making processes, not only to Congress but also to the public at large, by publishing text proposals and consolidated drafts throughout the negotiation of trade agreements.

The last paragraph of this statement is key: we can't trust that trade agreements will reflect users' interests unless users have a voice in their development. Whether the TPP comes back into force or not, the insistence of trade negotiators on a model of secretive, back-room policymaking will lead to the same flawed rules popping up in other agreements, to the benefit of large corporations and the detriment of ordinary users.

At this point we have no faith that the TPP would be reopened for negotiation in a way that is inclusive, transparent and balanced, and we maintain our outright opposition to the deal. RCEP is being negotiated in an equally closed process, though we are continuing to lobby negotiators about our concerns with that agreement's IP and Electronic Commerce chapters. As for NAFTA, we are urging the USTR to heed our recommendations for reform of the office's practices before negotiations commence.

The death of the TPP didn't mark the end of EFF's work on trade negotiations and digital rights, and its reanimation won't change our course either. No matter where the future of digital trade rules lies, our approach remains the same: advocating for users' rights, and fighting for the reform of closed and captured processes. Until our concerns are heard and addressed, trade negotiators can be assured that regulating users' digital lives through trade agreements isn't going to get any easier.

Categories: Aggregated News

No Evidence that "Stronger" Patents Will Mean More Innovation - Thu, 25/05/2017 - 02:27
Push to once again allow abstract patents is misguided

Right now, the patent lobby—in the form of the Intellectual Property Owners Association and the American Intellectual Property Law Association—is demanding “stronger” patent laws. They want to undo Alice v. CLS Bank and return us to a world where “do it on a computer” ideas are eligible for a patent. This would help lawyers file more patent applications and bring more patent litigation. But there’s no evidence that such laws would benefit the public or innovation at all.

One of the primary justifications we hear for why patents are social goods is that they encourage innovation. Specifically, the argument goes, patents incentivize companies and individuals to invest in costly research and development that they would not otherwise invest in because they know they will be able to later charge supracompetitive prices and recoup the costs of that development.

Those who want "stronger" patents (i.e., patents that are easier to get and/or harder to invalidate) often use this rationale to justify changing patent laws to make patents more enforceable. For example, a former judge on the Court of Appeals for the Federal Circuit recently suggested that "America is in danger because we have strangled our innovation system" by making it easier to challenge patents and show they never should have been granted. As another example, the Chief Patent Counsel at IBM argued that "The U.S. leads the software industry, but reductions in U.S. innovation prompted by uncertain patent eligibility criteria threaten our leadership" because "Patents promote innovation."

These arguments all presume that "stronger" patents mean more research and development dollars and thus more innovation. They also presume that if the U.S. doesn't provide "stronger" patents, innovation will go elsewhere.

But reality is much more complex. As one recent paper put it: "there is little evidence that stronger patent laws result in increases in [research and development] investments," at least if the yardstick is patent filings.  Indeed, "we still have essentially no credible empirical evidence on the seemingly simple question of whether stronger patent rights – either longer patent terms or broader patent rights – encourage research investments into developing new technologies."

There are good reasons to think "stronger" patents do not actually spur innovation. Patents are a double-edged sword. Although they may provide some incentive to innovate (even that premise is unclear), they also create barriers to further innovation. A patent can prevent the development of follow-on innovation until it expires, delaying advances that would otherwise have occurred but for the grant of an artificial, government-backed monopoly.

The problem of patents impeding future innovation is exacerbated in software, where the life cycle is relatively short and innovation tends to move quickly. When a patent lasts for 20 years, software patents—especially broad and abstract software patents—have the potential to significantly delay the introduction of new innovations to the market.

Despite no "credible empirical evidence" that recent changes to patent law, including the limits on patentable subject matter reaffirmed by the U.S. Supreme Court in Alice, have done any harm to the innovation economy or to innovation generally, some patent owners have been lobbying Congress to legislate the case away. But doing so would allow patents on abstract ideas, and risks exacerbating the deadweight loss caused by too much patenting. The proposals are not minor changes. For example, if enacted they would mean that anything is patentable so long as it doesn't "exist solely in the human mind." In other words, "do it on a computer" ideas would once again be fair game. Absent any evidence that this would mean more innovation, the recent reform proposals seem like little more than a bid by lawyers to create work for themselves.

Those rushing to ratchet up patent rights are doing so with little to no empirical basis that any such change is necessary, and it may actually end up harming the innovation economy. Congress should think twice before changing patent law so as to make patents even "stronger." 

Categories: Aggregated News

Wikimedia's Constitutional Challenges of NSA Upstream Surveillance Move Forward - Wed, 24/05/2017 - 09:00

A court ruling today allowing Wikimedia’s claims challenging the constitutionality of NSA’s Upstream surveillance to go forward is good news. It shows that the court—the U.S. Court of Appeals for the Fourth Circuit—is willing to take seriously the impact mass surveillance of the Internet backbone has on ordinary people. Wikimedia's First and Fourth Amendment challenges will move on to the next phase of the case, Wikimedia Foundation v. NSA.

The news isn't all good: we disagree with the court's decision to dismiss Wikimedia's other dragnet collection claims, and think the dissent got it right. In Jewel v. NSA, EFF's landmark lawsuit challenging NSA surveillance, the Ninth Circuit Court of Appeals has already ruled that our claims pass initial review. The trial court presiding over the case just last week required the government to comply with our request to provide information about the scope of the mass surveillance. Jewel v. NSA includes specific evidence of a backbone tapping location on Folsom Street in San Francisco presented by former AT&T employee Mark Klein. This level of detail and description is enough for our claims to move forward even with the Fourth Circuit’s ruling.

Related Cases: Wikimedia v. NSA, Jewel v. NSA
Categories: Aggregated News

Addressing Delays in Message Delivery via EFF's Action Center and Messaging Tools - Wed, 24/05/2017 - 08:39

EFF has identified and addressed the delivery problem, and we extend our deep apologies for the delays to digital activists who use our tools.

We recently became aware that there were significant delays in delivering some of the messages sent to Congress via two of EFF’s open-source messaging tools: our standalone congressional messaging tool and the EFF Action Center. While we have now addressed the problem, we wanted to be transparent with the community about what happened and the steps we’ve taken to fix it.

The EFF Action Center is a tool people can use to speak out in defense of digital liberty using text prompts from EFF, including letters to Congress that users can edit and customize. Our standalone messaging tool, a free tool we built for the world on the same technical backend as the Action Center, lets users send messages to their members of Congress on any topic, with as few clicks as possible. The errors we experienced only impacted letters (not petitions, tweet campaigns, or call campaigns) to a number of Representatives and a handful of Senators. We sincerely apologize to everyone who was affected by this delay.

The issue sprang from the way in which our tools handled CAPTCHAs, a type of service that website owners use to verify that a given user is a human and not a bot. Our tools work by filling out contact forms on individual congressional websites on behalf of users. When our tool bumps into a CAPTCHA, it takes a snapshot, returns it to the user, and lets the user give the correct answer to finish filling out the form. Since all of our messages to Congress are submitted by real people, this worked fine for traditional CAPTCHAs. However, a percentage of Congress members had begun using a more complicated type of CAPTCHA known as reCAPTCHA, which was beyond the technical abilities of our system.
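To make the failure mode concrete, here is a minimal, hypothetical sketch of the relay flow described above (the function and field names are ours, not EFF's actual code): a traditional image CAPTCHA can be snapshotted and forwarded to the human user, but a JavaScript-driven reCAPTCHA exposes no static image to forward, so the relay has nothing to pass along.

```python
def submit_letter(form_fields, captcha_image=None, ask_user=None):
    """Fill a congressional contact form on a user's behalf.

    form_fields   -- dict of values for the member's web form
    captcha_image -- bytes of a traditional image CAPTCHA, if the form has one
    ask_user      -- callback that shows the image and returns the user's answer

    Returns the payload that would be POSTed to the congressional site.
    """
    payload = dict(form_fields)
    if captcha_image is not None:
        if ask_user is None:
            # A script-driven reCAPTCHA exposes no static image we can
            # forward to the user -- the failure described above.
            raise RuntimeError("cannot relay this CAPTCHA to the user")
        payload["captcha_answer"] = ask_user(captcha_image)
    return payload

# A traditional CAPTCHA round-trips through the human user:
payload = submit_letter({"name": "Jane Doe"}, b"\x89PNG...", lambda img: "XK42")
```

Since every message is submitted by a real person, the `ask_user` callback is always available for image CAPTCHAs; it is the forms without a forwardable image where this design breaks down.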

Compounding the problem, we had made some fundamental changes to our error-logging system. As a result, the engineers who staff and maintain these tools stopped receiving notifications of delivery errors, so we unfortunately missed the fact that a portion of messages were failing.

Some messages are undeliverable due to user data errors, legislators leaving office, or other irresolvable issues. However, we have now successfully re-sent nearly all the deliverable messages that had been delayed in our system. A very small percentage of messages are still pending, but we will be delivering them over the next few weeks.

In addition to delivering the delayed messages, we’ve made some key infrastructure changes to help prevent problems like this from arising in the future and to mitigate the impact of any issues that do arise. First, we integrated an experimental delivery API for the House of Representatives called Communicating with Congress. This integration has resolved the reCAPTCHA problems we were facing in the House of Representatives. In addition, when someone tries to send a message to one of the few Senators whose forms we cannot complete, we’ll notify the user in real time and provide a link to the Senator’s website so the user can send a message directly. Finally, we’ve improved our error-logging process so that if another significant delay happens in the future, we’ll know about it right away.

It’s unfortunate and frustrating that many members of Congress have placed digital hurdles on constituent communications. In a more perfect democracy, we think it would be easy for constituents to simply send an email to their members of Congress and be assured that the message was received and counted. Instead, each member of Congress adopts their own form, many of them requiring users to provide information like titles, exact street address, topic areas, etc. Users who want to email their Congress members may have to hunt down and complete forms on three different websites, and they may inadvertently end up on the wrong site.

We believe that the voices of technology users should echo loudly in the halls of Congress and that timely and personal communication from constituents is vital to holding our elected officials to account. That’s why we built these tools for both the EFF community and wider world. We’re committed to continuing to improve the process of communicating with Congress, both for EFF friends speaking out in defense of digital rights and for the general public. We hope one day Congress will make it easier for constituents to reach them. Until then, we’ll do our best to help tech users find a powerful voice. We are sorry that in this instance we fell short of our goal.

Categories: Aggregated News

Court Orders Government To Provide More Information About Withheld Information in Laura Poitras’ FOIA Lawsuit - Wed, 24/05/2017 - 02:50

Laura Poitras—the Academy Award- and Pulitzer Prize-winning documentary filmmaker and journalist behind CITIZENFOUR and Risk—wants to know why she was stopped and detained at the U.S. border every time she entered the country between July 2006 and June 2012. EFF is representing Poitras in a Freedom of Information Act (FOIA) lawsuit aimed at answering this question. Since we filed the complaint in July 2015, the government has turned over hundreds of pages of highly redacted records, but it has failed to provide the particular justification for each withholding—as it is required to do. In March, in a win for transparency, a federal judge called foul and ordered the government to explain with particularity its rationale for withholding each document.


Poitras travels frequently for her work on documentary films. Between July 2006 and June 2012, she was routinely subject to heightened security screenings at airports around the world and stopped and detained at the U.S. border every time she entered the country—despite the fact that she is a law-abiding U.S. citizen. She’s had her laptop, camera, mobile phone, and reporter notebooks seized, and their contents copied. She was also once threatened with handcuffs for taking notes. (The border agents said her pen could be used as a weapon.) No charges were ever brought against her, and she was never given any explanation for why she was continually subjected to such treatment.

In 2014, Poitras sent FOIA requests to multiple federal agencies for any and all records naming or relating to her, including case files, surveillance records, and counterterrorism documents. But the agencies either said they had no records or simply didn’t respond. The FBI, after not responding to Poitras’ request for a year, said in May 2015 that it had located a mere six pages of relevant material but that it was withholding all six because of grand jury secrecy rules.

With EFF’s help, Poitras ultimately filed a lawsuit against the Department of Homeland Security, the Department of Justice, and the Office of the Director of National Intelligence. In the months following the filing of the lawsuit, the government discovered and released over 1,000 pages of responsive records, some of which were on display at the Whitney Museum in New York last year as part of Poitras’ Astro Noise exhibit. But most of these records are highly redacted, so while Poitras now has some information about why she was stopped, the details remain unclear. And the government failed to provide a clear rationale for why withholding the redacted information was justified.

Court to Government: “Try Again”

We argued in a motion for summary judgment filed last fall that the government had failed to meet its burden of justifying its continued withholding of information. In an order issued last month, the Honorable Ketanji Brown Jackson agreed with us. As the court explained, the government “describes in great detail the government’s general reasons for withholding entire categories of information, but does not connect these generalized justifications to the particular documents that are being withheld in this case in any discernible fashion.” She noted that instead of providing a complete list of “document-specific justifications,” the government provided a list with “only some of the records that the agency has withheld” and even then failed to “explain the reasons that the particular exemption is being asserted with respect to any document[.]”

The court didn’t grant our motion for summary judgment, but it did order the government to go back and try again—i.e., provide both us and the court with a list describing each document redacted or withheld, noting the FOIA exemption(s) that the government thinks apply to the document, and explaining the “particularized reasons that the government believes that the asserted exemption applies to the particular document at issue.”

It’s clear the judge isn’t planning to just rubber stamp the government’s assertions in this case. Forcing the government to justify its vast withholding of documents in this case is a win for transparency. We will post updates on the case as it proceeds and as we continue our fight to shed more light on the government’s unjust and potentially chilling treatment of a journalist.

Categories: Aggregated News

Judge Orders Government to Provide Evidence About Internet Surveillance - Wed, 24/05/2017 - 01:51

We're finally going to get some honesty on how the NSA spies on innocent Americans' communications.

A federal judge late last week in Jewel v. NSA, EFF’s landmark case against mass surveillance, ordered [PDF] the government to provide all relevant evidence necessary to prove or disprove that plaintiffs were subject to NSA surveillance via tapping into the Internet backbone. This includes surveillance conducted since 2008 pursuant to Section 702 of the FISA Amendments Act, which is up for renewal this year. It also includes surveillance conducted between 2001 and 2008 pursuant to the President's Surveillance Program.

In 2016 the Court had ordered that the plaintiffs could seek discovery. After over a year of government stonewalling, the Court has now ordered the government to comply with a narrowed set of discovery requests by August 9, 2017. The discovery is aimed at whether plaintiffs' communications were subject to the mass NSA program tapping into the Internet backbone called Upstream.  The court also ordered the government to file as much of its responses as possible on the public court docket.  

The Jewel v. NSA case continues to mark the first time the NSA has been ordered to respond to civil discovery about any of its mass surveillance programs. Since the first EFF case against NSA mass surveillance was launched in 2006, the government has abandoned or dramatically reduced three of the four key programs addressed by the lawsuit.

What's left, at least that the public is aware of at this time, is the interception and use of communications flowing over the Internet backbone at key junctures. Thanks to the new order, the U.S. government will, for the first time, have to answer to privacy concerns about the remaining Internet surveillance methods and their impact on Americans.

    The NSA must tell the Court whether its 702 Upstream surveillance touches the communications of millions of Americans.

    It’s been a long, slow road, but the NSA has been forced to reduce its mass spying in the United States in major ways.  This has come through a combination of litigation pressure, ongoing activism and public concern, technological efforts to encrypt more of the Internet, Congressional pressure, and a steady stream of information coming out about its activities including from government investigations spurred by whistleblowers like Edward Snowden and Mark Klein. EFF will continue to push forward with the litigation and all of EFF's other efforts until all Americans who rely on the Internet can feel safe that they can communicate online without NSA having broad access to their communications.

    San Francisco attorney Richard Wiebe argued the matter for the plaintiffs.  Also assisting EFF with the case are attorneys from the firm Keker, Van Nest and Peters, Thomas Moore III, James Tyre and Aram Antaramian.

    Related Cases: Jewel v. NSA
    Categories: Aggregated News

    Illinois Advances “Right to Know” Digital Privacy Bills - Tue, 23/05/2017 - 10:06

    EFF supports Illinois legislation (SB 1502 and HB 2774) that would empower people who visit commercial websites and online services to learn what personal information the site and service operators collected from them, and which third parties the operators shared it with. EFF has long supported such “right to know” legislation, which requires company transparency and thereby advances digital privacy.

    As we explain in our support letter:

    Many operators of commercial websites and online services collect from their visitors a tremendous amount of highly personal information. This can include facts about our health, finances, location, politics, religion, sexual orientation, and shopping. Many operators share this information with third parties, including advertisers and data brokers. This information has great financial value, so pressure to collect and share it will continue to grow.

    This is a profound threat to our privacy. We live more and more of our lives online. The aggregation of our myriad clicks can turn our lives into open books. Our sensitive personal information, pooled into ever-larger reservoirs, can be sold to the highest bidder, stolen by criminals, and seized by government investigators.

    Many people would like to protect their own privacy, by making informed choices about which websites and online services to visit. Some sites and services are more respectful of visitors’ privacy, and others are less so.

    But all too often, such attempts at privacy self-help are stymied by the lack of available information about what personal information a website is collecting and sharing.

    SB 1502 and HB 2774 would even the playing field. They would ensure that people can obtain the information they need to make fact-based decisions about where they want to spend their time online.

    These bills would not restrict how any website or online service gathers or shares information. Operators can keep doing exactly what they are doing – they just have to be more transparent about it.

     In April, the Illinois Senate passed SB 1502, and the Illinois House Committee on Cybersecurity passed HB 2774. We thank the lead legislative sponsors, Sen. Michael Hastings and Rep. Arthur Turner. We also thank the Cook County Sheriff, who initiated the bill, and the bills’ proponents, including the ACLU of Illinois, the Digital Privacy Alliance, the Illinois Attorney General, Illinois PIRG, and the Privacy Rights Clearinghouse.

    Read EFF’s full letter to the Illinois legislature.

    Categories: Aggregated News

    New Twitter Policy Abandons a Longstanding Privacy Pledge - Tue, 23/05/2017 - 09:50

    Twitter plans to roll out a new privacy policy on June 18, and, with it, is promising to roll back its longstanding commitment to obey the Do Not Track (DNT) browser privacy setting. Instead, the company is switching to the Digital Advertising Alliance's toothless and broken self-regulatory program. At the same time, the company is taking the opportunity to introduce a new tracking option and two new targeting options, all of which are set to “track and target” by default. These are not the actions of a company that respects people’s privacy choices.

    Twitter implements various methods of tracking, but one of the biggest is the use of Tweet buttons, Follow buttons, and embedded Tweets to record much of your browsing history. When you visit a page that contains one of these, your browser makes a request to Twitter’s servers. That request contains a header that tells Twitter which website you visited. By setting a unique cookie, Twitter can build a profile of your browsing history, even if you aren’t a Twitter user. When Twitter rolled out this tracking, it was the first major social network to do so; at the time, Facebook and Google+ were careful not to use their social widgets for tracking, due to privacy concerns. Twitter sweetened its new tracking initiative for privacy-aware Internet users by offering Do Not Track support. However, when the other social networks quietly followed in Twitter's footsteps, they decided to ignore Do Not Track.
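    As a toy illustration of the widget tracking described above (all names and URLs here are invented, not Twitter's actual systems), each widget load hands the tracker a unique cookie ID plus a Referer header, which is all it takes to accumulate a browsing profile:

```python
# Toy model of widget-based tracking: every page embedding a social widget
# triggers a request to the tracker carrying a Referer header and a unique
# cookie, which together build a browsing profile.
profiles = {}  # cookie id -> pages visited, as the tracker sees them

def widget_request(cookie_id, referer):
    """Hypothetical server-side view of a single widget load."""
    profiles.setdefault(cookie_id, []).append(referer)

# One visitor (cookie "abc123") browsing unrelated sites:
widget_request("abc123", "https://news.example/article-1")
widget_request("abc123", "https://health.example/condition-x")

# The tracker now holds a browsing history keyed by the cookie alone --
# no account on the social network required.
print(profiles["abc123"])
```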

    Now, Twitter proposes to abandon the Do Not Track standard and use the “WebChoices” tool, part of the Digital Advertising Alliance's (DAA) self-regulatory program. This program is toothless because the only choice it allows users is to opt out of “customizing ads,” when most people actually want to opt out of tracking. Many DAA participants, including Twitter, continue to collect your information even if you opt out, but hide that fact by showing you only untargeted ads. This is similar to asking someone to stop openly eavesdropping on your conversation, only to watch them hide behind a curtain and keep listening.

    Also, WebChoices is broken: it’s incompatible with other privacy tools, and it requires constant vigilance to maintain. It relies on setting a third-party opt-out cookie on 131 different advertising sites. But doing this is incompatible with one of the most basic browser privacy settings: disabling third-party cookies. Even if you allow third-party cookies, your opt-out only lasts until the next time you clear cookies, another common user strategy for protecting online privacy. And new advertising sites are created all the time. When the 132nd site is added to WebChoices, you need to go back and repeat your opt-out, which, unless you follow the advertising press, you won’t know to do.
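    A small sketch of why this design is fragile, under the simplifying assumption that we model the browser's cookie store as a plain dictionary (the domain names are invented): the opt-out is state that must survive in per-site third-party cookies, whereas a Do Not Track preference is simply a header attached to every request.

```python
# The browser's cookie store, modeled as a dict; the DAA opt-out is a
# third-party cookie set separately on each advertising domain.
ad_domains = ["ads1.example", "ads2.example"]  # stand-ins for the ~131 sites

cookies = {d: {"optout": "1"} for d in ad_domains}  # state after WebChoices

def is_opted_out(domain, cookie_jar):
    return cookie_jar.get(domain, {}).get("optout") == "1"

assert all(is_opted_out(d, cookies) for d in ad_domains)

# A routine "clear cookies" (or blocking third-party cookies outright)
# silently erases every opt-out:
cookies.clear()
assert not any(is_opted_out(d, cookies) for d in ad_domains)

# Do Not Track, by contrast, is a header the browser attaches to every
# request -- nothing stored on advertisers' domains, nothing to lose.
dnt_headers = {"DNT": "1"}
```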

    These problems with DAA's program are why Do Not Track exists. It’s simple, compatible with other privacy measures, and works across browsers.

    Twitter knows the difference between a real opt-out and a fake one: for years, it has implemented DNT as a true "stop tracking" option, and you can still choose that option under the "Data" section of Twitter's settings, whether you are a Twitter user or not. However, if you use the new DAA opt-out that Twitter plans to offer instead of DNT, the company will treat that as a fake opt-out: Twitter keeps tracking, but won't show you ads based on it.

    What can you do as an individual to protect yourself against Twitter's tracking? First, follow our guide to disable the settings. Second, install Privacy Badger, EFF's browser extension that, in addition to setting DNT, attempts to automatically detect and block third-party tracking behavior. Privacy Badger also specifically replaces some social network widgets with non-tracking static versions.

    Twitter is taking a big step backwards for user privacy by abandoning Do Not Track. The company should draft a new privacy policy before June 18 that keeps DNT support, and treats both DNT and the DAA opt-out as a true "stop tracking" option.

    Categories: Aggregated News

    Supreme Court Ends Texas’ Grip On Patent Cases - Tue, 23/05/2017 - 07:48

    Today the Supreme Court issued a decision that will have a massive impact on patent troll litigation. In TC Heartland v. Kraft Foods, the court ruled that patent owners can sue corporate defendants only in districts where the defendant is incorporated or has committed acts of infringement and has a regular and established place of business. This means that patent trolls can no longer drag companies to distant and inconvenient forums that favor patent owners but have little connection to the dispute. Most significantly, it will be much harder for trolls to sue in the Eastern District of Texas.

    For more than ten years, patent troll litigation has clustered in the Eastern District of Texas (EDTX). Patent trolls began to flock there when a judge created local patent rules that were perceived as friendly to patent owners. The court required discovery to start almost right away and did very little to limit costs (which were borne much more heavily by operating companies because they have more documents). Cases also tended not to be decided by summary judgment and went to trial more quickly.

    These changes led to a stunning rise in patent trolling in EDTX. In 1999, only 14 patent cases were filed in the district. By 2003, the number of filings had grown to 55. By 2015, it had exploded to over 2500 patent suits, mostly filed by trolls. Patent litigation grew so much in EDTX that it became part of the local economy. In addition to providing work for the local lawyers, it generated business for the hotels, restaurants, and printers in towns like Marshall and Tyler.

    Although the TC Heartland case will have a big impact on EDTX, the case involved a suit filed in the District of Delaware and the legal question was one of statutory interpretation. Prior to 1990, the Supreme Court had held that in patent cases, the statute found at 28 U.S.C. § 1400 controlled where a patent case could be filed. However, in 1990 in a case called VE Holding, the Federal Circuit held that a small technical amendment to another venue statute—28 U.S.C. § 1391—effectively overruled that line of cases. VE Holding meant that companies that sold products nationwide can be sued in any federal court in the country on charges of patent infringement, regardless of how tenuous the connection to that court. Today’s decision overrules VE Holding and restores venue law to how it was: corporate patent defendants can only be sued where they are incorporated or where they allegedly infringe the patent and have a regular and established place of business.
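    The restored rule is simple enough to state as a predicate. The sketch below is our own illustrative encoding, not a legal test suite:

```python
# Illustrative encoding (ours, not the court's) of the venue rule the
# Supreme Court restored in TC Heartland: suit is proper in a district
# only where the corporate defendant is incorporated, or where it both
# allegedly infringes and has a regular, established place of business.
def venue_is_proper(district, incorporated_in, infringes_in, places_of_business):
    return district == incorporated_in or (
        district in infringes_in and district in places_of_business
    )

# Incorporation alone supports venue there:
assert venue_is_proper("D. Del.", "D. Del.", set(), set())

# Nationwide sales without a place of business no longer do -- the VE
# Holding result this decision overrules:
assert not venue_is_proper("E.D. Tex.", "D. Del.", {"E.D. Tex."}, set())
```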

    Together with Public Knowledge, we filed an amicus brief urging the Supreme Court to hear this case, and once it did, another brief urging it to overrule VE Holding. We explained that venue law is concerned with fairness and that forum shopping in patent cases has had very unfair results, especially in EDTX. While the Supreme Court reached the result we hoped for, the court did not discuss these policy issues (it also showed little interest in the policy debate during the oral argument in the case). The court approached the case as a pure question of statutory interpretation and ruled 8-0 that the more specific statute, 28 U.S.C. § 1400, controls where a patent case can be filed.

    While today’s decision is a big blow for patent trolls, it is not a panacea. Patent trolls with weak cases can, of course, still file elsewhere. The ruling will likely lead to a big growth in patent litigation in the District of Delaware, where many companies are incorporated. And it does not address the root cause of patent trolling: the thousands of overbroad and vague software patents that the Patent Office issues every year. We will still need to fight for broader patent reform and defend good decisions like the Supreme Court’s 2014 ruling in Alice v. CLS Bank.

    Related Cases: TC Heartland v. Kraft Foods
    Categories: Aggregated News

    Online Censorship and User Notification: Lessons from Thailand - Mon, 22/05/2017 - 15:36

    For governments interested in suppressing information online, the old methods of direct censorship are getting less and less effective.

    Over the past month, the Thai government has made escalating attempts to suppress critical information online. In the last week, faced with an embarrassing video of the Thai King, the government ordered Facebook to geoblock over 300 pages on the platform and even threatened to shut Facebook down in the country. This is on top of last month's announcement that the government had banned any online interaction with three individuals: two academics and one journalist, all three of whom are political exiles and prominent critics of the state. And just today, law enforcement representatives described their efforts to target those who simply view—not even create or share—content critical of the monarchy and the government.

    The Thai government has several methods at its disposal to directly block large volumes of content. It could, as it has in the past, pressure ISPs to block websites. It could also hijack domain name queries, making sites harder to access. So why is it negotiating with Facebook instead of just blocking the offending pages itself? And what are Facebook’s responsibilities to users when this happens?

    HTTPS and Mixed-Use Social Media Sites

    The answer is, in part, HTTPS. When HTTPS encrypts your browsing, it doesn’t just protect the contents of the communication between your browser and the websites you visit. It also protects the specific pages on those sites, preventing censors from seeing and blocking anything “after the slash” in a URL. This means that if a sensitive video of the King shows up on a website, government censors can’t identify and block only the pages on which it appears. In an HTTPS world that makes such granularized censorship impossible, the government’s only direct censorship option is to block the site entirely.
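    In other words, under HTTPS a network-level censor observes roughly the hostname (via DNS lookups and the TLS SNI field) but not the path. A small sketch with invented URLs:

```python
from urllib.parse import urlparse

# Two pages on the same (invented) HTTPS site: one innocuous, one sensitive.
urls = [
    "https://videos.example/cats/compilation",
    "https://videos.example/politics/sensitive-clip",
]

for url in urls:
    parts = urlparse(url)
    # Over HTTPS a network censor sees roughly the hostname; the path
    # "after the slash" stays encrypted inside the TLS session.
    print("visible:", parts.hostname, "| hidden:", parts.path)
```

    Both pages present the censor with the same visible hostname, so blocking the sensitive page means blocking the whole site.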

    That might still leave the government with tenable censorship options if critical speech and dissenting activity only happened on certain sites, like devoted blogs or message boards. A government could try to get away with blocking such sites wholesale without disrupting users outside a certain targeted political sphere.

    But all sorts of user-generated content—from calls to revolution to cat pictures—are converging on social media websites like Facebook, which members of every political party use and rely on. This brings us to the second part of the answer as to why the government can’t censor like it used to: mixed-use social media sites. When content is both HTTPS-encrypted and on a mixed-use social media site like Facebook, it can be too politically expensive to block the whole site. Instead, the only option left is pressuring Facebook to do targeted blocking at the government’s request.

    Government Requests for Social Media Censorship

    Government requests for targeted blocking happen when something is compliant with Facebook’s community guidelines, but not with a country’s domestic law. This comes to a head when social media platforms have large user bases in repressive, censorious states—a dynamic that certainly applies in Thailand, where a military dictatorship shares its capital city with a dense population of Facebook power-users and one of the most Instagrammed locations on earth.

    In Thailand, the video of the King in question violated the country’s overbroad lese majeste laws, which criminalize insulting or criticizing the monarchy in any way. So the Thai government requested that Facebook remove it, along with hundreds of other pieces of content, on legal grounds, and made an ultimately empty threat to shut down the platform in Thailand if Facebook did not comply.

    Facebook did comply, geoblocking over 100 URLs for which it received warrants from the Thai government. This may not be surprising; although the government is likely not going to block Facebook entirely, it still has other ways to go after the company, including threatening any in-country staff. Indeed, Facebook put itself in a vulnerable position when it inexplicably opened a Bangkok office during the high political tensions that followed the 2014 military coup.

    Platforms’ Responsibility to Users

    If companies like Facebook do comply with government demands to remove content, these decisions must be transparent to their users and the general public. Otherwise, Facebook's compliance transforms it from a victim of censorship into an agent of government censorship. The stakes are high, especially in unstable political environments like Thailand. There, the targets of takedown requests are often journalists, activists, and dissidents, and requests to take down their content or block their pages often serve as an ominous prelude to further action or targeting.

    With that in mind, Facebook and other companies responding to government requests must provide the fullest legally permissible notice to users whenever possible. This means timely, informative notifications, on the record, that give users information like what branch of government requested to take down their content, on what legal grounds, and when the request was made.

    Facebook seems to be getting better at this, at least in Thailand. When journalist Andrew MacGregor Marshall had some of his content geoblocked in January, he did not receive consistent notice. Worse, the page that his readers in Thailand saw when they tried to access his post implied that the block was an error, not a deliberate act of government-mandated removal.

    More recently, however, we have been happy to see evidence of Facebook providing more detailed notices to users, such as one that exiled dissident Dr. Somsak Jeamteerasakul received and then shared online.

    In an ideal world, timely and informative user notice can help power the Streisand effect: that is, the dynamic in which attempts to suppress information actually backfire and draw more attention to it than ever before. (And that’s certainly what’s happening with the video of the King, which has garnered countless international media headlines.) With details, users are in a better position to appeal to Facebook directly as well as draw public attention to government targeting and censorship, ultimately making this kind of censorship a self-defeating exercise for the government.

    In an HTTP environment where governments can passively spy on and filter Internet content, individual pages could disappear behind obscure and misleading error messages. Moving to an increasingly HTTPS-secured world means that if social media companies are transparent about the pressure they face, we may gain some visibility into government censorship. However, if they comply without informing creators or readers of blocked content, we could find ourselves in a much worse situation. Without transparency, tech giants could misuse their power not only to silence vulnerable speakers, but also to obscure how that censorship takes place—and who demanded it.

    Have you had your content or account removed from a social media platform? At EFF, we’ve been shining a light on the expanse and breadth of content removal on social media platforms, collecting your stories about content and account deletions together with our partners at Visualising Impact. Share your story here.

    Categories: Aggregated News

    No Hunting Undocumented Immigrants with Stingrays - Sat, 20/05/2017 - 09:51

    In the latest sign of mission creep in domestic deployment of battlefield-strength surveillance technology, U.S. Immigration and Customs Enforcement (ICE) earlier this year used a cell site simulator (CSS) to locate and arrest an undocumented immigrant, according to a report yesterday by The Detroit News.

    CSSs, often called IMSI catchers or Stingrays, masquerade as cell phone towers and trick our phones into connecting to them so police can track down a target. EFF has long opposed CSSs. They are a form of mass surveillance, forcing the phones of countless innocent people to disclose information to the police, in violation of the Fourth Amendment. They disrupt cellular communications, including 911 calls. They are deployed disproportionately within communities of color and poorer neighborhoods. They exploit vulnerabilities in the cellular communication system that government should fix instead of exploit.

    Police said they needed CSSs to fight terrorism. Instead, police use CSSs to locate low-level offenders, such as a suspect who stole $60 of food from a restaurant delivery employee.

    Now we fear that ICE may be routinely using CSSs to hunt down people whose only offense is to unlawfully enter or remain in the United States. ICE has spent over $10 million to purchase 59 CSSs, according to a recent Congressional report. In the first quarter of 2017, ICE arrested nearly 11,000 undocumented immigrants with no criminal record, more than double the number from the first quarter of 2016. And yesterday, The Detroit News reported that ICE used a CSS to locate and arrest an undocumented immigrant.

    It is good news that ICE obtained a warrant before using its CSS to find this immigrant, in accordance with a change in DHS and DOJ policies in 2015. It is also a welcome sign that a bipartisan Congressional report in December 2016 called for federal legislation requiring a warrant for CSS use by law enforcement. But a warrant alone is not enough.

    If permitted at all, government use of CSSs should be strictly limited to addressing serious violent crime. Few law enforcement spying technologies are a greater threat to digital liberty: by their very nature, CSSs seize information from all of the people who happen to be nearby. So government should be barred, for example, from using CSSs to hunt down traffic scofflaws, petty thieves, and undocumented immigrants.

    Notably, the federal eavesdropping statute limits police use of that surveillance technology to certain enumerated crimes. Because CSSs conduct general searches, any such enumeration for CSSs must be even narrower, and limited to serious violent crimes.

    Finally, if government is allowed to use CSSs, there must be other safeguards, too. Government should be limited to using CSSs to acquire location information, and forbidden from using CSSs for other purposes, such as acquiring communications content. An Illinois statute enacted in 2016 contains this limit. Also, government should be required to minimize the capture of information from people who are not the target of investigation, and to immediately destroy all data that does not identify the target. A U.S. Magistrate Judge’s order in 2015 contains this limit.

    Too often, government deploys powerful spying technologies against vulnerable groups of people, including immigrant communities, as well as racial, ethnic, and religious minorities. EFF has long opposed this. We thus oppose using CSSs to hunt down undocumented immigrants, or anyone else who is not a serious violent threat to public safety.

    Categories: Aggregated News

    How to Opt Out of Twitter's New Privacy Settings - Sat, 20/05/2017 - 09:50

    Since Wednesday night, Twitter users have been greeted by a pop-up notice about Twitter’s new privacy policy, which will come into effect June 18:

    Contrary to the inviting “Sounds good” button to accept the new policy and get to tweeting, the changes Twitter has made around user tracking and data personalization do not sound good for user privacy. For example, the company will now record and store non-EU users’ off-Twitter web browsing history for up to 30 days, up from 10 days in the previous policy.

    Worst of all, the “control over your data” promised by the pop-up is on an opt-out basis, giving users choices only after Twitter has set their privacy settings to invasive defaults.

    Instead, concerned users have to click “Review settings” to opt out of Twitter’s new mechanisms for user tracking. That will bring you to the “Personalization and Data” section of your settings. Here, you can pick and choose the personalization, data collection, and data sharing you will allow—or, click “Disable all” in the top-right corner to opt out entirely.

    If you already clicked through the privacy policy pop-up, you can still go into your settings to make these changes. After navigating to your settings, choose “Privacy and safety” on the left, and then click “Edit” next to “Personalization and data.”

    While you’re at it, this is also a good opportunity to review, edit, and/or remove the data Twitter has collected on you in the past by going to the “Your Twitter data” section of your settings.

    Twitter has stated that these granular settings are intended to replace Twitter’s reliance on Do Not Track. However, replacing a standard cross-platform choice with new, complex options buried in the settings is not a fair trade. Although “more granular” privacy settings sound like an improvement, they lose their meaning when they are set to privacy-invasive selections by default. Adding new tracking options that users are opted into by default suggests that Twitter cares more about collecting data than respecting users’ choice.

    Categories: Aggregated News

    Net Neutrality Activists Rally Against Trump FCC's Plan to Destroy the Internet - Fri, 19/05/2017 - 06:44
    Amy Kroin, May 18, 2017

    People from across the country have already generated more than 1 million comments and signatures in support of real Net Neutrality. And outside the FCC’s headquarters, a range of advocacy groups, members of Congress and nearly 100 activists rallied to preserve the open internet.
    Categories: Aggregated News

    As USTR Takes Office, EFF Sets Out Our Demands on Trade Transparency - Fri, 19/05/2017 - 04:50

    The new U.S. Trade Representative, Robert Lighthizer, took office this week. EFF has written him a letter to let him know that we'll be holding him to the commitments that he made during his confirmation hearing about improving the transparency and inclusiveness of the USTR's notoriously closed and opaque trade negotiation practices. Our letter, which you can download in full below, reads in part:

    The American people’s dissatisfaction with trade deals of the past, such as NAFTA, does not merely lie in their effects on the American manufacturing sector and its workers.  Another of the key mistakes of previous U.S. trade policy, we respectfully submit, has been the closed and opaque character of trade negotiations. ... 

    Absent meaningful reforms that allow the public to see what is being negotiated on their behalf, and to participate in developing trade policy proposals, the public will reject new agreements just as they rejected failed agreements of the past, such as the Trans-Pacific Partnership and the Anti-Counterfeiting Trade Agreement.

    Conversely, given a real voice in trade policy development, there is the potential for trade agreements of the future to become more inclusive, better informed, and more popular—all of which are essential if America is to retain and strengthen its global economic leadership in the digital age.

    Tech industry groups, including the Internet Association [PDF], the Computer and Communications Industry Association (CCIA), and the Internet Infrastructure Coalition (i2Coalition) [PDF], have also sent letters to the new USTR. In addition to addressing how America's future trade agreements should address tech policy issues, the CCIA and i2Coalition letter addresses the need for greater transparency in trade negotiations, stating "we encourage you to maintain as much transparency in trade negotiations as is reasonably possible. More open negotiation processes will contribute to increased support for the trade agenda."

    House and Senate Democrats have reportedly delivered the same message [paywalled] to Ambassador Lighthizer during his first week in office, urging that the renegotiation of NAFTA—which officially launched today—be made more transparent than the negotiations of its failed predecessor, the TPP.

    To further reinforce this message, EFF has gone even further—taking out a paid advertisement in POLITICO magazine's Morning Trade newsletter, which runs all this week. It directs to a new page of EFF's website that is specifically targeted at D.C.'s trade community. You can see a copy of the banner graphic that we've used for that campaign to the side.

    Will any of this make a difference? We certainly hope so, but we're not counting on it. That's why, in case Ambassador Lighthizer fails to heed our message, we'll also support new legislation to be introduced in Congress to force the USTR to implement the necessary reforms. One way or another, the long-overdue reform of trade negotiation processes has to happen, and we're committed to seeing it through.

    Categories: Aggregated News

    Dear FCC: We See Through Your Plan to Roll Back Real Net Neutrality - Fri, 19/05/2017 - 02:05

    Pretty much everyone says they are in favor of net neutrality–the idea that service providers shouldn’t engage in data discrimination, but should instead remain neutral in how they treat the content that flows over their networks. But actions speak louder than words, and today’s action by the FCC speaks volumes. After weeks of hand-waving and an aggressive misinformation campaign by major telecom companies, the FCC has taken the first concrete step toward dismantling the net neutrality protections it adopted two years ago.

    Specifically, the FCC is proposing a rule that would reclassify broadband as an “information service” rather than a “telecommunications service.” FCC Chairman Ajit Pai claims that this move would protect users, but all it would really do is protect Comcast and other big ISPs by destroying the legal foundation for net neutrality rules. Once that happened, it would only be a matter of time before your ISP had more power than ever to shape the Internet.

    Here’s why: Under the Telecommunications Act of 1996, a service can be either a “telecommunications service” that lets the subscriber choose the content they receive and send without interference from the service provider; or it can be an “information service,” like cable television, that curates and selects what subscribers will get. “Telecommunications services” are subject to nondiscrimination requirements–like net neutrality rules. “Information services” are not.

    For years, the FCC incorrectly classified broadband access as an “information service,” and when it tried to apply net neutrality rules to broadband providers, the courts struck them down. Essentially, the D.C. Circuit court explained that the FCC can’t exempt broadband from nondiscrimination requirements by classifying it as an information service, but then impose those requirements anyway.

    The legal mandate was clear: if we wanted meaningful open Internet rules to pass judicial scrutiny, the FCC had to reclassify broadband as a telecom service. Reclassification also just made sense: broadband networks are supposed to deliver information of the subscriber’s choosing, not information curated or altered by the provider.

    It took an Internet uprising to persuade the FCC to reclassify. But in the end we succeeded: in 2015 the FCC reclassified broadband as a telecom service. Resting at last on a proper legal foundation, its net neutrality rules finally passed judicial scrutiny [PDF].

    Given this history, there’s no disguising what the new FCC majority is up to. If it puts broadband back in the “info service” category and then tries to appease critics by adopting meaningful net neutrality rules, we’ll be in the same position we were three years ago: Comcast will take the FCC to court–and Comcast will win. It’s simple: you can’t reclassify and keep meaningful net neutrality rules. Reclassification means giving ISPs a free pass for data discrimination.

    Chairman Pai’s claim that this move is good for users because it will spur investment in broadband infrastructure is a cynical one at best. Infrastructure investment has gone up since the 2015 Order, ISP profits are growing exponentially, and innovation and expression are flourishing.

    At the same time, too many Americans have only one choice for high-speed broadband. There are good reasons to worry about FCC regulatory overreach in many contexts, but the fact is the U.S. broadband market is now excessively concentrated and lacks real choice, and there are few real options to prevent ISPs from abusing their power. In this environment, repealing the simple, light-touch rules of the road we just won would give ISPs free rein to use their position as Internet gatekeepers to funnel customers to their own content, thereby distorting the open playing field the Internet typically provides, or to charge fees for better access to subscribers. Powerful incumbent tech companies will be able to buy their way into the fast lane, but new ones won’t. Nor will activists, churches, libraries, hospitals, schools, or local governments.

    We can’t let that happen. So, Team Internet, we need you to step up once again and tell the FCC that it works for the American people, not Comcast, Verizon, or AT&T. Tell the FCC not to undermine real net neutrality protections.

    Contact the FCC Now

    Categories: Aggregated News