Aggregated News

Court Rules That EFF's Stupid Patent of the Month Post Is Protected Speech

eff.org - Tue, 21/11/2017 - 08:52

A federal judge has ruled that EFF need not obey an Australian injunction ordering EFF to take down a “Stupid Patent of the Month” blog post and never speak of the patent owner’s intellectual property again.

It all started when Global Equity Management (SA) Pty Ltd (GEMSA)’s patent was featured as the June 2016 entry in our Stupid Patent of the Month blog series. GEMSA wrote to EFF accusing us of “false and malicious slander.” It subsequently filed a lawsuit and obtained an injunction from a South Australia court purporting to require EFF to censor itself. We declined and filed a suit in the U.S. District Court for the Northern District of California seeking a declaration that EFF’s post is protected speech.

The court agreed, finding that the South Australian injunction can’t be enforced in the U.S. under a 2010 federal law that took aim at “libel tourism,” a practice by which plaintiffs—often billionaires, celebrities, or oligarchs—sued U.S. writers and academics in countries like England where it was easier to win a defamation case. The Securing the Protection of Our Enduring and Established Constitutional Heritage Act (SPEECH Act) says foreign orders aren’t enforceable in the United States unless they are consistent with the free speech protections provided by the U.S. and state constitutions, as well as state law.

The court analyzed each of GEMSA’s claims for defamation and found “[n]one of these claims could give rise to defamation under U.S. and California law,” and accordingly “EFF would not have been found liable for defamation under U.S. and California law.” For example, GEMSA’s lead complaint was that EFF had called its patent “stupid.” GEMSA protested that its patent is not “in fact” stupid, but the court found that this was clearly protected opinion. Moreover, the court found “that the Australian court lacked jurisdiction over EFF, and that this constitutes a separate and independent reason that EFF would prevail under the SPEECH Act.”

Furthermore, the court found that the Australian order was not enforceable under the SPEECH Act because “U.S. and California [law] would provide substantially more First Amendment protection by prohibiting prior restraints on speech in all but the most extreme circumstances, and providing additional procedural protections in the form of California’s anti-SLAPP law.”

After its thorough analysis, the court declared “(1) that the Australian Injunction is repugnant to the United States Constitution and the laws of California and the United States; and (2) that the Australian injunction cannot be recognized or enforced in the United States.”

The decision was a default judgment. GEMSA, which has three pending patent lawsuits in the Northern District of California, had until May 23 to respond to our case. That day came and went without a word. While GEMSA knows its way around U.S. courts—having filed dozens of lawsuits against big tech companies claiming patent infringement—it failed to respond to ours.

EFF thanks our counsel from Ballard Spahr LLP and Jassy Vick Carolan LLP.

Related Cases: EFF v. Global Equity Management (SA) Pty Ltd
Categories: Aggregated News

Why We're Helping The Stranger Unseal Electronic Surveillance Records

eff.org - Tue, 21/11/2017 - 07:02

Consider this: Deputy Attorney General Rod Rosenstein has been going around talking about “responsible encryption” for some time now, proselytizing for encryption that’s somehow only accessible by the government—something we all know to be unworkable. If the Department of Justice (DOJ) is taking this aggressive public position about what kind of access it should have to user data, it raises the question: what kind of technical assistance from companies and orders for user data is the DOJ demanding in sealed court documents? EFF’s client The Stranger, a Seattle-based newspaper, has filed a petition with one court to find out.

What’s at Stake?

In a democracy, we as citizens deserve to know what our government is up to, especially its interpretation of the law. A major reason we all knew about the government using the All Writs Act—a law originally passed in 1789—to compel Apple to design a backdoor for the iOS operating system is that the court order was public. However, there are many instances where we may not know what the government is asking. For example, could the government be asking Amazon to turn on the mic on its smart assistant product, the Echo, so it can listen in on people? This is not without precedent. In the past, the government has tried to compel automobile manufacturers to turn on mics in cars for surveillance.

Beyond the All Writs Act, we need to know what kind of warrantless surveillance the government is conducting under statutes like the Stored Communications Act (SCA) and the Pen Register Act. For instance, under certain authorities of the SCA, the government can obtain very private details about people’s email records, such as who they communicate with and when, and that in itself can be revealing regardless of the content of the messages.

The privacy problems of these non-warrant orders are compounded by the secrecy associated with them. The government files papers asking for such orders under seal, giving the public no opportunity to scrutinize them or to see how many are actually filed with the court. The people deserve to know, and we support The Stranger’s efforts to seek access to these records.

Of course, the government may have good reasons to prevent disclosure of surveillance orders as part of an ongoing investigation, but under the current regime, next to no information is available even about the existence of such requests, including how many are filed each year. There are ways to meet the government’s priorities—by redacting the name of the suspect to avoid tipping them off, for instance—without sacrificing transparency and access to court records for the American people under the First Amendment.

The Specifics of the Case

Our client The Stranger is a Pulitzer Prize-winning newspaper with a history of covering stories that focus on law enforcement surveillance capabilities. In 2013, The Stranger was the first local media organization to report on the surveillance devices installed by the Seattle Police Department that were capable of tracking people’s digital devices around the city. Apart from local law enforcement, The Stranger also covers federal surveillance activities in the city of Seattle. For instance, it investigated the Bureau of Alcohol, Tobacco, Firearms and Explosives’ operation of a network of sophisticated surveillance cameras in the city.

To better report on government surveillance capabilities, the newspaper is petitioning the federal court in Seattle—home to companies like Microsoft and Amazon—to unseal government requests for electronic surveillance orders and warrants filed with the Court.

As the petition points out, the current court procedures are inadequate and counter to the widely recognized presumption of public access and openness to U.S. court records. In the Western District of Washington, government applications for electronic surveillance warrants or orders are designated as Magistrate Judge (MJ) matters. But for warrantless surveillance orders, the cases are marked as Grand Jury (GJ) proceedings. By default, anything filed as a Grand Jury case is automatically sealed and completely inaccessible to the public. This is troubling.

Support EFF’s Transparency Work

EFF has a long history of fighting for transparency by representing clients in litigation or filing public records requests for state and federal records. If you’d like to show your support for this lawsuit, please support our work and donate today.

We would like to thank Geoff M. Godfrey, Nathan D. Alexander, and David H. Tseng of Dorsey & Whitney LLP in Seattle for co-counseling with us in representing The Stranger.

Related Cases: The Stranger Unsealing
Categories: Aggregated News

Will Congress Bless Internet Fast Lanes?

eff.org - Tue, 21/11/2017 - 03:56

As the Federal Communications Commission (FCC) gets ready to abandon a decade of progress on net neutrality, some in Congress are considering how new legislation could fill the gap and protect users from unfair ISP practices. Unfortunately, too many lawmakers seem to be embracing the idea that they should allow ISPs to create Internet “fast lanes” -- also known as “paid prioritization,” one of the harmful practices that violate net neutrality. They are also looking to re-assign the job of protecting customers from ISP abuses to the Federal Trade Commission.

These are both bad ideas.  Let's start with paid prioritization. In response to widespread public demand from across the political spectrum, the 2015 Open Internet Order expressly prohibited paid prioritization, along with other unfair practices like blocking and throttling. ISPs have operated under the threat or the reality of these prohibitions for at least a decade, and continue to be immensely profitable. But they'd like to make even more money by double-dipping: charging customers for access to the Internet, and then charging services for (better) access to customers. And some lawmakers seem keen to allow it.

That desire was all too evident in a recent hearing on the role of antitrust in defending net neutrality principles. Subcommittee Chairman Tom Marino gave a baffling defense of prioritization, suggesting that it’s necessary or even beneficial to users for ISPs to give preferential treatment to certain content sources. Rep. Marino said that users should be able to choose between a more expensive Internet experience and a cheaper one that prioritizes the ISP’s preferred content sources. He likened Internet service to groceries, implying that by disallowing paid prioritization, the Open Internet Order forced more casual Internet users to waste their money: “Families who just want the basics or are on a limited income aren't forced to subsidize the preferences of shoppers with higher-end preferences.”

Rep. Darrell Issa took the grocery metaphor a step further, saying that paid prioritization is the modern day equivalent of the practice of grocery stores selling prime placement to manufacturers: “Within Safeway, they’ve decided that each endcap is going to be sold to whoever is going to pay the most – Pepsi, Coke, whoever – that’s certainly a prioritization that’s paid for.”

That’s an absurd analogy. Unlike goods at a physical store, every bit of Internet traffic can get the best placement, and no one on a limited income is “subsidizing” their richer neighbors. When providers choose to slow down certain types of traffic, they’re not doing it because that traffic is somehow more burdensome; they’re doing it to push users toward the content and services the ISP favors (or has been paid to favor)—the very behavior the Open Internet Order was intended to prevent. ISPs become gatekeepers rather than conduits.

As ISPs and content companies have become increasingly intertwined, the dangers of ISPs giving preferential treatment to their own content sources—and locking out alternative sources—have become ever more pronounced. That’s why in 2016 the FCC launched a lengthy investigation into ISPs’ zero-rating practices and whether they violated the Open Internet Order. The FCC focused in particular on cases where an ISP has an obvious economic incentive to slow down competing content providers, as was the case with AT&T prioritizing its own DirecTV services. Some members of Congress fail to see the dangers to users of these “vertical integration” arrangements. Rep. Bob Goodlatte said in the hearing that “Blanket regulation… would deny consumers the potential benefits in cost savings and improved services that would result from vertical agreements.” But if zero-rating arrangements keep new edge providers from getting a fair playing field to compete for users’ attention, services won’t improve at all. Certainly, an entity with a monopoly could choose to turn every advantage into savings for its customers, but we know from history and common sense that monopolies gouge customers instead. It’s telling—and unfortunate—that one of Ajit Pai’s first actions as FCC Chairman was to shelve the Commission’s zero-rating investigation.

The other goal of the hearing was to consider whether to assign net neutrality enforcement power to the Federal Trade Commission instead of the FCC. This is a rehash of a long-standing argument that the best way to defend the Internet is to have ISPs publicly promise to behave. If they break that promise or undermine competition, the FTC can go after them.

Federal Trade Commissioner Terrell McSweeny correctly explained why that approach won’t cut it: “a framework that relies solely on backward-looking consumer protection and antitrust enforcement” just cannot “provide the same assurances to innovators and consumers as the forward-looking rules contained in the FCC's open internet order.”

For example, as McSweeny noted, large ISPs have a huge incentive to unfairly prioritize certain content sources: their own bottom line. Every major ISP also offers streaming media services, and these ISPs naturally will want to direct users to those offerings. Antitrust law alone can’t stop these practices because the threat that paid prioritization poses isn’t to competition between ISPs; it’s to the users themselves.

If the FCC abandons its commitment to net neutrality, Congress can and should step in to put it back on course. That means enacting real, forward-looking legislation that embraces all of the bright-line rules, not just the ones ISPs don’t mind. And it means forcing the FCC to do its job, rather than handing it off to another agency that’s not well-positioned to do the work.

Categories: Aggregated News

The FISA Amendments Reauthorization Act Restricts Congress, Not Surveillance

eff.org - Sat, 18/11/2017 - 09:16

The FISA Amendments Reauthorization Act of 2017—legislation meant to extend government surveillance powers—squanders several opportunities for meaningful reform and, astonishingly, manages to push civil liberties backwards. The bill is a gift to the intelligence community, restricting surveillance reforms, not surveillance itself.

The bill (S. 2010) was introduced October 25 by Senate Select Committee on Intelligence Chairman Richard Burr (R-NC) as an attempt to reauthorize Section 702 of the FISA Amendments Act. That law authorizes surveillance that ensnares the communications of countless Americans, and it is the justification used by agencies like the FBI to search through those collected American communications without first obtaining a warrant. Section 702 will expire at the end of this year unless Congress reauthorizes it.

Other proposed legislation in the House and Senate has used Section 702’s sunset as a moment to move surveillance reform forward, demanding at least minor protections to how 702-collected American communications are accessed. In contrast, Senator Burr’s bill uses Section 702’s sunset as an opportunity to codify some of the intelligence community’s more contentious practices while also neglecting the refined conversations on surveillance happening in Congress today.

Here is a breakdown of the bill.

“About” Collection

Much of the FISA Amendments Reauthorization Act (the “Burr bill” for short) deals with a type of surveillance called “about” collection, a practice in which the NSA searches Internet traffic for any mentions of foreign intelligence surveillance targets. As an example, the NSA could search for mentions of a target’s email address. But the communications being searched do not have to be addressed to or from that email address; they simply need to include the address somewhere in their text. This is not normal for communications surveillance.
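
To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. The messages and the selector below are invented examples (not drawn from any actual system); the point is only to contrast collecting communications to or from a target with “about” collection, which sweeps in third parties’ messages that merely mention the target’s selector:

```python
# Illustrative toy model only: contrasts to/from collection with "about"
# collection. The selector and messages are hypothetical examples.

TARGET_SELECTOR = "target@example.com"

messages = [
    {"from": "target@example.com", "to": "a@example.org", "body": "hello"},
    {"from": "b@example.org", "to": "c@example.org",
     "body": "Did you see the note from target@example.com yesterday?"},
    {"from": "d@example.org", "to": "e@example.org", "body": "unrelated"},
]

def to_from_collection(msgs, selector):
    """Messages actually sent to or from the target."""
    return [m for m in msgs if selector in (m["from"], m["to"])]

def about_collection(msgs, selector):
    """Messages between third parties that merely mention the target."""
    return [m for m in msgs
            if selector not in (m["from"], m["to"]) and selector in m["body"]]

print(len(to_from_collection(messages, TARGET_SELECTOR)))  # 1
print(len(about_collection(messages, TARGET_SELECTOR)))    # 1: a message between two non-targets
```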

Importantly, nothing in Section 702 today mentions or even hints at “about” collection, and it wasn’t until 2013 that we learned about it. A 2011 opinion from the Foreign Intelligence Surveillance Court—which provides judicial review for the Section 702 program—found this practice to be unconstitutional without strict post-collection rules to limit its retention and use.

Indeed, it is a practice the NSA ended in April precisely “to reduce the chance that it would acquire communications of U.S. persons or others who are not in direct contact with a foreign intelligence target.”  Alarmingly, it is a practice the FISA Amendments Reauthorization Act defines expansively and provides guidelines for restarting.

According to the bill, should the Attorney General and the Director of National Intelligence decide that “about” collection needs to start up again, all they need to do is ask specified Congressional committees. Then, a 30-day clock begins ticking. It’s up to Congress to act before the clock stops.

In those 30 days, at least one of the relevant committees—the House Judiciary Committee, the House Permanent Select Committee on Intelligence, the Senate Judiciary Committee, or the Senate Select Committee on Intelligence—must draft, vote on, and pass legislation that specifically disallows the continuation of “about” collection, working against the requests of the Attorney General and the Director of National Intelligence.

If Congress fails to pass such legislation in 30 days, “about” collection can restart.

The 30-day period has more restrictions. If legislation is referred to any House committee because of the committee’s oversight obligations, that committee must report the legislation to the House of Representatives within 10 legislative days. If the Senate moves legislation forward, “consideration of the qualifying legislation, and all amendments, debatable motions, and appeals in connection therewith, shall be limited to not more than 10 hours,” the bill says.

Limiting discussion on “about” collection to just 10 hours—when members of Congress have struggled with it for years—is reckless. It robs Congress of the ability to accurately debate a practice whose detractors even include the Foreign Intelligence Surveillance Court (FISC)—the judicial body that reviews and approves Section 702 surveillance.

Worse, the Burr bill includes a process to skirt legislative approval of “about” collection in emergencies. If Congress has not already disapproved “about” collection within the 30-day period, and if the Attorney General and the Director of National Intelligence determine that such “about” collection is necessary for an emergency, they can obtain approval from the FISC without Congress.

And if during the FISC approval process, Congress passes legislation preventing “about” collection—effectively creating both approval and disapproval from two separate bodies—the Burr bill provides no clarity on what happens next. Any Congressional efforts to protect American communications could be thrown aside.

These are restrictions on Congress, not surveillance—as well as an open invitation to restart “about” searching.

What Else is Wrong?

The Burr bill includes an 8-year sunset period, the longest period included in current Section 702 reauthorization bills. The USA Liberty Act—introduced in the House—sunsets in six years. The USA Rights Act—introduced in the Senate—sunsets in four.

The Burr bill also allows Section 702-collected data to be used in criminal proceedings against U.S. persons so long as the Attorney General determines that the crime involves any of a wide range of subjects. Those subjects include death, kidnapping, serious bodily injury, incapacitation or destruction of critical infrastructure, and human trafficking. The Attorney General can also determine that the crime involves “cybersecurity,” a vague term open to broad abuse.

The Attorney General’s determinations in these situations are not subject to judicial review.

The bill also includes a small number of reporting requirements for the FBI Director and the FISC. These are minor improvements that are greatly outweighed by the bill’s larger problems.

No Protections from Warrantless Searching of American Communications

The Burr bill fails to protect U.S. persons from warrantless searches of their communications by intelligence agencies like the FBI and CIA.

The NSA conducts surveillance on foreign individuals living outside the United States by collecting communications both sent to and from them. Often, U.S. persons are communicating with these individuals, and those communications are swept up by the NSA as well. Those communications are then stored in a massive database that can be searched by outside agencies like the FBI and CIA. These unconstitutional searches do not require a warrant and are called “backdoor” searches because they skirt U.S. persons’ Fourth Amendment rights.

The USA Liberty Act, which we have written extensively about, creates a warrant requirement when government agents look through Section 702-collected data for evidence of a crime, but not for searches for foreign intelligence. The USA Rights Act creates warrant requirements for all searches of American communications within Section 702-collected data, with “emergency situation” exemptions that require judicial oversight.

The Burr bill offers nothing.

No Whistleblower Protections

The Burr bill also fails to extend workplace retaliation protections to intelligence community contractors who report what they believe is illegal behavior within the workforce. This protection, while limited, is offered by the USA Liberty Act. The USA Rights Act takes a different approach, approving new, safe reporting channels for internal government whistleblowers.

What’s Next?

The Burr bill has already gone through markup in the Senate Select Committee on Intelligence. This means that it could be taken up for a floor vote by the Senate.

Your voice is paramount right now. As 2017 ends, Congress is slammed with packages on debt, spending, and disaster relief—all of which require votes in less than six weeks. To cut through the logjam, members of Congress could potentially attach the Burr bill to other legislation, robbing surveillance reform of its own vote. It’s a maneuver that Senator Burr himself, according to a Politico report, approves of.

Just because this bill is ready doesn’t mean it’s good. Far from it, actually.

We need your help to stop this surveillance extension bill. Please tell your Senators that the FISA Amendments Reauthorization Act of 2017 is unacceptable.

Tell them surveillance requires reform, not regression. 

TAKE ACTION

STOP THE BURR BILL FROM EXTENDING NSA SPYING 8 YEARS

Related Cases: Jewel v. NSA
Categories: Aggregated News

Time Will Tell if the New Vulnerabilities Equities Process Is a Step Forward for Transparency

eff.org - Fri, 17/11/2017 - 05:00

The White House has released a new and apparently improved Vulnerabilities Equities Process (VEP), showing signs that there will be more transparency into the government’s knowledge and use of zero day vulnerabilities. In recent years, the U.S. intelligence community has faced questions about whether it “stockpiles” vulnerabilities rather than disclosing them to affected companies or organizations, and this scrutiny has only ramped up after groups like the Shadow Brokers have leaked powerful government exploits. According to White House Cybersecurity Coordinator Rob Joyce, the form of yesterday’s release and the revised policy itself are intended to highlight the government’s commitment to transparency because it’s “the right thing to do.”

EFF agrees that more transparency is a prerequisite to any debate about government use of vulnerabilities, so it’s gratifying to see the government take these affirmative steps. We also appreciate that the new VEP explicitly prioritizes the government’s mission of protecting “core Internet infrastructure, information systems, critical infrastructure systems, and the U.S. economy” and recognizes that exploiting vulnerabilities can have significant implications for privacy and security. Nevertheless, we still have concerns over potential loopholes in the policy, especially how they may play into disputes about vulnerabilities used in criminal cases.

The Vulnerabilities Equities Process has a checkered history. It originated in 2010 as an attempt to balance conflicting government priorities. On one hand, disclosing vulnerabilities to vendors and others outside the government makes patching and other mitigation possible. On the other, these vulnerabilities may be secretly exploited for intelligence and law enforcement purposes. The original VEP document described an internal process for weighing these priorities and reaching a decision on whether to disclose, but it was classified, and few outside of the government knew much about it. That changed in 2014, when the NSA was accused of long-term exploitation of the Heartbleed vulnerability. In denying those accusations and seeking to reassure the public, the government described the VEP as prioritizing defensive measures and disclosure over offensive exploitation.

The VEP document itself remained secret, however, and EFF waged a battle to make it public using a Freedom of Information Act lawsuit. The government retreated from its initial position that it could not release a single word, but our lawsuit concluded with a number of redactions remaining in the document.

The 2017 VEP follows a structure similar to the previous process: government agencies that discover previously unknown vulnerabilities must submit them to an interagency group, which weighs the “equities” involved and reaches a determination of whether to disclose. The process is facilitated by the National Security Council and the Cybersecurity Coordinator, who can settle appeals and disputes.

Tellingly, the new document publicly lists information that the government previously claimed would damage national security if released in our FOIA lawsuit. The government’s absurd overclassification and withholdings extended to such information as the identities of the agencies that regularly participate in the decision-making process, the timeline, and the specific considerations used to reach a decision. That’s all public now, without any claim that it will harm national security.

Many of the changes to the VEP do seem intended to facilitate transparency and to give more weight to policies that were previously not reflected in the official document. For example, Annex B to the new VEP lists “equity considerations” that the interagency group will apply to a vulnerability. Previously, the government had argued that a similar, less-detailed list of considerations published in a 2014 White House blog post was merely a loose guideline that would not be applied in all cases. We don’t know how this more rigorous set of considerations will play out in practice, but the new policy appears to be better designed to account for complexities such as the difficulty of patching certain kinds of systems. The new policy also appears to recognize the need for swift action when vulnerabilities the government has previously retained are exploited as part of “ongoing malicious cyber activity,” a concern we’ve raised in the Shadow Brokers case.

The new policy also mandates yearly reports about the VEP’s operation, including an unclassified summary. Again, it remains to be seen how much insight these reports will provide, and whether they will prompt further oversight from Congress or other bodies, but this sort of reporting is a necessary step.

In spite of these positive signs, we remain concerned about exceptions to the VEP. As written, agencies need not introduce certain vulnerabilities to the process at all if they are “subject to restrictions by partner agreements and sensitive operations.” Even vulnerabilities which are part of the process can be explicitly restricted by non-disclosure agreements. The FBI avoided VEP review of the Apple iPhone vulnerability in the San Bernardino case due to an NDA with an outside contractor, and such agreements are apparently extremely common in the vulnerabilities market. And exempting vulnerabilities involved in “sensitive operations” seems like an exceptionally wide loophole, since essentially all offensive uses of vulnerabilities are sensitive. Unchecked, these exceptions could undercut the process entirely, defeating its goal of balancing secrecy and disclosure.

Finally, we’ve seen the government rely on NDAs, classification, and similar restrictions to improperly and illegally withhold material from defendants in criminal cases. As the FBI and other law enforcement agencies increasingly use exploits to hack into unknown computers, the government should not be able to hide behind these secrecy claims to shield its methods from court scrutiny. We hope the VEP doesn’t add fuel to these arguments.

Related Cases: EFF v. NSA, ODNI - Vulnerabilities FOIA
Categories: Aggregated News

Court Rules Platforms Can Defend Users’ Free Speech Rights, But Fails to Follow Through on Protections for Anonymous Speech

eff.org - Fri, 17/11/2017 - 04:18

A decision by a California appeals court on Monday recognized that online platforms can fight for their users’ First Amendment rights, though the decision also potentially makes it easier to unmask anonymous online speakers.

Yelp v. Superior Court grew out of a defamation case brought in 2016 by an accountant who claims that an anonymous Yelp reviewer defamed him and his business. When the accountant subpoenaed Yelp for the identity of the reviewer, Yelp refused and asked the trial court to toss the subpoena on grounds that the First Amendment protected the reviewer’s anonymity.

The trial court ruled that Yelp did not have the right to object on behalf of its users and assert their First Amendment rights. It next ruled that even if Yelp could assert its users’ rights, it would have to comply with the subpoena because the reviewer’s statements were defamatory. It then imposed almost $5,000 in sanctions on Yelp for opposing the subpoena.

The trial court’s decision was wrong and dangerous, as it would have prevented online platforms from standing up for their users’ rights in court. Worse, the sanctions sent a signal that platforms could be punished for doing so. When Yelp appealed the decision earlier this year, EFF filed a brief in support [.pdf].

The good news is that the Fourth Appellate District of the California Court of Appeal heard those concerns and reversed the trial court’s ruling regarding Yelp’s ability – known in legal jargon as “standing” – to assert its users’ First Amendment rights.

In upholding Yelp and other online platforms’ legal standing to defend their users’ anonymous speech, the court correctly recognized that the trial court’s ruling would have a chilling effect on anonymous speech and the platforms that allow it. The court also threw out the sanctions the trial court issued against Yelp.

We applaud Yelp for fighting a bad court decision and standing up for its users in the face of court sanctions.  Although we’re glad that the court affirmed Yelp’s ability to fight for its users’ rights, another part of Monday’s ruling may ultimately make it easier for parties to unmask anonymous speakers.

After finding that Yelp could argue on behalf of its anonymous reviewer, the appeals court agreed with the trial court that Yelp nevertheless had to turn over information about its user on grounds that the review contained defamatory statements about the accountant.

In arriving at this conclusion, the court adopted a test that provides relatively weak protections for anonymous speakers. That test requires that plaintiffs seeking to unmask anonymous speakers make an initial showing that their legal claims have merit and that the platforms provide notice to the anonymous account being targeted by the subpoena. Once those prerequisites are met, the anonymous speaker has to be unmasked.

EFF does not believe that the California court’s test adequately protects the First Amendment rights of anonymous speakers, especially given that other state and federal courts have developed more protective tests. Anonymity is often a shield that allows speakers to express controversial or unpopular views, and it keeps the ensuing debate focused on the substance of the speech rather than the identity of the speaker.

Courts more protective of the First Amendment right to anonymity typically require that before unmasking speakers, plaintiffs must show that they can prove their claims—similar to what they would need to show at a later stage in the case. And even when plaintiffs prove they have a legitimate case, these courts separately balance plaintiffs’ need to unmask the users against those speakers’ First Amendment rights to anonymity.

By not adopting a more protective test, the California court’s decision potentially makes it easier for civil litigants to pierce online speakers’ anonymity, even when their legal grievances aren’t legitimate. This could invite a fresh wave of lawsuits designed to harass or intimidate anonymous speakers rather than vindicate actual legal grievances.

We hope that we’re wrong about the implications of the court’s ruling and that California courts will take steps to prevent abuse of unmasking subpoenas. In the meantime, online platforms should continue to stand up for their users’ anonymous speech rights and defend them in court when necessary.

Categories: Aggregated News

EFF Urges DHS to Abandon Social Media Surveillance and Automated “Extreme Vetting” of Immigrants

eff.org - Fri, 17/11/2017 - 01:46

EFF is urging the Department of Homeland Security (DHS) to end its programs of social media surveillance and automated “extreme vetting” of immigrants. Together, these programs have created a privacy-invading integrated system to harvest, preserve, and data-mine immigrants' social media information, including use of algorithms that sift through posts using vague criteria to help determine who to admit or deport.

EFF today joined a letter from the Brennan Center for Justice, Georgetown Law’s Center on Privacy and Technology, and more than 50 other groups urging DHS to immediately abandon its self-described "Extreme Vetting Initiative." Also, EFF's Peter Eckersley joined a letter from more than 50 technology experts opposing this program. This follows EFF's participation last month in comments from the Center for Democracy & Technology and dozens of other advocacy groups urging DHS to stop retaining immigrants' social media information in a government record-keeping system called "Alien Files" (A-files).

DHS for some time has collected social media information about immigrants and foreign visitors. DHS recently published a notice announcing its policy of storing that social media information in its A-Files. Also, DHS announced earlier this year that it is developing its “Extreme Vetting Initiative,” which will apply algorithms to the social media of immigrants to automate decision-making in deportation and other procedures.

These far-reaching programs invade the privacy and chill the freedoms of speech and association of visa holders, lawful permanent residents, and naturalized U.S. citizens alike. These policies not only invade privacy and chill speech, they also are likely to discriminate against immigrants from Muslim nations. Furthermore, other countries may imitate DHS’s policies, including countries where civil liberties are nascent and freedom of expression is limited.

Storing Social Media Information in the A-Files Chills First Amendment Rights

The U.S. government assigns alien registration numbers to people immigrating to the United States and to non-immigrants granted authorization to visit. In addition to containing these alien registration numbers, the government’s A-File record-keeping system stores the travel and immigration history of millions of people, including visa holders, asylees, lawful permanent residents, and naturalized citizens.

In our previous post on the DHS’s new A-Files policy, we outlined the many problems with the government’s use of this record-keeping system to store, share, and use immigrants’ social media information. In the new comments, we urge DHS to stop storing social media surveillance in the A-Files for the following reasons:

  • Chilled Expression. Activists, artists, and other social media users will feel pressure to censor themselves or even disengage completely from online spaces. Afraid of surveillance, the naturalized and U.S.-born citizens with whom immigrants engage online may also limit their social media presence by sanitizing or deleting their posts.
  • Privacy of Americans Invaded. DHS’s social media surveillance plan, while directed at immigrants, will burden the privacy of naturalized and U.S.-born citizens, too. Even after immigrants are naturalized, DHS will preserve their social media data in the A-Files for many years. DHS’s sweeping surveillance will also invade the privacy of the many millions of U.S.-born Americans who engage with immigrants on social media.
  • Creation of Second-Class Citizens. DHS’s 100-year retention of naturalized citizens’ social media content in A-Files means a life-long invasion of their privacy. Effectively, DHS’s policy will relegate over 20 million naturalized U.S. citizens to second-class status.
  • Unproven Benefits. While DHS claims that collecting social media can help identify security threats, research shows that expressive Internet conduct is an inaccurate predictor of one’s propensity for violence. Furthermore, potential bad actors can easily circumvent social media surveillance by deleting their content or altering their online personas. Also, the meaning of social media content is highly idiosyncratic. Posts replete with sarcasm and allusions are especially difficult to decipher. This task is further complicated by the rising use of non-textual information like emojis, GIFs, and “likes.”

Immigrants feel increasingly threatened by the policies of the Trump administration. Social media surveillance contributes to a climate of fear among immigrant communities, and deters First Amendment activity by immigrants and citizens alike. Thus, EFF urges DHS not to retain social media content in immigrants’ A-Files.

"Extreme Vetting" of Immigrants is Ineffective and Discriminatory

In July, DHS’s Immigration and Customs Enforcement (ICE) sought the expertise of technology companies to help it automate its review of social media and other information for purposes of immigration enforcement. Specifically, ICE documents reveal that DHS seeks to develop:

  1. “processes that determine and evaluate an applicant’s probability of becoming a positively contributing member of society as well as their ability to contribute to national interests”; and
  2. “methodology that allows [the agency] to assess whether an applicant intends to commit criminal or terrorist acts after entering the United States.”

In the November letter, we urge DHS to abandon “extreme vetting” for many reasons.

  • Chilling of Online Expression. ICE’s scouring of social media to make deportation and other immigration decisions will encourage immigrants, and Americans who communicate with immigrants, to censor themselves or delete their social media accounts. This will greatly reduce the quality of our national public discourse.
  • Technical Inadequacy. ICE’s hope to forecast national security threats via predictive analytics is misguided. The necessary computational methods do not exist. Algorithms designed to judge the meaning of text struggle to identify the tone of online posts, and most fail to understand the meaning of posts in other languages. Flawed human judgment can make human-trained algorithms similarly flawed.
  • Discriminatory Impact. ICE never defines the critical phrases “positively contributing member of society” and “contribute to national interests.” They have no meaning in American law. Efforts to automatically identify people on the basis of these nebulous concepts will lead to discriminatory results. Moreover, these vague and overbroad phrases originate in President Trump’s travel ban executive orders (Nos. 13,769 and 13,780), which courts have enjoined as discriminatory. Thus, extreme vetting would cloak discrimination behind a veneer of objectivity.

In short, EFF urges DHS to abandon “extreme vetting” and any other efforts to automate immigration enforcement. DHS should also stop storing social media information in immigrants’ A-Files. Social media surveillance of our immigrant friends and neighbors is a severe intrusion on digital liberty that does not make us safer.

Categories: Aggregated News

Stupid Patent Data of the Month: the Devil in the Details

eff.org - Thu, 16/11/2017 - 05:28
A Misunderstanding of Data Leads to a Misunderstanding of Patent Law and Policy

Bad patents shouldn’t be used to stifle competition. A process to challenge bad patents when they improperly issue is important to keeping consumer costs down and encouraging new innovation. But according to a recent post on a patent blog, post-grant procedures at the Patent Office regularly get it “wrong,” and improperly invalidate patents. We took a deep dive into the data patent lobbyists are relying on and found that, contrary to their arguments, it undermines their position and conflicts with the claims they’re making.

The Patent Office has several procedures to determine whether an issued patent was improperly granted to a party that does not meet the legal standard for patentability of an invention. The most significant of these processes is called inter partes review, and is essential to reining in overly broad and bogus patents. The process helps prevent patent trolling by providing a target with a low-cost avenue for defense, so it is harder for trolls to extract a nuisance-value settlement simply because litigating is expensive. The process is, for many reasons, disliked by some patent owners. Congress is taking a new look at this process right now as a result of patent owners’ latest attempts to insulate their patents from review.

An incorrect claim about the inter partes review (IPR) and other procedures like IPR at the Patent Trial and Appeal Board (PTAB) has been circulating, and was recently repeated in written comments at a congressional hearing by Philip Johnson, former head of intellectual property at Johnson & Johnson. Josh Malone and Steve Brachmann, writing for a patent blog called “IPWatchdog,” are the source of this error. In their article, cited in the comments to Congress, they claim that the PTAB is issuing decisions contrary to district courts at a very high rate.

We took a closer look at the data they use, and found that the rate of disagreement is actually quite small: about 7%, not the 76% claimed by Malone and Brachmann. How did they get it so wrong? To explain, we’ll have to get into the nuts and bolts of how such an analysis can be run.

Malone and Brachmann relied on data provided by a service called “Docket Navigator,” which collects statistics and documents related to patent litigation and enforcement. The search they used was to see how many cases Docket Navigator marked as a finding of “unpatentable” (from the Patent Office) and a finding of “not invalid” (from a district court).

This is a very, very simplistic analysis. For instance, it would consider an unpatentability finding by the PTAB about Claim 1 of a patent to be inconsistent with a district court finding that Claim 54 is not invalid. It would consider a finding of anticipation by the PTAB to be inconsistent with a district court rejecting an argument for invalidity based on a lack of written description. These are entirely different legal issues; different results are hardly inconsistent.

EFF, along with CCIA, ran the same Docket Navigator search Malone and Brachmann ran for patents found “not invalid” and “unpatentable or not unpatentable,” generating 273 results, and a search for patents found “unpatentable” and “not invalid,” generating 208 results (our analysis includes a few results that weren’t yet available when Malone and Brachmann ran their search). We looked into each of the 208 results that Docket Navigator returned for patents found unpatentable and not invalid. Our analysis shows that the “200” number, and consequently the rate at which the Patent Office is supposedly “wrong” compared to the times a court supposedly got it “right,” is well off the mark.

We reached our conclusions based on the following methodology:

  • We considered “inconsistent results” to occur any time the Patent Office reached a determination on any one of the conditions for patentability (namely, any of 35 U.S.C. §§ 101, 102, 103 or 112) and the district court reached a different conclusion based on the same condition for patentability, with some important caveats, as discussed below. For example, if the Patent Office found claims invalid for lack of novelty (35 U.S.C. § 102), we would not treat a district court finding of claims definite (35 U.S.C. § 112(b)) as inconsistent.
  • We did not distinguish between a finding of invalidity or lack of invalidity based on lack of novelty (35 U.S.C. § 102) or obviousness (35 U.S.C. § 103), as these bases are highly related. For example, if the Patent Office determined claims unpatentable based on anticipation, we would mark as inconsistent any jury finding that the claims were not obvious.
  • We did not consider a decision relating the validity of one set of claims to be inconsistent with a decision relating to the validity of a different, distinct set of claims. For example, if the Patent Office found claims 1-5 of a patent not patentable, we would not consider that inconsistent with a district court finding claims 6-10 not invalid. We would count as inconsistent, however, any two differing decisions that overlapped in terms of claims, even if there was not identity of claims.
  • We distinguished between the conditions for patentability of 35 U.S.C. § 112. For example, a district court finding of definiteness under 35 U.S.C. § 112(b) would not be treated as inconsistent with a Patent Office finding of lack of written description under 35 U.S.C. § 112(a).
  • We did not consider a district court decision to be inconsistent with Patent Office decision if that district court decision was later overturned by the Federal Circuit. However, we did treat a Patent Office decision as inconsistent with a district court decision even if that Patent Office decision were later reversed.1 For example, if the Patent Office found claims to be not patentable, but the Patent Office was later reversed by the Federal Circuit, we would still mark that decision as inconsistent with the district court. We even counted Patent Office decisions as inconsistent in the five cases where they were affirmed by the Federal Circuit and therefore were correct according to a higher authority than a district court. We did this in order to ensure we included results tending to support Malone and Brachmann’s thesis that the Patent Office was reaching the “wrong” results.
  • We excluded fourteen results that were not the result of any district court finding. Specifically, several patents were included because of findings by the International Trade Commission, an agency (like the Patent Office) that hears cases outside of an Article III court and without a jury. Those results would not meet Malone and Brachmann’s thesis of patents being considered “valid in full and fair trials in a court of law.”
  • We excluded two results that should not have been included in the set and appear to be a coding error by Docket Navigator. These results were excluded because there was no final decision from the Patent Office as to unpatentability.
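
The comparison rules above boil down to a simple test. Here is a purely illustrative sketch of that logic; it is not our actual analysis code, and the record format, statute tags, and claim numbers are hypothetical simplifications:

```python
# Simplified sketch of the consistency test described in the methodology above.
# Grounds are tagged by statute; §§ 102 and 103 are treated as one related ground.

NOVELTY_GROUNDS = {"102", "103"}

def same_ground(pto_ground, court_ground):
    """Two statutory grounds match if identical, or if both are § 102/§ 103."""
    if pto_ground == court_ground:
        return True
    return pto_ground in NOVELTY_GROUNDS and court_ground in NOVELTY_GROUNDS

def inconsistent(pto_decision, court_decision):
    """A Patent Office 'unpatentable' finding and a court 'not invalid' finding
    conflict only if they rest on matching grounds and overlapping claims."""
    if not same_ground(pto_decision["ground"], court_decision["ground"]):
        return False
    return bool(set(pto_decision["claims"]) & set(court_decision["claims"]))

# Hypothetical example: the PTO cancels claims 1-5 for obviousness (§ 103);
# the court upholds claims 1-3 against an anticipation (§ 102) challenge.
pto = {"ground": "103", "claims": [1, 2, 3, 4, 5]}
court = {"ground": "102", "claims": [1, 2, 3]}
print(inconsistent(pto, court))   # True: related grounds, overlapping claims

# If the court had only found the same claims definite under § 112(b),
# there would be no conflict with the PTO's § 103 determination.
court2 = {"ground": "112(b)", "claims": [1, 2, 3]}
print(inconsistent(pto, court2))  # False: different conditions for patentability
```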

Here’s what we found in the 194 remaining cases:

  • A plurality of the results (n=85) were only included because the Patent Office determined claims were unpatentable based on failure to meet one or more requirements for patentability (usually 35 U.S.C. § 102 or 103) and a district court found the claims met other requirements for patentability (usually 35 U.S.C. § 101 or 112). That is, the district court made no finding whatsoever relating to the reasons why the Patent Office determined the claims should be canceled. Thus the Patent Office and the court did not disagree as to a finding on validity.
    • For example, the Docket Navigator results include U.S. Patent No. 5,563,883. The Patent Office determined claims 1, 3, and 4 of that patent were unpatentable based on obviousness (35 U.S.C. § 103). A district court determined that those same claims however, met the definiteness requirements (35 U.S.C. § 112(b)). The Federal Circuit affirmed the Patent Office’s decision invalidating the claims, and the district court did not decide whether those claims were obvious at all.
  • A further 46 results were situations where either (1) the patent owner requested the Patent Office cancel claims or (2) claims were stipulated to be “valid” as part of a settlement in district court. Thus the Patent Office and the court findings were not inconsistent because at least one of them did not reach any decision on the merits.
    • For example, the Docket Navigator results include U.S. Patent No. 6,061,551. A jury found claims not invalid, but the Federal Circuit reversed that finding, holding the claims invalid. After that determination, the patent owner requested an adverse judgment at the Patent Office.
    • As another example, the Docket Navigator results include U.S. Patent No. 7,676,411. The Patent Office found claims invalid as abstract (35 U.S.C. § 101) and obvious (35 U.S.C. § 103). Because the parties stipulated that this patent was “valid” as part of a settlement, which is generally not considered to be a merits determination, this patent is also tagged as “not invalid” by Docket Navigator.
  • A further 15 results were not inconsistent for a variety of reasons.
    • For example, five results were not inconsistent because the Patent Office and the district court considered different patent claims. In another instance, U.S. Patent No. 7,135,641 was found not invalid by a jury, but the district court judge reversed that finding post-trial. In yet another, U.S. Patent No. 5,371,734 was held “not invalid” on summary judgment in the district court, but that determination was later reversed by the Federal Circuit.

Under this initial cut, only 48 of the entries arguably could be considered to have inconsistent or disagreeing results between the Patent Office and a district court.

But in the majority of those cases, a judge or jury considered one set of prior art when determining whether the claim was new and nonobvious, but the Patent Office considered a different set (n=28). It is not surprising that the two forums would consider different evidence. The Patent Office proceedings generally only consider certain types of prior art (printed publications). That a district court proceeding may result in a finding of “not invalid” based on, e.g., prior use, is not an inconsistent result.

Eliminating the results where the Patent Office was considering completely different arguments and art, the Patent Office arguably reached a different conclusion than a district court in only 20 of the 273 instances in which a district court determined a patent “not invalid” for some reason. That means the Patent Office is “inconsistent” with district courts only about 7% of the time, not 76% of the time.
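
For readers who want to retrace the arithmetic, here is a short sketch that walks the counts reported above through each filtering step:

```python
# Walking the counts reported in this post through the filters described above.
total_not_invalid = 273   # district court "not invalid" results from Docket Navigator
remaining = 194           # results left after excluding ITC-only results and coding errors

arguably_inconsistent = remaining - 85 - 46 - 15    # drop the non-conflicting categories
truly_inconsistent = arguably_inconsistent - 28     # drop cases decided on different prior art

print(arguably_inconsistent)                                # 48
print(truly_inconsistent)                                   # 20
print(round(100 * truly_inconsistent / total_not_invalid))  # 7 -> about 7% of "not invalid" rulings
print(round(100 * truly_inconsistent / 1800, 1))            # 1.1 -> about 1% of 1,800+ final decisions
```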

It is also important to keep in mind that there have been over 1,800 final decisions in inter partes review, covered business method review, and post-grant review proceedings. Out of all of those, the Patent Office reached a conclusion that may be considered inconsistent with a district court, in a way that negatively impacts patent owners, only 20 times. That’s a rate of only around 1%, which is remarkably low. Moreover, inconsistent results happen even within the court system. For example, in Abbott v. Andrx, 452 F.3d 1331, the Federal Circuit found that Abbott’s patent was likely to be held invalid. But only one year later, in Abbott v. Andrx, 473 F.3d 1196, the Federal Circuit found that the same patent was likely to be not invalid. The two different results were explained by the fact that the two defendants had presented different defenses. This is not unusual. Thus the fact that there may be different results doesn’t lead to a conclusion that the whole system is faulty.

An analysis like ours of this data set takes time, and a few cases might slip through the cracks or be incorrectly coded, but the overall result demonstrates that the vast majority of patent owners are never subject to inconsistent results between a district court and the Patent Office.

It is disappointing that Johnson, Malone, and Brachmann made claims that the data don’t support, but the episode demonstrates a valuable lesson. When using data sets, it is important to understand what, exactly, the data is and how to interpret it. Unfortunately, it looks like Malone and Brachmann’s misreading of the results provided by Docket Navigator propagated to Johnson’s testimony, and it would likely have traveled further if no one had looked harder at it.

We’ve used both Docket Navigator and Lex Machina in our analyses on numerous occasions, and even in briefs we submit to courts. Both services provide extremely valuable information about the state of patent litigation and policy. But their usefulness is diminished when the data they present are not understood. As always, the devil is in the details.


  • 1. For this reason, our results differ slightly from those of CCIA, reported here. CCIA did not treat decisions as inconsistent if the Patent Office decision was later affirmed on appeal. Five patents we considered inconsistent in our analysis were excluded in CCIA’s analysis. Each approach has merit.
Categories: Aggregated News

Announcing the Security Education Companion

eff.org - Thu, 16/11/2017 - 02:45

The need for robust personal digital security is growing every day. From grassroots groups to civil society organizations to individual EFF members, people from across our community are voicing a need for accessible security education materials to share with their friends, neighbors, and colleagues.

We are thrilled to help. Today, EFF has launched the Security Education Companion, a new resource for people who would like to help their communities learn about digital security but are new to the art of security training.

It’s rare to find someone with not only technical expertise but also a strong background in pedagogy and education. More often, folks are stronger in one area: someone might have deep technical expertise but little experience teaching, or, conversely, someone might have a strong background in teaching and facilitation but be new to technical security concepts. The Security Education Companion is meant to help these kinds of beginner trainers share digital security with their friends and neighbors in short awareness-raising gatherings.

Lesson modules guide you through creating sessions for topics like passwords and password managers, locking down social media, and end-to-end encrypted communications, along with handouts, worksheets, and other remix-able teaching materials. The Companion also includes a range of shorter “Security Education 101” articles to bring new trainers up to speed on getting started with digital security training, foundational teaching concepts, and the nuts and bolts of planning a workshop.

Teaching requires mindful facilitation, thoughtful layering of content, sensitivity to learners’ needs and concerns, and mutual trust built up over time. When teaching security in particular, the challenge includes communicating counterintuitive security concepts, navigating different devices and operating systems, recognizing learners’ different attitudes toward and past experiences with various risks, and taking into account a constantly changing technical environment. What people learn—or don’t learn—has real repercussions.

Nobody knows this better than the digital security trainers currently pushing this work forward around the world, and we’ve been tremendously fortunate to learn from their expertise. We’ve interviewed dozens of U.S.-based and international trainers about what learners struggle with, their teaching techniques, the types of materials they use, and what kinds of educational content and resources they want. We’re working hard to ensure that the Companion supports, complements, and adds to the existing collective body of training knowledge and practice.

We will keep adding new materials in the coming months, so check back often as the Companion grows and improves. Together, we look forward to improving as security educators and making our communities safer.

Visit SEC.EFF.ORG

a resource for people teaching digital security to their friends and neighbors

Categories: Aggregated News

Appeals Court’s Disturbing Ruling Jeopardizes Protections for Anonymous Speakers

eff.org - Wed, 15/11/2017 - 12:38

A federal appeals court has issued an alarming ruling that significantly erodes the Constitution’s protections for anonymous speakers—and simultaneously hands law enforcement a near unlimited power to unmask them.

The Ninth Circuit’s decision in  U.S. v. Glassdoor, Inc. is a significant setback for the First Amendment. The ability to speak anonymously online without fear of being identified is essential because it allows people to express controversial or unpopular views. Strong legal protections for anonymous speakers are needed so that they are not harassed, ridiculed, or silenced merely for expressing their opinions.

In Glassdoor, the court’s ruling ensures that any grand jury subpoena seeking the identities of anonymous speakers will be valid virtually every time. The decision is a recipe for disaster precisely because it provides little to no legal protections for anonymous speakers.

EFF applauds Glassdoor for standing up for its users’ First Amendment rights in this case and for its commitment to do so moving forward. Yet we worry that without stronger legal standards—which EFF and other groups urged the Ninth Circuit to apply (read our brief filed in the case)—the government will easily compel platforms to comply with grand jury subpoenas to unmask anonymous speakers.

The Ninth Circuit Undercut Anonymous Speech by Applying the Wrong Test

The case centers on a federal grand jury in Arizona investigating allegations of fraud by a private contractor working for the Department of Veterans Affairs. The grand jury issued a subpoena to Glassdoor, which operates an online platform that allows current and former employees to comment anonymously about their employers, seeking the identities of the users behind eight accounts that posted about the contractor.

Glassdoor challenged the subpoena by asserting its users’ First Amendment rights. When the trial court ordered Glassdoor to comply, the company appealed to the U.S. Court of Appeals for the Ninth Circuit.

The Ninth Circuit ruled that because the subpoena was issued by a grand jury as part of a criminal investigation, Glassdoor had to comply absent evidence that the investigation was being conducted in bad faith.

There are several problems with the court’s ruling, but the biggest is that in adopting a “bad faith” test as the sole limit on when anonymous speakers can be unmasked by a grand jury subpoena, it relied on a U.S. Supreme Court case called Branzburg v. Hayes.

In challenging the subpoena, Glassdoor rightly argued that Branzburg was not relevant because it dealt with whether journalists had a First Amendment right to  protect the identities of their confidential sources in the face of grand jury subpoenas, and more generally, whether journalists have a First Amendment right to gather the news. This case, however, squarely deals with Glassdoor users’ First Amendment right to speak anonymously.

The Ninth Circuit ran roughshod over the issue, calling it “a distinction without a difference.” But here’s the problem: although the law is all over the map as to whether the First Amendment protects journalists’ ability to guard their sources’ identities, there is absolutely no question that the First Amendment grants anonymous speakers the right to protect their identities.

The Supreme Court has repeatedly ruled that the First Amendment protects anonymous speakers, often by emphasizing the historic importance of anonymity in our social and political discourse. For example, many of our founders spoke anonymously while debating the provisions of our Constitution.

Because the Supreme Court in Branzburg did not outright rule that reporters have a First Amendment right to protect their confidential sources, it adopted a rule that requires a reporter to respond to a grand jury subpoena for their source’s identity unless the reporter can show that the investigation is being conducted in bad faith. This is a very weak standard and difficult to prove.

By contrast, because the right to speak anonymously has been firmly established by the Supreme Court and in jurisdictions throughout the country, the tests for when parties can unmask those speakers are more robust and protective of their First Amendment rights. These tests more properly calibrate the competing interests between the government’s need to investigate crime and the First Amendment rights of anonymous speakers.

The Ninth Circuit’s reliance on Branzburg effectively eviscerates any substantive First Amendment protections for anonymous speakers by not imposing any meaningful limitation on grand jury subpoenas. Further, the court’s ruling puts the burden on anonymous speakers—or platforms like Glassdoor standing in their shoes—to show that an investigation is being conducted in bad faith before setting aside the subpoena.

The Ninth Circuit’s reliance on Branzburg is also wrong because the Supreme Court ruling in that case was narrow and limited to the situation involving reporters’ efforts to guard the identities of their confidential sources. As Justice Powell wrote in his concurrence, “I … emphasize what seems to me to be the limited nature of the Court’s ruling.” The standards in that unique case should not be transported to cases involving grand jury subpoenas to unmask anonymous speakers generally. However, that’s what the court has done—expanded Branzburg to now apply in all instances in which a grand jury subpoena targets individuals whose identities are unknown to the grand jury.

Finally, the Ninth Circuit’s use of Branzburg is further improper because there are a number of other cases and legal doctrines that more squarely address how courts should treat demands to pierce anonymity. Indeed, as we discussed in our brief, there is a whole body of law that applies robust standards to unmasking anonymous speakers, including the Ninth Circuit’s previous decision in Bursey v. U.S., which also involved a grand jury.

The Ninth Circuit Failed to Recognize the Associational Rights of Anonymous Online Speakers

The court’s decision is also troubling because it takes an extremely narrow view of the kind of anonymous associations that should be protected by the First Amendment. In dismissing claims by Glassdoor that the subpoena chilled its users’ First Amendment rights to privately associate with others, the court ruled that because Glassdoor was not itself a social or political organization such as the NAACP, the claim was “tenuous.”

There are several layers to the First Amendment right of association, including the ability of individuals to associate with others, the ability of individuals to associate with a particular organization or group, and the ability for a group or organization to maintain the anonymity of members or supporters.

Although it’s true that Glassdoor users are not joining an organization like the NAACP or a union, the court’s analysis ignores that other associational rights are implicated by the subpoena in this case. At minimum, Glassdoor’s online platform offers the potential for individuals to organize and form communities around their shared employment experiences. The First Amendment must protect those interests even if Glassdoor lacks an explicit political goal.

Moreover, even if it’s true that Glassdoor users may not have an explicitly political goal in commenting on their current or past employers, they are still associating online with others with similar experiences to speak honestly about what happens inside companies, what their professional experiences are like, and how they believe those employers can improve.

The risk of being identified as a Glassdoor user is a legitimate one that courts should recognize as analogous to the risks of civil rights groups or unions being compelled to identify their members. Disclosure in both instances chills individuals’ abilities to explore their own experiences, attitudes, and beliefs.

The Ninth Circuit Missed an Opportunity to Vindicate Online Speakers’ First Amendment Rights

Significantly absent from the court’s decision was any real discussion about the value of anonymous speech and its historical role in our country. This is a shame because the case would have been a great opportunity to show the importance of First Amendment protections for online speakers.

EFF has long fought for anonymity online because we know its importance in fostering robust expression and debate. Subpoenas such as the one issued to Glassdoor deter people from speaking anonymously about issues related to their employment. Glassdoor provides a valuable service because its anonymous reviews help inform other people’s career choices while also keeping employers accountable to their workers and potentially the general public.

The Ninth Circuit’s decision appeared unconcerned with this reality, and its “bad faith” standard places no meaningful limit on the use of grand jury subpoenas to unmask anonymous speakers. This will ultimately harm speakers who can now be more easily targeted and unmasked, particularly if they have said something controversial or offensive. 

Categories: Aggregated News

Who Has Your Back in Colombia? Our Third-Annual Report Shows Progress

eff.org - Wed, 15/11/2017 - 10:34

Fundación Karisma, in cooperation with EFF, has released its third-annual ¿Dónde Están Mis Datos? report, the Colombian version of EFF’s Who Has Your Back. And this year’s report has some good news.
 
According to the Colombian Ministry of Information and Communication Technologies, broadband Internet penetration in Colombia is well over 50% and growing fast. Like users around the world, Colombians put their most private data, including their online relationships, political, artistic and personal discussions, and even their minute-by-minute movements online. And all of that data necessarily has to go through one of a handful of ISPs. But without transparency from those ISPs, how can Colombians trust that their data is being treated with respect?
 
This project is part of a series across Latin America, adapted from EFF’s annual Who Has Your Back? report. The reports are intended to evaluate mobile and fixed ISPs to see which stand with their users when responding to government requests for personal information. While there’s definitely still room for improvement, the third edition of the Colombian report shows substantial progress.
 
The full report is available only in Spanish from Fundación Karisma, but here are some highlights.
 
This third-annual report goes further in evaluating companies than ever before. The 2017 edition doesn’t just look at ISPs’ data practices; it evaluates whether companies have corporate policies of gender equality and accessibility, whether they publicly report data breaches, and whether they’ve adopted HTTPS to protect their users and employees. By and large, the companies didn’t do very well on the new criteria, but that’s part of the point. Reports like this help push the companies to do better.
 
That’s especially clear by looking at the criteria evaluated in previous years. There’s been significant improvement.
 
New for 2017, a Colombian ISP, known as ETB, has released the country’s first transparency report. This type of report, which lists the number and type of legal demands for data from government and law enforcement, is essential to helping users understand the scope of Internet surveillance and make informed decisions about storing their sensitive data or engaging in private communications. We’ve long urged companies to release these reports regularly, and we’re happy to see a Colombian ISP join in.
 
In addition, this year’s report shows that more companies than ever are releasing public information about their data protection policies and their related corporate policies. We applaud this transparency, especially when their policies go further than the law requires as is the case with both Telefonica and ETB.
 
Finally, more companies than ever are taking the proactive step of notifying their users of data demands, even when they are not formally required to do so. This commitment is important because it gives users a chance to defend themselves against overreaching government requests. In most situations, a user is in a better position than a company to challenge a government request for personal information, and of course, the user has more incentive to do so.
 
We’re proud to have worked with Fundación Karisma to push for transparency and users’ rights in Colombia and look forward to seeing further improvement in years to come.

Categories: Aggregated News

¿Dónde Están Mis Datos en Colombia? Our Third-Annual Report Shows Progress

eff.org - Wed, 15/11/2017 - 10:34

Fundación Karisma, in cooperation with EFF, has released its third-annual ¿Dónde Están Mis Datos? report, the Colombian version of EFF’s Who Has Your Back. And this year’s edition has some good news.

According to the Colombian Ministry of Information and Communication Technologies, broadband Internet penetration in Colombia is well over 50% and growing fast. Like users around the world, Colombians put their most private data online, including their online relationships, political, artistic, and personal discussions, and even their minute-by-minute movements. And all of that data necessarily has to pass through one of the handful of available ISPs. But without transparency from those ISPs, how can Colombians trust that their data is being treated with respect?

This project is part of a series across Latin America, adapted from EFF’s annual Who Has Your Back? report. These reports are intended to evaluate mobile and fixed ISPs to see which ones stand with their users when responding to government requests for personal information. While there is clearly room for improvement, the third edition of the Colombian report shows substantial progress.

The full report is available, in Spanish only, from Fundación Karisma’s website [LINK], but here are some highlights.

This third-annual report evaluates companies more thoroughly than ever before. The 2017 edition doesn’t just look at ISPs’ data practices; it evaluates whether companies have corporate policies on gender equality and accessibility, whether they publicly report data breaches, and whether they have adopted HTTPS to protect their users and employees. By and large, the companies didn’t do very well on the new criteria, but that’s part of the point. Reports like this help push companies to do better.

That’s especially clear when looking at the criteria evaluated in previous years. There has been significant improvement.

New for 2017, a Colombian ISP, ETB, has released the country’s first transparency report. This type of report, which lists the number and type of legal demands for data from government and law enforcement, is essential to helping users understand the scope of Internet surveillance and make informed decisions about storing their sensitive data or engaging in private communications. We have long urged companies to publish these reports regularly, and we’re pleased to see a Colombian ISP join in.

In addition, this year’s report shows more companies than ever releasing public information about their data protection policies and related corporate policies. We applaud this transparency, especially when their policies go further than the law requires, as is the case with both Telefónica and ETB.

Finally, more companies than ever are proactively taking the step of notifying their users of data demands, even when they are not formally required to do so. This commitment is important because it gives users a chance to defend themselves against overreaching government requests. In most cases, a user is in a better position than a company to challenge a government request for personal information, and of course, the user has more incentive to do so.

We’re proud to have worked with Fundación Karisma to push for transparency and users’ rights in Colombia, and we look forward to seeing further improvement in the years to come.

Categories: Aggregated News

20 Years of Protecting Intermediaries: Legacy of 'Zeran' Remains a Critical Protection for Freedom of Expression Online

eff.org - Wed, 15/11/2017 - 06:58

This article first appeared on Nov. 10 in Law.com.

At the Electronic Frontier Foundation (EFF), we are proud to be ardent defenders of §230. Even before §230 was enacted in 1996, we recognized that all speech on the Internet relies upon intermediaries, like ISPs, web hosts, search engines, and social media companies. Most of the time, it relies on more than one. Because of this, we know that intermediaries must be protected from liability for the speech of their users if the Internet is to live up to its promise, as articulated by the U.S. Supreme Court in ACLU v. Reno, of enabling “any person … [to] become a town crier with a voice that resonates farther than it could from any soapbox” and hosting “content … as diverse as human thought.”

As we hoped—and based in large measure on the strength of the Fourth Circuit’s decision in Zeran—§230 has proven to be one of the most valuable tools for protecting freedom of expression and innovation on the Internet. In the past two decades, we’ve filed well over 20 legal briefs in support of §230, probably more than on any other issue, in response to attempts to undermine or sneak around the statute. Thankfully, most of these attempts were unsuccessful. In most cases, the facts were ugly—Zeran included. We had to convince judges to look beyond the individual facts and instead focus on the broader implications: that forcing intermediaries to become censors would jeopardize the Internet’s promise of giving a voice to all and supporting more robust public discourse than ever before possible.

This remains true today, and it is worth remembering now, in the face of new efforts in both Congress and the courts to undermine §230’s critical protections.

Attacks on §230: The First 20 Years

The first wave of attacks on §230’s protections came from plaintiffs who tried to plead around §230 in an attempt to force intermediaries to take down online speech they didn’t like. Zeran was the first of these, with an attempt to distinguish between “publishers” and “distributors” of speech that the Fourth Circuit rightfully rejected. As we noted above, the facts were not pretty: the plaintiff sought to hold AOL responsible after an anonymous poster used his name and phone number on an AOL message board to indicate—incorrectly—that he was selling horribly offensive t-shirts about the Oklahoma City bombing. The court rightfully held that §230 protected against liability for both publishing and distributing user content.

The second wave of attacks came from plaintiffs trying to deny §230 protection to ordinary users who reposted content authored by others—i.e., an attempt to limit the statute to protecting only formal intermediaries. In one case, Barrett v. Rosenthal, the plaintiffs initially succeeded in the California Court of Appeal. But in 2006, the California Supreme Court ruled that §230 protects all non-authors who republish content, not just formal intermediaries like ISPs. This ruling—which was urged by EFF as amicus along with several other amici—still protects ordinary bloggers and Facebook posters in California from liability for content they merely republish. Unsurprisingly, the California Supreme Court’s opinion included a four-page section dedicated entirely to Zeran.

Another wave of attacks, also in the mid-2000s, came as plaintiffs tried to use the Fair Housing Act to hold intermediaries responsible when users posted housing advertisements that violated the law. Both Craigslist and Roommates.com were sued over discriminatory housing advertisements posted by their users. The Seventh Circuit, at the urging of EFF and other amici, held that §230 immunized Craigslist from liability for classified ads posted by its users—citing Zeran first in a long line of cases supporting broad intermediary immunity. Despite our best efforts, however, the Ninth Circuit found that §230 did not immunize Roommates.com from liability if, indeed, it was subject to the law. The majority opinion ignored both us and Zeran, citing the case only once in a footnote responding to the strong dissent. It found that Roommates.com could be at least partially responsible for the development of the ads because it had forced its users to fill out a questionnaire about housing preferences that included options that the plaintiffs asserted were illegal. The website endured four more years of needless litigation before the Ninth Circuit ultimately found that it hadn’t actually violated any anti-discrimination laws at all, even with the questionnaire. The court left its earlier opinion intact, however, and we were worried the exception carved out in Roommates.com would wreak havoc on §230’s protections. It luckily hasn’t been applied broadly by other courts—undoubtedly thanks in large part to Zeran’s stronger legal analysis and influence.

The Fight Continues

We are now squarely in the middle of a fourth wave of attack—efforts to hold intermediaries responsible for extremist or illegal online content. The goal, again, seems to be forcing intermediaries to actively screen users and censor speech. Many of these efforts are motivated by noble intentions, and the speech at issue is often horrible, but these efforts also risk devastating the Internet as we know it.

Some of the recent attacks on §230 have been made in the courts. So far, they have not been successful. In these cases, plaintiffs are seeking to hold social media platforms accountable on the theory that providing a platform for extremist content counts as material support for terrorism. Courts across the country have universally rejected these efforts. The Ninth Circuit will be hearing one of these cases, Fields v. Twitter, in December.

But the current attacks are unfortunately not only in the courts. The more dangerous threats are in Congress. Both the House and Senate are considering bills that would exempt charges under federal and state criminal and civil laws related to sex trafficking from §230’s protections—the Stop Enabling Sex Traffickers Act (S. 1693) (SESTA) in the Senate, and the Allow States and Victims to Fight Online Sex Trafficking Act (H.R. 1865) in the House. While the legislators backing these laws are largely well-meaning, and while these laws are presented as targeting commercial classified ads websites like Backpage.com, they don’t stop there. Instead, SESTA and its House counterpart would punish small businesses that just want to run a forum where people can connect and communicate. They would have disastrous consequences for community bulletin boards and comment sections, without making a dent in sex trafficking. In fact, it is already a federal criminal offense for a website to run ads that support sex trafficking, and §230 doesn’t protect against prosecutions for violations of federal criminal laws.

Ultimately, SESTA and its House counterpart would impact all platforms that host user speech, big and small, commercial and noncommercial. They would also impact any intermediary in the chain of online content distribution, including ISPs, web hosting companies, websites, search engines, email and text messaging providers, and social media platforms—i.e., the platforms that people around the world rely on to communicate and learn every day. All of these companies come into contact with user-generated content: ads, emails, text messages, social media posts. Under these bills, if any of this user-generated content somehow related to sex trafficking, even without the platform’s knowledge, the platform could be held liable.

Zeran’s analysis from 20 years ago demonstrates why this is a huge problem. Because these bills would have far-reaching implications—just like every other legislative proposal for limiting §230—they would open Internet intermediaries, companies, nonprofits, and community-supported endeavors alike to massive legal exposure. Under this cloud of legal uncertainty, new websites, along with their investors, would be wary of hosting open platforms for speech—or of even starting up in the first place—for fear that they would face crippling lawsuits if third parties used their websites for illegal conduct. They would have to bear litigation costs even if they were completely exonerated, as Roommates.com was after many years. Small platforms that already exist could easily go bankrupt trying to defend against these lawsuits, leaving only larger ones. And the companies that remain would be pressured to over-censor content in order to proactively avoid being drawn into a lawsuit.

EFF is concerned not only because this would chill new innovation and drive smaller players out of the market. Ultimately, these bills would also shrink the spaces online where ordinary people can express themselves, with disastrous results for community bulletin boards and local newspapers’ comment sections. They threaten to transform the relatively open Internet of today into a closed, limited, censored Internet. This is the very result that §230 was designed to prevent.

Since Zeran, the courts have recognized that without strong §230 protections, the promise of the Internet as a great leveler—amplifying and empowering voices that have never been heard, and allowing ideas to be judged on their merits rather than on the deep pockets of those behind them—will be lost. Congress needs to abandon its misguided efforts to undermine §230 and heed Zeran’s time-tested lesson: if we fail to protect intermediaries, we fail to protect online speech for everyone.

Categories: Aggregated News

EFF’s Street-Level Surveillance Project Dissects Police Technology

eff.org - Wed, 15/11/2017 - 06:36

Step onto any city street and you may find yourself subject to numerous forms of police surveillance—many imperceptible to the human eye.

A cruiser equipped with automated license plate readers (also known as ALPRs) may have just logged where you parked your car. A cell-site simulator may be capturing your cell-phone data incidentally while detectives track a suspect nearby. That speck in the sky may be a drone capturing video of your commute. Police might use face recognition technology to identify you in security camera footage.

EFF first launched its Street-Level Surveillance project in 2015 to help inform the public about the advanced technologies that law enforcement agencies are deploying in our communities, often without any transparency or public process. We’ve scored key victories in state legislatures and city councils, limiting the adoption of these technologies and how they can be used, but the surveillance continues to spread, agency by agency. To combat the threat, EFF is proud to release the latest update to our work: a new mini-site that shines light on a wide range of surveillance technologies, including ALPRs, cell-site simulators, drones, face recognition, and body-worn cameras.


Designed with community advocates, journalists, and policymakers in mind, Street-Level Surveillance seeks to answer the pressing questions about police technology. How does it work? What kind of data does it collect? How are police using it? Who’s selling it? What are the threats, and what is EFF doing to defend our rights? We also offer resources specially tailored for criminal defense attorneys, who must confront evidence collected by these technologies in court.

These resources are only a launching point for advocacy. Campus and community organizations working to increase transparency and accountability around the use of surveillance technology can find additional resources and support through our Electronic Frontier Alliance. We hope you’ll join us in 2018 as we redouble our efforts to combat invasive police surveillance. 

Categories: Aggregated News

Despite A Victory on IP, the TPP's Resurgence Hasn't Cured Its Ills

eff.org - Sat, 11/11/2017 - 10:59

Update: The official Ministerial statement on the new Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), including the schedule of suspended provisions, was released on November 11.

Ever since the United States withdrew from the Trans-Pacific Partnership (TPP) back in January, the remaining eleven countries have been quietly attempting to bring a version of the agreement into force. Following some initial confusion, it was finally announced today that they have reached an "agreement in principle" on "core elements" of a deal. 

Even so, Canada’s trade minister, François-Philippe Champagne, has confirmed that the agreement is far from finalized, recognizing that more work is needed on some key issues. Meanwhile, the TPP has been renamed the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), and an official statement is due to be released on Saturday, November 11. 

However, what we already know is that almost the entire Intellectual Property (IP) chapter, which had been the source of some of the most controversial elements of the original agreement, has been suspended. Back in August, EFF wrote to the TPP ministers explaining why it would make no sense to include copyright term extension in the agreement, because literally none of the remaining parties to the TPP would benefit from doing so. The apparent decision of the eleven TPP countries to exclude not only the copyright provisions but nearly the entire IP chapter from the agreement more than vindicates this. As we have explained at length elsewhere, IP simply isn’t an appropriate topic to be dealt with in trade negotiations, where issues such as the length of copyright and bans on circumventing DRM are traded off against totally unrelated issues like dairy quotas and sources of yarn used in garment manufacturing.

It is important to note that the agreement’s IP chapter has only been “suspended.” Ever since the U.S. pulled out of the TPP, the other countries involved have been trying to salvage the deal by suspending contentious elements. Suspending issues is a common tactic in trade negotiations, as it allows countries to declare victory despite major areas of disagreement. Moreover, suspending provisions does not stop countries from discussing them. As Michael Geist has pointed out, the IP chapter may still be subject to negotiation as part of working groups.

At present, there is also little clarity on how the suspended provisions would be treated if the U.S. rejoins the agreement. The eleven countries could ratify an agreement that automatically reinstates these provisions when the U.S. comes back. If the countries end up bound by provisions they never agreed to simply because the U.S. rejoined, the suspension of the IP chapter would not count for much.

Nevertheless, the exclusion of so much of the IP chapter at this stage of the negotiations is a strong rejection of U.S.-oriented provisions and a good sign for copyright standards being discussed at other trade venues. Canada, which has the second-biggest economy among the remaining TPP countries after Japan, is simultaneously negotiating the North American Free Trade Agreement (NAFTA) and will need to ensure consistency across NAFTA and the TPP. Other TPP nations, such as Vietnam and Japan, are involved in the Regional Comprehensive Economic Partnership (RCEP) negotiations.

Although the IP chapter was the worst of the TPP, it was not the only part of the agreement that should concern users. Provisions elsewhere in the agreement pose a threat to user rights, and we remain concerned about them. For example, the telecommunications chapter establishes a hierarchy of interests in which unfettered trade in telecommunications services, and measures to protect the security and confidentiality of messages, are prioritized over the privacy of users’ personal data. The investment chapter includes an investor-state dispute settlement (ISDS) process that enables multinational companies to challenge any new law or government action at the federal, state, or local level in a country that is a signatory to the agreement. The inclusion of such provisions not only makes no sense in trade agreements but is also an affront to democracy and a threat to any law designed to protect the public interest. The electronic commerce chapter, with its weak support for privacy, its toothless provisions on net neutrality, and the poor trade-off between access to the source code of imported products and the security of end users, also remains part of the agreement and is unlikely to change much. 

Any renegotiation of the agreement can only be successful if member states improve upon and fix the broken process of trade negotiations that led us to this point. The TPP negotiations have been carried out in secret, without public participation or even visibility into the draft document, although corporate lobbyists had direct access to the texts and the ability to influence the agreement. Even when member states have initiated consultations on the TPP at the national level, brief consultation periods between submissions and ministerial meetings have left stakeholders frustrated and with the sense that it is just “consultation theatre.” The only way we can trust that the TPP agreement will reflect users’ interests is if the reopened negotiations are inclusive, transparent, and balanced, and create avenues for meaningful consultation and participation from stakeholders.

The decision to exclude some of the most dangerous threats to the public's rights to free expression, access to knowledge, and privacy online is a big win for users, if indeed the TPP countries follow through with that decision as now seems likely. However, the TPP was, and remains, a bad model for Internet regulation. 

Categories: Aggregated News

Another Court Overreaches With Site-Blocking Order Targeting Sci-Hub

eff.org - Sat, 11/11/2017 - 06:21

Nearly six years ago, Internet user communities rose up and said no to the disastrous SOPA copyright bill. This bill proposed creating a new, quick court order process to compel various Internet services—free speech’s weak links—to help make websites disappear. Today, despite the failure of SOPA, a federal court in Virginia issued just such an order, potentially reaching many different kinds of Internet services.

The website in the crosshairs this time was Sci-Hub, a site that provides free access to research papers that are otherwise locked behind paywalls. Sci-Hub and sites like it are a symptom of a serious problem: people who can’t afford expensive journal subscriptions, and who don’t have institutional access to academic databases, are unable to use cutting-edge scientific research. Sci-Hub’s continued popularity both in the U.S. and in economically disadvantaged countries demonstrates the unfair imbalance in access to knowledge that prompted the site’s creation. Sci-Hub is also less revolutionary than its critics often imagine: it continued a longstanding tradition of informal sharing among researchers.

Whatever the legality of Sci-Hub itself, the remedy pursued in this case by the American Chemical Society and awarded by the court is a dangerous overreach.

Because Sci-Hub didn’t appear in court to defend itself, the court issued a default judgment. ACS, a scientific publisher, asked the court for an injunction to stop the infringement it claimed in the suit. But the injunction ACS proposed was incredibly broad: it purported to cover not only Sci-Hub but “any person or entity in privity with Sci-Hub and with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries.”

None of these companies were named in the suit. In fact, ACS probably couldn’t name them as legitimate defendants, because simply providing services to an infringing website, or including it in search results, doesn’t make an Internet service legally responsible for the infringement. What’s more, the Digital Millennium Copyright Act limits the remedies that courts can impose against many kinds of Internet intermediaries, including hosting services and search engines. That’s a vital protection for all Internet users, because without it, the services that help us access and communicate information over the Internet would face the impossible and error-prone task of policing innumerable users’ use of innumerable copyrighted works. Even attempting this would likely be so costly and daunting as to drive new Internet businesses out of the market, leaving today’s Internet behemoths (who can afford to do some of the policing that major media companies demand) in full control.

ACS bypassed both the DMCA and basic copyright law to get a court order directed at Internet intermediaries. It simply filed a proposed injunction labeling search engines, domain registrars, and so on as “entities in privity” with Sci-Hub. A magistrate judge adopted its proposal as-is.

The Computer and Communications Industry Association stepped in at that point with an amicus brief. They pointed out that injunctions can only be directed to a named party, or to those in “active concert or participation” with them. The “active concert” rule keeps a party from avoiding a court order by acting through an associate or coconspirator. It’s not a free pass to write a court order that binds anyone who does business with a defendant, especially where the law involved (here, copyright) excludes those third parties from liability. CCIA also pointed out that “privity” is a vague term with no fixed meaning in this context. It could potentially sweep in everyone who had ever engaged in the smallest business dealings with Sci-Hub.

Unfortunately, while the court removed the vague “privity” language from the injunction, it proceeded to issue the order, still directed at an open-ended swath of Internet companies that neither knew of nor caused Sci-Hub’s copyright infringement.

We hope that any Internet companies who get served with this order will challenge it in court rather than follow it blindly. If a domain name registrar, search engine, or other intermediary can be considered to be “in active concert” with a website that infringes, simply because they provide a basic service to that website, then the protections of copyright law and the DMCA can be rendered meaningless. Some Internet companies, including CloudFlare, have fought back against overbroad orders like this one and have succeeded in narrowing them.

Companies can step up and defend their users by insisting on proper procedure and valid orders before helping to take down a website, even one that appears to be infringing. Internet users will reward companies that stand up for the rule of law and fight the tools of censorship.

Categories: Aggregated News

House Judiciary Committee Forced Into Difficult Compromise On Surveillance Reform

eff.org - Fri, 10/11/2017 - 12:19

The House Judiciary Committee on Wednesday approved the USA Liberty Act, a surveillance reform package introduced last month by House Judiciary Committee Chairman Bob Goodlatte (R-VA) and Ranking Member John Conyers (D-MI).  The bill is seen by many as the best option for reauthorizing and reforming Section 702 of the FISA Amendments Act of 2008, which is set to expire in less than two months.

Some committee members described feeling forced to choose between supporting stronger surveillance reforms and advancing the Liberty Act. When Reps. Zoe Lofgren (D-CA) and Ted Poe (R-TX) introduced an amendment with broader surveillance reforms, members voiced their frustration about provisions that only partly block the warrantless search of Americans’ communications. Complicating their deliberations was the fact that the Senate Select Committee on Intelligence has already reported out a bill with far fewer surveillance protections.

Ranking Member Conyers reiterated the conundrum: “We have been assured in explicit terms that if we adopt this amendment today, leadership will not permit this bill to proceed to the House floor.”

He continued: “We have an opportunity to enact some meaningful reform. The alternative is no reform, and after all the work that we’ve put in, I don’t want this amendment to endanger the underlying legislation.”

Rep. Jerry Nadler (D-NY) summed up much of the internal conflict: “I rise in opposition to this amendment, though I wish I didn’t have to.”

Rep. Sheila Jackson Lee (D-TX) also appeared frustrated with the situation: “I’ll put on record that I resent being held hostage by leadership that does not know the intensity of the work and the responsibilities of the judiciary committee.”

When asked to clarify her vote in advancing the USA Liberty Act, Jackson Lee said “I am perplexed, but will be working to join in moving the bill forward.”

Rep. Jordan (R-OH) spoke up, too: “We’re the Judiciary Committee, charged with one thing and one thing only: defend the Constitution. Respect the Constitution. Adhere to the amendments in that great document, particularly, today, the Fourth Amendment. This is a darned good amendment.”

Rep. Ted Lieu (D-CA) also invoked his Constitutional duty: “Ultimately it’s important that we support the Constitution. That’s why we’re here. That’s the oath we took. I’m going to support the amendment.”

We appreciate the votes and the voices of Reps. Louie Gohmert (R-TX), Raúl Labrador (R-ID), Andy Biggs (R-AZ), Steve Cohen (D-TN), Ted Deutch (D-FL), David Cicilline (D-RI), Pramila Jayapal (D-WA), Jamie Raskin (D-MD), Conyers, Nadler, Jordan, Poe, Lofgren and Lieu.

Categories: Aggregated News

TSA Plans to Use Face Recognition to Track Americans Through Airports

eff.org - Fri, 10/11/2017 - 08:39

The “PreCheck” program is billed as a convenient service to allow U.S. travelers to “speed through security” at airports. However, the latest proposal released by the Transportation Security Administration (TSA) reveals the Department of Homeland Security’s greater underlying plan to collect face images and iris scans on a nationwide scale. DHS’s programs will become a massive violation of privacy that could serve as a gateway to the collection of biometric data to identify and track every traveler at every airport and border crossing in the country.

Currently TSA collects fingerprints as part of its application process for people who want to apply for PreCheck. So far, TSA hasn’t used those prints for anything besides the mandatory background check that’s part of the process. But this summer, TSA ran a pilot program at Atlanta’s Hartsfield-Jackson Airport and at Denver International Airport that used those prints and a contactless fingerprint reader to verify the identity of PreCheck-approved travelers at security checkpoints at both airports. Now TSA wants to roll out this program to airports across the country and expand it to encompass face recognition, iris scans, and other biometrics as well.

From Pilot Program to National Policy

While this latest plan is limited to the more than 5 million Americans who have chosen to apply for PreCheck, it appears to be part of a broader push within the Department of Homeland Security (DHS) to expand its collection and use of biometrics throughout its sub-agencies. For example, in pilot programs in Georgia and Arizona last year, Customs and Border Protection (CBP) used face recognition to capture pictures of travelers boarding a flight out of the country and walking across a U.S. land border, and compared those pictures to previously recorded photos from passports, visas, and “other DHS encounters.” In the Privacy Impact Assessments (PIAs) for those pilot programs, CBP said that, although it would collect face recognition images of all travelers, it would delete any data associated with U.S. citizens. But what began as DHS’s biometric travel screening of foreign citizens morphed, without congressional authorization, into screening of U.S. citizens, too. Now the agency plans to roll out the program to other border crossings, and it says it will retain photos of U.S. citizens and lawful permanent residents for two weeks and information about their travel for 15 years. It retains data on “non-immigrant aliens” for 75 years.

CBP has stated in PIAs that these biometric programs would be limited to international flights. However, over the summer, we learned CBP wants to vastly expand its program to cover domestic flights as well. It wants to create a “biometric” pathway that would use face recognition to track all travelers—including U.S. citizens—through airports from check-in, through security, into airport lounges, and onto flights. And it wants to partner with commercial airlines and airports to do just that.

Congress seems poised to provide both TSA and CBP with the statutory authority to support these plans. As we noted in earlier blog posts, the “Building America’s Trust” Act would require the Department of Homeland Security (DHS) to collect biometric information from all people who exit the U.S., including U.S. and foreign citizens. And the TSA Modernization Act, introduced earlier this fall, includes a provision that would allow the agencies to deploy “biometric technology at checkpoints, screening lanes, bag drop and boarding areas, and other areas where such deployment would enhance security and facilitate passenger movement.” The Senate Commerce Committee approved the TSA bill in October.

DHS Data in the Hands of Third Parties

These agencies aren’t just collecting biometrics for their own use; they are also sharing them with other agencies like the FBI and with “private partners” to be used in ways that should concern travelers. For example, TSA’s PreCheck program has already expanded outside the airport context. The vendor for PreCheck, a company called Idemia (formerly MorphoTrust), now offers expedited entry for PreCheck-approved travelers at concerts and stadiums across the country. Idemia says it will equip stadiums with biometric-based technology, not just for security, but also “to assist in fan experience.” Adding face recognition would allow Idemia to track fans as they move throughout the stadium, just as another company, NEC, is already doing at a professional soccer stadium in Medellín, Colombia, and at an LPGA championship event in California earlier this year.

CBP is also exchanging our data with private companies. As part of CBP’s “Traveler Verification Service,” it will partner with commercial airlines and airport authorities to get access to the facial images of travelers that those non-government partners collect “as part of their business processes.” These partners can then access CBP’s system to verify travelers as part of the airplane boarding process, potentially doing away with boarding passes altogether. As we saw earlier this year, several airlines are already planning to implement their own face recognition services to check bags, and some, like JetBlue, are already partnering with CBP to implement face recognition for airplane boarding.

The Threat to Privacy and Our Freedom to Travel

We cannot overstate how big a change this will be in how the federal government regulates and tracks our movements or the huge impact this will have on privacy and on our constitutional “right to travel” and right to anonymous association with others. Even as late as May 2017, CBP recognized that its power to verify the identification of travelers was limited to those entering or leaving the country. But the TSA Modernization Act would allow CBP and TSA to collect any biometrics they want from all travelers—international and domestic—wherever they are in the airport. That’s a big change and one we shouldn’t take lightly. Private implementation of face recognition at airports only makes this more ominous.

All Americans should be concerned about these proposals because the data collected—your fingerprint, the image of your face, and the scan of your iris—will be stored in FBI and DHS databases and will be searched again and again for immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

That creates a risk that individuals will be implicated for crimes and immigration violations they didn’t commit. These systems are notoriously inaccurate and contain out-of-date information, which poses a risk to all Americans. However, because immigrants and people of color are disproportionately represented in criminal and immigration databases, and because face recognition systems are less capable of identifying people of color, women, and young people, the weight of these inaccuracies will fall disproportionately on them.

This vast data collection will also create a huge security risk. As we saw with the 2015 Office of Personnel Management data breach and the 2017 Equifax breach, no government agency or private company is capable of fully protecting your private and sensitive information. But losing your social security or credit card numbers to fraud is nothing compared to losing your biometrics. While you can change those numbers, you can’t easily change your face.

Join EFF in speaking out against these proposals by emailing your senator and filing a comment opposing TSA’s plan today.

Take Action

No Airport Biometric Surveillance

Categories: Aggregated News

SESTA Approved by Senate Commerce Committee—Still an Awful Bill

eff.org - Thu, 09/11/2017 - 00:55

The Senate Commerce Committee just approved a slightly modified version of SESTA, the Stop Enabling Sex Traffickers Act (S. 1693).

SESTA was and continues to be a deeply flawed bill. It would weaken 47 U.S.C. § 230 (commonly known as “CDA 230” or simply “Section 230”), one of the most important laws protecting free expression online. Section 230 says that for purposes of enforcing certain laws affecting speech online, an intermediary cannot be held legally responsible for any content created by others.

SESTA would create an exception to Section 230 for laws related to sex trafficking, thus exposing online platforms to an immense risk of civil and criminal litigation. What that really means is that online platforms would be forced to take drastic measures to censor their users.

Some SESTA supporters imagine that compliance with SESTA would be easy—that online platforms would simply need to use automated filters to pinpoint and remove all messages in support of sex trafficking and leave everything else untouched. But such filters do not and cannot exist: computers aren’t good at recognizing subtlety and context, and with severe penalties at stake, no rational company would trust them to.

Online platforms would have no choice but to program their filters to err on the side of removal, silencing a lot of innocent voices in the process. And remember, the first people silenced are likely to be trafficking victims themselves: it would be a huge technical challenge to build a filter that removes sex trafficking advertisements but doesn’t also censor a victim of trafficking telling her story or trying to find help.
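
To make the over-blocking problem concrete, here is a minimal, hypothetical Python sketch of the kind of keyword filter a platform might feel pressured to deploy. The flagged terms and sample posts are invented for illustration and do not describe any real platform’s system.

    # Hypothetical keyword filter: terms and posts are invented for illustration only.
    FLAGGED_TERMS = {"escort", "trafficking", "for sale"}

    def naive_filter(post: str) -> bool:
        """Return True if a keyword match would cause the post to be removed."""
        text = post.lower()
        return any(term in text for term in FLAGGED_TERMS)

    posts = [
        "young escorts for sale, available tonight",                    # the ad such filters target
        "I was a victim of trafficking and escaped; here is my story",  # a survivor speaking out
        "Investigation: how trafficking rings recruit teens online",    # journalism
    ]

    for post in posts:
        print(naive_filter(post), "-", post)
    # All three print True: the filter removes the survivor's story and the reporting
    # right along with the advertisement, because it cannot read context.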

Along with the Center for Democracy and Technology, Access Now, Engine, and many other organizations, EFF signed a letter yesterday urging the Commerce Committee to change course. We explained the silencing effect that SESTA would have on online speech:

Pressures on intermediaries to prevent trafficking-related material from appearing on their sites would also likely drive more intermediaries to rely on automated content filtering tools, in an effort to conduct comprehensive content moderation at scale. These tools have a notorious tendency to enact overbroad censorship, particularly when used without (expensive, time-consuming) human oversight. Speakers from marginalized groups and underrepresented populations are often the hardest hit by such automated filtering.

It’s ironic that supporters of SESTA insist that computerized filters can serve as a substitute for human moderation: the improvements we’ve made in filtering technologies in the past two decades would not have happened without the safety provided by a strong Section 230, which provides legal cover for platforms that might harm users by taking down, editing or otherwise moderating their content (in addition to shielding platforms from liability for illegal user-generated content).

We find it disappointing, but not necessarily surprising, that the Internet Association has endorsed this deeply flawed bill. Its member companies—many of the largest tech companies in the world—will not feel the brunt of SESTA in the same way as their smaller competitors. Small Internet startups don’t have the resources to police every posting on their platforms, which will uniquely pressure them to censor their users—that’s particularly true for nonprofit and noncommercial platforms like the Internet Archive and Wikipedia. It’s not surprising when a trade association endorses a bill that would give its own members a massive competitive advantage.

If you rely on online communities in your day-to-day life; if you believe that your right to speak matters just as much on the web as on the street; if you hate seeing sex trafficking victims used as props to advance an agenda of censorship; please take a moment to write your members of Congress and tell them to oppose SESTA.

Take Action

Tell Congress: Stop SESTA

Categories: Aggregated News

Here's How Congress Should Respond to the Equifax Breach

eff.org - Wed, 08/11/2017 - 04:02

There is very little doubt that Equifax’s negligent security practices were a major contributing factor in the massive breach of 145.5 million Americans’ most sensitive information. In the wake of the breach, EFF has spent a lot of time thinking through how to ensure that such a catastrophic breach doesn’t happen again and, just as importantly, what Congress can do to ensure that victims of massive data breaches are compensated fairly when a company is negligent with their sensitive data. In this post, we offer up some suggestions that will go a long way toward accomplishing those goals.

A Federal Victims Advocate to Research and Report on Data Breaches

When almost half of the country has been affected by a data breach, it’s time for Congress to create a support structure for victims at the federal level.

Once a consumer’s information is compromised, there is a complex process to wade through to figure out who to call, what kind of protections to place on one’s credit information, and what legal remedies are available to hold those responsible accountable. To make it easier for consumers, a position should be created within the executive branch and given dedicated resources to support data breach victims.

This executive branch official, or even department, would be charged with producing rigorous research reports on the harm caused by data breaches. This is important because the federal courts have made it very hard to sue companies like Equifax. The judiciary has effectively blocked litigation by setting too high a standard for plaintiffs to prove they were harmed by a data breach. Federal research and data analyzing the financial harm Americans have faced will help bridge that gap. If attorneys can point to authoritative empirical data demonstrating that their clients have been harmed, they can make companies like Equifax accountable for their failures to secure data.

Federal Trade Commission Needs to Have Rule-making Authority

Speaking of the executive branch, the Federal Trade Commission (FTC) has a crucial role to play in dealing with data breaches. As it stands now, federal regulators have little power to ensure that entities like Equifax aren’t negligent in their security practices. Though Americans rely on credit agencies to get essential services—apartments, mortgages, credit cards, just to name a few—there isn’t enough oversight and accountability to protect our sensitive information, and that’s concerning.

Equifax could have easily prevented this catastrophic breach, but it didn’t take steps to do so. The company failed to patch its servers against a vulnerability that was being actively exploited, and on top of that, Equifax bungled its response to the data breach by launching a new site that could be easily imitated.

To ensure strong security, Congress needs to empower an expert agency like the FTC, which has both history and expertise in data security. This can be accomplished by restoring the FTC’s rule-making authority to set security standards and enforce them. The FTC is currently limited to intervening only in matters of unfair or deceptive business practices, and that authority is inadequate for addressing the increasingly sophisticated technological landscape and the collection of personal data by third parties.

Congress Should Not Preempt State Data Breach Laws

While empowering executive agencies to address data breaches, Congress should take care in ensuring that states don’t lose their own laws dealing with data breaches. Any federal law passed in response to the data breach should be the foundation—not the ceiling—upon which states can build according to their needs.

States are generally more capable of quickly responding to changing data collection practices. For example, California has one of the strongest laws when it comes to notifying people that their information was compromised in a data breach. Among other things, it prescribes a timeline for notifying victims and the manner in which notification must be made. And because a company has to build that notification infrastructure to comply with California’s law, it can use the same infrastructure to notify the rest of the country. Given this, Congress should not pass a law that would gut states’ ability to have strong, consumer-friendly data breach laws.

Create a Fiduciary Duty for Credit Bureaus to Protect Information

Congress must also acknowledge the special nature of credit bureaus. Very few of us chose to have our most sensitive information hoarded by an entity like Equifax, over which we have no control. Yet the country’s financial infrastructure relies on these bureaus to execute even the most basic transactions. Since credit bureaus occupy a privileged position in our society’s economic system, Congress needs to establish that they have a special obligation and a fiduciary duty to protect our data.

Ultimately, companies like Equifax, Experian, and TransUnion serve a purpose, but they lack a duty of care toward the individuals whose data they harvest and sell, because those individuals are not the bureaus’ customers. Without an obligation to adequately protect consumer data, we will likely see lax security that leads to more breaches on the scale of Equifax.

Give People their Day in Court

The first big problem for those seeking a remedy for data breaches is just getting into court at all, especially in sufficient numbers to make a company take notice. Too many people impacted by data breaches learn, to their great dismay, that somewhere in the fine print they agreed to a mandatory arbitration clause. This means they cannot go to court at all, or must pursue individual arbitration rather than joining a class-action lawsuit.

After the Equifax breach, much of the focus has been on binding arbitration clauses because of the company’s egregious attempt to use one to deny people their day in court. Given the scale of the breach and the harm it caused, companies like Equifax shouldn’t be able to prevent people from going to court in exchange for weak assistance like credit-monitoring services.

As Congress debates how to protect Americans’ legal rights after a breach, the focus should go beyond just prohibiting mandatory arbitration clauses. Congress should preserve, protect, and create an unwaivable private right of action for Americans to sue companies that are negligent with sensitive data.

We Don’t Need Additional Criminal Laws

A knee-jerk reaction to a significant breach like Equifax’s is to suggest that we need additional criminal laws aimed at those responsible. The reality is that we don’t know who was behind the Equifax breach, so there is no one to hold accountable. More significantly, knowing their identity would do nothing to ensure that Equifax actually applies crucial security patches when they are available. We don’t need increased criminal penalties—we need to incentivize protecting the data in the first place.

Another reason to avoid new criminal anti-hacking laws is that they often end up hurting security researchers and hackers who want to do good. For instance, in Equifax’s case, a security researcher had warned the company about its security vulnerabilities months before the breach, yet the company appears to have done nothing to fix them. The researcher couldn’t go public with those findings without risking significant jail time and other penalties.

Without a meaningful way for security testers to raise problems publicly, companies have little reason to keep up with the latest security practices, because they need not fear the resulting negative publicity. If Congress uses the Equifax breach to enhance or expand criminal penalties for unauthorized access under laws like the Computer Fraud and Abuse Act (CFAA), we’d all be worse off for it. Laws shouldn’t impede security testing or make it harder to discover and report vulnerabilities.

Free Credit Freezes, Not Credit Monitoring Services

Lastly, Congress needs to provide guidance on the immediate aftermath of a data breach. It’s become almost standard practice to offer credit-monitoring services to data breach victims. In reality, these services offer little protection to victims of data breaches. Many of them are inadequate in the alerts they send consumers, and more fundamentally, there’s little utility in being informed of improper usage of one’s credit information after it’s already been exploited. Consumers will still potentially have to spend hours to get their information cleared up with the various credit bureaus and entities where the information was used fraudulently.

Instead, Congress should require that victims of data breaches receive free credit freezes at all major credit bureaus; freezes are far more effective than monitoring at preventing financial harm. There are already proposals in Congress along these lines, and we are glad to see them.

There's no question that the Equifax breach has been a disaster. We at EFF are working with congressional offices to pass sensible reforms to ensure that it doesn't happen again.

Categories: Aggregated News
