Across the country, civilian journalists have documented government violence using cell phones to record police activities, forcing a much-needed national discourse. But in case after case, the people who face penalties in the wake of police violence are the courageous and quick-witted residents who use technology to enable transparency.
Earlier this month, the International Documentary Association launched an online petition to the Department of Justice asking the federal government to intervene when local police arrest or otherwise harass civilians who document and record police violence. EFF was proud to sign the petition, since this is an issue on which we have been increasingly active.
Led by filmmakers Laura Poitras and David Felix Sutcliffe, the petition also calls for an official investigation exploring "the larger pattern of abuse that has emerged on a federal, state, and local level, and the threat it poses to free speech and a free press." Finally, the petition urges "our peers in the journalistic community to investigate and report on these abuses."
Poitras' film Citizenfour, documenting the Edward Snowden revelations, won the 2015 Academy Award for Best Documentary Feature. Sutcliffe directed (T)error, which is the first film ever to document an FBI sting operation as it unfolds (and in the interest of full disclosure, briefly features the author of this post).
While the First Amendment protects freedom of the press, and applies to grassroots journalists in addition to their professional counterparts, those protections have often been disregarded by police officers unable to accept civilian oversight and the public exposure of their violence.
Meanwhile, despite well-settled jurisprudence establishing the right to observe and record police activities, even the federal judiciary has occasionally failed to vindicate these principles.
Arrests of grassroots journalists who record police activities implicate not only the 1st and 14th Amendments to the U.S. Constitution, but also the very legitimacy of our legal system, which grounds its claim to power in impartiality. Yet, around the country, the law has penalized people for pursuing constitutionally protected activities that enhance transparency, while turning a blind eye to the violence prompting residents to place themselves at risk.
At issue is not merely a fundamental constitutional right, nor the transparency on which democracy rests, but the ability for community residents to use technology to document violence endured by their neighbors.
The monitoring of public servants who have pledged to "protect and serve" should not represent a risk in a free society. That's why EFF is proud to sign and support the International Documentary Association's petition.
We all know that the NSA uses word games to hide and downplay its activities. Words like "collect," "conversations," "communications," and even "surveillance" have suffered tortured definitions that create confusion rather than clarity.
There’s another one to watch: "targeted" v. "mass" surveillance.
Since 2008, the NSA has seized tens of billions of Internet communications. It uses the Upstream and PRISM programs—which the government claims are authorized under Section 702 of the FISA Amendments Act—to collect hundreds of millions of those communications each year. The scope is breathtaking, including the ongoing seizure and searching of communications flowing through key Internet backbone junctures, the searching of communications held by service providers like Google and Facebook, and, according to the government's own investigators, the retention of significantly more than 250 million Internet communications per year.
Yet somehow, the NSA and its defenders still try to pass 702 surveillance off as "targeted surveillance," asserting that it is incorrect when EFF and many others call it "mass surveillance."
Our answer: if "mass surveillance" includes the collection of the content of hundreds of millions of communications annually and the real-time search of billions more, then the PRISM and Upstream programs under Section 702 fully satisfy that definition.
This word game is important because Section 702 is set to expire in December 2017. EFF and our colleagues who banded together to stop the Section 215 telephone records surveillance are gathering our strength for this next step in reining in the NSA. At the same time, the government spin doctors are trying to avoid careful examination by convincing Congress and the American people that this is just "targeted" surveillance and doesn’t impact innocent people.

Section 702 Surveillance: PRISM and Upstream
PRISM and Upstream surveillance are two types of surveillance that the government admits that it conducts under Section 702 of the FISA Amendments Act, passed in 2008. Each kind of surveillance gives the U.S. government access to vast quantities of Internet communications.
Upstream gives the NSA access to communications flowing through the fiber-optic Internet backbone cables within the United States. This happens because the NSA, with the help of telecommunications companies like AT&T, makes wholesale copies of the communications streams passing through certain fiber-optic backbone cables. Upstream is at issue in EFF’s Jewel v. NSA case.
PRISM gives the government access to communications in the possession of third-party Internet service providers, such as Google, Yahoo, or Facebook. Less is known about how PRISM actually works, something Congress should shine some light on between now and December 2017.
Note that those two programs existed prior to 2008—they were just done under a shifting set of legal theories and authorities. EFF has had evidence of the Upstream program from whistleblower Mark Klein since 2006, and we have been suing to stop it ever since.

Why PRISM and Upstream are "Mass," Not "Targeted," Surveillance
Despite government claims to the contrary, here’s why PRISM and Upstream are "mass surveillance":
(1) Breadth of acquisition: First, the scope of collection under both PRISM and Upstream surveillance is exceedingly broad. The NSA acquires hundreds of millions, if not billions, of communications under these programs annually. Although, in the U.S. government’s view, the programs are nominally "targeted," that targeting sweeps so broadly that the communications of innocent third parties are inevitably and intentionally vacuumed up in the process. For example, a review of a "large cache of intercepted conversations" provided by Edward Snowden and analyzed by the Washington Post revealed that 9 out of 10 account holders "were not the intended surveillance targets but were caught in a net the agency had cast for somebody else." The material reviewed by the Post consisted of 160,000 intercepted e-mail and instant message conversations, 7,900 documents (including "medical records sent from one family member to another, resumes from job hunters and academic transcripts of schoolchildren"), and more than 5,000 private photos. In all, the cache revealed the "daily lives of more than 10,000 account holders who were not targeted [but were] catalogued and recorded nevertheless." The Post estimated that, at the U.S. government’s annual rate of "targeting," collection under Section 702 would encompass more than 900,000 user accounts annually. By any definition, this is "mass surveillance."
(2) Indiscriminate full-content searching. Second, in the course of accomplishing its so-called "targeted" Upstream surveillance, the U.S. government, in part through its agent AT&T, indiscriminately searches the contents of billions of Internet communications as they flow through the nation’s domestic, fiber-optic Internet backbone. This type of surveillance, known as "about surveillance," involves the NSA's retention of communications that are neither to nor from a target of surveillance; rather, it authorizes the NSA to obtain any communications "about" the target. Even if the acquisition of communications containing information "about" a surveillance target could, somehow, still be considered "targeted," the method for accomplishing that surveillance cannot be: "about" surveillance entails a content search of all, or substantially all, international Internet communications transiting the United States. Again, by any definition, Upstream surveillance is "mass surveillance." For PRISM, while less is known, it seems the government is able to search through—or require companies like Google and Facebook to search through—all the customer data stored by the corporations for communications to or from its targets.

Seizure: Fourth Amendment and the Wiretap Act
To accomplish Upstream surveillance, the NSA copies (or has its agents like AT&T copy) Internet traffic as it flows through the fiber-optic backbone. This copying, even if the messages are only retained briefly, matters under the law. Under U.S. constitutional law, when the federal government "meaningfully interferes" with an individual’s protected communications, those communications have been "seized" for purposes of the U.S. Constitution’s Fourth Amendment. Thus, when the U.S. government copies (or has copied) communications wholesale and diverts them for searching, it has "seized" those communications under the Fourth Amendment.
Why does the government insist that it’s targeted? For Upstream, it may be because the initial collection and searching of the communications—done by service providers like AT&T on the government’s behalf—is really, really fast and much of the information initially collected is then quickly disposed of. In this way the Upstream collection is unlike the telephone records collection where the NSA kept all of the records it seized for years. Yet this difference should not change the conclusion that the surveillance is "mass surveillance." First, all communications flowing through the collection points upstream are seized and searched, including content and metadata. Second, as noted above, the amount of information retained—over 250 million Internet communications per year—is astonishing.
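To make the distinction concrete, here is a minimal, entirely hypothetical sketch of the difference between to/from collection and "about" collection. The messages, addresses, and selector below are invented for illustration; this is a toy model, not a description of the NSA's actual systems.

```python
# Toy model: to/from collection vs. "about" collection.
# All names and messages are invented for illustration.
messages = [
    {"sender": "target@example.com", "recipient": "alice@example.com",
     "body": "meeting at noon"},
    {"sender": "bob@example.com", "recipient": "carol@example.com",
     "body": "did you hear about target@example.com?"},
    {"sender": "dave@example.com", "recipient": "erin@example.com",
     "body": "lunch tomorrow?"},
]

SELECTOR = "target@example.com"

# To/from collection touches only messages addressed to or from the selector;
# the bodies of other people's messages are never examined.
to_from = [m for m in messages
           if SELECTOR in (m["sender"], m["recipient"])]

# "About" collection keeps any message that merely mentions the selector,
# which requires reading the body of EVERY message in the stream, even
# messages between two innocent strangers.
about = [m for m in messages if SELECTOR in m["body"]]

scanned = len(messages)  # every message's content was examined
print(scanned, len(to_from), len(about))  # 3 1 1
```

The point of the sketch: even when only one "about" message is retained, the content of all three had to be searched, which is why the method itself is mass surveillance regardless of how little is kept.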
Thus, regardless of the time spent, the seizure and search are comprehensive and invasive. First, using advanced computers, the NSA and its agents can run a full-text content search, in the blink of an eye, through billions, if not trillions, of your communications, including emails, social media posts, and web searches. Second, as demonstrated above, the government retains a huge amount of the communications—far more about innocent people than about its targets—so even based on what is retained, the surveillance is better described as "mass" rather than "targeted."

Yes, it is Mass Surveillance
So it is completely correct to characterize Section 702 as mass surveillance. It stems from the confluence of: (1) the method NSA employs to accomplish its surveillance, particularly Upstream, and (2) the breadth of that surveillance.
Next time you see the government or its supporters claim that PRISM and Upstream are "targeted" surveillance programs, you’ll know better.
 See, e.g., Charlie Savage, NSA Said to Search Content of Messages to and From U.S., N.Y. Times (Aug 8, 2013) (“The National Security Agency is searching the contents of vast amounts of Americans’ e-mail and text communications into and out of the country[.]”). This article describes an NSA practice known as “about surveillance”—a practice that involves searching the contents of communications as they flow through the nation’s fiber-optic Internet backbone.
 FISA Court Opinion by Judge Bates entitled [Caption Redacted], at 29 (“NSA acquires more than two hundred fifty million Internet communications each year pursuant to Section 702”), https://www.eff.org/document/october-3-2011-fisc-opinion-holding-nsa-surveillance-unconstitutional (Hereinafter, “Bates Opinion”). According to the PCLOB report, the “current number is significantly higher” than 250 million communications. PCLOB Report on 702 at 116.
 Bates Opinion at 29; PCLOB at 116.
 Id. at 35.
 PCLOB at 33-34.
 First, the Bush Administration relied solely on broad claims of Executive power, grounded in secret legal interpretations written by the Department of Justice. Many of those interpretations were subsequently abandoned by later Bush Administration officials. Beginning in 2006, DOJ was able to turn to the Foreign Intelligence Surveillance Court to sign off on its surveillance programs. In 2007, Congress finally stepped into the game, passing the Protect America Act, which, a year later, was substantially overhauled and passed again as the FISA Amendments Act. While neither of those statutes mentions the breadth of the surveillance, and it was not discussed publicly during the Congressional processes, both have been cited by the government as authorizing it.
 See note 1.
 Barton Gellman, Julie Tate, and Ashkan Soltani, In NSA-Intercepted Data, Those Not Targeted Far Outnumber the Foreigners Who Are, Washington Post (July 5, 2014).
 Bates Opinion at 15.
 PCLOB report at 119-120.
 See 18 U.S.C § 2511(1)(a); U.S. v. Councilman, 418 F.3d 67, 70-71, 79 (1st Cir. 2005) (en banc).Related Cases: Jewel v. NSA
This month, the online service provider CloudFlare stood up for its website-owner customers, and for all users of those websites, by telling a court that CloudFlare shouldn’t be forced to block sites without proper legal procedure. Copyright law limits the kinds of orders that a court can impose on Internet intermediaries, and requires courts to consider the pros and cons thoroughly. In this case, as in other recent cases, copyright (and trademark) holders are trying to use extremely broad interpretations of some basic court rules to bypass these important protections. As special interests keep trying to make things disappear from the Internet quickly, cheaply, and without true court supervision, it’s more important than ever that Internet companies like CloudFlare are taking a stand.
The current dispute between CloudFlare and a group of record labels arose from the labels’ case against the music streaming site MP3Skull. The website’s owners never appeared in court to defend themselves against a lawsuit by the labels. The labels, who are all members of the Recording Industry Association of America, won a court judgment by default in March of this year. The judgment included a permanent injunction against the site and those in “active concert and participation” with it. On the last day of June, the labels’ lawyers sent the order to CloudFlare and demanded that it immediately stop providing services to various Internet addresses and domain names connected with MP3Skull.
CloudFlare provides content delivery network services, optimization, and security for websites. Its CEO previously said on the company’s blog that “if we were to receive a valid court order that compelled us to not provide service to a customer then we would comply with that court order,” but that “there will be things on our network that make us uncomfortable[, and] our proper role is not that of Internet censor.” Last year, with help from EFF, CloudFlare successfully fought back against a court order that would have required it to act as trademark police for the music labels by shutting down any customer who used domain names like “grooveshark.”
CloudFlare is keeping up that legal approach in the MP3Skull case. It wrote to the U.S. District Court for the Southern District of Florida to say that while it “does not oppose an appropriate injunction,” the RIAA members should be required to follow the procedure set out in Section 512(j) of the Digital Millennium Copyright Act (the DMCA). That law limits the kinds of injunctions that can be imposed on Internet intermediaries like CloudFlare. It also requires courts to consider the pros and cons of ordering an intermediary to help enforce a copyright. Specifically, a court has to consider whether an order would “significantly burden” the service provider or its operations, how much harm the copyright holder is likely to experience without an order, whether the order would be technically feasible and effective, whether it would tend to block non-infringing material, and whether less burdensome measures are available.
None of that happened in this case. The court simply entered a broad injunction against the MP3Skull defendants by default after they failed to show up in court, and the labels then attempted to bind CloudFlare with that order months later. The labels didn’t mention the DMCA at all in their request to the court. Instead, they pointed to Rule 65 of the Federal Rules of Civil Procedure, which says that a court can issue injunctions against a party to the case or anyone in “active concert and participation” with a party. It’s that phrase that rightsholders have used to try to bind Internet intermediaries like CloudFlare without following the procedure laid out in DMCA 512(j), and similar limitations that the courts have created for trademark law.1
The “active concert” clause of Rule 65 is actually quite narrow: it’s meant to keep parties to a case from evading a court order by acting indirectly through a friend or associate. It doesn’t sweep every company that provides services to a defendant under the court’s power, and it doesn’t bypass more specific rules like DMCA 512(j). Making Rule 65 into an injunction trump card would lead to bizarre results: the courts would have more power over a service provider like CloudFlare if it is not named as a defendant in a lawsuit, and less power if the service provider were actually sued, given their day in court, and found liable. It’s easy to see why the law shouldn’t work that way.
Although another court found that CloudFlare was in “active concert and participation” with a trademark-infringing customer last year, that court also narrowed its injunction against CloudFlare, as trademark law requires. Still, the court should reject the record labels' argument that one injunction obtained by default can bind "countless conduit online service providers, search engines, web hosts, content delivery networks, and other service providers" -- in other words, the entire Internet -- without considering the burdens, costs, and alternatives for each, as Congress required.
The limits on court orders against intermediaries are vital safeguards against censorship, especially where the censorship is done on behalf of a well-financed party. That’s why it’s important for courts to uphold those limits even in cases where copyright or trademark infringement seems obvious. Court precedents and technical tools built today to go after “notorious pirates” will be used tomorrow against popular blogs, political commentators, satirists, and innocent businesses. Insisting on a full and fair legal process before blocking users becomes more important the larger an online service provider gets. That's why it’s great to see a service like CloudFlare stepping up to protect all Internet users by doing just that.
- 1. Tiffany (NJ) Inc. v. eBay Inc., 600 F.3d 93 (2d Cir. 2010)
U.S. border control agents want to gather Facebook and Twitter identities from visitors from around the world. But this flawed plan would violate travelers’ privacy, and would have a wide-ranging impact on freedom of expression—all while doing little or nothing to protect Americans from terrorism.
Customs and Border Protection, an agency within the Department of Homeland Security, has proposed collecting social media handles from visitors to the United States from visa waiver countries. EFF submitted comments both individually and as part of a larger coalition opposing the proposal.
CBP specifically seeks “information associated with your online presence—Provider/Platform—Social media identifier” in order to provide DHS “greater clarity and visibility to possible nefarious activity and connections” for “vetting purposes.”
In our comments, we argue that would-be terrorists are unlikely to disclose social media identifiers that reveal publicly available posts expressing support for terrorism.
But this plan would be more than just ineffective. It’s vague and overbroad, and would unfairly violate the privacy of innocent travelers. Sharing your social media account information often means sharing political leanings, religious affiliations, reading habits, purchase histories, dating preferences, and sexual orientations, among many other personal details.
Or, unwilling to reveal such intimate information to CBP, many innocent travelers would engage in self-censorship, cutting back on their online activity out of fear of being wrongly judged by the U.S. government. After all, it’s not hard to imagine some public social media posts being taken out of context or misunderstood by the government. In the face of this uncertainty, some may forgo visiting the U.S. altogether.
The proposed program would be voluntary, and for international visitors. But we are worried about a slippery slope, where CBP could require U.S. citizens and residents returning home to disclose their social media handles, or subject both foreign visitors and U.S. persons to invasive device searches at ports of entry with the intent of easily accessing any and all cloud data.
This would burden constitutional rights under the First and Fourth Amendments. CBP already started a social media monitoring program in 2010, and in 2009 issued a broad policy authorizing border searches of digital devices. We oppose CBP further invading the private lives of innocent travelers, including Americans.

Related Cases: United States v. Saboonchi
EFF recently launched Reclaim Invention, a project to encourage universities to manage their patent portfolios in a way that maximizes the public benefit. Specifically, we’ve urged universities to sign a Public Interest Patent Pledge not to sell or exclusively license patents to patent assertion entities, also known as patent trolls. EFF is proud to partner with Creative Commons, Engine, Fight for the Future, Knowledge Ecology International, and Public Knowledge on this initiative.
As part of our project, we’ve also released draft state legislation that we hope state legislators can adapt to promote pro-innovation technology transfer at state universities. Our legislative language has two components. First, it requires university technology transfer offices to adopt a policy committing them to manage patent assets in the public interest. University policy should include:
- researching the past practices of potential patent buyers or licensees;
- prioritizing technology transfer that develops inventions and scales their potential user base;
- endeavoring to nurture startups that will create new jobs, products, and services;
- fostering agreements and relationships that include the sharing of know-how and practical experience to maximize the value of the assignment or license of the corresponding patents.
The second part of the legislation voids any agreement to license or transfer a patent to a patent assertion entity.
The policies advanced by the proposed legislation are similar to principles that many in the university sector have already advocated for (like the Nine Points to Consider promoted by the Association of University Technology Managers). The view that universities should promote true technology transfer, and not trolling, is not radical. Despite general agreement on these points, universities sometimes sell to patent assertion entities like mega-troll Intellectual Ventures or dietary supplement troll ThermoLife. Legislation that carefully codifies the public-interest mission of university technology transfer would ensure that trolls don’t get hold of public universities’ patents.
Getting tech transfer legislation introduced in 50 states would be an enormous job, but that’s where you come in. If you’d like to see your state legislators fight patent trolls, then use our form to contact your state lawmakers.
Both in urging universities to change their patenting policies and in passing reforms on the state legislative level, we can’t replace local, on-the-ground activism. Reclaim Invention relies on our network of local activists in the Electronic Frontier Alliance and beyond. If you would like to help convince your college to sign the pledge or work with local lawmakers to introduce legislation, please contact us.
There has been significant activity relating to cases and patent infringement claims made by Shipping & Transit, LLC, formerly known as ArrivalStar. Shipping & Transit, who we’ve written about on numerous occasions, is currently one of the most prolific patent trolls in the country. Lex Machina data indicates that, since January 1, 2016, Shipping & Transit has been named in almost 100 cases. This post provides an update on some of the most important developments in these cases.
In many Shipping & Transit cases, Shipping & Transit has alleged that retailers allowing their customers to track packages sent by USPS infringe various claims of patents owned by Shipping & Transit, despite previously suing (and settling with) USPS. EFF represents a company that Shipping & Transit accused of infringing four patents.

Shipping & Transit Is Facing Numerous Alice Motions
In April 2014, the Supreme Court decided Alice v. CLS Bank, holding that “abstract ideas” are not patentable. Many courts have since applied that ruling, finding that patents are “abstract” and therefore invalid, often very early in litigation, saving significant time, money, and effort by the parties.
Several defendants have now asked courts to quickly find Shipping & Transit’s patents invalid under Alice. Neptune Cigars has filed a motion with the Central District of California, arguing that two Shipping & Transit patents (U.S. Patent Nos. 6,763,299 and 6,415,207) are invalid. That motion is pending.
Another defendant, Loginext, also filed a motion arguing that U.S. Patent 6,415,207 was invalid under Alice. Shipping & Transit quickly dismissed its case against Loginext, with Loginext paying nothing to Shipping & Transit. Loginext had also sent a “Rule 11” letter to Shipping & Transit pointing out that Loginext did not even exist when U.S. Patent No. 6,763,299 expired.
Our clients, Triple7Vaping.com LLC and Jason Cugle (together, Triple7), have also noted that the patents are likely invalid under Alice. When another party in the Southern District of Florida moved to dismiss under Alice, we asked the court to consolidate our case with that one, and provided a brief explaining in detail why the claims are invalid under Alice. The motion, however, was never decided: the original party that moved to dismiss settled with Shipping & Transit.

Unified Patents Filed an Inter Partes Review Against the ’207 Patent
On July 25, 2016, Unified Patents filed a petition for inter partes review of U.S. Patent 6,415,207 (the ’207 patent), one of the few Shipping & Transit patents that remains in force (many of Shipping & Transit’s patents expired in 2013). In its petition to the Patent Office to review the ’207 patent, Unified Patents argues that the patent is invalid because it is obvious in light of other patents, including a different, much older, Shipping & Transit patent.

Shipping & Transit Disclaims All Liability by Triple7
On May 31, 2016, Triple7 filed a lawsuit asking for a declaratory judgment that four of Shipping & Transit’s patents were invalid and not infringed. Triple7 also asked the court to find that Shipping & Transit violated Maryland state law when it made its claims of infringement, because the claims were made in bad faith.
In response, on July 21, 2016, Shipping & Transit covenanted not to sue Triple7, meaning it has disclaimed any possible claim of infringement against Triple7. In doing so, Shipping & Transit has sought to prevent the court from deciding the merits of Shipping & Transit’s claims of infringement. Triple7 has argued that the court retains that ability as part of the Maryland claim, and the court is expected to decide the issue soon.

Shipping & Transit Reveals The Minimal Investigation It Does Before It Sends A Demand Letter
Shipping & Transit asked the Court to dismiss Triple7’s claims for violations of Maryland State law. In doing so, it submitted two affidavits that detailed the investigation it engages in before sending a demand letter. In response, Triple7 argued that Shipping & Transit’s investigation was plainly deficient under binding Federal Circuit law.
While every individual case will have some differences, we hope that these materials are useful to current and future targets of Shipping & Transit’s trolling campaign.

Related Cases: Triple7Vaping.com, LLC et al. v. Shipping & Transit LLC
Despite near universal condemnation from Pakistan's tech experts, despite the efforts of a determined coalition of activists, and despite numerous attempts by alarmed politicians to patch its many flaws, Pakistan's Prevention of Electronic Crimes Bill (PECB) last week passed into law. Its passage ends an eighteen-month-long battle between Pakistan's government, which saw the bill as a flagship element of its anti-terrorism agenda, and the technologists and civil liberties groups who slammed the bill as an incoherent mix of anti-speech, anti-privacy, and anti-Internet provisions.
But the PECB isn't just a tragedy for free expression and privacy within Pakistan. Its broad reach has wider consequences for Pakistan nationals abroad, and for international criminal law as it applies to the Internet.
The new law creates broad crimes related to "cyber-terrorism" and its "glorification" online. It gives the authorities the opportunity to threaten, target and censor unpopular online speech in ways that go far beyond international standards or Pakistan's own free speech protections for offline media. Personal digital data will be collected and made available to the authorities without a warrant: the products of these data retention programs can then be handed to foreign powers without oversight.
PECB is generous to foreign intelligence agencies. It is far less tolerant of other foreigners, or of Pakistani nationals living abroad. Technologists and online speakers outside Pakistan should pay attention to the first clause of the new law:
- This Act may be called the Prevention of Electronic Crimes Act, 2016.
- It extends to the whole of Pakistan.
- It shall apply to every citizen of Pakistan wherever he may be and also to every other person for the time being in Pakistan.
- It shall also apply to any act committed outside Pakistan by any person if the act constitutes an offence under this Act and affects a person, property, information system or data location in Pakistan.
Poorly written cyber-crime laws criminalize everyday and innocent actions by technology users, and the PECB is no exception. It criminalizes the violation of terms of service in some cases, and ramps up the penalties for many actions that would be seen as harmless or positive in the non-digital world, including unauthorized copying and access. Security researchers and consumers frequently conduct "unauthorized" acts of access and copying for legitimate and lawful reasons: to exercise their right of fair use, to expose wrongdoing in government, or to protect the safety and privacy of the public. Violating a website's terms of service may be a violation of your agreement with that site, but no nation should turn those violations into felonies.
The PECB asserts an international jurisdiction for these new crimes. It says that if you are a Pakistan national abroad (over 8.5 million people, or 4% of Pakistan's total population) you too can be prosecuted for violating its vague statutes. And if a Pakistan court determines that you have violated one of the prohibitions listed in the PECB in such a way that it affects any Pakistani national, you can find yourself prosecuted in the Pakistan courts, no matter where you live.
Pakistan isn't alone in making such broad claims of jurisdiction. Some countries claim the power to prosecute a narrow set of serious crimes committed against their citizens abroad under international law's "passive personality principle" (the U.S. does so in some of its anti-terrorism laws). Other countries claim jurisdiction over the actions of their own nationals abroad under the "active personality principle" (for instance, in cases of treason).
But Pakistan's cyber-crime law asserts both principles simultaneously, and explicitly applies them to all cyber-crime, both major and minor, defined in PECB. That includes creating "a sense of insecurity in the [Pakistani] government" (Ch.2, 10), offering services to change a computer's MAC address (Ch.2, 16), or building tools that let you listen to licensed radio spectrum (Ch.2, 13 and 17).
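Changing a MAC address, one of the acts the PECB sweeps in, is a routine privacy technique: modern phones ship with MAC randomization built in. As an illustration of how ordinary the act is (a hypothetical sketch, not code from any tool discussed here), generating a valid randomized MAC address takes only a few lines:

```python
import random

def random_mac() -> str:
    """Generate a random, locally administered unicast MAC address,
    the kind of address MAC-randomization features hand out."""
    # First octet: set the locally-administered bit (0x02) and clear
    # the multicast bit (0x01), so the result is valid for a device.
    first = (random.randint(0, 255) | 0x02) & 0xFE
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

print(random_mac())  # six colon-separated hex pairs; varies each run
```

Applying such an address to a network interface is a one-line system command on most operating systems, which is precisely why treating it as a trans-border crime reaches so much innocuous activity.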
The universal application of such arbitrary laws could have practical consequences for the thousands of overseas Pakistanis working in the IT and infosecurity industries, as well as for those in the Pakistani diaspora who wish to publicly critique Pakistani policies. It also continues the global jurisdictional trainwreck that surrounds digital issues, where every country demands that its laws apply and must be enforced across a borderless Internet.
Applying what has been described as "the worst piece of cyber-crime legislation in the world" to the world is a bold ambition, and the current Pakistani government's reach may well have exceeded its grasp, both under international law and its own constitutional limits. The broad coalition who fought PECB in the legislature will now seek to challenge it in the courts.
But until they win, Pakistan has overlaid yet another layer of vague and incompatible crimes over the Internet, and its own far-flung citizenry.
For the second year in a row, EFF and a coalition of virtual currency and consumer protection organizations have beaten back a California bill that would have created untenable burdens for the emerging cryptocurrency community.
Unfortunately, the current bill in print does not meet the objectives to create a lasting regulatory framework that protects consumers and allows this industry to thrive in our state. More time is needed and these conversations must continue in order for California to be at the forefront of this effort.
State lawmakers were poised to quickly jam through an amended version of a digital currency licensing bill with new provisions that were even worse than last year’s version.
As in the previous version, the bill required a “digital currency business” to get approval from the state before operating in California and also comply with regulations similar to those applicable to banks and money transmitters. The amended bill, however, was so carelessly drafted that it would have forced Bitcoin miners, video game makers, and even digital currency users to register with a state agency and be subject to the new regulations.
Worse, the bill failed to accomplish its intent—protecting consumers—because it would have limited the number of digital currency options available to Californians.
EFF is grateful that Assemblymember Dababneh recognized there were problems with the legislation and put the brakes on sending it through the legislature as its session winds down.
That said, the bill demonstrates that there are still too many technical and policy gaps in the current thinking about digital currencies and the need for regulation.
EFF continues to believe that before lawmakers anywhere consider legislation regulating digital currencies, they need to better understand the technology at issue and demonstrate how the legislation actually benefits consumers. The California bill unfortunately failed in both respects.

A.B. 1326 Would Have Hurt Consumers
First, as EFF’s opposition letter to A.B. 1326 stated, the bill’s goal to protect consumers would have ironically been frustrated by the legislation, as it would have restricted access to currencies that benefit consumers in ways that non-digital currencies do not.
Many digital currencies allow individuals to directly transact with one another even when they do not know or trust each other. These currencies have significant benefits to consumers as they eliminate the third parties needed in non-digital transactions that can often be the sources of fraud or other consumer harm.
Further, intermediaries in traditional currency transactions, such as payment processors, are often the targets of financial censorship, which ultimately inhibits people’s ability to support controversial causes or organizations.
Because the bill would have allowed California’s Department of Business Oversight to determine which digital currency businesses operated in California, the government would have been deciding which currencies and businesses could be used, rather than consumers. This would have significantly limited Californians’ digital currency options, to their detriment.

A.B. 1326’s Vague Terms Would Have Required Consumers to Register
The bill was also written in a manner that failed to grasp how digital currencies work, leading to broad definitions of “digital currency business” that would have regulated not just businesses transacting on behalf of digital currency users, but the users themselves.
There were many vague definitions in the bill. Take, for example, a provision requiring anyone who transmits digital currency to another person to register and comply with its complex regulations.
Digital currency users often directly transmit digital currency value to others without any intermediary, meaning those users would have been subject to the regulations even though they are merely using a digital currency. Additionally, despite the bill purporting to have an exemption for parties such as Bitcoin miners, they would also have to register because in appending transactions to the Blockchain, they could be viewed as transmitting digital currency.
The bill also would have required video game makers who offer in-game digital currency or goods to register, as the exemption for such activity is limited to items or currency that have no value outside of the game. The reality is that many items and currencies within games often have independent markets in which players buy, sell, or exchange items, regardless of whether a game maker allows for those transactions. Those game makers, however, would have to obtain a license under the bill even though they often do not control the outside markets. The bill would have also created roadblocks for video game companies who offer in-game currency that can be used to buy real world items, such as T-shirts or stickers.
Additionally, the bill contained no exemption for start-ups or smaller companies innovating digital currencies, giving established currencies such as Bitcoin and its more sophisticated industry a leg up over competition.
The many problems with the bill would ultimately have been bad for the state, as it would have pushed innovation elsewhere and chilled a young and quickly evolving industry.
EFF recognizes that there are risks for consumers using digital currencies and appreciates lawmakers interested in addressing them. We think any legislative response, however, should be based on a better understanding of the state of digital currencies and narrowly focused on the situations that pose risks for consumers. Such an approach would preserve space for innovation in the industry while still protecting users.
Civil Rights Coalition files FCC Complaint Against Baltimore Police Department for Illegally Using Stingrays to Disrupt Cellular Communications
Civil Rights Groups Urge FCC to Issue Enforcement Action Prohibiting Law Enforcement Agencies From Illegally Using Stingrays
This week the Center for Media Justice, ColorOfChange.org, and New America’s Open Technology Institute filed a complaint with the Federal Communications Commission alleging the Baltimore police are violating the federal Communications Act by using cell site simulators, also known as Stingrays, that disrupt cellphone calls and interfere with the cellular network—and are doing so in a way that has a disproportionate impact on communities of color.
Stingrays operate by mimicking a cell tower and directing all cellphones in a given area to route communications through the Stingray instead of the nearby tower. They are especially pernicious surveillance tools because they collect information on every single phone in a given area, not just the suspect’s phone, which means they allow the police to conduct indiscriminate, dragnet searches. They are also able to locate people inside traditionally protected private spaces like homes, doctors’ offices, or places of worship. Stingrays can also be configured to capture the content of communications.
Because Stingrays operate on the same spectrum as cellular networks but are not actually transmitting communications the way a cell tower would, they interfere with cell phone communications within as much as a 500 meter radius of the device (Baltimore’s devices may be limited to 200 meters). This means that any important phone call placed or text message sent within that radius may not get through. As the complaint notes, “[d]epending on the nature of an emergency, it may be urgently necessary for a caller to reach, for example, a parent or child, doctor, psychiatrist, school, hospital, poison control center, or suicide prevention hotline.” But these and even 911 calls could be blocked.
The Baltimore Police Department could be among the most prolific users of cell site simulator technology in the country. A Baltimore detective testified last year that the BPD used Stingrays 4,300 times between 2007 and 2015. Like other law enforcement agencies, Baltimore has used its devices for major and minor crimes—everything from trying to locate a man who had kidnapped two small children to trying to find another man who took his wife’s cellphone during an argument (and later returned it). According to logs obtained by USA Today, the Baltimore PD also used its Stingrays to locate witnesses, to investigate unarmed robberies, and for mysterious “other” purposes. And like other law enforcement agencies, the Baltimore PD has regularly withheld information about Stingrays from defense attorneys, judges, and the public.
Moreover, according to the FCC complaint, the Baltimore PD’s use of Stingrays disproportionately impacts African American communities. Coming on the heels of a scathing Department of Justice report finding “BPD engages in a pattern or practice of conduct that violates the Constitution or federal law,” this may not be surprising, but it still should be shocking. The DOJ’s investigation found that BPD not only regularly makes unconstitutional stops and arrests and uses excessive force within African-American communities but also retaliates against people for constitutionally protected expression, and uses enforcement strategies that produce “severe and unjustified disparities in the rates of stops, searches and arrests of African Americans.”
Adding Stingrays to this mix means that these same communities are subject to more surveillance that chills speech and are less able to make 911 and other emergency calls than communities where the police aren’t regularly using Stingrays. A map included in the FCC complaint shows exactly how this is impacting Baltimore’s African-American communities. It plots hundreds of addresses where USA Today discovered BPD was using Stingrays over a map of Baltimore’s black population based on 2010 Census data included in the DOJ’s recent report:
The Communications Act gives the FCC the authority to regulate radio, television, wire, satellite, and cable communications in all 50 states, the District of Columbia and U.S. territories. This includes being responsible for protecting cellphone networks from disruption and ensuring that emergency calls can be completed under any circumstances. And it requires the FCC to ensure that access to networks is available “to all people of the United States, without discrimination on the basis of race, color, religion, national origin, or sex.” Considering that the spectrum law enforcement is using without permission is public property, leased to private companies for the purpose of providing next-generation wireless service, it goes without saying that the FCC has a duty to act.
The FCC must protect the American people from law enforcement practices that disrupt emergency communications and unconstitutionally discriminate against communities based on race. The FCC is charged with safeguarding the public's interest in transparency and equality of access to communication over the airwaves. Please join us in calling on the FCC to enforce the Communications Act and put an end to the widespread network interference caused by the BPD's rampant, unauthorized, and illegal use of Stingray technology.
But we should not assume that the Baltimore Police Department is an outlier—EFF has found that law enforcement agencies have been secretly using Stingrays for years, all across the country. No community should have to speculate as to whether such a powerful surveillance technology is being used on its residents. Thus, we also ask the FCC to engage in a rule-making proceeding that addresses not only the problem of harmful interference but also the duty of every police department to use Stingrays in a constitutional way, and to publicly disclose—not hide—the facts around acquisition and use of this powerful wireless surveillance technology.
When universities invent, those inventions should benefit everyone. Unfortunately, they sometimes end up in the hands of patent trolls—companies that serve no purpose but to amass patents and demand money from others. When a university sells patents to trolls, it undermines the university’s purpose as a driver of innovation. Those patents become landmines that make innovation more difficult.
A few weeks ago, we wrote about the problem of universities selling or licensing patents to trolls. We said that the only way that universities will change their patenting and technology transfer policies is if students, professors, and other members of the university community start demanding it.
It’s time to start making those demands.
We’re launching Reclaim Invention, a new initiative to urge universities to rethink how they use patents. If you think that universities should keep their inventions away from the hands of patent trolls, then use our form to tell them.
Central to our initiative is the Public Interest Patent Pledge (PIPP), a pledge we hope to see university leadership sign. The pledge says that before a university sells or licenses a patent, it will first check to make sure that the potential buyer or licensee doesn’t match the profile of a patent troll:
When determining what parties to sell or license patents to, [School name] will take appropriate steps to research the past practices of potential buyers or licensees and favor parties whose business practices are designed to benefit society through commercialization and invention. We will strive to ensure that any company we sell or license patents to does not have a history of litigation that resembles patent trolling. Instead, we will partner with those who are actively working to bring new technologies and ideas to market, particularly in the areas of technology that those patents inhabit.
One of our sources of inspiration for the pledge was the technology transfer community itself. In 2007, the Association of University Technology Managers (AUTM) released a document called Nine Points to Consider, which advocates transferring to companies that are actively working in the same fields of technology the patents cover, not those that will simply use them to demand licensing fees from others. More recently, the Association of American Universities (AAU) launched a working group on technology transfer policy, and that group’s early recommendations closely mirror AUTM’s (PDF). EFF has often found itself on the opposite side of policy fights from AUTM and AAU, but on this issue we largely agree with them: something needs to change.
Despite that good advice, many research universities continue to sell patents to trolls. Just a few weeks ago, we wrote about My Health, a company that appears to do nothing but file patent and trademark lawsuits. Its primary weapon is a patent from the University of Rochester. Rochester isn’t alone: dozens of universities regularly license patents to the notorious mega-troll Intellectual Ventures.
Good intentions and policy statements won’t solve the problem. Universities will change when students, professors, and alumni insist on it.

Local Organizers: You Can Make a Difference
We’re targeting this campaign at every college and university in the United States, from flagship state research institutions to liberal arts colleges. Why? Because patents affect everyone. The licensing decisions that universities make today will strengthen or sabotage the next generation of inventors and innovators. Together, we can make a statement that universities want more innovation-friendly laws and policies nationwide.
It would be impossible for any one organization to persuade every college and university to sign the pledge, so we’re turning to our network of local activists in the Electronic Frontier Alliance and beyond.
We’ve designed our petition to make it easy for local organizers to share the results with university leadership. For example, here are all of the people who’ve signed the petition with a connection to the University of South Dakota. If you volunteer for the USD digital civil liberties club—or if you’ve been looking to start it—then your group could write a letter to university leadership urging them to sign the pledge, and include the names of all of the signatories. We’re eager to work with you to make sure your voice is heard. You can write me directly with any questions.
Reclaim Invention represents a new type of EFF campaign. This is the first time we’ve launched a campaign targeting thousands of local institutions at once. It’s a part of our ongoing work to unite the efforts of grassroots digital rights activists across the country. Amazing things can happen when local activists coordinate their efforts.
Microsoft had an ambitious goal with the launch of Windows 10: a billion devices running the software by the end of 2018. In its quest to reach that goal, the company aggressively pushed Windows 10 on its users and went so far as to offer free upgrades for a whole year. However, the company’s strategy for user adoption has trampled on essential aspects of modern computing: user choice and privacy. We think that’s wrong.
You don’t need to search long to come across stories of people who are horrified and amazed at just how far Microsoft has gone to increase Windows 10’s install base. Sure, there is some misinformation and hyperbole, but there are also real concerns that current and future users of Windows 10 should be aware of. As the company rolls out its “Anniversary Update” to Windows 10, this is an appropriate time to examine the strategy behind the operating system’s deployment.

Disregarding User Choice
The tactics Microsoft employed to get users of earlier versions of Windows to upgrade to Windows 10 went from annoying to downright malicious. Some highlights: Microsoft installed an app in users’ system trays advertising the free upgrade to Windows 10. The app couldn’t be easily hidden or removed, but some enterprising users figured out a way. Then, the company kept changing the app and bundling it into various security patches, creating a cat-and-mouse game to uninstall it.
Eventually, Microsoft started pushing Windows 10 via its Windows Update system. It started off by pre-selecting the download for users and downloading it on their machines. Not satisfied, the company eventually made Windows 10 a recommended update, so users receiving critical security updates were now also downloading an entirely new operating system onto their machines without their knowledge. Microsoft even rolled in the Windows 10 ad as part of an Internet Explorer security patch. Suffice it to say, this is not the standard when it comes to security updates, and it isn’t how most users expect them to work. When installing security updates, users expect to patch their existing operating system, not to see an advertisement or discover that they have downloaded an entirely new operating system in the process.
In May 2016, in an action designed in a way we think was highly deceptive, Microsoft actually changed the expected behavior of a dialog window, a user interface element that’s been around and acted the same way since the birth of the modern desktop. Specifically, when prompted with a Windows 10 update, if the user chose to decline it by hitting the ‘X’ in the upper right hand corner, Microsoft interpreted that as consent to download Windows 10.
Time after time, with each update, Microsoft chose to employ questionable tactics to cause users to download a piece of software that many didn’t want. What users actually wanted didn’t seem to matter. In an extreme case, members of a wildlife conservation group in the African jungle felt that the automatic download of Windows 10 on a limited bandwidth connection could have endangered their lives if a forced upgrade had begun during a mission.

Disregarding User Privacy
The trouble with Windows 10 doesn’t end with forcing users to download the operating system. Windows 10 sends an unprecedented amount of usage data back to Microsoft, particularly if users opt in to “personalize” the software using the OS assistant called Cortana. Here’s a non-exhaustive list of data sent back: location data, text input, voice input, touch input, webpages you visit, and telemetry data regarding your general usage of your computer, including which programs you run and for how long.
While we understand that many users find features like Cortana useful, and that such features would be difficult (though not necessarily impossible) to implement in a way that doesn’t send data back to the cloud, the fact remains that many users would much prefer not to use these features in exchange for maintaining their privacy.
And while users can disable some of these settings, there is no guarantee that your computer will stop talking to Microsoft’s servers. A significant issue is the telemetry data the company receives. While Microsoft insists that it aggregates and anonymizes this data, it hasn’t explained just how it does so. Microsoft also won’t say how long this data is retained, instead providing only general timeframes. Worse yet, unless you’re an enterprise user, you have to share at least some of this telemetry data with Microsoft, and there’s no way to opt out of it.
Microsoft has tried to explain this lack of choice by saying that Windows Update won’t function properly on copies of the operating system with telemetry reporting turned down to its lowest level. In other words, Microsoft is claiming that giving ordinary users more privacy by letting them turn telemetry reporting down to its lowest level would risk their security, since they would no longer get security updates [1]. (Notably, this is not something many articles about Windows 10 have touched on.)
But this is a false choice that is entirely of Microsoft’s own creation. There’s no good reason why the types of data Microsoft collects at each telemetry level couldn’t be adjusted so that even at the lowest level of telemetry collection, users could still benefit from Windows Update and secure their machines from vulnerabilities, without having to send back things like app usage data or unique IDs like an IMEI number.
And if this wasn’t bad enough, Microsoft’s questionable upgrade tactics of bundling Windows 10 into various levels of security updates have also managed to lower users’ trust in the necessity of security updates. Sadly, this has led some people to forgo security updates entirely, meaning that there are users whose machines are at risk of being attacked.
There’s no doubt that Windows 10 has some great security improvements over previous versions of the operating system. But it’s a shame that Microsoft made users choose between privacy and security.

The Way Forward
Microsoft should come clean with its user community. The company needs to acknowledge its missteps and offer real, meaningful opt-outs to the users who want them, preferably in a single unified screen. It also needs to be straightforward in separating security updates from operating system upgrades going forward, and not try to bypass user choice and privacy expectations.
We at EFF have heard from many users who have asked us to take action, and we urge Microsoft to listen to these concerns and incorporate this feedback into the next release of its operating system. Otherwise, Microsoft may find that it has inadvertently discovered just how far it can push its users before they abandon a once-trusted company for a better, more privacy-protective solution.
Correction: an earlier version of this blog post implied that data collection related to Cortana was opt-out, when in fact the service is opt-in.
- 1. Confusingly, Microsoft calls the lowest level of telemetry reporting (which is not available on Home or Professional editions of Windows 10) the “security” level—even though it prevents security patches from being delivered via Windows Update.
CalGang is a joke.
California’s gang database contains data on more than 150,000 people that police believe are associated with gangs, often based on the flimsiest of evidence. Law enforcement officials would have you believe that it’s crucial to their jobs, that they use it ever so responsibly, and that it would never, ever result in unequal treatment of people of color.
But you shouldn’t take their word for it. And you don’t have to take ours either, or the dozens of other civil rights organizations calling for a CalGang overhaul. But you should absolutely listen to the California State Auditor’s investigation.
The state’s top CPA, Elaine Howle, cracked open the books and crunched the numbers as part of an audit:
This report concludes that CalGang’s current oversight structure does not ensure that law enforcement agencies (user agencies) collect and maintain criminal intelligence in a manner that preserves individuals’ privacy rights.
Brutal. But then there was more.
She wrote that CalGang receives “no state oversight” and operates “without transparency or meaningful opportunities for public input.”
She found that agencies could not substantiate 23 percent of the CalGang entries she reviewed. Thirteen out of 100 people she sampled had no legitimate reason for being in the database at all.
She found that law enforcement had ignored a five-year purging policy for more than 600 people, often extending the purge date to more than 100 years. They also frequently disregarded a law requiring police to notify the parents of minors before adding them to CalGang.
She found that there was “little evidence” that CalGang had met standards for protecting privacy and other constitutional rights.
As a result, user agencies are tracking some people in CalGang without adequate justification, potentially violating their privacy rights.
And then the other shoe dropped:
Further, by not reviewing information as required, CalGang’s governance and user agencies have diminished the system’s crime-fighting value.
To recap the audit: CalGang violates people’s rights, operates with no oversight, is chockfull of unsubstantiated information and data that should have been purged, and has diminished value in protecting public safety.
Assemblymember Shirley Weber has the start of a solution: A.B. 2298.
This bill would write into law new transparency and accountability measures for the controversial CalGang database and at least 11 other gang databases managed by local law enforcement agencies in California.
- Law enforcement would be required to notify you if they intend to add you to the database.
- You would have the opportunity to challenge your inclusion in a gang database.
- Law enforcement agencies would have to publish publicly available transparency reports with statistics on gang database additions, removals, and demographics.
EFF has joined dozens of civil rights groups like the Youth Justice Coalition to support this bill. If you live in California, please join us by emailing your elected officials today to put this bill on the governor’s desk.
Here are some other things you should know about CalGang.

What is CalGang?
CalGang is a data collection system used by law enforcement agencies to house information on suspected gang members. At last count, CalGang contained data on more than 150,000 people. As of 2016, the CalGang database is accessible by more than 6,000 law enforcement officers across the state from the laptops in their patrol vehicles.
As the official A.B. 2298 legislative analysis explains:
The CalGang system database, which is housed by the [California Department of Justice], is accessed by law enforcement officers in 58 counties and includes 200 data fields containing personal, identifying information such as age, race, photographs, tattoos, criminal associates, addresses, vehicles, criminal histories, and activities.
Something as simple as living on a certain block can label you as a possible Crip or Hell’s Angel, subjecting you to increased surveillance, police harassment, and gang injunctions. Police use the information in the database to justify an arrest, and prosecutors use it to support their request for maximum penalties.
Many of the Californians included in the CalGang database don’t know they’re on it. What’s worse: if you’re an adult on the list, you have no right to know you’re on it or to challenge your inclusion. Law enforcement agencies have lobbied aggressively to block legislation that would make the CalGang data more accessible to the public.

How Does CalGang Work?
In use for almost 20 years, CalGang holds information collected by beat officers during traffic stops and community patrols. The officers fill out Field Identification Cards with details supporting their suspicions, which can include pictures of the person’s tattoos and clothing. They can collect this information from any person at any time, no arrest necessary. The cards are then uploaded to CalGang at the discretion of the officer. Detectives also add to the database while mapping out connections and associations to the suspects they investigate. Any officer can access the information remotely at any time. So if, during the course of writing a fix-it ticket, an officer runs the driver’s name through the database and sees an entry, that officer can potentially formulate a bias against the driver.
Ali Winston’s Reveal News article about the horrors of CalGang shows how Facebook photos with friends can lead to criminal charges.
Aaron Harvey, a 26-year-old club promoter in Las Vegas at the time, was arrested and taken back to his native city of San Diego. He was charged with nine counts of gang conspiracy to commit a felony due to the fact that a couple of his Facebook friends from the Lincoln Park neighborhood where he grew up were believed to be in a street gang. Police further suspected that those friends took part in nine shootings, all of which occurred after Harvey had moved to Nevada. Even though no suspects were ever charged in connection to the actual shootings, Harvey still spent eight months in jail before a judge dismissed the gang conspiracy charges against him as baseless. As a direct result of his unjust incarceration, he lost his job and his apartment in Las Vegas and had to move in with family in San Diego.
Asked about his experience of gang classification systems, Harvey said, “It’s like a virus that you have, that you don’t know you have… (Someone) infected me with this disease; now I have it, and there’s no telling how many other people I have infected.”

It’s Based on Subjective Observations
The criteria used for determining gang affiliation are laughably broad. Much of the information that is considered to be evidence of gang activity is open to personal interpretation: being seen with suspected gang members, wearing “gang dress”, making certain hand signs, or simply being called a gang member by, as the CalGang procedural manual states, an “untested informant”. The presence of two of these criteria is considered enough evidence for people to be included in the database for at least 5 years and subject to a possible gang injunction (a court order that restricts where you can go and with whom you can interact).
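The two-of-these-criteria rule can be sketched as a toy model. This is purely illustrative: the criterion names below are paraphrased from the article, not CalGang's actual schema, and real entries are made at officers' discretion rather than by any such mechanical check.

```python
# Illustrative sketch only: a hypothetical model of the inclusion rule,
# under which meeting any two of the broad criteria is enough to land
# someone in the database for at least five years.
CRITERIA = {
    "seen_with_suspected_gang_members",
    "wearing_gang_dress",
    "making_gang_hand_signs",
    "named_by_untested_informant",
}

def meets_inclusion_threshold(observations, threshold=2):
    """Return True if at least `threshold` of the listed criteria are observed."""
    matched = CRITERIA.intersection(observations)
    return len(matched) >= threshold

# A photo with old friends plus neighborhood clothing can be enough:
print(meets_inclusion_threshold(
    {"seen_with_suspected_gang_members", "wearing_gang_dress"}))  # prints True
```

The point of the sketch is how low the bar sits: any two subjective observations, each open to an officer's interpretation, clear the threshold.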
A.B. 2298’s legislative analysis explains the flaw in this system.
[A]s a practical matter, it may be difficult for a minor, or a young-adult, living in a gang-heavy community to avoid qualifying criteria when the list of behaviors includes items such as “is in a photograph with known gang members,” “name is on a gang document, hit list or gang-related graffiti” or “corresponds with known gang members or writes and/or receives correspondence.” In a media-heavy environment, replete with camera phones and social network comments, it may be challenging for a teenager aware of the exact parameters to avoid such criteria, let alone a teenager unaware he or she is being held to such standards.
As we saw with Aaron Harvey, meeting three of the criteria can get you a gang conspiracy charge.

It’s Racially Biased
Patrol officers, because they directly engage the public during their daily beat, make many of the entries. The problem is that communities of color tend to be heavily policed in the first place. In a state that is 45% black and brown, Hispanic and African-American individuals make up 85% of the CalGang database. In a country where people of color are already targeted and criminally prosecuted at disproportionately higher rates, having a database that intensifies racial bias and penalizes thousands of Californians based on the neighborhood and community in which they live, their friends and other personal connections, what they wear, and the way that they pose in pictures is unconstitutional.
That being said, false gang ties can be attributed to anyone (with all the negative ramifications that go along with them) regardless of race. The database also includes people with tenuous ties to Asian gangs, white nationalist groups, and motorcycle clubs.

Lack of Transparency
Even though S.B. 458, passed in 2013, requires the state of California to notify parents of juveniles who are listed in the database (because some registrants are as young as 9 years old), a 2014 bill that would have extended that notification to adults was heavily resisted by law enforcement agencies and ultimately failed. As it stands today, an adult Californian who wants to know whether they are listed in CalGang has absolutely no recourse. There is no way to challenge incorrect assertions of gang affiliation. Most of the adults who are listed as potential gang members won’t find out until after an arrest.
In terms of governance, the State Auditor noted that because CalGang wasn’t created by a statute, there is no formal state oversight. Instead, it’s managed by two secretive committees, the CalGang Executive Board and the CalGang Node Advisory Committee. She writes:
Generally, CalGang’s current operations are outside of public view… we found that the CalGang users self-administer the committee’s audits and that they do not meaningfully report the results to the board, the committee, or the public. Further, CalGang’s governance does not meet in public, and neither the board nor the committee invites public participation by posting meeting dates, agendas, or reports about CalGang.
The last report from the California Department of Justice explaining the data in CalGang was published way back in 2010.
Correction: The figure regarding the number of individuals in the CalGang database has been adjusted from 200,000 to 150,000 based on updated numbers from the auditor's report.
As the Rock Against the TPP tour continues its way around the country, word is spreading that it's not too late for us to stop the undemocratic Trans-Pacific Partnership (TPP) in its tracks. The tour kicked off in Denver on July 23 with a line-up that included Tom Morello, Evangeline Lilly, and Anti-Flag, before hitting San Diego the following week, where Jolie Holland headlined.
And the tour isn't even half done yet! This weekend, Rock Against the TPP heads to Seattle on August 19 and Portland on August 20, featuring a number of new artists including Danbert Nobacon of Chumbawamba in Seattle, and hip-hop star Talib Kweli in Portland. The latest tour date to be announced is a stop in EFF's home city of San Francisco on September 9, featuring punk legend Jello Biafra.
EFF will be on stage for each of the three remaining dates to deliver a short message about the threats that the TPP poses to Internet freedom, creativity, and innovation both here in the United States, and across eleven other Pacific Rim countries. These threats include:
- Doubling down on U.S. law that makes it easy for copyright owners to have content removed from the Internet without a court order, and hard for users whose content is wrongly removed.
- Forcing six other countries to go along with our ridiculously long copyright term—life of the author plus another 70 years—which stops artists and fans from using music and art from a century ago.
- Imposing prison terms for those who disclose corporate secrets, break copyright locks, or share files, even if they are journalists, whistleblowers, or security researchers, and even if they're not making any money from it.
In addition, the TPP completely misses the opportunity to include meaningful protections for users. It fails to require other countries to adopt an equivalent to the fair use right in U.S. copyright law, it includes only weak and unenforceable language about the importance of a free and open Internet and net neutrality, and its provisions on encryption technology and software source code fail to offer any protection against crypto backdoors.
Rock Against the TPP is an opportunity to spread the word about these problems and to stand up to the corporate lobbyists and their captive trade negotiators who have spent years pushing the TPP against the people's will. First and foremost it's also a celebration of the creativity, passion, and energy of the artists and fans who are going to help to stop this flawed agreement.
If you can make it to Portland, Seattle, or San Francisco, please join us! Did we mention that the concerts are absolutely free? Reserve your tickets now, and spread the word to all your family and friends. With your help, the TPP will soon be nothing but a footnote in history.
A new federal government policy will result in the government releasing more of the software that it creates under free and open source software licenses. That’s great news, but doesn’t go far enough in its goals or in enabling public oversight.
A few months ago, we wrote about a proposed White House policy regarding how the government handles source code written by or for government agencies. The White House Office of Management and Budget (OMB) has now officially enacted the policy with a few changes. While the new policy is a step forward for government transparency and open access, a few of the changes in it are flat-out baffling.
As originally proposed (PDF), the policy would have required that code written by employees of federal agencies be released to the public. For code written by third-party developers, agencies would have been required to release at least 20% of it under a license approved by the Open Source Initiative—prioritizing “code that it considers potentially useful to the broader community.”
At the time, EFF recommended that OMB consider scrapping the 20% rule; it would be more useful for agencies to release everything, regardless of whether it was written by employees or third parties. Exceptions could be made in instances in which making code public would be prohibitively expensive or dangerous.
Instead, OMB went in the opposite direction: the official policy treats code written by government employees and contractors the same and puts code in both categories under the 20% rule. OMB was right the first time: code written by government employees is, by law, in the public domain and should be available to the public.
More importantly, though, a policy that emphasizes “potentially useful” code misses the point. While it’s certainly the case that people and businesses should be able to reuse and build on government code in innovative ways, that’s not the only reason to require that the government open it. It’s also about public oversight.
Giving the public access to government source code gives it visibility into government programs. With access to government source code—and permission to use it—the public can learn how government software works or even identify security problems. The 20% rule could have the unfortunate effect of making exactly the wrong code public. Agencies can easily sweep the code in most need of public oversight into the 80%. In fairness, OMB does encourage agencies to release as much code as they can “to further the Federal Government's commitment to transparency, participation, and collaboration.” But the best way to see those intentions through is to make them the rule.
Open government policy is at its best when its mandates are broad and its exceptions are narrow. Rather than trust government officials’ judgment about what materials to make public or keep private, policies like OMB’s should set the default to open. Some exceptions are unavoidable, but they should be limited and clearly defined. And when they’re invoked, the public should know what was exempted and why.
OMB has implemented the 20% rule as a three-year pilot. The office says that it will “evaluate pilot results and consider whether to allow the pilot program to expire or to issue a subsequent policy to continue, modify, or increase the minimum requirements of the pilot program.” During the next three years, we’ll be very interested to see how much code agencies release and what stays obscured.
Last year, while most of us were focused on the FCC’s Open Internet Order to protect net neutrality, the FCC quietly did one more thing: it voted to override certain state regulations that inhibit the development and expansion of community broadband projects. The net neutrality rules have since been upheld, but last week a federal appeals court rejected the FCC’s separate effort to preempt state law.
The FCC’s goals were laudable. Municipalities and local communities have been experimenting with ways to foster alternatives to big broadband providers like Comcast and Time Warner. Done right, community fiber experiments have the potential to create options that empower Internet subscribers and make Internet access more affordable. For example, Chattanooga, Tennessee, is home to one of the nation’s least expensive, most robust municipally-owned broadband networks. The city decided to build a high-speed network initially to meet the needs of the city’s electric company. Then, the local government learned that the cable companies would not be upgrading their Internet service fast enough to meet the city's needs. So the electric utility also became an ISP, and the residents of Chattanooga now have access to a gigabit (1,000 megabits) per second Internet connection. That’s far ahead of the average US connection speed, which typically clocks in at 9.8 megabits per second.
But 19 states have laws designed to inhibit experiments like these, which is why the FCC decided to take action, arguing that its mandate to promote broadband competition gave it the authority to override state laws inhibiting community broadband. The court disagreed, finding that the FCC had overstepped its legal authority to regulate.
While the communities that looked to the FCC for help are understandably disappointed, the ruling should offer some reassurance for those who worry about FCC overreach. Here, as with net neutrality rulings prior to the latest one, we see that the courts can and will rein in the FCC if it goes beyond its mandate.
But there are other lessons to be learned from the decision. One is that we cannot rely on the FCC alone to promote high speed Internet access. If a community wants the chance to take control of its Internet options, it must organize the political will to make it happen – including the will to challenge state regulations that stand in the way. Those regulations were doubtless passed to protect incumbent Internet access providers, but we have seen that a determined public can fight those interests and win. This time, the effort must begin at home. Here are a few ideas:

Light Up the Dark Fiber, Foster Competition
In most U.S. cities there is only one option for high-speed broadband access. This lack of competition means that users can’t vote with their feet when monopoly providers like Comcast or Verizon discriminate among Internet users in harmful ways. It also leaves these large Internet providers with little incentive to offer better service.
It doesn't have to be that way. Right now, 89 U.S. cities provide residents with high-speed home Internet, but dozens of additional cities across the country have the infrastructure, such as dark fiber, to either offer high-speed broadband Internet to residents or lease out the fiber to new Internet access providers to bring more competition to the marketplace (the option we prefer).
“Dark fiber” refers to unused fiber optic lines already laid in cities around the country, intended to provide high speed, affordable Internet access to residents. In San Francisco, for example, more than 110 miles of fiber optic cable run under the city. Only a fraction of that fiber network is being used.
And San Francisco isn’t alone. Cities across the country have invested in laying fiber to connect nonprofits, schools, and government offices with high-speed Internet. That fiber can be used by Internet service startups to help deliver service to residents, reducing the expensive initial investment it takes to enter this market.
So the infrastructure to provide municipal alternatives is there in many places—we just need the will and savvy to make it a reality that works.

"Dig Once"—A No Brainer
Building the infrastructure for high-speed internet is expensive. One big expense is tearing up the streets to build out an underground network. But cities regularly have to tear up streets for all kinds of reasons, such as upgrading sewer lines. They should take advantage of this work to create a network of conduits, and then let any company that wants to do so route its cables through that network, cutting the cost of broadband deployment.

Challenge Artificial Political and Legal Barriers
In addition to state regulations, many cities have created their own unnecessary barriers to their efforts to light up dark fiber or extend existing networks. Take Washington, D.C., where the city’s fiber is bound up in a non-compete contract with Comcast, keeping the network from serving businesses and residents. If that's the case in your town, you should demand better from your representatives. In addition, when there's a local meeting to consider new construction, demand that they include a plan for installing conduit.
These are just a few ideas; you can find more here, along with a wealth of resources. It’s going to take a constellation of solutions to keep our Internet open, but we don't need to wait on regulators and legislators in D.C. This is one area where we can all be leaders. We can organize locally and tell our elected officials to invest in protecting our open Internet.
Washington, D.C.—The Electronic Frontier Foundation (EFF) today filed a petition on behalf of its client Stephanie Lenz asking the U.S. Supreme Court to ensure that copyright holders who make unreasonable infringement claims can be held accountable if those claims force lawful speech offline.
Lenz filed the lawsuit that came to be known as the “Dancing Baby” case after she posted—back in 2007—a short video on YouTube of her toddler son in her kitchen. The 29-second recording, which Lenz wanted to share with family and friends, shows her son bouncing along to the Prince song "Let's Go Crazy," which is heard playing in the background. Universal Music Group, which owns the copyright to the Prince song, sent YouTube a notice under the Digital Millennium Copyright Act (DMCA), claiming that the family video was an infringement of the copyright.
EFF sued Universal on Lenz’s behalf, arguing that the company’s claim of infringement didn’t pass the laugh test and was just the kind of improper, abusive DMCA targeting of lawful material that so often threatens free expression on the Internet. The DMCA includes provisions designed to prevent abuse of the takedown process and allows people like Lenz to sue copyright holders for bogus takedowns.
The San Francisco-based U.S. Court of Appeals for the Ninth Circuit last year sided in part with Lenz, ruling that copyright holders must consider fair use before sending a takedown notice. But the court also held that copyright holders should be held to a purely subjective standard. In other words, senders of false infringement notices could be excused so long as they subjectively believed that the material they targeted was infringing, no matter how unreasonable that belief. Lenz is asking the Supreme Court to overrule that part of the Ninth Circuit’s decision to ensure that the DMCA provides the protections for fair use that Congress intended.
“Rightsholders who force down videos and other online content for alleged infringement—based on nothing more than an unreasonable hunch, or subjective criteria they simply made up—must be held accountable,” said EFF Legal Director Corynne McSherry. “If left standing, the Ninth Circuit’s ruling gives fair users little real protection against private censorship through abuse of the DMCA process.”
For more on Lenz v. Universal:
With high-profile hacks in the headlines and government officials trying to reopen a long-settled debate about encryption, information security has become a mainstream issue. But we feel that one element of digital security hasn’t received enough critical attention: the role of government in acquiring and exploiting vulnerabilities and hacking for law enforcement and intelligence purposes. That’s why EFF recently published some thoughts on a positive agenda for reforming how the government obtains, creates, and uses vulnerabilities in our systems for a variety of purposes, from overseas espionage and cyberwarfare to domestic law enforcement investigations.
Some influential commentators like Dave Aitel at Lawfare have questioned whether we at EFF should be advocating for these changes, because pursuing any controls on how the government uses exploits would be “getting ahead of the technology.” But anyone who follows our work should know we don’t call for new laws lightly.
To be clear: We are emphatically not calling for regulation of security research or exploit sales. Indeed, it’s hard to imagine how any such regulation would pass constitutional scrutiny. We are calling for a conversation around how the government uses that technology. We’re fans of transparency; we think technology policy should be subject to broad public debate, heavily informed by the views of technical experts. The agenda in the previous post outlined calls for exactly that.
There’s reason to doubt anyone who claims that it’s too soon to get this process started.
Consider the status quo: The FBI and other agencies have been hacking suspects for at least 15 years without real, public, and enforceable limits. Courts have applied an incredible variety of ad hoc rules around law enforcement’s exploitation of vulnerabilities, with some going so far as to claim that no process at all is required. Similarly, the government’s (semi-)formal policy for acquisition and retention of vulnerabilities—the Vulnerabilities Equities Process (VEP)—was apparently motivated in part by public scrutiny of Stuxnet (widely thought to have been developed at least in part by the U.S. government) and the long history of exploiting vulnerabilities in its mission to disrupt Iran's nuclear program. Of course, the VEP sat dormant and unused for years until after the Heartbleed disclosure. Even today, the public has seen the policy in redacted form only thanks to FOIA litigation by EFF.

The status quo is unacceptable.
If the Snowden revelations taught us anything, it’s that the government is in little danger of letting law hamstring its opportunistic use of technology. Nor is the executive branch shy about asking Congress for more leeway when hard-pressed. That’s how we got the Patriot Act and the FISA Amendments Act, not to mention the impending changes to Federal Rule of Criminal Procedure 41 and the endless encryption “debate.” The notable and instructive exception is the USA Freedom Act, the first statute substantively limiting the NSA’s power in decades, born out of public consternation over the agency’s mass surveillance.
So let’s look at some of the arguments for not pursuing limits on the government’s use of particular technologies here.
On vulnerabilities, the question is whether the United States should have any sort of comprehensive, legally mandated policy requiring disclosure in some cases where the government finds, acquires, creates, or uses vulnerabilities affecting the computer networks we all rely on. That is, should we take a position on whether it is beneficial for the government to disclose vulnerabilities to those in the security industry responsible for keeping us safe?
In one sense, this is a strange question to be asking, since the government says it already has a considered position, as described by White House Cybersecurity Coordinator Michael Daniel: “[I]n the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” Other knowledgeable insiders—from former National Security Council Cybersecurity Directors Ari Schwartz and Rob Knake to President Obama’s hand-picked Review Group on Intelligence and Communications Technologies—have also endorsed clear, public rules favoring disclosure.
But Aitel says all those officials are wrong. He argues that we as outsiders have no evidence that disclosure increases security. To the contrary, Aitel says it’s a “fundamental misstatement” and a “falsehood” that vulnerabilities exploited by the government might overlap with vulnerabilities used by bad actors. “In reality,” he writes, “the vulnerabilities used by the U.S. government are almost never discovered or used by anyone else.”
If Aitel has some data to back up his “reality,” he doesn’t share it. And indeed, in the past, Aitel himself has written that “bugs are often related, and the knowledge that a bug exists can lead [attackers] to find different bugs in the same code or similar bugs in other products.” This suggests that coordinated disclosure by the government to affected vendors wouldn’t just patch the particular vulnerabilities being exploited, but rather would help them shore up the security of our systems in new, important, and possibly unexpected ways. We already know, in non-intelligence contexts, that “bug collision,” while perhaps not common, is certainly a reality. We see no reason, and commentators like Aitel have pointed to none, that exploits developed or purchased by the government wouldn’t be subject to the same kinds of collision.
In addition, others with knowledge of the equities process, like Knake and Schwartz, are very much concerned about the risk of these vulnerabilities falling into the hands of groups “working against the national security interest of the United States.” Rather than sit back and wait for that eventuality—which Aitel dismisses without showing his work—we agree with Daniel, Knake, Schwartz, and many others that the VEP needs to put defense ahead of offense in most cases.

Democratic oversight won't happen in the shadows
Above all, we can’t have the debate all sides claim to want without a shared set of data. And if outside experts are precluded from participation because they don’t have a TS/SCI clearance, then democratic oversight of the intelligence community doesn’t stand much chance.
On its face, the claim that vulnerabilities used by the U.S. are in no danger of being used by others seems particularly weak when combined with the industry’s opposition to “exclusives,” clauses accompanying exploit purchase agreements giving the U.S. exclusive rights to their use. In a piece last month, Aitel’s Lawfare colleague Susan Hennessey laid out her opposition to any such requirements. But we know for instance that the NSA buys vulnerabilities from the prolific French broker/dealer Vupen. Without any promises of exclusivity from sellers like Vupen, it’s implausible for Aitel to claim that exploits the US purchases will “almost never” fall into others’ hands.
Suggesting that no one else will happen onto exploits used by the U.S. government seems overconfident at best, given that collisions of vulnerability disclosure are well-documented in the wild. And if disclosing vulnerabilities will truly burn “techniques” and expose “sensitive intelligence operations,” that seems like a good argument for formally weighing the equities on both sides on an individualized basis, as we advocate.
In short, we’re open to data suggesting we’re wrong about the substance of the policy, but we’re not going to let Dave Aitel tell us to “slow our roll.” (No disrespect, Dave.)
Our policy proposal draws on familiar levers—public reports and congressional oversight. Even those who say that the government’s vulnerability disclosure works fine as is, like Hennessey, have to acknowledge that there’s too much secrecy. EFF shouldn’t have had to sue to see the VEP in the first place, and we shouldn’t still be in the dark about certain details of the process. As recently as last year, the DOJ claimed under oath that merely admitting that the U.S. has “offensive” cyber capabilities would endanger national security. Raising the same argument about simply providing insight into that process is just as unpersuasive to us. If the government truly does weigh the equities and disclose the vast majority of vulnerabilities, we should have some way of seeing its criteria and verifying the outcome, even if the actual deliberations over particular bugs remain classified.
Meanwhile, the arguments against putting limits on government use of exploits and malware—what we referred to as a “Title III for hacking”—bear even less scrutiny.
The FBI’s use of malware raises serious constitutional and legal questions, and the warrant issued in the widely publicized Playpen case arguably violates both the Fourth Amendment and Rule 41. Further problems arose at the trial stage in one Playpen prosecution when the government refused to disclose all evidence material to the defense, because it “derivatively classified” the exploit used by the FBI. The government would apparently prefer dismissal of prosecutions to disclosure, under court-supervised seal, of exploits that would reveal intelligence sources and methods, even indirectly. Thus, even where exploits are widely used for law enforcement, the government’s policy appears to be driven by the Defense Department, not the Justice Department. That ordering of priorities is incompatible with prosecuting serious crimes like child pornography. Hence, those who ask us to slow down should recognize that the alternative to a Title III for hacking is actually a series of court rulings putting a stop to the government’s use of such exploits.
Adapting Title III to hacking is also a case where public debate should inform the legislative process. We’re not worried about law enforcement and the intelligence community advocating for their vision of how technology should be used. But given calls to slow down, we are very concerned that there be input from the public, especially technology experts charged with defending our systems—not just exploit developers with Top Secret clearances.

Related Cases: The Playpen Cases: Mass Hacking by U.S. Law Enforcement; EFF v. NSA, ODNI - Vulnerabilities FOIA
Illinois has joined the growing ranks of states limiting how police may use cell-site simulators, invasive technology devices that masquerade as cell phone towers and turn our mobile phones into surveillance devices. By adopting the Citizen Privacy Protection Act, Illinois last month joined half a dozen other states—as well as the Justice Department and one federal judge—that have reiterated the constitutional requirement for police to obtain a judicial warrant before collecting people's location and other personal information using cell-site simulators.
By going beyond a warrant requirement and prohibiting police from intercepting data and voice transmissions or conducting offensive attacks on personal devices, the Illinois law establishes a new high watermark in the battle to prevent surveillance technology from undermining civil liberties. Illinois also set an example for other states to follow by providing a powerful remedy when police violate the new law by using a cell-site simulator without a warrant: wrongfully collected information is inadmissible in court, whether to support criminal prosecution or any other government proceedings.
Tools to monitor cell phones
Cell-site simulators are sometimes called “IMSI catchers” because they force every cell phone within a particular area to connect to them instead of real cell towers, seizing each phone’s unique International Mobile Subscriber Identity.
Early versions of the devices—such as the Stingray device used by police in major U.S. cities since at least 2007 after having been used by federal authorities since at least the 1990s—were limited to location tracking, as well as capturing and recording data and voice traffic transmitted by phones. Later versions, however, added further capabilities which policymakers in Illinois have become the first to address.
Cell phone surveillance tools have eroded constitutional rights guaranteed under the Fourth Amendment’s protection from unreasonable searches and seizures in, at minimum, tens of thousands of cases. Stingrays were deployed thousands of times in New York City alone—and even more often in Baltimore—without legislative or judicial oversight, until in 2011 a jailhouse lawyer accused of tax fraud discovered the first known reference to a “Stingray” in court documents relating to the 2008 investigation that led to his arrest and conviction.
Meanwhile, government and corporate secrecy surrounding police use of Stingrays has undermined Fifth and Sixth Amendment due process rights, such as the right to challenge the evidence used by one’s accusers. Non-disclosure contracts demanded by the devices’ manufacturers imposed secrecy so severe that prosecutors across the country walked away from legitimate cases rather than risk revealing Stingrays to judges by pursuing prosecutions based on Stingray-collected evidence.
Citing the constraint of a corporate non-disclosure agreement, a police officer in Baltimore even risked contempt charges by refusing to answer judicial inquiries about how police used the devices. Baltimore public defender Deborah Levi explains, “They engage in a third-party contract to violate people’s constitutional rights.”
Several states agree: Get a warrant
In one respect, Illinois is walking well-settled ground.
By requiring that state and local police agents first seek and secure a judicial order based on individualized probable cause of criminal misconduct before using a cell-site simulator, Illinois has joined half a dozen other states (including California, Washington, Utah, Minnesota, and Virginia) that have already paved that road.
At the federal level, the Justice Department took action in 2015 to require federal agencies to seek warrants before using the devices. And just two weeks before Illinois enacted its new law, a federal judge in New York ruled for the first time that defendants could exclude from trial evidence collected from an IMSI-catcher device by police who failed to first obtain a judicial order.
These decisions vindicate core constitutional rights as well as the separation of powers, and they underscore that warrants are constitutionally crucial.
It's true that warrants are not particularly difficult for police to obtain when based on legitimate reasons for suspicion. When New York Court of Appeals Chief Judge Sol Wachtler observed in 1985 that any prosecutor could persuade a grand jury to “indict a ham sandwich,” he was talking about the ease with which the government can satisfy the limited scrutiny applied in any one-sided process, including that through which police routinely secure search warrants.
But while judicial warrants do not present a burdensome constraint on legitimate police searches, they play an important role in the investigative process. Warrantless searches are conducted essentially by fiat, without independent review, and potentially arbitrarily. Searches conducted pursuant to a warrant, however, bear the stamp of impartial judicial review and approval.
Warrants ensure, for instance, that agencies do not treat their public safety mandate as an excuse to pursue personal vendettas, or the kinds of stalking “LOVEINT” abuses to which NSA agents and contractors have occasionally admitted. Requiring authorization from a neutral magistrate, put simply, maintains civilian control over police.
Despite its importance and the ease with which authorities can satisfy it, the warrant requirement has ironically suffered frequent erosion by the courts—making efforts by states like Illinois to legislatively reiterate and expand it all the more important.
But in two important respects beyond the warrant requirement, the Illinois Citizen Privacy Protection Act breaks new ground.
Breaking new ground: Allowing an exclusionary remedy
First, the Illinois law is the first policy of its kind in the country that carries a price for law enforcement agencies that violate the warrant requirement. If police use a cell-site simulator to gather information without securing a judicial order, then courts will suppress that information and exclude it from any consideration at trial.
This vindicates the rights of accused individuals by enabling them to exclude illegally collected evidence. It also helps ensure that police use their powerful authorities only for legitimate reasons grounded in probable cause to suspect criminal activity, rather than for fishing expeditions without real proof of misconduct or, for that matter, the personal, racial, or financial biases of individual officers.
Like the warrant requirement created to limit the powers of police agencies, the exclusionary rule on which the judiciary relies to enforce the warrant requirement has endured doctrinal erosion over the past generation. Courts have carved out one exception after another, allowing prosecutors to use the “fruits of the poisonous tree” in criminal trials despite constitutional violations committed by police when collecting them.
In this context, the new statute in Illinois represents a crucial public policy choice explicitly extending the critical protections of the warrant requirement and exclusionary rule.
Breaking new ground: Prohibiting offensive uses
The new Illinois law also limits the purposes for which cell-site simulators may be used, even pursuant to a judicial order. It flatly prohibits several particularly offensive uses that remain largely overlooked elsewhere.
When Stingrays (and their frequent secret use by local police departments across the country) first attracted attention, most concerns addressed the location tracking capabilities of the device’s first generation, obtained by domestic police departments as early as 2003.
But while Stingrays presented profound constitutional concerns 10 years ago, they present even greater concerns now, because technological advances over the past decade have enabled stronger surveillance and even militaristic offensive capabilities. Unlike early versions of the devices, which could be used only for location monitoring or gathering metadata, later versions, such as the Triggerfish, Hailstorm, and Stargrazer series, can intercept voice communications or browsing history in real time, mount offensive denial-of-service attacks on a phone, or even plant malware on a device.
Recognizing how invasive the latest versions of IMSI-catchers can be, legislators in Illinois authorized police to use cell-site simulators in only two ways: after obtaining a warrant, police may use the devices to locate or track a known device, or instead to identify an unknown device.
Even if supported by a judicial order, the Citizen Privacy Protection Act affirmatively bans all other uses of these devices. Prohibited activities include intercepting the content or metadata of phone calls or text messages, planting malware on someone’s phone, or blocking a device from communicating with other devices.
The use limitations enshrined in Illinois law are among the first of their kind in the country.
The Illinois statute also requires police to delete any data incidentally obtained from third parties—such as non-targets whose devices are forced to connect to a cell-site simulator—within 24 hours after location tracking, or within 72 hours of identifying a device. These requirements are similar to those announced by a federal magistrate judge in Illinois, who in November 2015 imposed minimization requirements on a federal drug investigation, including an order to “immediately destroy all data other than the data identifying the cell phone used by the target. The destruction must occur within forty-eight hours after the data is captured.”
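The statute's retention windows amount to a simple timing rule. As a rough illustration only—hypothetical helper names, and the hour values taken from the article's summary rather than the statutory text, which of course controls—the deletion deadlines might be sketched as:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the Illinois minimization windows described above.
# Hypothetical structure; the statute, not this code, is authoritative.
RETENTION_WINDOWS = {
    "location_tracking": timedelta(hours=24),       # delete within 24 hours
    "device_identification": timedelta(hours=72),   # delete within 72 hours
}

def deletion_deadline(capture_time: datetime, use: str) -> datetime:
    """Latest moment incidentally collected third-party data may be retained."""
    return capture_time + RETENTION_WINDOWS[use]

captured = datetime(2017, 1, 1, 12, 0)
print(deletion_deadline(captured, "location_tracking"))      # 2017-01-02 12:00:00
print(deletion_deadline(captured, "device_identification"))  # 2017-01-04 12:00:00
```

The 48-hour window in the federal magistrate's order quoted above would slot into the same table as a third entry.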
Enhancing security through transparency
Beyond enforcing constitutional limits on the powers of law enforcement agencies, and protecting individual rights at stake, the new law in Illinois also appropriately responds to an era of executive secrecy.
The secrecy surrounding law enforcement uses of IMSI-catchers has also compromised security. As the ACLU’s Chris Soghoian has explained alongside Stephanie Pell from West Point’s Army Cyber Institute and Stanford University, “the general threat that [any particular] technology poses to the security of cellular networks” could outweigh its “increasing general availability at decreasing prices.” With respect to cell-site simulators, in particular:
[C]ellular interception capabilities and technology have become, for better or worse, globalized and democratized, placing Americans’ cellular communications at risk of interception from foreign governments, criminals, the tabloid press and virtually anyone else with sufficient motive to capture cellular content in transmission. Notwithstanding this risk, US government agencies continue to treat practically everything about this cellular interception technology, as a closely guarded, necessarily secret “source and method,” shrouding the technical capabilities and limitations of the equipment from public discussion….
Given the persistent secrecy surrounding IMSI-catchers and the unknown risks they pose to both individual privacy and network security, the statutory model adopted by Illinois represents a milestone not only for civil liberties but also for the security of our technological devices. Khadine Bennett from the ACLU of Illinois explained the new law’s importance in terms of the secrecy pervading how police have used cell-site simulators:
For so long, uses of IMSI-catchers such as Stingrays have been behind the scenes, enabling searches like the pat down of thousands of cell phones at once without the users ever even knowing it happened. It’s exciting to see Illinois adopt a measure to ensure that these devices are used responsibly and appropriately, and I hope to see more like it emerge around the country.
EFF enthusiastically agrees with Ms. Bennett. If you’d like to see the Citizen Privacy Protection Act’s groundbreaking requirements adopted in your state, you can find support through the Electronic Frontier Alliance.
On August 7th, California's Sixth Appellate District issued an opinion denying Matthew Pavlovich's motion to dismiss the case against him for lack of personal jurisdiction.
Pavlovich, who was a college student in Indiana and now lives in Texas, claims that postings made to the LiVID mailing list, which he ran from his home computer, should not subject him to defending a lawsuit in California. LiVID is an open source development team working to build a DVD player compatible with the Linux operating system, one that could compete with the movie studios' monopoly on DVD players. In January 2000, a California judge issued an injunction banning dozens of individuals, including Pavlovich, from publishing the DeCSS computer code.
Today, the court held that because Pavlovich knew the movie business was based in California, publishing information that might affect its profits was a sufficient connection to bring him within the court's jurisdiction.
This ruling magnifies the ability of Hollywood and other businesses to sue anyone in the world who publishes information on the Internet that the studios claim could hurt their profits. Pavlovich is considering appealing the order to the California Supreme Court on constitutional due process grounds.
San Francisco - Texas resident Matthew Pavlovich yesterday for a second time asked the California Supreme Court to reverse a lower court decision requiring him to defend a trade secret case in a California court. Pavlovich, who did not reside in or have any contact with California, has resisted being forced to defend the case in that state.
"Courts have uniformly held that simply publishing something on the Internet is not sufficient to hold jurisdiction worldwide," noted EFF Intellectual Property Attorney Robin Gross. "Without the proper application of constitutional safeguards, the Internet will become a liability minefield for users."
In December 1999, a movie industry association called the DVD Copy Control Association (DVD CCA) sued hundreds of individuals, including Indiana college student Pavlovich, for allegedly publishing DVD decoding software called DeCSS on websites that hosted various Linux-based open-source projects. The DVD CCA claims that republishing DeCSS on these websites constitutes a trade secret violation.
Pavlovich's appeal parallels the leading case on the trade secret issue, DVD CCA v. Bunner, in which an appeals court has stayed the misappropriation question pending the outcome of Pavlovich's jurisdictional motion.
The U.S. Constitution's due process clause limits a state court's ability to assert power over out-of-state defendants who have no connection with that state. The Pavlovich case has already gone to the California Supreme Court once before; the Court sent the matter back to the Appellate Court to explain why it believed Pavlovich could be required to come to California. The Appellate Court again held that Pavlovich is required to defend himself in California.
"The lower court's ruling means that a person would be subject to jurisdiction everywhere the Internet reaches," said Allonn Levy, who represents Pavlovich. "It means that movie industry moguls can drag web publishers from anywhere in the world to defend themselves here in California."
DeCSS is free software that allows people to play DVDs without the technological restrictions, such as region codes, that movie studios prefer.
Norwegian teenager Jon Johansen originally published DeCSS on the Internet in October 1999. Norwegian prosecutors recently indicted Johansen more than two years after the DVD CCA urged them to do so.