On matters implicating privacy, such as mass surveillance or the powers of investigatory agencies, Congress has too often failed to fulfill its responsibilities. By neglecting to examine basic facts, and deferring to executive agencies whose secrets preclude meaningful debate, the body has repeatedly allowed proposals that undermine constitutional rights to become enshrined in law. With last week’s launch of a new bipartisan Fourth Amendment Caucus in the House, however, the Constitution gained a formidable ally.
Every Member of Congress swears an oath to “defend the Constitution against all enemies, foreign and domestic.” Yet the most significant threats to our Constitution include the powers of U.S. intelligence agencies, enabled by Congress’ faith in the agencies’ willingness to respect legal limits on their powers.
Deference to the executive branch—emboldened by Congress’ continuing failure to reform a “dysfunctional” classification system that enables executive secrecy—has left Congress in the dark on matters of fact that should inform its legislative decisions. As a predictable result, proposals that undermine our fundamental right to be free from unreasonable searches and seizures have been repeatedly enacted into law.
For instance, Congress has approved and re-authorized controversial domestic spying powers more than half a dozen times over the past 15 years. Yet even the intelligence committees have failed to gain answers to questions as basic as how many Americans are being monitored, or whether mass surveillance has ever actually helped stop a violent incident.
In addition to overlooking its responsibility to examine and investigate crucial matters of fact, Congress has also settled for holding secret hearings dominated by intelligence officials. Time after time, when domestic surveillance powers come up for re-authorization, Congress has declined to conduct public hearings, allowing executive officials to spin the facts without an opportunity for independent voices—like the whistleblowers who have repeatedly revealed fraud, waste, and abuse—to correct the record.
This is no merely hypothetical fear: intelligence agencies have been caught making false statements under oath in response to congressional inquiries, and have even launched cyber-espionage operations to suppress a congressional investigation into their own abuses.
Unfortunately, Congress must share the blame for executive secrecy. Not only has it failed to pursue a long-overdue investigation, it has also tolerated and declined to reform a classification system so bloated and secretive that it obstructs Congress’ own ability to conduct oversight.
Instead, congressional leaders of both major political parties have played games of legislative brinksmanship.
In many cases—such as when controversial provisions of the Patriot Act were set to expire in 2005, 2006, 2009, 2011, and particularly in 2015—committee chairs waited until shortly before the re-authorization deadline, marginalized crucial public oversight, and then stoked fears about the security consequences of letting unconstitutional powers lapse. Other times, including in 2014 and again earlier this year, the bipartisan establishment joined ranks to quell populists from both parties who sought to more actively check and balance executive power.
Constitutional rights are neither conservative nor liberal. They are simply American.
Yet they have been repeatedly undermined by ultimately authoritarian powers that congressional leaders from both of the major political parties have unfortunately supported.
In this context, the emergence of the bipartisan Fourth Amendment Caucus portends a potential sea change in Congress. Joined by 25 Members of the House from each of the major parties, the caucus is poised to champion privacy and help establish in Congress the consensus that already unites Americans across our various political perspectives.
During the July 13 briefing announcing the new Fourth Amendment Caucus, founding member Justin Amash (R-MI) explained its ambitions:
It’s important that we have this kind of group in Congress to stop [proposals to expand surveillance powers] before they become law, and before they have a chance to violate the rights of Americans.
From across the partisan aisle, Rep. Zoe Lofgren (D-CA) described some of the concerns that drew caucus members together:
The Fourth Amendment is fundamental to our liberty not just because it protects privacy rights, but because it’s the basis for exercising other rights. If you feel that you are being watched at all times by your government, you’re not going to feel as free to exercise your First Amendment rights of speech or assembly.
Over the next year, we look forward to the Fourth Amendment Caucus asserting its presence to influence a range of issues.
While recent attempts to prohibit strong encryption have thankfully failed, executive branch agencies continue to undermine encryption standards and devices. Members of the caucus have previously sought to protect encryption in a measure (one that also aimed to end backdoor FBI searches of NSA intelligence used to monitor Americans) that won support from a remarkable bipartisan majority—a coalition the caucus may be poised to reconvene.
The Fourth Amendment Caucus may also help champion and secure a long overdue congressional investigation into the uses and continuing abuses of Section 702 of the Foreign Intelligence Surveillance Act, which enables much of the NSA Internet dragnet. Section 702 is set to expire at the end of 2017, and should at least be the focus of public hearings early in the year that include voices beyond those of intelligence officials.
In years past, we could safely predict that Congress would sit on its hands until the last minute, and then bully Members into extending the law with vague appeals to security. With Members now organizing across the aisle to protect constitutional values, however, Congress may grow better poised to resist executive branch proposals and instead continue long-overdue surveillance reform.
Today, EFF joined a broad coalition of other public interest groups at Democratic Leader Nancy Pelosi's office in San Francisco, to present her with a petition carrying an incredible 209,419 signatures with a request to oppose the introduction of the Trans-Pacific Partnership (TPP) during the post-election "lame duck" session of Congress. And with your help, we succeeded! In a letter that she handed us at our meeting, Leader Pelosi wrote:
As Congress and the American people review the finalized terms of the Trans Pacific Partnership (TPP), we must put American workers first to allow our economy to grow and America to succeed. Please be assured that I will oppose the TPP as it is currently written or any deal that attempts to separate commerce from the environment and will work to ensure that our nation's trade policies include increased transparency, more consultation, and stronger protections to create jobs, strengthen human rights, and preserve the environment.
Thank you, Leader Pelosi, for standing up for users to block this undemocratic, anti-user deal. Combined with the stated opposition to the TPP of both presidential candidates, and the likelihood that other House Democrats will follow Leader Pelosi's courageous lead, it is now significantly less likely that the TPP will be introduced during the lame duck session, or if introduced, that it will pass the House.

[Photo: EFF's Jeremy Malcolm addresses a rally outside Pelosi's office. Photo by Kate Usher.]
That in turn casts doubt on whether the United States will ever ratify this agreement in its present form. If it doesn't, then the TPP will never come into force, which will amount to a significant blow against the opaque, lobbyist-driven lawmaking practiced by the United States Trade Representative (USTR), and a cautionary lesson for the conclusion of subsequent trade agreements such as the Trans-Atlantic Trade and Investment Partnership (TTIP), and the Trade in Services Agreement (TiSA).
If the future of United States trade policy is not to reflect the failed legacy of anti-user agreements such as ACTA, the USTR will have little choice but to bite the bullet and embrace a more transparent, participatory model of trade negotiation that fairly reflects the needs of users, cultural institutions, and innovative businesses, rather than only those of rich entertainment corporations and pharmaceutical companies.
Although Leader Pelosi's stand is a significant victory, it is not an iron-clad guarantee that the TPP won't be put before Congress for a vote during the lame duck session of Congress. If it is, then the fight will begin again. That's why now is not the time to let up the pressure on the administration to finally abandon this fatally flawed deal.
This Saturday, we'll again be joining with a coalition of other groups to Rock Against the TPP, at the kick-off of a massive rally and concert tour in Denver. If you'll be in Denver, San Diego, Portland or Seattle over the next few weeks, join us at one of these concerts to celebrate our victory so far, and to sound the death knell for the TPP.
We’ve written many times about the need for comprehensive patent reform to stop innovation-killing trolls. While we continue to push for reform in Congress, there are a number of steps that companies and inventors can take to keep from contributing to the patent troll problem. These steps include pledges and defensive patent licenses. In recent years, companies like Twitter and Tesla have promised not to use their patents offensively. This week, blockchain startup Blockstream joins them with a robust set of commitments over how it uses software patents.
Blockstream’s commitments are meant to ensure that the company only uses its patents defensively, and therefore, to assure users and developers of Bitcoin technology that they can use Blockstream’s inventions without fear of patent litigation. It made three interlocking commitments:
- It adopted a patent pledge promising that it will only use its own software patents defensively—that is, it won’t use them to sue or demand licensing fees from others for using similar technologies, but it may use them to defend itself from the patent lawsuits of others.
- It shared its patents under the Defensive Patent License (DPL), licensing its patents to any other person or company who agrees to the terms of the DPL.
- It introduced a modified version of the Innovator’s Patent Agreement, an agreement with Blockstream inventors that it may file patents for their inventions but may not use them offensively (if Blockstream ever assigns a patent to another party, the agreement would apply to that party too).
Together, these three commitments represent a huge step forward—they ensure that Blockstream’s patents will never be used to attack others, even if the company goes out of business or sells them to another company.
In its patent pledge, the company admits that it would prefer not to file software patents at all and says that it hopes that the pledge will one day be irrelevant:
Software patents have too often been used as a means for stifling innovation, rather than encouraging it. In an ideal world, we would refuse to patent the software we invent. But this strategy would place the entire Bitcoin community at risk. […] Even when prior art exists that could challenge the validity of issued patents, such challenges are both difficult and expensive to mount, with often unpredictable results, and cannot be relied on as a solution. This leaves us with two choices: to work in secret, or to patent as a defensive strategy.
We believe, then, that the way to ensure that our technology remains most usable is to obtain patents ourselves and make binding promises for their use. We look forward to a legislative environment where this is not needed: our strategy is not a substitute for attempts to reform or eliminate software patents, or to invalidate issued patents, including our own. But until we can ensure that no one can own exclusive rights to the technologies surrounding Bitcoin, we want to ensure that they remain available to the community.
We’re glad to see Blockstream taking this step, and it’s provided an excellent template for other software companies to borrow. We also admire the company’s acknowledgement that licenses and pledges alone can’t fix a patent system that’s all too easily exploited by patent trolls, nor can they address the real root cause of the problems in the patent system: the flood of stupid patents.
For more information on patent licensing alternatives, check out Hacking the Patent System, the updated and expanded guide we published earlier this year.
We are excited to announce that Onlinecensorship.org, a joint project of EFF and Visualizing Impact, is now available in Spanish. Onlinecensorship.org seeks to expose how social media sites moderate user-generated content. By launching the platform in the second-most widely spoken language in the world, we hope to reach several million more individuals who've experienced censorship on social media. Now, more users than ever can report on content takedowns from Facebook, Google+, Twitter, Instagram, Flickr, and YouTube and use Onlinecensorship.org as a resource to appeal unfair takedowns.
Since its launch in November 2015, Onlinecensorship.org has released its first findings report based on data gathered from user reports received through the platform. The report highlighted the who, what, and why of content takedowns on social media sites. By cataloging and analyzing aggregated cases of social media censorship, Onlinecensorship.org unveils trends in content removals, provides insight into the types of content being taken down, and learns how these takedowns impact different communities of users.
Controversies over content takedowns seem to bubble up every few weeks, with users complaining about censorship of political speech, nudity, LGBT content, and many other subjects. The passionate debate about these takedowns reveals a larger issue: social media sites have an enormous impact on the public sphere, but are ultimately privately owned companies. Each corporation has its own rules and systems of governance that control users’ content, while providing little transparency about how these decisions are made.
The idea for Onlinecensorship.org was born in 2011, when Facebook took down a link posted by the popular band Coldplay. The link, deemed “abusive” by the social network, was to a song of protest for Palestinian freedom, an issue where accusations of manipulation and censorship by the mainstream media are frequent. You can read how the story unfolded here.
If social media companies control both the medium and the message—without oversight or transparency—they take the leap from being a ‘walled garden’ to a selectively clear-cut forest. Onlinecensorship.org—now in two languages—can get us closer to untangling the important issues at stake.
Some day, your life may depend on the work of a security researcher. Whether it’s a simple malfunction in a piece of computerized medical equipment or a malicious compromise of your networked car, it’s critically important that people working in security can find and fix the problem before the worst happens.
And yet, an expansive United States law, passed in 1998 and emulated in legal codes all over the world, casts a dark legal cloud over the work of those researchers. It gives companies a blunt instrument with which to threaten that research, keeping potentially embarrassing or costly errors from seeing the light of day.
That law is Section 1201 of the Digital Millennium Copyright Act. Simply put, Section 1201 means that you can be sued or even jailed if you bypass digital locks on copyrighted works—from DVDs to software in your car—even if you are doing so for an otherwise lawful reason, like security testing.
It gets worse: Section 1201’s speech restrictions also apply to scholars, artists, and activists who are seeking to comment on culture or make it more accessible. The tools to make engaging remixes, annotations, or interactive commentaries are in the hands of more and more people, but the law has created a “gotcha” situation: while using that source material is legal, getting access to it might run afoul of these additional legal hurdles.
You can seek an exemption from the law to exercise a limited range of your fair use rights, but the avenue to do so is managed by an unsympathetic gatekeeper: the Library of Congress. The Librarian, working with the Register of Copyrights, has turned an already-onerous exemption process into a legal obstacle course. And even if you win, you still have to come back every three years to do it again.
The intent behind that law was to create legal backing for DRM—the software that adds restrictions to “content” like music, movies, and books. But over nearly two decades, as software that the law counts as a “copyrighted work” became embedded in everything from tractors to light bulbs to kitty litter boxes, the prohibition has become best known for its unintended consequences.
Those unintended consequences create a problem of constitutional scale. Congress has the power to create copyright laws that “promote the progress of science and the useful arts,” but when it interferes with the traditional contours of copyright law, including fair use protections, it intrudes on the First Amendment. Section 1201 represents just such an intrusion, one that cannot pass constitutional scrutiny.
EFF has filed a lawsuit today to address that constitutional issue, and we’ve gone into more depth about the legal questions at hand in a companion post.
When Congress passed Section 1201, the hot-button copyright debates were about the terms under which people could copy and consume music, movies, and books. Those are important issues, and there is still work to do in getting the balance right for the producers, distributors, and consumers of those works—especially considering that, more than ever before, people jump between all those roles.
But while that work continues, copyright law shouldn’t be casting a legal shadow over activities as basic as popping the hood of your own car, offering commentary on a shared piece of culture (and helping others do so), and testing security infrastructure. It’s time for the courts to revisit Section 1201, and fix Congress’s constitutional mistake.
Section 1201 of the Digital Millennium Copyright Act forbids a wide range of speech, from remix videos that rely upon circumvention, to academic security research, to publication of software that can help repair your car or back up your favorite show. It potentially implicates the entire range of speech that relies on access to copyrighted works or describes flaws in access controls—even where that speech is clearly noninfringing.
At EFF, we’ve been worried about this law since before it was passed. We were counsel in one of the first major tests of the law, but in those early days, we failed to convince the courts of its dangerous risk to speech. Ever since, we’ve documented those speech consequences. We’ve called on Congress to reform the law, to no avail. So today, we’re going back to court, armed with nearly twenty years of knowledge about Section 1201’s interference with lawful speech and with key Supreme Court cases that have been decided in that time. For more about the problems caused by this law, see our companion post on the issue.
Section 1201 was billed as a tool to prevent infringement by punishing those who interfered with technological restrictions on copyrighted works. After the DMCA was passed, the Supreme Court was asked to evaluate other overreaching copyright laws, and offered new guidance on the balance between copyright protections and free speech. It found that copyright rules can be consistent with the First Amendment so long as they adhere to copyright’s "traditional contours." These contours include fair use and the idea/expression dichotomy.
The dominant interpretation of Section 1201, however, can’t be squared with these First Amendment accommodations. As long as circumvention in furtherance of fair use risks civil damages or criminal penalties, Section 1201's barrier to noninfringing uses of copyrighted works oversteps the boundary set by the Supreme Court.
In First Amendment terms, the law is facially overbroad and therefore unconstitutional. By preventing valuable and noninfringing speech, it goes far beyond any restriction that might be justified by the purposes of copyright law.
Defenders of the law may point to the triennial exemption process. But that rulemaking, which was intended as a protection for lawful speech, instead acts as an unconstitutional speech-licensing regime. To comply with the First Amendment, a speech-licensing regime must conform to strict safeguards to ensure that government officials issue timely permission according to strict standards, rather than exercising too much discretion.
The opportunity to seek government permission once every three years hardly provides for timely review, and in the most recent rulemaking, the government went so far as to claim that permission may be denied at the Librarian's discretion. Nor is it enough to prove that you have the right to speak: the government demands that you show a widespread impact on others in a similar position, or you will be refused.
Section 1201 is a draconian and unnecessary restriction on speech, and the time has come to set it aside. The future of cultural participation and software-related research depends on it.
EFF Lawsuit Takes on DMCA Section 1201: Research and Technology Restrictions Violate the First Amendment
Washington D.C.—The Electronic Frontier Foundation (EFF) sued the U.S. government today on behalf of technology creators and researchers to overturn onerous provisions of copyright law that violate the First Amendment.
EFF’s lawsuit, filed with co-counsel Brian Willen, Stephen Gikow, and Lauren Gallo White of Wilson Sonsini Goodrich & Rosati, challenges the anti-circumvention and anti-trafficking provisions of the 18-year-old Digital Millennium Copyright Act (DMCA). These provisions—contained in Section 1201 of the DMCA—make it unlawful for people to get around the software that restricts access to lawfully-purchased copyrighted material, such as films, songs, and the computer code that controls vehicles, devices, and appliances. This ban applies even where people want to make noninfringing fair uses of the materials they are accessing.
Ostensibly enacted to fight music and movie piracy, Section 1201 has long served to restrict people’s ability to access, use, and even speak out about copyrighted materials—including the software that is increasingly embedded in everyday things. The law imposes a legal cloud over our rights to tinker with or repair the devices we own, to convert videos so that they can play on multiple platforms, to remix a video, or to conduct independent security research that would reveal dangerous security flaws in our computers, cars, and medical devices. It criminalizes the creation of tools to let people access and use those materials.
Copyright law is supposed to exist in harmony with the First Amendment. But the prospect of costly legal battles or criminal prosecution stymies creators, academics, inventors, and researchers. In the complaint filed today in U.S. District Court in Washington D.C., EFF argues that this violates their First Amendment right to freedom of expression.
“The creative process requires building on what has come before, and the First Amendment preserves our right to transform creative works to express a new message, and to research and talk about the computer code that controls so much of our world,” said EFF Staff Attorney Kit Walsh. “Section 1201 threatens ordinary people with financial ruin or even a prison sentence for exercising those freedoms, and that cannot stand.”
EFF is representing plaintiff Andrew “bunnie” Huang, a prominent computer scientist and inventor, and his company Alphamax LLC, where he is developing devices for editing digital video streams. Those products would enable people to make innovative uses of their paid video content, such as captioning a presidential debate with a running Twitter comment field or enabling remixes of high-definition video. But using or offering this technology could run afoul of Section 1201.
“Section 1201 prevents the act of creation from being spontaneous,” said Huang. “Nascent 1201-free ecosystems outside the U.S. are leading indicators of how far behind the next generations of Americans will be if we don’t end this DMCA censorship. I was born into a 1201-free world, and our future generations deserve that same freedom of thought and expression.”
EFF is also representing plaintiff Matthew Green, a computer security researcher at Johns Hopkins University who wants to make sure that we all can trust the devices that we count on to communicate, underpin our financial transactions, and secure our most private medical information. Despite this work being vital for all of our safety, Green had to seek an exemption from the Library of Congress last year for his security research.
“The government cannot broadly ban protected speech and then grant a government official excessive discretion to pick what speech will be permitted, particularly when the rulemaking process is so onerous,” said Walsh. “If future generations are going to be able to understand and control their own machines, and to participate fully in making rather than simply consuming culture, Section 1201 has to go.”
For the complaint:
If you only listened to entertainment industry lobbyists, you’d think that music and film studios are fighting a losing battle against copyright infringement over the Internet. Hollywood representatives routinely tell policymakers that the only response to the barrage of online infringement is to expand copyright or even create new copyright-adjacent rights.
New research from the United Kingdom paints a very different picture of the state of online media consumption (PDF). The new report shows that unauthorized access to copyrighted media is on a steady decline, with only 5% of Internet users getting all of their online media through rogue methods, and only 15% of users consuming any infringing content. Similar studies in the US have shown a steady decline in unauthorized downloads here too. The numbers show that if Hollywood really wants to curb infringing media consumption, the best thing it can do is improve its official offerings.

Consumers Choose the Best Product
For the past five years, the UK’s Intellectual Property Office (IPO) has produced a study on how people in the UK access content online, including both authorized and unauthorized methods. The latest report—released earlier this month—found that consumption of infringing content online is now at the lowest point it’s been for the history of the study.
One of the key factors IPO found contributing to the decline is the rise of online subscription services—particularly for music. It’s easy to see why consumers are moving to services like Spotify for their music—they’re convenient for many users and they offer good selection. When IPO asked people what would make them stop accessing content via unauthorized methods, the most popular responses were to make legal services cheaper (24%) and for them to carry all of the content consumers want (20%).
IPO noted that when users choose where to get content online, legality isn’t much of a factor. Consumers look for convenience, selection, price, and quality. Simply put, listeners have moved to Spotify because they consider it the best product, legal or not.
In contrast with music, infringement of films and TV shows went up slightly (though it was still minuscule compared to viewership via legal methods). Given users’ stated reasons for using infringing methods, it follows that the relatively limited selection of content on streaming video services has played a role in some consumers’ reluctance to switch to them.
To be clear, we’re not cheerleading for streaming services. Most of those services lock down media in digital rights management (DRM) technology. Thanks to laws in the U.S. and many other countries that make it a crime to circumvent DRM, streaming services create legal uncertainty over what consumers can do with the content they access, potentially outlawing uses that wouldn’t otherwise constitute copyright infringement. The IPO report makes it clear that users are willing to pay for authorized methods of accessing content when those methods are the most convenient. If Hollywood invested in higher quality services for sharing content with fans that didn’t rely on DRM, that would do nothing but ease consumers’ transition to them.

ISP “Education” Programs Don’t Work
While there might be disagreement about how the entertainment industry can create more loyal, paying customers, one thing is certain: pressuring Internet service providers to enforce copyright does very little to deter users. We’ve written several times about the Copyright Alert System—aka “Six Strikes”—a system whereby ISPs allow major entertainment companies to monitor customers’ activity for unauthorized sharing of films and TV shows. CAS launched in the US in 2013 and the UK launched a similar program in 2015.
The IPO study found that these attempts by ISPs to “educate” users on copyright infringement have very little impact on users’ behavior. Only 11% of users who’d admitted to unauthorized access said that they’d be deterred if they received letters from their ISPs threatening to suspend their accounts.

More Copyright Is Not the Solution
If online infringement is on the decline, then why is it invoked so often when the entertainment industry tries to expand copyright? In the past few months alone:
- The Motion Picture Association of America has pressured domain name registries to agree to block websites over alleged copyright infringement.
- Lobbyists have proposed that the government institute a mandatory filtering regime which would require user-generated content platforms to build Content ID-style copyright bots.
- The cable industry has attempted to stop the FCC from bringing competition to the set-top box market, suggesting that copyright should let it control how people consume its content.
- Both Chile and Colombia have considered creating a new copyright-adjacent right of remuneration that creators of video works would not be able to waive, even if they wanted to.
And on it goes. Again and again, large content owners seem to think that the only way to fight unauthorized media consumption is to expand copyright. But more copyright won’t change users’ behavior. What it will do is chill innovation and free expression online. The way to bring in more paying customers isn’t to write new law; it’s to build a better product and get it to more customers at the right price.
Wednesday, July 20 is the final day of EFF's Summer Security Reboot, a two-week membership drive that focuses on taking stock of our digital security practices and bolstering the larger movement to protect digital civil liberties. Besides a reduced donation amount for the Silicon level membership, the Reboot features sets of random number generators: EFF dice with instructions on how to generate stronger and more memorable random passphrases. EFF even produced three new passphrase wordlists to improve upon Arnold Reinhold's popular Diceware list, first published in 1995.
EFF is a longtime advocate for personal security, and over the years we have continued to fight threats to user privacy and freedom. With the Summer Security Reboot, we want the public to engage with the larger questions of how one can and should control personal information in spite of high-profile attempt after attempt to compromise our devices. The world has increasingly recognized privacy and strong crypto as integral parts of protecting international human rights. A recent Amnesty International report states encryption is "an enabler of the rights to freedom of expression, information and opinion, and also has an impact on the rights to freedom of peaceful assembly, association and other human rights." Strong passphrase use is but one basic part of a diverse toolkit that can help you protect personal information, whether from identity thieves or government surveillance (ideally both!).

A message about passphrase dice from internationally renowned security technologist, author, and EFF Board Member Bruce Schneier:
We took to Twitter last week mashing up popular film titles and quotes with references to privacy and security to help draw attention to online rights issues. It resulted in a full day of #SecFlix.
Numerous supporters and even EFF staffers contributed a number of gems.
Make no mistake: EFF is on a mission to ensure that digital civil liberties are core issues everywhere and part of public discourse. While lighthearted and beautifully geeky in nature, EFF's Summer Security Reboot takes aim at accelerating the adoption of privacy practices and values. We must face the next stage of online challenges with action and the strength of our numbers. If you are not yet an EFF member for this year, I encourage you to join now to uphold civil liberties and protect yourself, your personal information, and your rights.
It’s been a rough month for Internet freedom in Russia. After it breezed through the Duma, President Putin signed the “Yarovaya package" into law—a set of radical “anti-terrorism” provisions drafted by ultra-conservative United Russia politician Irina Yarovaya, together with a set of instructions on how to implement the new rules. Russia’s new surveillance laws include some of Bad Internet Legislation’s greatest hits, such as mandatory data retention and government backdoors for encrypted communications—policies that EFF has opposed in every country where they’ve been proposed.
As if that wasn’t scary enough, under the revisions to the criminal code, Russians can now be prosecuted for “failing to report a crime.” Citizens now risk a year in jail for simply not telling the police about suspicions they might have about future terrorist acts.
But some of the greatest confusion has come from Internet service providers and other telecommunication companies. These organizations now face impossible demands from the Russian state. Now they can be ordered to retain every byte of data that they transmit, including video, telephone calls, text messages, web traffic, and email for six months—a daunting and expensive task that requires the kind of storage capacity that’s usually associated with NSA data centers in Utah. Government access to this data no longer requires a warrant. Carriers must keep all metadata for three years; ISPs one year. Finally, any online service (including social networks, email, or messaging services) that uses encrypted data is now required to permit the Federal Security Service (FSB) to access and read their services’ encrypted communications, including providing any encryption keys.
Opposition to the Yarovaya package has come from many quarters. Technical experts have been united in opposing the law. Russia’s government Internet ombudsman opposed the bill. Putin’s own human rights head, Mikhail Fedotov, called upon the Senators of Russia’s Federal Council to reject the bill. ISPs have pointed out that compliance would cost them trillions of rubles.
But now the law is here, and in force. Putin has asked for a list of services that must hand over their keys. ISPs have begun to consider how to store an impossibly large amount of data. Service providers are required to consider how to either break unbreakable encryption or include backdoors for the Russian authorities.
It is clear that foreign services will not be spared. Last week, the VPN provider, Private Internet Access (PIA), announced that they believed their Russian servers had been seized by the Russian authorities. PIA says they do not keep logs, so they could not comply with the demand, but they have now discontinued their Russian gateways and “will no longer be doing business in the region.”
Russia’s ISPs, messaging services, and social media platforms have no such choice: because they cannot reasonably comply with all the demands of the Yarovaya package, they become de facto criminals whatever their actions. And that, in turn, gives the Russian state the leverage to extract from them any other concession it desires. The impossibility of full compliance is not a bug—it’s an essential feature.
Russia is not the only nation whose lawmakers and politicians are heading in this direction, especially when it comes to requiring backdoors for encrypted communications. Time and time again, technologists and civil liberties groups have warned the United States, France, Holland, and a host of other nations that the anti-encryption laws they propose cannot be obeyed without rewriting the laws of mathematics. Politicians have often responded by effectively telling the Internet’s experts “don’t worry, you’ll work out a way.” Let us be clear: government backdoors in encrypted communications make us all less safe, no matter which country is holding the keys.
Technologists have sometimes believed that technical impossibility means that the laws are simply unworkable – that a law that cannot be obeyed is no worse than no law at all. As Russia shows, regulations that no one can comply with aren’t dead-letter laws. Instead, they corrode the rule of law, leaving a rusting wreckage of partial compliance that can be exploited by powers who will use their enforcement powers for darker and more partial ends than justice.
Russians concerned with the fall of Internet freedom, including the Society for the Protection of the Internet (IPI), have planned a protest in cities across the country on July 26. EFF will continue to follow the situation closely as it develops.
Randomly-generated passphrases offer a major security upgrade over user-chosen passwords. Estimating the difficulty of guessing or cracking a human-chosen password is very difficult. It was the primary topic of my own PhD thesis and remains an active area of research. (One of many difficulties when people choose passwords themselves is that people aren't very good at making random, unpredictable choices.)
Measuring the security of a randomly-generated passphrase is easy. The most common approach to randomly-generated passphrases (immortalized by XKCD) is to simply choose several words from a list of words, at random. The more words you choose, or the longer the list, the harder it is to crack. Looking at it mathematically, for k words chosen from a list of length n, there are n^k possible passphrases of this type. It will take an adversary about n^k/2 guesses on average to crack this passphrase. This leaves a big question, though: where do we get a list of words suitable for passphrases, and how do we choose the length of that list?
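As a sketch of this math (the tiny word list below is hypothetical; a real list like Diceware has n = 7,776 entries), generation should use a cryptographically secure random source, such as Python's `secrets` module:

```python
import secrets

# Hypothetical toy word list; a real list like Diceware has n = 7,776 entries.
words = ["apple", "banjo", "cactus", "dahlia", "ember", "fjord", "gusto", "hazel"]

def generate_passphrase(wordlist, k):
    """Choose k words uniformly at random with a cryptographically secure RNG."""
    return " ".join(secrets.choice(wordlist) for _ in range(k))

n, k = len(words), 4
print(generate_passphrase(words, k))              # e.g. "ember gusto apple hazel"
print(f"possible passphrases: n**k = {n**k}")     # 8**4 = 4096
print(f"average guesses to crack: {n**k // 2}")   # 2048
```

Note that `secrets` (rather than `random`) is the right tool here, since passphrase generation requires unpredictable randomness.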
Several word lists have been published for different purposes; thus far, there has been little scientific evaluation of their usability. The most popular is Arnold Reinhold's Diceware list, first published in 1995. This list contains 7,776 words, equal to the number of possible ordered rolls of five six-sided dice (7,776 = 6^5), making it suitable for using standard dice as a source of randomness. While the Diceware list has been used for over twenty years, we believe there are several avenues to improve its usability, and we are introducing three new lists for use with a set of five dice (as part of its Summer Security Reboot campaign, EFF is providing a dice set to donors).

Enhancements over the Diceware list
The Diceware list can provide strong security, but offers some challenges to usability. In particular, some of the words on the list can be hard to memorize, hard to spell, or easy to confuse with another word.
- It contains many rare words such as buret, novo, vacuo
- It contains unusual proper names such as della, ervin, eaton, moran
- It contains a few strange letter sequences such as aaaa, ll, nbis
- It contains some words with punctuation such as ain't, don't, he'll
- It contains individual letters and non-word bigrams like tl, wq, zf
- It contains numbers and variants such as 46, 99 and 99th
- It contains many vulgar words
- Diceware passwords need spaces to be correctly decoded, e.g. in and put are in the list as well as input.
Note that several of these problems are exacerbated for users with a soft keyboard or other typing systems that rely on word recognition. Using only valid dictionary words makes this setup much easier.

Our new "long" list
Our first new list matches the original Diceware list in size (7,776 words = 6^5), offering equivalent security for each word you choose. However, we have fixed the above problems, resulting in a list that is hopefully easy to type and remember.
We based our list off of data collected by Ghent University's Center for Reading Research. The Ghent team has long studied word recognition; you can participate yourself in their online quiz to measure your English vocabulary. This data gives us a good idea of which words are most likely to be familiar to English speakers and eliminates most of the unusual words in the original Diceware list. This data also includes "concreteness" ratings for each word, from very concrete words (such as screwdriver) to very abstract words (such as love).
We took all words between 3 and 9 characters from the list, prioritizing the most recognized words and then the most concrete words. We manually checked and attempted to remove as many profane, insulting, sensitive, or emotionally-charged words as possible, and also filtered based on several public lists of vulgar English words (for example this one published by Luis von Ahn). We further removed words which are difficult to spell as well as homophones (which might be confused during recall). We also ensured that no word is an exact prefix of any other word.
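The prefix rule in particular is easy to enforce mechanically. This is not EFF's actual tooling, just a minimal sketch of one way to drop any word that is an exact prefix of another:

```python
def remove_prefix_words(words):
    """Drop every word that is an exact prefix of another word in the list,
    so that a concatenated passphrase still decodes unambiguously."""
    sorted_words = sorted(words)
    kept = []
    for i, w in enumerate(sorted_words):
        # In sorted order, any word that w prefixes sits immediately after w.
        if i + 1 < len(sorted_words) and sorted_words[i + 1].startswith(w):
            continue  # w is a prefix of another word; drop it
        kept.append(w)
    return kept

print(remove_prefix_words(["in", "put", "input", "dog"]))
# → ['dog', 'input', 'put']  ("in" is dropped: it is a prefix of "input")
```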
The result is our own list of 7,776 words [.txt] suitable for use in dice-generated passphrases. The words in our list are longer on average (7.0 characters) than those in Reinhold's Diceware list (4.3 characters). This is a result of banning words under 3 characters as well as prioritizing familiar words over short but unusual ones.
Note that the security of a passphrase generated using either list is identical; the differences are in usability, including memorability, not in security. For most uses, we recommend generating a six-word passphrase with this list, for a strength of 77 bits of entropy. ("Bits of entropy" is a common measure for the strength of a password or passphrase. Adding one bit of entropy doubles the number of guesses required, which makes it twice as difficult to brute force.) Each additional word will strengthen the passphrase by about 12.9 bits.

Our new "short" lists
We are also introducing new lists containing only 1,296 words (1,296 = 6^4), suitable for use with four six-sided dice. By reducing the number of words in the list, we were able to use words with a maximum of five characters. This can lead to more efficient typing for the same security if it requires fewer characters to enter N short words than N-1 long words.
Passphrases generated using the shorter lists will be weaker than those from the long list on a per-word basis (10.3 bits/word). Put another way, you would need to choose more words from the short list to get security comparable to the long list—for example, eight words from the short list provide a strength of about 82 bits, slightly stronger than six words from the long list.
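The arithmetic behind these figures is straightforward: a passphrase of k words drawn uniformly from a list of n words carries k * log2(n) bits of entropy. A quick sketch reproducing the numbers above:

```python
import math

def passphrase_bits(list_size, num_words):
    """Entropy in bits of num_words words drawn uniformly at random
    from a list of list_size words."""
    return num_words * math.log2(list_size)

print(round(math.log2(7776), 1))            # 12.9 bits per word (long list)
print(round(math.log2(1296), 1))            # 10.3 bits per word (short list)
print(round(passphrase_bits(7776, 6), 1))   # 77.5 bits: six words, long list
print(round(passphrase_bits(1296, 8), 1))   # 82.7 bits: eight words, short list
```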
The first short list [.txt] is designed to include the 1,296 most memorable and distinct words. Our hope is that this approach might offer a usability improvement for longer passphrases. Further study is needed to determine conclusively which list will yield passphrases that are easier to remember.
Finally, we're publishing one more short list [.txt] with a few additional features that make the words easy to type:
- Each word has a unique three-character prefix. This means that future software could auto-complete words in the passphrase after the user has typed the first three characters
- All words are at least an edit distance of 3 apart. This means that future software could correct any single typo in the user's passphrase (and in many cases more than one typo).
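Both properties can be verified mechanically. Here's a minimal sketch (the three sample words are hypothetical, not taken from the published list) using the standard Levenshtein edit distance:

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete from a
                           cur[j - 1] + 1,             # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def check_list(words):
    """Verify unique three-character prefixes and pairwise edit distance >= 3."""
    prefixes = {w[:3] for w in words}
    assert len(prefixes) == len(words), "duplicate three-character prefix"
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            assert edit_distance(a, b) >= 3, f"{a!r} and {b!r} are too close"

check_list(["acid", "bloom", "crane"])     # passes both checks
print(edit_distance("kitten", "sitting"))  # 3
```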
We've added these features in the hope that future software, specially designed to take advantage of them, might make use of them; they offer no significant benefit today, so this list is mostly a proof-of-concept for individual users. Software developers might be able to find interesting uses for this list.

Summary
Different lists might be preferable in different situations, and that's perfectly fine. For example, you might consider using one of the short lists when you are prioritizing ease of remembering, or when you know that the highest level of passphrase strength is not necessary. This might cover a website login that offers additional protections, like two-factor authentication, and that rate-limits guesses to protect against brute force.
If you are typing the passphrase frequently (as opposed to using a passphrase database), you might prioritize reducing the length of the words. Our long list has an average length of 7.0 characters per word, and 12.9 bits of entropy per word, yielding an efficiency of 1.8 bits of entropy per character. Our short list has an average length of 4.5 characters per word, and 10.3 bits of entropy per word, yielding 2.3 bits of entropy per character. Our typo-tolerant list is much less efficient at only 1.4 bits of entropy per character. However, using a future autocomplete software feature, only three characters would need to be typed per word, in which case this would be the most efficient list to use at 3.1 bits of entropy per character typed.
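These efficiency figures are simply the ratio of entropy per word to characters typed per word; a small sketch (the inputs come from the figures above) makes the comparison reproducible:

```python
def efficiency(bits_per_word, avg_chars_per_word):
    """Bits of entropy gained per character typed."""
    return bits_per_word / avg_chars_per_word

print(round(efficiency(12.9, 7.0), 1))  # long list:  ~1.8 bits per character
print(round(efficiency(10.3, 4.5), 1))  # short list: ~2.3 bits per character
```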
You might find the shorter average word length in the original Diceware list preferable. That's perfectly fine as well, keeping in mind the caveats we mentioned about the difficulty of using this list. Note that the original Diceware list offers 3.0 bits of entropy per character and hence less typing. As discussed above, though, we feel the large number of short words in this list (including single letters and bigrams) are hard to remember, and hence a bad tradeoff to decrease typing time.
Since passphrases are individually chosen, it's okay for multiple lists to exist. In fact, this might even increase security, as it means the attacker has some uncertainty about which list was used to generate a passphrase.
We think our lists will be useful for people generating passphrases using EFF's dice (or otherwise), though they certainly aren't the last word on the matter. There's plenty of room for further research and experimentation on memorability and ways of optimizing lists and we hope people will keep exploring this area.
Support EFF's work during our Summer Security Reboot!
Ninth Circuit Panel Backs Away From Dangerous Password Sharing Decision—But Creates Even More Confusion About the CFAA
Three judges of the Ninth Circuit Court of Appeals have taken a step back from criminalizing password sharing, limiting the dangerous rationale of a decision issued by a panel of three different judges of the same court last week. That’s good, but the new decision leaves so many unanswered questions that it’s clear we need en banc review of both cases—i.e., by 11 judges, not just three—so the court can issue a clear and limited interpretation of the notoriously vague federal hacking statute at the heart of both cases, the Computer Fraud and Abuse Act (CFAA).
To recap, the court’s language in last week’s case, U.S. v. Nosal, was so broad that it seemed to make it a federal crime to use someone else’s password, even with their knowledge and permission. In the new decision, in a case called Facebook v. Power Ventures, a separate Ninth Circuit panel acknowledged that a computer user can provide another person with valid authorization to use their username and password. That’s the good news. But the decision leaves unanswered so many other questions about how the law can be interpreted, and its rationale is so confusing, that it’s an invitation for more dangerous litigation and prosecutions under the CFAA.
The CFAA makes it illegal to engage in “unauthorized access” to any computer connected to the Internet. But the statute doesn’t say what “authorized access” means or make clear where authorization must come from. As we explain in an earlier post, under the rationale of last week’s decision in Nosal II (we call it Nosal II to differentiate it from an earlier ruling in this long-running case), only the person or entity that owns the computer—not someone who just uses it or holds an account to use it—can “authorize” another person to access the computer. That would mean a spouse could not lawfully log into their partner’s bank account to pay a bill, even with their permission or at their request, so long as the spouse knows that she doesn’t have permission from the bank to access its servers. The Ninth Circuit’s rationale turned anyone who has ever used someone else’s password without the approval of the computer owner into a potential felon. But we know that people use other people’s passwords all the time for good reasons. That’s why we’re happy the Power Ventures ruling, while claiming to be consistent with Nosal II, appears to have taken a step back from that bad result.

Facebook v. Power Ventures: The Facts
In the Power Ventures appeal, the company, a social media aggregator, was given usernames and passwords from Facebook users who wanted it to help them view all their social media information in one place. Power Ventures then asked for and received permission from those users to send invitations to their contacts. Facebook objected to this and sent Power Ventures a cease and desist letter. It also blocked one of Power Ventures’ IP addresses, although the block wasn’t effective because Power Ventures had many IP addresses. The company continued to offer its social media aggregating services to Facebook users for a month or so, until Facebook blacklisted the phrase “Power.com.”
Facebook also sued Power Ventures, arguing that it violated the CFAA, the corresponding state law in California (California Penal Code § 502), and the CAN-SPAM Act—the federal law that prohibits sending commercial emails with “materially misleading” header information. More on that CAN-SPAM claim below.
The district court ruled back in 2012 that Power Ventures was liable to Facebook under the CFAA, the state law, and CAN-SPAM Act and, in 2013, ordered it and CEO Steven Vachani, personally, to pay Facebook a crazy amount—more than $3 million in damages. Power Ventures appealed, and EFF filed an amicus brief in support of the company and argued at the Ninth Circuit hearing about the danger of extending crippling civil and criminal liability on services that provide competing or follow-on innovation.

The Ninth Circuit’s Facebook v. Power Decision
The Ninth Circuit found that Power Ventures violated the CFAA when it accessed Facebook’s data after receiving the cease and desist letter, on the ground that the letter gave the company notice that Facebook had revoked its authorization to access users’ Facebook accounts. The court acknowledged that Facebook users could give Power Ventures valid authorization to access their accounts without running into a CFAA violation—the step back from Nosal II’s blanket criminalization of password sharing. That was true even though Facebook’s terms of service expressly prohibit password sharing or letting anyone else use your account. But, according to the court, “[t]he consent that Power had received from Facebook users was not sufficient to grant continuing authorization to access Facebook’s computers after Facebook’s express revocation of permission.” Because Power “unequivocally” knew that it no longer had authorization from Facebook to access Facebook’s computers and continued to do so anyway, it violated the CFAA.
So if we’ve got this right, an authorized user can designate someone to use their account even if the Terms of Service or other contractual agreement expressly forbids it, but if the computer owner then says “no” again, somehow that authority is lost and continued use is a crime. Huh?
Thankfully, the court got things right as far as Facebook’s CAN-SPAM claims were concerned. Facebook argued that the promotional messages its users sent their friends inviting them to try Power Ventures were “materially misleading”—and thus illegal—because the messages appeared to come from Facebook rather than from the users or Power Ventures. But that’s how Facebook set up its messaging system. The Ninth Circuit acknowledged, rightfully, that there was nothing misleading about the invitations. Any Facebook user who received an invitation to try Power would be able to tell that there were three separate parties involved: the friend, who sent the invite; Facebook, which facilitated the message; and Power, whose service was being promoted.

Unanswered Questions
While we’re happy the court made it clear that using another person’s passwords in the first instance is OK, even despite a contractual agreement or terms of service forbidding it, the Ninth Circuit’s Power Ventures decision raises a host of new and unanswered questions about the scope of the CFAA.
The central problem is that, in both Power Ventures and Nosal II, by turning criminal liability on what someone knows or is told, the court seems to lose sight of the original goal of CFAA—targeting individuals who break into computer systems. Indeed, in the 2012 en banc Nosal I decision, the Ninth Circuit rejected turning the CFAA “into a sweeping Internet-policing mandate,” choosing instead to “maintain the CFAA’s focus on hacking[.]” Yet in Power Ventures (and earlier in Nosal II), there was no “breaking into” a computer; in both cases, legitimate passwords were used with the permission of the account holders. As a result, the Power Ventures court stretched the law to apply where it really wasn’t meant to go, turning criminal liability on Power Ventures' knowledge that Facebook revoked its “authority” to use those absolutely still good passwords. And because these decisions reach beyond the issue of breaking into computers, they suddenly implicate questions about the application of the CFAA to public websites, which have no technological barriers to access. (The court dropped a footnote saying that it wasn’t answering this question, but the fact that it felt the need to mention this was troubling. The CFAA should not reach that far.)
More importantly, if a computer system owner doesn’t like how someone is using a computer—whether directly or through someone else—the remedy should be terminating the user’s credentials, not suing or seeking criminal indictment of the person using the legitimate credentials.
These questions remain unanswered and leave many situations unclear, convoluting the good Ninth Circuit precedent of Nosal I. And that’s important because even though Power Ventures is a civil case, the CFAA is a criminal statute and must provide adequate notice of exactly what conduct is criminalized.
We’re glad the Ninth Circuit rejected the district court’s absurd extension of the CAN-SPAM Act and stepped back a bit from Nosal II’s dangerous language. But the unanswered questions it raises for the CFAA may prove highly problematic. En banc review of both cases is necessary to bring clarity back to the Ninth Circuit’s interpretation of the CFAA—and to ensure that the law maintains its focus on computer break-ins.
Related Cases: United States v. David Nosal; Facebook v. Power Ventures
Update July 14, 2016: Last week, the federal appeals court for the Ninth Circuit ruled in favor of Chaker. The court held that Chaker's blog posts did not violate his supervised release conditions because they were not harassment or defamation. Because the court ruled in favor of Chaker on these grounds, it did not need to reach the constitutional arguments presented by amici including EFF. Still, we are pleased that Chaker will not be punished for engaging in political speech on the Internet, and we hope that this decision will encourage government officials to respect the First Amendment rights of people on supervised release from prison.
Original post of December 17, 2015:
The First Amendment protects the right of everyone to use the Internet to criticize government officials–including people on supervised release from prison.
Take the case of Darren Chaker, whose supervised release was revoked earlier this year because he criticized a law enforcement officer in a blog post. Specifically, he wrote that the officer had been “forced out” by a police agency. The government argues that Chaker violated the terms of his release, which instructed him not to “harass” anyone else, including “defaming a person’s character on the internet.” To us, this is a classic example of political speech that should be subject to the highest level of First Amendment protection.
So earlier this fall, EFF joined with other free speech groups to file an amicus brief supporting Chaker, and by extension the free speech rights of everyone else on supervised release. The brief, filed in the federal appeals court for the Ninth Circuit, argues that when the government seeks to punish speech that criticizes government officials, it must prove by clear and convincing evidence that the speaker acted with “actual malice,” meaning they knew the statement was false, or they acted with reckless disregard for whether it was false. Government must meet this high standard whether it calls the criticism “defamation,” or “intentional infliction of emotional distress,” or (as here) “harassment.”
The good news is that last week, the government’s response to the amicus brief made several significant concessions. First, the government acknowledged that a release condition against “harassment” must be limited to situations where the parolee actually intends to harass someone. Second, the government recognized that harassment does not occur when a parolee merely posts a complaint about police brutality on a message board, writes a negative Yelp review, or publishes an essay criticizing the criminal justice system. Third, the government conceded that the release condition against “defaming” someone else only applies to situations where there is harassment.
The bad news is that the government continues to insist that it may punish the defendant for criticizing a government official absent proof of actual malice. The government does so by blurring its allegations of harassment and defamation. This would eviscerate a half-century of First Amendment protection of political speech criticizing government officials. Also, the government’s overbroad definition of harassment includes actions not directed at the specific person who the government alleges was the victim of harassment.
We will continue to monitor this case. Everyone, including court-involved people, has the First Amendment right to criticize the government on the Internet.
The amici include the ACLU of San Diego and Imperial Counties, the Cato Institute, the Brechner First Amendment Project, and the First Amendment Coalition. The amici’s brief was prepared by Robert Arcamona and Patrick Carome of Wilmer Cutler Pickering Hale and Dorr LLP.
The World Wide Web Consortium has published a "Candidate Recommendation" for Encrypted Media Extensions, a pathway to DRM for streaming video.
A large community of security researchers and public interest groups has been alarmed by the security implications of baking DRM into the HTML5 standard. That's because DRM—unlike all the other technology that the W3C has ever standardized—enjoys unique legal protection under a tangle of international laws, like the US Digital Millennium Copyright Act, Canada's Bill C-11, and EU laws that implement Article 6 of the EUCD.
Under these laws, companies can threaten legal action against researchers who circumvent DRM, even if they do so for lawful purposes, like disclosing security vulnerabilities. Last summer, a who's-who of America's most esteemed security researchers filed comments with the US Copyright Office warning the agency that they routinely discovered vulnerabilities in systems from medical implants to voting machines to cars, but were advised not to disclose those discoveries because of the risk of legal reprisals under Section 1201 of the DMCA.
Browsers are among the most common technologies in the world, with literally billions of daily users. Any impediment to reporting vulnerabilities in these technologies has grave implications. Worse: HTML5 is designed to provide the kind of rich interaction that we see in apps, in order to challenge apps' dominance as control systems for networked devices. That means browsers are now intended to serve as front-ends for pacemakers and cars and home security systems. Now more than ever, we can't afford any structural impediments to identification and disclosure of browser defects.
There is a way to reconcile the demands of browser vendors and movie studios with the security of the web: last year, we proposed an extension to the existing W3C policy on patents, which says that members are forbidden from enforcing their patent rights to shut down implementations of W3C standards. Under our proposal, this policy would also apply to legal threats under laws like the DMCA. Members would agree upon a mutually acceptable, binding covenant that forbade them from using the DMCA and its global analogs to attack security researchers who revealed defects in browsers and new entrants into the browser market.
So far, the W3C has rejected this proposal, despite broad support from security and privacy professionals around the world, and despite new evidence of the need to investigate technical flaws in the EME specification. In June, security researchers in Israel and Germany revealed a showstopper bug in Chrome's implementation and promised to look at Firefox, Safari, and Edge next.
We will keep working to persuade the W3C to adopt our sensible proposal. In the meantime, we urge the security research community to subject all EME implementations to the closest possible scrutiny. The black hats who are already doing this are not bound by fear of the DMCA, and they are delighted to have an attack surface that white hats are not allowed to investigate in detail.
Even with this handicap, white hats discover serious vulnerabilities. Every discovery proves the need to let researchers examine the full scope of possible security flaws. If you are investigating a system or wish to disclose a flaw and need legal advice, please contact our intake address.
The Affordable Care Act (ACA) provisions for employee wellness programs give employers the power to reward or penalize their employees based on whether they complete health screenings and participate in fitness programs. While wellness programs are often welcomed, they put most employees in a bind: give your employer access to extensive, private health data, or give up potentially thousands of dollars a year.
Sadly, the Equal Employment Opportunity Commission’s (EEOC) new regulations, which go into effect in January 2017, rubber-stamp the ACA’s wellness programs with insufficient privacy safeguards. Because of these misguided regulations, employers can still ask for private health information if it is part of a loosely defined wellness program with large incentives for employees.
As EFF’s Employee Experience Manager, I had hoped the EEOC’s final ruling would protect employees from having to give up their privacy in order to participate in wellness programs. Upon reading the new rules, I was shocked at how little the EEOC has limited the programs’ scope. Without strict rules around how massive amounts of health information can be bought from employees and used, this system is ripe for abuse.
Employers are already using wellness programs in disturbing ways:
- The city of Houston requires municipal employees to tell an online wellness company about their disease history, drug use, blood pressure, and other delicate information or pay a $300 fine. The wellness company can give the data to “third party vendors acting on our behalf,” according to an authorization form. The information could be posted in areas “that are reviewable to the public.” It might also be “subject to re-disclosure” and “no longer protected by privacy law.”
- Plastics maker Flambeau terminated an employee’s insurance coverage when he chose not to take his work-sponsored health assessment and biometric screening.
- A CVS employee claimed she was fined $600 for not submitting to a wellness exam that asked whether she was sexually active.
- The Wall Street Journal reported in February that “third party vendors who are hired to administer wellness programs at companies mine data about the prescription drugs workers use, how they shop and even whether they vote, to predict their individual health needs and recommend treatments.”
- Castlight (a wellness firm contracted by Walmart) has a product that scans insurance claims to find women who have stopped filling their birth-control prescriptions or made fertility-related searches on their health app. It matches this data with a woman’s age to calculate the likelihood of pregnancy; she would then receive targeted emails and in-app messages about prenatal care.
The EEOC now provides guidance on the extent to which employers may offer incentives to employees to participate in wellness programs that ask them to answer disability-related questions or undergo medical examinations. The maximum allowable “incentive” or penalty an employer can offer is 30% of the total cost for self-only coverage of the plan in which the employee is enrolled. This can add up to thousands of dollars for an employee per year.
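To make the 30% cap concrete, here is a quick back-of-envelope calculation (the $7,000 annual premium below is a hypothetical figure for illustration, not a number from the rule):

```python
# Illustration of the EEOC's 30% incentive/penalty cap.
# The premium is an assumed figure, not taken from the regulation.
annual_premium = 7000.00               # hypothetical self-only plan cost
max_incentive = 0.30 * annual_premium  # largest reward or penalty allowed

print(f"On a ${annual_premium:,.0f} plan, an employee could gain or lose "
      f"${max_incentive:,.0f} per year.")
```

At a hypothetical $7,000 plan, that is $2,100 a year riding on whether an employee hands over health data.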
According to the new rule, employers may only receive information collected by a wellness program in aggregate form that does not disclose, and is not reasonably likely to disclose, the identity of specific individuals—except as necessary to administer the plan. This “as necessary to administer the plan” exception is alarming given that employers are permitted to base incentives and penalties on health outcomes and not just participation. Measuring outcomes typically involves gathering information on specific individuals over time.
The EEOC rejected a suggestion that would have allowed individuals to avoid disclosing medical information to employers if they could produce certification from a medical professional that they are under the care of a physician and that identified medical risks are under treatment. The EEOC’s stated reason was that this could undermine the effectiveness of wellness programs as a means of collecting data and was unnecessary.

Why This Matters
A statement by the American Association of Retired Persons (AARP) expressed the organization's deep disappointment with the workplace wellness program final rules:
By financially coercing employees into surrendering their personal health information, these rules will weaken medical privacy and civil rights protections.
The American Society of Human Genetics also issued a statement opposing the EEOC final ruling for weakening genetic privacy:
The new EEOC rules mean that Americans could be forced to choose between access to affordable healthcare and keeping their health information private… Employers now have the green light to coerce employees into providing their health information and that of their spouse, which in turn reveals genetic information about their children.
The ACA was touted as a campaign to put consumers back in charge of their health care. EEOC rules do anything but. Employees should have the right to refuse invasive health surveys without fear of being punished with higher healthcare costs. Incentivizing Americans to be proactive about our health is smart, but putting loads of unnecessary private information into employers’ hands is bad policy.
When EFF analyzes state legislation regulating the operation of drones, we look for a few elements. How will the bill affect law enforcement use of drones? And how will the bill impact private drone use, whether for recreation, journalism, or innovative new business applications? Will the legislation protect the public from undue surveillance? Could it restrain the public’s ability to control its own technology?
Two bills before the California legislature this session—A.B. 1820 and S.B. 868—failed our test on all counts. Not only would the legislation have harmed our civil liberties, the bills could have criminalized certain drone sports, such as the aerial dogfights that have become one of the most popular attractions at Bay Area Maker Faire.
Now, we’re happy to announce that, after months of opposition from unlikely bedfellows—including the civil liberties community, law enforcement, and business groups—these drone bills have been grounded.
Both proposals passed through their originating houses (the Assembly and Senate, respectively). But later in the legislative session, the bills died in committee. S.B. 868 failed on a vote in the Assembly Privacy and Consumer Protection Committee, while A.B. 1820 was voted down by the Senate Judiciary Committee.

Regulating Law Enforcement Drones
EFF strongly believes that police should obtain a warrant anytime they want to use a drone (with narrow emergency exceptions), but A.B. 1820 would only have required a warrant when police wanted to use a drone to surveil private property. As we told the legislature:
Given the inexpensiveness of UAS [unmanned aircraft systems or drones] and their ease of deployment (as well as continued innovation in the development of lighter-than-air UAS, which can stay aloft for days or weeks at a time), law enforcement would be perfectly free under AB 1820 to continually monitor public spaces or inexpensively track the public movements of individuals indefinitely without probable cause. Further, given the nature of aerial surveillance it would be almost impossible to ensure that any data gathered by a drone comes solely from public property (except, perhaps, deep inside a state park). As a result, warrantless use of drones to surveil public property is likely to result in a tremendous amount of “incidental” collection, while simultaneously placing public spaces under a never-ending shadow of surveillance and monitoring.
EFF also opposed the bill because it did not include language to ensure that information collected by drones in violation of state law would be suppressed in court.

Double Standards for Private Drone Use
With S.B. 868, EFF opposed the way the legislation would have treated “commercial” and “non-commercial” drone operators unequally. As we told the author, Sen. Hannah-Beth Jackson:
From a safety and privacy perspective, this approach makes absolutely no logical sense. It is true that this distinction usually makes sense when applied to manned aircraft, since commercial manned aircraft typically carry passengers or cargo, and thus the primary risk (and thus reason behind regulation) is to those passengers or cargo. However, the major risk from drones is typically only to people and property not onboard the drone. As such, whether or not the operator is being paid is a poor proxy for the potential risk to the public, and is thus also a poor proxy for whether or not the operation should be regulated to promote safety and privacy.
Further, due to the way S.B. 868 is written, this artificial distinction makes operations by non-commercial operators illegal when the very same operations would be legal if they were commercial in nature.
We also raised questions about how the bill would affect watchdogs, including news media and non-profit government accountability organizations, since it would allow the state’s Office of Emergency Services to declare no-drone zones where there is “critical infrastructure.” We explained:
However, as a result of S.B. 868, any non-profit that wishes to document, for example, a hazardous chemical spill or violation of environmental regulations would be forbidden from doing so via a drone. Similarly, the requirement that pilots obtain a permit before flying over state parks or waterways could stifle any effort by independent non-commercial operators to expose improper use of state lands.

Drone Combat Games
Both bills would have outlawed “arming” or “weaponizing” drones. While the authors may have intended their language to prohibit drones from being armed with lethal projectile weapons, the legislation was written so vaguely that it would also have criminalized harmless hobbyist activities.
To defeat the bill, EFF teamed up with the Aerial Sports League (ASL)—an organization that runs drone combat games at events like Bay Area Maker Faire and at the Innovation Hangar at San Francisco’s Palace of Fine Arts. At these competitions, drone pilots engage in dogfights involving rudimentary weapons, such as net guns or dangled pieces of wire meant to jam an opponent’s drone’s propellers. The overbroad bills could have outlawed these activities.
Ultimately, this was an issue about the right to control your own devices, and we were concerned about prosecutorial overreach. We’ve seen police around the country arrest and suspend students for activities like bringing a homemade clock to school or chewing a toaster pastry into the shape of a gun. It’s not hard to imagine overzealous law enforcement or school officials going after a student for simply attaching a ping-pong ball catapult to a drone.
Aerial Sports League and Game of Drones is a three-time winner of the “Best in Show” for Maker Faire and have developed a STEM educational program with drone combat games at the core of the curriculum. ASL is currently partnered with Hiller Aviation Museum, The Innovation Hangar at the Palace of Fine Arts, and other institutions to provide ongoing drone build-a-thon workshops for youth and adults, sharing the skills needed to build, safely fly—and register your drone with the FAA. These ASL initiatives are due in part to drone combat games’ accessibility for enthusiasts young and old as a gateway to computer programming, math, science, engineering and so many other beneficial skills. In fact, many of our competitors are youth who see drone sports as a way to pursue larger educational futures in aviation, engineering, and technology. While we understand the intent of your proposed prohibition on weaponized drones, this legislation should not criminalize innocent hobbyist activities that promote positive innovation, education and an interest in technology and engineering.

Looking Toward the Future
Like any other technology, drones can be used for a variety of innovative and exciting purposes while at the same time posing a danger to privacy and safety. The trick is striking the right balance—ensuring that new laws and regulations protect our privacy and safety without restricting the technology’s use any more than necessary. Unfortunately, these bills were an overreaction to some of the hysteria about private use of drones and didn’t strike that balance. Also, they did not sufficiently limit police use of drones. Fortunately, they were defeated.
The next time legislators want to try to regulate a new technology, we invite them to reach out to us before drafting their legislation. EFF would be happy to work with any lawmaker, educate legislators on the facts, share examples of positive and negative uses, and help write legislation that protects people’s privacy and safety without hindering innovation.
EFF’s headline-making research earlier this year showed that T-Mobile’s Binge On program wasn’t exactly working as advertised. Now, researchers at Northeastern University and the University of Southern California have published a paper confirming EFF’s findings in detail—even revealing a major weakness in the program that would allow T-Mobile customers to trick the system.
Binge On is one of T-Mobile’s zero-rating programs, in which certain types of data—videos, in this case—don’t count toward customers’ data caps. When launching Binge On late last year, T-Mobile loudly proclaimed that it “optimized” data for mobile devices, which certainly sounds like a good thing. Sadly, EFF’s test showed T-Mobile wasn’t being entirely truthful: the “optimization” was actually just throttling, and T-Mobile was also throttling some video it wasn’t zero-rating—which meant that in some cases customers were getting lower quality video and still having to pay for it. Not so good, after all.
The researchers at Northeastern and USC confirmed that Binge On works by throttling video data to 1.5Mbps without doing any sort of optimization. But the researchers went even further, showing how Binge On can result in worse-quality video (especially for mobile devices with high-resolution screens), and explaining how it could also result in decreased battery lifetime (due to the longer download times Binge On causes).
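The battery point follows from simple arithmetic: at a fixed throttle rate, the same video keeps the cellular radio busy far longer. A rough, purely illustrative calculation (the file size and unthrottled rate below are our assumptions, not figures from the paper):

```python
# Illustrative numbers only: a hypothetical 50 MB video segment,
# downloaded at Binge On's 1.5 Mbps throttle vs. an assumed 12 Mbps LTE rate.
# A longer transfer keeps the radio in its high-power state longer,
# which is the mechanism behind the battery cost described above.

def download_seconds(size_mb: float, rate_mbps: float) -> float:
    """Seconds to transfer size_mb megabytes at rate_mbps megabits/second."""
    return size_mb * 8 / rate_mbps

throttled = download_seconds(50.0, 1.5)     # roughly 267 seconds
unthrottled = download_seconds(50.0, 12.0)  # roughly 33 seconds
print(f"{throttled:.0f}s throttled vs {unthrottled:.0f}s unthrottled")
```

Under these assumed numbers, the throttled download keeps the radio active about eight times longer.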
And they didn’t stop there. They actually reverse-engineered the classifier T-Mobile uses to decide whether or not data should be zero-rated. In other words, they figured out exactly what parts of a data stream T-Mobile looks at to decide whether a flow of packets should count against a customer’s data cap, and which values trigger zero-rating. With that knowledge in hand, they also figured out how to subvert the classifier into zero-rating any data—not just video streams.

Super-Technical Digression
There was one technical discrepancy between the researchers’ findings and our findings from back in January. The researchers found that changing the “Content-Type” HTTP header from “video/mp4” to something else prevents T-Mobile from recognizing that a file is actually video, and thus causes Binge On not to throttle or zero-rate the file. Our test, on the other hand, showed that changing the file extension (and thus the Content-Type header) wasn’t sufficient—T-Mobile still recognized the file as video and throttled it.
To figure out the source of the discrepancy, we ran our test again, and also provided a packet log of our test to the researchers. They confirmed our results, and also ran some different tests to explore further what was going on. Together, we realized that both of our results were correct. That’s because in addition to matching against the “Content-Type” header, T-Mobile also scans the first response packet for the string “mp4.” (This string was present in the video file we used for EFF’s tests, since it’s part of the headers of the file itself.) If either match is found, Binge On throttles the stream. Thus, our test with different headers did show throttling, since our file had the string “mp4” in it. And the researchers’ test with different headers didn’t show throttling, because the content payload in their test didn’t include the magic string.

Non-CS Folks Can Start Reading Again
Either way, the fundamental point is that T-Mobile is doing deep-packet inspection to support a brittle zero-rating service that discriminates against edge providers who don’t want to make a private deal with T-Mobile. As a result, Binge On throttles—not optimizes—video regardless of whether or not it’s zero-rated, sometimes resulting in a poorer video streaming experience for T-Mobile customers. We said it back in January, and the researchers at Northeastern and USC have done a great job confirming it and going much, much farther to document it methodically and in rigorous detail.
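The matching behavior described in the digression above boils down to two checks. Here is a minimal sketch of the inferred logic (hypothetical Python reconstructed from the findings above, not T-Mobile’s actual code; the function and constant names are ours):

```python
# Hypothetical reconstruction of the inferred Binge On video classifier:
# a flow is treated as video (throttled, and zero-rated if the provider
# participates) when EITHER the Content-Type header identifies video
# OR the string "mp4" appears in the first response packet.

VIDEO_CONTENT_TYPES = {"video/mp4"}  # illustrative; the real list is unknown

def treated_as_video(content_type: str, first_packet: bytes) -> bool:
    """Return True if the flow would be throttled under the inferred rule."""
    if content_type in VIDEO_CONTENT_TYPES:
        return True
    # Fallback substring scan: this is why EFF's renamed test file, whose
    # own container headers contained "mp4", was still throttled.
    return b"mp4" in first_packet

# EFF's test: header changed, but the MP4 container bytes still say "mp4"
assert treated_as_video("application/octet-stream", b"\x00\x00\x00\x18ftypmp42")
# Researchers' test: header changed AND no magic string in the payload
assert not treated_as_video("application/octet-stream", b"<html>hello</html>")
```

The brittleness is visible in the sketch: flipping either check subverts classification in both directions, which is exactly how the researchers tricked the system into zero-rating arbitrary data.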
While T-Mobile has made some positive changes since January (they’ve made it much easier to disable or re-enable Binge On, they’re now allowing edge providers to opt out, and their explanation of how Binge On works is more accurate), Binge On still suffers from two fundamental violations of net neutrality principles. First, it only applies to video, which means other large downloads or types of streaming data don’t get the same zero-rating treatment, thus putting artificial limits on how customers can choose to use their data. And second, it still puts T-Mobile in the position of acting as gatekeeper, forcing video providers to ask T-Mobile’s permission to get zero-rated. That’s an extra barrier that shouldn’t exist.
One of the hardest pills to swallow about the Trans-Pacific Partnership (TPP) is that our opinion about it (and yours) really isn't worth much. When we look at the only three industries that have reportedly been holding up passage of the deal through Congress—big pharma, big tobacco and big finance—we can reach no other conclusion. That's not the way it should be, but after five years of us constantly battling this undemocratic agreement with very little to show for it, that seems to be the way it is.
This makes us angry. And when we get angry, we like to rock.
That's why we're supporting Rock Against the TPP, an ambitious free music festival and rally around the country, principally organized by our friends at Fight for the Future, in collaboration with guitarist Tom Morello (of Rage Against the Machine, Prophets of Rage, and Audioslave) and his new label Firebrand Records. Joining Morello to headline the tour will be hip hop star Talib Kweli, actress Evangeline Lilly (star of Lost and the Hobbit), and a diverse line-up of other big-name acts (check out the full line-up in each city below).
But Rock Against the TPP is more than just an opportunity for us to rock out and vent our frustrations about this toxic deal—it could also be the key to finally sinking it once and for all. How? Because although big money unfortunately holds great power over our representatives in Congress, there's one thing that means even more to them—the voice of the people. But only when it's loud enough—and that's where the idea of a music festival comes in. If we can get thousands of ordinary music lovers to rise up against the TPP, this may be just the kind of groundswell of public sentiment that politicians can no longer ignore.
The dates and lineup for these free concerts include:
- July 23-24 in Denver, Colorado featuring Tom Morello, Evangeline Lilly, Anti-Flag (acoustic), Jonny 5 and Brer Rabbit of Flobots, Downtown Boys, Taina Asili, Ryan Harvey, Son of Nun, Lia Rose, and Evan Greer.
- July 30 in San Diego, California featuring Jolie Holland, Evangeline Lilly, Taina Asili, Lia Rose, Bonfire Madigan, and Evan Greer.
- August 19 in Seattle, Washington featuring Talib Kweli, Evangeline Lilly, Anti-Flag (acoustic), Downtown Boys, Taina Asili, Sihasin, Bell's Roar, and Evan Greer.
- August 20 in Portland, Oregon featuring headliners to be announced soon plus Evangeline Lilly, Anti-Flag (acoustic), Downtown Boys, Taina Asili, Sihasin, Bell's Roar, and Evan Greer.
And more dates are yet to come!
One of the most exciting things about these concerts—apart from the awesome lineup and the fact that entry is absolutely free—is that each event will also include a teach-in, with experts to inform and energize the crowd about how they can take back control of their democracy, and oppose this corporate-driven agreement. In particular, EFF hopes to have a speaker at each of these events to raise the alarm about how the TPP will affect your digital rights, by locking in the harshest aspects of U.S. copyright law and extending them across the world, while failing to meaningfully protect fair use or a free and open Internet.
If you'd like to join us at Rock Against the TPP, and if you can make it to Denver, San Diego, Seattle, or Portland, then all you need to do is get your free tickets now. Join us there to hear some awesome, powerful music from some of the tightest outfits in the country, while also feeling good about standing up for your rights and for your democracy. Oh, and one more thing—don't come by yourself. Tell all of your friends, share the news on social media, and help us send our representatives a loud and clear message—our opinions about the TPP do matter.
This week, the Ninth Circuit Court of Appeals, in a case called United States v. Nosal, held 2-1 that using someone else’s password, even with their knowledge and permission, is a federal criminal offense. This dangerous ruling threatens to upend a good decision that the Ninth Circuit sitting en banc—i.e., with 11 judges, not just 3—made in 2012 in the same case. EFF filed an amicus brief in the case and our arguments were echoed by the strong dissent, authored by Judge Stephen Reinhardt. We’re pleased that a further appeal is planned and will be supporting it as well.
This decision turns on the notorious Computer Fraud and Abuse Act (CFAA) and supports one of the most troubling applications of the law—prosecutions based on password sharing. As EFF has long warned, read broadly, the CFAA can be used to turn millions of ordinary computer users into criminals. This leaves innocent people to only hope that a prosecutor will not decide to throw the book at them, as prosecutors have been known to do in CFAA cases. Carmen Ortiz, a federal prosecutor, did exactly that to our friend Aaron Swartz. This threat underscores the need for courts to course correct—to narrowly interpret the statute’s overbroad language—or, alternatively, for Congress to step in and clarify the vague terms. For instance, what does “authority” mean in the context of our increasingly interconnected world, where we use someone else’s computer every single day for our email, our entertainment, our social networks, our banking, our health care, and more?
This appeal involves whether David Nosal, a former employee of executive recruiting firm Korn/Ferry, violated the CFAA when other Korn/Ferry ex-employees, on Nosal’s behalf, used the password of a current employee, with her permission, to access an internal company database. This occurred after the company had expressly revoked Nosal’s own login credentials to prevent him from accessing the database.
Like most companies, Korn/Ferry prohibited its employees from sharing passwords. The same restriction is found in the EULAs and Terms of Service of many online services—everything from banks to social networks. And things were looking good on this front in the Ninth Circuit. As noted above, in the earlier version of this same case, the Ninth Circuit, sitting en banc, ruled that violations of use restrictions by current employees themselves cannot give rise to CFAA liability. Regardless, a jury then convicted Nosal under three CFAA counts involving password sharing, along with trade secret theft under the Economic Espionage Act, because the access was done not by a current employee directly but by someone else using her username and password.
The CFAA makes it illegal to engage in “unauthorized access” to a computer connected to the Internet. In this appeal, the central question turned on what the undefined term “authorized access” means for purposes of the statute. More directly, since the people who did the access were not the original users (as in Nosal I), it turned on whether a user of a computer with legitimate login credentials can grant “authority” to a third party to access the computer, or if authority must be granted by the owner of the computer.
Nosal’s colleagues had the authority of an authorized user, the current employee who lent her credentials. Thus, if “authority” can come from the account holder—as with a wife who lends her bank credentials to her husband to pay a bill, a college student who uses a parent’s Hulu or Amazon password, or someone who checks Facebook for a sick friend—then Nosal and his colleagues did not violate the CFAA. And removing CFAA liability would not let Nosal off scot-free: the jury also found Nosal guilty of violating federal trade secret laws.
But the Ninth Circuit ruled that only the computer owner can “authorize” someone to access a computer, not a user or account holder. It said that “authorize” means “permission” and that Nosal didn’t have permission from Korn/Ferry. Worse, the court held that this interpretation of “authorize”—as meaning permission from only the computer owner and not an authorized computer user—was completely clear from the text of the statute. As a result, it said that the important rule requiring vague criminal statutes to be interpreted narrowly, called the Rule of Lenity, didn’t apply.
Despite the court’s assertions, the fact that “authority” means “permission” doesn’t really clear things up. Nosal’s colleagues had permission—just from the authorized user, not the owner. Judge Reinhardt, writing in dissent in Nosal II, recognized this lack of clarity:
The majority’s (somewhat circular) dictionary definition of “authorization” – “permission conferred by an authority” – hardly clarifies the meaning of the text. While the majority reads the statute to criminalize access by those without “permission conferred by” the system owner, it is also proper (and in fact preferable) to read the text to criminalize access only by those without “permission conferred by” either a legitimate account holder or the system owner.
While the majority opinion said that the facts of this case “bear little resemblance” to the kind of password sharing that people often do, Judge Reinhardt’s dissent notes that it fails to provide an explanation of why that is. Using an analogy in which a woman uses her husband’s user credentials to access his bank account to pay bills, Judge Reinhardt noted: “So long as the wife knows that the bank does not give her permission to access its servers in any manner, she is in the same position as Nosal and his associates.” As a result, although the majority says otherwise, the court turned anyone who has ever used someone else’s password without the approval of the computer owner into a potential felon.
As Judge Reinhardt recognized, the CFAA’s “without authorization” language is decidedly not clear-cut, and not just with regard to password sharing. We’ve been pushing hard for CFAA reform for years precisely because the law’s language is so vague, and its provisions so harsh, that it scares security researchers out of publishing important findings. It also gives prosecutors broad discretion to bring criminal charges for behavior that in no way qualifies as “hacking.” Judge Reinhardt correctly points out that the majority “loses sight of the anti-hacking purpose of the CFAA, and despite our warning, threatens to criminalize all sorts of innocuous conduct engaged in daily by ordinary citizens.”
Judge Reinhardt was also right to recognize the serious implications of the majority’s holding. With the onset of the Internet of Things, everything from refrigerators and toasters to toilets and toothbrushes will be—if they aren’t already—connected to the Internet. The CFAA’s scope is tied to “protected computers,” which is broadly defined to include anything that goes online, so the law will therefore soon apply to almost every household appliance and every use of the cloud. As a result, what started with the criminalization of password sharing in the context of a work computer will have even farther-reaching consequences. And such far-reaching consequences are precisely why we’ll be filing another amicus brief in support of the Ninth Circuit rehearing this case.

Related Cases: United States v. David Nosal