Aggregated News

EFF Joins Broad Coalition of Groups to Protest the TPP in Washington D.C. - Sat, 21/11/2015 - 06:19

We were out on the streets this week to march against the Trans-Pacific Partnership (TPP) agreement in the U.S. capital. We were there to mark the beginning of a unified movement of diverse organizations calling on officials to review and reject the deal based on its substance, which we can finally read and dissect now that the final text has been officially released.

Image of the final, officially-released version of the TPP agreement printed double-sided, taken at the Public Citizen Access to Medicines office. This photo by Maira Sutton can be reused under CC-BY 4.0

Contained within the 6,000-plus pages of the completed TPP text is a series of provisions that empower multinational corporations and private interest groups at the expense of the public interest. Civil society groups represent diverse constituencies, so while our specific objections to the TPP may differ, we commonly recognize that this is a toxic, undemocratic deal that must be stopped at all costs.

Our TPP protest signs, slogans based on suggestions from Twitter users @ronmexicolives and @GabeNicholas. This photo by Maira Sutton can be reused under CC-BY 4.0

So on Monday, we kicked off the new phase of TPP campaigning by calling on members of the U.S. Congress to reject the entire deal in the ratification vote expected in the coming months.

Beginning of the rally in front of the Chamber of Commerce in downtown Washington D.C. This photo by Maira Sutton can be reused under CC-BY 4.0

Roughly two hundred people came out to meet in front of the Chamber of Commerce. Some organizers and leading activists gave speeches about the impacts of the TPP on our local and global communities. Maira Sutton, EFF's Global Policy Analyst, spoke about the TPP's restrictive digital policy provisions, which empower Hollywood and other corporations while doing little to nothing to safeguard the public's rights on the Internet or over our digital devices. Other speakers discussed how the TPP would impact environmental protections and raise the costs of life-saving medicines and treatments.

We then started the march, with large banners and people carrying dozens of toilet-paper-roll-shaped lanterns with the words "flush the TPP" written across them.

This photo by Maira Sutton can be reused under CC-BY 4.0

The rally picked up many more people as we snaked around the downtown area and marched towards the Ronald Reagan Building and International Trade Center:

This photo by Maira Sutton can be reused under CC-BY 4.0

Another rally was held on Tuesday morning, where we marched to each of the TPP countries' embassies to demonstrate our support for those who have been protesting the agreement in other regions of the world. Protesters carried a 10-foot-tall figure of Mr. Monopoly, who puppeteered the flags of the 12 participating TPP countries. Others carried flags reading "stop TPP" in all the languages of the TPP countries, and a gigantic globe of the earth on their shoulders to signify our common responsibility for the rights and interests of people and environments worldwide:

This photo by Maira Sutton can be reused under CC-BY 4.0

This photo by Maira Sutton can be reused under CC-BY 4.0

People from all over the United States came to attend these events in DC this week. We met people from Texas, Alabama, Florida, North Carolina, Michigan, and Washington state. They all traveled hundreds or thousands of miles to voice their opposition to the TPP, as well as to the other secretive trade deals that harm our digital rights and actively erode transparent, public-interest-driven policymaking.

While we had a good turnout of several hundred people at these events in the capital, a recent poll showed that 60% of people in the United States have no opinion on the TPP. Clearly, we still have a lot of work to do to make more people in the United States aware of this deal and actively working to stop it before it goes to Congress.

Stay tuned as we develop more materials and resources to spread the word about the TPP's impacts on your digital rights. For now, you can start by taking this action to urge your lawmakers to call a hearing on the contents of the TPP that will impact your digital rights, and more importantly, to vote this deal down when it comes to them for ratification.

Related Issues: Fair Use and Intellectual Property: Defending the Balance, International, Trade Agreements and Digital Rights, Trans-Pacific Partnership Agreement

Unintended Consequences, European-Style: How the New EU Data Protection Regulation will be Misused to Censor Speech - Sat, 21/11/2015 - 06:05

Europe is very close to the finishing line of an extraordinary project: the adoption of the new General Data Protection Regulation (GDPR), a single, comprehensive replacement for the 28 different laws that implement Europe's existing 1995 Data Protection Directive. More than any other instrument, the original Directive has created a high global standard for personal data protection, and led many other countries to follow Europe's approach. Over the years, Europe has grown ever more committed to the idea of data protection as a core value. The Union's Charter of Fundamental Rights, legally binding on all EU states since 2009, lists the “right to the protection of personal data” as a separate and equal right alongside privacy. The GDPR is intended to update and maintain that high standard of protection, while modernising and streamlining its enforcement.

The battle over the details of the GDPR has so far mostly been a debate between advocates pushing to better defend data protection and companies and other interests that find consumer privacy laws a hindrance to their business models. Most of the compromises between these two groups have already been struck.

But lost in that extended negotiation has been another aspect of the public interest. By concentrating on privacy, pro or con, the GDPR as it stands omits sufficient safeguards for another fundamental right: the right to freedom of expression, “to hold opinions and to receive and impart information... regardless of frontiers”.

It seems not to have been a deliberate omission. In their determination to protect the personal information of users online, the drafters of the GDPR introduced provisions that streamline the erasure of such information from online platforms—while neglecting the users who published that information to those platforms in exercise of their own human right of free expression, and the audiences who have the right to receive it. Almost all digital rights advocates missed the implications, and corporate lobbyists didn't much care about the ramifications.

The result is a ticking time-bomb that will be bad for online speech, and bad for the future reputation of the GDPR and data protection in general.

Europe's data protection principles include a right of erasure, which has traditionally been about the right to delete data that a company holds on you, but has been extended over time to include a right to delete public statements that contain information about individuals that is “inadequate, irrelevant or excessive”. The first widely-noticed sign of how this might pose a problem for free speech online came from the 2014 judgment of the European Court of Justice, Google Spain v. Mario Costeja González—the so-called Right to Be Forgotten case.

We expressed our concern at the time that this decision created a new and ambiguous responsibility upon search engines to censor the Web, extending even to truthful information that has been lawfully published.

The current draft of the GDPR doubles down on Google Spain, and raises new problems. (The draft currently under negotiation is not publicly available, but July 2015 versions of the provisions that we refer to can be found in this comparative table of proposals and counter-proposals by the European institutions [PDF]. Article numbers referenced here, which will likely change in the final text, are to the proposal from the Council of the EU unless otherwise stated.)

First, it requires an Internet intermediary (which is not limited to a search engine, though the exact scope of the obligation remains vague) to respond to a request by a person for the removal of their personal information by immediately restricting the content, without notice to the user who uploaded that content (Articles 4(3a), 17, 17a, and 19a). Compare this with DMCA takedown notices, which include a notification requirement, or even the current Right to Be Forgotten process, which gives search engines some time to consider the legitimacy of the request. In the new GDPR regime, the default is to block.

Then, after reviewing the (also vague) criteria that balance the privacy claim against other legitimate interests and public interest considerations such as freedom of expression (Articles 6.1(f), 17a(3) and 17.3(a)), and possibly consulting with the user who uploaded the content if doubt remains, the intermediary either permanently erases the content (which, for search engines, means removing their link to it) or reinstates it (Articles 17.1 and 17a(3)). If it does erase the information, it is not required to notify the uploading user of having done so, but it is required to notify any downstream publishers or recipients of the same content (Articles 13 and 17.2), and must apparently also disclose any information that it has about the uploading user to the person who requested its removal (Articles 14a(g) and 15(1)(g)).
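To make the mechanics concrete, here is a minimal sketch, in Python, of the workflow these articles appear to impose on an intermediary. The names, structure, and simplifications are ours, purely for illustration; the draft specifies legal obligations, not code.

    from dataclasses import dataclass, field

    @dataclass
    class Content:
        content_id: str
        uploader: str
        downstream: list = field(default_factory=list)  # syndication recipients
        restricted: bool = False
        erased: bool = False

    def privacy_outweighs_free_expression(content: Content) -> bool:
        # The balancing criteria (Articles 6.1(f), 17a(3), 17.3(a)) are vague;
        # an intermediary facing heavy penalties for a wrongful refusal has
        # every incentive to answer "yes" whenever in doubt.
        return True

    def handle_erasure_request(content: Content, complainant: str) -> None:
        # Step 1 (Articles 4(3a), 17, 17a, 19a): restrict immediately,
        # with no notice to the uploading user. The default is to block.
        content.restricted = True

        # Step 2: weigh the privacy claim against freedom of expression
        # and other public interest considerations.
        if privacy_outweighs_free_expression(content):
            # Step 3a (Articles 17.1, 17a(3)): erase permanently.
            content.erased = True
            # Notify downstream publishers and recipients (Articles 13, 17.2),
            # but not the uploading user.
            for recipient in content.downstream:
                print(f"notify {recipient}: erase {content.content_id}")
            # Disclose what is known about the uploader to the complainant
            # (Articles 14a(g), 15(1)(g)).
            print(f"disclose uploader {content.uploader!r} to {complainant!r}")
        else:
            # Step 3b: reinstate. Note the asymmetry: restriction was
            # instant, reinstatement comes only after the intermediary's
            # own internal review.
            content.restricted = False

The point of the sketch is the asymmetry: restriction is unconditional and immediate, while every safeguard for the speaker is discretionary and after the fact.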

Think about that for a moment. You place a comment on a website which mentions a few (truthful) facts about another person. Under the GDPR, that person can now demand the instant removal of your comment from the host of the website, while that host determines whether it might be okay to still publish it. If the host's decision goes against you (and you won't always be notified, so good luck spotting the pre-emptive deletion in time to plead your case to Google or Facebook or your ISP), your comment will be erased. If that comment was syndicated, by RSS or some other mechanism, your deleting host is now obliged to let anyone else know that they should also remove the content.

Finally, according to the existing language, while the host is dissuaded from telling you about any of this procedure, it is compelled to hand over personal information about you to the original complainant. So this part of the EU's data protection law would actually release personal information!

What are the incentives for the intermediary to stand by the author and keep the material online? If the host fails to remove content that a data protection authority later determines it should have removed, it may become liable to astronomical penalties of €100 million or up to 5% of its global turnover, whichever is higher (European Parliament proposal for Article 79).
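To put those figures in perspective, take a hypothetical intermediary with €10 billion in annual global turnover (our number, invented for the arithmetic): its exposure would be the higher of €100 million and 5% × €10 billion, i.e. €500 million, for a single content decision that a data protection authority later disagrees with.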

That means there is enormous pressure on the intermediary to take information down if there is even a remote possibility that the information has indeed become “irrelevant”, and that countervailing public interest considerations do not apply.

These procedures are deficient in many important respects, a few of which are mentioned here:

  • Contrary to principle 2 of the Manila Principles on Intermediary Liability, they impose an obligation on an intermediary to remove content prior to any order by an independent and impartial judicial authority. Indeed, the initial obligation to restrict content arises even before the intermediary itself has had an opportunity to substantively consider the removal request.
  • Contrary to principle 3 of the Manila Principles, the GDPR does not set out any detailed minimum requirements for requests for erasure of content, such as the details of the applicant, the exact location of the content, and the presumed legal basis for the request for erasure, which could help the intermediary to quickly identify baseless requests.
  • Contrary to principle 5, there is an utter lack of due process for the user who uploaded the content, either at the stage of initial restriction or before final erasure. This makes the regime even more likely to result in mistaken over-blocking than the DMCA, or its European equivalent the E-Commerce Directive, which do allow for such a counter-notice procedure.
  • Contrary to principle 6, there is precious little transparency or accountability built into this process. The intermediary is not, generally, allowed to publish a notice identifying the restriction of particular content to the public at large, or even to notify the user who uploaded the content (except in difficult cases).

More details of these problems, and more importantly some possible textual solutions, have been identified in a series of posts by Daphne Keller, Director of Intermediary Liability at the Center for Internet and Society (CIS) at Stanford Law School. However, at this late stage of the negotiations over the GDPR, now in a process of “trialogue” between the European Union institutions, it will be quite a challenge to effect the necessary changes.

Even so, it is not too late yet: proposed amendments to the GDPR are still being considered. We have written a joint letter with ARTICLE 19 to European policymakers, drawing their attention to the problem and explaining what needs to be done. We contend that the problems identified can be overcome by relatively simple amendments to the GDPR, which will help to secure European users' freedom of expression, without detracting from the strong protection that the regime affords to their personal data.

Without fixing the problem, the current draft risks sullying the entire GDPR project. Just like the DMCA takedown process, these GDPR removals won't be used only for the limited purpose they were intended for. Instead, they will be abused to censor authors and invade the privacy of speakers. A GDPR without fixes will damage the reputation of data protection law as effectively as the DMCA permanently tarnished the intent and purpose of copyright law.

Files: EFF and ARTICLE 19 comment on the GDPR to EU policymakers
Related Issues: Section 230 of the Communications Decency Act, International, International Privacy Standards

New Report Rates Peruvian ISPs: Who Defends Your Data? - Sat, 21/11/2015 - 00:08

The Peruvian digital rights organization Hiperderecho, together with the Electronic Frontier Foundation, today launched ¿Quién Defiende Tus Datos? (Who Defends Your Data?), a report that evaluates the privacy practices of the digital communication companies that Peruvians use every day. Along with similar reports published earlier this year in Colombia and Mexico, this investigation is part of a larger series of evaluations across Latin America that is based on EFF's annual Who Has Your Back? report and adapted for local realities and needs. The reports compare phone companies and Internet Service Providers to determine which ones stand by their users when responding to government requests for personal information.

Peru is experiencing a digital revolution; its citizens are increasingly using the Internet and electronic devices to exercise free speech, organize social movements, and gather information. As more and more Peruvians use mobile phones and computers to access the Internet, more of their private data gets shared among the companies who provide these services. In July 2015, the government took advantage of this shift by proposing brand new surveillance laws that compel ISPs to retain metadata for a certain period of time and allow warrantless access to geolocation data in emergency cases.

As such, Hiperderecho has released the ¿Quién Defiende Tus Datos? report, which evaluates whether Peruvian ISPs and telephone companies stand by their customers when the government knocks at their door demanding user data. From its inception, this project has had two main goals: to provide users with a clear assessment of which telecommunications companies are adopting best practices to protect their users’ privacy, and to provide companies with guidance and recommendations on how they can improve their privacy practices.

In its report, Hiperderecho analyzed whether companies publish appropriate, easy-to-understand privacy policies on their websites and whether the outlined practices are sufficient to inform users about how the companies treat government requests.

Evaluation criteria
  1. Privacy Policy: To earn a star, a company must have published a privacy policy that is easy to understand. It should inform the reader about what data is collected from them, how long it is stored, and describe the guidelines and procedures the company has in place when an authority requests the data. Partial compliance was rewarded with half a star.
  2. Judicial Warrant: Companies earned a star in this category if they required the government to obtain a warrant from a judge before handing over communications (either content or metadata). Compliance with this requirement for the content of communications, but not for metadata, earned a company a half star.
  3. User notification: To earn a star in this category, companies must promise to inform their customers of a government request at the earliest moment permitted by the law. Companies may issue their own notifications, through whatever means of communication they choose, in parallel with the official ones the government sends after a surveillance measure has taken place.
  4. Transparency: This category looked for companies publishing transparency reports about government requests for user data. To earn a full star, the report must provide useful data about how many requests have been received and complied with, including details about the type of requests, the government agencies that made the requests, and the reasons provided by the authority. Partial compliance is rewarded with a half star.
  5. Commitment to privacy: This star recognizes companies who have challenged legislation that permits mass surveillance or surveillance that allows government access without judicial safeguards, as well as those that have publicly taken a position in favor of their users’ privacy before congress and other regulatory bodies.
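Taken together, the five categories form a simple additive rubric: each is worth up to one star, with partial credit possible. The sketch below, in Python, shows how such a tally might work; the category keys mirror the criteria above, but the company data is invented for illustration and is not drawn from the report's findings.

    # Hypothetical tally for a "Who Defends Your Data?"-style rubric.
    # Scores per category are star fractions: 0, 0.5, or 1 (the Spanish
    # edition of the report also mentions quarter stars).

    CATEGORIES = [
        "privacy_policy",
        "judicial_warrant",
        "user_notification",
        "transparency",
        "commitment_to_privacy",
    ]

    def total_stars(scores: dict) -> float:
        """Sum the star fractions across the five categories (maximum 5.0)."""
        return sum(scores.get(category, 0.0) for category in CATEGORIES)

    # Invented example: a partial privacy policy, plus a warrant
    # requirement for content (but not metadata).
    example_company = {
        "privacy_policy": 0.5,
        "judicial_warrant": 0.5,
    }

    print(total_stars(example_company))  # 1.0 out of a possible 5.0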
The results

Results from Peruvian ISPs' privacy protections

Most of the companies have yet to earn a good evaluation in this first edition of the report, with some of them not even obtaining partial stars. As a result, telecommunications companies in Peru still have a long way to go to ensure the privacy of their users’ communications and personal data. In categories like “Transparency Reports” and “User Notification Procedures,” no companies were awarded a star. In several cases, companies limited themselves to publishing privacy policies that neglected to include either what kind of data they were collecting or how long they would be storing it.

Peru’s recent adoption of a new data protection law has forced companies to disclose their data collection practices every time they sign up new users, but the law doesn’t compel them to provide a more comprehensive accounting of the data they collect as a by-product of the use of the service. Hence, there is little information on how companies treat the information they collect from users, such as IP addresses, traffic logs, and geolocation data, among others.

This report asks companies to stand with their customers by implementing best practices to the fullest extent permitted by law. However, one of its key findings is that certain legal restrictions in Peruvian national law may prevent operators from adopting internationally-recognized best practices for user notice, which are designed to empower users to defend their own privacy.

According to the Criminal Procedure Code and the rules of the national intelligence system, ISPs and mobile companies are compelled to keep government access requests confidential. Accordingly, the companies may be prevented from notifying their users up front. However, there’s still much more that companies could do within the scope of their legal obligations. Under Peruvian law, courts must notify citizens after a surveillance measure has expired and, when this happens, companies could contact them in parallel through email or text message to call their attention to the notification. This would allow citizens to exercise their right to oppose and appeal any surveillance measure previously issued by the courts.

Some regional companies have better practices in countries other than Peru. For example, most of the Mexican companies, including Telmex (a subsidiary of América Móvil), have a privacy policy published on their websites. However, the Peruvian website of Claro, another América Móvil subsidiary, does not publish this information.

Peruvian companies still have a long way to go in protecting customers’ personal data and being transparent about who has access to it. Hiperderecho expects to release this report annually to incentivize companies to improve transparency and protect users’ data. If privacy policies are made accessible and understandable, Peruvians will know how their personal data is used and how it is controlled by ISPs, so they can make smarter consumer decisions.

Download report

Read the full Who Defends Your Data? report [Spanish] [PDF].

Related Issues: International, Surveillance and Human Rights, Privacy

New Report Shows Which Peruvian ISPs Safeguard Their Users' Privacy - Fri, 20/11/2015 - 08:29

The Peruvian digital rights organization Hiperderecho, together with the Electronic Frontier Foundation, today launched “¿Quién Defiende Tus Datos?” (Who Defends Your Data?), a report that evaluates the privacy practices of the digital communication companies Peruvians use every day. Along with similar reports released earlier this year in Colombia and Mexico, these investigations form part of a broad series of evaluations across Latin America based on EFF's annual “Who Has Your Back?” report and adapted to local needs and realities. The report puts telephone and Internet companies under scrutiny to determine which ones stand up for their users when responding to government requests for personal information.

Peru is experiencing a digital revolution: its citizens are increasingly using the Internet and electronic devices to exercise the right to free expression, organize social movements, and obtain information. As more Peruvians use mobile phones and computers to access the Internet, more and more private data is shared among the companies that provide these services. In July 2015, however, the government took advantage of this shift by proposing a law that obliges Internet Service Providers (ISPs) to retain communications metadata for a period of time and permits warrantless access to geolocation data in emergency cases.

Against this backdrop, Hiperderecho has released the “¿Quién Defiende Tus Datos?” report, which evaluates whether Peruvian ISPs and telephone companies protect their users when the government knocks at their doors demanding customers' personal data. From its inception, this project has had two main goals: to give users a clear assessment of which telecommunications companies are adopting best practices to protect privacy, and to give companies guidance and recommendations on how they can improve their privacy practices.

In this report, Hiperderecho analyzed whether companies publish adequate, easy-to-understand privacy policies on their official websites and whether the practices described there are sufficient to inform users about how government requests are handled. There are five evaluation criteria, and the score awarded is displayed as a full star, a half star, or a quarter star.

Evaluation criteria
  1. Privacy policy: To earn a star, the evaluated company must have published a privacy policy that is easy to understand. It must inform users about what data is collected about them and for how long it is stored, and describe the guidelines and procedures the company has in place when an authority requests its users' personal data. Partial compliance was rewarded with half a star.
  2. Judicial warrant: Companies earn a star in this category if they require a judge's order before handing over communications (whether content or metadata). Compliance with this requirement for the content of communications, but not for metadata, earns half a star.
  3. User notification: To earn a star in this category, companies must commit to informing their customers of government data requests at the earliest moment permitted by law. They may issue parallel notifications, through different means of communication, alongside the official ones the government sends after a surveillance measure has taken place.
  4. Transparency reports: This category looks for companies that publish transparency reports about government requests for user data. To earn a full star, the report must provide useful data on the number of requests received and complied with, including details about the type of requests, the government agencies that made them, and the reasons given by the authority. Partial compliance is rewarded with a half star.
  5. Commitment to privacy: This star recognizes companies that have challenged legislation permitting mass surveillance, or surveillance that authorizes government access to data without judicial safeguards, as well as those that have publicly taken a position in favor of their users' privacy before Congress or other regulatory bodies.
The results

Results of QDTD Peru

Most of the companies have yet to earn a good evaluation in this first edition of the report, and some did not even obtain half a star. As a result, telecommunications companies in Peru have a long way to go to protect the privacy of their users' communications data. In categories such as “Transparency Reports” and “User Notification,” no company earned a star. In several cases, companies limited themselves to publishing privacy policies that did not include what kind of data they collect or for how long the data is stored.

Peru's recent approval of a new data protection law has obliged companies to disclose their data collection practices every time they sign up new users, but the law does not oblige them to provide a more complete accounting of the data they collect as a by-product of the use of the service. Consequently, little is known about how they treat the information they collect from users, such as IP addresses, traffic logs, and geolocation data, among others.

The report urges companies to stand with their customers by implementing more and better practices to the extent permitted by law. However, one of its main findings is that certain legal restrictions in Peruvian national law may prevent telecommunications operators from adopting internationally recognized best practices for user notification, which are designed to empower users to defend their own privacy.

Under the Criminal Procedure Code and the rules of the national intelligence system, mobile phone and Internet companies are obliged to keep government access requests confidential. Consequently, companies may be barred from notifying their users in advance. However, there is still much more these companies could do within their legal obligations. Under Peruvian law, the courts must notify citizens after a surveillance measure has expired and, when this happens, companies could contact users in parallel by email or telephone to draw their attention to the notification. This would allow citizens to exercise their right to oppose and appeal any surveillance measure previously issued by the courts.

The report reveals that some companies with a regional footprint have better practices in countries other than Peru. For example, most Mexican companies, including Telmex (a subsidiary of América Móvil), have a privacy policy published on their websites. However, Claro's website in Peru does not publish this information.

Peruvian companies still have a long way to go toward protecting their customers' personal data and being transparent about who has access to it. Hiperderecho expects to publish this report annually to encourage companies to improve transparency and the protection of user data. If privacy policies are made accessible and understandable, Peruvians will know how their personal information is used and how telephone and Internet companies control it, so that they can make smarter consumer decisions.

Download the report

The full version of the report, ¿Quién defiende tus datos?: Reporte de evaluación de empresas de telecomunicaciones ante las medidas de vigilancia estatal, can be downloaded as a PDF here.

Related Issues: International, Surveillance and Human Rights, Privacy

YouTube Backs Its Users With New Fair Use Protection Program - Fri, 20/11/2015 - 05:24

In what we very much hope launches a “race to the top” to protect online fair use, YouTube today announced a new program to help users fight back against outrageous copyright threats. The company has created a ‘Fair Use Protection’ program that will cover the legal costs of users who, in the company’s view, have been unfairly targeted for takedown.

We have criticized YouTube in the past for not doing enough to protect fair use on its service, including silencing videos based on vague “contractual obligations” and failing to fix the many problems with its Content ID program. However, when the company takes positive steps to protect its users, we take notice.

Google describes the program on its blog, but here are the basic details: When the company notices that a video targeted for takedown is clearly a lawful fair use, it may choose to offer the user the option of enrolling the video in the program. If the user decides to join, the video will stay up in the United States and, if the rightsholder sues, YouTube will provide assistance of up to $1 million in legal fees.

YouTube has started the program off with four videos that the company believes represent fair use. You can watch them here.

While we would like the program to do a little bit more—for example, given that the main criterion is that a video must be clearly lawful, we’d like YouTube to give any user who meets that criterion the option of enrolling their video in the program, rather than hand-selecting which ones get to participate—we think this is a solid and unprecedented step forward in protecting fair use on the site.

We commend YouTube for standing up for its users, and we hope the program will inspire other service providers on the web to follow its lead.

Related Issues: Fair Use and Intellectual Property: Defending the Balance, DMCA
Related Cases: Lenz v. Universal

Onlinecensorship.org Tracks Content Takedowns by Facebook, Twitter, and Other Social Media Sites - Fri, 20/11/2015 - 02:15
New Project Will Gather Users' Stories of Censorship from Around the World

San Francisco – The Electronic Frontier Foundation (EFF) and Visualizing Impact launched Onlinecensorship.org today, a new platform to document the who, what, and why of content takedowns on social media sites. The project, made possible by a 2014 Knight News Challenge award, will address how social media sites moderate user-generated content and how free expression is affected across the globe.

Controversies over content takedowns seem to bubble up every few weeks, with users complaining about censorship of political speech, nudity, LGBT content, and many other subjects. The passionate debate about these takedowns reveals a larger issue: social media sites have an enormous impact on the public sphere, but are ultimately privately owned companies. Each corporation has its own rules and systems of governance that control users’ content, while providing little transparency about how those decisions are made.

At Onlinecensorship.org, users themselves can report on content takedowns from Facebook, Google+, Twitter, Instagram, Flickr, and YouTube. By cataloging and analyzing aggregated cases of social media censorship, Onlinecensorship.org seeks to unveil trends in content removals, provide insight into the types of content being taken down, and learn how these takedowns impact different communities of users.
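To give a sense of the kind of record such a platform might collect, here is an illustrative sketch in Python; the field names are our guesses at the who, what, and why of a takedown, not Onlinecensorship.org's actual questionnaire.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TakedownReport:
        platform: str                 # e.g. "Facebook", "Twitter", "YouTube"
        content_type: str             # e.g. "post", "photo", "video", "account"
        stated_reason: Optional[str]  # the policy the company cited, if any
        country: str                  # where the affected user is based
        appealed: bool                # whether the user tried the appeals process
        outcome: Optional[str]        # e.g. "restored", "upheld", "no response"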

“We want to know how social media companies enforce their terms of service. The data we collect will allow us to raise public awareness about the ways these companies are regulating speech,” said Jillian C. York, EFF Director for International Freedom of Expression and co-founder of Onlinecensorship.org. “We hope that companies will respond to the data by improving their regulations and reporting mechanisms and processes—we need to hold Internet companies accountable for the ways in which they exercise power over people’s digital lives.”

York and co-founder Ramzi Jaber were inspired to action after a Facebook post in support of OneWorld’s “Freedom for Palestine” project disappeared from the band Coldplay’s page even though it had received nearly 7,000 largely supportive comments. It later became clear that Facebook took down the post after it was reported as “abusive” by several users.

“By collecting these reports, we’re not just looking for trends. We’re also looking for context, and to build an understanding of how the removal of content affects users’ lives. It’s important companies understand that, more often than not, the individuals and communities most impacted by online censorship are also the most vulnerable,” said Jaber. “Both a company’s terms of service and their enforcement mechanisms should take into account power imbalances that place already-marginalized communities at greater risk online.”

Onlinecensorship.org has other tools for social media users, including a guide to the often-complex appeals processes for fighting a content takedown. It will also host a collection of news reports on content moderation practices.


Contact: Jillian C. York, Director for International Freedom of Expression; Ramzi Jaber, Co-founder and co-director of Visualizing Impact

Baseless Calls to Expand Surveillance Fit Familiar, Cynical Pattern - Thu, 19/11/2015 - 18:09

Like clockwork, cynical calls to expand mass surveillance practices—by continuing the domestic telephone records collection and restricting access to strong encryption—came immediately following the Paris attacks. These calls came before the smoke had even cleared, much less before a serious investigation could be completed. They came from high places too, including CIA head John Brennan and New York Police Commissioner Bill Bratton.

Seasoned law enforcement officers and the heads of spy agencies should know better than to jump to conclusions before the facts are in. Sadly, these premature demands for more surveillance in the wake of tragedies are not unprecedented.

The most prominent example is the Bush Administration's aggressive push for expanded surveillance powers after the 9/11 terrorist attacks, before a proper investigation could be carried out. We all now know from the 9/11 Commission that the Bush Administration failed to uncover and stop the attacks not because of insufficient legal authority, and not because it lacked sufficient information, but because of operational failures and internal rules.

Yet despite this, the Bush Administration rushed to Congress to obtain broad new collection authorities in the USA Patriot Act. We also now know that along with the public law, the government used secret legal interpretations to gather even more data about innocent people, interpretations that have since been revealed as both shocking and unsupported. In response to its failure to properly act on the data it had, the government pushed to collect even more data.

Now we see a sadly similar exploitation of this latest international tragedy, again pushed by people who are supposed to be above petty politics.

First, Sen. Tom Cotton has floated a bill suggesting that, as a result of the Paris tragedy, we continue throwing money at the domestic telephone records collection program—which was itself based on an improper interpretation of section 215 of the Patriot Act. The program is set to end on November 29, 2015, switching from mass surveillance to a model of surveillance that is still too broad, but more targeted than the indiscriminate dragnet of the existing system.

One major reason Congress ended the broader program is that it didn’t work. Millions of dollars and over 10 years of effort later, two independent panels concluded there was no indication that the mass domestic telephone collection had ever helped thwart a domestic terrorist attack. Of course, the 215 program hasn’t even ended yet, so if it could have been useful in stopping the Paris attack—an unlikely proposition, given its domestic focus—it failed at that too.

More relevant, the massive collection program the government continues in the U.S., purportedly aimed at foreigners abroad under FISA Amendments Act section 702, failed to catch the Paris terrorists before they struck, as did the even bigger set of collections occurring abroad under Executive Order 12333, which includes collections aimed at both France and Belgium, where the terrorists were allegedly based. That’s because big data and mass surveillance techniques are simply not useful for predicting or uncovering terrorist plots. Terrorism is far more difficult to predict than the purchasing or other patterned behavior that big data is reasonably good at identifying. Forget trading essential liberty for a little temporary safety: when it comes to identifying terrorists, mass surveillance leaves the public with neither.
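A standard base-rate illustration, with numbers we have invented purely for the arithmetic, shows why. Suppose a screening system flags would-be attackers with 99% accuracy and a 1% false positive rate, and that 100 actual plotters are hidden among 100 million people. The system flags 99 of the plotters, but it also flags 1% of the remaining 100 million, roughly a million innocent people, so only about one alert in ten thousand is real. No agency can usefully investigate odds like that.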

So whether the focus is on spending our money on things that actually work or on protecting our Constitutional rights and our ability to be "secure" in our papers in the digital age, it’s long past time we shifted focus from the expensive “collect it all” strategies to more focused surveillance.

Second, the attack on strong, non-backdoored encryption would make Americans, and people all over the world, less secure. Every serious computer scientist has pointed out that there is no such thing as a back door that only good guys can go through. And at least so far, the information we’ve received is that end-to-end encryption wasn’t even used here.

The world is rightly horrified at what happened in Paris. We understand the desire to do “something” to shore up our security. But terrorism is aimed, in part, at pushing us to jump to conclusions and take panicky steps that inflict more pain and misdirect our resources toward failed and dangerous ideas. Luckily this time many voices are urging caution and careful analysis, and rejecting the cynical ploy of some to use our terror to take expensive and dangerous steps in the wrong direction.

Related Issues: Privacy, NSA Spying, Security
Related Cases: Jewel v. NSA, First Unitarian Church of Los Angeles v. NSA

Has the TPP Ended the Crypto Wars? Hardly. - Thu, 19/11/2015 - 10:57

The U.S. Trade Representative (USTR) fears the grassroots tech community, and rightly so. Internet users are the community that killed SOPA and PIPA in the U.S. Congress and ACTA in the European Parliament. The USTR is right to fear that the same could happen to the Trans-Pacific Partnership agreement (TPP).

That's why they've taken such pains to present the TPP as friendly to the Internet and tech users, and have included a few provisions in the agreement that they can point to in justifying this claim. We've covered (and debunked) some of these before—notably the free flow of information rules common to both TISA and the TPP—but there's another that deserves comment.

Under the heading “How the TPP Protects the Internet and Ensures Digital Freedom,” the USTR claims on its website that the TPP “ensures that companies and individuals are able to use the cybersecurity and encryption tools they see fit, without arbitrary restrictions that could stifle free expression.” This refers to a heretofore obscure provision hidden away in Annex 8-B of the Technical Barriers to Trade [PDF] chapter of the TPP, which provides:

With respect to a product that uses cryptography and is designed for commercial applications, no Party may impose or maintain a technical regulation or conformity assessment procedure that requires a manufacturer or supplier of the product, as a condition of the manufacture, sale, distribution, import or use of the product, to:

  1. transfer or provide access to a particular technology, production process, or other information (such as a private key or other secret parameter, algorithm specification or other design detail), that is proprietary to the manufacturer or supplier and relates to the cryptography in the product, to the Party or a person in the Party’s territory;
  2. partner with a person in its territory; or
  3. use or integrate a particular cryptographic algorithm or cipher, other than where the manufacture, sale, distribution, import or use of the product is by or for the government of the Party.

The USTR's characterization of this provision certainly seems to have convinced former Homeland Security policy secretary and NSA lawyer Stewart Baker, who went so far as to proclaim in the Washington Post that the USTR has won the Crypto War. In his interpretation, the provision would prevent a TPP country from requiring a supplier of cryptographic software to provide it with a backdoor or “golden key” of the kind that law enforcement authorities have been demanding and that we have consistently and strongly denounced.

But this is much too rosy an interpretation, for several reasons. Most importantly, the provision quoted above is immediately followed by an exception whereby a service provider that uses encryption can still be required to provide unencrypted communications to law enforcement agencies pursuant to “legal procedures.” Since this is really all that law enforcement authorities are after, the fact that a provider can't actually be forced to disgorge the private key it uses hardly matters at all.

But, for the sake of argument, suppose the government does want a product's private key rather than just the decrypted communications; the TPP still allows a way to get it. The Technical Barriers to Trade chapter is only about standards with which products must comply in order to be approved for commerce. Thus it prohibits requiring that a private key allowing decryption be handed over as a condition of manufacture, sale, distribution, import, or use of the product. But it wouldn't do anything to prevent the government from seeking a court order against a software vendor requiring it to disclose the private key of a product that is lawfully marketed or supplied within the country.

Further, the Exceptions and General Provisions chapter provides that “Nothing in this Agreement shall be construed to … preclude a Party from applying measures that it considers necessary for the fulfilment of its obligations with respect to the maintenance or restoration of international peace or security, or the protection of its own essential security interests.” This lays the foundation for a government to override Annex 8-B altogether if it can claim that it considers it necessary to do so for national security reasons.

As if the above loopholes weren't large enough already, consider that the provision in Annex 8-B is only enforceable by other TPP countries. This means that if, say, the United States government compelled a home-grown encryption product such as Wickr to embed an encryption backdoor, there would be no restriction of trade between the TPP countries and thus no actionable claim under the TPP.

A similar situation would exist for products from non-TPP countries; nothing would prevent a TPP country from requiring the developers of, for instance, Telegram, which is based in Germany, to backdoor their software. A claim under the TPP would only arise if the country demanding backdoor access to an encryption product and the country from which that product is developed or supplied are two different TPP signatories.

So what appears on the surface to be strong protection for crypto software in the TPP is actually much weaker than it seems: it doesn't prevent the government from requiring providers to give them access to decrypted data, it doesn't protect developers against backdoor demands from their own government, it doesn't protect tools from countries that aren't TPP signatories, it doesn't stop a country from demanding access to private keys of a product so long as this demand is not a condition of supply of that product within the country, and on top of all that, there is a sweeping national security exception that can override the provision altogether.

So much for winning the crypto wars.

Related Issues: Trade Agreements and Digital Rights, Trans-Pacific Partnership Agreement, Law Enforcement Access

EFF, Public Knowledge File Comments to Help Fix the Patent Office - Thu, 19/11/2015 - 10:38

EFF and Public Knowledge filed comments today at the United States Patent and Trademark Office discussing proposed changes to Patent Office trials. Our comments focus on making the process more fair and accessible for small entities that need to challenge bad patents.

Our first set of comments relates to proposed changes to inter partes review and covered business method review. Congress created these procedures in the America Invents Act, passed in 2011, to allow quicker and more efficient review of issued patents. We make a number of suggestions for how the current rules could be improved to promote fairness. For example, we argue that the rules should clearly require both petitioners and patent owners to support affirmative factual statements with evidence.

Our comments also address an issue that arose during the Patent Office’s nationwide “Roadshow.” This Roadshow was a joint presentation of the Patent Office and the American Intellectual Property Law Association (the “AIPLA”). The Patent Office touted [PDF] the Roadshow as allowing the public to “provide valuable input into how to improve the fairness and effectiveness of the AIA proceedings.” Unfortunately, the public had to pay a steep fee of $375 to attend this Roadshow (the fee was significantly less for members of the AIPLA). In addition, even though the Roadshow included an actual patent trial, there was no indication that the public could attend the trial for free (as at least one court has said the Constitution requires).

We were extremely disappointed in how the Patent Office carried out its Roadshow, both by affiliating with an organization with particular (sometimes controversial) views on patent policy and by putting the price out of reach of many members of the public. In our comments, we highlight how problematic this was and our hope that the Patent Office will reconsider how it implements Roadshows in the future.

Our second set of comments relates to the Patent Office’s proposal to run a pilot program where trials are initially reviewed by only one judge (instead of a three-judge panel). We have reservations about this proposal. As our comments explain in more detail, there are many benefits to multi-member, collegial bodies.

Some critics of the current process (such as the AIPLA) argue that having the same panel of administrative law judges make preliminary and final decisions in a proceeding creates bias against the patent owner. This is nonsense. The PTAB’s process is no different from that of federal courts, where the same judge evaluates motions to dismiss and for summary judgment, and then presides over trial. It would be extraordinarily wasteful for courts to bring in a different judge for every stage of a proceeding. The Patent Office should reject calls to create such a silly process for proceedings before the PTAB.

Finally, last month we submitted comments regarding the Patent Office’s implementation of guidelines relating to the Supreme Court’s Alice decision on abstract software patents. This is the third time we’ve filed comments on Alice (we filed our earlier comments in March 2015 and August 2014). We remain very concerned that the Patent Office is not applying Alice diligently and ineligible patent claims are still being issued. We highlight some recently issued patents that we believe are invalid under Alice. We also urge the Patent Office to provide clearer guidance on how the law has changed. Once they issue, bad patents are the favorite tool of the patent troll and are extremely expensive to invalidate. We need the Patent Office to do a better job in the first instance so that real innovators aren’t left fighting abusive patent suits.

Files: public_knowledge_and_eff_comments_on_ptab_pilot_program.pdf, eff_and_public_knowledge_comments_re_subject_matter_eligibility.pdf, comments_of_eff_and_public_knowledge_on_ptab_rules_of_practice.pdf

Movie Studios Scale Back Their Website-Blocking Strategy in the MovieTube case - Thu, 19/11/2015 - 10:00

On Friday, the major US movie studios quietly backed away from the worst parts of the censorship power-grab they attempted in July in the Paramount v. John Does (MovieTube) case. The studios are still hoping to take MovieTube’s Internet domain names away, but they are no longer asking for an order commanding the entire Internet to act as censors for them—a dangerous proposition that would open the door to more censorship and impede legitimate speech.

The studios, members of the Motion Picture Association of America, sued the anonymous operators of the movie-streaming site MovieTube back in July, accusing them of copyright and trademark infringement. At that time, they asked the court for an immediate order, known as a preliminary injunction. The studios wanted an order that would apply to all “Internet service providers, back-end service providers, sponsored search engine or ad-word providers, merchant account providers, payment processors, shippers, domain name registrar[s] and domain name registries” — in short, the entire Internet. With that order in hand, the studios could force any intermediary, or all of them, into helping make the MovieTube site disappear. 

We pointed out, at the time, why an order like the one the studios were asking for was extremely dangerous. The issue is not whether the MovieTube sites were infringing copyright or harming the movie studios, but rather that expanding the legal remedies for infringement will lead to other serious harms. Blocking entire websites almost always censors First Amendment-protected speech, and the power to block entire websites is a small step away from the power to dictate their contents. Conscripting Internet intermediaries to create site-blocking mechanisms makes the Internet less reliable and secure and emboldens other would-be censors, like repressive governments. This is the very power that the Internet blacklist bills SOPA and PIPA would have created, had they passed. In a rare moment of solidarity, Facebook, Google, Tumblr, Twitter, and Yahoo! filed a brief together in the MovieTube case to explain these dangers. They also pointed out that copyright and trademark law protect most Internet platforms against being forced to police or filter content posted by others—vital protections that the studios were trying to bypass.

After the Internet companies filed their brief, the studios seemed to recognize that they had overreached. They dropped their request for a preliminary injunction and waited.

The MovieTube defendants never appeared to defend themselves, so last Friday, the court declared a default, clearing the way for a final resolution of the case. The movie studios then asked the court for a permanent injunction. Surprisingly, the site-blocking powers they are asking for are narrower than their earlier ask. Gone are the references to ISPs, hosting providers, payment networks, ad networks, and search engines. The order now seems to cover only the defendants themselves, their close confederates, and domain name registrars and registries, who would be forced to turn the MovieTube domains over to the studios.

It’s good to see the studios back away from the worst parts of their earlier grab for site-blocking power, although their new proposal still gives us cause for concern. The studios want an order that bans “index[ing] . . . link[ing] to . . . or otherwise us[ing]” their movies “or portion(s) thereof”. In other words, they want an order that makes non-infringing uses of movies, including fair uses, illegal. And they want that order to apply not only to the MovieTube defendants but to “any persons in concert or participation with them.”

That phrase is important. Federal rules let a judge issue orders to bind people who are in “active concert or participation” with the defendant in a case, meaning a confederate or co-conspirator. Recently, some trademark and copyright owners, particularly major entertainment distributors, have tried to broaden the meaning of that phrase to include neutral service providers who handle all kinds of user data. By quietly dropping the key word “active” from their injunction proposal, the studios might still be trying to give themselves broad power to edit the Internet.

Of course, major entertainment companies haven't ended their quest for site-blocking power. They continue to pursue it in court, through federal and state agencies, and by pressuring the companies that run the Internet's domain name system. We’ll be watching for signs of a broader power-grab by the studios in the MovieTube case, and we hope that service providers large and small stand up for their users by refusing to follow site-blocking orders that don’t properly apply to them.

Related Issues: Fair Use and Intellectual Property: Defending the Balance, SOPA/PIPA: Internet Blacklist Legislation

Misuse Rampant, Oversight Lacking at California’s Law Enforcement Network - Thu, 19/11/2015 - 06:58

Confirmed cases of misuse of California’s sprawling unified law enforcement information network have doubled over the last five years, according to records obtained by EFF under the California Public Records Act.

That adds up to a total of 389 cases between 2010 and 2014 in which an investigation concluded that a user—often a peace officer—broke the rules for accessing the California Law Enforcement Telecommunications System (CLETS), such as by searching criminal records to vet potential dates or spy on former spouses. More than 20 incidents have resulted in criminal charges.

Unfortunately, those figures only represent what government agencies self-reported to the California Attorney General. The actual number of CLETS misuse cases is likely substantially higher, since the California Attorney General’s Department of Justice (CADOJ) has let many agencies slide on their annual misuse disclosures. Among the delinquent are two of California’s largest law enforcement agencies: the Los Angeles Police Department (LAPD) and the Los Angeles County Sheriff’s Department.

What’s worse, the government body charged with overseeing disciplinary matters—the CLETS Advisory Committee (CAC)—seems to have taken no action to address the problem or ensure accountability from individual agencies.

Law enforcement abuse of confidential databases has been a growing concern for privacy and civil liberties groups like EFF. It occurs at all levels of government. In 2013, the NSA acknowledged that agents used intelligence systems to snoop on romantic interests (a practice dubbed “LOVEINT”). Last month, a Border Patrol supervisor was arrested and charged for allegedly manipulating a Homeland Security database to retaliate against a man who had made “child-rape” allegations against the supervisor’s brother.

Of the hundreds of cases of verified misuse of CLETS each year, only a handful of stories have reached the public, often years after the fact. Here are a few of the worst ways that police have abused the system in recent years:

  • In 2010, a Los Angeles Police officer used LAPD’s communications system, which is connected to CLETS, to pull information on witnesses who testified against his girlfriend’s brother in a murder case. Chief Charlie Beck told the press the department would “vigorously prosecute” the officer. Two years later, however, the Los Angeles County District Attorney dropped the case. By then, the officer had already resigned. (Los Angeles County District Attorney)
  • In the fall of 2010, an officer, who had been sending his estranged wife abusive text messages, used CLETS to dig up information on her new boyfriends. His wife complained to the police. The officer ultimately pled no contest to a misdemeanor harassment charge, but the charges for violating CLETS were dropped. He was also fired. (California Public Employee Relations Journal)
  • Two Fairfield Police officers were investigated for using CLETS to screen women from dating sites such as Tinder and eHarmony. (Daily Republic)
  • Court records show that in 2009, a Westminster Police Officer was fired after accessing CLETS 96 times to gather information on 15 people for non-law enforcement purposes, such as meeting women and spying on his ex-wife and ex-girlfriends. In 2013, he pleaded guilty to domestic violence charges and unlawful disclosure of DMV records. (Orange County Register)
  • In 2013, the Madera County Sheriff’s Department of Corrections staff broke the rules by using a CLETS terminal at the county jail as a regular workstation. Consequently, officials failed to receive crucial communications, leading to the accidental release of a detainee. Days later, the released man was involved in a car chase that resulted in a crash that killed an innocent civilian. (Madera County Grand Jury)

EFF began investigating CLETS after reviewing official “misuse statistics” presented in public hearings that made little sense and did not seem to reflect misuse at all. Digging deeper, we learned the CLETS Advisory Committee has aggressively moved to expand the system’s capabilities, while more often than not turning a blind eye to the also-growing misuse.

What Is CLETS?

Think of CLETS as California’s law enforcement “cloud.”

Source: Public Safety Communications Association [.pdf]

CLETS links together more than 5,200 unique “points of presence,” such as dedicated office computers and mobile terminals in patrol cars. It’s a system so large that CADOJ told EFF it doesn’t even keep a master list of which agencies have signed agreements to access the system. In addition, many CLETS features are accessible through a web app called “SmartJustice.” The system also allows CLETS users to send millions of messages to each other every day, such as all-points-bulletins and Amber alerts.

CLETS users are granted access to whole universes of databases that don’t just contain information on Californians, but records from other states and the federal government.

If you’ve got a California-issued ID, registered a car in California, received a parking citation, have any kind of criminal history or protective order, or have any kind of record in 11 other databases, then you likely have files that can be accessed from CLETS.

But that’s not all: CLETS also connects to Oregon’s equivalent network, which means if you’re an Oregonian, California police may be able to access your information too, especially if you drive a car. But those datasets pale in comparison to the access CLETS provides to an interstate database called NLETS and the FBI’s National Crime Information Center.

A 15-part series of old-school CLETS training videos, chock-full of reenactments and animation, is available through Lemoore Police Department's Vimeo page.

Who Oversees CLETS?

Under state law, there are two government bodies in charge of overseeing CLETS.

The legislature assigned the California Attorney General the responsibility of administering CLETS on behalf of the state’s law enforcement agencies. But lawmakers also decided that the attorney general would take direction on policy and disciplinary matters from CAC (again, that stands for the CLETS Advisory Committee), a nine-member body that meets several times a year. Currently, members representing law enforcement and local government lobby groups have a voting majority. There are no members representing civil liberties or privacy organizations.

Agencies that sign up for CLETS agree to follow the CLETS "Policies, Procedures and Practices"—essentially the system's terms of use. According to this rulebook, when a law enforcement agency investigates a CLETS violation, it is supposed to report what happened and what action was taken to the attorney general, which in turn is supposed to present the information to CAC. At that point, CAC is supposed to recommend a course of action for the CADOJ, which could include issuing a letter of censure, temporarily suspending the agency's access to CLETS, or discontinuing access altogether. CAC can also call the head of the agency (say, the chief of police) before the committee to explain what happened. 

Over the last five years, CAC has never once pursued any of those measures against an agency over misuse of the system. In fact, there is nothing in CAC's meeting minutes to indicate that the body has ever publicly discussed the growing cases of misuse.

Based on our research and discussions with CADOJ, it seems the agency is not enforcing reporting requirements, nor is it presenting what information it does collect to CAC. Meanwhile, CAC doesn’t seem to mind that it’s not being provided this information. The problem is circular: CADOJ can’t take action against misuse unless it has been directed to do so by CAC. And CAC can’t recommend an action against misuse unless CADOJ provides the committee with misuse reports. As a result, neither body seems to be addressing the issue. 

(EFF could only identify one instance where CAC even discussed a particular misuse case, although it wasn't characterized as misuse at the time. In 2014, a Madera County Grand Jury investigation concluded that misuse of the CLETS terminal at the county jail resulted in the accidental release of an arrestee, who later killed a bystander during a car chase. According to CAC meeting minutes [.pdf], CADOJ only told the committee that Madera County was "not compliant with security awareness training" and would be given six months to get it together.)

The CLETS agreement also requires each agency to file an annual report of misuse statistics. The information in these reports includes: number of misuse complaints the agency received, whether those complaints were received from internal or external sources, the outcome of the investigation, and what actions were taken. If criminal charges were filed, the agency must report if prosecution resulted in a conviction.
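To make those reporting fields concrete, here is a minimal sketch of one agency’s annual disclosure modeled as a Python data structure. The field names are our own illustration, not the layout of CADOJ’s actual form.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AnnualMisuseReport:
    """One agency's yearly CLETS misuse disclosure (illustrative fields only)."""
    agency: str
    year: int
    complaints_received: int
    complaints_internal: int           # complaints raised from inside the agency
    complaints_external: int           # complaints raised by the public or others
    investigations_completed: int
    misuse_confirmed: int
    actions_taken: list = field(default_factory=list)  # e.g. ["reprimand"]
    charges_filed: bool = False
    conviction: Optional[bool] = None  # None while prosecution is pending

# A hypothetical 2014 filing:
report = AnnualMisuseReport(
    agency="Example PD", year=2014,
    complaints_received=3, complaints_internal=1, complaints_external=2,
    investigations_completed=3, misuse_confirmed=2,
    actions_taken=["reprimand", "resignation"],
)
print(report)
```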

CADOJ has not passed these statistics on to the oversight committee either. Instead, at each meeting, CADOJ staff present the committee with a series of numbers that they call “Misuse Statistics,” but which are really nothing of the sort.

Here’s an example of a slide presented by CADOJ at the March CAC meeting:

CAC generally glosses over this information during its meetings without asking questions. However, when EFF asked what these numbers actually mean, CADOJ staff explained that these numbers only show how many times the access log was checked for misuse. It does not, in any way, indicate actual misuse. 

So, EFF filed a request under the California Public Records Act to get the real numbers for CLETS misuse. 

What We Know About CLETS Misuse

The data was astounding: CLETS abuse more than doubled between 2010 and 2014.

Agencies received 641 complaints over that period and between 586 and 619 investigations were conducted (the data is internally inconsistent). Approximately two-thirds of those investigations resulted in an affirmative finding that misuse had indeed occurred.

Of those 389 cases of confirmed misuse, 109 resulted in no action taken at all. As for the rest:

  • 6 cases resulted in a felony charge
  • 15 cases resulted in a misdemeanor charge 
  • 35 cases resulted in terminations 
  • 32 cases resulted in resignations
  • 62 cases resulted in suspension
  • 136 cases resulted in reprimands or counseling
  • 56 cases were simply listed as resulting in “other” action

Even these numbers fail to paint a complete picture of the problem. Currently, 143 misuse investigations remain mysteries; their outcomes are listed as simply “pending,” and the documents were never updated after the investigations were concluded. Of the 21 cases where users faced criminal charges, only four so far have resulted in convictions, with the dispositions of the remaining cases undisclosed. 

In addition, even when an agency says it recorded zero CLETS misuse, that doesn't necessarily mean there was none. For example, Madera County didn’t report the 2013 jail case in its statistics because it didn’t start an investigation until a year after the incident (after the grand jury slammed the sheriff for failing to conduct an investigation). Furthermore, there are places where the numbers provided by agencies don't seem to add up. 
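Gaps like these are easy to surface mechanically. Below is a minimal sketch of the kind of consistency check that exposes them; the clets_misuse.csv file and its column names are hypothetical stand-ins for the spreadsheet CADOJ released, not its actual schema.

```python
import csv

# Assumed columns (illustrative): agency, year, complaints, investigations, confirmed
with open("clets_misuse.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    investigations = int(row["investigations"])
    confirmed = int(row["confirmed"])
    # An agency cannot confirm more misuse than it actually investigated.
    if confirmed > investigations:
        print(f"{row['agency']} {row['year']}: {confirmed} confirmed "
              f"but only {investigations} investigations")
```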

More alarming, however, was our discovery that many agencies hadn’t filed disclosures at all. Because CADOJ doesn’t keep a master list of agencies that should be reporting, EFF had no way to determine how many agencies were delinquent.

We were, however, able to confirm that the Los Angeles County Sheriff’s Department did not file any disclosures between 2010 and 2014. LAPD—which caught an officer digging up information on murder witnesses in 2010—only filed a form once, in 2012. Calls to the sheriff’s department went unreturned, while LAPD staff could not determine who was responsible for filing the forms. Meanwhile, there’s nothing to indicate either CADOJ or CAC ever followed up.

There's one further reason to be wary of the data: the source material no longer exists. 

In our initial request, we asked for each individual annual misuse report. Instead CADOJ provided us a series of tables, explaining that "once received, the data is entered onto a spreadsheet and the form destroyed." Throwing out the original records makes it difficult, if not impossible, to double-check inconsistencies in the data. 

Download CLETS misuse data for the years 2010 through 2014 [.zip]. 

What Is CAC Doing Instead of Overseeing Misuse?

CAC does provide critical oversight in one capacity: ensuring agencies are in compliance with CLETS and FBI security standards—such as encryption, password strength, and training. For example, a March 2014 audit by the FBI found widespread compliance issues among 10 agencies, including failure to conduct appropriate training, failure to fingerprint all personnel with access to the system, and failure to use sufficiently sophisticated encryption. Many of these issues remained unresolved more than a year after they had been identified.

Despite the skyrocketing misuse and the ongoing cybersecurity challenges, CAC has spent the last year coming up with new ways to expand CLETS. In December 2014, for example, CAC authorized CADOJ to link CLETS to an interstate driver license photo-sharing system, granting California police access to DMV photos from across the country. At CAC’s July 2015 meeting, the body quietly approved the 2015 Strategic Plan, which calls for expanding biometric data capture and sharing real-time and historical GPS data on offenders statewide.

Can Anything Be Done?

EFF would like to see the California Attorney General and CAC do their jobs by properly monitoring CLETS and holding agencies responsible for misuse.

CADOJ should collect the misuse information it is supposed to, stop destroying the original records, and provide that data to the official oversight committee. CAC, in turn, should openly discuss how CLETS policies can be improved to reduce the potential for abuse and recommend action against agencies that fail to comply. Sadly, these bodies have demonstrated they see little value in enforcing the rules and even less value in public participation.

All year, EFF has been trying to ensure accountability with CLETS—filing public records requests, sending letters, and addressing the committee during public comment. Our goals so far have been to fight CLETS expansion plans and to demand greater transparency in how CAC conducts its meetings.

In March 2015, EFF demanded CAC drop its plans to integrate facial recognition technology with the California DMV photo database and share DMV photos with other states. After 1,500 supporters sent emails to CAC, the committee removed that goal from its strategic plan. 

We were joined by the ACLU of California, Californians Aware, and First Amendment Coalition in a letter warning the committee that the way it is conducting its hearings is likely in violation of the state’s open meetings laws. At its July 2015 meeting, CAC responded that “convenience” for its members trumped the public’s right to meaningfully access and participate in decisions regarding CLETS. Then CAC voted to pass a 2015 Strategic Plan—a document that had never been publicly released or announced on an agenda before being finalized. 

It may be time for the California legislature to step in to protect the privacy of its constituents. Measures could include holding investigative hearings, adding new, non-law enforcement members to the committees, and requiring full and public disclosure of misuse statistics.

In the meantime, you can count on EFF to remain vigilant. Stay tuned, because we may need your help.

Share this:   ||  Join EFF
Categories: Aggregated News

Once Again, DMCA Abused to Target Political Ads - Wed, 18/11/2015 - 11:47

If you live in San Francisco (or spend much time on social media) you probably saw a lot of discussion last month about Proposition F, a controversial proposal to regulate short-term property rental services like Airbnb. You may also know that Airbnb spent millions opposing the measure, many times the budget of the proposition’s supporters. Here’s what you might not know: the measure’s opponents also got a little unexpected assistance from the DMCA (Digital Millennium Copyright Act) takedown process.

Just a week before the vote, the only television ad supporting the measure disappeared from TV, YouTube, and the website of the organization that created it. Watch the ad yourself and see if you can guess why:

[Embedded video: ShareBetterSF’s “Hotel San Francisco” ad]

Did you catch it? The “Hotel San Francisco” lyric and the soundalike background music were enough to earn a cease-and-desist letter and DMCA takedown notice from attorneys representing The Eagles’ Don Henley and Glenn Frey. Henley and Frey’s lawyers threatened to sue for massive damages if ShareBetterSF didn’t pull the ad.

If this story sounds familiar, that might be because political ads are often the targets of unfair DMCA takedowns. In 2008, the Obama and McCain campaigns were both hit with takedown notices for using news footage in their advertisements (from CBS and NBC, respectively). In 2009, NPR filed a YouTube takedown notice on an ad that criticized same-sex marriage; the ad had used a brief soundbite from an NPR program. In all three cases, a court would have recognized that the campaigns were within their rights. The ads used the clips simply to provide information; they didn’t imply the news organizations’ endorsement or affect their viewership in any way. But thanks to the DMCA’s takedown-first-and-ask-questions-later procedure, none of them ever went to court.

The makers of “Hotel San Francisco” were clearly in the right too, but it didn’t matter. As they told the Internet Archive, there was no point in challenging the takedown. Even if they had immediately counter-noticed, the ad still wouldn’t have been restored for two weeks—too late to have any effect on the election. Once again, the DMCA was used to shut down political expression with no consequences for the sender. As the 2016 campaigns get into full swing, expect to see more of this kind of abuse.

This story might give you déjà vu for another reason: it’s not the first time Don Henley has accused the creators of a satirical political ad of copyright infringement. In 2010, Henley sued Senate candidate Charles DeVore for his parodies of two songs in political advertisements. A U.S. District Court rejected Henley’s claim that the ads falsely implied that he’d endorsed DeVore, but ruled that they did constitute infringement of his copyright.

We disagreed with that ruling, but whatever you might think of it, the facts here are very different. ShareBetterSF selected “Hotel California” to make a specific political point. The use was noncommercial and highly transformative, and it couldn’t possibly harm any market for the original work—all factors favoring fair use. But none of that mattered in light of the takedown.

Again and again, overzealous DMCA takedowns disregard fair use. That’s unfortunate, because fair use is designed to ensure that copyright law is compatible with the First Amendment. When fair use is overlooked in the face of a takedown notice, it really means that freedom of speech is compromised.

EFF is fighting to make sure the targets of DMCA abuse can hold the abusers accountable. In the meantime, we have been documenting the worst abuses in our Takedown Hall of Shame. Given its dangerous consequences for political speech, this takedown has earned a spot.

Related Issues: Fair Use and Intellectual Property: Defending the Balance | DMCA | No Downtime for Free Speech
Share this:   ||  Join EFF
Categories: Aggregated News

Tell the U.S. Department of Education: Open Licensing Matters - Wed, 18/11/2015 - 05:43

The U.S. Department of Education (ED) is considering a rule change that would make the educational resources the Department funds a lot more accessible to educators and students—not just in the U.S., but around the world. We hope to see it adopted, and that it sets the standard for similar policies at other government agencies.

Sign the petition

The policy would require that grantees share all content that the ED funds under an open license. If the ED funds your work, you must share it under a license that allows anyone to use, edit, and redistribute it. For software, the license would also have to allow people to access and modify the source code.
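For code, that requirement could be satisfied with any standard open license. As a minimal sketch (the license choice, file, and function are our own illustration, not anything the ED has mandated), a grantee might mark each source file with a machine-readable SPDX tag:

```python
# SPDX-License-Identifier: MIT
# Copyright (c) 2015 Example Grantee
# A machine-readable license tag like the one above tells downstream users,
# at a glance, that they may use, modify, and redistribute this file.

def lesson_progress(completed: int, total: int) -> float:
    """Toy grant-funded code: fraction of lessons completed."""
    return completed / total if total else 0.0

print(lesson_progress(3, 12))  # 0.25
```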

This is a great move. The Executive Branch funds billions of dollars’ worth of content through various departmental grant programs. Those grants are intended to benefit the public, but too often the public never even sees the results, let alone uses them.

All federal grants are subject to a rule that allows the government itself to share grant-funded works with the public, but that rule has not been sufficient to make sure sharing happens in practice. As the ED notes, the rule requires that members of the public know the materials exist and contact the ED for access—significant practical and informational hurdles. Open licenses allow publishers and other intermediaries to distribute materials much more widely than either the ED or grantees can alone.

The existing policy also doesn’t cover reuse. Simply being able to access a resource isn’t enough. To unlock its real value, you need to be allowed to modify it, merge it with other resources, and republish it. That’s especially true in education: when educators are empowered to customize materials for their needs and share them with other educators, everyone benefits. Sometimes open access or open education policies authorize specific types of reuse, but those carveouts end up creating doubt and confusion about how people are allowed to use the content. Open licenses are the right solution.

Sign the petition

EFF will be submitting a comment in support of the rule change. If you’d like to show your support too, then sign our petition. We’ll include your name with our comment. Please sign the petition by December 1.

Related Issues: Innovation | Open Access
Share this:   ||  Join EFF
Categories: Aggregated News

Why Is Facebook Inspecting Your Private Videos? - Wed, 18/11/2015 - 05:27
New Copyright Bot Raises Questions About Fair Use and Privacy

In general, Facebook has some pretty decent copyright policies. If you upload content to Facebook and it’s removed because of a bogus takedown request, you can file a counter-notice via a form on Facebook’s website. If the claimant doesn’t take action against you in a federal court within 14 days, your content is restored. That’s how it’s supposed to work, and Facebook usually does it right. Unlike some platforms, it also doesn’t ding users as “repeat offenders” based on multiple phony claims.

But Facebook has recently introduced a new system for automatically recognizing copyright infringement in videos, and the way it works could raise a few eyebrows. In some circumstances, the new copyright bot actually requires Facebook users to share their private videos with a third party. While arguably well-intentioned, the system could threaten not only users’ free expression online, but also their privacy.

Facebook vs. Freebooting

Earlier this year, celebrity videoblogger Hank Green wrote a scathing critique of the way Facebook handles video content. Among other criticisms, Green said that Facebook hadn’t done enough to combat freebooting, the practice of downloading someone else’s video and reuploading it to your own profile. You’ve probably noticed freebooted videos on Facebook: they’re often the same funny videos you saw last month on YouTube, sometimes with crappy advertising or other text added to them.

Freebooting is more popular on Facebook than you might think: Green cited a study showing that in the first quarter of 2015, 725 of the 1,000 most popular videos were freebooted. According to Green, Facebook implicitly rewards freebooting by prioritizing native video uploads over YouTube embeds: “[W]hen embedding a YouTube video on your company’s Facebook page is a sure way to see it die a sudden death, we shouldn’t be surprised when they rip it off YouTube and upload it natively. Facebook’s algorithms encourage this theft.”

Green’s criticism struck a chord with a lot of content creators. People and companies that produced video wanted to know that Facebook had a plan to fight freebooting. The uproar came at an inconvenient time for Facebook, just as it was looking to ramp up its presence in the online video world and build better relationships with those same creators.

Content ID Lite?

In August, Facebook announced that it would be rolling out new features to combat unauthorized video sharing:

[W]e have been building new video matching technology that will be available to a subset of creators. This technology is tailored to our platform, and will allow these creators to identify matches of their videos on Facebook across Pages, profiles, groups, and geographies. Our matching tool will evaluate millions of video uploads quickly and accurately, and when matches are surfaced, publishers will be able to report them to us for removal.

Actually, Facebook has been building its content matching technology piecemeal for some time. For years, Facebook has partnered with Audible Magic, whose audio fingerprinting service is used by several social media sites to pinpoint copied music. Facebook has launched the new video matching system with a small group of content creators, with plans to roll it out to a larger base of users.

Facebook’s announcement invites comparisons to YouTube’s sometimes-problematic Content ID service. Content ID lets rights holders submit large databases of video and audio fingerprints. The bot scans every new upload for potential matches to those fingerprints. The rights holder can choose whether to block, monetize, or monitor matching videos. Since the system can automatically remove or monetize a video with no human interaction, it can often remove videos that are clearly lawful fair uses and even videos that haven’t copied from another work at all.
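Neither Facebook nor YouTube publishes its matching algorithms, but the general mechanics are easy to illustrate. The toy matcher below fingerprints a registered work by hashing fixed-size chunks and scores uploads by chunk overlap; real systems use robust perceptual fingerprints that survive re-encoding, cropping, and pitch shifts, which this naive hashing does not.

```python
import hashlib

CHUNK = 64 * 1024  # fingerprint granularity for this toy example

def fingerprint(data: bytes) -> set:
    """Toy fingerprint: the SHA-256 of every fixed-size chunk of the file."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

def match_score(upload: bytes, reference: set) -> float:
    """Fraction of the upload's chunks found in the registered fingerprint."""
    chunks = fingerprint(upload)
    return len(chunks & reference) / len(chunks) if chunks else 0.0

# A rights holder registers a work; the platform scores each new upload.
registered = fingerprint(b"original movie bytes " * 50_000)
print(match_score(b"original movie bytes " * 50_000, registered))  # 1.0, exact copy
print(match_score(b"unrelated home video " * 50_000, registered))  # 0.0, no overlap
```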

Fortunately, it appears that Facebook doesn’t currently remove anything automatically: it detects potential matches and flags them for the rights holder’s review. If the rights holder reports a video as potential copyright infringement, it’s still not deleted automatically. For now, all requests are reviewed by human staff at Facebook. We're glad that Facebook has introduced video matching in a way that won’t create unnecessary and annoying autotakedowns.

Sharing with Friends (and Alleged Rights Holders)

But what happens when you share a video only with your friends (or with a private group) and that video registers as a match? From what we’ve pieced together, when you upload a video intended only for friends and Facebook thinks it might be a match, you won’t be allowed to share it with your friends unless you are willing to show it to the rights holder as well. (We haven’t actually seen this happen. If you see a notification like this when uploading a video to Facebook, please let us know about it.)

The policy may be better than some of the alternatives. For example, it would be a disaster if Facebook sent the matching video to the rights holder without notifying the uploader, or if Facebook simply deleted private videos with no human review (as YouTube has been known to do).

Still, Facebook has effectively created a new restriction for private communications: if you’re not willing to share your private video with someone whose copyright a computer thinks you might be infringing, you can’t share it with your friends.

That’s unsettling for two reasons. First, it may put your privacy at risk. Rights holders can’t see your name, but there’s no way to scrub personally identifying information from the video itself. If you upload a very personal video that happens to have a Drake song playing in the background, it doesn’t make much sense to require you to share the video with Drake’s record label.

Second, it could undermine fair use rights. The section of U.S. law that defines fair use specifically dictates that using parts of a work for the purpose of criticizing that work doesn’t constitute copyright infringement. It doesn’t require that that criticism be shared directly with the rights holder.

It’s easy to think of scenarios in which an uploader wouldn’t want to share their criticism with the rights holder, or even ones in which doing so could be dangerous. For example, the target of criticism could be the uploader’s employer, or someone known for harassing their critics. Facebook is a great platform for video creators to privately share and discuss their work: what about mashup artists or activists using Facebook to share draft edits with each other? Does it make sense to require users to run their work by copyright holders as a condition of exercising their fair use rights?

Here’s a better question: is it really necessary to run privately uploaded videos through the copyright bot at all?

Why Filter Private Content?

The simplest solution might be for Facebook not to scan private videos for matches. Scanning private communications for copyright infringement is foolish at best, and downright scary at worst (imagine the backlash if Google started using Content ID in Gmail).

Uploading a video to share it with your friends is very different from sharing it publicly. When you share a video publicly, it can go viral and reach thousands of people. When you share it with your friends, it can only reach those friends. (Because of the way Facebook works, a video that you only share with your friends can’t even reach your friends’ friends: your friends can share your post, but only with people who are also on your friends list.)

One of the four factors used to determine whether a certain use of a copyrighted work is protected under fair use is the impact that the use might have on the market for the original work. The impact that private sharing on Facebook has on the demand for a work is minimal.

Unlike on YouTube, there’s no monetary reward for video views on Facebook. That’s not to say that no one uses Facebook commercially: many brands and content creators do, but their methods for monetizing their videos are more complicated than views alone. If a person wanted to use Rihanna’s videos to mislead viewers and compete for her album sales, he wouldn’t be very successful just sharing them with his friends.

To return to the original criticisms of Facebook that led to the matching system, private sharing isn’t the issue at all. The freebooters that Hank Green is annoyed with are, by definition, uploading their videos publicly.

Facebook Is the Internet

Imagine trying to send an email and having your email service tell you it can’t send it because of an alleged copyright violation. In essence, that’s what Facebook’s video matching system does. If you can’t share a video privately without allowing a third party to view it, then you can’t share a video privately.

If it sounds like we’re holding Facebook to a high standard; well, Facebook holds itself to a high standard. Facebook’s new Free Basics program brings a Facebook-centered version of the Internet to many mobile users in developing countries. Here in the U.S. and around the world, Facebook partners with mobile providers to offer special plans that include unlimited Facebook access. For millions of people, Facebook is the Internet. A Facebook policy that impacts people’s ability to communicate privately deserves extra scrutiny.

A lot of the time, Facebook withstands scrutiny. For example, Facebook shows more respect for its users than some of its peers do in the face of governments’ demands for user data.

In this case, though, Facebook is making a misstep. Going to reasonable lengths to earn content creators’ trust is a good thing, but when it gets in the way of private communications, it’s time to reevaluate.

Related Issues: Fair Use and Intellectual Property: Defending the Balance | DMCA | Privacy | Social Networks
Share this:   ||  Join EFF
Categories: Aggregated News

EFF Demands Moroccan Authorities Drop Criminal Charges Against Human Rights Defenders - Tue, 17/11/2015 - 13:53

UPDATE 11-19-15: The trial of the seven Moroccan human rights workers has been delayed until January 2016, in part because of the increased international attention these cases have garnered.

On November 19, the Moroccan government will put seven activists on trial as part of its ongoing crackdown on journalists and human rights defenders.

EFF has joined a coalition of human rights organizations, including Free Press Unlimited, Article 19, and Mamfakinch, to express our concern about the harassment and prosecution of these activists and to demand that all charges be dropped immediately. Maâti Monjib, Hicham Mansouri, Samad Iach, Mohamed Elsabr, and Hisham Almiraat (aka Hisham Khribchi) are all facing criminal charges of “threatening the internal security of the state.” If found guilty, they could face up to five years in prison. Rachid Tarek and Maria Moukrim are facing charges of “receiving foreign funding without notifying the General Secretariat of the government.” If found guilty, they could face fines.

Dissidents in Morocco face continuous interrogations, threats, arrests, and surveillance. These prosecutions, meant to silence dissent against the Moroccan government, violate the defendants’ right to freedom of expression, guaranteed under Article 19 of the Universal Declaration of Human Rights.

EFF and other organizations will be following the trial closely.

Share this:   ||  Join EFF
Categories: Aggregated News

EFF to Court: California’s DNA Law Violates Privacy Protections Guaranteed by State - Tue, 17/11/2015 - 05:43
Law Allows DNA Collection From Arrestees Before They’re Charged, Convicted

San Francisco—Californians who’ve merely been arrested and not charged, much less convicted of a crime, have a right to privacy when it comes to their genetic material, EFF said in an amicus brief filed Nov. 13 with the state’s highest court.

EFF is urging the California Supreme Court to hold that the state’s arrestee DNA collection law violates privacy and search and seizure protections guaranteed under the California constitution. The law allows police to collect DNA from anyone arrested on suspicion of a felony—without a warrant or any finding by a judge that there was sufficient cause for the arrest. The state stores arrestees’ DNA samples indefinitely, and allows access to DNA profiles by local, state, and federal law enforcement agencies.

EFF is weighing in on People v. Buza, a case involving a San Francisco man who challenged his conviction for refusing to provide a DNA sample after he was arrested. EFF argues that the state should not be allowed to collect DNA from arrestees because our DNA contains our entire genetic makeup—private and personal information that maps who we are, where we come from, and who we are related to. Arrestees, many of whom will never be charged with or convicted of a crime, have a right to keep this information out of the state’s hands.

“Nearly a third of those arrested for suspected felonies in California are later found to be innocent in the eyes of the law. Hundreds of thousands of Californians who were once in custody but never charged still have their DNA stored in law enforcement databases, subject to continuous searches,” said EFF Senior Staff Attorney Jennifer Lynch. “This not only violates the privacy of those arrested, it could impact their family members who may someday be identified through familial searches. The court must recognize that warrantless and suspicionless DNA collection from arrestees puts us on a path towards a future where anyone’s DNA can be gathered, searched, and used for surveillance.”

California officials argue that the court should follow the lead of the U.S. Supreme Court, which ruled in Maryland v. King that citizens’ privacy rights are outweighed by the government’s need to use DNA to identify arrestees, just as it uses fingerprints.

But DNA samples contain our entire genome—fingerprints don’t. What’s more, Maryland limits DNA collection to those arrested and subsequently charged for serious offenses—in 2013 that amounted to 17,400 arrests. In California, all of the nearly 412,000 felony arrests that same year were subject to DNA collection. Maryland also prohibits familial searches and requires DNA samples to be automatically expunged from databases and destroyed if a person is never charged with or convicted of the crime leading to arrest. California law doesn’t prohibit familial searches, and the state makes it extremely difficult for citizens to have their DNA records removed from the system.

“A lower court in this case correctly recognized that California’s DNA collection law deeply intrudes on the privacy interests of arrestees. The California Supreme Court should come to the same conclusion and strike it down,” said Lynch.

Law professors at UC Davis School of Law, New York University School of Law, Georgia State University College of Law, and UC Berkeley School of Law, as well as the Office of the Maryland Public Defender and the National Association of Criminal Defense Lawyers, joined EFF in filing the brief.

Contact: Lee Tien, Senior Staff Attorney and Adams Chair for Internet Rights
Share this:   ||  Join EFF
Categories: Aggregated News

Casualty of YouTube’s “Contractual Obligations”: Users’ Free Speech - Sat, 14/11/2015 - 09:57

Internet users generally think of YouTube as a platform where, if you play by the copyright rules, the content you post is safe from takedown and, if it's taken down improperly, you have some recourse. But that's not the case, thanks to an additional barrier to lawful sharing: meet YouTube's “contractual obligations.”

YouTube has made special deals with certain rightsholders that allow them to dictate where and how their content can be used on the site.

If your video uses content controlled by these rightsholders, and they object to that use, YouTube will take your video offline and won't restore it unless you can get the rightsholder's permission. Because the takedown isn't subject to the DMCA, the rightsholder has no legal obligations to consider whether your use is a lawful fair use.

Given the importance of YouTube as a platform for expression, these deals can have dangerous consequences for online speech.  A case in point: the lengthy ordeal of YouTube channel LiberalViewer. The channel regularly engages in political commentary, often criticizing Fox News' presentation of news events. In one such video, uploaded in January 2008, LiberalViewer criticized the way Fox News abruptly cut away from a speech being delivered by then-candidate Obama. Back then, Obama's campaign frequently played Stevie Wonder's “Signed, Sealed, Delivered I'm Yours” after campaign speeches. So naturally, in showing the Fox News cutaway, LiberalViewer's video included a snippet of the song.

Nothing happened for a few years until, in November 2011, Universal Music Group issued a Content ID claim against the video. LiberalViewer immediately disputed the claim and contacted Universal Music Group's agent. After several email exchanges, rather than lifting the claim, Universal responded with a DMCA takedown notice. Believing his video to be a lawful fair use, LiberalViewer counter-noticed. YouTube briefly reinstated the video, but then took it down again, claiming that the counter-notice was invalid because the user did not have “sufficient rights.” That's how things ended in 2012.

In 2015, partly encouraged by our win in Lenz v. Universal, LiberalViewer once again submitted a counter-notice. This time, YouTube rejected the counter-notice for a new reason: its “contractual obligations” to the rightsholder.

Universal Music has doubled down on its DMCA takedown notice and LiberalViewer has nowhere to turn at this point. The channel has little leverage with Universal, a rightsholder that has not shown itself to be particularly interested in acknowledging online fair use. YouTube has washed its hands of the whole affair.

You can still view the video on LiberalViewer's site here (the video plays the song between the 1:42 - 2:16 marks for a total of 34 seconds). But it has been taken off a major platform, with no means of recourse.

Given how much it owes its success to user contributions, it's a shame that YouTube has given some rightsholders a private veto over fair use.

But it is Universal that has chosen to exercise that right. Thanks to that choice, Universal has earned its third entry into the EFF Takedown Hall of Shame.

Related Issues: Fair Use and Intellectual Property: Defending the Balance | DMCA | Free Speech
Related Cases: Lenz v. Universal
Share this:   ||  Join EFF
Categories: Aggregated News

The New DMCA §1201 Exemption for Video Games: A Closer Look - Sat, 14/11/2015 - 09:18

Last month, EFF and I scored a major victory for video game archiving, preservation and play – we got an exemption to the Digital Millennium Copyright Act for some archival activities related to video games.

Before I throw a bunch of shade, I want to emphasize that the exemption is a victory for the video game archiving community. Although there were flaws in what the Library of Congress granted, more legal leeway in this space is a net positive.

First, what the Librarian of Congress granted: an exemption for the circumvention of authentication servers in order to render games playable, so long as the game content is stored on the player’s computer or console. There’s now more legal protection for modifying a single-player game whose authentication server has been deactivated, whether for continued play or for preservation. So if and when Blizzard deactivates those Diablo III servers, players can modify their own games to continue playing.

The exemption only covers “local gameplay,” which the Librarian defines as “gameplay conducted on a personal computer or video game console, or locally connected personal computers or consoles, and not through an online service or facility.” So to benefit from the exemption, a game may be modified to allow for local multiplayer play, but not online multiplayer. It’s unclear whether setting up a matchmaking server for local LAN play would be allowed under the current exemption – the definition suggests yes, as does the Copyright Office’s explanation, but it’s not specifically spelled out.
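For a sense of what “local” means here, the sketch below shows LAN-only peer discovery of the sort a local matchmaking server implies: one machine announces a game over the subnet’s broadcast address, another listens, and no outside service is involved. It illustrates the technical distinction the definition draws, not a claim about what the exemption permits.

```python
import socket
import sys
import time

PORT = 50007  # arbitrary UDP port for this illustration

def announce():
    """Broadcast a "game here" beacon to the local subnet once a second."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        s.sendto(b"game-host-here", ("<broadcast>", PORT))
        time.sleep(1)

def listen():
    """Print the LAN address of any machine announcing a game."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    while True:
        msg, addr = s.recvfrom(1024)
        if msg == b"game-host-here":
            print("found local game host at", addr[0])

if __name__ == "__main__":
    announce() if "host" in sys.argv else listen()
```

Run it with `host` on one machine and with no arguments on another machine on the same network; nothing ever leaves the local subnet.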

Libraries, archives and museums (which I will collectively refer to as institutions) get more latitude. Under the exemption, they can eliminate access controls on video game consoles (often called “jailbreaking”) in order to copy and modify games to get them running again after a server shutdown.

Still, activities that were under a legal cloud before are now protected. Surely we’re all going to rest on our laurels until 2 years from now? Nope. This round of exemptions revealed how fundamentally broken and unsustainable the triennial rulemaking has become.

EFF, primarily Senior Staff Attorney Mitch Stoltz, and I spent countless hours preparing arguments for the gaming proposal, participating in hearings, and working with experts. Our proposal is one of 27 that the Librarian ruled upon, one of 7 that EFF participated in, and one of many advocated for by civil society groups. The rulemaking happens every three years, and the Librarian does not recognize any presumption that a previously-granted exemption should be renewed. Every three years, groups like EFF have to reapply for the same exemptions, marshaling evidence and experts in order to produce a record of the potential harms. In some of the classes granted this year, the Librarian put a year-long delay on the exemption – meaning that it now only lasts two years.

As if that wasn’t enough, the scope of the proceedings keeps expanding. Because so many things contain software, the DMCA now threatens legitimate activity on everything from tractors to medical devices. The video game archiving and preservation exemption at least involved alleged harms that were ostensibly related to copyright – the same can’t be said of car hacking or pacemaker security.

As the scope expands, it becomes incredibly difficult to offer information that is both specific enough for the Register of Copyrights and broad enough to cover the variety of different lawful activities inhibited by the anti-circumvention law.

This is not made any easier by the way that the Copyright Office arbitrarily imposes burdens upon the proponents. For example, an exemption to the anti-circumvention rule, §1201(a)(1), does not exempt you from other parts of the statute, including §1201(a)(2), which prohibits trafficking in circumvention tools. As such, the groups seeking exemptions don’t devote much time to discussing the anti-trafficking provision as part of the triennial process – because even a granted exemption won’t allow an owner to traffic in a circumvention tool.

However, in deciding to allow the restoration of multiplayer play under the video game exemption, the Register said that we failed to “provide persuasive support for an exemption for online multiplayer play, in large part because it is not clear on the current record how the provision of circumvention tools to multiple users to facilitate an alternative matchmaking service could be accomplished without running afoul of the anti-trafficking provision in section 1201(a)(2).” The issue of the matchmaking servers running afoul of 1201(a)(2) did not come up in any of the comments for the rulemaking. Not in the first filing, not when Copyright Office staff asked follow-up questions, not from the opposing briefs from the ESA, and not at the hearings. It was a surprise for the exemption to be denied on the basis that the proposed activity might possibly violate a legal provision that is not at issue in the rulemaking, particularly since the issue was never presented or briefed.

This puts exemption proponents in a strange position. It suggests that people seeking exemptions may have to show that the behavior contemplated by the exemption is not just non-infringing, but that it is otherwise legal. Will proponents of a medical device exemption have to brief which uses do and do not implicate FDA regulations? And vehicle tinkerers brief every law that governs the modification or use of a vehicle? What about giving the courts a chance to determine the scope of 1201(a)(2) by removing the threat of simultaneous 1201(a)(1) liability?

As one of the proponents, I can definitively say that we were able to provide far more evidence of communities that wished to restore multiplayer access to games where servers had been deactivated than of single-player shutdowns. The decision to exclude multiplayer play from the exemption cannot have been motivated by a lack of evidence that multiplayer shutdowns harm communities and consumers. The Register could find that the countervailing harms of illegal copying based upon circumvention are too great. (I would, of course, disagree.) But not granting the exemption because game enthusiasts might possibly violate a different part of the statute, without informing proponents that they bore the burden of proving non-violation of that provision, is unacceptable.

The granted exemption gives an archival institution more latitude surrounding the jailbreaking of consoles, but requires in exchange that the institution does not distribute or make available the video game outside its physical premises. It’s unclear how distributing the video game outside the premises of the institution would make it more likely that the circumvention of the access controls would result in infringement. Remember, by the time the video game access controls have been circumvented, a) the server is down, by definition and b) we assume the institution is not trafficking in the circumvention tools because that would be illegal. (Although we didn’t have to prove that no institution would violate the trafficking provision, so don’t ask me how that works.)

Second, and perhaps most importantly, how does this work in practice? Does the distribution of the game outside the physical premises of the institution make the Section 1201 violation retroactively illegal? Does the legality have to do with the intent that the institution had at the time of the potential circumvention? How do these conditions actually relate to the legal regime?


Furthermore, physical premises are not the only or even the primary medium through which institutions interact with the public. The future is not in-person attendance at museums. But now the legality of actions that museums took in the past is somehow tied to their physical premises.

The Librarian of Congress granted a number of complex and specific exemptions, applying to a small field. They clear up legal uncertainty in some ways but create it in others. And that’s why any celebration of these DMCA exemption victories must be tempered with the knowledge that the process is broken. The Register made a number of compromises on many of the exemptions, designed to find a middle ground between proponents and opponents. That eliminates much of the legal clarity that the exemptions are meant to provide. As Sarah Jeong said on Twitter, it’s like the Copyright Office forgot that the moral of the story of King Solomon and the baby is “Don’t split the baby.” Because if you split the baby, the baby dies.

Kendra Albert is a student at Harvard Law School. She was an EFF Legal Intern in summer 2014.

Related Issues: Fair Use and Intellectual Property: Defending the Balance | DMCA | DMCA Rulemaking | DRM | Video Games
Related Cases: 2015 DMCA Rulemaking
Share this:   ||  Join EFF
Categories: Aggregated News

EFF files brief calling for greater law enforcement transparency - Sat, 14/11/2015 - 08:39

EFF has long fought for the public’s right to use federal and state public records laws to uncover controversial and illegal law enforcement techniques. That’s why we filed an amicus brief in a federal appellate court case this week asking it to reconsider a decision that makes it much easier for law enforcement agencies such as the FBI to conceal their activities.

The case, Naji Hamdan v. U.S. Department of Justice, is a Freedom of Information Act (FOIA) lawsuit filed by Mr. Hamdan, an American citizen.  While in the United Arab Emirates (UAE) in 2008, Mr. Hamdan was arrested and tortured. Mr. Hamdan, represented by the ACLU of Southern California, alleges that his detention was part of a larger program in which the U.S. government relies on other nations to interrogate and torture individuals suspected of having ties to terrorism.

Mr. Hamdan filed suit in 2010 seeking records from the CIA, FBI, and other law enforcement and intelligence agencies regarding whether U.S. officials knew about or participated in his detention and interrogation. The federal agencies claimed that many of the records related to Mr. Hamdan’s detention were exempt from FOIA for various reasons, including that they were classified, that they could be withheld under other national security laws, or that they were exempt under a FOIA provision that allows law enforcement agencies to withhold their techniques and procedures.

The trial court upheld the government’s exemption claims and Mr. Hamdan appealed. In September, the U.S. Court of Appeals for the Ninth Circuit agreed with the lower court that the agencies did not have to disclose the records. Although several parts of the court’s decision are concerning, EFF was particularly troubled by the Ninth Circuit’s broadening of a FOIA exemption that allows law enforcement agencies to withhold their techniques and procedures, known as Exemption 7(E).

In the past, agencies claiming Exemption 7(E) had to show that disclosing the records would give potential lawbreakers a roadmap on how to evade law enforcement or otherwise break the law. The Ninth Circuit in Hamdan ruled that law enforcement agencies no longer have to demonstrate such a risk before withholding records.

The upshot of the court’s ruling is that law enforcement techniques and procedures are categorically exempt from disclosure, meaning that the public could potentially no longer get access to such records in future FOIA requests.

EFF believes that the court interpreted FOIA incorrectly, and our brief provides several legal arguments for why the court needs to rehear the case. More broadly, however, EFF is concerned about the practical impact of the decision, which could severely limit the public’s ability to access law enforcement records under FOIA.

EFF regularly sues law enforcement agencies to learn more about their practices, including the FBI’s development of a massive biometrics database, how federal agents use evidence obtained from cellphones, and how law enforcement agencies use social media to investigate crimes. In many cases, the federal agencies claim that information is exempt from disclosure under Exemption 7(E).

In EFF’s experience, however, Exemption 7(E) is often misapplied to withhold law enforcement techniques that either are not even remotely secret or, worse, involve federal agents engaging in controversial or illegal activities.

For example, in EFF’s social network monitoring FOIA suit, law enforcement agencies claimed that basic details about how they use Facebook and Twitter to investigate crime would disclose secret techniques, even though such monitoring is well-known and widely publicized. Also, history has shown that law enforcement investigations are often a cover for intimidating or harassing political activists or communities of color.

EFF does not believe that law enforcement agencies should be able to categorically shield how they investigate crimes from public scrutiny and we hope that the Ninth Circuit will reconsider its decision.

Files: hamdan_eff_amicus_brief_filed.pdf
Share this:   ||  Join EFF
Categories: Aggregated News

The FCC’s DNT Decision: The Right Call, For Now - Fri, 13/11/2015 - 09:42

Everybody knows we here at EFF are big fans of Do Not Track (an HTTP header users can have their web browsers send to websites, indicating that they don’t want the websites to track them). That’s why we developed Privacy Badger, a browser extension that blocks third parties that don’t honor Do Not Track (DNT) requests. It’s also why we continue to expand our DNT Coalition—a group of companies and organizations who have committed to honor DNT requests on their websites.
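Mechanically, DNT is just one extra header on each request. Here is a minimal sketch using the third-party requests library and, a public service that echoes back the headers it receives (whether a site honors the signal is, of course, the entire policy question):

```python
import requests  # third-party: pip install requests

# Send a request with the Do Not Track header set.
resp = requests.get("", headers={"DNT": "1"})

# httpbin echoes the headers it received; "Dnt": "1" should appear among them.
print(resp.json()["headers"].get("Dnt"))
```

Privacy Badger works on the other side of this exchange: if a third-party domain appears to track you despite the signal, the extension blocks it.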

So when the FCC announced last week [PDF] that it would not create regulations requiring Internet “edge providers”1 like Google, Facebook, or Amazon to honor DNT requests, you might expect us to be outraged, arguing that the FCC had abandoned users’ privacy to the proverbial wolves. But in this case we think the FCC actually made the right call.

Simply put, while the FCC can and should have rules of the road for ISPs, the agency should not be in the business of regulating websites – no matter how laudable its intentions.

Why the distinction between websites and ISPs? Because ISPs occupy a much more privileged position on the network. They carry all of a user’s traffic. That gives them the power to act as gatekeepers, deciding what sorts of traffic users can send and receive. It also gives them the opportunity to modify user traffic, adding privacy-destroying tags like Verizon’s UIDH super-cookie.
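Header injection of that sort is visible from the outside: fetch a page over plain HTTP through the carrier’s network and look at what actually arrives at the server. A minimal sketch using the same echo service; X-UIDH is the header Verizon was reported to inject, and checking for it alone is our illustration, not a complete super-cookie detector:

```python
import requests

# Use plain HTTP deliberately: injection happens in transit, and an ISP
# cannot modify the headers of an HTTPS request.
headers = requests.get("").json()["headers"]

for name, value in headers.items():
    if name.lower() == "x-uidh":
        print("carrier-injected tracking header:", name, "=", value)
```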

Edge providers, on the other hand, don’t have quite as much power. It’s a lot easier for users to “vote with their feet” and use a different edge provider for search, social networking, blogging, etc., than it is to change ISPs. Users also have more control over what information they send to edge providers—that’s why tracker blockers like Privacy Badger work. And of course, there’s the matter of jurisdiction; while ISPs operate in specific geographical areas, websites are accessible (and hosted) all over the world, which would raise all sorts of terrible jurisdictional issues.

So while we agree that websites should honor DNT requests, and we will continue to develop tools that will enforce DNT requests at a technical level, we don’t think FCC regulation is the right approach right now.

  • 1. In FCC parlance, “edge provider” refers to any operator of a website or web service that’s not an ISP.
Related Issues: Net Neutrality | Privacy
Share this:   ||  Join EFF
Categories: Aggregated News


