Aggregated News

Law Enforcement Use of Face Recognition Systems Threatens Civil Liberties, Disproportionately Affects People of Color: EFF Report

eff.org - Fri, 16/02/2018 - 01:45
Independent Oversight, Privacy Protections Are Needed

San Francisco, California—Face recognition—fast becoming law enforcement’s surveillance tool of choice—is being implemented with little oversight or privacy protections, leading to faulty systems that will disproportionately impact people of color and may implicate innocent people for crimes they didn’t commit, says an Electronic Frontier Foundation (EFF) report released today.

Face recognition is rapidly creeping into modern life, and face recognition systems will one day be capable of capturing the faces of people, often without their knowledge, as they walk down the street, enter stores, stand in line at the airport, attend sporting events, drive their cars, and move through public spaces. Researchers at Georgetown Law estimated that one in every two American adults—117 million people—is already in a law enforcement face recognition system.

This kind of surveillance will have a chilling effect on Americans’ willingness to exercise their rights to speak out and be politically engaged, the report says. Law enforcement has already used face recognition at political protests, and may soon use face recognition with body-worn cameras, to identify people in the dark, and to project what someone might look like from a police sketch or even a small sample of DNA.

Face recognition employs computer algorithms to pick out details about a person’s face from a photo or video to form a template. As the report explains, police use face recognition to identify unknown suspects by comparing their photos to images stored in databases and to scan public spaces to try to find specific pre-identified targets.
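To make the matching step concrete, here is a minimal, hypothetical Python sketch of the comparison stage. It assumes some upstream model has already reduced each photo to a numeric template (an embedding vector); the function names, sample database, and threshold are our own illustrations, not any vendor's actual system:

    import numpy as np

    def similarity(a, b):
        # Cosine similarity: higher means the model considers the two
        # face templates more alike.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def best_match(probe, database, threshold=0.8):
        # Return the identity whose template best matches the probe, or
        # None if no score clears the threshold. `database` maps names
        # to template vectors.
        best_name, best_score = None, threshold
        for name, template in database.items():
            score = similarity(probe, template)
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    # Example with made-up 4-dimensional templates:
    db = {"person_a": np.array([0.1, 0.9, 0.3, 0.4])}
    probe = np.array([0.1, 0.8, 0.35, 0.4])
    print(best_match(probe, db))  # "person_a" (similarity ~0.996 clears 0.8)

The threshold embodies the tradeoff discussed next: lower it and more innocent people "match"; raise it and more true matches are missed.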

But no face recognition system is 100 percent accurate, and false positives—when a person’s face is incorrectly matched to a template image—are common. Research shows that face recognition misidentifies African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively. And because of well-documented racially biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African Americans, Latinos, and immigrants.

For both reasons, inaccuracies in facial recognition systems will disproportionately affect people of color.

“The FBI, which has access to at least 400 million images and is the central source for facial recognition identification for federal, state, and local law enforcement agencies, has failed to address the problem of false positives and inaccurate results,” said EFF Senior Staff Attorney Jennifer Lynch, author of the report. “It has conducted few tests to ensure accuracy and has done nothing to ensure its external partners—federal and state agencies—are not using face recognition in ways that allow innocent people to be identified as criminal suspects.”

Lawmakers, regulators, and policy makers should take steps now to limit face recognition collection and subject it to independent oversight, the report says. Legislation is needed to place meaningful checks on government use of face recognition, including rules limiting retention and sharing, requiring notification when face prints are collected, ensuring robust security procedures to prevent data breaches, and establishing legal processes governing when law enforcement may collect face images from the public without their knowledge, the report concludes.

“People should not have to worry that they may be falsely accused of a crime because an algorithm mistakenly matched their photo to a suspect. They shouldn’t have to worry that their data will end up in the hands of identity thieves because face recognition databases were breached. They shouldn’t have to fear that their every move will be tracked if face recognition is linked to the networks of surveillance cameras that blanket many cities,” said Lynch. “Without meaningful legal protections, this is where we may be headed.”

For the report:

Online version: https://www.eff.org/wp/law-enforcement-use-face-recognition

PDF version: https://www.eff.org/files/2018/02/15/face-off-report-1b.pdf

One pager on facial recognition: https://www.eff.org/document/facial-recognition-one-pager

Contact: Jennifer Lynch, Senior Staff Attorney, jlynch@eff.org
Categories: Aggregated News

Court Dismisses Playboy's Lawsuit Against Boing Boing (For Now)

eff.org - Thu, 15/02/2018 - 10:48

In a win for free expression, a court has dismissed a copyright lawsuit against Happy Mutants, LLC, the company behind acclaimed website Boing Boing. The court ruled [PDF] that Playboy’s complaint—which accused Boing Boing of copyright infringement for linking to a collection of centerfolds—had not sufficiently established its copyright claim. Although the decision allows Playboy to try again with a new complaint, it is still a good result for supporters of online journalism and sensible copyright.

Playboy Entertainment’s lawsuit accused Boing Boing of copyright infringement for reporting on a historical collection of Playboy centerfolds and linking to a third-party site. In a February 2016 post, Boing Boing told its readers that someone had uploaded scans of the photos, noting they were “an amazing collection” reflecting changing standards of what is considered sexy. The post contained links to an imgur.com page and YouTube video—neither of which were created by Boing Boing.

EFF, together with co-counsel Durie Tangri, filed a motion to dismiss [PDF] on behalf of Boing Boing. We explained that Boing Boing did not contribute to the infringement of any Playboy copyrights by including a link to illustrate its commentary. The motion noted that another judge in the same district had recently dismissed a case where Quentin Tarantino accused Gawker of copyright infringement for linking to a leaked script in its reporting.

Judge Fernando M. Olguin’s ruling quotes the Tarantino decision, noting that:

An allegation that a defendant merely provided the means to accomplish an infringing activity is insufficient to establish a claim for copyright infringement. Rather, liability exists if the defendant engages in personal conduct that encourages or assists the infringement.

Given this standard, the court was “skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement.”

From the outset of this lawsuit, we have been puzzled as to why Playboy, once a staunch defender of the First Amendment, would attack a small news and commentary website. Today’s decision leaves Playboy with a choice: it can try again with a new complaint or it can leave this lawsuit behind. We don’t believe there’s anything Playboy could add to its complaint that would meet the legal standard. We hope that it will choose not to continue with its misguided suit.

Related Cases: Playboy Entertainment Group v. Happy Mutants
Categories: Aggregated News

Will Canada Be the New Testing Ground for SOPA-lite? Canadian Media Companies Hope So

eff.org - Thu, 15/02/2018 - 04:33

A consortium of media and distribution companies calling itself “FairPlay Canada” is lobbying for Canada to implement a fast-track, extrajudicial website blocking regime in the name of preventing unlawful downloads of copyrighted works. The proposal is currently being considered by the Canadian Radio-television and Telecommunications Commission (CRTC), an agency roughly analogous to the Federal Communications Commission (FCC) in the U.S.

The proposal is misguided and flawed. We’re still analyzing it, but below are some preliminary thoughts.

The Proposal

The consortium is requesting that the CRTC establish a part-time, non-profit organization that would receive complaints from various rightsholders alleging that a website is “blatantly, overwhelmingly, or structurally engaged” in violations of Canadian copyright law. If a site were determined to be infringing, Canadian ISPs would be required to block access to it. The proposal does not specify how this would be accomplished.

The consortium proposes some safeguards in an attempt to show that the process would be meaningful and fair. It proposes that affected websites, ISPs, and members of the public be allowed to respond to any blocking request. It also suggests that no blocking request would be implemented unless a recommendation to block were adopted by the CRTC, and that any affected party would have the right to appeal to a court.

FairPlay argues the system is necessary because, in its view, unlawful downloads are destroying the Canadian creative industry and harming Canadian culture.

(Some of) The Problems

As Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, points out, Canada saw more investment in film and TV production last year than at any other time in history. And it’s not just investment in creative industries that is seeing growth: legal means of accessing creative content are also growing, as Bell itself recognized in a statement to financial analysts. Contrary to the argument pushed by the content industry and other FairPlay backers, investment and lawful film and TV services are growing, not shrinking. The Canadian film and TV industries don’t need website-blocking.

The proposal would require service providers to “disappear” certain websites, endangering Internet security and sending a troubling message to the world: it’s okay to interfere with the Internet, even effectively blacklisting entire domains, as long as you do it in the name of IP enforcement. Of course, blacklisting entire domains can mean turning off thousands of underlying websites that may have done nothing wrong. The proposal doesn’t explain how blocking is to be accomplished, but when such plans have been raised in other contexts, we’ve noted the significant concerns we have about various technological ways of “blocking” that wreak havoc on how the Internet works.

And we’ve seen how harmful mistakes can be. For example, back in 2011, the U.S. government seized the domain names of two popular websites based on unsubstantiated allegations of copyright infringement. The government held those domains for over 18 months. As another example, one company named a whopping 3,343 websites in a lawsuit as infringing on trademark and copyright rights. Without any opposition, the company was able to get an order requiring domain name registrars to seize these domains. Only after many defendants had their legitimate websites seized did the court realize that the rightsholder’s statements about many of the websites were inaccurate. Although the proposed system would involve blocking (however that is accomplished) rather than seizing domains, the problem is clear: mistakes are made, and they can have long-lasting effects.

But beyond blocking for copyright infringement, we’ve also seen that once a system is in place to take down one type of content, it will only lead to calls for more blocking, including that of lawful speech. This raises significant freedom of expression and censorship concerns.

We’re also concerned about what’s known as “regulatory capture” with this type of system: the tendency of a regulator to align its interests with those of the regulated. Here, the system would be initially funded by rightsholders, would be staffed “part-time” by those with “relevant experience,” and would get work only when rightsholders view it as a valuable system. These sorts of structural aspects of the proposal have a tendency to cause regulatory capture. An impartial judiciary that sees cases and parties from across the political, social, and cultural spectrum helps avoid this pitfall.

Finally, we’re also not sure why this proposal is needed at all. Canada already has some of the strongest anti-piracy laws in the world. The proposal just adds complexity and strips away some of the protections that a court affords those who may be involved in legitimate business (even if the content owners don’t like those businesses).

These are just some of the concerns raised by this proposal. Professor Geist’s blog highlights more, and in more depth.

What you can do

The CRTC is now accepting public comment on the proposal, and has already received over 4,000 comments. The deadline is March 1, although an extension has been sought. We encourage any interested members of the public to submit comments to let the Commission know your thoughts. Please note that all comments are made public, and require certain personal information to be included.

Categories: Aggregated News

Let's Encrypt Hits 50 Million Active Certificates and Counting

eff.org - Thu, 15/02/2018 - 04:02

In yet another milestone on the path to encrypting the web, Let’s Encrypt has now issued over 50 million active certificates. Depending on your definition of “website,” this suggests that Let’s Encrypt is protecting between about 23 million and 66 million websites with HTTPS (more on that below). Whatever the number, it’s growing every day as more and more webmasters and hosting providers use Let’s Encrypt to provide HTTPS on their websites by default.

Source: https://letsencrypt.org/stats/ as of February 14, 2018

Let’s Encrypt is a certificate authority, or CA. CAs like Let’s Encrypt are crucial to secure, HTTPS-encrypted browsing. They issue and maintain digital certificates that help web users and their browsers know they’re actually talking to the site they intended to.

One of the things that sets Let’s Encrypt apart is that it issues these certificates for free. And, with the help of EFF’s Certbot client and a range of other automation tools, it’s easy for webmasters of varying skill and resource levels to get a certificate and implement HTTPS. In fact, HTTPS encryption has become an automatic part of many hosting providers’ offerings.
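To see what one of these certificates tells a client in practice, here is a short Python sketch, using only the standard library, that connects to a site over TLS and reports who issued its certificate and when it expires. The hostname and sample output are illustrative:

    import socket
    import ssl

    def certificate_info(hostname, port=443):
        # create_default_context() validates the chain against trusted CAs.
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        issuer = dict(pair[0] for pair in cert["issuer"])
        return issuer.get("organizationName"), cert["notAfter"]

    # For a site using Let's Encrypt, this would print something like
    # ("Let's Encrypt", "May 16 12:00:00 2018 GMT").
    print(certificate_info("www.eff.org"))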

50 million active certificates represents the number of certificates that are currently valid and have not expired. (Sometimes we also talk about “total issuance,” which refers to the total number of certificates ever issued by Let’s Encrypt. That number is around 217 million now.) Relating these numbers to names of “websites” is a bit complicated. Some certificates, such as those issued by certain hosting providers, cover many different sites. Yet some certificates are also redundant with others, so there may be a handful of active certificates all covering precisely the same names.

One way to count is by “fully qualified domains active”—in other words, different names covered by non-expired certificates. This is now at 66 million. This metric can overcount sites; while most people would say that eff.org and www.eff.org are the same website, they count as two different names here.

Another way to count the number of websites that Let’s Encrypt protects is by looking at “registered domains active,” of which Let’s Encrypt currently has about 23 million. This refers to the number of different registered domain names among non-expired certificates. In this case, supporters.eff.org and www.eff.org would be counted as one name. In cases where pages under the same registered domain are run by different people with different content, this metric may undercount different sites.
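The gap between the two counts is easy to reproduce in code. Here is a small sketch using the third-party tldextract library, which applies the public-suffix list to reduce fully qualified names to registered domains; the sample names are our own:

    import tldextract  # third-party: pip install tldextract

    names = ["eff.org", "www.eff.org", "supporters.eff.org"]

    fqdns = set(names)
    registered = {tldextract.extract(n).registered_domain for n in names}

    print(len(fqdns))       # 3 fully qualified domains
    print(len(registered))  # 1 registered domain: "eff.org"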

No matter how you slice it, Let’s Encrypt is one of the largest CAs. And it has grown largely by giving websites their first-ever certificate rather than by grabbing websites from other CAs. That means that, as Let’s Encrypt grows, the number of HTTPS-protected websites on the web tends to grow too. Every website protected is one step closer to encrypting the entire web, and milestones like this remind us that we are on our way to achieving that goal together.

Categories: Aggregated News

The Revolution and Slack

eff.org - Thu, 15/02/2018 - 03:44

The revolution will not be televised, but it may be hosted on Slack. Community groups, activists, and workers in the United States are increasingly gravitating toward the popular collaboration tool to communicate and coordinate efforts. But many of the people using Slack for political organizing and activism are not fully aware of the ways Slack falls short in serving their security needs. Slack has yet to support this community in its default settings or in its ongoing design.  

We urge Slack to recognize the community organizers and activists using its platform and take more steps to protect them. In the meantime, this post provides context and things to consider when choosing a platform for political organizing, as well as some tips about how to set Slack up to best protect your community.

The Mismatch

Slack is designed as an enterprise system built for business settings. That results in a sometimes dangerous mismatch between the needs of the audience the company is aimed at serving and the needs of the important, often targeted community groups and activists who are also using it.

Two things that EFF tends to recommend for digital organizing are 1) using encryption as extensively as possible, and 2) self-hosting, so that a governmental authority has to get a warrant for your premises in order to access your information. The central thing to understand about Slack (and many other online services) is that it offers neither. This means that if you use Slack as a central organizing tool, Slack stores and is able to read all of your communications, as well as identifying information for everyone in your workspace.

We know that for many, especially small organizations, self-hosting is not a viable option, and using strong encryption consistently is hard. Meanwhile, Slack is easy, convenient, and useful. Organizations have to balance their own risks and benefits. Regardless of your situation, it is important to understand the risks of organizing on Slack.

First, The Good News

Slack follows several best practices in standing up for users. Slack does require a warrant for content stored on its servers. Further, it promises not to voluntarily provide information to governments for surveillance purposes. Slack also promises to require the FBI to go to court to enforce gag orders issued with National Security Letters, a troubling form of subpoena. Additionally, federal law prohibits Slack from handing over content (but not metadata like membership lists) in response to civil subpoenas.

Slack also stores your data in encrypted form, which means that if it leaks or is stolen, it is not readable. This is excellent protection if you are worried about attacks and data breaches. It is not useful, however, if you are worried about governments or other entities putting pressure on Slack to hand over your information.

Risks With Slack In Particular

And now the downsides. These are things that Slack could change, and EFF has called on them to do so.

Slack can turn over content to law enforcement in response to a warrant. Slack’s servers store everything you do on its platform. Since Slack can read this information on its servers—that is, since it’s not end-to-end encrypted—Slack can be forced to hand it over in response to law enforcement requests. Slack does require warrants to turn over content, and can resist warrants it considers improper or overbroad. But if Slack complies with a warrant, users’ communications are readable on Slack’s servers and available for it to turn over to law enforcement.

Slack may fail to notify users of government information requests. When the government comes knocking on a website’s door for user data, that website should, at a minimum, provide users with timely, detailed notice of the request. Slack’s policy in this regard is lacking. Although it states that it will provide advance notice to users of government demands, it allows for a broad set of exceptions to that standard. This is something that Slack could and should fix, but it refuses to even explain why it has included these loopholes.

Slack content can make its way into your email inbox. Signing up for a Slack workspace also signs you up, by default, for email notifications when you are directly mentioned or receive a direct message. These email notifications can include the content of those mentions and messages. If you expect sensitive messages to stay in the Slack workspace where they were written and shared, this might be an unpleasant surprise. With these defaults in place, you have to trust not only Slack but also your email provider with your own and others’ private content.

Risks With Third-Party Platforms in General

Many of the risks that come with using Slack are also risks that come with using just about any third-party online platform. Most of these are problems with the law that we all must work on to fix together. Nevertheless, organizers must consider these risks when deciding whether Slack or any other online third-party platform is right for them.

Much of your sensitive information is not subject to a warrant requirement. While a warrant is required for content, some of the most sensitive information held by third-party platforms—including the identities and locations of the people in a Slack workspace—is considered “non-content” and is not currently protected by the warrant requirement federally or in most states. If the identities of your organization’s members are sensitive, consider whether Slack or any other online third party is right for you.

Companies can be legally prevented from giving users notice. While Slack and many other platforms have promised to require the FBI to justify controversial National Security Letter gags, these gags may still be enforced in many cases. In addition, many warrants and other forms of legal process contain different kinds of court-ordered gags, leaving companies with no ability to notify you that the government has seized your data.

Slack workspaces are subject to civil discovery. Government is not the only entity that could seek information from Slack or other third parties. Private companies and other litigants have sought, and obtained, information from hosts ranging from Google to Microsoft to Facebook and Twitter. While federal law prevents them from handing over customer content in civil discovery, it does not protect “non-content” records, such as membership identities and locations.

A group is only as trustworthy as its members. Any group environment is only as trustworthy as the people who participate in it. Group members can share and even screenshot content, so it is important to establish guidelines and expectations that all members agree on. Establishing trusted admins or moderators to facilitate these agreements can also be beneficial.

Making Slack as Secure as Possible

If using Slack is still right for you, you can take steps to harden your security settings and make your closed workspaces as private as possible.

The lowest-hanging privacy fruit is to change a workspace’s retention settings. By default, Slack retains all the messages in a workspace or channel (including direct messages) for as long as the workspace exists. The same goes for any files submitted to the workspace. Workspace admins have the ability to set shorter retention periods, which can mean less content available for government requests or legal inquiries.
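Slack’s retention controls live in the workspace administration pages rather than in code, but the underlying idea can be illustrated with its public Web API. The sketch below is our own illustration, not a substitute for the built-in setting: it uses the real conversations.history and chat.delete methods to prune messages older than 30 days. The token and channel ID are hypothetical, and a token can generally only delete messages its own user wrote unless it has admin privileges:

    import time
    import requests

    TOKEN = "xoxp-hypothetical-token"
    CHANNEL = "C0XXXXXXX"  # hypothetical channel ID
    CUTOFF = time.time() - 30 * 86400  # keep only the last 30 days

    headers = {"Authorization": "Bearer " + TOKEN}

    # Fetch messages older than the cutoff...
    history = requests.get(
        "https://slack.com/api/conversations.history",
        headers=headers,
        params={"channel": CHANNEL, "latest": CUTOFF, "limit": 200},
    ).json()

    # ...and delete them one at a time.
    for message in history.get("messages", []):
        requests.post(
            "https://slack.com/api/chat.delete",
            headers=headers,
            data={"channel": CHANNEL, "ts": message["ts"]},
        )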

Users can also address the email-leaking concern described above by minimizing email notification settings. This works best if all of the members of a group agree to do it, since email notifications can expose multiple users’ messages. 

The privacy of a Slack workspace also relies on the security of individual members’ accounts. Setting up two-factor authentication can add an extra layer of security to an account, and admins even have the option of making two-factor authentication mandatory for all the members of a workspace.

However, no settings tweak can completely mitigate the concerns described above. We strongly urge Slack to step up to protect the high-risk groups that are using it along with its enterprise customers.  And all of us must stand together to push changes to the law.

Technology should stand with those who wish to make change in our world. Slack has made a great tool that can help, and it’s time for Slack to step up with its policies.

Categories: Aggregated News

Companies Must Be Accountable to All Users: The Story of Egyptian Activist Wael Abbas

eff.org - Wed, 14/02/2018 - 06:44

Egyptian journalist Wael Abbas holds a special distinction: Over the years, he’s experienced censorship at the hands of four of Silicon Valley’s top companies. Although more extreme, his story isn’t so different from that of the many individuals who, following a single misstep or mistake at the hands of a content moderator, find themselves unceremoniously removed from a social platform.

When YouTube was still fairly new, Abbas began posting videos depicting police brutality in his native Egypt to the platform. The award-winning journalist and anti-torture activist found utility in the global platform, which even then had massive reach. One of the videos he had posted even resulted in a rare conviction of police officers in Cairo. But in late 2007, he found that his account had been removed without warning. The reason? His content, often graphic in nature, had been receiving large numbers of complaints.

Rights activists rallied around Abbas and were able to convince YouTube to restore his account; his archive of videos was eventually restored. YouTube later adjusted its rules to be more permissive of violent content that is documentarian in nature. Around the same time, Abbas’ Yahoo! email account was shut down—and later restored—on accusations that he was spamming other users.

More recently, Abbas has faced off with Facebook over an erroneous content decision made by the company. In November 2017, Abbas was issued a 30-day suspension by Facebook for a post in which he named and accused an individual of running a scam and threatening other people. As a result of the suspension, Abbas was unable to post to Facebook or use Messenger or other platform tools. After we contacted the company, the suspension was reversed and Abbas’s access restored.

In another, more recent instance, Abbas had an image removed from Facebook, and received only a vague notification stating:

You uploaded a photo that violates our Terms of Use, and this photo has been removed. Facebook does not allow photos that attack an individual or group, or that contain nudity, drug use, violence, or other violations of the Terms of Use. These policies are designed to ensure Facebook remains a safe, secure and trusted environment for all users, including the many children who use the site.

Although Facebook pointed to its policies, it did not identify to Abbas which of his photos had actually violated the Terms of Use, leaving him guessing as to what he’d done wrong. A Facebook spokesperson commented:

In most instances involving content removals, we send people a generic message to let them know that they've violated our Community Standards. We're in the process of trying to be more specific with our language so that people have a better understanding of why we've taken down their content and how can they avoid similar removals in the future.

Wael Abbas writes about his Twitter account being suspended

Abbas was able to hold on to his Facebook account, but with Twitter, he wasn’t so lucky. In December, he was suddenly suspended from the platform without warning or notification. His account, which was verified and had 350,000 followers, was described by Egyptian human rights activist Sherif Azer as “a live archive to the events of the revolution and till today one of few accounts still documenting human rights abuses in Egypt.” EFF contacted Twitter about the suspension, but the company did not respond to our query.

Platforms must be accountable to their users

Social media companies took great pride in the role they were said to have played in the 2011 Arab uprisings. But as a recent article from Middle East Eye points out, Egyptians are facing a significant increase in content takedowns on Facebook. The article asks the question: “Would those social media accounts which supported Egypt's uprisings in 2011 now be shut down?”

In fact, the most famous of those social media accounts—the page entitled “We Are All Khaled Said” that first called for protests on January 25, 2011—was actually shut down by Facebook in 2010, just a few months before the uprising. The page, which was later revealed to have been created by Google executive Wael Ghonim, was removed because Ghonim had been using a fake name, and only restored after US-based NGOs stepped in to help.

Similarly, Abbas was only able to have his suspension overturned after contacting EFF. Verified Egyptian Reuters journalist Amina Ismail was able to get a Twitter suspension overturned through her contacts. Abbas and Ismail, however, are both high-profile journalists; most users don’t have access to contacts at Silicon Valley’s top tech companies.

Wael Abbas's experience demonstrates the precarity of our online lives, and the dire need for platforms to institute transparent practices. As we recently wrote, social media platforms must notify users clearly when they violate a policy, and offer a clear path of recourse so that all users have an opportunity to appeal content decisions. Abbas's experience is the tip of the iceberg: for every prominent journalist documenting injustice who manages to get through their filters, how many more have lost the fight against the censors before they had a chance to reach a wider public?

It is vital that technology companies recognize the role they play in fostering free expression and act accordingly. To learn more about our efforts to hold companies accountable on freedom of expression, visit Onlinecensorship.org.

Categories: Aggregated News

We Don’t Need New Laws for Faked Videos, We Already Have Them

eff.org - Wed, 14/02/2018 - 04:39

Video editing technology hit a milestone this month. The new tech is being used to make porn. With easy-to-use software, pretty much anyone can seamlessly take the face of one real person (like a celebrity) and splice it onto the body of another (like a porn star), creating videos that lack the consent of multiple parties.

People have already picked up the technology, creating and uploading dozens of videos on the Internet that purport to show famous Hollywood actresses in pornographic films that they had no part in whatsoever.

While many specific uses of the technology (like specific uses of any technology) may be illegal or create liability, there is nothing inherently illegal about the technology itself. And existing legal restrictions should be enough to set right any injuries caused by malicious uses.

As Samantha Cole at Motherboard reported in December, a Reddit user named “deepfakes” began posting videos he created that replaced the faces of porn actors with other well-known (non-pornography) actors. According to Cole, the videos were “created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.”

Just over a month later, Cole reported that the creation of face-swapped porn, labeled “deepfakes” after the original Redditor, had “exploded” with increasingly convincing results. And an increasingly easy-to-use app had launched with the aim of allowing those without technical skills to create convincing deepfakes. Soon, a marketplace for buying and selling deepfakes appeared in a subreddit, before being taken off the site. Other platforms, including Twitter, PornHub, Discord, and Gfycat, followed suit. In removing the content, each platform noted a concern that the people depicted in the deepfakes did not consent to their involvement in the videos.

We can quickly imagine many terrible uses for this face-swapping technology, both in creating nonconsensual pornography and false accounts of events, and in undermining the trust we currently place in video as a record of events.

But there can be beneficial and benign uses as well: political commentary, parody, anonymization of those needing identity protection, and even consensual vanity or novelty pornography. (A few others are hypothesized towards the end of this article.)

The knee-jerk reaction many people have towards any new technology that could be used for awful purposes is to try and criminalize or regulate the technology itself. But such a move would threaten the beneficial uses as well, and raise unnecessary constitutional problems.

Fortunately, existing laws should be able to provide acceptable remedies for anyone harmed by deepfake videos. In fact, this area isn’t entirely new when it comes to how our legal framework addresses it. The US legal system has been dealing with the harm caused by photo-manipulation and false information in general for a long time, and the principles so developed should apply equally to deepfakes.

What Laws Apply

If a deepfake is used for criminal purposes, then criminal laws will apply. For example, if a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. And for any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in either of these situations.

On the tort side, the best fit is probably the tort of False Light invasion of privacy. False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes. Deepfakes fit into those areas quite easily.

To win a false light lawsuit, a plaintiff—the person harmed by the deepfake, for example—must typically prove that the defendant—the person who uploaded the deepfake, for example—published something that gives a false or misleading impression of the plaintiff, that the false impression damages the plaintiff’s reputation or causes them great offense in a way that would be highly offensive to a reasonable person, and that the publication caused the plaintiff mental anguish or suffering. It seems that in many situations the placement of someone in a deepfake without their consent would be the type of “highly offensive” conduct that the false light tort covers.

The Supreme Court further requires that in cases pertaining to matters of public interest, the plaintiff must also prove an intent that the audience believe the impression to be true. This is the actual malice requirement found in defamation law. 

False light is recognized as a legal action in about two-thirds of the states. It can be difficult to distinguish false light from defamation, and many courts treat them identically. The courts that treat them differently focus on the injury: defamation compensates for damage to reputation, while false light compensates for being subjected to offensive publicity. But of course, a plaintiff could sue for defamation if a deepfake has a natural tendency to damage their reputation.

The tort of Intentional Infliction of Emotional Distress (IIED) will also be available in many situations. A plaintiff can win an IIED lawsuit if they prove that a defendant—again, for example, a deepfake creator and uploader—intended to cause the plaintiff severe emotional distress by extreme and outrageous conduct, and that the plaintiff actually suffered severe emotional distress as a result of the extreme and outrageous conduct. The Supreme Court has found that where the extreme and outrageous conduct is the publication of a false statement and when the statement is about either a matter of public interest or a public figure, the plaintiff must also prove an intent that the audience believe the statement to be true, an analog to defamation law’s actual malice requirement. The Supreme Court has further extended the actual malice requirement to all statements pertaining to matters of public interest.

And to the extent deepfakes are sold, or their creators receive some other benefit from them, they may also give rise to right of publicity claims by those whose images are used without their consent.

Lastly, one whose copyrighted material—either the facial image or the source material into which the facial image is embedded—is used without permission may have a claim for copyright infringement, subject of course to fair use and other defenses.

Yes, deepfakes can present a social problem about consent and trust in video, but EFF sees no reason why the already available legal remedies will not cover injuries caused by deepfakes.

Categories: Aggregated News

How Have Europe's Upload Filtering and Link Tax Plans Changed?

eff.org - Wed, 14/02/2018 - 03:26

Although we have been opposing Europe's misguided link tax and upload filtering proposals ever since they first surfaced in 2016, the proposals haven't been standing still during all that time. In the back and forth between a multiplicity of different Committees of the European Parliament, and two other institutions of the European Union (the European Commission and the Council of the European Union), various amendments have been offered up in an attempt at political compromise. Unfortunately, the point at which these compromises seem to have landed still poses the same problems as before.

What Has Happened with the Link Tax?

Article 11 is its official designation, but "link tax" is a far better informal description of this proposal, which would impose a requirement for Internet platforms to pay money to news publishers for providing links to news articles, accompanied by a short summary of what they are linking to. This isn't a copyright, because the link tax is paid to the publisher rather than the author, and because it is payable even if the portion of the news article taken isn't copyright-protected, falls within a copyright exception, or is freely licensed.

It's unclear why this proposal wasn't abandoned a long time ago. A similar link tax in Spain resulted in the closure of the Spanish version of Google News, a German equivalent has also been deemed a dismal failure, and both small publishers and even a European Commission-funded study have slammed the proposal. Nevertheless, as of February 2018, it remains firmly on the table, with virtually nothing to sweeten the thoroughly rotten deal that it offers to Internet platforms and publishers alike.

The most recent attempt at compromise comes in a discussion paper [PDF] from the Bulgarian Council Presidency, prepared as input for a meeting of the Council's Intellectual Property Working Party that was held on February 12. The paper proposes only minor tweaking to the European Commission's original text, such as excluding individual Internet users from liability for the tax, and carving out "individual words or very short excerpts of text" from its scope, but without specifying what "very short excerpts" actually means. 

The discussion paper also briefly acknowledges the alternative proposal of dropping the link tax altogether, and instead addressing publishers' concerns without creating any new copyright-like impost. This alternative proposal would create a legal presumption that news publishers are entitled to enforce the existing copyrights in news articles written by their journalists. If Internet platforms are reproducing such large parts of news articles that permission from the copyright owner is required, this would enable the publishers to negotiate directly with those platforms to license that use. This is the only sensible compromise that can be made to the Article 11 proposal, but it is one that the Bulgarian Presidency unfortunately gives short shrift.

What has Happened with Upload Filtering?

The same discussion paper also tinkers around the edges of the upload filtering mandate, without addressing the fundamental dangers that it continues to pose to freedom of expression online. For those who came in late, the European Commission's initial upload filter proposal,  formally designated as Article 13, would require Internet platforms to put in place costly and ineffective automatic filters to prevent copyright-infringing content from being uploaded by users, creating a kind of robotic censorship regime.
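To see why such filters are blunt instruments, consider the crudest possible implementation: a blocklist of file fingerprints, sketched below in Python with illustrative values of our own. Exact hashing misses any re-encoded or trimmed copy, so real systems must fall back on fuzzy perceptual matching, and that is precisely where lawful quotation, parody, and commentary get swept up:

    import hashlib

    # Fingerprints of works flagged by rightsholders (illustrative value).
    BLOCKLIST = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def upload_allowed(data: bytes) -> bool:
        # Changing a single byte of the file (re-encoding, cropping,
        # trimming a frame) changes the hash and defeats the filter.
        return hashlib.sha256(data).hexdigest() not in BLOCKLIST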

What has changed since then? Not much. The Bulgarian Presidency proposes being slightly more specific about what kinds of online platforms are the target of the measure ("online content sharing services"). It also proposes introducing a new, expansive definition of "communication to the public"; an exclusive right reserved to copyright holders in Europe that had previously only been defined by way of a complicated series of court decisions. By deeming an Internet platform to be engaged in "communication to the public" whenever it allows a user to upload a copyright-protected work for sharing, the Bulgarian Presidency aims to justify excluding that platform from the copyright safe harbor that the existing E-Commerce Directive provides.

The only other change worth noting is that the proposal is now more equivocal about whether Internet platforms would actually have to install automated upload filters, or whether it would be sufficient for them to prevent the uploading of copyright-infringing material in some other way. But as European Digital Rights (EDRi) has cogently pointed out, this is a distinction without a difference.

To comply with Article 13 and to avoid liability under the E-Commerce Directive (per the Bulgarian Presidency's amendment), platforms are required to "take effective measures to prevent the availability on its services of ... unauthorized works or other subject-matter identified by the rightholders," and if such works do nevertheless appear on the platform, must "act expeditiously to remove or disable access to the specific unauthorized work or other subject matter and ... take steps to prevent its future availability."

There is no way in which platforms could possibly comply with this directive other than by agreeing to monitor all of the content they accept, either manually or automatically. By declining to speak this uncomfortable truth, the Bulgarian Presidency skirts around the fact that such a general monitoring obligation would contravene both Article 14 of the E-Commerce Directive and European human rights law. But that kind of clever circumlocution can’t hide the repressive nature of this censorship proposal, and it does nothing to improve on the flaws of the original text.

What Can You Do?

The fight against Article 11 and Article 13 is entering its closing days. That makes every voice that we can raise in opposition to these harmful proposals more important than ever before. European voices are best placed to convince European policymakers of the harm that their proposals would wreak upon European businesses and users. Thankfully, our allies in Europe are on the case, and if you are European or have colleagues or friends in Europe, here are the links you need to contact your representatives and speak out against their misguided plans:

  • Mozilla has put together an awesome call-in tool and response guide, which makes it easy to identify your specific concerns as a technologist, creator, innovator, scientist, or librarian. You can also read more on Mozilla's site about how all of these categories of users, and more, are affected by the Article 11 and Article 13 proposals, along with some of the other more obscure (but still important) provisions of the broader Digital Single Market Directive.
  • A coalition called Create.Refresh has a brilliant viral campaign that encourages creators to create and share their own works addressing the problems inherent in restrictive filtering systems, such as those that Article 13 would effectively mandate.
  • OpenMedia's Save the Link network has updated their click-to-call website this month with a brand new petition on Article 11 that enables you to identify yourself as one of the impacted groups, from a drop-down menu on the new page. If you are a librarian, software developer, creator, researcher, or journalist, you'll be able to demonstrate how the link tax proposals are harmful to you specifically.

As you can see, there are many options for you to get involved in this fight—and with the final Committee vote in the European Parliament coming up on March 26-27, now is the best time to do so. If we lose this one, the link tax and upload filtering mandates could be here to stay, and the Internet as we know it will never be the same.

Categories: Aggregated News

Internet Users Spoke Up To Keep Safe Harbors Safe

eff.org - Tue, 13/02/2018 - 12:04

Today, we delivered a petition to the U.S. Copyright Office to keep copyright’s safe harbors safe.  We asked the Copyright Office to remove a bureaucratic requirement that could cause websites and Internet services to lose protection under the Digital Millennium Copyright Act (DMCA). And we asked them to help keep Congress from replacing the DMCA safe harbor with a mandatory filtering law. Internet users from all over the U.S. and beyond added their voices to our petition.

Under current law, the owners of websites and online services can be protected from monetary liability when their users are accused of infringing copyright through the DMCA “safe harbors.” In order to take advantage of these safe harbors, owners must meet many requirements, including participating in the notorious notice-and-takedown procedure for allegedly infringing content. They also must register an agent—someone who can respond to takedown requests—with the Copyright Office.

The DMCA is far from perfect, but provisions like the safe harbor allow websites and other intermediaries that host third-party material to thrive and grow without the constant threat of massive copyright penalties. Without safe harbors, small Internet businesses could face bankruptcy over the infringing activities of just a few users.

Now, a lot of those small sites risk losing their safe harbor protections. That’s because of the Copyright Office’s rules for registering agents. Those registrations used to be valid as long as the information was accurate. Under the Copyright Office’s new rules, website owners must renew their registrations every three years or risk losing safe harbor protections. That means that websites can risk expensive lawsuits for nothing more than forgetting to file a form. As we’ve written before, because the safe harbor already requires websites to submit and post accurate contact information for infringement complaints, there’s no good reason for agent registrations to expire. We’re also afraid that it will disproportionately affect small businesses, nonprofits, and hobbyists, who are least able to have a cadre of lawyers at the ready to meet bureaucratic requirements.

Many website owners have signed up under the Copyright Office’s new agent registration system, which is designed to send reminder emails when the three-year registrations are set to expire. While the new registration system is a vast improvement over the old paper filing system, the expiration requirement is unnecessary and dangerous.

We explained these problems in our petition, and we also explained how the DMCA faces even greater threats. If certain major media and entertainment companies get their way, it will become much more difficult for websites of any size to earn their safe harbor status. That’s because those companies’ lobbyists are pushing for a system where platforms would be required to use computerized filters to check user-uploaded material for potential copyright infringement.

Requiring filters as a condition of safe harbor protections would make it much more difficult for smaller web platforms to get off the ground. Automated filtering technology is expensive—and not very good. Even when big companies use them, they’re extremely error-prone, causing lots of lawful speech to be blocked or removed. A filtering mandate would threaten smaller websites’ ability to host user content at all, cementing the dominance of today’s Internet giants.

If you run a website or online service that stores material posted by users, make sure that you comply with the DMCA’s requirements. Register a DMCA agent through the Copyright Office’s online system, post the same information on your website, and keep it up to date. Meanwhile, we’ll keep telling the Copyright Office, and Congress, to keep the safe harbors safe.

Categories: Aggregated News

Imprisoned Blogger Eskinder Nega Won't Sign a False Confession

eff.org - Tue, 13/02/2018 - 11:43

Online publisher and blogger Eskinder Nega has been imprisoned in Ethiopia since September 2011 for the "crime" of writing articles critical of his government. He is one of the longest-serving prisoners in EFF's Offline casefile of writers and activists unjustly imprisoned for their work online.

Now a chance he may finally be freed has been thrown into doubt because of the Ethiopian authorities' outrageous demand that he sign a false confession before being released.

The Ethiopian Prime Minister, Hailemariam Desalegn, announced surprise plans in January to close down the notorious Maekelawi detention center and release a number of prisoners. The Prime Minister said that the move was intended to "foster national reconciliation."

While Ethiopia's own officials have declined to call the recipients of the amnesty "political prisoners," the bulk of the candidates named so far for release are either opposition politicians and activists, or others, like Eskinder, caught up in previous crackdowns on dissent and free speech.

Despite the government's apparent desire to use the release to moderate tensions in Ethiopia, prison officials have undermined its message—and Eskinder's chance at freedom—by requiring him to sign a false confession before his release.

The document, given to Eskinder without warning last week, included a claim that Eskinder was a member of Ginbot 7, a group the government has previously declared a terrorist organization. Eskinder refused to sign the document, and was subsequently returned to his cell, even as other prisoners were being released. The Committee to Protect Journalists subsequently told Quartz Africa that Eskinder was asked to sign the form a second time over the weekend.

EFF continues to follow Eskinder's case closely, and urges the Ethiopian government to live up to its promise of a new era of reconciliation and renewal by returning Eskinder to his friends and family, unconditionally and immediately.

Categories: Aggregated News

Oregon Steps Up to the Plate on Network Neutrality This Month

eff.org - Tue, 13/02/2018 - 05:19

It should not be surprising that arguably the biggest mistake in Internet policy history is going to provoke a vast political response. Since the FCC repealed the federal Open Internet Order in December, many states have attempted to fill the void. With a new bill that reinstates net neutrality protections, Oregon is the latest state to step up.

Oregon’s Majority Leader Jennifer Williamson recently announced her intention to fight to restore much of what the FCC repealed last December under its so-called “Restoring Internet Freedom Order.” Her legislation, H.B. 4155, responds to the FCC’s decision by requiring any ISP that receives funds from the state to adhere to net neutrality principles—not blocking or throttling content or prioritizing its own content over that of competitors, for example.

If you’re an Oregonian, tell your state representative to act to restore net neutrality.

Oregon is following in what is clearly a trend of state legislatures and executives acting to protect their citizens’ digital rights where the federal government has abdicated responsibility. To date, 17 states have introduced network neutrality legislation and four Governors have issued Executive Orders (Montana, New York, New Jersey, and Hawaii).

The national response to the FCC’s decision to abandon its role as the consumer protection agency overseeing cable and telephone companies is to be expected. The decision is wildly unpopular with voters of all political leanings: 83 percent of voters overall, including three out of four Republican voters, opposed it. Yet despite millions of Americans submitting comments to the FCC opposing the decision, they were promptly ignored in favor of the interests of AT&T, Verizon, and Comcast. Where else should this vast swath of the American public go if not to their state and local representatives?

And while both Verizon and their association, the CTIA, made last-minute requests to the FCC to try to prevent state privacy and network neutrality laws, they are not going to be successful. Their problem is that the plan to eviscerate the law that empowers the FCC also disables the agency’s ability to block state laws. In other words, they cannot have it both ways.

While the FCC's order did contain a lot of words about how states cannot pass their own network neutrality laws, it did so without citing any specific legal authority. We remain skeptical that the FCC itself has that power. And while states still have to navigate the Commerce Clause, EFF has provided guidance on how to do that.

Notably, states, local governments, and governors in particular have caught on to the obvious weakness in the FCC’s authority and have acted. EFF will continue working to support the states in their efforts to protect a free and open Internet until we are able to fully restore the protections we once had at the federal level.

Take Action

Tell your state representatives to support H.B. 4155 and restore the neutral net

Categories: Aggregated News

The CLOUD Act: A Dangerous Expansion of Police Snooping on Cross-Border Data

eff.org - Fri, 09/02/2018 - 12:09

This week, Senators Hatch, Graham, Coons, and Whitehouse introduced a bill that diminishes the data privacy of people around the world.

The Clarifying Lawful Overseas Use of Data (CLOUD) Act expands American and foreign law enforcement’s ability to target and access people’s data across international borders in two ways. First, the bill creates an explicit provision for U.S. law enforcement (from a local police department to federal agents in Immigration and Customs Enforcement) to access “the contents of a wire or electronic communication and any record or other information” about a person regardless of where they live or where that information is located on the globe. In other words, U.S. police could compel a service provider—like Google, Facebook, or Snapchat—to hand over a user’s content and metadata, even if it is stored in a foreign country, without following that foreign country’s privacy laws.[1]

Second, the bill would allow the President to enter into “executive agreements” with foreign governments that would allow each government to acquire users’ data stored in the other country, without following each other’s privacy laws.

For example, because U.S.-based companies host and carry much of the world’s Internet traffic, a foreign country that enters one of these executive agreements with the U.S. could potentially wiretap people located anywhere on the globe (so long as the target of the wiretap is not a U.S. person or located in the United States) without the procedural safeguards of U.S. law typically given to data stored in the United States, such as a warrant, or even notice to the U.S. government. This is an enormous erosion of current data privacy laws.

This bill would also moot legal proceedings now before the U.S. Supreme Court. In the spring, the Court will decide whether or not current U.S. data privacy laws allow U.S. law enforcement to serve warrants for information stored outside the United States. The case, United States v. Microsoft (often called “Microsoft Ireland”), also calls into question principles of international law, such as respect for other countries’ territorial boundaries and their rule of law.

Notably, this bill would expand law enforcement access to private email and other online content, yet the Email Privacy Act, which would create a warrant-for-content requirement, has still not passed the Senate, even though it has enjoyed unanimous support in the House for the past two years.

The CLOUD Act and the US-UK Agreement

The CLOUD Act’s proposed language is not new. In 2016, the Department of Justice first proposed legislation that would enable the executive branch to enter into bilateral agreements with foreign governments, giving those foreign governments direct access to U.S. companies and U.S.-stored data. Ellen Nakashima at the Washington Post broke the story that these agreements (the first iteration has already been negotiated with the United Kingdom) would enable foreign governments to wiretap any communication in the United States, so long as the target is not a U.S. person. In 2017, the Justice Department re-submitted the bill for Congressional review with a few changes, this time including broad language to allow the extraterritorial application of U.S. warrants.

In September 2017, EFF, with a coalition of 20 other privacy advocates, sent a letter to Congress opposing the Justice Department’s revamped bill.

The executive agreement language in the CLOUD Act is nearly identical to the language in the DOJ’s 2017 bill. None of EFF’s concerns have been addressed. The legislation still:

  • Includes a weak standard for review that does not rise to the level of the protections of the warrant requirement under the Fourth Amendment.
  • Fails to require foreign law enforcement to seek individualized and prior judicial review.
  • Grants real-time access and interception to foreign law enforcement without requiring the heightened warrant standards that U.S. police have to adhere to under the Wiretap Act.
  • Fails to place adequate limits on the category and severity of crimes for this type of agreement.
  • Fails to require notice on any level – to the person targeted, to the country where the person resides, and to the country where the data is stored. (Under a separate provision regarding U.S. law enforcement extraterritorial orders, the bill allows companies to give notice to the foreign countries where data is stored, but there is no parallel provision for company-to-country notice when foreign police seek data stored in the United States.)

The CLOUD Act also creates an unfair two-tier system. Foreign nations operating under executive agreements are subject to minimization and sharing rules when handling data belonging to U.S. citizens, lawful permanent residents, and corporations. But these privacy rules do not extend to someone born in another country and living in the United States on a temporary visa or without documentation. This denial of privacy rights is unlike other U.S. privacy laws. For instance, the Stored Communications Act protects all members of the “public” from the unlawful disclosure of their personal communications.

An Expansion of U.S. Law Enforcement Capabilities

The CLOUD Act would give unlimited jurisdiction to U.S. law enforcement over any data controlled by a service provider, regardless of where the data is stored and who created it. This applies to content, metadata, and subscriber information – meaning private messages and account details could be up for grabs. The breadth of such unilateral extraterritorial access creates a dangerous precedent for other countries who may want to access information stored outside their own borders, including data stored in the United States.

EFF argued on this basis (among others) against unilateral U.S. law enforcement access to cross-border data, in our Supreme Court amicus brief in the Microsoft Ireland case.

When data crosses international borders, U.S. technology companies can find themselves caught between the conflicting data laws of different nations: one nation might use its criminal investigation laws to demand data located beyond its borders, yet that same disclosure might violate the data privacy laws of the nation that hosts the data. Thus, U.S. technology companies lobbied for and received provisions in the CLOUD Act allowing them to move to quash or modify U.S. law enforcement orders for extraterritorial data. A company can move to quash a U.S. order when the order does not target a U.S. person and might conflict with a foreign government’s laws. To do so, the company must object within 14 days and undergo a complex “comity” analysis – a procedure in which a U.S. court balances the competing interests of the U.S. and foreign governments.

Failure to Support Mutual Assistance

Of course, there is another way to protect technology companies from this dilemma, which would also protect the privacy of technology users around the world: strengthen the existing international system of Mutual Legal Assistance Treaties (MLATs). This system allows police who need data stored abroad to obtain the data through the assistance of the nation that hosts the data. The MLAT system encourages international cooperation.

It also advances data privacy. When foreign police seek data stored in the U.S., the MLAT system requires them to adhere to the Fourth Amendment’s warrant requirements. And when U.S. police seek data stored abroad, it requires them to follow the data privacy rules where the data is stored, which may include important “necessary and proportionate” standards. Technology users are most protected when police, in the pursuit of cross-border data, must satisfy the privacy standards of both countries.

While there are concerns from law enforcement that the MLAT system has become too slow, those concerns should be addressed with improved resources, training, and streamlining.

The CLOUD Act raises dire implications for the international community, especially as the Council of Europe is beginning a process to review the MLAT system that has been supported for the last two decades by the Budapest Convention. Although Senator Hatch has in the past introduced legislation that would support the MLAT system, this new legislation fails to include any provisions that would increase resources for the U.S. Department of Justice to tackle its backlog of MLAT requests, or otherwise improve the MLAT system.

A growing chorus of privacy groups in the United States opposes the CLOUD Act’s broad expansion of U.S. and foreign law enforcement’s unilateral powers over cross-border data. For example, Sharon Bradford Franklin of OTI (and the former executive director of the U.S. Privacy and Civil Liberties Oversight Board) objects that the CLOUD Act will move law enforcement access capabilities “in the wrong direction, by sacrificing digital rights.” CDT and Access Now also oppose the bill.

Sadly, some major U.S. technology companies and legal scholars support the legislation. But, to set the record straight, the CLOUD Act is not a “good start.” Nor does it do a “remarkable job of balancing these interests in ways that promise long-term gains in both privacy and security.” Rather, the legislation reduces protections for the personal privacy of technology users in an attempt to mollify tensions between law enforcement and U.S. technology companies.

Legislation to protect the privacy of technology users from government snooping has long been overdue in the United States. But the CLOUD Act does the opposite, and privileges law enforcement at the expense of people’s privacy. EFF strongly opposes the bill. Now is the time to strengthen the MLAT system, not undermine it.

[1] The text of the CLOUD Act does not limit U.S. law enforcement to serving orders on U.S. companies or companies operating in the United States. The Constitution may prevent the assertion of jurisdiction over service providers with little or no nexus to the United States.

Related Cases: In re Warrant for Microsoft Email Stored in Dublin, Ireland
Categories: Aggregated News

IPR Process Saves 80 Companies From Paying For a Sports-Motion Patent

eff.org - Thu, 08/02/2018 - 10:17

The importance of the US Patent Office’s “inter partes review” (IPR) process was highlighted in dramatic fashion yesterday. Patent appeals judges threw out a patent [PDF] that was used to sue more than 80 companies in the fitness, wearables, and health industries.

US Patent No. 7,454,002 was owned by Sportbrain Holdings, a company that advertised a kind of ‘smart pedometer’ as recently as 2011. But the product apparently didn’t take off, and in 2016, Sportbrain turned to patent lawsuits to make a buck.

A company called Unified Patents challenged the ’002 patent by filing an IPR petition, and last year, the Patent Office agreed that the patent should be reviewed. Yesterday, the patent judges published their decision, canceling every claim of the patent.

The ’002 patent describes capturing a user’s “personal data,” and then sharing that information with a wireless computing device and over a network. It then analyzes the data and provides feedback.

After reviewing the relevant technology, a panel of Patent Office judges found there wasn’t much new in the ’002 patent. Earlier patents had already described collecting and sharing various types of sports data, including computer-assisted pedometers and a system that measured a skier’s “air time.” Given those earlier advances, the steps of the Sportbrain patent would have been obvious to someone working in the field. The office canceled all the claims.

That means the dozens of different companies sued by Sportbrain won’t have to each spend hundreds of thousands of dollars—potentially millions—to defend against a patent that, the government now acknowledges, never should have been granted in the first place.

A Critical Tool for Innovators

Bad patents like the one asserted by Sportbrain are a drain on the innovation economy, especially for small businesses. But the damage that could be caused by such patents was much worse before the advent of IPRs.

The IPR process has proven to be the most effective part of the 2012 America Invents Act. In most cases, the IPR process is far more efficient than federal courts when it comes to evaluating a patent to figure out if it’s truly new and non-obvious.

IPRs have other advantages for small companies. Often, companies that get sued or threatened by patent trolls end up paying a licensing fee even though they don’t think the patents are legitimate. The IPR process lets defendants band together to challenge a patent. That’s enabled the success of membership-based for-profit companies like RPX and Unified Patents—in fact, it was member-funded Unified that filed the petition that shut down the Sportbrain Holdings patent.

The IPR process also enables non-profits like EFF to fight bad patents. That’s how EFF was able to knock out the Personal Audio “podcasting” patent. The petition was paid for by the more than 1,000 donors who gave to our “Save Podcasting” campaign. Last year, EFF’s victory in that case was upheld by a federal appeals court.

But the IPR process could be in danger. Senator Chris Coons has twice proposed legislation (the STRONG Patents Act and the STRONGER Patents Act) that would gut the IPR system. EFF has opposed these bills. Other opponents of IPRs have taken their complaints to the courts. One company has asked the Supreme Court to declare the process unconstitutional. This case, Oil States, will decide the future of IPRs. We’ve submitted a brief explaining why we think the process of reviewing patents at the Patent Office is not only constitutional, it’s good public policy. We hope both Congress and the high court see their way to upholding this critical tool that saved 80 companies from damaging litigation—and that was just yesterday.

Related Cases: EFF v. Personal Audio LLC
Categories: Aggregated News

John Perry Barlow, Internet Pioneer, 1947-2018

eff.org - Thu, 08/02/2018 - 08:21

With a broken heart I have to announce that EFF's founder, visionary, and our ongoing inspiration, John Perry Barlow, passed away quietly in his sleep this morning. We will miss Barlow and his wisdom for decades to come, and he will always be an integral part of EFF.

It is no exaggeration to say that major parts of the Internet we all know and love today exist and thrive because of Barlow’s vision and leadership. He always saw the Internet as a fundamental place of freedom, where voices long silenced can find an audience and people can connect with others regardless of physical distance.

Barlow was sometimes held up as a straw man for a kind of naive techno-utopianism that believed that the Internet could solve all of humanity's problems without causing any more. As someone who spent the past 27 years working with him at EFF, I can say that nothing could be further from the truth. Barlow knew that new technology could create and empower evil as much as it could create and empower good. He made a conscious decision to focus on the latter: "I knew it’s also true that a good way to invent the future is to predict it. So I predicted Utopia, hoping to give Liberty a running start before the laws of Moore and Metcalfe delivered up what Ed Snowden now correctly calls 'turn-key totalitarianism.'”

Barlow’s lasting legacy is that he devoted his life to making the Internet into “a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth . . . a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”

In the days and weeks to come, we will be talking and writing more about what an extraordinary role Barlow played for the Internet and the world. We’ve updated our collection of his work here. And as always, we will continue the work to fulfill his dream.

Categories: Aggregated News

Newly Released Surveillance Orders Show That Even with Individualized Court Oversight, Spying Powers Are Misused

eff.org - Thu, 08/02/2018 - 08:20

Once-secret surveillance court orders obtained by EFF last week show that even when the court authorizes the government to spy on specific Americans for national security purposes, that authorization can be misused to potentially violate other people’s civil liberties.

These documents raise larger questions about whether the government can meaningfully protect people’s privacy and free expression rights under Section 702 of the Foreign Intelligence Surveillance Act (FISA), which permits officials to engage in warrantless mass surveillance with far less court oversight than is required under the “traditional” FISA warrant process.

The documents are the third and final batch of Foreign Intelligence Surveillance Court (FISC) opinions released to EFF as part of a FOIA lawsuit seeking all significant orders and opinions of the secret court. Previously, the government released opinions dealing with FISA’s business records and pen register provisions, along with opinions under Section 702.

Although many of the 13 opinions are heavily redacted—and the government withheld another 26 in full—the readable portions show several instances of the court blocking government efforts to expand its surveillance or ordering the destruction of information obtained improperly as a result of its spying.

Court Rejects FBI Effort to Log Communications of Individuals Not Targeted by FISA Order

For example, in a 40-page opinion issued in 2004 or 2005, FISC Judge Harold Baker rejected the FBI’s proposal to log copies of recorded conversations of people who, while not targeted by the agency, were still swept up in its surveillance. This likely occurred when innocent people used the same communications service as the FBI’s target, possibly a shared phone line. The opinion demonstrates both the risks of overcollection as part of targeted surveillance as well as the benefits of engaged, detailed court oversight.

Here’s how that oversight works: Once the FISC approves electronic surveillance under FISA’s Title I, the FBI can record a target’s communications, but it must follow “minimization procedures” to avoid unnecessarily listening in on conversations by others who are using the same “facility” (like a telephone line). In this case, however, the FBI employed a surveillance technique that apparently captured a lot of innocent communications. (This is often referred to as “incidental collection” because the recording of these conversations is incidental to spying on the target who uses the same phone line.)

Although redactions make it difficult to understand details of the FBI’s request to the court, it apparently sought to mark these out-of-scope conversations for later use, which would be inconsistent with the “Standard Minimization Procedures” approved for use in FISA Title I cases.

The FBI seems to have presented its request to the FISC as no big deal, with “minimal, if any” impact on the Fourth Amendment. Judge Baker saw it differently. He explained that “it is not sufficient to assert that, because the Standard Procedures already permit the FBI a great deal of latitude, it is reasonable to grant a little more.”

More fundamentally, the court took the FBI to task for the “surprising occasion” of seeking to expand its use of incidentally collected communications, rather than getting rid of them. It faulted the FBI for failing to account “for the possibility that overzealous or ill-intentioned personnel might be inclined to misuse information, if given the opportunity.” As the court put it, “the advantage of minimization at the acquisition stage is clear. Information that is never acquired in the first place cannot be misused.”

NSA Makes Ridiculous Argument to Keep Communications it Obtained Without Court Authorization

Other opinions EFF obtained detail the NSA’s unauthorized surveillance of a number of individuals and the government’s efforts to hold onto the data despite a FISA court’s order that the communications be destroyed.

A December 2010 order by FISC Judge Frederick Scullin, Jr. describes how, over a period of between 15 months and three years, the NSA obtained a number of communications of U.S. persons. The precise number of communications obtained is redacted.

Rather than notifying the court that it had destroyed the communications it obtained without authorization, the NSA made an absurd argument in a bid to retain the communications: because the surveillance was unauthorized, the agency’s internal procedures that require officials to delete non-relevant communications should not apply. Essentially, because the surveillance was unlawful, the law shouldn’t apply and the NSA should get to keep what it had obtained.

The court rejected the NSA’s argument. “One would expect the procedures’ restrictions on retaining and disseminating U.S. person information to apply most fully to such communications, not, as the government would have it, to fail to apply at all,” the court wrote.

The court went on to say that “[t]here is no persuasive reason to give the (procedures) the paradoxical and self-defeating interpretation advanced by the government.”

The court then ordered the NSA to destroy the communications it had obtained without FISC authorization. But another opinion issued by Judge Scullin in May 2011 shows that rather than immediately complying with the order, the NSA asked the FISC once more to allow it to keep the communications.

Again the court rejected the government’s arguments. “No lawful benefit can plausibly result from retaining this information, but further violation of law could ensue,” the court wrote. The court then ordered the NSA to not only delete the data, but to provide reports on the status of its destruction “until such time as the destruction process has been completed.”

If Government Abuse of Surveillance Powers Occurs With Careful Oversight, What Happens Under Section 702?

The new opinions show that even when FISC judges approve targeted surveillance of particular individuals, the government still collects the contents of innocent people’s communications in ways that are incompatible with the law. That raises the question: what is the government getting away with when it engages in surveillance that has even less FISC oversight?

Although the opinions discussed above concern FISA’s statutory requirements of minimization rather than constitutional limits, these are the sort of concerns that EFF has raised in the context of the NSA’s warrantless surveillance under Section 702 of FISA. Unlike FISA Title I, Section 702 does not require the FISC to conduct such detailed oversight of the government’s activities. The court does approve minimization procedures, but it does not review targets or facilities, meaning that it has less insight into the actual surveillance. That necessarily reduces opportunities to prevent overbroad collection or check an intelligence agency’s incremental loosening of its own rules. And, as we’ve seen, it has led to significant “compliance violations” by the NSA and other agencies using Section 702. 

All surveillance procedures come with risks, especially with the level of secrecy involved in FISA. Nevertheless, opinions like these demonstrate that detailed court oversight offers the best hope of curtailing these risks. We hope it informs future debate in those areas where oversight is limited by statute, as with Section 702. If anything, the decisions are more evidence that warrantless surveillance must end. 

Related Cases: Significant FISC Opinions
Categories: Aggregated News

EFF vs IoT DRM, OMG!

eff.org - Thu, 08/02/2018 - 08:06

What with the $400 juicers and the NSFW smart fridges, the Internet of Things has arrived at that point in the hype cycle midway between "bottom line" and "punchline." Hype and jokes aside, the reality is that fully featured computers capable of running any program are getting cheaper and more powerful and smaller with no end in sight, and the gadgets in our lives are transforming from dumb hunks of electronics to computers in fancy cases that are variously labeled "car" or "pacemaker" or "Alexa."

We don't know which designs and products will be successful in the market, but we're dead certain that banning people from talking about flaws in existing designs and trying to fix those flaws will make all the Internet of Things' problems worse.

But a pernicious American law stands between the Internet of Defective Things and your right to know about those defects and remediate them. Section 1201 of the Digital Millennium Copyright Act bans any act that weakens or bypasses a lock that controls access to copyrighted works (these locks are often called Digital Rights Management or DRM). These locks were initially used to lock down the design of DVD players and games consoles, so that manufacturers could prevent otherwise legal activities, like watching out-of-region discs or playing independently produced games.

Today, these locks have proliferated to every device with embedded software: cars, tractors, pacemakers, voting machines, phones, tablets, and, of course, "smart speakers" used to interface with voice assistants. Corporations have figured out that they can deploy DRM to control how you use your device, and then use DMCA 1201 to threaten competitors whose products unlock legal, legitimate features that benefit you, instead of some company's shareholders.

This means that, for example, a printer company can use digital locks to control who can refill your printer-ink cartridges, ensuring that you buy ink from them, at whatever price they want to charge. It means that cellphone manufacturers get to decide who can fix your phone and tractor companies can choose who can fix your tractors.

What's worse: companies have exploited DMCA 1201 to attack security researchers who came forward to report defects in their products, arguing that any disclosures of vulnerabilities in the stuff you own might help you break the DRM, meaning that it's illegal to tell you truthful things about the risks you face from your badly secured gadgets.

Every three years, the US Copyright Office lets us petition for limited exemptions to this law, and we have been slowly, surely carving out a space for Americans to bypass digital locks in order to use their property in legitimate, legal ways—even if there's some DRM between them and that use.

In 2015, we won the right to jailbreak your phones and tablets—to change how they're configured so that you can unlock features that you want (even if the manufacturer doesn't), and remove the ones you don't. We also won an exemption that protects security researchers' right to bypass DRM to investigate and test the security of all sorts of gadgets. Taken together, these two rights—the right to discover defects and the right to change your device configuration—form a foundation on which solutions to the pernicious problems of our vital, ubiquitous, badly secured gadgets can be built.

This year, we're liberating your smart speakers: Apple HomePods, Amazon Echos, Google Homes, and lesser-known offerings from other manufacturers and platforms. These gadgets are finding their way into our living rooms, kitchens—even our bedrooms and bathrooms. They have microphones that are always on and listening (many of them have cameras, too), and they're connected to the Internet. They only run manufacturer-approved apps, and use encryption that prevents security researchers from investigating them and ensuring that they're working as intended.

We've asked the Copyright Office to extend the jailbreaking exemption to cover these smart speakers, giving you the right to load software of your choosing on them—and letting security researchers probe them to make sure they're not sneaking around behind your back. These exemptions include the right to bypass the devices' bootloaders and to activate or disable hardware features. These are rights that you've always had, for virtually every gadget you've ever owned—that is, until manufacturers discovered DMCA 1201's potential to control how you use of their products after they become your property.

We don't have all the answers about how to make smart speakers better, or more secure, but we are one hundred percent certain that banning people from finding out what's wrong with their smart speakers and punishing anyone who tries to improve them isn't helping.

These Copyright Office hearings are important, because they help the Copyright Office understand and acknowledge that DMCA 1201 is causing problems for people who want to do legitimate activities, but the hearings are still grossly insufficient. DMCA 1201 says the Copyright Office can give you the right to use your device in ways that are prevented by DRM, but not the right to acquire a tool to enable you to make that use. Under the DMCA's rules, every person who has the right to bypass DRM is expected to hand-whittle a tool for their own personal use and treat the design of that tool as a matter of strictest secrecy.

This is absurd. It's one of the reasons we're suing the U.S. government over the constitutionality of DMCA 1201, with the intention of having a court rule that the law is unenforceable, killing it altogether or sending it back to Congress for a major overhaul that terminates the ability of corporations to use a so-called anti-piracy law to ban activities that have no connection to copyright infringement.

Categories: Aggregated News

Startup Won't Give In to Motivational Health Messaging's $35,000 Patent Demand

eff.org - Thu, 08/02/2018 - 03:44

Trying to succeed as a startup is hard enough. Getting a frivolous patent infringement demand letter in the mail can make it a whole lot harder. The experience of San Francisco-based Motiv is the latest example of how patent trolls impose painful costs on small startups and stifle innovation.

Motiv is a startup of fewer than 50 employees competing in the wearable technology space. Founded in 2013, the company creates fitness trackers, housed in a ring worn on your finger.

In January, Motiv received a letter alleging infringement of U.S. Patent No. 9,069,648 (“the ’648 Patent”). The letter gave Motiv two options: pay $35,000 to license the ’648 Patent, or face the potential of costly litigation.

The '648 Patent, owned by Motivational Health Messaging LLC (“MHM”), is titled “Systems and methods for delivering activity based suggestive (ABS) messages.” The patent describes sending “motivational messages,” based “on the current or anticipated activity of the user,” to a “personal electronic device.” It provides examples such as sending the message “don't give up” when the user is running up a hill, or messages like “do not fear” and “God is with you” when a “user enters a dangerous neighborhood.” Simply put, the patent claims to have invented using a computer to send tailored messages based on activity or location.

While the name “Motivational Health Messaging” may sound new, the actors behind it aren’t: the people associated with MHM and its patent overlap with the people associated with notorious patent assertion entities Shipping & Transit, Electronic Communication Technologies, ArrivalStar, and Eclipse IP, who we’ve written about on numerous occasions. Collectively, these entities have filed over 700 lawsuits, with Shipping & Transit setting the 2016 record for most patent infringement lawsuits filed.

Though MHM and its patent may be new, the business model seems to be the same as that of the other, related entities: make patent infringement demands, often against small businesses, and leverage the high cost of litigation to extract settlements in the $25,000 to $45,000 range. (As of the date of this post, MHM has not yet filed any lawsuits, and the related entities have been faring very poorly in court.)

Unfortunately, for many small businesses it often makes sense to simply pay for a license instead of spending years tied up in court challenging a patent. Receiving a demand letter frivolously asserting infringement is annoying enough. Even more frustrating is being forced to divert resources away from product development in order to defend against a non-practicing entity with bad patents.

Nevertheless, Motiv decided it would not go down without a fight. Motiv retained Rachael Lamkin, who replied with her own letter explaining why Motiv does not infringe, and why MHM’s patent is invalid. Lamkin also says that in the event of litigation, Motiv would seek to join the individuals behind MHM to the lawsuit—and make them personally responsible for “any sanction or fee award.” The letter laid out in painstaking detail many of the numerous deficiencies with MHM’s patent and infringement claim, and refused to pay MHM a cent. The complete set of materials sent to MHM can be found at the end of this post.

We hope that MHM does not push ahead with a business model that preys on the vulnerability of small businesses and succeeds only when undeserved settlements are paid. Patent holders like this take advantage of inefficiencies in our legal system, despite the extreme weakness of their cases. By publishing Motiv’s response letter and supporting documentation, Motiv and EFF hope that others may benefit and not pay the troll under the bridge.

If you have recently been sued or received a demand letter from MHM, contact info@eff.org.

Links to documents and correspondence between Motivational Health Messaging, LLC and Motiv, Inc.

Categories: Aggregated News

Twilio Demonstrates Why Courts Should Review Every National Security Letter

eff.org - Wed, 07/02/2018 - 10:09

The list of companies who exercise their right to ask for judicial review when handed national security letter gag orders from the FBI is growing. Last week, the communications platform Twilio posted two NSLs after the FBI backed down from its gag orders. As Twilio’s accompanying blog post documents, the FBI simply couldn’t or didn’t want to justify its nondisclosure requirements in court. This might be the starkest public example yet of why courts should be involved in reviewing NSL gag orders in all cases.

National security letters are a kind of subpoena that gives the FBI the power to require telecommunications and Internet providers to hand over private customer records—including names, addresses, and financial records. The FBI nearly always accompanies these requests with a blanket gag order, silencing the providers and keeping the practice in the shadows, away from public knowledge or criticism.

Although NSL gag orders severely restrict the providers’ ability to talk about their involvement in government surveillance, the FBI can issue them without court oversight. Under the First Amendment, “prior restraints” like these gag orders are almost never allowed, which is why EFF and our clients CREDO Mobile and Cloudflare have for years been suing to have the NSL statute declared unconstitutional. In response to our suit, Congress included in the 2015 USA FREEDOM Act a process to allow providers to push back against those gag orders.

The new process (referred to as “reciprocal notice”) gives technology companies a right to request judicial review of the gag orders accompanying NSLs. When a company invokes the reciprocal notice process, the government is required to bring the gag order before a judge within 30 days. The judge then reviews the gag order and either approves, modifies, or invalidates it. The company can appear in that proceeding to argue its case, but is not required to do so.

Under the law, reciprocal notice is just an option. It’s no substitute for the full range of First Amendment protections against improper prior restraints, let alone mandatory judicial review of NSL gags in all cases. Nevertheless, EFF encourages all providers to invoke reciprocal notice because it’s the best mechanism available to Internet companies to voice their objections to NSLs. In our 2017 Who Has Your Back report, we awarded gold stars to companies that promised to tell the FBI to go to court for all NSLs, including giants like Apple and Dropbox.

Twilio is the latest company to follow this best practice. It received the two national security letters in May 2017, both of which included nondisclosure requirements preventing Twilio from notifying its users about the government request. And both times, Twilio successfully invoked reciprocal notice, leading the FBI to give permission to publish the letters. This might seem surprising, given that in order to issue a gag, the FBI is supposed to certify that disclosure of the NSL risks serious harm related to an investigation involving national security.

But rather than going to court to back up its certification, the FBI backed down. It retracted one of the NSLs entirely, so that Twilio was not forced to hand over any information at all. For the other, the FBI simply removed the gag order, allowing Twilio to inform its customer and publish the NSL.

This is not what the proper use of a surveillance tool looks like. Instead, it reveals a regime of censorship by attrition. The FBI imposes thousands of NSL gag orders a year, and by default, these gag orders remain in place indefinitely. Only when a company like Twilio objects does the government bear even a minimal burden of showing its work. Without a legal obligation to do so in all cases, the FBI can simply hope most companies don’t speak up.

That’s why it’s so crucial that companies like Twilio take responsibility and invoke reciprocal notice. Better still, Twilio also published a list of best practices that companies can look to when responding to NSLs, including template language to push back on standard nondisclosure requirements. (Automattic, the company behind WordPress, published a similar template last year.)

As the company explained, “The process for receiving and responding to national security letters has become less opaque, but there’s still more room for sunlight.”

We couldn’t agree more. Hopefully, if more companies follow the lead of Apple, Dropbox, Twilio, and the others who received stars in our report, the courts and Congress will see the need for further reform of the law.

Categories: Aggregated News

Fair Use Overcomes Chrysler's Bogus Copyright Notice

eff.org - Tue, 06/02/2018 - 08:53

If you watched this year’s Super Bowl, you might have seen an advertisement for Dodge Ram featuring a Dr. Martin Luther King, Jr. voiceover. To criticize the ad, and to show how antithetical it was to King’s views, Current Affairs magazine created a new version. The altered version overlays audio from elsewhere in the same speech where King criticizes excessively commercial culture and specifically calls out car ads. Although this is about as clear a fair use as one could imagine, Chrysler responded with a copyright claim.

Fortunately, the takedown did not last long. The Streisand Effect quickly kicked into gear and others reposted the video. A copy on Twitter has collected over one million views. The copyright claim was then withdrawn. We reached out to Chrysler and a spokesperson responded that the video was taken down by YouTube's Content ID system but that it was restored after Chrysler discovered the error. While we are glad that this video was restored, in many less high-profile cases, automated takedowns are never reviewed or challenged.

Many, including the King Center, have commented on how Chrysler came to use a speech that included criticism of car ads in a car ad. Chrysler has defended the ad, saying it had permission from King’s estate. King’s estate partnered with EMI in 2009 to create new “revenue streams” for King’s works and image. But where a use has not been authorized by the estate, it has tended to enforce its rights quite aggressively. It once sued CBS for using a lengthy clip of the “I Have a Dream” speech in a documentary. The estate also exacted an $800,000 payment for “permission” to use King’s words and image on the Martin Luther King, Jr. Memorial in Washington. The award-winning movie Selma couldn’t use any of King’s speeches because the rights had been licensed to another studio.

Lengthy copyright terms and post-mortem rights of publicity mean that King’s words and image will be fueling EMI’s revenue streams until approximately 2039. Fortunately, fair use offers a counter-balance for the public interest. This is why we can watch Chrysler’s commercial combined with King’s real feelings about car ads. Fair use won the day this time.

Categories: Aggregated News

BMG v. Cox: ISPs Can Make Their Own Repeat-Infringer Policies, But the Fourth Circuit Wants A Higher "Body Count"

eff.org - Tue, 06/02/2018 - 07:09

Last week’s BMG v. Cox decision has gotten a lot of attention for its confusing take on secondary infringement liability, but commentators have been too quick to dismiss the implications for the DMCA safe harbor. Internet service providers are still not copyright police, but the decision will inevitably encourage ISPs to act on dubious infringement complaints, and even to kick more people off of the Internet based on unverified accusations.

This long-running case involves a scheme by copyright troll Rightscorp to turn a profit for shareholders by demanding money from users whose computer IP addresses were associated with copyright infringement. Turning away from the tactic of filing lawsuits against individual ISP subscribers, Rightscorp began sending infringement notices to ISPs, coupled with demands for payment, and insisting that ISPs forward those notices to their customers. In other words, Rightscorp and its clients, including BMG, sought to enlist ISPs to help coerce payments from Internet users, threatening the ISPs themselves with an infringement suit if they didn’t join in. Cox, a midsize cable operator and ISP, pushed back and was punished for it.

Before the suit, Cox had quite reasonably decided to stick up for its customers by refusing to forward Rightscorp’s money demands. Going along would have put Cox’s imprimatur on Rightscorp’s vaguely worded threats. The Digital Millennium Copyright Act safe harbors, which protect ISPs and other Internet services from copyright liability, don’t require ISPs who simply transmit data to respond to infringement notices, much less forward them.

Unfortunately, Cox failed to comply with another of the DMCA’s requirements. To receive protection, an ISP must “reasonably implement” a policy for terminating “subscribers and account holders” who are “repeat infringers” in “appropriate circumstances.” Past decisions haven’t defined what “appropriate circumstances” are, but they do make clear that a repeat infringer policy has to be more than mere lip service. Cox’s defense foundered—as many do—on a series of unfortunate emails. As shown in court, Cox employees discussed receiving many infringement notices for the same subscriber, and giving repeated warnings to those subscribers, but never actually terminating them, or terminating them only to reconnect them immediately. The emails painted a picture of a company only pretending to observe the repeat-infringer requirement, while maintaining a real policy of never terminating subscribers. The reason, said the Cox employees to one another, was to eke out a bit more revenue.

Despite the emails, BMG’s case had a weakness: the notices from Rightscorp and others were mere accusations of infringement, their accuracy and veracity far from certain. Nothing in the DMCA requires an ISP to kick customers off the Internet based on mere accusations. What’s more, the “appropriate circumstances” for terminating someone’s entire Internet connection are few and far between, given the Internet’s still-growing importance in daily life. As the Supreme Court wrote last year, “Cyberspace . . . in general” and “social media in particular” are “the most important places (in a spatial sense) for the exchange of views.” Even more than a website or social network, an ISP can and should save termination for the most egregious violations, backed by substantial evidence.

The Court of Appeals for the Fourth Circuit acknowledged this, to a point. The court was “mindful of the need to afford ISPs flexibility in crafting repeat infringer policies, and of the difficulty of determining when it is ‘appropriate’ to terminate a person’s access to the Internet.” The court ruled that Cox had lost its safe harbor, not because its termination policy was too lenient, but because it failed to implement its own policy. “Indeed,” wrote the court, “in carrying out its thirteen-strike process, Cox very clearly determined not to terminate subscribers who in fact repeatedly violated the policy.”

The court also ruled that “repeat infringer” isn’t limited to those who are found liable by a court. But the court stopped short of holding that mere accusations should lead to terminations. The court pointed to “instances in which Cox failed to terminate subscribers whom Cox employees regarded as repeat infringers” after conversations with those subscribers, implying that they, at least, should have been terminated.

The court should have stopped there. Unfortunately, it also pointed to the number of actual suspensions Cox engaged in—less than one per month, compared to thousands of warnings and temporary suspensions—as a factor in denying Cox the safe harbor. That focus on “body counts” ignores the reality that terminating home Internet service is akin to “cutting off someone’s water.” And the court didn’t acknowledge that Cox’s decision to stop accepting Rightscorp’s notices—which included demands for money—protected Cox customers from an exploitative “speculative invoicing” business.

So where does this decision leave ISPs? Certainly, they should not repeat Cox’s mistake by making it clear that their termination policy is an illusion. But nothing in the decision forbids an ISP from standing up for its customers by demanding strong and accurate evidence of infringement, and reserving termination for the most egregious cases—even if that makes actual terminations extremely rare.

The case isn’t over; losing the DMCA safe harbor doesn’t mean that Cox is liable for copyright infringement by its customers. BMG still needs to show that Cox is liable under the contributory, vicarious, or inducement theories that apply to all service providers. The Fourth Circuit ruled that the jury got the wrong instructions, and that contributory liability requires more than a finding that Cox “should have known” about customers’ infringement. Because of that faulty instruction, the appeals court sent the case back for a new trial. The court’s ruling on inducement liability was confusing, as it seemed to conflate “intent” with “knowledge.” It’s important that the courts treat secondary liability doctrines thoughtfully and clearly, as they have a profound effect on how Internet services are designed and what users can do on them. That’s why, while we expect to see more suits like this, we hope that ISPs will continue to stand up for their users as Cox has in defending this one.

Categories: Aggregated News
