Due to the unprecedented secrecy surrounding the Trans-Pacific Partnership (TPP) negotiations taking place this week in Ottawa, there was no formal opportunity to engage with negotiators about the concerns that EFF and many others have expressed—over issues such as the extension of copyright protection by 20 years, and the delegation of ISPs as copyright police with the power to remove content and terminate accounts.
Faced with the alternative of allowing this round of negotiations to proceed without any public input on these important issues (and bearing in mind the maxim “If the mountain won’t come to Muhammad…”), EFF and its partners in the Our Fair Deal coalition decided to hold a side event of our own next to the venue of the negotiations. TPP negotiators were invited to watch keynote talks by two of Canada’s top copyright experts.
In a gratifying indication that the negotiators do remain willing to receive public input into their closed process if given the opportunity to do so, our small room was packed to capacity with 22 negotiators from 9 of the 12 TPP negotiating countries (and the invitees from the other countries sent their apologies, citing another engagement).
The first speaker to present to the assemblage was Howard Knopf, Counsel from Ottawa law firm Macera & Jarzyna, who maintains a widely-read blog titled Excess Copyright. His presentation, for which you can download the slides below, was aptly titled Just Say No to Term Extension—Why More is Less, and explained how term extension takes a toll on the public domain and the many stakeholders who depend upon it, without providing a countervailing benefit to authors or to the economy.
Second to present was well-known Canadian law professor Michael Geist, who focused on the question of ISP or intermediary liability for alleged copyright infringements committed by users. Under U.S. law, ISPs and websites are incentivized to remove material that users upload if a claim of copyright infringement is made. But because many of those claims are false (indeed, many are made by automated systems), such a “notice-and-takedown” regime can stifle freedom of expression, as well as users’ privacy. This is what U.S. negotiators are putting forward as a template for the TPP.
Michael presented Canada’s “notice-and-notice” regime, which does not result in automatic takedowns or threaten the cancellation of user accounts, as an alternative. Perhaps surprisingly, his presentation (which is also available for download below) suggested that a notice-and-notice regime could be extremely effective at deterring ongoing infringement in a large majority of cases.
The third speaker was Reilly Yeo from OpenMedia.ca, an Internet advocacy organisation with a track record of success in mobilizing ordinary users to speak out about threats to the Internet in Canada, which is now reaching out to a broader constituency around the world. Reilly presented results from their Your Digital Future survey, indicating that for most of the more than 60,000 respondents, freedom of expression should be the priority value when designing copyright law.
Finally, Jeremy Malcolm and Maira Sutton from EFF formally released and presented the negotiators with two open letters, endorsed by dozens of businesses and organizations from around the world, including the Wikimedia Foundation, Reddit, Creative Commons, the International Federation of Library Associations (IFLA) and many others. These letters addressed the same two themes as our side event; namely, the dangers associated with the extension of the copyright term, and of appointing ISPs as copyright police.
Even as the TPP negotiations continue to be conducted under ever more secrecy, events like these can help inform negotiators about the risks they take if they concede to restrictive copyright provisions. They are also an opportunity to improve the language in the agreement, should it ultimately pass with new copyright standards included. In the meantime, we are continuing to fight against efforts to include digital policy measures in trade agreements at all—measures that harm and undermine user rights through backdoor, corporate-captured processes.

Presentations
Howard Knopf Presentation: "Just Say No to Term Extension—Why More is Less"
Michael Geist Presentation: "Notice the Difference: Canada’s Notice-and-Notice Rules"
OpenMedia’s Reilly Yeo Presentation: "Citizen Perspectives on the TPP"
After months of waiting, a Ninth Circuit panel has finally responded to Google's plea, supported by public interest groups (including EFF), journalists, librarians, other service providers, and law professors, to reconsider its disastrous opinion in the case of Garcia v. Google. The good news is that we managed to get the panel to revisit its opinion. The bad news is that it essentially doubled down.
Quick background: several months ago, and over a vigorous dissent, a panel majority ordered Google to remove copies of the notorious Innocence of Muslims film from YouTube. Why? Because one of the actors in the film insists she has a copyright interest in her performance and, based on that interest, claims to have a right to have the video taken offline. Actress Cindy Lee Garcia—who was tricked into appearing on-screen, overdubbed, for five seconds—sued Google to have the footage removed. The district court refused and Garcia appealed. The Ninth Circuit concluded Garcia's copyright claim was "doubtful" but nonetheless ordered Google to remove the film from YouTube and take steps to prevent future uploads.
The uproar was immediate, for good reason. As we and others explained, the order amounts to a prior restraint of speech, something that should never happen where the underlying claim is "doubtful." (In fact, the Copyright Office later refused to register Garcia's performance.) The majority dismissed that concern by claiming that the First Amendment doesn’t protect copyright infringement, which missed the point. The First Amendment does protect lawful speech, which is why courts shouldn’t issue censorship orders in any but the rarest circumstances, and only where it is highly likely that the speech is actually unlawful. What is worse, the panel’s ruling was accompanied by a gag order forbidding Google from discussing the ruling for almost a week.
The amended opinion issued today recognizes some of our concerns but continues to sidestep the key issues. Notably, the amended opinion does not address our most basic concern: that the court applied the wrong standard altogether. The takedown order was a mandatory preliminary injunction, which should never occur unless the law and the facts clearly favor the person asking for it. What is worse, the court gave short shrift to the public interest. Innocence of Muslims is doubtless a highly offensive video, but it is also part of the historical record. Thanks to the injunction, the public can continue to discuss the video—but we can't see what we are discussing.
The amended opinion dismissed several other concerns on procedural grounds, insisting that the district court can consider them later:
Nothing we say today precludes the district court from concluding that Garcia doesn’t have a copyrightable interest, or that Google prevails on any of its defenses. ...
After we first published our opinion, amici raised other issues, such as the applicability of the fair use doctrine, see 17 U.S.C. § 107, and section 230 of the Communications Decency Act, see 47 U.S.C. § 230. Because these defenses were not raised by the parties, we do not address them. The district court is free to consider them if Google properly raises them.
“First Amendment protections are ‘embodied in the Copyright Act’s distinction between copyrightable expression and uncopyrightable facts and ideas,’ and in the ‘latitude for scholarship and comment’ safeguarded by the fair use defense.” Golan v. Holder, 132 S. Ct. 873, 890 (2012) (quoting Harper & Row Publishers, Inc. v. Nation Enters., 471 U.S. 539, 560 (1985)). Google hasn’t raised fair use as a defense in this appeal, see page 11 supra, so we do not consider it in determining its likelihood of success. This does not, of course, preclude Google from raising the point in the district court, provided it properly preserved the defense in its pleadings.
This is profoundly disappointing. When a case clearly implicates the public's right to access the historical record, courts shouldn't kick the can down the road. And as the amended dissent recognizes, the panel had ample legal authority to address the flaws in its opinion, rather than referring them to another court.
Google's request for a rehearing en banc (i.e., a rehearing by the entire court rather than just a three-judge panel) is still pending, so the Ninth Circuit has another chance to revisit the case. Let's hope it does so.
Last week, Microsoft completed a legal attack on two large and quite nasty botnets by obtaining a court order transferring 23 domain names to Microsoft’s control. The botnets went down and the Internet was a better place for it. But in doing so, Microsoft also took out the world’s largest dynamic DNS provider using a dangerous legal theory and without any prior notice to Vitalwerks Internet Solutions—the company that runs No-IP.com—or to the millions of innocent users who rely on No-IP.com every day.
Just two days later, Microsoft reversed course and began returning control of the seized domains to Vitalwerks. And yesterday, Microsoft and Vitalwerks announced a settlement agreement, with Microsoft admitting that “Vitalwerks was not knowingly involved with the subdomains used to support malware.”
We commend Microsoft’s prompt about-face. A company with less integrity would have stuck to its guns, and we are pleased that Microsoft instead worked quickly to rectify this situation. That said, we are disappointed that Microsoft crafted its lawsuit in a way that created these problems in the first place.

A Flawed Plan
First, some background. No-IP.com provides what’s known as dynamic DNS service at both free and paid levels. With dynamic DNS, users who lack a static IP address (mostly mobile users and home and small business DSL or cable subscribers) can host servers at a constant hostname, for instance example.no-ip.org, despite the fact that the IP address of that server, and hence the route needed to find it on the Internet, changes frequently. I actually use No-IP.com on my parents’ computer in Los Angeles so that I can provide remote tech support from San Francisco without having them locate and read me their IP address over the phone every time they need help. Prior to Microsoft’s action, No-IP.com boasted more than 18,000,000 users of its free service alone.
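The mechanics are simple enough to sketch. A client periodically checks its public IP address and, only when it has changed, tells the provider to repoint the hostname. The endpoints below are invented for illustration; real providers (No-IP included) expose similar HTTP update APIs, but consult their documentation for the actual URLs and parameters.

```python
from typing import Optional
import urllib.request

# Hypothetical endpoints for illustration only -- not any provider's real API.
IP_CHECK_URL = "https://ip.example.com/"
UPDATE_URL = "https://dyndns.example.com/update?hostname={host}&myip={ip}"

def needs_update(last_known_ip: Optional[str], current_ip: str) -> bool:
    """Only push an update when the public IP has actually changed."""
    return current_ip != last_known_ip

def refresh(host: str, last_known_ip: Optional[str]) -> str:
    """Fetch our current public IP; if it changed, repoint the hostname."""
    with urllib.request.urlopen(IP_CHECK_URL) as resp:
        current_ip = resp.read().decode().strip()
    if needs_update(last_known_ip, current_ip):
        urllib.request.urlopen(UPDATE_URL.format(host=host, ip=current_ip))
    return current_ip
```

Run from cron or a router, a loop like this is all it takes to keep a home server reachable at a stable name.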
Microsoft claims to have had no problem with the vast majority of No-IP.com’s users, and we have no reason to doubt Microsoft's sincerity. Instead, Microsoft was concerned by the use of No-IP.com’s service by a pair of botnet operators controlling a total of just over 18,000 nodes at as many subdomains. The botnets used dynamic DNS for essentially the same reasons that I do; it allowed the operators to keep track of the individual nodes of the botnet without having to maintain a current list of their IP addresses or a static command and control server. Microsoft’s plan was to use its own nameservers to send requests to resolve the botnet-associated subdomains to a blackhole, while continuing to resolve requests for the legitimate subdomains to their appropriate IP addresses. So they went to court, in secret and without telling No-IP.com, and convinced a Federal District Judge in Nevada to order the domain name registries to list Microsoft’s nameservers as authoritative for 23 of No-IP.com’s most popular domains.
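In outline, the selective resolution Microsoft described amounts to a simple lookup policy: answer queries for the known-bad subdomains with a sinkhole address, and pass every other query through to the real records. A minimal sketch of that policy (all names and addresses below are invented):

```python
from typing import Optional

SINKHOLE_IP = "0.0.0.0"  # where botnet traffic gets blackholed

# Invented example data: a blocklist of botnet subdomains and the
# legitimate records that should keep resolving normally.
BOTNET_SUBDOMAINS = {"cnc-node7.no-ip.example", "evil123.no-ip.example"}
LEGIT_RECORDS = {"myhomeserver.no-ip.example": "203.0.113.7"}

def resolve(name: str) -> Optional[str]:
    """Blackhole known botnet hosts; resolve everyone else as usual."""
    if name in BOTNET_SUBDOMAINS:
        return SINKHOLE_IP
    return LEGIT_RECORDS.get(name)
```

The policy itself is trivial; the hard part, as events showed, is operating nameservers that can answer the full query volume for millions of legitimate subdomains.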
But Microsoft’s plan failed catastrophically. The transfer resulted in more than 5,000,000 subdomains served by No-IP.com simply failing to resolve. The details of the technical failure are obscure from outside Microsoft, but those numbers are worth repeating. In order to take down an 18,000-node botnet, Microsoft commenced a legal action that resulted in the termination of DNS service to nearly 5,000,000 subdomains with which Microsoft had no complaint. In other words, the seizure order that Microsoft asked for, and a federal judge approved, was 99.6% overbroad.
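The arithmetic behind that figure is straightforward, using the approximate numbers above:

```python
bad_subdomains = 18_000        # botnet nodes Microsoft targeted
subdomains_cut_off = 5_000_000  # subdomains that stopped resolving

# Fraction of the seizure that hit subdomains with no complaint against them.
overbroad = 1 - bad_subdomains / subdomains_cut_off
print(f"{overbroad:.1%}")  # → 99.6%
```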
Drawing an analogy to the real world, imagine a busy shopping mall filled with legitimate businesses and a single mafia front. Microsoft, feeling injured by the mafia front’s usage of its trademark and attacks on its users, went to federal court in secret and obtained an order transferring control of the mall to Microsoft's own mall cops, who vowed to keep out only the mafia. But Microsoft’s mall cops were apparently overwhelmed by the number of visitors and simply locked the mall’s doors, keeping out everyone, including the 99.6% of visitors who had legitimate shopping to do.
Microsoft’s plan could have worked. Apparently Microsoft simply lacked the infrastructure capacity to put it into place. How did they make such a gross miscalculation? By telling themselves, and the court, that their “goal is to cut-off traffic to [the botnet] while allowing traffic through to any other sub-domains, if there are any such sub-domains at all.” Microsoft's lawsuit was intended to blackhole only the 0.4% of No-IP.com’s subdomains that were involved with the botnets it sought to disrupt, and it glossed over the effect on the millions of other subdomains, even suggesting it was possible that they were all bad actors. And because No-IP.com was kept in the dark, the judge heard only Microsoft's version.

A Flawed Process
Microsoft’s technical failure, as well as its suggestion to the court that there might not have been any innocent users of No-IP.com, both depended on the ex parte (legalese for without the participation of the other side) nature of the proceedings. Had No-IP.com been aware of the lawsuit, and the pending order to seize what amounted to a large fraction of its business, it would have been able to correct both of Microsoft’s failures and spare the owners of the nearly 5,000,000 innocent subdomains (including yours truly) from having their DNS service cut without notice.
Microsoft argued to the court that an ex parte hearing was required because if notice were given to the defendants, the botnets would pack up shop, switch to a different dynamic DNS provider, and disappear. Perhaps that was a good reason to keep notice from the botnet defendants, but it’s no reason to keep knowledge of the lawsuit from No-IP.com. Microsoft appeared to be suggesting to the judge that No-IP.com would surely have tipped off the botnet operators, or at least allowed the botnet operators to somehow escape. That is utter nonsense.
In ex parte proceedings, lawyers owe a heightened duty of candor to the court, since there’s no adversary to challenge their assertions. We would have hoped that would have resulted in a more thorough pre-lawsuit investigation. Now, just over a week after convincing a judge that it was vital to keep notice from No-IP.com, Microsoft has admitted that it is confident that No-IP.com was not acting in concert with, or even involved with, the botnet operators. Thus, withholding notice from No-IP.com was never warranted.

A Flawed Legal Theory
Not only did Microsoft bungle the facts and the tech underlying its seizure of No-IP.com’s core business, its case against the provider was based on a downright dangerous legal theory. Microsoft argued that, as a provider of free network services, No-IP.com was negligent. Indeed, Microsoft claims that No-IP.com had a legal obligation to:
- Require all users to provide their real name, address, and telephone number.
- Put that information in a public database.
- Use a “web reputation” service to identify bad actors.
- And encrypt its customers’ usernames and passwords.
Every one of those points is rubbish, and none is a legal duty of service providers. First, anonymity online is unambiguously protected by the First Amendment and is a cornerstone of our democracy. Service providers are free to allow their users the option of exercising their constitutional rights. Second, publishing a public database of users is by no means a best practice, and in fact would be one of the worst. Third, several companies offer “web reputation” services, including Microsoft. While a service provider is certainly free to use one of those services if it so chooses, the claim that it is legally required to do so is spurious. To the contrary, under federal law, service providers are not held responsible for the acts of their users, and not made responsible for failing to adequately block bad content. And finally, did Microsoft actually argue that it is a security best practice, and in fact a legal duty, for service providers to encrypt passwords? Because storing users’ passwords in a form that could be decrypted to plaintext by anyone, including the provider, is absolutely terrible security hygiene. If Microsoft meant that the best practice is to store the passwords in a table of cryptographic hashes, it should have said so.
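For contrast, here is a minimal sketch of the actual best practice: storing a salted, deliberately slow hash that even the provider cannot reverse, rather than "encrypting" passwords. It uses only Python's standard library; the iteration count is illustrative and should be tuned upward on real hardware.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; higher is better on modern hardware

def hash_password(password: str):
    """Return (salt, digest) to store; neither reveals the password."""
    salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Because only the salt and digest are stored, a database breach exposes no plaintext passwords, which is precisely why "encrypt the passwords" is the wrong prescription.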
In sum, Microsoft’s theory of why No-IP.com was negligent would condemn essentially every provider of free network services on the Internet, as well as many paid providers. We strongly disagree that following any of the four practices that Microsoft claimed No-IP.com failed to follow would be a good idea, much less a best practice or a legal obligation.1

Going Forward
We're glad that the disruption to No-IP.com's users lasted only a few days, and we have these suggestions for any company that wants to use the courts to eliminate threats to its users:
- Give notice to innocent intermediaries before seizing their business.
- Don't gloss over innocent uses and users of a service, especially when those uses may make up 99.6% of the service.
- Abandon Microsoft's half-baked negligence theory that, if accepted, would mean the end of free network services.
- Be prepared to actually meet the infrastructure demands that any proposed legal solution presents, so as not to cause more disruption than necessary.
At the end of the day, we commend Microsoft for dropping its suit against No-IP.com so quickly, and we’re left hoping that the next time the company decides to take it upon itself to clean up the Internet, it will reconsider the tactics it employs to do so.
- 1. We have an additional technical legal quibble with the way Microsoft’s lawsuit against No-IP.com proceeded. The ex parte restraining order that Microsoft obtained, compelling the domain name registries to transfer No-IP.com domains to Microsoft, was authorized by Federal Rule of Civil Procedure 65. That rule, however, specifically provides that only the parties, their agents, and people in “active concert” with the parties can be bound by an ex parte restraining order. Microsoft’s order purported to bind the third-party domain name registries (companies that are neither agents of, nor in active concert with, No-IP.com) despite Rule 65’s prohibition.
EFF asked the Second Circuit Court of Appeals today to reject a last-ditch attempt by the Authors Guild to block the Google Books project and rewrite the rules of fair use.
This is a long-running case that culminated in a tremendous victory in November. After years of litigation, Judge Denny Chin ruled that Google Books does not infringe copyrights in the books it indexes. But the Authors Guild appealed Judge Chin’s clear-headed decision. EFF—joined by Public Knowledge and the Center for Democracy and Technology—filed an amicus brief with the appeals court today, asking the court to affirm the decision below and help ensure that fair use continues to operate as a crucial safety valve for innovation.
In the Google Books project, Google has been cooperating with libraries to digitize books and create a massive, publicly available, searchable database. Users enter a keyword, and get results including titles, page numbers, and small snippets of text. Understandably, it’s become an invaluable tool for librarians, researchers, and the public.
But the Authors Guild has argued that its members are owed compensation if their books are digitized and included in the database, even though participating in the database benefits those authors by helping readers find their works. For example, many librarians say they have purchased new books for their collections after learning of them through the Google Books project. Nonetheless, the Guild still contends that the mere fact that a whole book was digitized means that authors’ copyrights were violated.
In this latest appeal, the Authors Guild (and its supporters) claim that fair use is being unjustly expanded, portraying Judge Chin’s ruling and other recent court opinions as some kind of fair-use creep, stretching beyond the original intent of the doctrine. Specifically, the Guild argues that the first of the four statutory fair use factors—the purpose of the use, which asks whether the use of the copyrighted material is transformative and/or non-commercial—weighs against Google. The Authors Guild and its amici insist that a use cannot be transformative if it doesn’t add new creative expression to the pre-existing work. But as Judge Chin so rightly recognized, a use can be transformative if it serves a new and distinct purpose.
Which is a good thing. As EFF's amicus brief explains, this case is just one of many where copyright owners seek to use the copyright laws to shut down new technologies. And the cramped interpretation of fair use that the Guild and its amici offer has been argued with increasing frequency in recent years, in large part because technological innovation increasingly depends on and facilitates the use of copyrighted works. In the 21st century, technological innovation often depends on copying and reverse engineering copyrighted works, in many ways and at scale. If that copying—most of which is entirely invisible to the public—infringes copyright, then huge swaths of innovation must come to a halt, to the public’s detriment and with little benefit to authors.
Fortunately that is not the rule. Fair use operates precisely as it is supposed to when it protects this kind of technological use. Copyright was always intended to protect and indeed foster innovation, and a robust fair use doctrine is one of the key means by which it accomplishes that purpose. We hope the appeals court understands this as well as Judge Chin did in November.
When faced with a digital emergency—whether someone has hijacked your social media account or your website is being DDoSed—it can be difficult for non-technical people to discern what the problem is and what the appropriate next steps may be for seeking help. To help fill this niche in the universe of privacy and security guides, a group of NGOs (including EFF, Hivos, Internews, VirtualRoad, and CIRCL) have teamed up to write a guide that combines advice for self-assessment with advice for “first responders” to help non-technical users all over the world identify and respond to their digital emergencies.
The Digital First Aid Kit aims to provide preliminary support for people facing the most common types of digital threats. The Kit offers a set of self-diagnostic tools for human rights defenders, bloggers, activists and journalists facing attacks themselves, as well as providing guidelines for digital first responders to assist a person under threat.
The Kit begins by offering ways to establish secure communication when you or a contact are facing a digital threat and want to reach out for support. The Kit also provides sections on account hijacking, seizure of devices, malware infections and DDoS attacks. Each section begins with a series of questions about the user, their devices and their situation. These questions will guide them through a self-assessment or help a first responder better understand the challenges they are facing. Finally, the Kit lays out initial steps to understand and potentially fix the problems. The steps should also help users or first responders to recognize when to request help from a specialist.
The Digital First Aid Kit is not meant to serve as the ultimate solution to every digital emergency, but it strives to give users and first responders tools that can help them to make a first assessment of what is happening and determine if they can mitigate the problem on their own.
Because this guide is a living document, the Digital First Aid Kit is available on GitHub under a Creative Commons Attribution-ShareAlike International license. We encourage people to annotate the guide, fork their own versions, contribute feedback about advice that does or does not work, and make translations.
When EFF joined with a coalition of partners to fly an airship over the NSA's Utah Data Center, the goal was to emphasize the need for accountability in the NSA spying debate. In particular, we wanted to point people to our new Stand Against Spying scorecard for lawmakers. But while we were up there, we got a remarkable and unusual view.
Today, continuing in the spirit of transparency and building on earlier efforts to shed some light on the physical spaces the US intelligence community has constructed, we're releasing a photograph of the Utah Data Center into the public domain, completely free of copyright and other restrictions. That means it can be used for any purpose—copied, edited, or even sold—online or in print, with or without attribution to the Electronic Frontier Foundation. We hope that making such an image available will help support conversations about the actions of the NSA.
The image below is just a preview—click through for the full high-resolution version.
This picture makes clear the scope and scale of the NSA's facilities—necessary because of the agency's "collect it all" posture and misguided dedication to creating ever-larger haystacks in pursuit of needles. Alongside our other efforts to bring accountability to massive NSA spying, hopefully this image can help make the infrastructure of that spying more tangible to the public.
The fine print: this image is released into the public domain under the terms of the CC0 waiver from Creative Commons. It is available from eff.org and the Wikimedia Commons.
To the extent possible under law, Electronic Frontier Foundation has waived all copyright and related or neighboring rights to this photograph of the NSA Utah Data Center.
There is a lot in our current patent system that is in need of reform. The Patent Office is too lax in granting patents. Federal Circuit case law has consistently favored patentees. Another part of the problem is forum shopping by patentees, which leads to a disproportionate number of cases being filed in the Eastern District of Texas.
Back in 2011, This American Life did a one-hour feature called “When Patents Attack!” The story included a tour of ghostly offices in Marshall, Texas, where shell companies have fake headquarters with no real employees. For many people, it was their first introduction to the phenomenon that is the Eastern District of Texas, a largely rural federal court district that has somehow attracted a huge volume of high-tech patent litigation.
The Eastern District of Texas is still number one for patent cases. Last year, there were just over 6,000 patent suits filed in federal courts around the country. One in four of those cases (24.54%, to be exact) was filed in the Eastern District of Texas. But why do patent plaintiffs, especially trolls, see it as such a favorable forum? Partly, the district's relatively rapid litigation timetable can put pressure on defendants to settle. But other local practices in the Eastern District also favor patentees. And, in our view, they do so in a way that is inconsistent with the governing Federal Rules, and they work to mask the consistent refusal by the courts in the Eastern District to end meritless cases before trial.
The podcasting patent troll litigation provides a recent case study. EFF is currently fighting the patent troll Personal Audio at the Patent Office, where we’re arguing that U.S. Patent 8,112,504 (the “podcasting patent”) is invalid. But Personal Audio is also involved in litigation against podcasters and TV companies in the Eastern District of Texas. We’ve been following that case, and unsurprisingly, the defendants there are also arguing that the podcasting patent is invalid. Specifically, the defendants are arguing that earlier publications and websites describe the system for “disseminating media content” that Personal Audio says it invented.
Recently, something happened in that case that we thought deserved notice: the defendants were denied the opportunity to have the judge rule on summary judgment on this issue. This deserves a bit of explanation: generally, parties go to trial to have their rights decided by a jury. But the Federal Rules provide the parties the right to get “summary judgment” (i.e., a decision from the judge) where there is no “genuine dispute as to any material fact.” To be clear, this doesn’t mean the parties have to agree on all the facts. What it means is that where the only disputes are not genuine (e.g., there isn’t enough evidence to support an argument) or not material (e.g., the resolution of the dispute would not change the outcome) summary judgment should be granted.
Unfortunately, the podcasting defendants in Texas weren’t even given this opportunity. You see, in the Eastern District of Texas, judges require parties to seek permission to file a motion for summary judgment. That is, unless and until the judge lets you file your motion (even if it is clear as day that you’re going to win), you’re going to trial. The defendants in Texas sought that permission, but in a one-sentence order, their request was denied. (Note: The judge is allowing the defendants to file summary judgment on other issues, namely non-infringement and license).
This is important because, under Federal Rule of Civil Procedure 56, defendants have a right to file a summary judgment motion and to have that motion decided. But in the Eastern District of Texas, the judge’s “rule” effectively denies them these rights, which we think is contrary to the law. Furthermore, this permission requirement likely masks just how low the true grant rate of summary judgment is. A recent study found that judges in the Eastern District of Texas granted only 18% of motions for summary judgment of invalidity. (In contrast, the grant rate nationwide is 31%.) Considering that the study did not include instances where the defendant wasn’t allowed to file summary judgment in the first place, we wouldn’t be surprised if the true grant rate were much lower, and thus even further out-of-whack with the national average.
So why don’t parties challenge the judge’s rule? We don’t know for sure, but we have a good guess. And it has to do with the fact that a single judge in the Eastern District had over 900 patent cases assigned to him in 2013.
Patentees and defendants (and of course, their lawyers) are often "repeat players," meaning they will be in front of the same judge on many different occasions in different cases. It’s easy to see how telling a judge his rules are invalid may not be the best thing to do when you’re usually trying to get him to agree with you. Given the volume of high-stakes litigation there, no one wants to be unpopular in the Eastern District of Texas. (Indeed, of all the ice rinks in all the towns in all the world, why would patent heavyweight Samsung sponsor a rink directly in front of the courthouse in Marshall?) Another reason that this type of rule may not get challenged is that it’s just not worth it. Even if you get to file your summary judgment motion, that doesn’t mean that the judge will actually rule in a timely fashion (thus saving the expense of preparing for an unnecessary trial) or that you’ll win. By the time you get to the point of appeal, you have many more important issues that you want the appeals court to consider. In the end, the parties are just stuck with the judge’s rules, and cases that should be decided quickly and early are left to languish.
And for patent trolls, this is a good thing. A plaintiff whose weak case isn’t quickly and cheaply rejected gains settlement leverage and keeps its patent alive longer. A defendant, in contrast, faced with the possibility of significant trial costs, is more likely to succumb to settlement pressure in order to make the case go away at the least cost. Thus patent trolls, who are often asserting extremely broad and likely-invalid patents, have an incentive to file in the Eastern District of Texas, knowing that there’s another hurdle an accused infringer has to overcome in order to win the case.
To be clear, local rules like those in the Eastern District violate the rights of both plaintiffs and defendants. By either refusing to rule on summary judgment or delaying a ruling right until the eve of trial, both sides incur significant costs. But it is easy to see how this would have a larger impact on those accused of infringing patents, especially in cases where the damages are less than the cost to go to trial.
We sympathize with judges who are trying to manage busy dockets. Understandably, the court does not want to be faced with frivolous motions, or with five motions from each side. But the court has other methods of dealing with these issues (for example, limiting page length or allowing only one brief covering all issues). What the court is not entitled to do, however, is prevent the parties from filing at all.
With respect to the podcasting patent, we’ve linked to the parties’ papers on this issue here (defendants’ letter requesting permission to file a motion), here (Personal Audio’s response), and here (defendants’ reply letter). You can make up your own mind, but, in our view, Personal Audio made no showing of any genuine or material dispute. The Federal Rules, properly applied, do not allow a party to survive summary judgment with such weak and unsupported arguments.
The defendants in the podcasting case may still win a motion for summary judgment of non-infringement, but unfortunately that could leave Personal Audio free to sue others. But because of the judge’s order, if the current defendants in Texas want to invalidate the podcasting patent, they’re going to have to go to trial. It is unfair and irregular procedures like these that make the Eastern District of Texas such a popular destination for patent trolls. As part of any true patent reform, this kind of forum-shopping incentive needs to end.

Files: ecf_122_-_letter_brief_re_sj_of_invalidity.pdf, ecf_185_-_order_re_sj_filing.pdf, ecf_148_-_pa_response_to_letter_brief_re_sj_of_invalidity.pdf, ecf_165_-_reply_letter_brief_re_sj_of_invalidity.pdf

Related Issues: Patents, Patent Trolls, Innovation
Related Cases: EFF v. Personal Audio LLC
Share this: || Join EFF
Wikipedia readers and editors can now enjoy a higher level of long-term privacy, thanks to the Wikimedia Foundation's rollout last week of forward secrecy on its encrypted connections. Forward secrecy is an important Web privacy protection; we've been tracking its implementation across many popular sites with our Encrypt the Web Report. And though it may sound like an obscure technical switch, the impact is dramatic: forward secrecy ensures that every new connection uses unique, ephemeral key material, so traffic intercepted today can't be decrypted later even if the server's long-term private key is eventually compromised.
That kind of compromise can happen at the hands of law enforcement who demand a copy of a server's private key, or who compromise servers to get a copy without asking. It could also be exposed by a bug in the encryption software, as we saw earlier this year in the case of the widely discussed Heartbleed bug. Forward secrecy provides stronger protection against all of these possibilities, limiting exposure to the traffic collected after the key compromise and before a new key is in place.
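The mechanics can be sketched with a toy Diffie-Hellman exchange. This is purely illustrative: the parameters are far too simple for real use, and real TLS deployments (including Wikimedia's) use standardized groups or elliptic curves (ECDHE). The point is that each connection generates fresh, throwaway keys, so a later key compromise reveals nothing about past sessions:

```python
import secrets

# Toy finite-field Diffie-Hellman. Illustrative parameters only --
# real TLS uses standardized groups or elliptic curves (ECDHE).
P = 2**127 - 1   # a Mersenne prime, large enough to make the point
G = 3

def ephemeral_keypair():
    """Generate a fresh (private, public) pair for one connection."""
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

def shared_secret(my_priv, their_pub):
    """Both sides derive the same value without ever transmitting it."""
    return pow(their_pub, my_priv, P)

# Connection 1: client and server each create throwaway keys.
c_priv, c_pub = ephemeral_keypair()
s_priv, s_pub = ephemeral_keypair()
secret1 = shared_secret(c_priv, s_pub)
assert secret1 == shared_secret(s_priv, c_pub)   # both sides agree

# Connection 2 starts from scratch, so its secret is independent.
c2_priv, c2_pub = ephemeral_keypair()
s2_priv, s2_pub = ephemeral_keypair()
secret2 = shared_secret(c2_priv, s2_pub)

# Once the private values are discarded, an eavesdropper who recorded
# the public handshake cannot reconstruct secret1 or secret2 later,
# even by obtaining the server's long-term key.
```

Without forward secrecy, by contrast, a single static key decrypts every recorded session at once, which is exactly what makes bulk interception attractive.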
As always, the privacy offered by this update is not absolute. One major caveat is that it only applies to connections that are encrypted with HTTPS in the first place, and currently that's not the case for many users: Wikipedia enables encryption by default only for users who are actually logged in to the site, which likely excludes most non-editors. To take advantage of the enhanced privacy protection, users can log in—or even better, install our HTTPS Everywhere browser extension for Firefox, Firefox for Android, Chrome, or Opera, which automatically rewrites browser requests to Wikipedia to HTTPS whether or not you are logged in.
Another limitation is that encrypted pages can still be subjected to traffic analysis. A sufficiently large and active adversary could keep a record of the file size of each article and request, for example, and could make inferences about intercepted traffic based on that information. In the future, that sort of attack could be mitigated by “padding” files in transit—adding some filler data so they cannot be identified by their size. But even in the short term, there are definite advantages to raising the sophistication and expense needed to mount an attack.
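The padding idea is simple to sketch. Assume a hypothetical server that rounds every response up to the next fixed-size bucket before transmission, so that pages of similar length become indistinguishable on the wire (the function name and bucket size here are our own illustration, not any deployed scheme):

```python
def pad_to_bucket(data: bytes, bucket: int = 4096) -> bytes:
    """Pad data with filler bytes up to the next multiple of `bucket`,
    so an eavesdropper sees only a coarse size class, not exact length."""
    target = ((len(data) + bucket - 1) // bucket) * bucket  # round up
    target = max(target, bucket)  # even empty responses fill one bucket
    return data + b"\x00" * (target - len(data))

# Two different articles of similar length now look identical in size:
a = pad_to_bucket(b"x" * 1000)
b = pad_to_bucket(b"y" * 2500)
assert len(a) == len(b) == 4096
```

The trade-off is bandwidth: larger buckets leak less but waste more filler, which is one reason padding schemes are still an area of active design rather than a default.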
The case for long-term privacy is easy to understand where a site contains private communications, but it's just as important for sites like Wikipedia or news sites that mostly present public information. That's because HTTPS protects not just the contents of each page, but also data about which specific pages a user visits. Without HTTPS, your browsing history through Wikipedia could be exposed to an eavesdropper that is on the same network, has access to your Internet service provider, or is widely scooping up traffic. With HTTPS and forward secrecy, that history is much more difficult to access.
Giving Wikipedia readers an enhanced level of privacy is undoubtedly a good thing for fostering intellectual freedom, and allows users to explore issues they might otherwise shy away from. It's heartening to see the Wikimedia Foundation take this next step on its encryption roadmap, then, especially in light of the mounting disclosures about government surveillance.
With this update, Wikipedia joins the growing ranks of high-profile sites enabling this important Web privacy feature. Google was the first major site to do so, all the way back in 2011. As we've tracked forward secrecy on our Encrypt the Web Report, we've seen adoption by many more major sites, such as Dropbox, Facebook, Twitter, WordPress, and recently Microsoft's OneDrive and Outlook services.

Related Issues: Privacy, Encrypting the Web, Security
Today, EFF and its partners in the global Our Fair Deal coalition join together with an even more diverse international network of creators, innovators, start-ups, educators, libraries, archives and users to release two new open letters to negotiators of the Trans-Pacific Partnership (TPP).
The TPP, although characterized as a free trade agreement, is actually far broader in its intended scope. Amongst many changes to which it could require the twelve negotiating countries to agree are a slate of increased rights and privileges for copyright rights holders.
With no official means of participating in the negotiations, the global community of users and innovators who will be affected by these proposed changes have been limited to expressing their concerns through open letters to their political representatives and to the officials negotiating the agreement.
Each of the two open letters released today focuses on a separate element of the heightened copyright regime that the TPP threatens to introduce, and each is endorsed by a separate group of signatories representing those most deeply affected by the proposed changes.

Intermediary Copyright Enforcement
As the document below describes, countries around the Pacific rim are being pressured to agree to proposed text for the TPP that would require them to adopt a facsimile of the DMCA to regulate the take-down of material hosted online, upon the mere allegation of copyright infringement by a claimed rights-holder. Indeed, industry lobbyists are pushing for an even stricter regime, dubbed "notice and staydown", that would make it harder than ever before for users and innovators to safely publish creative, transformational content online.
The rash 20-year extension of the term of copyright protection in the United States in 1998 confounded economists and frustrated librarians, archivists, and consumers, who were consequently starved of new public domain works until 2019. Now the USA intends to compound its error by extending it to all of the other TPP negotiating countries—or at least, those that haven't already yielded to bilateral pressure to extend their copyright terms. As the letter below explains, this would be a senseless assault on the public domain and on the libraries, authors, educators, users, and others who depend upon it.
The letter on copyright term extension has been endorsed by 35 organizations so far, including Creative Commons, the Wikimedia Foundation, Public Knowledge and the International Federation of Library Associations and Institutions (IFLA).

Express your support
Although the letters have been presented to TPP negotiators today, they will remain open for further signatories to express their support, and may be presented again in future rounds. Interested organizations can express their interest in endorsing the open letters on copyright term extension and intermediary liability using the links given here.
For individuals who are not affiliated with a company or organization, we encourage taking action through the Our Fair Deal coalition's petition (can we take it to 20,000 signatories by this weekend?) and, for American citizens, through EFF's action to oppose fast-track authority.

Related Issues: Fair Use and Intellectual Property: Defending the Balance, International, Trans-Pacific Partnership Agreement
The Intercept published an article last night describing secret foreign intelligence surveillance targeting American citizens. One of those citizens, Nihad Awad, is the executive director and founder of the Council on American-Islamic Relations (CAIR), the nation’s leading Muslim advocacy and civil rights organization and a long-time client of EFF.
In response, EFF Staff Attorney Mark Rumold stated:
EFF unambiguously condemns government surveillance of people based on the exercise of their First Amendment rights. The government’s surveillance of prominent Muslim activists based on constitutionally protected activity fails the test of a democratic society that values freedom of expression, religious freedom, and adherence to the rule of law.
Today’s disclosures – that the government has actively targeted leaders within the American Muslim community – are sadly reminiscent of government surveillance of civil rights activists and anti-war protesters in the 1960s and 70s. Surveillance based on First Amendment-protected activity was a stain on our nation then and continues to be today. These disclosures yet again demonstrate the need for ongoing public attention to the government’s activities to ensure that its surveillance stays within the bounds of law and the Constitution. And they once again demonstrate the need for immediate and comprehensive surveillance law reform.
We look forward to continuing to represent CAIR in fighting for its rights, as well as the rights of all citizens, to be free from unconstitutional government surveillance.
EFF represents CAIR Foundation and two of its regional affiliates, CAIR-California and CAIR-Ohio, in a case challenging the NSA’s mass collection of Americans’ call records. More information about that case is available at: First Unitarian Church of Los Angeles v. NSA.
Related Issues: NSA Spying
Related Cases: First Unitarian Church of Los Angeles v. NSA
Google’s handling of a recent decision by the European Court of Justice (ECJ) that allows for Europeans to request that public information about them be deleted from search engine listings is causing frustration amongst privacy advocates. Google—which openly opposed interpreting Europe’s data protection laws as including the removal of publicly available information—is being accused by some of intentionally spinning the ECJ’s ruling to appear ‘unworkable’, while others—such as journalist Robert Peston—have expressed dissatisfaction with the ECJ ruling itself.
The issue with the ECJ judgement isn't European privacy law, or Google's response to it. The real problem is the impossibility of an accountable, transparent, and effective censorship regime in the digital age, and the inevitable collateral damage borne of any attempt to create one, even with the best intentions. The ECJ could have formulated a decision that placed Google under the jurisdiction of the EU’s data protection law while protecting the free speech rights of publishers. Instead, the court has created a vague and unappealable model, where Internet intermediaries must censor their own references to publicly available information in the name of privacy, with little guidance or obligation to balance the needs of free expression. That model won’t succeed in keeping such information private, and it will make matters worse in the global battle against state censorship.
Google may indeed be seeking to play the victim in how it portrays itself to the media in this battle, but Google can look after itself. The real victims in this battle lie further afield, and should not be ignored.
The first victim of Google’s implementation of the ECJ decision is transparency under censorship. Back in 2002—in the wake of bad publicity following the company’s removal of content critical of the Church of Scientology—Google established a policy of informing users when content was missing from search engine results. This, at least, gave users some visibility when data was hidden from them. Since then, whenever content has been removed from a Google search, the company has posted a message at the bottom of each search page notifying its users, and where possible has passed the original legal order on to Chilling Effects. Even during its ill-considered collaboration with Chinese censors, Google maintained this policy of disclosure; indeed, one of the justifications the company gave for working in China was that Chinese netizens would know when their searches were censored.

Right to be Non-Existent: Google warns of potential removals, even when the person you've searched for doesn't exist.
Google's implementation of the ECJ decision has profoundly diluted that transparency. While the company will continue to place warnings at the bottom of searches, those warnings have no connection to deleted content. Rather, Google is now placing a generic warning at the bottom of any European search that it believes is a search for an individual name, whether or not content related to that name has been removed.
Google’s user notification warnings have now been rendered useless for providing any clear indication of censored content. (As an aside, this means that Google is also now posting warnings based on what its algorithms think "real names" look like—even though these determinations are notoriously inaccurate, as we pointed out during Google Plus's Real Names fiasco.)
The second victim of Google’s ECJ implementation is fairness. After Google informed major news media like the Guardian UK and BBC that they were being censored, those sites noted—correctly—that legitimate journalism was being silenced. Google subsequently restored some of the news stories it had been told to remove. Will Google review its decisions when smaller media, such as bloggers, complain? Or does the power to undo censorship remain only with the traditional press and their bully pulpit? Even the flawed DMCA takedown procedure includes a legally defined path for appealing the removal of content. For now it seems that restorations will rely not on a legal right for publishers to appeal, but rather on the off chance that intermediaries like Google will assume the risk of legal penalties from data protection authorities, and restore deleted links.
Which brings us to the third victim: Europe's privacy law itself. Europe's privacy regime has long been a model for effective and reasonable governance of privacy. Its recent updated data protection regulation provides an opportunity for the European Union to define how the right to privacy can be defended by governments in the modern digital era.
Tying the data protection regulation to censorship risks discrediting its aims and impugning its practicality.
“Minor” censorship is still censorship
Before Google Spain v. González was decided by the ECJ, the court’s advisor, Advocate General Jääskinen, spelled out a reasonable model for deciding the case: one that would have made Google and other non-European companies subject to EU privacy law, but would not have required the deletion or hiding of public knowledge. Instead, the court gave credence to the idea that public censorship has a place in “fixing” privacy violations.
Some of the arguments in favor of the ECJ censorship model are reminiscent of other attempts to aggressively block and filter data, including the ongoing regulatory battles against online copyright infringement. While it can be argued that the latest removals are “hugely less significant than the restrictions that Google voluntarily imposes for copyright and other content,” they are no less insidious. Every step down the road of censorship is damaging. And when each step proves—as it did with copyright—to be ineffective in preventing the spread of information, the pressure grows to take the next, more repressive, step.
Currently the EU’s requirement on Google to censor certain entries can easily be bypassed by searching on Bing, say, or by using the US Google search, or by adding search terms beyond just a name. That is not surprising. Online censorship can almost always be circumvented. Turkey’s ban on Twitter was equally useless, but extremely worrying nonetheless. Even Jordan’s draconian censorship of news sites that fail to register for licenses has been bypassed using Facebook...but should be condemned on principle regardless.
And a fundamentally unenforceable law is guaranteed to be the target of calls for increasingly draconian enforcement, as the legal system attempts to sharpen it into effectiveness. If Bing is touted as an alternative to Google, then the pressure will grow on Bing to perform the same service (Microsoft says it is already preparing a similar deletion service). Europe’s data protection administrators may grow unhappy that simply searching on google.com instead of google.fr or google.co.uk will reveal what was meant to be forgotten, and—as Canada's courts have already demanded—order search engines to delete data globally.
At the very least, European regulators need to stop thinking that handing over the reins of content regulation to the Googles and Facebooks of this world will lead anywhere good. The intricacies of privacy law need to be supervised by regulators, not paralegals in tech corporations. Restrictions on free expression need to be considered, in public, by the courts, on a case-by-case basis, and with both publishers and plaintiffs represented, not via an online form and the occasional outrage of a few major news sources. And online privacy needs better protection than censorship, which doesn't work, and causes so much more damage than it prevents.

Related Issues: Free Speech, Privacy, Search Engines
Recent debate about network neutrality has largely focused on how to make sure broadband providers don’t manipulate their customers’ Internet connections (or as John Oliver put it, how to prevent “cable company f*ckery”). But in today’s world of smartphones and tablets people are spending less of their time on the Internet typing at a computer and more of it swiping on a smartphone. This is why it’s critically important for net neutrality principles to apply to mobile broadband too.
The good news is that there is greater competition in the mobile broadband space than in the wired broadband market. Unsatisfied customers should be better able to vote with their wallets and pick a new carrier (absent unduly burdensome, anti-competitive switching costs). That could change, however, and that means we need to be paying attention. To help that along, here’s a quick explainer.

Smartphones and tablets are computers
A smartphone (or a tablet) is just another type of computer—it just happens to be able to make phone calls and take pictures too. And a smartphone’s Internet connection is its most important feature: after all, how “smart” is a phone that can’t look up directions, share photos or videos, or browse the web (except through Wi-Fi)?
At the same time, people are spending more and more time on the Internet via mobile devices. And for many, mobile devices are the primary source of Internet access. Over half of American adults use smartphones. What’s more, African American and Latino communities are more likely to access the Internet on a mobile device than through a home wire-line connection. The Internet should be no less open on these platforms.
The ubiquity and necessity of mobile Internet means that it’s vital that we ensure that mobile providers don’t abuse their control. And that means we need net neutrality for mobile broadband too.

What mobile net neutrality looks like today
Unfortunately, greater competition in this space isn’t preventing non-neutral behavior. Equally important, mobile Internet service lacks the transparency requirements that would let users exercise their right to vote with their wallets.
As a result, mobile broadband providers are discriminating against certain types of applications and trying to extract more money from consumers depending on how they use their data. For example:
- AT&T blocked Apple’s FaceTime service in order to force customers to pay higher prices;
- Both AT&T and Sprint forbid users from maintaining “network connections to the Internet such as through a web camera” unless there’s an active user on the other end;
- Both AT&T and T-Mobile forbid users from using peer-to-peer file-sharing applications;
- In 2011, Verizon blocked tethering, the practice of using your phone’s wireless data for other devices, in order to get customers to pay additional fees, until the FCC stopped them. (T-Mobile, Sprint, and AT&T still make users pay extra for tethering.)
As of now mobile providers are delivering a second-class Internet, where they get to decide what can and cannot be accessed via your smartphone.

What mobile net neutrality needs to look like
Instead, mobile device owners should enjoy the same levels of control over networked applications on their mobile devices as they do on their laptops and desktops. Service providers shouldn't be blocking sites, shaping traffic, or discriminating based on application. In particular, they shouldn't be restricting tethering.

Mobile transparency
Mobile broadband providers should also adhere to the same sort of enhanced transparency that’s needed from traditional wire-line broadband providers. That means mobile providers need to regularly disclose what sorts of congestion management techniques they use as well as statistics on download and upload speed, latency, and packet loss, indexed by cell tower location and time of day.
Mobile providers should meet this requirement in two different ways.
For one, we know that mobile companies have gone to extraordinary and intrusive lengths to collect data about network performance and user activity from their customers. While EFF in no way endorses intrusive data collection, if providers insist on continuing the practice, and that data can be released in a way that still protects users' privacy (e.g. via aggregation and anonymization), then service providers should be required to share that data. Such aggregated and anonymized data could help the FCC (and the public) see how mobile broadband network performance varies over time and rough geographical area (great coverage here, not enough coverage there), or, if the data is broken out by endpoints, which services are being throttled due to peering, hosting, and content delivery network arrangements. If providers complain that they have been unfairly accused of non-neutral behavior, let them prove it.
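As a rough sketch of the kind of privacy-preserving aggregation we have in mind (the record fields and thresholds here are hypothetical illustrations, not any carrier's actual schema), per-user measurements can be rolled up by tower and hour before release, with subscriber identifiers discarded and sparsely populated cells suppressed:

```python
from collections import defaultdict
from statistics import median

# Hypothetical raw records: (subscriber_id, tower_id, hour, latency_ms)
raw = [
    ("user1", "tower-A", 9, 45.0),
    ("user2", "tower-A", 9, 55.0),
    ("user3", "tower-A", 9, 50.0),
    ("user1", "tower-B", 9, 120.0),
]

def aggregate(records, min_users=3):
    """Group by (tower, hour), drop subscriber IDs, and suppress any
    cell with too few distinct users to be safely anonymous."""
    cells = defaultdict(lambda: (set(), []))
    for user, tower, hour, latency in records:
        users, samples = cells[(tower, hour)]
        users.add(user)
        samples.append(latency)
    return {
        key: {"users": len(users), "median_latency_ms": median(samples)}
        for key, (users, samples) in cells.items()
        if len(users) >= min_users  # k-anonymity-style suppression
    }

report = aggregate(raw)
# tower-B has only one user, so it is withheld; tower-A is published.
assert ("tower-B", 9) not in report
assert report[("tower-A", 9)]["median_latency_ms"] == 50.0
```

Real deployments would need stronger guarantees than this simple threshold (re-identification from sparse location data is notoriously easy), but even this sketch shows that useful coverage statistics don't require publishing per-subscriber records.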
Additionally, providers should give consumers access to the phone's "baseband chip,” the chip in the device that actually communicates with the cellular network, so that we can take measurements of connection quality ourselves. Access to the baseband chip is vital because without baseband-layer measurement, consumers are stuck measuring performance from the OS layer, which only approximates the true picture. This is like the difference between measuring traditional broadband speed from your laptop versus at the cable or DSL modem: if your laptop is running slowly because of other programs, the measurements could be skewed.

Zero-rating: when some websites don’t count against your data use
Zero-rating refers to when providers don’t count data to and from certain websites or services toward users’ monthly data limits. T-Mobile’s recent announcement of its Music Freedom plan is a good example of zero-rating: users can stream all the music they want from certain services without worrying about their data limit.
Technically, zero-rating is a type of data discrimination: it allows a mobile broadband provider to influence what Internet services people are more likely to use. In this way zero-rating allows mobile broadband providers to pick winners instead of leaving that determination to the market, thereby stifling competition and innovation.
To be clear, zero-rating has sometimes been used for laudable purposes. For example, the Wikipedia Zero program allows users to access Wikipedia for free. But while it is tempting to defend preferential treatment for an invaluable service like Wikipedia, zero-rating still makes it harder for other innovative services, for-profit or nonprofit, to get in the game.

Where do we go from here?
Right now the focus of the net neutrality debate is traditional broadband. But we need to prevent “mobile broadband f*ckery” too. Accessing the Internet is accessing the Internet, no matter which kind of computer you use. And just as there’s no silver bullet to ensure a neutral net for wired broadband service, it’ll take an ensemble of solutions to keep our mobile connectivity non-discriminatory as well. That means more competition, community-based solutions (like the Open Wireless Movement), innovation, transparency, and prohibitions against non-neutral practices, like the blocking of tethering by mobile providers.
In the meantime, the FCC wants to hear from people across the country about how the proposed network neutrality rules will impact us all. So speak up and tell the FCC how and why you use the Internet over your mobile device. Let’s be sure they hear us loud and clear: network neutrality must extend to every way we access the Internet, regardless of whether we’re at a desk or on a smartphone.

Related Issues: Net Neutrality
EFF is in Ottawa this week for the Trans-Pacific Partnership (TPP) negotiations, to influence the course of discussions over regressive digital policy provisions in this trade agreement that could lead to an increasingly restrictive Internet. But this round is different from the others—the secrecy around the talks is wholly unprecedented. The Canadian trade ministry, which is hosting this round of talks, has likely heightened the confidentiality in response to the growing mass public opposition to this undemocratic, corporate-driven trade deal.
The trade offices of the 12 countries negotiating this deal no longer pre-announce details about the time and location of these negotiations. They don't bother releasing official statements about the negotiations because they no longer call these “negotiation rounds” but “officials' meetings.” But the seeming informality of these talks is misleading—negotiators are going to these so-called meetings to secretly pull together a deal. As far as we know, they're still discussing whether to extend the international norm for copyright terms even further. They are negotiating provisions that could lead to users being censored and filtered over copyright, with no judicial oversight or consideration for fair use. And trade delegates are deliberating how severe a crime it should be for users to break the DRM on their devices and content, even if users don't know it's illegal and the content they're unlocking isn't even restricted by copyright in the first place.
So for this negotiation, we had to rely on rumors and press reports to know when and where it was even happening. At first, there were confirmed reports that the next TPP meeting would take place at a certain luxury hotel in downtown Vancouver. So civil society began to mobilize, planning events in the area to engage users and members of the public about the dangers of TPP. Then seemingly out of the blue, the entire negotiating round was moved across the country to Ottawa. There's no way to confirm whether this was a deliberate misdirection, but either way it felt very fishy.
Given this level of secrecy, it goes without saying that there is no room for members of civil society or the public to engage directly with TPP negotiators. Toward the beginning of the TPP talks, we were given 15 minutes for stakeholder presentations, in addition to a stakeholder event where we could meet and pass information to negotiators who walked by. Then it was cut down to ten minutes (after we made some noise about it being cut to a mere eight). In the following rounds, the stakeholder event was removed from the official schedules entirely. Those events never provided sufficient time to convey to negotiators the major threats we saw in this agreement, so they already seemed a superficial nod to public participation. But now, negotiators don't even pretend to give us their ear.
Of course, corporate lobbyists continue to have easy access to the text. Advisors to major content industries can read and comment on the text of the agreement on their private computers. But those of us who represent the public interest are left to chase negotiators down the halls of hotels to make our concerns heard.
As we watch the TPP crawl toward being finalized and signed, eventually tainting our laws with its one-sided corporate agenda, we need to remember this fact: laws made in secret, with no public oversight or input, are illegitimate. That is not how law is made in democracies. If we're to defend the fundamental democratic rule that law is based on transparent, popular consensus, we need to fight back against an agreement negotiated through such a secretive, corporate-captured process.
Council of Canadians: Secretive critical talks on the Trans-Pacific Partnership happening in Ottawa

Related Issues: Fair Use and Intellectual Property: Defending the Balance, International, Trans-Pacific Partnership Agreement
Philip Johnson is Chief Intellectual Property Counsel of Johnson & Johnson, one of the largest pharmaceutical companies in the world. He is also a representative member of the Coalition for 21st Century Patent Reform, the leading trade group opposing patent reform this past year.
And now he's rumored to be next in line to be the director of the United States Patent and Trademark Office.

What?
That's exactly what we're asking ourselves. Why would an administration that has ostensibly been so pro-reform over the last year nominate such an entrenched insider? (Though perhaps this isn't so shocking a question, as those of us working on the net neutrality fight would remind ourselves.) Although it would seem that Johnson is eminently qualified as a patent expert, many of his views lie contrary to recent reform efforts—including reforms proposed by the White House itself.
The Coalition for 21st Century Patent Reform, or 21C, represents companies in the pharmaceutical and biotech industries, among others. These industries, flanked by trial lawyers and universities, helped water down and ultimately kill patent reform this year. In fact, Johnson's employer, a member of 21C, was recently called out as leading the charge against patent reform.
For example, 21C worked to remove the expansion of covered business method (CBM) patent review from proposed legislation [PDF], which is a big part of the reason why such language appeared in neither the final version of the House's Innovation Act nor the latest Senate proposals. Johnson's position directly contradicts the White House's early 2013 proposal, which recommended that Congress expand the CBM provision to cover computer-enabled patents.
The latest versions of the Senate bill—known as the "Schumer-Cornyn Compromise"—contained solid language about stays of discovery and heightened pleading, both of which 21C had vigorously opposed [PDF] and worked to remove.

What We Need Instead
What we need is someone who understands the problems with patent law, especially when it comes to software patents. Some argue that because David Kappos, the previous director of the Patent Office, came from the tech industry, the next director has to come from pharma or biotech. That push does a great job of highlighting the fact that a single patent system shouldn't apply to technologies as different as pharmaceuticals and software. In any event, the nominee to head the Patent Office shouldn't be the face of opposition to patent reform that was championed by the White House, passed by a majority of the House, and supported by a considerable proportion of Senators.
We need someone who would champion needed reform—not just tackling trolls, but focusing on the most pressing issue in the patent world: quality. When Kappos became director of the Patent Office in 2009, he inherited a huge backlog of unexamined (or rejected and refiled) patent applications, and he set out to fix this problem. His solution, however, was to lower standards, allowing a flood of low-quality patents to issue.
The patent logjam is a symptom of a system that allows poor-quality patents. When the issuance rate for applications is so high, of course more people are going to apply for patents. And more of these applications are going to be very stupid. And more of these patents will haunt our innovation space for just shy of two decades to come.
Let's not appoint someone who prefers the status quo when it comes to the most serious patent issues—especially when his views run contrary to the Obama Administration's stated positions. Instead, we need a director who understands that bad patents are the root of the many problems we see today, and who will see the need to reject quantity in favor of quality.
In early March, Yangon—the former capital of Myanmar (Burma)—played host to a conference held by the East-West Center, called "Challenges of a Free Press." The event (which I attended) featured speakers from around the world, but was more notable for its local speakers, including Aung San Suu Kyi and Nay Phone Latt, a blogger who spent four years as a political prisoner before being released under a widespread presidential amnesty in 2012. In a country where the Internet was heavily censored for many years, online freedom was discussed with surprising openness, although concerns about hate speech on platforms like Facebook were raised repeatedly.
This past week, as violence escalated in Mandalay, authorities blocked Facebook to coincide with a curfew imposed on the city. While leaders in the country, including Suu Kyi, have spoken of the responsibility of journalists in reporting the truth (which some have interpreted as an early call for online censorship), journalist Aung Zaw, writing for the Burmese publication The Irrawaddy, says "...don’t expect the government to take action against the hatemongers—it isn’t going to happen." Instead of dealing with the calls for violence, it seems, they've taken the easy way out, by imposing censorship.
In his piece, Aung Zaw cautions against Western governments putting too positive a spin on Myanmar's "reforms" while the country's "dream of democracy looks increasingly like it is turning into a carefully orchestrated nightmare." Indeed, as the violence escalates, remaining vigilant about increased censorship is imperative. The recent opening of the Internet may allow hate speech to spread more quickly, but it has also enabled innovation, access to information, and growth of a small technology sector. So while the intent behind censoring Facebook is to stem the tide of hateful speech, Myanmar's history suggests that hate speech might merely be the first thing to go.
Learning about Linux is not a crime—but don’t tell the NSA that. A story published in German on Tagesschau, and followed up by an article in English on DasErste.de today, has revealed that the NSA is scrutinizing people who visit websites such as the Tor Project’s home page and even Linux Journal. This is disturbing in a number of ways, but the bottom line is this: the procedures outlined in the articles show the NSA is adding "fingerprints"—like a scarlet letter for the information age—to activities that go hand in hand with First Amendment protected activities and freedom of expression across the globe.
What we know
The articles, based on an in-depth investigation, reveal XKeyscore source code that demonstrates how the system works. XKeyscore is the tool the NSA uses to sift through the vast amounts of data it obtains; this source code would be used somewhere in the agency's process of collecting and analyzing that data to target certain activities. According to the Guardian, XKeyscore’s deep packet inspection software is run on collection sites all around the world, ingesting one or two billion records a day.
The code contains definitions that are used to determine whether to place a "fingerprint" on an online communication, to mark it for later. For example, the NSA marks online searches for information about certain tools for better communications security, or comsec, such as Tails.
As the code explained, "This fingerprint identifies users searching for the TAILs (The Amnesic Incognito Live System) software program, viewing documents relating to TAILs, or viewing websites that detail TAILs." Tails is a live operating system that you can start on almost any computer from a DVD, USB stick, or SD card. It allows a user to leave no trace on the computer they are using, which is especially useful for people communicating on computers that they don’t trust, such as the terminals in Internet cafes.
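In essence, the leaked rules amount to pattern matches over observed traffic: when a request matches a rule's pattern, the communication is tagged with that rule's label for later retrieval. As a rough sketch of that mechanic only (this is not the NSA's actual code; the rule labels and regular expressions below are hypothetical stand-ins), a fingerprinting pass of this kind might look like:

```python
import re

# Hypothetical stand-ins for the kinds of rules described in the leak:
# visits to, or searches about, privacy tools get tagged ("fingerprinted").
FINGERPRINT_RULES = {
    "comsec/tails": re.compile(r"tails|amnesic incognito live system", re.IGNORECASE),
    "anonymizer/tor": re.compile(r"torproject\.org", re.IGNORECASE),
}

def fingerprint(request_url: str) -> list[str]:
    """Return the labels of every rule the observed URL matches."""
    return [label for label, pattern in FINGERPRINT_RULES.items()
            if pattern.search(request_url)]

# A visit to the Tor Project's website would be tagged:
print(fingerprint("https://www.torproject.org/download/"))  # prints ['anonymizer/tor']
```

Note that in this model the user need not do anything wrong, or even use the tool: merely requesting a page whose URL matches the pattern is enough to be marked.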
The NSA also defines Tor directory servers (by IP number) and looks for connections to the Tor Project website. This is hardly surprising, considering the documentation of the NSA’s distaste for Tor. It is, however, deeply disappointing. Using privacy and anonymity software, like Tor and Tails, is essential to freedom of expression.
Most shocking is the code that fingerprints users who visit Linux Journal, the website of a monthly magazine for enthusiasts of the open-source operating system. The comments in the NSA’s code suggest that the NSA thinks Linux Journal is an "extremist forum," where people advocate for Tails. The only religious wars in the Linux Journal are between the devoted users of vi and emacs.
Learning about security is not suspicious
The idea that it is suspicious to install, or even simply want to learn more about, tools that might help to protect your privacy and security underlies these definitions—and it’s a problem. Everyone needs privacy and security, online and off. It isn’t suspicious to buy curtains for your home or lock your front door. So merely reading about curtains certainly shouldn’t qualify you for extra scrutiny.
Even the U.S. Foreign Intelligence Surveillance Court recognizes this: FISA prohibits targeting people or conducting investigations based solely on activities protected by the First Amendment. Regardless of whether the NSA is relying on FISA to authorize this activity or conducting the spying overseas, it is deeply problematic. The U.S. Constitution still protects people outside U.S. borders, and, as a U.S. appeals court recently recognized, even non-citizens are not bereft of its protections.
Moreover, privacy is a human right, which the U.S. has recognized by signing the International Covenant on Civil and Political Rights. The fingerprinting program revealed today is fundamentally inconsistent with this right.
Tor is used to circumvent Internet censorship
The code focuses heavily on the Tor Project and its anonymity software. Tor is an essential tool for circumventing the Internet censorship that governments such as those of China and Iran use extensively to control the flow of information and maintain their hold on power. In fact, Tor was developed with the help of the U.S. Navy, and currently gets funding from several sources within the U.S. government, including the State Department. Secretary of State Hillary Clinton made support for anti-censorship tools a key element of her Internet policy at the State Department, declaring: "The freedom to connect is like the freedom of assembly in cyberspace."
You can still use Tor and Tails
One question that is sure to come up is whether this means people desiring anonymity should stop using Tor or Tails. Here’s the bottom line: If you’re using Tor or Tails, there is a possibility that you will be subject to greater NSA scrutiny. But we believe that the benefits outweigh the burdens.
In fact, the more people use Tor, the safer you are. That’s why we’re continuing to run the Tor Challenge. The ubiquitous use of privacy and security tools is our best hope for protecting the people who really need those tools—people for whom the consequences of being caught speaking out against their government can be imprisonment or death. The more ordinary people use Tor and Tails, the harder it is for the NSA to make the case that reading about or using these tools is de facto suspicious.
Coordinated enforcement of intellectual property (IP) rights—copyright, patents, and trademarks—has been an elusive goal for Europe. Back in 2005, the European Commission struggled to introduce a directive known as IPRED2 that would criminalize commercial-scale IP infringements, but abandoned the attempt in 2010 due to jurisdictional problems. IP maximalists took another run at it through ACTA, the Anti-Counterfeiting Trade Agreement, but that misguided treaty was roundly defeated in 2012 when the European Parliament rejected it, 478 votes to 39.
Undeterred, the European Commission is trying once again. This time, it is trying to avoid a similarly humiliating defeat in Parliament by focusing on non-legislative strategies. But its effort to sidestep Parliament also means less political or judicial oversight. So it behooves us to take a close look at what is being proposed.

Intermediary enforcement
The most significant item of the Commission's 10 point action plan is the proposal to conclude "Memoranda of Understanding to address the profits of commercial scale IP infringements in the online environment, following Stakeholder Dialogues involving advertising service providers, payment services and shippers." For example, such Memoranda of Understanding might commit payment intermediaries or advertisers to undertake that they will not accept payments or run advertisements for a site accused of hosting infringing material, thereby depriving those sites of revenue.
This strategy is touted in an accompanying communication as "a rapid response mechanism to the IP infringement problem," since intermediaries can step in to halt alleged infringements much more quickly than judges, who have to go to the time and trouble of actually hearing evidence from both sides, and who are trained in copyright law.
The Commission describes this strategy for IP enforcement as a "follow the money" approach. Superficially, this perhaps sounds reasonable. But look closer at what "the money" really means. It does not—as you might expect—mean only targeting those who are making money from copyright infringement. Instead, it extends to anyone accused of infringing copyright on a "commercial scale."
This is where things get muddy, because there is no agreed definition within Europe of "commercial scale." Indeed, this lack of consensus was the motivation behind the failed IPRED2 directive. We do, however, have a good idea what the Commission would like "commercial scale" to include: any large-scale infringement "in which the professional organization of the activity, for example systematic cooperation with other persons, indicates a business dimension".
That potentially includes an incredibly broad swathe of non-commercial activity, including the hosting of fan art and fan fiction, fansubs and remixes—remembering that Europe, unlike the US, does not have a "fair use" copyright limitation. Indeed, the Commission has admitted that "it is difficult to determine in the abstract which acts of wilful trademark counterfeiting or copyright piracy are not 'on a commercial scale'", allowing only that "occasional acts of a purely private nature carried out by end-consumers normally would not constitute 'commercial scale' activities".
The danger that non-commercial, transformative activity will be swept up in the copyright enforcement frenzy is not merely theoretical. Last week it was revealed that 15 Korean fan subtitlers could face up to five years in jail in exchange for the time and effort they have devoted to popularizing their favorite soap operas. (While you might wonder what this has to do with the EU, it was partly in response to Korea's Free Trade Agreement with the EU that Korean copyright law was amended in 2011 to raise criminal penalties to this level.)
Admittedly, the European Commission's latest action plan is not about imprisoning people. But it would lock up many non-commercial websites. Deprived of the ability to receive money from donations or advertisements to cover hosting expenses, popular sites that host user-generated content may have no choice but to close.Another option
There is an easier way in which the European Commission could avoid sweeping noncommercial and publicly beneficial uses in the enforcement net, and it wouldn't require any extra enforcement resources at all. The solution is simply to make such content legal. Currently, the copyright limitations and exceptions allowed under EU law are itemized in Article 5 of its InfoSoc Directive, and although the list is quite detailed ("use in connection with the demonstration or repair of equipment", for instance), it does not contain the flexible copyright limitations that can help protect most mash-ups, remixes and fan works.
Modern copyright laws need to leave space for such new uses that don't unfairly detract from existing commercial markets for copyright works. In the United States and a growing number of other countries, this space is provided by flexible copyright limitations such as fair use.
Earlier this year, the European Commission wrapped up an online public consultation on the future of copyright in Europe, which garnered an amazing 11,117 responses. Many of those responses called for the modernization of European copyright law, including the introduction of new and updated copyright limitations and exceptions that would be better suited for the digital environment, such as an open-ended fair use exception or a more limited exception for user-generated content (UGC).
Although no response to that consultation has yet been officially released, we can get an inkling of how the Commission might view these proposals for reform from the recently leaked draft of a whitepaper that examines areas of EU copyright policy for possible review.
The whitepaper claims that there is "a lack of evidence that the current legal framework for copyright puts a break on or inhibits UGC" and recommends merely that the EU "clarify the application of relevant exceptions as they exist in EU law" and promote "licensing mechanisms…for those uses that clearly do not fall into these exceptions". (As to the latter, the Licenses for Europe consultation aimed at filling the digital deficits in copyright law through licensing was last year boycotted by civil society groups due to the artificially narrow scope of the exercise.)
Similar reticence toward copyright law reform was demonstrated by the Commission this week at WIPO, where its representative made a very clear statement that it was not willing to consider work leading to an international instrument on limitations and exceptions for libraries and archives, doubling down on a position it adopted at the previous meeting of the same WIPO committee.
This does not paint a positive picture of the future of copyright in Europe. A single-minded focus on enforcement, even when limited to supposedly commercial scale infringements, will do little to foster innovation and creativity, and may indeed achieve the opposite effect. Rather than simply pandering to the IP enforcement lobby, Europe needs to start thinking outside the box—something that its talented fan artists, writers and remixers have been doing for years, in the shadow of an outdated copyright regime.