Aggregated News

Librarians Call on W3C to Rethink its Support for DRM

eff.org - Wed, 19/07/2017 - 10:36

The International Federation of Library Associations and Institutions (IFLA) has called on the World Wide Web Consortium (W3C) to reconsider its decision to incorporate digital locks into official HTML standards. Last week, W3C announced its decision to publish Encrypted Media Extensions (EME)—a standard for applying locks to web video—in its HTML specifications.

IFLA urges W3C to consider the impact that EME will have on the work of libraries and archives:

While recognising both the potential for technological protection measures to hinder infringing uses, as well as the additional simplicity offered by this solution, IFLA is concerned that it will become easier to apply such measures to digital content without also making it easier for libraries and their users to remove measures that prevent legitimate uses of works.

[…]

Technological protection measures […] do not always stop at preventing illicit activities, and can often serve to stop libraries and their users from making fair uses of works. This can affect activities such as preservation, or inter-library document supply. To make it easier to apply TPMs, regardless of the nature of activities they are preventing, is to risk unbalancing copyright itself.

IFLA’s concerns are an excellent example of the dangers of digital locks (sometimes referred to as digital rights management or simply DRM): under the U.S. Digital Millennium Copyright Act (DMCA) and similar copyright laws in many other countries, it’s illegal to circumvent those locks or to provide others with the means of doing so. That provision puts librarians in legal danger when they come across DRM in the course of their work—not to mention educators, historians, security researchers, journalists, and any number of other people who work with copyrighted material in completely lawful ways.

Of course, as IFLA’s statement notes, W3C doesn’t have the authority to change copyright law, but it should consider the implications of copyright law in its policy decisions: “While clearly it may not be in the purview of the W3C to change the laws and regulations regulating copyright around the world, they must take account of the implications of their decisions on the rights of the users of copyright works.”

EFF is in the process of appealing W3C’s controversial decision, and we’re urging the standards body to adopt a covenant protecting security researchers from anti-circumvention laws.

Categories: Aggregated News

Do Last Week's European Copyright Votes Show Publishers Have Captured European Politics?

eff.org - Wed, 19/07/2017 - 08:49

Three European Parliament Committees met during the week of July 10, to give their input on the European Commission's proposal for a new Directive on copyright in the Digital Single Market. We previewed those meetings last week, expressing our hope that they would not adopt the Commission's harmful proposals. The meetings did not go well.

All of the compromise amendments to the Directive proposed by the Committee on Culture and Education (CULT) that we previously catalogued were accepted in a vote of that committee, including the upload filtering mechanism, the link tax, the unwaivable right for artists, and the new tax on search engines that index images. Throwing gasoline on the dumpster fire of the upload filtering proposal, CULT would like to see cloud storage services added to the online platforms that are required to filter user uploads. As for the link tax, they have offered up a non-commercial personal use exemption as a sop to the measure's critics, though it is hard to imagine how this would soften the measure in practice, since almost all news aggregation services are commercially supported.

The meeting of the Industry, Research and Energy (ITRE) Committee held in the same week didn't go much better than that of the CULT Committee. The good news, if we can call it that, is that they softened the upload filtering proposal a little. The ITRE language no longer explicitly refers to content recognition technologies as a measure to be agreed between copyright holders and platforms that host "significant amounts" (the Commission proposal had said "large amounts") of copyright protected works uploaded by users. On the other hand, such measures aren't ruled out, either; so the change is a minor one at best.

There is no similar saving grace in ITRE's treatment of the link tax. Oddly for a committee dedicated to research, it proposed amendments that would make life considerably harder for researchers, by extending the tax to cover snippets taken not only from news publications but also from academic journals, whether those publications are online or offline. The extension of the link tax to journals came by way of a single-word amendment to recital 33 [PDF]:

Periodical publications which are published for scientific or academic purposes, such as scientific journals, should n̶o̶t̶ also be covered by the protection granted to press publications under this Directive.

This deceptively small change would open up a whole new class of works for which publishers could demand payment for the use of small snippets, apparently including works that the author had released under an open access license (since it's the publisher, not the author, that is the beneficiary of the new link tax).

The JURI Committee also met during the week, although it did not vote on any amendments. Even so, the statements and discussions of the participants at this meeting are just as important as the votes of the other committees, given JURI's leadership of the dossier. The meeting (a recording of which is available online) was chaired by German MEP Axel Voss, who has recently replaced the previous chair Theresa Comodini as rapporteur. Whereas MEP Comodini's report for the committee had been praised for its balance, Voss has taken a much more hardline approach. Addressing him as Chair, Pirate Party MEP Julia Reda stated during the meeting:

I have never seen a Directive proposal from the Commission that has been met with such unanimous criticism from academia. Europe's leading IP law faculties have stated in an open letter, and I quote, "There is independent scientific consensus that Articles 11 and 13 cannot be allowed to stand," and that the proposal for a neighboring right is "unnecessary, undesirable, and unlikely to achieve anything other than adding to complexity and cost". 

The developments in the CULT, ITRE and JURI committees last week were disappointing, but they do not determine the outcome of this battle. More decisive will be the votes of the Civil Liberties, Justice and Home Affairs (LIBE) Committee in September, followed by negotiations around the principal report in the JURI Committee and its final vote on October 10. Either way, by year's end we will know whether European politicians have been utterly captured by their powerful publishing lobby, or whether the European Parliament still effectively represents the voices of ordinary European citizens.

Categories: Aggregated News

Why the Ninth Circuit Got It Wrong on National Security Letters and How We’ll Keep Fighting

eff.org - Wed, 19/07/2017 - 08:10

In a disappointing opinion issued on Monday, the Ninth Circuit upheld the national security letter (NSL) statute against a First Amendment challenge brought by EFF on behalf of our clients CREDO Mobile and Cloudflare. We applaud our clients’ courage throughout a years-long court battle, conducted largely under seal and in secret.

We strongly disagree with the opinion and are weighing how to proceed in the case. Even though this ruling is disappointing, together EFF and our clients achieved a great deal over the past six years. The lawsuit spurred Congress to amend the law, and our advocacy related to the case caused leading tech companies to also challenge NSLs. Along the way, the government went from fighting to keep every single NSL gag order in place to the point where many have been lifted, some in whole and many in part. That includes this case, of course, where we can now proudly name our clients to the world.

No matter what happens with these particular lawsuits, we are not done fighting unconstitutional use of NSLs and similar laws. 

Making sense of a disappointing ruling

National security letters are a kind of subpoena issued by the FBI to communications service providers like our clients to force them to turn over customer records. NSLs nearly always contain gag orders preventing recipients from telling anyone about these surveillance requests, all without any mandatory court oversight. As a result, the Internet and communications companies that we all trust with our most sensitive information cannot be truthful with their customers and the public about the scope of government surveillance. 

NSL gags are perfect examples of “prior restraints,” government orders prohibiting speech rather than punishing it after the fact. The First Amendment embodies the Founders’ strong distrust of prior restraints as powerful censorship tools, and the Supreme Court has repeatedly said they are presumptively unconstitutional unless they meet the “most exacting” judicial scrutiny. Similarly, because NSLs prevent recipients from talking about the FBI’s request for customer data, they are content-based restrictions on speech, which are subject to strict scrutiny. So NSL gags ought to be put to the strictest of First Amendment tests.

Unfortunately, the Ninth Circuit questioned whether NSLs are prior restraints at all. And although the court did acknowledge they are separately content-based restrictions on speech, it said the law is narrowly tailored even though it plainly allows censorship that is broader in scope and longer in duration than the government actually needs.  As a result, the court held the government’s interest in national security overcomes any First Amendment interests at stake.

The ruling is seriously flawed.

Not-so-narrow tailoring 

In order to find that the law satisfied strict scrutiny, the court overlooked both the overinclusiveness and indefinite duration of NSL gag orders. Narrow tailoring requires that a restriction on speech be fitted carefully to just what the government needs to protect its investigation and that no less speech-restrictive alternatives are available. 

But NSLs are often wildly overinclusive. For example, they prevent even a company with millions of users like Cloudflare from simply saying it has received an NSL, on the theory that individual users engaged in terrorism or espionage might somehow infer from that fact alone that the government is on their trail.

The court admitted that a blanket gag in this scenario might well be overinclusive, but it simply deferred to the FBI’s decisionmaking. But of course, under the First Amendment, decisions about censorship aren’t supposed to be left to officials whose “business is to censor.” And here, we know that NSLs are routinely issued to big tech companies with large numbers of users like both Cloudflare and CREDO, and only in rare circumstances does the FBI allow these companies to report on specific NSLs they’ve received.

Similarly, the FBI often leaves NSL gags in place indefinitely, sometimes even permanently. Indeed, the FBI has told our client CREDO that one of the NSLs in the case is now permanent, and the Bureau will not further revisit the gag it imposed to determine whether it still serves national security. Here again, the court acknowledged that at the least, narrow tailoring requires a gag “must terminate when it no longer serves” the government’s national security interests. But instead of applying the First Amendment’s narrow tailoring requirement, the court declined to “quibble” with the censoring agency, the FBI, and its loophole-ridden internal procedures for reviewing NSLs. Nevertheless, these procedures “do not resolve the duration issue entirely,” as the Ninth Circuit understatedly put it, since they may still produce permanent gags, as with CREDO. As a result, the court suggested that NSL recipients can repeatedly challenge permanent gags until they’re finally lifted. 

The problem of prior restraints and judicial review

However, that points to the other fundamental problem with NSLs: they are issued without any mandatory court oversight. As discussed above, prior restraints are almost never constitutional. The Supreme Court has said that even in the rare circumstance when prior restraints can be justified, they must be approved by a neutral court, not just an executive official. But the NSL statute doesn’t require a court to be involved in all cases; instead, judicial review takes place only if NSL recipients file a lawsuit, like our clients did, or if they ask the government to go to court to review the gag using a procedure known as “reciprocal notice.” 

The Ninth Circuit had two responses to this lack of judicial oversight.

First, it wrongly suggested the law of prior restraints simply does not apply here. The theory is that unlike cases involving newspapers that are prevented from publishing, NSL recipients haven’t shown a preexisting desire to speak, and when they do, they’re asking to publish information they supposedly learned from the government. But as we pointed out, that’s inconsistent with case law that says, for instance, that witnesses at grand jury proceedings—which are historically both secret and subject to court oversight—cannot be indefinitely gagged from talking about their own testimony. NSL gags go much further.

Second, the court suggested that even though the burden is on NSL recipients to challenge gags, this is a “de minimis” burden that doesn’t violate the First Amendment. When Congress passed the USA FREEDOM Act in 2015, it gave recipients the option of invoking reciprocal notice and asking the government to go to court rather than filing their own lawsuit. That’s simply not good enough; the First Amendment requires the government to be the one to go to court and prove to a judge that an NSL gag is actually necessary. Not to mention that forcing companies that receive NSLs to fight them in court and defend user privacy may actually be a heavy burden.

Big progress nonetheless 

Despite these considerable errors in the Ninth Circuit’s opinion, we shouldn’t lose sight of progress made along the way. Nearly all of the features of the NSL statute that the court pointed to as saving graces of the law—the FBI’s internal review procedures and the option for reciprocal notice most notably—exist only because Congress stepped in during our lawsuit to amend the law.

So what’s left to providers that receive NSLs? Push back on the gags early and often. The “reciprocal notice” process, which the government says only requires a short letter or a phone call, should be done as a matter of course for any company receiving an NSL.  And since the Ninth Circuit said that courts retain the ability to re-evaluate the gags as long as they remain in place, gagged providers should ask a court to step in and make sure the FBI can still prove the need for the gag—potentially over and over—until the gag is finally lifted. EFF wants to help with this, and we’re happy to consult with anyone subject to an NSL gag.

We’ve also encouraged technology companies to make the best of the reciprocal notice procedure as part of our annual Who Has Your Back? report. If the government continues to argue that recipients don’t necessarily “want to speak” about NSLs, we can now point to the growing trend of major tech companies—Apple, Adobe, and Dropbox, among others—that have committed to invoking reciprocal notice and challenging every NSL they receive. 

Finally, we’ve seen other courts question gag orders in related contexts, and we’ve supported companies like Facebook and Microsoft in these fights. We’re confident that in the long run, these prior restraints will be roundly rejected yet again.

Related Cases: National Security Letters (NSLs); In re: National Security Letter 2011 (11-2173); In re National Security Letter 2013 (13-80089); In re National Security Letter 2013 (13-1165)
Categories: Aggregated News

Microsoft Bing Reverses Sex-Related Censorship in the Middle East

eff.org - Tue, 18/07/2017 - 23:27

Imagine trying to do online research on breast cancer, or William S. Burroughs’ famous novel Naked Lunch, only to find that your search results keep coming up blank. This is the confounding situation that faced Microsoft Bing users in the Middle East and North Africa for years, made especially confusing by the fact that if you tried the same searches on Google, it did offer results for these terms.

Problems caused by the voluntary blocking of certain terms by intermediaries are well-known; just last week, we wrote about how payment processors like Venmo are blocking payments from users who describe the payments using certain terms—like Isis, a common first name and the name of a heavy metal band, in addition to its usage as an acronym for the Islamic State. Such keyword-based filtering algorithms will inevitably result in overblocking and false positives because they disregard the context in which the words are used.
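
To make the failure mode concrete, here is a minimal sketch of this kind of context-blind keyword screening. The blocklist and matching rule are illustrative assumptions, not any processor's actual code:

    # A toy version of context-blind keyword screening. The blocklist and
    # matching rule are illustrative assumptions, not Venmo's actual code.
    BLOCKLIST = {"isis", "iran"}  # hypothetical entries from a watchlist

    def is_flagged(description: str) -> bool:
        # Case-insensitive word matching, with no regard for context.
        words = description.lower().split()
        return any(word.strip(".,!?") in BLOCKLIST for word in words)

    print(is_flagged("Isis heavy metal album"))  # True: a false positive
    print(is_flagged("Iran kofta kebabs, yum"))  # True: a false positive
    print(is_flagged("concert tickets"))         # False

Because a filter like this sees only tokens, a band name and a terrorist organization are indistinguishable to it.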

Search engines also engage in this type of censorship—in 2010, I co-authored a paper [PDF] documenting how Microsoft Bing (brand new at the time) engaged in filtering of sex-related terms in the Middle East and North Africa, China, India, and several other locations by not allowing users to turn off “safe search”. Despite the paper and various advocacy efforts over the years, Microsoft refused to budge on this—until recently.

At RightsCon this year, I led a panel discussion about the censorship of sexuality online, covering a variety of topics from Facebook’s prudish ideas about the female body to the UK’s restrictions on “non-conventional” sex acts in pornography to Iceland’s various attempts to ban online pornography. During the panel, I also raised the issue of Microsoft’s long-term ban on sexual search terms in the Middle East, noting specifically that the company’s blanket ban on the entire region seemed more a result of bad market research than government interference, based on the fact that a majority of countries in the MENA region do not block pornography, let alone other sexual content.

Surprisingly, not long after the conference, I did a routine check of Bing and was pleased to discover that “Middle East” had disappeared from the search engine’s location settings, replaced with “Saudi Arabia.” The search terms are still restricted in Saudi Arabia (likely at the request of the government), but users in other countries across the diverse region are no longer subject to Microsoft’s safe search. Coincidence? It's hard to say; just as we didn't know Microsoft's motivations for blacklisting sexual terms to begin with, it was no more transparent about its change of heart.

Standing up against this kind of overbroad private censorship is important—companies shouldn’t be making decisions based on assumptions about a given market, and without transparency and accountability. Decisions to restrict content for a particular reason should be made only when legally required, and with the highest degree of transparency possible. We commend Microsoft for rectifying their error, and would like to see them continue to make their search filtering policies and practices more open and transparent.

Categories: Aggregated News

Network Engineers Speak Out for Net Neutrality

eff.org - Tue, 18/07/2017 - 09:09

Today, a group of over 190 Internet engineers, pioneers, and technologists filed comments with the Federal Communications Commission explaining that the FCC’s plan to roll back net neutrality protections is based on a fundamentally flawed and outdated understanding of how the Internet works.

Signers include current and former members of the Internet Engineering Task Force and Internet Corporation for Assigned Names and Numbers' committees, professors, CTOs, network security engineers, Internet architects, systems administrators and network engineers, and even one of the inventors of the Internet’s core communications protocol.

This isn’t the first time many of these engineers have spoken out on the need for open Internet protections. In 2015, when EFF and the ACLU filed a friend-of-the-court brief defending the net neutrality rules, dozens of engineers signed onto a statement supporting the technical justifications for the Open Internet Order.

The engineers’ statement filed today contains facts about the structure, history, and evolving nature of the Internet; corrects technical errors in the proposal; and gives concrete examples of the harm that will be done should the proposal be accepted.

The engineers explain that:

"Based on certain questions the FCC asks in the Notice of Proposed Rulemaking (NPRM), we are concerned that the FCC (or at least Chairman Pai and the authors of the NPRM) appears to lack a fundamental understanding of what the Internet's technology promises to provide, how the Internet actually works, which entities in the Internet ecosystem provide which services, and what the similarities and differences are between the Internet and other telecommunications systems the FCC regulates as telecommunications services."

The engineers point to specific errors in the NPRM. As one example among many: the NPRM tries to argue that ISPs, not edge providers, are the main drivers for services such as streaming movies, sharing photos, posting on social media, automatic translation, and so on. The NPRM also erroneously assumes that transforming an IP packet from IPv4 to IPv6 somehow changes the form of the payload.
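
The payload claim is easy to check empirically. The sketch below (our own illustration using the scapy packet library, not an example from the engineers' filing) wraps the same application payload in IPv4 and IPv6 headers and confirms that the payload bytes are unchanged:

    # Demonstration that carrying a payload over IPv4 vs. IPv6 leaves the
    # payload itself byte-for-byte identical. This is our illustration, not
    # the NPRM's; the addresses are documentation-range examples.
    from scapy.all import IP, IPv6, UDP, Raw

    payload = b"GET /index.html HTTP/1.1\r\n\r\n"

    v4_pkt = IP(dst="192.0.2.1") / UDP(dport=80) / Raw(load=payload)
    v6_pkt = IPv6(dst="2001:db8::1") / UDP(dport=80) / Raw(load=payload)

    # Only the network-layer header differs; the payload does not change form.
    assert bytes(v4_pkt[Raw]) == bytes(v6_pkt[Raw]) == payload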

The engineers explain how the Internet (and in particular broadband) has changed since 2002, when the FCC first explicitly classified broadband internet access service as an information service, and why that classification is no longer appropriate in light of technical developments. Drawing on this background information, they then respond to specific questions from the NPRM in order to correct the FCC's mistakes.

The statement provides nearly a dozen different examples of consumer harm that could have been prevented by the light-touch, bright-line rules—like when AT&T distorted the market for content by using its gatekeeping power not to charge its customers for its DIRECTV video service while charging third parties more to similarly zero-rate data. It also gives several examples of consumer benefits that happened as a result of the 2015 Open Internet Order, like mobile service providers finally removing the prohibition that was stopping customers from tethering their personal computers to their mobile devices in order to use their mobile broadband connections.

The NPRM fundamentally misunderstands the basic technology underlying how the Internet works. If the FCC were to move forward with its NPRM as proposed, the results could be disastrous: the agency would be making a major regulatory decision based on plainly incorrect assumptions about the underlying technology, with damaging consequences for innovation across the entire Internet ecosystem.

TAKE ACTION

Stand up for net neutrality

Categories: Aggregated News

EFF to FCC: Tossing Net Neutrality Protections Will Set ISPs Free to Throttle, Block, and Censor the Internet for Users

eff.org - Tue, 18/07/2017 - 08:48
FCC Plan to Scuttle Open Internet Rule 'Disastrous' For the Future of the Internet, Experts Say

Washington, D.C.—The Electronic Frontier Foundation (EFF) urged the FCC to keep in place net neutrality rules, which are essential to prevent cable companies like Comcast and Verizon from controlling, censoring, and discriminating against their subscribers’ favorite Internet content.

In comments submitted today, EFF came out strongly in opposition to the FCC’s plan to reverse the agency’s 2015 open Internet rules, which were designed to guarantee that service providers treat everyone’s content equally. The reversal would send a clear signal that those providers can engage in data discrimination, such as blocking websites, slowing down Internet speeds for certain content—known as throttling—and charging subscribers fees to access movies, social media, and other entertainment content over “fast lanes.” Comcast, Verizon, and AT&T supply Internet service to millions of Americans, many of whom have no other alternatives for high-speed access. Given the lack of competition, the potential for abuse is very real.

EFF’s comments join those of many other user advocates, leading computer engineers, entrepreneurs, faith communities, libraries, educators, tech giants, and start-ups that are fighting for a free and open Internet. Last week those players gave the Internet a taste of what a world without net neutrality would look like by temporarily blocking and throttling their content. Such scenarios aren’t merely possible—they are likely, EFF said in its comments. Internet service providers (ISPs) have already demonstrated that they are willing to discriminate against competitors and block content for their own benefit, while harming the Internet experience of users.

“ISPs have incentives to shape Internet traffic and the FCC knows full well of instances where consumers have been harmed. AT&T blocked data sent by Apple’s FaceTime software, Comcast has interfered with Internet traffic generated by certain applications, and ISPs have rerouted users’ web searches to websites they didn’t request or expect,” said EFF Senior Staff Attorney Mitch Stoltz. “These are just some examples of ISPs controlling our Internet experience. Users pay them to connect to the Internet, not decide for them what they can see and do there.”

Nearly 200 computer scientists, network engineers, and Internet professionals also submitted comments today highlighting deep flaws in the FCC’s technical description of how the Internet works. The FCC is attempting to pass off its incorrect technical analysis to justify its plan to reclassify ISPs so they are not subject to net neutrality rules. The engineers’ submission—signed by such experts as Vint Cerf, co-designer of the Internet’s fundamental protocols; Mitch Kapor, a personal computer industry pioneer and EFF co-founder; and programmer Sarah Allen, who led the team that created Flash video—sets the record straight about how the Internet works and how rolling back net neutrality would have disastrous effects on Internet innovation.

“We are concerned that the FCC (or at least Chairman Pai and the authors of the Notice of Proposed Rulemaking) appears to lack a fundamental understanding of what the Internet’s technology promises to provide, how the Internet actually works, which entities in the Internet ecosystem provide which services, and what the similarities and differences are between the Internet and other telecommunications systems the FCC regulates as telecommunications services,” the letter said.

“It is clear to us that if the FCC were to reclassify broadband access service providers as information services, and thereby put the bright-line, light-touch rules from the Open Internet Order in jeopardy, the result could be a disastrous decrease in the overall value of the Internet.”

For EFF’s comments:
https://www.eff.org/document/eff-comments-fcc-nn

For the engineers’ letter:
https://www.eff.org/document/internet-engineers-commentsfcc-nn

For more about EFF’s campaign to keep net neutrality:
https://www.eff.org/issues/net-neutrality

Contact: Mitch Stoltz, Senior Staff Attorney, mitch@eff.org; Corynne McSherry, Legal Director, corynne@eff.org
Categories: Aggregated News

With Release of NAFTA Negotiating Objectives, Our New Infographic Makes Sense of It All

eff.org - Tue, 18/07/2017 - 08:37

The United States Trade Representative (USTR) has just released its trade negotiating objectives [PDF] for a revision of NAFTA, the North American Free Trade Agreement between the United States, Mexico, and Canada. NAFTA is expected to open up a new front in big content's never-ending battle for stricter copyright rules, following the unexpected defeat of the Trans-Pacific Partnership (TPP). Meanwhile, big tech companies are now wielding increasing influence with the USTR, demanding that it also negotiate rules that protect their businesses, such as prohibitions against restrictions on the cross-border transfer of data.

In EFF's comments to the USTR about what its negotiating objectives should be, we urged it not to include new copyright rules in NAFTA, because of how this would prevent the United States from improving its current law or adapting to technological change. We also urged caution about including some of the new digital trade (or e-commerce) rules that big tech companies have been asking for, both for similar reasons and because the trade negotiation process notoriously lacks the balance that would be required to produce a sound set of rules.

Copyright Rules

The negotiating objectives are hopelessly general, but it seems that our requests largely fell on deaf ears. The relevant objectives on intellectual property include commitments to:

  • Ensure provisions governing intellectual property rights reflect a standard of protection similar to that found in U.S. law.
  • Provide strong protection and enforcement for new and emerging technologies and new methods of transmitting and distributing products embodying intellectual property, including in a manner that facilitates legitimate digital trade. ...
  • Ensure standards of protection and enforcement that keep pace with technological developments, and in particular ensure that rightholders have the legal and technological means to control the use of their works through the Internet and other global communication media, and to prevent the unauthorized use of their works.
  • Provide strong standards [of, sic] enforcement of intellectual property rights, including by requiring accessible, expeditious, and effective civil, administrative, and criminal enforcement mechanisms. 

These provisions are consistent with the U.S. demanding similar provisions to those that had been contained in the TPP, including life plus 70 year terms of copyright protection, criminal penalties for "commercial scale" copyright infringement, and legal protections for DRM—all of which would be new to NAFTA. Disappointingly, there is no reference to be found to the inclusion of a "fair use" exception to copyright, as we had requested in our submission.

Digital Trade (E-Commerce) Rules

As for digital trade, the objectives include commitments to:

  • Ensure non-discriminatory treatment of digital products transmitted electronically and guarantee that these products will not face government-sanctioned discrimination based on the nationality or territory in which the product is produced.
  • Establish rules to ensure that NAFTA countries do not impose measures that restrict crossborder data flows and do not require the use or installation of local computing facilities.
  • Establish rules to prevent governments from mandating the disclosure of computer source code.

While some of these rules might not be harmful if they were drafted in an adequately open and consultative fashion, we have previously expressed concern that the ban on restrictions on cross-border data flows may not allow countries adequate policy space to protect the privacy of users' data. We are also worried that a blanket ban on laws requiring the disclosure of source code could prevent countries from introducing new measures to protect users from vulnerabilities in digital products such as routers and Internet of Things (IoT) devices.

Our New Infographic Makes Sense of It All

You might well be wondering how the new version of NAFTA will compare with other digital trade negotiations, such as the TPP (which could still rise again among the other eleven countries besides the United States), and the Regional Comprehensive Economic Partnership (RCEP, whose negotiators are meeting this week in Hyderabad, India). To help explain, we've put together an infographic that illustrates five of the major ongoing trade agreements likely to contain provisions on digital issues. It provides a quick overview of their current status, the countries involved, and the issues they cover.


One thing that all of these agreements have in common is that there is no easy way for users to access them. Negotiation rounds take place in far-flung cities of the world, with little or sometimes no notice to the general public, next to no transparency about the texts under discussion, and little or no official means of access to the negotiators for public interest advocates such as EFF. Nevertheless, EFF is on the ground in Hyderabad this week to stand up for users, and we plan to do the same in the coming NAFTA negotiations too.

Despite today's release of the USTR's negotiating objectives for NAFTA, they are nowhere near detailed enough for us to know what rules the USTR will really be asking for from our partners. And that's dangerous, because we don't really know what we're fighting against, and whether our fears are justified or overblown. Worse, we might never know until the agreement is concluded—unless it is leaked in the meantime. That's just not acceptable, and it needs to change.

Keep reading Deeplinks for updates on the progress of each of these trade agreements, and how they will affect you. And if you'd like to support our difficult work in fighting for users' rights in all of these secretive venues, you can help by donating to EFF.

Categories: Aggregated News

CBP Responds to Sen. Wyden: Border Agents May Not Search Travelers’ Cloud Content

eff.org - Tue, 18/07/2017 - 06:27

Border agents may not use travelers’ laptops, phones, and other digital devices to access and search cloud content, according to a new document by U.S. Customs and Border Protection (CBP). CBP wrote this document on June 20, 2017, in response to questions from Sen. Wyden (D-OR). NBC published it on July 12. It states:

In conducting a border search, CBP does not access information found only on remote servers through an electronic device presented for examination, regardless of whether those servers are located abroad or domestically. Instead, border searches of electronic devices apply to information that is physically resident on the device during a CBP inspection.

This is a most welcome change from prior CBP policy and practice. CBP’s 2009 policy on border searches of digital devices does not prohibit border agents from using those devices to search travelers’ cloud content. In fact, that policy authorizes agents to search “information encountered at the border,” which logically would include cloud content encountered by searching a device at the border.

We do know that border agents have used travelers’ devices to search their cloud content. Many news reports describe border agents scrutinizing social media and communications apps on travelers’ phones, which shows that agents have been conducting cloud searches.

EFF will monitor whether actual CBP practice lives up to this salutary new policy. To help ensure that border agents follow it, CBP should publish it. So far, the public only has second-hand information about this “nationwide muster” (the term CBP’s June 20 document uses to describe this new CBP written policy on searching cloud data). Also, CBP should stop seeking social media handles from foreign visitors, which blurs CBP’s new instruction to border agents that cloud searches are off limits.

Separately, CBP’s responses to Sen. Wyden’s questions explain what will happen to a U.S. citizen who refuses to comply with a border agent’s demand to disclose their device password (or unlock their device) in order to allow the agent to search their device:

[A]lthough CBP may detain an arriving traveler’s electronic device for further examination, in the limited circumstances when that is appropriate, CBP will not prevent a traveler who is confirmed to be a U.S. citizen from entering the country because of a need to conduct that additional examination.

This is what EFF told travelers would happen in our March 2017 border guide, based on law and reported CBP practice. It is helpful that CBP has confirmed this in writing. However, CBP also should publicly state whether U.S. lawful permanent residents (green card holders) will be denied entry for not facilitating a CBP search of their devices. They should not be denied entry. Notably, Sen. Wyden asked CBP to answer this question about all “U.S. persons,” and not just U.S. citizens.

CBP’s responses leave other important questions unanswered. For example, CBP should publicly state whether, when border agents ask travelers for their device passwords, the agents must (in the words of Sen. Wyden) “first inform the traveler that he or she has the right to refuse.” CBP did not answer this question. The international border is an inherently coercive environment, where harried travelers must seek permission to come home from uniformed and frequently armed agents in an unfamiliar space. To ensure that agents do not strong-arm travelers into surrendering their digital privacy, agents should be required to inform travelers that they may choose not to unlock their devices.

Also, CBP should publicly answer Sen. Wyden’s question about how many times in the last five years CBP has searched a device “at the request of another government agency.” Such searches will usually be improper. Historically, courts have granted border agents greater search powers than other law enforcement officials, but only for purposes of enforcing customs and immigration laws. If border agents search travelers at the request of other agencies, they presumably do so for other purposes, and so their use of these heightened powers is improper. While CBP’s document provides information about CBP’s assistance requests to other agencies (for example, to seek technical help with decryption), it sheds no light on other agencies’ requests to CBP to use a traveler’s presence at the border as an excuse to conduct a warrantless search that likely would not be justified in the interior of the country.

EFF applauds Sen. Wyden for his leadership in congressional oversight of CBP’s border device searches. We also thank CBP for answering some of Sen. Wyden’s questions. But many questions remain.

CBP’s June 2017 responses confirm that much more must be done to protect travelers’ digital privacy at the U.S. border. An excellent first step would be to enact Sen. Wyden’s bipartisan bill to require border agents to get a warrant before searching the digital devices of U.S. persons.

Categories: Aggregated News

EFF to Minnesota Supreme Court: Sheriff Must Release Emails Documenting Biometric Technology Use

eff.org - Tue, 18/07/2017 - 03:34

A Minnesota sheriff’s office must release emails showing how it uses biometric technology so that the community can understand how invasive it is, EFF argued in a brief filed in the Minnesota Supreme Court on Friday.

The case, Webster v. Hennepin County, concerns a particularly egregious failure to respond to a public records request that an individual filed as part of a 2015 EFF and MuckRock campaign to track biometric technology use by law enforcement across the country.

EFF has filed two briefs in support of web engineer and public records researcher Tony Webster’s request, with the latest brief [.pdf] arguing that agencies must provide information contained in emails to help the public understand how a local sheriff uses biometric technology. The ACLU of Minnesota joined EFF on the brief.

As we write in the brief:

This case is not about whether or how the government may collect biometric data and develop and domestically deploy information-retrieval technology as a potential sword against the general public. That is just one debate we must have, but critical to it and all public debates is that it be informed by public [records].

The case began when Webster filed a request based on EFF’s letter template with Hennepin County, a jurisdiction that includes Minneapolis, host city of the 2018 Super Bowl.  He sought emails, contracts, and other records related to the use of technology that can scan and recognize fingerprints, faces, irises, and other forms of biometrics.

After the county basically ignored the request, Webster sued. An administrative law judge ruled in 2015 that the county had violated the state’s public records law both because it failed to provide documents to Webster and because it did not have systems in place to quickly search and disclose electronic records.

An intermediate appellate court ruled in 2016 that the county had to turn over the records Webster sought, but it reversed the lower court’s ruling that the county did not have adequate procedures in place to respond to public records requests.

Both Webster and the county appealed the ruling to the Minnesota Supreme Court. In its appeal, the county argues that public records requesters create an undue burden on agencies when they ask that agencies search for particular keywords or search terms.

EFF’s brief in support of Webster points out the flaws in the county’s search term argument. Having requesters identify specific search terms for documents they seek helps agencies conduct better searches for records while narrowing the scope of the request. This ultimately reduces the burden on agencies and leads to records being released more quickly.

EFF would like to thank attorneys Timothy Griffin and Thomas Burman of Stinson Leonard Street LLP for drafting the brief and serving as local counsel.

Categories: Aggregated News

Australian PM Calls for End-to-End Encryption Ban, Says the Laws of Mathematics Don't Apply Down Under

eff.org - Sat, 15/07/2017 - 06:39

"The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia", said Australian Prime Minister Malcolm Turnbull today. He has been rightly mocked for this nonsense claim, that foreshadows moves to require online messaging providers to provide law enforcement with back door access to encrypted messages. He explained that "We need to ensure that the internet is not used as a dark place for bad people to hide their criminal activities from the law." It bears repeating that Australia is part of the secretive spying and information sharing Five Eyes alliance.

But despite the well-deserved mockery that ensued, we shouldn't make too much light of the real risk that this poses to Internet freedom in Australia. It's true enough, for now, that a ban on end-to-end encrypted messaging in Australia would have absolutely no effect on "bad people": they would simply abandon major platforms forced to weaken their encryption, in favor of other apps that use strong end-to-end encryption based on industry-standard mathematical algorithms. A ban would instead hurt ordinary citizens who rely on encryption to keep their conversations secure and private from prying eyes.
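
Those algorithms are implemented in free, open source libraries that anyone can download, which is part of what makes a national ban so futile. As a minimal sketch (using the PyNaCl library as one arbitrary example, not the code of any particular messaging app), end-to-end encryption amounts to a few lines of widely published math:

    # A minimal sketch of end-to-end encryption using PyNaCl, an open source
    # binding to libsodium. This is an arbitrary illustrative example, not
    # the implementation of any particular messaging app.
    from nacl.public import PrivateKey, Box

    # Each party generates a keypair; private keys never leave their devices.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts to Bob's public key; only Bob's private key can decrypt.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # No intermediary (platform, ISP, or government) can read the message
    # in transit without Bob's private key.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"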

However, as similar demands are made elsewhere around the world, more and more app developers might fall under national laws that require them to compromise their encryption standards. Users of those apps, who may have a network of contacts who use the same app, might hesitate to shift to another app that those contacts don't use, even if it would be more secure. They might also worry that using end-to-end encryption would be breaking the law (a concern that "bad people" tend to be far less troubled by). This will put those users at risk.

If enough countries go down the same misguided path, with Australia following in the footsteps of Russia and the United Kingdom, the end result could be a new international agreement banning strong encryption. Indeed, the Prime Minister's statement is explicit that this is exactly what he would like to see. It may seem like an unlikely prospect for now, with strong statements at the United Nations level in support of end-to-end encryption, but we truly can't know what the future will bring. What seems like a global accord today might very well start to crumble as more and more countries defect from it.

We can't rely on politicians to protect our privacy, but thankfully we can rely on math ("maths", as Australians say). That's what makes access to strong encryption so important, and Australia's move today so worrying. Law enforcement should have the tools they need to investigate crimes, but that cannot extend to a ban on the use of mathematical algorithms in software. Mr Turnbull has to understand that we either have an Internet that "bad people" can use, or we don't have an Internet at all. It's actually as simple as that.

Categories: Aggregated News

California's Top Newspapers Endorse Broadband Privacy Bill

eff.org - Sat, 15/07/2017 - 03:56

Broadband privacy? Say what? That was probably what you were asking yourself in March when you read about Congress’s vote to repeal privacy rules for your Internet provider. If you were paying attention—and you should be in an era when free press, voter privacy, and other constitutional rights are being challenged—you quickly realized what Congress did: it sold out your right to keep your browsing history and personal information private so that the cable companies can sell it and make even more money off of you than they already do. Nice, right?

Luckily, many states, including California, have stepped up to the plate for you. They have introduced bills that give you back the right to control how your private information is used by the companies that control the Internet pipeline into your home. In California, lawmakers in Sacramento are considering a bill that would reinstate those privacy rules, requiring Internet providers to get your permission before they can profit off of your personal information.

Silicon Valley should rally behind Chau’s AB 375 and ensure online privacy protections for all Californians —San Jose Mercury News

California has always led the country on many fronts: the environment, civil liberties, to name a few. It’s time for us to lead now. California’s top media organizations have gotten behind this legislation, A.B. 375, introduced by Assemblymember Ed Chau, a Democrat from Monterey Park.

If you care about your online privacy, you should, too. Here’s what the editorial boards of the state’s leading newspapers have to say:

Sacramento Bee Editorial Board

AT&T, Comcast and other Internet service providers can continue to track every search you make and website you visit and sell that information to the highest bidder, under legislation recently signed by President Donald Trump.

That legislation, which reversed an Obama regulation, ought to alarm any American who ventures online, no matter their political persuasion. Now comes Assemblyman Ed Chau, a Democrat from Monterey Park, carrying a bill that for Californians would reverse the legislation and provide some privacy at a time when seemingly nothing is private.

San Diego Union-Tribune Editorial Board

Assembly Bill 375 would require Internet service providers to have customers “opt in” before they are allowed to sell information on their online searches and visits. Here’s hoping state lawmakers realize the value of having such a law and reject the telecom companies’ claim that it is “unfair” to not let them capitalize on the sort of information that Facebook and Google accumulate about their users.

The difference, of course, is that people pay heavily for Internet service because in the modern era, it is akin to a must-have utility. Facebook and Google are free. It is absurd that consumers paying companies for a service should be expected to accept that the price paid includes a gross loss of privacy.

San Francisco Chronicle 

AB375, by Assemblyman Ed Chau, D-Monterey Park (Los Angeles County), would address actions taken in March by President Trump and the Republican-dominated Congress that killed an FCC privacy rule allowing customers to prevent giant phone and cable companies from gathering and using personal data such as their financial and health choices. Chau’s bill, which is still in committee, would restore those protections for Californians. It should pass.

Press Democrat

California is uniquely able to take a strong stand in favor of consumer privacy. If the digital age has a technological and corporate center, it is here. We’re also large enough to make a difference nationally.

San Jose Mercury News

California has an obligation to take a lead in establishing the basic privacy rights of consumers using the Internet. Beyond being the right thing to do for the whole country, building trust in tech products is an essential long-term business strategy for the industry that was born in this region. California Assemblyman Ed Chau, D-Monterey Park, understands this.  After Congressional Republicans erased Americans’ Internet broadband privacy protections in March, Chau crafted A.B. 375 to at least provide these rights to Californians.

Categories: Aggregated News

Payment Processors Are Profiling Heavy Metal Fans as Terrorists

eff.org - Sat, 15/07/2017 - 03:28

If you happen to be a fan of the heavy metal band Isis (an unfortunate name, to be sure), you may have trouble ordering its merchandise online. Last year, Paypal suspended a fan who ordered an Isis t-shirt, presumably on the false assumption that there was some association between the heavy metal band and the terrorist group ISIS.

Then last month, Internet scholar and activist Sascha Meinrath discovered that entering words such as "ISIS" (or "Isis"), or "Iran", or (probably) other words from this U.S. government blacklist in the description field for a Venmo payment will result in an automatic block on that payment, requiring you to complete a pile of paperwork if you want to see your money again. This happens even if the full description field is something like "Isis heavy metal album" or "Iran kofta kebabs, yum."

These examples may seem trivial, but they reveal a more serious problem with the trust and responsibility that the Internet places in private payment intermediaries. Since even many non-commercial websites such as EFF's depend on such intermediaries to process payments, subscription fees, or donations, it's no exaggeration to say that payment processors form an important part of the financial infrastructure of today's Internet. As such, they ought to carry corresponding responsibilities to act fairly and openly towards their customers.

Unfortunately, given their reliance on bots, algorithms, handshake deals, and undocumented policies and blacklists to control what we do online, payment intermediaries aren't carrying out this responsibility very well. Given that these private actors are taking on responsibilities to help address important global problems such as terrorism and child online protection, the lack of transparency and accountability with which they execute these weighty responsibilities is a matter of concern.

The readiness of payment intermediaries to do deals on those important issues leads as a matter of course to their enlistment by governments and special interest groups to do similar deals on narrower issues, such as the protection of the financial interests of big pharma, big tobacco, and big content. It is in this way that payment intermediaries have insidiously become a weak link through which free speech is censored.

Cigarettes, Sex, Drugs, and Copyright

For example, if you're a smoker, and you try to buy tobacco products from a U.S. online seller using a credit card, you'll probably find that you can't. It's not illegal to do so, but thanks to a "voluntary" agreement with law enforcement authorities dating back to 2005, payment processors have effectively banned the practice—without any law or court judgment.

Another example, which we've previously written about, is the payment processors' arbitrary rules blocking sites that discuss sexual fetishes, even though that speech is constitutionally protected. The congruence between the payment intermediaries' terms of service on the issue suggests a degree of coordination between them, but their lack of transparency makes it impossible to be sure who was behind the ban and what channels they used to achieve it.

A third example is the ban on pharmaceutical sales. You can still buy pharmaceuticals online using a credit card, but these tend to be from unregulated, rogue pharmacies that lie to the credit card processors about the purpose for which their merchant account will be used. For the safer, regulated pharmacies that require a prescription for the drugs they sell online, such as members of the Canadian International Pharmacy Association (CIPA), the credit card processors enforce a blanket ban.

Finally, there are "voluntary" best practices on copyright and trademark infringement. These include the RogueBlock program, launched by the International Anti-Counterfeiting Coalition (IACC) in 2012, about which information is available online, along with a 2011 set of "Best Practices to Address Copyright Infringement and the Sales of Counterfeit Products on the Internet," about which no information is available online. The only way to find out about the standards that payment intermediaries use to block websites accused of copyright or trademark infringement is to read what academics have written about them.

Lack of Transparency Invites Abuse

The payment processors might respond that their terms of service are available online, which is true. However, these are ambiguous at best. On Venmo, transactions for items that promote hate, violence, or racial intolerance are banned, but there is nothing in its terms of service to indicate that including the name of a heavy metal band in your transaction will place it in limbo. Similarly, if you delve deep enough into Paypal's terms of service you will find out that selling tickets to professional UK football matches is banned, but you won't find out how this restriction came about, or who had a say in it.

Payment processors can do better. In 2012, in the wake of the payment industry's embargo of WikiLeaks and its refusal to process payments to European vendors of horror films and sex toys, the European Parliament Committee on Economic and Monetary Affairs adopted the following resolution:

[The Committee c]onsiders it likely that there will be a growing number of European companies whose activities are effectively dependent on being able to accept payments by card; [and] considers it to be in the public interest to define objective rules describing the circumstances and procedures under which card payment schemes may unilaterally refuse acceptance.

We agree. Bitcoin and other cryptocurrencies notwithstanding, online payment processing remains largely oligopolistic. Agreements between the few payment processors that make up the industry and powerful commercial lobbies and governments, concluded in the shadows, can have deep impacts on entire online communities. When payment processors draw up their terms of service or develop algorithms based on industry-wide agreements, standards, or codes of conduct—especially if these involve governments or other third parties—they ought to do so through a process that is inclusive, balanced, and accountable.

The fact that you can't use Venmo to purchase an Isis t-shirt is just one amusing example. But the Shadow Regulation of the payment services industry is much more serious than that, also affecting culture, healthcare, and even your sex life online. Just as we've called other Internet intermediaries to account for the ways in which their "voluntary" efforts threaten free speech, the online payment services industry needs to be held to the same standard. 

Categories: Aggregated News

A Record-Breaking Day of Action as Millions Join Fight for Net Neutrality

freepress.net - Fri, 14/07/2017 - 05:42
Amy Kroin, July 13, 2017

The Internet-Wide Day of Action to Save Net Neutrality was a mammoth deal. And one thing is clear: No one — except the big broadband providers and their assorted lobbyists and trade groups — likes the Trump FCC’s plan to destroy the internet.
Categories: Aggregated News

Net Neutrality Won't Save Us if DRM is Baked Into the Web

eff.org - Fri, 14/07/2017 - 04:14

Yesterday's record-smashing Net Neutrality day of action showed that the Internet's users care about an open playing field and don't want a handful of companies to decide what we can and can't do online.

Today, we should also think about other ways in which small numbers of companies, including net neutrality's biggest foes, are trying to gain the same kinds of control, with the same grave consequences for the open web. Exhibit A is baking digital rights management (DRM) into the web's standards.

ISPs that oppose effective net neutrality protections say that they've got the right to earn as much money as they can from their networks, and if people don't like it, they can just get their internet somewhere else. But of course, the lack of competition in network service means that most people can't do this.

Big entertainment companies -- some of whom are owned by big ISPs! -- say that because they can make more money if they can control your computer and get it to disobey you, they should be able to team up with browser vendors and standards bodies to make that a reality. If you don't like it, you can watch someone else's movies.

Like ISPs, entertainment companies think they can get away with this because they too have a kind of monopoly -- copyright, which gives rightsholders the power to control many uses of their creative works. But just like the current FCC Title II rules that stop ISPs from flexing their muscle to the detriment of web users, copyright law places limits on the powers of copyright holders.

Copyright can stop you from starting a business to sell unlicensed copies of the studios' movies, but it couldn't stop Netflix from starting a business that mailed DVDs around for money; it couldn't stop Apple from selling you a computer that would "Rip, Mix, Burn" your copyrighted music; and it couldn't stop cable companies from starting businesses that retransmitted broadcasters' signals.

That competitive balance makes an important distinction between "breaking the law" (not allowed) and "rocking the boat" (totally allowed). Companies that want to rock the boat are allowed to enter the market with new, competitive offerings that go places the existing industry fears to tread, and so they discover new, unmapped and fertile territory for services and products that we come to love and depend on.

But overbroad and badly written laws like Section 1201 of the 1998 Digital Millennium Copyright Act (DMCA) upset this balance. DMCA 1201 bans tampering with DRM, even if you're only doing so to exercise the rights that Congress gave you as a user of copyrighted works. This means that media companies that bake DRM into the standards of the web get to decide what kinds of new products and services are allowed to enter the market, effectively banning others from adding new features to our media, even when those features have been declared legal by Congress.

ISPs are only profitable because there was an open Internet where new services could pop up, transforming the Internet from a technological curiosity into a necessity of life that hundreds of millions of Americans pay for. Now that the ISPs get steady revenue from our use of the net, they want network discrimination, which, like the discrimination used by DRM advocates, is an attempt to change "don't break the law" into "don't rock the boat" -- to force would-be competitors to play by the rules set by the cozy old guard.

For decades, activists struggled to get people to care about net neutrality, while their opponents at big telecom companies said, "People don't care; all they want is to get online, and that's what we give them." The once-quiet voices of net neutrality wonks have swelled into a chorus of people who realize that an open web is important to their future. As we saw yesterday, the public broadly demands protection for the open Internet.

Today, advocates for DRM say that "People don't care, all they want is to watch movies, and that's what we deliver." But there is an increasing realization that letting major movie studios tilt the playing field toward them and their preferred partners also endangers the web's future.

Don't take our word for it: last April, Professor Tim Wu, who coined the term "net neutrality" and is one of the world's foremost advocates for a neutral web, published an open letter to Tim Berners-Lee, inventor of the web and Director of the World Wide Web Consortium (W3C), where there is an ongoing effort to standardize DRM for the web.

In that letter, Wu wrote:

I think more thinking need be done about EME’s potential consequences for competition, both as between browsers, the major applications, and in ways unexpected. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know. That’s the principal concern of net neutrality, and has been a concern when it comes to browsers and their associated standards. It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.

EME, of course, brings the anti-circumvention laws into play, and as you may know anti-circumvention laws have a history of being used for purposes different than the original intent (i.e., protecting content). For example, soon after it was released, the U.S. anti-circumvention law was quickly used by manufacturers of inkjet printers and garage-door openers to try and block out aftermarket competitors (generic ink, and generic remote controls). The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected.

This week, Berners-Lee made important and stirring contributions to the net neutrality debate, appearing in this outstanding Web Foundation video and explaining how anti-competitive actions by ISPs endanger the things that made the web so precious and transformative.

Last week, Berners-Lee disappointed activists who'd asked for a modest compromise on DRM at the W3C, one that would protect competition and use standards to promote the same level playing field we seek in our Net Neutrality campaigns. Yesterday, EFF announced that it would formally appeal Berners-Lee's decision to standardize DRM for the web without any protection for its neutrality. In the decades of the W3C's existence, there has never been a successful appeal of one of Berners-Lee's decisions.

The odds are long here -- the same massive corporations that oppose effective net neutrality protections also oppose protections against monopolization of the web through DRM, and they can outspend us by orders of magnitude. But we're doing it, and we're fighting to win. That's because, like Tim Berners-Lee, we love the web and believe it can only continue as a force for good if giant corporations don't get to decide what we can and can't do with it.

Categories: Aggregated News

Industry Efforts to Censor Pro-Terrorism Online Content Pose Risks to Free Speech

eff.org - Thu, 13/07/2017 - 14:45

In recent months, social media platforms—under pressure from a number of governments—have adopted new policies and practices to remove content that promotes terrorism. As the Guardian reported, these policies are typically carried out by low-paid contractors (or, in the case of YouTube, volunteers), with little to no transparency or accountability. While the motivations of these companies might be sincere, such private censorship poses a risk to the free expression of Internet users.

As groups like the Islamic State have gained traction online, Internet intermediaries have come under pressure from governments and other actors, including the following:

  • the Obama Administration;
  • the U.S. Congress in the form of legislative proposals that would require Internet companies to report “terrorist activity” to the U.S. government;
  • the European Union in the form of a “code of conduct” requiring Internet companies to take down terrorist propaganda within 24 hours of being notified, and via the EU Internet Forum;
  • individual European countries such as the U.K., France and Germany that have proposed exorbitant fines for Internet companies that fail to take down pro-terrorism content; and,
  • victims of terrorism who seek to hold social media companies civilly liable in U.S. courts for providing “material support” to terrorists by simply providing online platforms for global communication.

One of the coordinated industry efforts against pro-terrorism online content is the development of a shared database of “hashes of the most extreme and egregious terrorist images and videos” that the companies have removed from their services. The companies that started this effort—Facebook, Microsoft, Twitter, and Google/YouTube—explained that the idea is that by sharing “digital fingerprints” of terrorist images and videos, other companies can quickly “use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.”
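
Mechanically, the sharing scheme is simple; below is a minimal sketch of how such a database could work. It is an illustration, not the consortium’s actual implementation: a cryptographic hash like the SHA-256 used here matches only byte-identical files, whereas the companies’ real (non-public) fingerprints reportedly survive re-encoding and minor edits.

    import hashlib

    # Sketch of a shared hash database, under the assumptions above.
    shared_hashes = set()

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def share_removed_content(data: bytes) -> None:
        # One company removes an image or video and shares its hash.
        shared_hashes.add(fingerprint(data))

    def matches_shared_database(data: bytes) -> bool:
        # Another company checks an upload, then reviews any match
        # against its own policies before removing it.
        return fingerprint(data) in shared_hashes

    video = b"bytes of a removed propaganda video"
    share_removed_content(video)
    print(matches_shared_database(video))         # True: exact copy
    print(matches_shared_database(video + b" "))  # False: one byte differs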

As a second effort, the same companies created the Global Internet Forum to Counter Terrorism, which will help the companies “continue to make our hosted consumer services hostile to terrorists and violent extremists.” Specifically, the Forum “will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN.” The Forum will focus on technological solutions; research; and knowledge-sharing, which will include engaging with smaller technology companies, developing best practices to deal with pro-terrorism content, and promoting counter-speech against terrorism.

Internet companies are also taking individual measures to combat pro-terrorism content. Google announced several new efforts, while both Google and Facebook have committed to using artificial intelligence technology to find pro-terrorism content for removal.

Private censorship must be cautiously deployed

While Internet companies have a First Amendment right to moderate their platforms as they see fit, private censorship—or what we sometimes call shadow regulation—can be just as detrimental to users’ freedom of expression as governmental regulation of speech. As social media companies increase their moderation of online content, they must do so as cautiously as possible.

Through our project Onlinecensorship.org, we monitor private censorship and advocate for companies to be more transparent and accountable to their users. We solicit reports from users whose specific posts, other content, or whole accounts Internet companies have removed.

We consistently urge companies to follow basic guidelines to mitigate the impact on users’ free speech. Specifically, companies should have narrowly tailored, clear, fair, and transparent content policies (i.e., terms of service or “community guidelines”); they should engage in consistent and fair enforcement of those policies; and they should have robust appeals processes to minimize the impact on users’ freedom of expression.

Over the years, we’ve found that companies’ efforts to moderate online content almost always result in overbroad takedowns or account deactivations. We are therefore justifiably skeptical that the latest efforts by Internet companies to combat pro-terrorism content will meet our basic guidelines.

A central problem for these global platforms is that such private censorship can be counterproductive. Users who engage in counter-speech against terrorism often find themselves on the wrong side of the rules if, for example, their post includes an image of one of the more than 600 “terrorist leaders” designated by Facebook. In one instance, a journalist from the United Arab Emirates was temporarily banned from the platform for posting a photograph of Hezbollah leader Hassan Nasrallah with an LGBTQ pride flag overlaid on it—a clear case of parody counter-speech that Facebook’s content moderators failed to grasp.

A more fundamental problem is that narrow definitions are difficult to craft. What counts as speech that “promotes” terrorism? What even counts as “terrorism”? These U.S.-based companies may look to the State Department’s list of designated terrorist organizations as a starting point. But Internet companies sometimes go further. Facebook, for example, deactivated the personal accounts of Palestinian journalists; it did the same to Chechen independence activists on the grounds that they were involved in “terrorist activity.” These examples demonstrate the challenges social media companies face in fairly applying their own policies.

A recent investigative report by ProPublica revealed how Facebook’s content rules can lead to seemingly inconsistent takedowns. The authors wrote: “[T]he documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.” The report emphasized the need for companies to be more transparent about their content rules, and to have rules that are fair for all users around the world.

Artificial intelligence poses special concerns

We are concerned about the use of artificial intelligence to automate the removal of pro-terrorism content because of the imprecision inherent in systems that automatically block or remove content based on an algorithm. Facebook has perhaps been the most aggressive in deploying AI, in the form of machine learning technology, in this context. The company’s latest AI efforts include using image matching to detect previously tagged content, using natural language processing techniques to detect posts advocating terrorism, removing terrorist clusters, removing new fake accounts created by repeat offenders, and enforcing its rules across other Facebook properties such as WhatsApp and Instagram.

This imprecision exists because it is difficult for humans and machines alike to understand the context of a post. While it’s true that computers are better at some tasks than people, understanding context in written and image-based communication is not one of those tasks. AI algorithms can handle very simple reading comprehension problems, but they still struggle with tasks as basic as capturing the meaning in children’s books. It is possible that future improvements to machine learning algorithms will give AI these capabilities, but we’re not there yet.

Google’s Content ID, for example, which was designed to address copyright infringement, has also blocked fair uses, news reporting, and even posts by copyright owners themselves. If automatic takedowns based on copyright are difficult to get right, how can we expect new algorithms to know the difference between a terrorist video clip that’s part of a satire and one that’s genuinely advocating violence?

Until companies can publicly demonstrate that their machine learning algorithms can accurately and reliably determine whether a post is satire, commentary, news reporting, or counter-speech, they should refrain from censoring their users by way of this AI technology.

Even if a company were to have an algorithm for detecting pro-terrorism content that was accurate, reliable, and had a minimal percentage of false positives, AI automation would still be problematic because machine learning systems are not robust to distributional change. Once machine learning algorithms are trained, they are as brittle as any other algorithm, and building and training machine learning algorithms for a complex task is an expensive, time-intensive process. Yet the world that algorithms are working in is constantly evolving and soon won’t look like the world in which the algorithms were trained.

This might happen in the context of pro-terrorism content on social media: once terrorists realize that algorithms are identifying their content, they will start to game the system by hiding their content or altering it so that the AI no longer recognizes it (by leaving out key words, say, or changing their sentence structure, or a myriad of other ways—it depends on the specific algorithm). This problem could also go the other way: a change in culture or how some group of people express themselves could cause an algorithm to start tagging their posts as pro-terrorism content, even though they’re not (for example, if people co-opted a slogan previously used by terrorists in order to de-legitimize the terrorist group).
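
A toy illustration of both failure modes, using an invented phrase and a keyword match as a stand-in for a trained model (real classifiers are far more sophisticated, but the brittleness is analogous):

    # Toy stand-in for a trained classifier; the phrase is invented.
    TRAINING_PHRASES = {"join the struggle abroad"}

    def looks_like_propaganda(post: str) -> bool:
        return any(p in post.lower() for p in TRAINING_PHRASES)

    # Evasion: a trivial spelling change defeats the match.
    print(looks_like_propaganda("join the strugg1e abroad"))        # False

    # Distributional change: co-opted language is wrongly flagged.
    print(looks_like_propaganda(
        "'join the struggle abroad'? what nonsense."))              # True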

We strongly caution companies (and governments) against assuming that technology will be a panacea for identifying pro-terrorism content, because that technology simply doesn’t exist yet.

Is taking down pro-terrorism content actually a good idea?

Apart from the free speech and artificial intelligence concerns, there is an open question of efficacy. The sociological assumption is that removing pro-terrorism content will reduce terrorist recruitment and community sympathy for those who engage in terrorism. In other words, the question is not whether terrorists are using the Internet to recruit new operatives—the question is whether taking down pro-terrorism content and accounts will meaningfully contribute to the fight against global terrorism.

Governments have not sufficiently demonstrated this to be the case, and some experts believe it is absolutely not the case. For example, Michael German, a former FBI agent with counter-terrorism experience and a current fellow at the Brennan Center for Justice, said, “Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses.” In fact, as we’ve argued before, censoring the content and accounts of determined groups could be counterproductive, actually resulting in pro-terrorism content being publicized more widely (a phenomenon known as the Streisand Effect).

Additionally, permitting terrorist accounts to exist and allowing pro-terrorism content to remain online, including that which is publicly available, may actually be beneficial by providing opportunities for ongoing engagement with these groups. For example, a Kenyan government official stated that shutting down an Al Shabaab Twitter account would be a bad idea: “Al Shabaab needs to be engaged positively and [T]witter is the only avenue.”

Keeping pro-terrorism content online also contributes to journalism, open source intelligence gathering, academic research, and generally the global community’s understanding of this tragic and complex social phenomenon. On intelligence gathering, the United Nations has said that “increased Internet use for terrorist purposes provides a corresponding increase in the availability of electronic data which may be compiled and analysed for counter-terrorism purposes.”

In conclusion

While we recognize that Internet companies have a right to police their own platforms, we also recognize that such private censorship is often in response to government pressure, which is often not legitimately wielded.

Governments often get private companies to do what they can’t do themselves. In the U.S., for example, pro-terrorism content falls within the protection of the First Amendment. Other countries, many of which do not have similarly robust constitutional protections, might nevertheless find it politically difficult to pass speech-restricting laws.

Ultimately, we are concerned about the serious harm that sweeping censorship regimes—even by private actors—can have on users, and society at large. Internet companies must be accountable to their users as they deploy policies that restrict content.

First, they should make their content policies narrowly tailored, clear, fair, and transparent to all—as the Guardian’s Facebook Files demonstrate, some companies have a long way to go.

Second, companies should engage in consistent and fair enforcement of those policies.

Third, companies should ensure that all users have access to a robust appeals process—content moderators are bound to make mistakes, and users must be able to seek justice when that happens.

Fourth, until artificial intelligence systems can be proven accurate, reliable and adaptable, companies should not deploy this technology to censor their users’ content.

Finally, we urge those companies that are subject to increasing governmental demands for backdoor censorship regimes to improve their annual transparency reporting to include statistics on takedown requests related to the enforcement of their content policies.

Categories: Aggregated News

Historic Day of Action: Net Neutrality Allies Send 1.6 Million Comments to FCC

eff.org - Thu, 13/07/2017 - 14:25

When you attack the Internet, the Internet fights back.

Today, the Internet went all out in support of net neutrality. Hundreds of popular websites featured pop-ups suggesting that those sites had been blocked or throttled by Internet service providers. Some sites got hilariously creative—Twitch replaced all of its emojis with that annoying loading icon. Netflix shared GIFs that would never finish loading. PornHub simply noted that “slow porn sucks.”

Together, we painted an alarming picture of what the Internet might look like if the FCC goes forward with its plan to roll back net neutrality protections: ISPs prioritizing their favored content sources and deprioritizing everything else. (Fight for the Future has put together a great collection of examples of how sites participated in the day of action.)

Today has been about Internet users across the country who are afraid of large ISPs getting too much say in how we use the Internet. Voices ranged from huge corporations to ordinary Internet users like you and me.

Together with Battle for the Net and other friends, we delivered 1.6 million comments to the FCC, breaking the record we set during Internet Slowdown Day in 2014. The message was clear: we all rely on the Internet. Don’t dismantle net neutrality protections.

If you haven’t added your voice yet, it’s not too late. Take a few moments to tell the FCC why net neutrality is important to you. If you already have, take a moment to encourage your friends to do the same.

TAKE ACTION

Stand up for net neutrality

Here are just a few examples of what Team Internet has been saying about net neutrality today.

“We live in an uncompetitive broadband market. That market is dominated by a handful of giant corporations that are being given the keys to shape telecom policy. The big internet companies that might challenge them are doing it half-heartedly. And [FCC Chairman] Ajit Pai seems determined to offer up a massive corporate handout without listening to everyday Americans.

“Is this what you want? Does this sound like a path toward better, faster, cheaper internet access? Toward better products and services in a more competitive market? To me, it sounds like Americans need to demand that our government actually hear our concerns, look at our skyrocketing bills, and make real policy that respects us, instead of watching the staff of an unelected official laugh as he ignores us. It sounds like we need to flood the offices of the FCC and Congress with calls and paperwork, demanding to know how giving handouts to huge corporations will help us.”

Nilay Patel, The Verge

“Title II net neutrality protections are the civil rights and free speech rules for the internet. When traditional media outlets refuse to pay attention, Black, indigenous, queer and trans internet users can harness the power of the Internet to fight for lives free of police brutality and discrimination. This is why we’ll never stop fighting for enforcement of the net neutrality rules we fought for and saw passed by the FCC two years ago. There’s too much at stake to urge anything less.”

Malkia Cyril, Co-Founder and Executive Director, Center for Media Justice

“We’re still picking ourselves off the floor from all the laughing we did when AT&T issued a press release this afternoon announcing that it was joining the ‘Day of Action for preserving and advancing the open internet.’

“If only it were true. In reality, AT&T is just a company that is deliberately misleading the public. Their lobbyists are lying. They want to kill Title II — which gives the FCC the authority to actually enforce net neutrality — and are trying to sell a congressional ‘compromise’ that would be as bad or worse than what the FCC is proposing. No thanks.”

Craig Aaron and Candace Clement, Free Press

InternetIRL, presented by Color of Change

“Everyone except these ISPs benefits from an open Internet… that’s it. It’s like a handful of companies. Not only is this about business—and it is about business and innovation—it’s also about freedom of speech.”

Sen. Al Franken

“No matter what, do not get discouraged or retreat into a state of silence and inaction. There are many like me who are listening and the role each of us plays is vital. We are not alone in believing that the FCC should be a governmental agency ‘of the people, by the people, and for the people.’”

Mignon Clyburn, FCC Commissioner

To everyone who has participated in today’s day of action, thank you.

TAKE ACTION

Stand up for net neutrality

Categories: Aggregated News

Dear Security Conference Speakers – EFF’s Coders Rights Project Has Your Back

eff.org - Thu, 13/07/2017 - 06:44

Every year, EFF has lawyers with its Coders’ Rights Project on hand in Las Vegas at Black Hat, B-Sides and DEF CON for security researchers with legal questions about their research or presentations. EFF’s Coders’ Rights Project protects programmers, researchers, hackers, and developers engaged in cutting-edge exploration of technology. Security and encryption researchers help build a safer future for all of us using digital technologies, but too many legitimate researchers face serious legal challenges that prevent or inhibit their work.

The 2017 summer security conference legal team will include:

  • Staff Attorney Kit Walsh, who works on exemptions protecting security research and vehicle repair, along with a host of other beneficial activities threatened by Section 1201, the anti-circumvention provision of the Digital Millennium Copyright Act (DMCA).
  • Criminal Defense Staff Attorney Stephanie Lacambra, a former Federal and San Francisco Public Defender who has turned her expertise toward defending your civil liberties online.
  • Senior Staff Attorney Nate Cardozo, a Computer Fraud and Abuse Act expert who works on issues including the Wassenaar Arrangement, cryptography, hardware hacking, and electronic privacy law.
  • Deputy Executive Director and General Counsel Kurt Opsahl, who leads the Coders’ Rights Project and has been helping security researchers present at the summer security conferences since DEF CON was at the Alexis Park.

If you are wondering whether your research falls into a legal gray area, or are concerned that a vendor will threaten legal action, please reach out to info@eff.org. All EFF legal consultations are pro bono (free), part of our commitment to the security researcher community. You can also stop by the EFF booths at each conference to make an appointment with one of our attorneys, though we highly recommend contacting us as far in advance of your talk as possible.

And as always, even if you don’t have a legal question, come say hi at the booth or catch one of our talks at DEF CON.

Categories: Aggregated News

Opponents Hope to Mislead California’s Legislators Before They Vote on Broadband Privacy Next Week

eff.org - Thu, 13/07/2017 - 03:58

The large broadband providers and their associations, who spent millions in Washington, D.C. to repeal broadband privacy just a few months ago in Congress, are fighting to protect their victory in California. They are throwing every superficial argument they can at A.B. 375 in hopes of confusing California’s legislature enough to give them a pass, despite an overwhelming 83% of the American public demanding a response to the Congressional Review Act repeal of their privacy rights.

EFF obtained copies of their letters and feels it is vitally important that California’s elected officials know the industry is unloading a plethora of misleading arguments, some of which the companies themselves are actively contradicting in other forums. Here are some examples of their attempt to have it both ways: they repealed our privacy rights in D.C., yet express shock and dismay that state legislatures would respond to the public’s demands.

We Warned ISPs That Repealing the Federal Protections Would Result in a Patchwork of State-by-State Laws

The irony is not lost on EFF: the very companies that spent millions of dollars lobbying in D.C. to repeal our federal broadband privacy rights are now fighting state attempts to protect consumers because they supposedly prefer a federal rule. Each state having to legislate broadband privacy individually, without a federal floor, is not ideal; we said as much during the fight in D.C. While California’s A.B. 375 represents model legislation EFF supports, not every state will enact the same law, and some states may leave their citizens completely unprotected. That is a far cry from where we were in 2016 before Congress repealed our broadband privacy rights, and it is because of companies like Comcast, AT&T, and Verizon that we have arrived at this point.

We fought hard to stop Congress from repealing our broadband privacy rights. Tens of thousands of Americans picked up the phone to demand that Congress vote no on the broadband privacy repeal, but they were ignored. Today, 83% of the public, regardless of political affiliation, believe that ISPs must secure customers’ permission before being allowed to sell their personal data. In other words, more than 8 out of 10 Americans support what A.B. 375 seeks to codify into law.

Despite our repeated warnings to the industry and Congress that eliminating the uniform federal framework protecting personal information would prompt states to step in to protect their citizens, they pushed ahead, and they now find themselves on defense across the country.

EFF supports states responding to the public’s demands for privacy protections, particularly in light of Congress’s failure to do so. This has become even more important as the Federal Communications Commission itself actively undermines consumer protections on behalf of Comcast, AT&T, and Verizon. It should surprise no one that state legislators who care about consumer privacy will act, and ultimately, having as many state laws on the books as possible to protect personal information is a superior outcome to having no clear protections at all.

And if A.B. 375 becomes law, we hope it will serve as the model for states across the country, avoiding a patchwork problem; but again, that problem was created by the ISP lobby repealing the federal rules in the first place.

AT&T is a Leader in Contradicting Itself

To California’s Legislature, AT&T right now is saying the following:

“AT&T and other major Internet service providers have committed to legally enforceable Privacy Principles that are consistent with the privacy framework developed by the FTC over the past twenty years.”

In essence, AT&T’s message is that there is no need to pass a state law because the Federal Trade Commission can enforce privacy law against the company. But what exactly is AT&T saying about the FTC’s enforcement power in the courts?

[Image: excerpt from AT&T’s 2016 brief in FTC v. AT&T Mobility]

That is right: AT&T is arguing that the FTC has no legal enforcement power over it. It is making that argument right now in the Ninth Circuit Court of Appeals, and if it wins there a second time (the case is being reheard en banc), California will have no Federal Trade Commission enforcer on privacy.

On other fronts AT&T and others are arguing that the bill is unnecessary because the FCC’s powers remain perfectly intact after the Congressional Review Act repeal.

“The bill is not needed. The FCC retains statutory authority to enforce consumer privacy protections with respect to Internet service providers.”  - AT&T

"We want to assure you that the action taken by Congress earlier this year has changed nothing for consumers." -CompTIA, TechNet, Bay Area Council

We have explained in detail exactly what Congress did when it invoked the Congressional Review Act to repeal our broadband privacy rights. Ironically, last week AT&T agreed with us when its association US Telecom petitioned the FCC to help clear up the mess created by the CRA repeal, because it has also muddied the waters for the industry’s efforts to combat robocalls. In essence, the carriers do not know their legal rights around sharing telephone customer information in that instance, just as customers no longer have clear legal rights to their broadband privacy. It is also worth noting that this is the same FCC that is now on course to end AT&T’s legal obligations to preserve an open Internet and protect privacy.

“We Don’t Engage in That Kind of Activity”

This is the biggest whopper being spread in Sacramento, because anyone who takes the time to look up the history of ISP conduct will quickly find that these companies have been trying to profit off their customers’ personal information for years. The problem, for them, has been that the law got in the way (until recently), or that elected officials put political pressure on ISPs to change their plans.

In 2008, Charter tested the idea of recording everything you do on the Internet and packaging it into profiles, using Deep Packet Inspection technology capable of detailed monitoring of your activity. The bipartisan political response from Congress was fierce, and Charter quickly backed down from its plans. It is worth noting that cable broadband services were not clearly covered by the Communications Act’s privacy obligations until the 2015 Open Internet Order.

We know that, as of 2015, telecom carriers were reported by Ad Age to “ingest” data from cellphones close to 300 times a day, every day, across 20 to 25 million mobile subscribers (we aren’t told which mobile telephone companies participate in this practice; they keep that a secret). That data is used to inform retailers about customer browsing activity, geolocation, and demographics.

We know that in 2011 ISPs engaged in search hijacking, in which your Internet search queries were monitored and rerouted in coordination with a company called Paxfire.

We know AT&T was inserting ads into the traffic of people who used its Wi-Fi hotspots in airports. Even small rural ISPs have engaged in ad injection to advertise on behalf of third parties.

We know AT&T, Sprint, and T-Mobile preinstalled “Carrier IQ” on their phones, which gave them the capability to track everything you do, from what websites you visit to what applications you use. It took a class action lawsuit for the carriers to begin backing down from this idea.

And lastly, we know that in 2014 Verizon tagged every one of its mobile customers’ HTTP connections with a semi-permanent super-cookie, and used those super-cookies to enable third parties such as advertisers to target individual customers. Not only that, but Verizon’s super-cookie also allowed unaffiliated third parties to track you, no matter what steps you took to preserve your privacy. And worst of all, AT&T was set to follow suit and get in on the action, but quickly retreated after Verizon got into legal trouble with the federal government.
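
For the technically inclined: the super-cookie was visible to any web server, because Verizon’s network injected an identifying header (widely reported as X-UIDH) into customers’ unencrypted HTTP requests in transit. A minimal sketch of how a site could observe it, assuming that header name:

    # Minimal WSGI app that reports whether a carrier injected the
    # tracking header into a plain-HTTP request. WSGI exposes the
    # request header X-UIDH as HTTP_X_UIDH.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        uidh = environ.get("HTTP_X_UIDH")
        body = ("Injected tracking header: " + uidh if uidh
                else "No X-UIDH header seen.").encode()
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body]

    # Visit over unencrypted HTTP from a mobile connection to test.
    make_server("", 8000, app).serve_forever()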

Pretending a Straightforward and Widely Accepted Definition of Broadband Is Untested

In several opposition letters, the opponents assert that the definition of “Internet access service” could sweep any Internet business into the legislation. This is a false reading of the definition in the bill, and likely an attempt to stall the legislation by pretending we have not been living with these definitions for seven years.

A.B. 375’s definition of ISPs mirrors the Federal Communications Commission’s definition of broadband service, which has been on the books since 2010 to institute Network Neutrality. The Public Utilities Code (the underlying statute for the Public Utilities Commission) has connected the definition of broadband to the FCC’s definition for the last 11 years.

A.B. 375 defines ISPs as follows:

“Internet service provider” means a person or entity engaged in the provision of Internet access service, but only to the extent that the person or entity is providing Internet access service.

“Internet access service” means a mass-market retail service by wire or radio that provides the capability to transmit data to and receive data from all or substantially all Internet endpoints, including any capabilities that are incidental to and enable the operation of the communications service, but excluding dial-up Internet access service. “Internet access service” also encompasses any service that the Federal Communications Commission or the Public Utilities Commission finds to be providing a functional equivalent to the service described in this subdivision.

Opponents are raising concerns about the term “functional equivalent” despite the 70 words preceding it that limit and explicitly define what an eligible functional equivalent is. Let’s break the definition down into its component parts to demonstrate. An ISP covered under A.B. 375 must be all of the following:

1) Mass-market retail service
2) Transmit data by wire or radio
3) Capable of receiving and sending data to all or substantially all Internet endpoints
4) Includes capabilities that are incidental to and enable the operation of the communications service
5) Does not include dial-up Internet
6) Directly provide the Internet access service
7) Includes services the FCC or CPUC finds to do parts 1-6 above

If This Level of Obfuscation to Prevent a Law That Restores Your Broadband Privacy Rights Upsets You, You Need to Pick Up the Phone

Take Action

Tell your representatives to support online privacy.

Categories: Aggregated News

Notice to the W3C of EFF's appeal of the Director's decision on EME

eff.org - Thu, 13/07/2017 - 02:20

[[Update, July 13: After consultation with W3C CEO Jeff Jaffe on timing, we've temporarily withdrawn this appeal, for one week, for purely logistical purposes. I am teaching a workshop all next week at UC San Diego and will re-file the objection at the end of the week, so that I will be able to devote undivided attention to garnering the necessary support from other W3C members. -Cory]]

Dear Tim, Jeff, and W3C colleagues,

On behalf of the Electronic Frontier Foundation, I would like to formally submit our request for an appeal of the Director's decision to publish Encrypted Media Extensions as a W3C Recommendation, announced on 6 July 2017.

The grounds for this appeal are that the question of a covenant to protect the activities that made DRM standardization a fit area for W3C activities was never put to the W3C membership. In the absence of a call for consensus on a covenant, it was improper for the Director to overrule the widespread members' objections and declare EME fit to be published as a W3C Recommendation.

The announcement of the Director's decision enumerated three ways in which DRM standardization through the W3C -- even without a covenant -- was allegedly preferable to allowing DRM to proceed through informal industry agreements: the W3C's DRM standard was said to be superior in its accessibility, its respect for user privacy, and its ability to level the playing field for new entrants to the market.

However, in the absence of a covenant, none of these benefits can be realized. That is because laws like the implementations of Article 6 of the EUCD, Section 1201 of the US Digital Millennium Copyright Act, and Canada's Bill C-11 prohibit otherwise lawful activity when it requires bypassing a DRM system.

1. The enhanced privacy protection of a sandbox is only as good as the sandbox, so we need to be able to audit the sandbox.

The privacy-protecting constraints the sandbox imposes on code only work if the constraints can't be bypassed by malicious or defective software. Because security is a process, not a product, and because there is no security through obscurity, the claimed benefits of EME's sandbox require continuous, independent verification in the form of adversarial peer review by outside parties who do not face liability when they reveal defects in members' products.

This is the norm with every W3C recommendation: that security researchers are empowered to tell the truth about defects in implementations of our standards. EME is unique among all W3C standards past and present in that DRM laws confer upon W3C members the power to silence security researchers.

EME is said to respect user privacy on the basis of the integrity of its sandboxes. A covenant is absolutely essential to ensuring that integrity.

2. EME's accessibility considerations omit the automated generation of accessibility metadata, and without it, EME's accessibility benefits are constrained, to the detriment of people with disabilities.

It's true that EME goes further than other DRM systems in making space available for the addition of metadata that helps people with disabilities use video. However, as EME is intended to restrict the usage and playback of video at web-scale, we must also ask ourselves how metadata that fills that available space will be generated.

For example, EME's metadata channels could be used to embed warnings about upcoming strobe effects in video, which may trigger photosensitive epileptic seizures. Applying such a filter to (say) the entire corpus of videos available to Netflix subscribers who rely on EME to watch their movies would safeguard people with epilepsy from risks ranging from discomfort to severe physical harm.

There is no practical way in which a group of people concerned for those with photosensitive epilepsy could screen all those Netflix videos and annotate them with strobe warnings, or generate them on the fly as video is streamed. By contrast, such a feat could be accomplished with a trivial amount of code. For this code to act on EME-locked videos, EME's restrictions would have to be bypassed.
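
As a rough illustration of that "trivial amount of code," here is a minimal sketch of a luminance-flash counter, loosely modeled on the WCAG three-flashes-per-second guideline. The thresholds are illustrative, and it assumes access to decoded frames -- which is precisely the access EME restricts:

    import numpy as np

    def strobe_warnings(frames, fps, delta=0.1, flashes_per_second=3):
        """Return timestamps (in seconds) of possible seizure-inducing
        flashing. frames is an array of shape (n, height, width) with
        grayscale values in [0, 1]; thresholds are illustrative only."""
        luminance = frames.mean(axis=(1, 2))        # mean brightness per frame
        jumps = np.abs(np.diff(luminance)) > delta  # big frame-to-frame changes
        window = int(fps)                           # one-second sliding window
        return [i / fps for i in range(len(jumps) - window + 1)
                if jumps[i:i + window].sum() >= flashes_per_second]

    # Synthetic check: two seconds of 24 fps video alternating dark and
    # bright frames is flagged from the very first window.
    clip = np.array([np.full((4, 4), i % 2, dtype=float) for i in range(48)])
    print(strobe_warnings(clip, fps=24.0)[:3])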

It is legal to perform this kind of automated accessibility analysis on all the other media and transports that the W3C has ever standardized. Thus the traditional scope of accessibility compliance in a W3C standard -- "is there somewhere to put the accessibility data when you have it?" -- is insufficient here. We must also ask, "Has W3C taken steps to ensure that the generation of accessibility data is not imperiled by its standard?"

There are many kinds of accessibility metadata that could be applied to EME-restricted videos: subtitles, descriptive tracks, translations. The demand for, and utility of, such data far outstrips our whole species' ability to generate it by hand. Even if we all labored for all our days to annotate the videos EME restricts, we would but scratch the surface.

However, in the presence of a covenant, software can do this repetitive work for us, without much expense or effort.

3. The benefits of interoperability can only be realized if implementers are shielded from liability for legitimate activities.

EME only works to render video with the addition of a nonstandard, proprietary component called a Content Decryption Module (CDM). CDM licenses are only available to those who promise not to engage in lawful conduct that incumbents in the market dislike.

For a new market entrant to be competitive, it generally has to offer a new kind of product or service, a novel offering that overcomes the natural disadvantages that come from being an unknown upstart. For example, Apple was able to enter the music industry by engaging in lawful activity that other members of the industry had forsworn. Likewise, Netflix still routinely engages in conduct (mailing out DVDs) that DRM advocates deplore but are powerless to stop, because it is lawful. The entire cable industry -- including Comcast -- owes its existence to the willingness of new market entrants to break with the existing boundaries of "polite behavior."

EME's existence turns on the assertion that premium video playback is essential to the success of any web player. It follows that new players will need premium video playback to succeed -- but new players have never successfully entered a market by advertising a product that is "just like the ones everyone else has, but from someone you've never heard of."

The W3C should not make standards that empower participants to break interoperability. By doing so, EME violates the norm set by every other W3C standard, past and present.

Through this appeal, we ask that the membership be formally polled on this question: "Should a covenant protecting EME's users and investigators against anti-circumvention regulation be negotiated before EME is made a Recommendation?"

Thank you. We look forward to your guidance on how to proceed with this appeal.

Categories: Aggregated News

We Must Keep the Internet Free and Open. EFF, Tech Giants, Startups and Internet Users Tell FCC: Don’t Sell Out Net Neutrality To Appease ISPs

eff.org - Wed, 12/07/2017 - 23:24
AirBnB, Amazon, ACLU, Google, Etsy, Y Combinator Among Organizations Standing Up To Government Plan To Let ISPs Block Content, Charge Fees for ‘Fast Lanes’

San Francisco—The Electronic Frontier Foundation (EFF) and a broad coalition of user advocacy groups and major technology companies and organizations joined forces today to protest the FCC’s plan to toss out net neutrality rules that preserve Internet freedom and prevent cable and telecommunications companies from controlling what we can see and do online.

Without net neutrality, Internet service providers (ISPs) can block your favorite content, throttle or slow down Internet speeds to disadvantage competitors’ content, or make you pay more than you already do to access movies and other online entertainment.

To show just how important net neutrality is to free choice on the Internet, EFF and a host of other organizations are temporarily halting full access to their website homepages today with a prominent message that they’re “blocked.” Only upgrading to “premium” (read: more expensive) service plans will allow users access to blocked sites and services, the message says. (Don’t worry, the sites aren’t really blocked. Clicking on the message will take you to a link for DearFCC, our tool for submitting comments to the FCC and making your voice heard.)

“We’re giving subscribers a preview of their Internet experience if the FCC dismantles the current net neutrality rules,” said EFF Legal Director Corynne McSherry. “AT&T, Comcast, and Verizon will be able to block your favorite content or steer you to the content they choose—often without you knowing it. Those without deep pockets—libraries, schools, startups and nonprofits—will be relegated to Internet slow lanes.”

The online community—gig economy site Airbnb, maker site Etsy, file storage provider Dropbox, and hundreds more—has joined EFF and other user advocates today to deliver a message to the FCC: we want real net neutrality protections.

“It’s our Internet and we will defend it,” said EFF Senior Staff Attorney Lee Tien. “We won’t allow cable companies and ISPs, which already garner immense profits from customers, to become Internet gatekeepers.”

For EFF’s Day of Action page:
https://www.eff.org/deeplinks/2017/07/todays-day-lets-save-net-neutrality

For more about net neutrality:
https://www.eff.org/issues/net-neutrality

Contact:
Corynne McSherry, Legal Director, corynne@eff.org
Lee Tien, Senior Staff Attorney and Adams Chair for Internet Rights, lee@eff.org
Categories: Aggregated News
