News feeds

Releasing a Public Domain Image of the NSA's Utah Data Center

eff.org - Thu, 10/07/2014 - 08:10

When EFF joined with a coalition of partners to fly an airship over the NSA's Utah Data Center, the goal was to emphasize the need for accountability in the NSA spying debate. In particular, we wanted to point people to our new Stand Against Spying scorecard for lawmakers. But while we were up there, we got a remarkable and unusual view.

Today, continuing in the spirit of transparency and building on earlier efforts to shed some light on the physical spaces the US intelligence community has constructed, we're releasing a photograph of the Utah Data Center into the public domain, completely free of copyright and other restrictions. That means it can be used for any purpose—copied, edited, or even sold—online or in print, with or without attribution to the Electronic Frontier Foundation. We hope that making such an image available will help support conversations about the actions of the NSA.

The image below is just a preview—click through for the full high-resolution version.

This picture makes clear the scope and scale of the NSA's facilities—necessary because of the agency's "collect it all" posture and misguided dedication to creating ever-larger haystacks in pursuit of needles. Alongside our other efforts to bring accountability to massive NSA spying, hopefully this image can help make the infrastructure of that spying more tangible to the public.

The fine print: this image is released into the public domain under the terms of the CC0 waiver from Creative Commons. It is available from eff.org and the Wikimedia Commons.



To the extent possible under law, Electronic Frontier Foundation has waived all copyright and related or neighboring rights to this photograph of the NSA Utah Data Center.

Related Issues: NSA Spying

Why Do Patent Trolls Go to Texas? It’s Not for the BBQ

eff.org - Thu, 10/07/2014 - 04:46

There is a lot in our current patent system that needs reform. The Patent Office is too lax in granting patents, and Federal Circuit case law has consistently favored patentees. Another part of the problem is forum shopping by patentees, which leads to a disproportionate number of cases being filed in the Eastern District of Texas.

Back in 2011, This American Life did a one-hour feature called “When Patents Attack!” The story included a tour of ghostly offices in Marshall, Texas, where shell companies have fake headquarters with no real employees. For many people, it was their first introduction to the phenomenon that is the Eastern District of Texas, a largely rural federal court district that has somehow attracted a huge volume of high-tech patent litigation.

The Eastern District of Texas is still number one for patent cases. Last year, there were just over 6,000 patent suits filed in federal courts around the country, and one in four of those cases (24.54%, to be exact) was filed in the Eastern District of Texas. But why do patent plaintiffs, especially trolls, see it as such a favorable forum? Partly because the district's relatively rapid litigation timetable can put pressure on defendants to settle. But other local practices in the Eastern District also favor patentees. In our view, those practices are inconsistent with the governing Federal Rules, and they work to mask the consistent refusal by the courts in the Eastern District to end meritless cases before trial.

The podcasting patent troll litigation provides a recent case study. EFF is currently fighting the patent troll Personal Audio at the Patent Office, where we’re arguing that U.S. Patent 8,112,504 (the “podcasting patent”) is invalid. But Personal Audio is also involved in litigation against podcasters and TV companies in the Eastern District of Texas. We’ve been following that case, and unsurprisingly, the defendants there are also arguing that the podcasting patent is invalid. Specifically, the defendants are arguing that earlier publications and websites describe the system for “disseminating media content” that Personal Audio says it invented.

Recently, something happened in that case that we thought deserved notice: the defendants were denied the opportunity to have the judge rule on summary judgment on this issue. This deserves a bit of explanation: generally, parties go to trial to have their rights decided by a jury. But the Federal Rules provide the parties the right to get “summary judgment” (i.e., a decision from the judge) where there is no “genuine dispute as to any material fact.” To be clear, this doesn’t mean the parties have to agree on all the facts. What it means is that where the only disputes are not genuine (e.g., there isn’t enough evidence to support an argument) or not material (e.g., the resolution of the dispute would not change the outcome) summary judgment should be granted.

Unfortunately, the podcasting defendants in Texas weren’t even given this opportunity. You see, in the Eastern District of Texas, judges require parties to seek permission to file a motion for summary judgment. That is, unless and until the judge lets you file your motion (even if it is clear as day that you’re going to win), you’re going to trial. The defendants in Texas sought that permission, but in a one-sentence order, their request was denied. (Note: The judge is allowing the defendants to file summary judgment on other issues, namely non-infringement and license).

This matters because, under Federal Rule of Civil Procedure 56, defendants have a right to file a summary judgment motion and to have that motion decided. But in the Eastern District of Texas, the judge’s “rule” effectively denies them these rights, which we think is contrary to the law. Furthermore, this permission requirement likely means the already low grant rate of summary judgment overstates defendants’ real chances. A recent study found that judges in the Eastern District of Texas granted only 18% of motions for summary judgment of invalidity, compared with a nationwide grant rate of 31%. Considering that the study could not count instances where the defendant wasn’t allowed to file for summary judgment in the first place, we wouldn’t be surprised if the true grant rate were much lower, and thus even further out of line with the national average.

So why don’t parties challenge the judge’s rule? We don’t know for sure, but we have a good guess. And it has to do with the fact that a single judge in the Eastern District had over 900 patent cases assigned to him in 2013.

Patentees and defendants (and of course, their lawyers) are often "repeat players," meaning they will be in front of the same judge on many different occasions in different cases. It’s easy to see how telling a judge his rules are invalid may not be the best thing to do when you’re usually trying to get him to agree with you. Given the volume of high-stakes litigation there, no one wants to be unpopular in the Eastern District of Texas. (Indeed, of all the ice rinks in all the towns in all the world, why would patent heavyweight Samsung sponsor a rink directly in front of the courthouse in Marshall?) Another reason this type of rule may not get challenged is that it’s just not worth it. Even if you get to file your summary judgment motion, that doesn’t mean the judge will rule in a timely fashion (thus saving the expense of preparing for an unnecessary trial) or that you’ll win. And by the time you get to appeal, you have more important issues you want the appeals court to consider. In the end, the parties are stuck with the judge’s rules, and cases that should be decided quickly and early are left to languish.

And for patent trolls, this is a good thing. A plaintiff that doesn’t have its weak case quickly and cheaply rejected increases its settlement pressure and keeps its patent alive longer. In contrast, a defendant, faced with the possibility of significant trial costs, will more likely succumb to settlement pressure in order to get the case to go away at the least cost. Thus patent trolls, who are often asserting extremely broad and likely-invalid patents, are incentivized to file in the Eastern District of Texas knowing that there’s another hurdle an accused infringer has to overcome in order to win the case.

To be clear, local rules like those in the Eastern District violate the rights of both plaintiffs and defendants. By either refusing to rule on summary judgment or delaying a ruling right until the eve of trial, both sides incur significant costs. But it is easy to see how this would have a larger impact on those accused of infringing patents, especially in cases where the damages are less than the cost to go to trial.

We sympathize with judges who are trying to manage busy dockets. Understandably, the Court does not want to be faced with frivolous motions, or with a flood of motions from both sides. But the court has other methods of dealing with these issues (for example, limiting page length or allowing only one brief covering all issues). What the court is not entitled to do, however, is prevent the parties from filing at all.

With respect to the podcasting patent, we’ve linked to the parties’ papers on this issue here (defendants’ letter requesting permission to file a motion), here (Personal Audio’s response), and here (defendants’ reply letter). You can make up your own mind, but, in our view, Personal Audio made no showing of any genuine or material dispute. The Federal Rules, properly applied, do not allow a party to survive summary judgment with such weak and unsupported arguments.

The defendants in the podcasting case may still win a motion for summary judgment of non-infringement, but unfortunately that could leave Personal Audio free to sue others. But because of the judge’s order, if the current defendants in Texas want to invalidate the podcasting patent, they’re going to have to go to trial. It is unfair and irregular procedures like these that make the Eastern District of Texas such a popular destination for patent trolls. As part of any true patent reform, this kind of forum-shopping incentive needs to end.

Files: ecf_122_-_letter_brief_re_sj_of_invalidity.pdf, ecf_185_-_order_re_sj_filing.pdf, ecf_148_-_pa_response_to_letter_brief_re_sj_of_invalidity.pdf, ecf_165_-_reply_letter_brief_re_sj_of_invalidity.pdf
Related Issues: Patents, Patent Trolls, Innovation
Related Cases: EFF v. Personal Audio LLC

Forward Secrecy Brings Better Long-Term Privacy to Wikipedia

eff.org - Thu, 10/07/2014 - 04:32

Wikipedia readers and editors can now enjoy a higher level of long-term privacy, thanks to the Wikimedia Foundation's rollout last week of forward secrecy on its encrypted connections. Forward secrecy is an important Web privacy protection; we've been tracking its implementation across many popular sites with our Encrypt the Web Report. And though it may sound like an obscure technical switch, the impact is dramatic: forward secrecy ensures that every new connection uses unique and ephemeral key information, so traffic intercepted once can't later be decrypted if the private key gets compromised.

That kind of compromise can happen at the hands of law enforcement who demand a copy of a server's private key, or who compromise servers to get a copy without asking. It could also be exposed by a bug in the encryption software, as we saw earlier this year in the case of the widely discussed Heartbleed bug. Forward secrecy provides stronger protection against all of these possibilities, limiting exposure to the traffic collected after the key compromise and before a new key is in place.
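One way to see forward secrecy in action is to check which cipher suite your own connection to Wikipedia negotiates. The short Python sketch below is our own illustration using the standard ssl module, not an official EFF or Wikimedia tool: it connects to en.wikipedia.org and reports whether the negotiated suite uses an ephemeral (EC)DHE key exchange. (On TLS 1.3, which modern Python versions will usually negotiate, every suite is forward secret.)

    import socket, ssl

    # Connect to Wikipedia and report the negotiated TLS cipher suite.
    ctx = ssl.create_default_context()
    with socket.create_connection(("en.wikipedia.org", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="en.wikipedia.org") as tls:
            name, version, bits = tls.cipher()
            print(name, version, bits)
            # TLS 1.3 suites are always forward secret; on older protocol
            # versions, look for an ephemeral (EC)DHE key exchange.
            fs = version == "TLSv1.3" or name.startswith(("ECDHE", "DHE"))
            print("forward secrecy:", fs)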

As always, the privacy offered by this update is not absolute. One major caveat is that it only applies to connections that are encrypted with HTTPS in the first place, and currently that's not the case for many users. Wikipedia encrypts connections by default only for users who are actually logged in to the site, which likely excludes most non-editors. To take advantage of the enhanced privacy protection, users can log in—or even better, install our HTTPS Everywhere browser extension for Firefox, Firefox for Android, Chrome, or Opera, which automatically rewrites browser requests to Wikipedia to use HTTPS whether or not you are logged in.
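HTTPS Everywhere works by applying rewrite rules to URLs before a request leaves the browser. Here is a minimal Python sketch of the idea; the real extension ships per-site XML rulesets, and the pattern below is simplified for illustration:

    import re

    # Simplified, illustrative version of an HTTPS Everywhere-style rule;
    # the real extension ships per-site XML rulesets of from/to patterns.
    RULE_FROM = re.compile(r"^http://([a-z]+\.)?wikipedia\.org/")
    RULE_TO = r"https://\1wikipedia.org/"

    def rewrite(url):
        """Upgrade a Wikipedia URL to HTTPS, leaving other URLs untouched."""
        return RULE_FROM.sub(RULE_TO, url)

    print(rewrite("http://en.wikipedia.org/wiki/Forward_secrecy"))
    # -> https://en.wikipedia.org/wiki/Forward_secrecy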

Another limitation is that encrypted pages can still be subjected to traffic analysis. A sufficiently large and active adversary could keep a record of the file size of each article and request, for example, and could make inferences about intercepted traffic based on that information. In the future, that sort of attack could be mitigated by “padding” files in transit—adding some filler data so they cannot be identified by their size. But even in the short term, there are definite advantages to raising the sophistication and expense needed to mount an attack.
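As a concrete sketch of the padding idea (our own toy example, not a deployed mitigation), the snippet below pads every response up to a fixed bucket size, so that many different pages share the same length on the wire:

    def pad_to_bucket(payload, bucket=4096):
        """Pad payload with filler so its length is a multiple of `bucket`."""
        filler = (-len(payload)) % bucket
        return payload + b"\0" * filler

    page = b"x" * 61234                  # a hypothetical 61,234-byte article
    padded = pad_to_bucket(page)
    print(len(page), "->", len(padded))  # 61234 -> 61440
    # Every page now has a length that is a multiple of 4096 bytes, so many
    # different articles share each possible size on the wire, making
    # size-based identification much less precise.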

The case for long-term privacy is easy to understand where a site contains private communications, but it's just as important for sites like Wikipedia or news sites that mostly present public information. That's because HTTPS protects not just the contents of each page, but also data about which specific pages a user visits. Without HTTPS, your browsing history through Wikipedia could be exposed to an eavesdropper that is on the same network, has access to your Internet service provider, or is widely scooping up traffic. With HTTPS and forward secrecy, that history is much more difficult to access.

Giving Wikipedia readers an enhanced level of privacy is undoubtedly a good thing for fostering intellectual freedom, and it allows users to explore issues they might otherwise shy away from. It's heartening, then, to see the Wikimedia Foundation take this next step on its encryption roadmap, especially in light of the mounting disclosures about government surveillance.

With this update, Wikipedia joins the growing ranks of high profile sites enabling this important Web privacy feature. Google was the first major site to do so all the way back in 2011. As we've tracked forward secrecy on our Encrypt the Web Report, we've seen adoption by many more major sites, such as Dropbox, Facebook, Twitter, Wordpress, and recently Microsoft's OneDrive and Outlook services.

Related Issues: Privacy, Encrypting the Web, Security

The Unchecked Spying Needs to Stop

freepress.net - Thu, 10/07/2014 - 03:08
Craig Aaron, July 9, 2014. Today’s revelations that the FBI and the NSA have been spying on prominent members of the Muslim-American community should trouble all of us.

We Join Dozens of Organizations and Businesses to Protest TPP Copyright Proposals

eff.org - Thu, 10/07/2014 - 01:39

Today, EFF and its partners in the global Our Fair Deal coalition join together with an even more diverse international network of creators, innovators, start-ups, educators, libraries, archives and users to release two new open letters to negotiators of the Trans-Pacific Partnership (TPP).

The TPP, although characterized as a free trade agreement, is actually far broader in its intended scope. Among the many changes to which it could require the twelve negotiating countries to agree is a slate of increased rights and privileges for copyright holders.

With no official means of participating in the negotiations, the global community of users and innovators who will be affected by these proposed changes has been limited to expressing its concerns through open letters to political representatives and to the officials negotiating the agreement.

Each of the two open letters released today focuses on a separate element of the heightened copyright regime that the TPP threatens to introduce, and each is endorsed by a separate group of signatories representing those most deeply affected by the proposed changes.

Intermediary Copyright Enforcement

As the document below describes, countries around the Pacific rim are being pressured to agree to proposed text for the TPP that would require them to adopt a facsimile of the DMCA to regulate the take-down of material hosted online, upon the mere allegation of copyright infringement by a claimed rights-holder. Indeed, industry lobbyists are pushing for an even stricter regime, dubbed "notice and staydown", that would make it harder than ever before for users and innovators to safely publish creative, transformational content online.

Read the full letter here

Amongst the 65 high-profile signatories who have endorsed this open letter as of today are Reddit, the Internet Archive, Stack Exchange and Namecheap.

Copyright Term Extension

The rash 20-year extension of the term of copyright protection in the United States in 1998 confounded economists and frustrated librarians, archivists and consumers, who were consequently starved of new public domain works until 2019. Now the USA intends to compound its error by exporting that extension to all of the other TPP negotiating countries—or at least, those that haven't already yielded to bilateral pressure to extend their copyright terms. As the letter below explains, this would be a senseless assault on the public domain and on the libraries, authors, educators, users and others who depend upon it.

Read the full letter here

The letter on copyright term extension has been endorsed by 35 organizations so far, including Creative Commons, the Wikimedia Foundation, Public Knowledge and the International Federation of Library Associations and Institutions (IFLA).

Express your support

Although the letters have been presented to TPP negotiators today, they will remain open for further signatories to express their support, and may be presented again in future rounds. Interested organizations can express their interest in endorsing the open letters on copyright term extension and intermediary liability using the links given here.

Individuals who are not affiliated with a company or organization are encouraged instead to take action through the Our Fair Deal coalition's petition (can we take it to 20,000 signatories by this weekend?) and, for American citizens, through EFF's action to oppose fast-track authority.

Related Issues: Fair Use and Intellectual Property: Defending the Balance, International, Trans-Pacific Partnership Agreement

EFF Statement on Intercept Article Revealing Surveillance of Muslim-American Activists

eff.org - Thu, 10/07/2014 - 00:11

The Intercept published an article last night describing secret foreign intelligence surveillance targeting American citizens. One of those citizens, Nihad Awad, is the executive director and founder of the Council on American-Islamic Relations (CAIR), the nation’s leading Muslim advocacy and civil rights organization and a long-time client of EFF.

In response, EFF Staff Attorney Mark Rumold stated:

EFF unambiguously condemns government surveillance of people based on the exercise of their First Amendment rights. The government’s surveillance of prominent Muslim activists based on constitutionally protected activity fails the test of a democratic society that values freedom of expression, religious freedom, and adherence to the rule of law.

Today’s disclosures – that the government has actively targeted leaders within the American Muslim community – are sadly reminiscent of government surveillance of civil rights activists and anti-war protesters in the 1960s and 70s. Surveillance based on First Amendment-protected activity was a stain on our nation then and continues to be today. These disclosures yet again demonstrate the need for ongoing public attention to the government’s activities to ensure that its surveillance stays within the bounds of law and the Constitution. And they once again demonstrate the need for immediate and comprehensive surveillance law reform.

We look forward to continuing to represent CAIR in fighting for its rights, as well as the rights of all citizens, to be free from unconstitutional government surveillance.

EFF represents CAIR Foundation and two of its regional affiliates, CAIR-California and CAIR-Ohio, in a case challenging the NSA’s mass collection of Americans’ call records. More information about that case is available at: First Unitarian Church of Los Angeles v. NSA.

Related Issues: NSA Spying
Related Cases: First Unitarian Church of Los Angeles v. NSA

Rights That Are Being Forgotten: Google, the ECJ, and Free Expression

eff.org - Wed, 09/07/2014 - 15:10

Google’s handling of a recent decision by the European Court of Justice (ECJ) that allows Europeans to request that public information about them be deleted from search engine listings is causing frustration amongst privacy advocates. Google—which openly opposed interpreting Europe’s data protection laws as requiring the removal of publicly available information—is being accused by some of intentionally spinning the ECJ’s ruling to appear ‘unworkable’, while others—such as journalist Robert Peston—have expressed dissatisfaction with the ECJ ruling itself.

The issue with the ECJ judgement isn't European privacy law, or the response by Google. The real problem is the impossibility of an accountable, transparent, and effective censorship regime in the digital age, and the inevitable collateral damage borne of any attempt to create one, even from the best intentions. The ECJ could have formulated a decision that would have placed Google under the jurisdiction of the EU’s data protection law, and protected the free speech rights of publishers. Instead, the court has created a vague and unappealable model, where Internet intermediaries must censor their own references to publicly available information in the name of privacy, with little guidance or obligation to balance the needs of free expression. That won’t work in keeping that information private, and will make matters worse in the global battle against state censorship.

Google may indeed be seeking to play the victim in how it portrays itself to the media in this battle, but Google can look after itself. The real victims in this battle lie further afield, and should not be ignored.

The first victim of Google’s implementation of the ECJ decision is transparency under censorship. Back in 2002—in the wake of bad publicity following the company’s removal of content critical of the Church of Scientology—Google established a policy of informing users when content was missing from search engine results. This, at least, gave some visibility when data was hidden away from them. Since then, whenever content has been removed from a Google search, the company has posted a message at the bottom of each search page notifying its users, and if possible they’ve passed the original legal order to Chilling Effects. Even during its ill-considered collaboration with Chinese censors, Google maintained this policy of disclosure to users; indeed, one of the justifications the company gave for working in China is that Chinese netizens would know when their searches were censored.

[Image caption] Right to be Non-Existent: Google warns of potential removals, even when the person you've searched for doesn't exist.

Google's implementation of the ECJ decision has profoundly diluted that transparency. While the company will continue to place warnings at the bottom of searches, those warnings have no connection to deleted content. Rather, Google is now placing a generic warning at the bottom of any European search that it believes is a search for an individual name, whether or not content related to that name has been removed.

Google’s user notification warnings have now been rendered useless for providing any clear indication of censored content. (As an aside, this means that Google is also now posting warnings based on what its algorithms think "real names" look like—even though these determinations are notoriously inaccurate, as we pointed out during Google Plus's Real Names fiasco.)

The second victim of Google’s ECJ implementation is fairness. After Google informed major news media like the Guardian UK and BBC that they were being censored, those sites noted—correctly—that legitimate journalism was being silenced. Google subsequently restored some of the news stories it had been told to remove. Will Google review its decisions when smaller media, such as bloggers, complain? Or does the power to undo censorship remain only with the traditional press and their bully pulpit? Even the flawed DMCA takedown procedure includes a legally defined path for appealing the removal of content. For now it seems that restorations will rely not on a legal right for publishers to appeal, but rather on the off chance that intermediaries like Google will assume the risk of legal penalties from data protection authorities, and restore deleted links.

Which brings us to the third victim: Europe's privacy law itself. Europe's privacy regime has long been a model for effective and reasonable governance of privacy. Its recently updated data protection regulation provides an opportunity for the European Union to define how the right to privacy can be defended by governments in the modern digital era.

Tying the data protection regulation to censorship risks discrediting its aims and impugning its practicality.

“Minor” censorship is still censorship

Before Google Spain v. Gonzalez was decided by the ECJ, the court’s advisor, Advocate General Jääskinen, spelled out a reasonable model for deciding the case, one that would have made Google and other non-European companies liable to follow EU privacy law but would not have required the deletion or hiding of public knowledge. Instead, the court gave credence to the idea that public censorship had a place in “fixing” privacy violations.

Some of the arguments in favor of the ECJ censorship model are reminiscent of other attempts to aggressively block and filter data, including the ongoing regulatory battles against online copyright infringement. While it can be argued that the latest removals are “hugely less significant than the restrictions that Google voluntarily imposes for copyright and other content,” they are no less insidious. Every step down the road of censorship is damaging. And when each step proves—as it did with copyright—to be ineffective in preventing the spread of information, the pressure grows to take the next, more repressive, step.

Currently the EU’s requirement that Google censor certain entries can easily be bypassed by searching on Bing, say, or by using the US Google search, or by adding search terms beyond simply a name. That is not surprising. Online censorship can almost always be circumvented. Turkey’s ban on Twitter was equally useless, but extremely worrying nonetheless. Even Jordan’s draconian censorship of news sites that fail to register for licenses has been bypassed using Facebook...but should be condemned on principle regardless.

And a fundamentally unenforceable law is guaranteed to be the target of calls for increasingly draconian enforcement, as the legal system attempts to sharpen it into effectiveness. If Bing is touted as an alternative to Google, then the pressure will grow on Bing to perform the same service (Microsoft says it is already preparing a similar deletion service). Europe’s data protection administrators may grow unhappy that simply searching on google.com instead of google.fr or google.co.uk will reveal what was meant to be forgotten, and—as Canada's courts have already demanded—order search engines to delete data globally.

At the very least, European regulators need to stop thinking that handing over the reins of content regulation to the Googles and Facebooks of this world will lead anywhere good. The intricacies of privacy law need to be supervised by regulators, not paralegals in tech corporations. Restrictions on free expression need to be considered, in public, by the courts, on a case-by-case basis, and with both publishers and plaintiffs represented, not via an online form and the occasional outrage of a few major news sources. And online privacy needs better protection than censorship, which doesn't work, and causes so much more damage than it prevents.

Related Issues: Free Speech, Privacy, Search Engines

Net Neutrality and Transparency Principles Must Extend to Mobile Internet Access Too

eff.org - Wed, 09/07/2014 - 10:09

Recent debate about network neutrality has largely focused on how to make sure broadband providers don’t manipulate their customers’ Internet connections (or as John Oliver put it, how to prevent “cable company f*ckery”). But in today’s world of smartphones and tablets people are spending less of their time on the Internet typing at a computer and more of it swiping on a smartphone. This is why it’s critically important for net neutrality principles to apply to mobile broadband too.

The good news is that there is greater competition in the mobile broadband space than in the wired broadband market. Dissatisfied customers should be more able to vote with their wallets and pick a new carrier (absent unduly burdensome, anti-competitive switching costs). That could change, however, and that means we need to be paying attention. To help that along, here's a quick explainer.

Smartphones and tablets are computers

A smartphone (or a tablet) is just another type of computer—it just happens to be able to make phone calls and take pictures too. And a smartphone’s Internet connection is its most important feature: after all, how “smart” is a phone that can’t look up directions, share photos or videos, or browse the web (except through Wi-Fi)?

At the same time, people are spending more and more time on the Internet via mobile devices. And for many, mobile devices are the primary source of Internet access. Over half of American adults use smartphones. What's more, African American and Latino communities are more likely to access the Internet on a mobile device than on a home wire-line connection. The Internet should be no less open on these platforms.

The ubiquity and necessity of mobile Internet means that it’s vital that we ensure that mobile providers don’t abuse their control. And that means we need net neutrality for mobile broadband too.

What mobile net neutrality looks like today

Unfortunately, having more competition in this space isn't preventing non-neutral behavior. Equally important, there aren't enough transparency requirements for mobile Internet service to let users exercise their right to vote with their wallets.

As a result, mobile broadband providers are discriminating against certain types of applications and trying to extract more money from consumers depending on how they use their data. For example:

  • AT&T blocked Apple’s FaceTime service in order to force customers to pay higher prices;
  • Both AT&T and Sprint forbid users from maintaining “network connections to the Internet such as through a web camera” unless there’s an active user on the other end;
  • Both  AT&T and T-Mobile forbid users from using peer-to-peer file-sharing applications;
  • In 2011, Verizon blocked tethering, the practice of using your phone’s wireless data for other devices, in order to get customers to pay additional fees, until the FCC stopped them. (T-Mobile, Sprint, and AT&T still make users pay extra for tethering.)

As of now mobile providers are delivering a second-class Internet, where they get to decide what can and cannot be accessed via your smartphone.

What mobile net neutrality needs to look like

Instead, mobile device owners should enjoy the same levels of control for networked applications on their mobile devices as they do on their laptops and desktops. Service providers shouldn't be blocking sites, shaping traffic, discriminating based on application, etc.  In particular, they shouldn't be restricting tethering.  

Mobile transparency

Mobile broadband providers should also adhere to the same sort of enhanced transparency that’s needed from traditional wire-line broadband providers. That means mobile providers need to regularly disclose what sorts of congestion management techniques they use as well as statistics on download and upload speed, latency, and packet loss, indexed by cell tower location and time of day.

Mobile providers should meet this requirement in two different ways.

For one, we know that mobile companies have gone to extraordinary and intrusive lengths to collect data about network performance and user activity from their customers. While EFF in no way endorses intrusive data collection, if providers insist on continuing the practice, and that data can be released in a way that still protects users' privacy (e.g. via aggregation and anonymization) then service providers should be required to share that data. Such aggregated and anonymized data could help the FCC (and the public) see how mobile broadband network performance varies over time and rough geographical area--great coverage here/not enough coverage there, etc; or (if the data is broken out by endpoints), which services are being throttled due to peering, hosting, and content delivery network arrangements. If providers complain that they have been unfairly accused of non-neutral behavior, let them prove it.
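As a rough sketch of what privacy-preserving aggregation could look like (our own illustration in Python, not a description of any carrier's actual pipeline), the snippet below averages hypothetical per-connection speed samples by tower and hour, drops user identifiers, and publishes only buckets with enough samples to avoid singling anyone out:

    from collections import defaultdict
    from statistics import mean

    # Hypothetical per-connection samples: (user, tower, hour, download Mbps).
    samples = [
        ("alice", "tower-17", 9, 4.1),
        ("bob",   "tower-17", 9, 3.2),
        ("carol", "tower-17", 9, 5.0),
        ("dave",  "tower-42", 9, 11.8),
    ]

    MIN_BUCKET = 3  # don't publish buckets small enough to single anyone out

    buckets = defaultdict(list)
    for _user, tower, hour, mbps in samples:   # user IDs are dropped here
        buckets[(tower, hour)].append(mbps)

    for (tower, hour), speeds in sorted(buckets.items()):
        if len(speeds) >= MIN_BUCKET:
            print(f"{tower} {hour}:00  {mean(speeds):.1f} Mbps "
                  f"({len(speeds)} samples)")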

Additionally, providers should give consumers access to the phone's "baseband chip,” the chip in the device that actually communicates with the cellular network, so that we can take measurements of connection quality ourselves. Access to the baseband chip is vital because without baseband-layer performance measurement, consumers are stuck measuring performance from the OS layer, which is only an approximation to the true picture. This is like the difference between measuring traditional broadband speed using your laptop, versus actually measuring it at the cable or DSL modem—if your laptop is running slowly due to other programs the measurements could be skewed.
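The sketch below, again just an illustration with arbitrary host and sample-count choices, shows the kind of OS-layer measurement consumers are limited to today: it times TCP handshakes from the application level, where OS scheduling, the radio state, and everything in between get mixed into the number.

    import socket, time

    def connect_latency_ms(host="example.com", port=443, samples=5):
        """Average TCP handshake time, measured from the OS/application layer.

        The result mixes in OS scheduling, radio state, and everything else
        between the app and the network -- exactly the imprecision that
        baseband-level measurement would avoid.
        """
        times = []
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=5):
                times.append((time.monotonic() - start) * 1000)
        return sum(times) / len(times)

    print(f"{connect_latency_ms():.1f} ms average connect time")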

Zero-rating: when some websites don’t count against your data use

Zero-rating refers to when providers don’t count data to and from certain websites or services toward users’ monthly data limits. T-Mobile’s recent announcement of its Music Freedom plan is a good example of zero-rating: users can stream all the music they want from certain services without worrying about their data limit.

Technically, zero-rating is a type of data discrimination: it allows a mobile broadband provider to influence what Internet services people are more likely to use. In this way zero-rating allows mobile broadband providers to pick winners instead of leaving that determination to the market, thereby stifling competition and innovation.

To be clear, zero-rating has sometimes been used for laudable purposes. For example, the Wikipedia Zero program allows users to access Wikipedia for free. But while it's tempting to defend preferential treatment for an invaluable service like Wikipedia, zero-rating still makes it harder for other innovative services -- for-profit or nonprofit -- to get in the game.

Where do we go from here?

Right now the focus of the net neutrality debate is traditional broadband. But we need to prevent “mobile broadband f*ckery” too. Accessing the Internet is accessing the Internet, no matter which kind of computer you use. And just as there's no silver bullet to ensure a neutral net for wired broadband service, it'll take an ensemble of solutions to keep our mobile connectivity non-discriminatory as well. That means more competition, community-based solutions (like the Open Wireless Movement), innovation, transparency, and prohibitions against non-neutral practices, like the blocking of tethering by mobile providers.

In the meantime, the FCC wants to hear from people across the country about how the proposed network neutrality rules will impact us all. So speak up and tell the FCC how and why you use the Internet over your mobile device. Let's be sure they hear us loud and clear: network neutrality must extend to every way we access the Internet, whether we're at a desk or on a smartphone.

Related Issues: Net Neutrality

TPP Negotiations Go Further Underground with Unprecedented Secrecy Around Meetings in Canada

eff.org - Wed, 09/07/2014 - 01:45

EFF is in Ottawa this week for the Trans-Pacific Partnership (TPP) negotiations, to influence the course of discussions over regressive digital policy provisions in this trade agreement that could lead to an increasingly restrictive Internet. But this round is different from the others—the secrecy around the talks is wholly unprecedented. The Canadian trade ministry, which is hosting this round of talks, has likely heightened the confidentiality in response to the growing mass public opposition to this undemocratic, corporate-driven trade deal.

The trade offices from the 12 countries negotiating this deal no longer pre-announce details about the time and location of these negotiations. They don't bother releasing official statements about the negotiations, because they no longer call these “negotiation rounds” but “officials' meetings.” But the seeming informality of these talks is misleading—negotiators are going to these so-called meetings to secretly pull together a deal. As far as we know, they're still discussing whether to extend the international norm for copyright terms even further. They are negotiating provisions that could lead to users being censored and filtered over copyright, with no judicial oversight or consideration for fair use. And trade delegates are deliberating how serious a crime it should be for users to break the DRM on their devices and content, even when users don't know it's illegal and the content they're unlocking isn't restricted by copyright in the first place.

So for this negotiation, we had to rely on rumors and press reports to know when and where it was even happening. At first, there were confirmed reports that the next TPP meeting would take place at a certain luxury hotel in downtown Vancouver. So civil society began to mobilize, planning events in the area to engage users and members of the public about the dangers of TPP. Then seemingly out of the blue, the entire negotiating round was moved across the country to Ottawa. There's no way to confirm whether this was a deliberate misdirection, but either way it felt very fishy.

Given this level of secrecy, it goes without saying that there is no room for members of civil society or the public to engage directly with TPP negotiators. Toward the beginning of the TPP talks, we were given 15 minutes to present to stakeholders, in addition to a stakeholder event that let us circulate in a big room to meet and pass information to negotiators who walked by. Then our presentation time was cut down to ten minutes (after we made some noise about it being cut to a mere eight). In the rounds that followed, the stakeholder event was removed from the schedule of official rounds entirely. These sessions never provided sufficient time to convey to negotiators the major threats we see in this agreement, so they already seemed a superficial nod to public participation. But now, negotiators don't even pretend to give us their ear.

Of course, corporate lobbyists continue to have easy access to the text. Advisors to major content industries can read and comment on the text of the agreement on their private computers. But those of us who represent the public interest are left to chase negotiators down hotel hallways to make our concerns heard.

As we watch the TPP crawl toward being finalized and signed, and eventually tainting our laws with its one-sided corporate agenda, we need to keep this fact in mind: laws made in secret, with no public oversight or input, are illegitimate. That is not how law is made in democracies. If we're to defend the fundamental democratic rule that law is based on transparent, popular consensus, we need to fight back against an agreement that engages in such a secretive, corporate-captured process.

~

Additional Resources:

Michael Geist: Why The Secrecy on the TPP Talks in Ottawa This Week? Because There is Something to Hide

Council of Canadians: Secretive critical talks on the Trans Pacific Partnership happening in Ottawa

Related Issues: Fair Use and Intellectual Property: Defending the Balance, International, Trans-Pacific Partnership Agreement

The Next Patent Office Director Probably Shouldn't Be One Of The Guys Who Killed Patent Reform

eff.org - Tue, 08/07/2014 - 08:17

Philip Johnson is Chief Intellectual Property Counsel of Johnson & Johnson, one of the largest pharmaceutical companies in the world. He is also a representative member of the Coalition for 21st Century Patent Reform, the leading trade group opposing patent reform this past year.

And now he's rumored to be next in line to be the director of the United States Patent and Trademark Office.

What?

That's exactly what we're asking ourselves. Why would an administration that has ostensibly been so pro-reform over the last year nominate such an entrenched insider? (Though perhaps this isn't so shocking a question, as those of us working on the net neutrality fight would remind ourselves.) Although it would seem that Johnson is eminently qualified as a patent expert, many of his views lie contrary to recent reform efforts—including reforms proposed by the White House itself.

The Coalition for 21st Century Patent Reform, or 21C, represents companies in the pharmaceutical and biotech industries, among others. These industries, flanked by trial lawyers and universities, helped water down and ultimately kill patent reform this year. In fact, Johnson's employer, a member of 21C, was recently called out as leading the charge against patent reform.

For example, 21C worked to remove the expansion of covered business method (CBM) patent review from proposed legislation [PDF], which is a big part of the reason why such language didn't appear in the final version of the House's Innovation Act or in the latest Senate proposals. Johnson's position runs directly contrary to the White House's proposal of early 2013, which recommended that Congress expand the CBM provision to include computer-enabled patents.

The latest versions of the Senate bill—known as the "Schumer-Cornyn Compromise"—contained solid language about stays of discovery and heightened pleading, both of which 21C had vigorously opposed [PDF] and worked to remove.

What We Need Instead

What we need is someone who understands the problems with patent law, especially when it comes to software patents. Some point out that David Kappos, the previous director of the Patent Office, came from the tech industry, and argue that the next director therefore has to come from pharma or biotech. That push does a great job of highlighting the fact that one single patent system shouldn't apply to technologies as different as pharmaceuticals and software. In any event, the nominee to head the Patent Office shouldn't be the face of opposition to patent reform that was championed by the White House, passed by a majority of the House, and supported by a considerable proportion of Senators.

We need someone who would champion needed reform—not just tackling trolls, but focusing on the most pressing issue in the patent world: quality. When Kappos became director of the Patent Office in 2009, he inherited a huge backlog of unexamined (or rejected and refiled) patent applications, and he set out to fix this problem. His solution, however, was to lower standards, allowing a flood of low-quality patents to issue.

The patent logjam is a symptom of a system that allows poor quality patents. When the issuance rate for applications is so high, of course more people are going to apply for patents. And more of these applications are going to be very stupid. And more of these patents will haunt our innovation space just shy of two decades from now.

Let's not appoint someone who prefers the status quo when it comes to the most serious patent issues—especially when his views run contrary to the Obama Administration's stated positions. Instead, we need a director who understands that bad patents are the root of the many problems we see today, and who will see the need to reject quantity in favor of quality.

Related Issues: Patents, Legislative Solutions for Patent Reform, Patent Trolls

Myanmar's Facebook Block Could Signal More to Come

eff.org - Tue, 08/07/2014 - 05:49

In early March, Yangon—the former capital of Myanmar (Burma)—played host to a conference held by the East-West Center, called "Challenges of a Free Press." The event (which I attended) featured speakers from around the world, but was more notable for its local speakers, including Aung San Suu Kyi and Nay Phone Latt, a blogger who spent four years as a political prisoner before being released under a widespread presidential amnesty in 2012. In a country where the Internet was heavily censored for many years, online freedom was discussed with surprising openness, although concerns about hate speech on platforms like Facebook were raised repeatedly.

This past week, as violence escalated in Mandalay, authorities blocked Facebook to coincide with a curfew imposed on the city. While leaders in the country, including Suu Kyi, have spoken of the responsibility of journalists in reporting the truth (which some have interpreted as an early call for online censorship), journalist Aung Zaw, writing for the Burmese publication, The Irrawaddy, says "...don’t expect the government to take action against the hatemongers—it isn’t going to happen." Instead of dealing with the calls for violence, it seems, they've taken the easy way out, by imposing censorship.

In his piece, Aung Zaw cautions against Western governments putting too positive a spin on Myanmar's "reforms" while the country's "dream of democracy looks increasingly like it is turning into a carefully orchestrated nightmare." Indeed, as the violence escalates, remaining vigilant about increased censorship is imperative. The recent opening of the Internet may allow hate speech to spread more quickly, but it has also enabled innovation, access to information, and growth of a small technology sector. So while the intent behind censoring Facebook is to stem the tide of hateful speech, Myanmar's history suggests that hate speech might merely be the first thing to go.

Related Issues: Free Speech, International

Dear NSA, Privacy is a Fundamental Right, Not Reasonable Suspicion

eff.org - Fri, 04/07/2014 - 10:54

Learning about Linux is not a crime—but don’t tell the NSA that. A story published in German on Tagesschau, and followed up by an article in English on DasErste.de today, has revealed that the NSA is scrutinizing people who visit websites such as the Tor Project’s home page and even Linux Journal. This is disturbing in a number of ways, but the bottom line is this: the procedures outlined in the articles show the NSA is adding "fingerprints"—like a scarlet letter for the information age—to activities that go hand in hand with First Amendment protected activities and freedom of expression across the globe.

What we know

The articles, based on an in-depth investigation, reveal XKeyscore source code that demonstrates how the system works. XKeyscore is a tool the NSA uses to sift through the vast amounts of data it obtains. This source code would be used somewhere in the NSA’s process of collecting and analyzing that data to target certain activities. According to the Guardian, XKeyscore’s deep packet inspection software is run on collection sites all around the world, ingesting one or two billion records a day.

The code contains definitions that are used to determine whether to place a "fingerprint" on an online communication, to mark it for later. For example, the NSA marks online searches for information about certain tools for better communications security, or comsec, such as Tails.

As the code explained, "This fingerprint identifies users searching for the TAILs (The Amnesic Incognito Live System) software program, viewing documents relating to TAILs, or viewing websites that detail TAILs." Tails is a live operating system that you can start on almost any computer from a DVD, USB stick, or SD card. It allows a user to leave no trace on the computer they are using, which is especially useful for people communicating on computers that they don’t trust, such as the terminals in Internet cafes.

The NSA also defines Tor directory servers (by IP number) and looks for connections to the Tor Project website. This is hardly surprising, considering the documentation of the NSA’s distaste for Tor. It is, however, deeply disappointing. Using privacy and anonymity software, like Tor and Tails, is essential to freedom of expression.  
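To make the mechanism concrete, here is a rough Python re-imagining of how such fingerprint definitions behave; the rule names and patterns below are our own illustration, not the leaked source. Each fingerprint pairs a label with a pattern that is matched against intercepted traffic, and matching sessions are marked for later:

    import re

    # Rough re-imagining of fingerprint rules; names and patterns here are
    # our own illustration, not the leaked XKeyscore source.
    FINGERPRINTS = {
        "comsec/tails": re.compile(r"tails|amnesic incognito live system",
                                   re.IGNORECASE),
        "anonymizer/tor": re.compile(r"torproject\.org", re.IGNORECASE),
    }

    def fingerprint(session):
        """Return the labels of every rule matching an intercepted session."""
        return [name for name, pat in FINGERPRINTS.items()
                if pat.search(session)]

    print(fingerprint("GET /wiki/Tails HTTP/1.1\nHost: en.wikipedia.org"))
    # -> ['comsec/tails']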

Most shocking is the code that fingerprints users who visit Linux Journal, the website of a monthly magazine for enthusiasts of the open-source operating system.  The comments in the NSA’s code suggest that the NSA thinks Linux Journal is an "extremist forum," where people advocate for Tails. The only religious wars in the Linux Journal are between the devoted users of vi and emacs.

Learning about security is not suspicious

The idea that it is suspicious to install, or even simply want to learn more about, tools that might help to protect your privacy and security underlies these definitions—and it’s a problem. Everyone needs privacy and security, online and off. It isn’t suspicious to buy curtains for your home or lock your front door. So merely reading about curtains certainly shouldn’t qualify you for extra scrutiny.

Even the U.S. Foreign Intelligence Surveillance Court recognizes this, as FISA prohibits targeting people or conducting investigations based solely on activities protected by the First Amendment. Regardless of whether the NSA is relying on FISA to authorize this activity or is conducting the spying overseas, it is deeply problematic. The U.S. Constitution still protects people outside U.S. borders, and, as a U.S. appeals court recently recognized, even non-citizens are not bereft of its protections.

Moreover, privacy is a human right, which the U.S. has recognized by signing the International Covenant on Civil and Political Rights.  The fingerprinting program revealed today is fundamentally inconsistent with this right.

Tor is used to circumvent Internet censorship

The code focuses a lot on the Tor Project and its anonymity software. Tor is an essential tool for circumventing the Internet censorship that governments of countries such as China and Iran use extensively to control the flow of information and maintain their hold on power. In fact, Tor was developed with the help of the U.S. Navy, and it currently gets funding from several sources within the U.S. government, including the State Department. Secretary of State Hillary Clinton made support for anti-censorship tools a key element of her Internet policy at the State Department, declaring: "The freedom to connect is like the freedom of assembly in cyberspace."

You can still use Tor and TAILs

One question that is sure to come up is whether this means people desiring anonymity should stop using Tor or Tails. Here’s the bottom line: If you’re using Tor or Tails, there is a possibility that you will be subject to greater NSA scrutiny. But we believe that the benefits outweigh the burdens.

In fact, the more people use Tor, the safer you are. That’s why we’re continuing to run the Tor Challenge. The ubiquitous use of privacy and security tools is our best hope for protecting the people who really need those tools—people for whom the consequences of being caught speaking out against their government can be imprisonment or death. The more ordinary people use Tor and Tails, the harder it is for the NSA to make the case that reading about or using these tools is de facto suspicious.

Related Issues: Privacy, NSA Spying, Security

Is Europe Serious About Reforming Copyright, or Just Greasing the Squeaky Wheel?

eff.org - Fri, 04/07/2014 - 09:51

Coordinated enforcement of intellectual property (IP) rights—copyright, patents and trade marks—has been an elusive goal for Europe. Back in 2005, the European Commission struggled to introduce a directive known as IPRED2 that would criminalize commercial-scale IP infringements, but abandoned the attempt in 2010 due to jurisdictional problems. IP maximalists took another run at it through ACTA, the Anti-Counterfeiting Trade Agreement, but that misguided treaty was roundly defeated in 2012 when the European Parliament rejected it, 478 votes to 39.

Undeterred, the European Commission is trying once again. This time, it is trying to avoid a similarly humiliating defeat in Parliament by focusing on non-legislative strategies. But its effort to sidestep Parliament also means less political or judicial oversight. So it behooves us to take a close look at what is being proposed.

Intermediary enforcement

The most significant item of the Commission's 10 point action plan is the proposal to conclude "Memoranda of Understanding to address the profits of commercial scale IP infringements in the online environment, following Stakeholder Dialogues involving advertising service providers, payment services and shippers." For example, such Memoranda of Understanding might commit payment intermediaries or advertisers to undertake that they will not accept payments or run advertisements for a site accused of hosting infringing material, thereby depriving those sites of revenue.

This strategy is touted in an accompanying communication as "a rapid response mechanism to the IP infringement problem", since intermediaries can step in to halt alleged infringements far more quickly than judges, who have to take the time and trouble to actually hear evidence from both sides, and who are trained in copyright law.

The Commission describes this strategy for IP enforcement as a "follow the money" approach. Superficially, this perhaps sounds reasonable. But look closer at what "the money" really means. It does not—as you might expect—mean only targeting those who are making money from copyright infringement. Instead, it extends to anyone accused of infringing copyright on a "commercial scale."

This is where things get muddy, because there is no agreed definition within Europe of "commercial scale." Indeed, this lack of consensus was the motivation behind the failed IPRED2 directive. We do, however, have a good idea what the Commission would like "commercial scale" to include: any large-scale infringement "in which the professional organization of the activity, for example systematic cooperation with other persons, indicates a business dimension".

That potentially includes an incredibly broad swathe of non-commercial activity, including the hosting of fan art and fan fiction, fansubs and remixes—remembering that Europe, unlike the US, does not have a "fair use" copyright limitation. Indeed, the Commission has admitted that "it is difficult to determine in the abstract which acts of wilful trademark counterfeiting or copyright piracy are not 'on a commercial scale'", allowing only that "occasional acts of a purely private nature carried out by end-consumers normally would not constitute 'commercial scale' activities".

The danger that non-commercial, transformative activity will be swept up in the copyright enforcement frenzy is not merely theoretical. Last week it was revealed that 15 Korean fan subtitlers could face up to 5 years in jail in exchange for the time and effort they have devoted to popularizing their favorite soap operas. (Whilst you might wonder what this has to do with the EU, it was partly in response to its Free Trade Agreement with the EU that Korean copyright law was amended in 2011 to raise criminal penalties to this level.)

Admittedly, the European Commission's latest action plan is not about imprisoning people. But it would lock up many non-commercial websites. Deprived of the ability to receive money from donations or advertisements to cover hosting expenses, popular sites that host user-generated content may have no choice but to close.

Another option

There is an easier way for the European Commission to avoid sweeping non-commercial and publicly beneficial uses into the enforcement net, and it wouldn't require any extra enforcement resources at all. The solution is simply to make such content legal. Currently, the copyright limitations and exceptions allowed under EU law are itemized in Article 5 of its InfoSoc Directive, and although the list is quite detailed ("use in connection with the demonstration or repair of equipment", for instance), it does not contain the flexible copyright limitations that can help protect most mash-ups, remixes and fan works.

Modern copyright laws need to leave space for such new uses that don't unfairly detract from existing commercial markets for copyright works. In the United States and a growing number of other countries, this space is provided by flexible copyright limitations such as fair use.

Earlier this year, the European Commission wrapped up an online public consultation on the future of copyright in Europe, which garnered an amazing 11,117 responses. Many of those responses called for the modernization of European copyright law, including the introduction of new and updated copyright limitations and exceptions that would be better suited for the digital environment, such as an open-ended fair use exception or a more limited exception for user-generated content (UGC).

Although no response to that consultation has yet been officially released, we can get an inkling of how the Commission might view these proposals for reform from the recently leaked draft of a whitepaper that examines areas of EU copyright policy for possible review.

The whitepaper claims that there is "a lack of evidence that the current legal framework for copyright puts a break on or inhibits UGC" and recommends merely that the EU "clarify the application of relevant exceptions as they exist in EU law" and promote "licensing mechanisms…for those uses that clearly do not fall into these exceptions". (As to the latter, the Licenses for Europe consultation, which aimed to fill the digital deficits in copyright law through licensing, was boycotted last year by civil society groups due to the artificially narrow scope of the exercise.)

Similar reticence towards copyright law reform was demonstrated by the Commission this week at WIPO, where its representative made a very clear statement that it was not willing to consider work leading to an international instrument on limitations and exceptions for libraries and archives, doubling down on a position it adopted at the previous meeting of the same WIPO committee.

This does not paint a positive picture of the future of copyright in Europe. A single-minded focus on enforcement, even when limited to supposedly commercial scale infringements, will do little to foster innovation and creativity, and may indeed achieve the opposite effect. Rather than simply pandering to the IP enforcement lobby, Europe needs to start thinking outside the box—something that its talented fan artists, writers and remixers have been doing for years, in the shadow of an outdated copyright regime.

Related Issues: Fair Use and Intellectual Property: Defending the Balance, International, EFF Europe
Share this:   ||  Join EFF
Categories: Aggregated News

Is Your Android Device Telling the World Where You've Been?

eff.org - Thu, 03/07/2014 - 17:03

Do you own an Android device? Is it less than three years old? If so, then when your phone’s screen is off and it’s not connected to a Wi-Fi network, there's a high risk that it is broadcasting your location history to anyone within Wi-Fi range who wants to listen.

This location history comes in the form of the names of wireless networks your phone has previously connected to. These frequently identify places you've been, including homes (“Tom’s Wi-Fi”), workplaces (“Company XYZ office net”), churches and political offices (“County Party HQ”), small businesses (“Toulouse Lautrec's house of ill-repute”), and travel destinations (“Tehran Airport wifi”). This data is arguably more dangerous than that leaked in previous location data scandals because it clearly denotes, in human language, places where you've spent enough time to use the Wi-Fi.1 Normally eavesdroppers would need to spend some effort extracting this sort of information from the latitude/longitude history typically discussed in location privacy analysis. But even when networks seem less identifiable, there are ways to look them up.
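These lookups are easier than you might think: public wardriving databases map network names to physical coordinates. As a purely hypothetical sketch, the snippet below assumes you have WiGLE (wigle.net) API credentials; the endpoint and field names follow WiGLE's public API documentation and should be treated as assumptions to verify, not as part of our research.

    # Look up where a leaked network name has been observed (WiGLE API v2).
    import requests

    def locate_ssid(ssid, api_name, api_token):
        resp = requests.get(
            "https://api.wigle.net/api/v2/network/search",
            params={"ssid": ssid},
            auth=(api_name, api_token),  # per-account API credentials
            timeout=15,
        )
        resp.raise_for_status()
        for net in resp.json().get("results", []):
            # trilat/trilong are WiGLE's estimated coordinates for the network
            print(ssid, "->", net.get("trilat"), net.get("trilong"))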

We briefly mentioned this problem during our recent post about Apple deciding to randomize MAC addresses in iOS 8. As we pointed out there, Wi-Fi devices that are not actively connected to a network can send out messages that contain the names of networks they’ve joined in the past, in an effort to speed up the connection process.2 But after writing that post we became curious just how many phones actually exhibited that behavior, and how much information they leaked. To our dismay, we discovered that many of the modern Android phones we tested leaked the names of the networks stored in their settings (up to a limit of fifteen). And when we looked at these network lists, we realized that they were in fact dangerously precise location histories.

Aside from Android, some other platforms also suffer from this problem and will need to be fixed, although for various reasons, Android devices appear to pose the greatest privacy risk at the moment.3

In Android we traced this behavior to a feature introduced in Honeycomb (Android 3.1) called Preferred Network Offload (PNO).4 PNO is supposed to allow phones and tablets to establish and maintain Wi-Fi connections even when they’re in low-power mode (i.e. when the screen is turned off). The goal is to extend battery life and reduce mobile data usage, since Wi-Fi uses less power than cellular data. But for some reason, even though none of the Android phones we tested broadcast the names of networks they knew about when their screens were on, many of the phones running Honeycomb or later (and even one running Gingerbread) broadcast the names of networks they knew about when their screens were turned off.5
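You can observe these directed probe requests yourself. The following is a minimal sketch, assuming a Linux machine with the scapy packet library installed and a Wi-Fi adapter in monitor mode on an interface named "wlan0mon" (both assumptions about your setup, not a description of our test harness):

    # Print the network names (SSIDs) that nearby devices probe for.
    # Requires root and a wireless interface in monitor mode.
    from scapy.all import sniff, Dot11, Dot11Elt, Dot11ProbeReq

    def show_probe(pkt):
        if not pkt.haslayer(Dot11ProbeReq) or not pkt.haslayer(Dot11Elt):
            return
        ssid = pkt[Dot11Elt].info.decode("utf-8", errors="replace")
        if ssid:  # an empty SSID is a broadcast probe, not a directed one
            print(pkt[Dot11].addr2, "is looking for:", repr(ssid))

    sniff(iface="wlan0mon", prn=show_probe, store=False)

Each directed probe names a network the device has joined before, so a few minutes of listening near a sleeping phone can reveal a startling slice of its owner's history.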

Response from Google

When we brought this issue to Google’s attention, they responded:

"We take the security of our users' location data very seriously and we're always happy to be made aware of potential issues ahead of time. Since changes to this behavior would potentially affect user connectivity to hidden access points, we are still investigating what changes are appropriate for a future release."

Additionally, yesterday a Google employee submitted a patch to wpa_supplicant which fixes this issue. While we are glad this problem is being addressed so quickly, it will still be some time before that fix gets integrated into the downstream Android code. And even then, Android fragmentation and the broken update process for non-Google Android devices could delay or even prevent many users from receiving the fix. (We hope Google can make progress on this problem, too.)

Protective Steps You Can Take Today

With that said, a workaround is available (for most devices) for users who want to protect their privacy right now: go into your phone’s “Advanced Wi-Fi” settings and set the “Keep Wi-Fi on during sleep” option to “Never”. Unfortunately this will cause a moderate increase in data usage and power consumption—something users shouldn’t have to do in order to keep their phone from telling everyone everywhere they’ve been.

Unfortunately, on at least one device we tested (a Motorola Droid 4 running Android 4.1.2), even this wasn’t sufficient. On the Droid 4, and perhaps on other phones, the only practical way to prevent the phone from leaking location is to manually forget the networks you don't want broadcast, or disable Wi-Fi entirely whenever you aren't actively connecting to a known Wi-Fi network.6 You can also find apps that will do this automatically for you.

Location history is extremely sensitive information. We urge Google to ship their fix as soon as possible, and other Android distributors to offer prompt updates containing it.

  • 1. It can also be used to infer social links between device owners, especially when it comes to secure networks with unique names.
  • 2. This capability is also necessary in order to connect to “hidden” networks, because those networks don’t broadcast their existence like normal Wi-Fi networks. Instead, the phone or other device has to send out a message essentially asking “Are you there?” and if the hidden network is nearby, it will respond.
  • 3. In our testing no iOS 6 or 7 devices were affected, though we observed the same problem on one of several tested iOS 5 devices (an iPad), and earlier versions of iOS might or might not be affected. Many laptops are affected, including all OS X laptops and many Windows 7 laptops. Desktop OSes will need to be fixed, but because our laptops are not usually awake and scanning for networks as we walk around, location history extraction from them requires considerably more luck or targeting.
  • 4. The offending code is actually in an open source project called wpa_supplicant, which many Linux distributions, including Android, use to manage Wi-Fi. We want to give credit to Android developer Chainfire as well as several others on the XDA forums whose posts on this behavior were very informative. A couple of other researchers have previously critiqued this behavior.
  • 5. The list of phones we tested is available as a CSV file or a Google Doc.
  • 6. Note that this method isn’t foolproof, since an attacker might still be able to get your phone to transmit its known network list when it is connected by transmitting a packet that temporarily disconnects you. Then, depending on the timing, your phone may send the list of networks before it reconnects.
Files: ssid_leaking_devices.txt
Related Issues: Privacy, Cell Tracking
Share this:   ||  Join EFF
Categories: Aggregated News

Open Rights Group Launches Site to Track ISP Blocking in UK

eff.org - Thu, 03/07/2014 - 05:20

"Are you being blocked?" asks Open Rights Group's (ORG) newly-revamped website, "Blocked!"  The site, which relaunched today, allows users to test whether their websites are being blocked by one of the UK's 10 major Internet service providers (ISPs). Anyone who suspects their website to be a target of the ISPs' filters can detect a block by simply entering the URL of their site into the search bar provided.

The project seeks to address the problems of arbitrary blocking of websites prompted by concerns about child protection, copyright, and other issues. As ORG explains:

The UK government has pressured Internet Service Providers (ISPs) into promoting filters to prevent children and young people from seeing content that is supposed to be for over 18s. This may seem like a good idea, but in reality filters block much more than they are supposed to, which means information is being censored.

This project comes in the wake of the government's efforts to promote various filtering technologies as a means of preventing children from accessing inappropriate sexual material. However, on its wiki, ORG argues that in practice many more people are finding themselves behind filters, which block a much wider range of material than they are supposed to. Hence the goal of "Blocked!": to gather, archive, and understand why websites get blocked.

Using data from users, "Blocked!" will archive what kinds of websites are being blocked in the UK, and why. Coupled with other projects—such as ORG's 451 Unavailable, which seeks greater transparency from ISPs with respect to blocks that occur as a result of legal orders—this documentation of the disruptive effects of filters will aid in the fight to keep the Internet open. 

So far, the project's preliminary results show that 19 percent of the sites tested (out of 100,000 total) were found to be blocked by one ISP or another. The results also demonstrate a high level of variation between ISPs.  Notably, the "Blocked!" campaign itself has been reported blocked by two ISPs, BT and Virgin Media.

ORG plans to expand the project to conduct bulk tests using large lists of websites, starting with the most commonly visited ones. It remains to be seen what kind of impact the data will have, but as more users contribute tests, the archive will only become a more complete record of UK filtering.
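To give a concrete sense of how such a bulk test might work, here is a hypothetical sketch of the general approach; the block-page markers and example URLs are illustrative guesses, not ORG's actual methodology:

    # Fetch each URL over the connection under test and flag responses
    # that fail outright or resemble an ISP block page.
    import requests

    BLOCK_MARKERS = ("site blocked", "parental controls", "access denied")

    def looks_blocked(url):
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            return True  # unreachable; possibly blocked at the network level
        body = resp.text.lower()
        return any(marker in body for marker in BLOCK_MARKERS)

    for url in ("http://example.org", "http://example.net"):
        print(url, "->", "possibly blocked" if looks_blocked(url) else "reachable")

A real tester would also compare against an unfiltered control connection to rule out sites that are simply down, which is one reason crowdsourced results from many ISPs are so valuable.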

Related Issues: Free Speech, Content Blocking, International
Share this:   ||  Join EFF
Categories: Aggregated News

What on Earth Is Going On at the FCC? A Guide to the Proposed Net Neutrality Rules

eff.org - Thu, 03/07/2014 - 05:11

The main battlefield for the net neutrality fight right now is at the Federal Communications Commission (FCC), in a “rulemaking” underway this summer, which asks for public comment about a new set of proposed rules that the FCC claims will protect the open Internet. This process is one of the most important ways Internet users, businesses, trade groups, and public interest organizations can make their voice heard in this critically important national debate. To help that along, let's take a close look at the process and the proposal the FCC has put on the table. 

The quick version

This isn't the FCC's first net neutrality rodeo. Time and again, the FCC has proposed open Internet rules, but they keep getting knocked down in court. The FCC's latest proposed rules are intended to replace a prior set of regulations that a court threw out in January. The new proposal has three main parts. The first is a transparency rule that requires Internet access providers to disclose how they manage traffic and price their services. The second is a ban on blocking websites or other Internet services, and the third is a “no commercially unreasonable practices” rule that the FCC says will stop the sort of non-neutral practices by Internet providers that many people are concerned about.

EFF and many others believe the “commercially unreasonable practices” rule won’t stop non-neutral practices like special access deals, pay-for-play, and preferential treatment for privileged Internet users. And we continue to have the same concern about the proposed rules that we raised about the 2010 rules: namely, that the exceptions are too broad.

The public can comment on the FCC’s new proposal. Public comments are due July 15th, and “reply” comments in response to other commenters are due September 10th. EFF will be weighing in, and you should too.

Now for a slightly longer explanation.

What’s a rulemaking, anyway?

The FCC is using a process called “notice and comment rulemaking.” Agencies like the FCC aren’t elected by the people but can still make rules that all must follow. So, when the FCC makes rules, it has to follow a process that keeps it accountable to the people, at least in theory. The FCC usually has to publish its proposals for new rules along with an explanation, and then allow the public time to comment on them. By law, the FCC then has to take those comments, as a whole, into account when it writes the final rules. Congress can step in at any time by passing new laws for the FCC, and people affected by a final rule can challenge it in court, though in most cases the courts won’t second-guess an agency’s judgment about which rules are best.

The FCC is made up of five commissioners, one of whom is the chairman. It must include both Democrats and Republicans, and three out of the five typically come from the President’s party. Proposing new rules and issuing final rules both require a majority vote.

What’s the FCC actually proposing to do about net neutrality?

The FCC published a “Notice of Proposed Rulemaking,” or NPRM, on May 15th. It’s titled “Protecting and Promoting the Open Internet.” The actual proposed rule is 2 pages long, with 65 pages of explanation by the FCC. The proposed rules are:

1. Transparency: Broadband Internet access providers have to “disclose accurate information” about “network management practices, performance, and commercial terms.”

2. No blocking: Wired (fixed) ISPs may not block “lawful content, applications, services, or non-harmful devices.” Mobile broadband providers may not block “lawful websites.”

3. A standard for special deals: ISPs cannot engage in “commercially unreasonable practices.” It’s not clear what this will mean, but it’s meant to prevent some kinds of non-neutral behavior.

The second and third rules both have an exception for “reasonable network management.”

These are only proposed rules; they’re not in force yet. But these rules, or something very similar, will likely become the final binding rules unless the public can convince the FCC (at least 3 of the 5 commissioners) to change them.

The FCC also asks the public to comment on whether the FCC should “reclassify” broadband Internet access as a “telecommunications service,” which, we believe, would give the FCC more effective authority to target non-neutral practices and promote real competition. And the FCC asks whether net neutrality rules should extend to mobile broadband, an issue we’ll break down in a future post.

The FCC’s proposal on transparency

The FCC’s transparency rule from 2010 is still in force, because the D.C. Circuit appeals court didn’t throw out that part of the older net neutrality rules. But this year’s NPRM is an opportunity to tell the FCC what information we think ISPs need to disclose, so that the public can see when and how ISPs are degrading speeds or offering preferential treatment. As we’ve seen, some non-neutral behaviors are hard to distinguish from ordinary network congestion, so transparency is vital. We have some ideas on how to improve the transparency rule.

The no-blocking rule

The no-blocking rule in the new proposal is similar to the 2010 rule that was thrown out by the court. This time around, the FCC wants to come up with a standard for the “minimum level of access” that ISPs should provide between their customers and any other point on the Internet. Anything less than that minimum would be considered blocking.

In theory, providing a connection to a particular site or service that’s too slow or intermittent to be usable would be a violation of the rule. The FCC wants advice from the public on how to define the minimum: by some traditional notion of “best efforts” delivery, technical performance measures such as maximum latency, or an evolving concept of “reasonable” service.
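As a rough illustration of what a latency-based standard could look like in practice, here is a sketch; the TCP connect test and the 200 millisecond threshold are our own illustrative assumptions, not anything proposed in the NPRM:

    # Measure round-trip connection time to a host and compare it
    # against a hypothetical "minimum level of access" threshold.
    import socket
    import time

    MAX_LATENCY_MS = 200  # illustrative figure only

    def tcp_rtt_ms(host, port=80):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            pass
        return (time.monotonic() - start) * 1000

    for host in ("example.com", "example.org"):
        rtt = tcp_rtt_ms(host)
        verdict = "meets minimum" if rtt <= MAX_LATENCY_MS else "below minimum"
        print("%s: %.0f ms (%s)" % (host, rtt, verdict))

Any real standard would of course require repeated measurements and a baseline for comparison, since a single slow connection proves little.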

While a no-blocking rule has appeal, we’re concerned that the exceptions to the rule are so broad that harmful blocking can happen anyway. These are the same problems we were worried about in 2010. First, the rule only covers “lawful content,” which could be read to invite ISPs to become copyright police for the Internet, which would be bad news. Second, the proposed rule would have an exception for “reasonable network management,” which could actually let ISPs block sites as long as they can present some justification that would satisfy the FCC.

The “commercially unreasonable practices” rule: vague and ineffective

The core of the proposal, and probably its most contentious part, is the “commercially unreasonable practices” rule. This is where the goal of a neutral Internet runs up against the limits of the FCC’s current authority. In theory, this is a broadly worded rule that’s meant to let the FCC put a stop to non-neutral practices that don’t amount to blocking, but what the rule covers and how it will be enforced are still very much up in the air.

The 2010 net neutrality rules contained a “no unreasonable discrimination” rule that was intended to ban harmful payola arrangements or any special deals by which monopoly ISPs would offer better access to certain websites.

The court threw this rule out last January, saying that it amounted to treating ISPs as “common carriers.” A common carrier is a business that’s required to serve all customers. It’s a legal framework that was traditionally applied to ferries, railroads, telegraphs, electric utilities, and the phone system. The court said that the FCC has the power to bring broadband ISPs under the common carrier rules, but if the FCC wants to do that, it must “reclassify” high speed Internet service as a “telecommunications service” under Title II of the Communications Act.

If it doesn’t reclassify, said the court, the FCC can’t regulate Internet access providers like common carriers and ban “unreasonable discrimination.” What’s more, without reclassifying, the rules must allow for “individualized bargaining”: special deals with favored partners.

This year’s FCC proposal raises the possibility that it will reclassify broadband as a Title II telecommunications service. As we've explained, that's a good idea. But the FCC seems reluctant to reclassify. The “default” proposal in the NPRM, the one the FCC will adopt unless enough people speak up, is to replace the anti-discrimination rule with an even more vague “commercially unreasonable practices” rule.

Many FCC-watchers have interpreted this to mean that non-neutral payola arrangements will be allowed in most circumstances, and that the only deals that will be blocked will be those that explicitly favor an ISP’s own Internet applications over its competitors (for example, if Comcast were to reduce the data rates of Internet video services like Netflix and YouTube in order to favor the Internet video provided by Comcast’s subsidiary NBC).

Ironically enough, even though they were supposedly designed to follow the court's instructions, these new rules might not survive a court challenge. A court could conclude that the FCC is once again trying to impose common carrier obligations on ISPs without the authority to do so.

Finally, the NPRM lays out a lot of possibilities for what the “commercially unreasonable” rule could actually do. The overall theme of the FCC’s description is that if these rules are enacted, the FCC would become a “referee,” watching ISPs’ conduct and calling out actions that it thinks will harm the open Internet. The FCC would also hire an “ombudsman” to help smaller businesses and individuals bring complaints to the agency.

We’re very concerned about the “referee” model, because it could give the FCC too much power to make decisions on the fly, perhaps practicing the very kinds of favoritism that shouldn’t happen on a neutral Internet. It also deals with non-neutral behaviors after the fact, instead of providing clear rules at the outset. This could create a very uncertain environment for ISPs and Internet users, and potentially help turn a future FCC into exactly the sort of arbitrary and intrusive regulator of the Internet that we fear.

What Happens Next

The FCC’s proposed rules aren't what we need to ensure the future of our Internet, but they are what will likely pass unless the public speaks up and demands effective protections. This is one of our best opportunities to be heard. EFF will be submitting comments, and you can too. Visit DearFCC.org right now to tell the FCC why an open Internet matters to you.

Related Issues: Net Neutrality
Share this:   ||  Join EFF
Categories: Aggregated News
