Aggregated News

Google Will Survive SESTA. Your Startup Might Not.

eff.org - Sat, 23/09/2017 - 15:59

There was a shocking moment in this week’s Senate Commerce Committee hearing on the Stop Enabling Sex Traffickers Act (SESTA). Prof. Eric Goldman had just pointed out that members of Congress should consider how the bill might affect hundreds of small Internet startups, not just giant companies like Google and Facebook. Will every startup have the resources to police its users’ activity with the level of scrutiny that the new law would demand of them?  “There is a large number of smaller players who don’t have the same kind of infrastructure. And for them, they have to make the choice: can I afford to do the work that you’re hoping they will do?”

Goldman was right: the greatest innovations in Internet services don’t come from Google and Facebook; they come from small, fast-moving startups. SESTA would necessitate a huge investment in staff to filter users’ activity as a company’s user base grows, something that most startups in their early stages simply can’t afford. That would severely hamper anyone’s ability to launch a competitor to the big Internet players—giving users a lot less choice.

Sen. Richard Blumenthal’s stunning response: “I believe that those outliers—and they are outliers—will be successfully prosecuted, civilly and criminally under this law.”

Blumenthal is one of 30 cosponsors—and one of the loudest champions—of SESTA, a bill that would threaten online speech by forcing web platforms to police their members’ messages more stringently than ever before. Normally, SESTA’s proponents vastly understate the impact that the bill would have on online communities. But in that unusual moment of candor, Sen. Blumenthal seemed to lay bare his opinions about Internet startups—he thinks of them as unimportant outliers and would prefer that the new law put them out of business.

Let’s make something clear: Google will survive SESTA. Much of the SESTA fight’s media coverage has portrayed it as a battle between Google and Congress, which sadly misses the point. Large Internet companies may have the legal budgets to survive the massive increase in litigation and liability that SESTA would bring. They probably also have the budgets to implement a mix of automated filters and staff censors to comply with the law. Small startups are a different story.

Indeed, lawmakers should ask themselves whether SESTA would unintentionally reinforce large incumbent companies’ advantages. Without the strong protections that allowed today’s large Internet players to rise to prominence, startups would have a strong disincentive to grow. As soon as your user base grows beyond what your staff can directly police, your company becomes a huge liability.

But ultimately, the biggest casualty of SESTA won’t be Google or startups; it will be the people pushed offline.

Many of SESTA’s supporters suggest that it would be easy for web platforms of all sizes to implement automated filtering technologies they can trust to separate legitimate voices from criminal ones. But it’s impossible to do that with anywhere near 100% accuracy. Given the extreme penalties for under-filtering, platforms would err in the opposite direction, removing legitimate voices from the Internet. As EFF Executive Director Cindy Cohn put it, “Again and again, when platforms clamp down on their users’ speech, marginalized voices are the first to disappear.”

The sad irony of SESTA is that while its supporters claim that it will fight sex trafficking, trafficking victims are likely to be among the first people it would silence. And that silence could be deadly. According to Freedom Network USA, the largest network of anti-trafficking advocate organizations in the country (PDF), “Internet sites provide a digital footprint that law enforcement can use to investigate trafficking into the sex trade, and to locate trafficking victims.” Congress should think long and hard before passing a bill that would incentivize web platforms to silence those victims.

Internet startups would take a much greater hit from SESTA than large Internet firms would, but ultimately, those most impacted would be users themselves. As online platforms ratcheted up the patrolling of their users’ speech, some voices would begin to disappear from the Internet. Tragically, some of those voices belong to the people most in need of the safety of online communities.

Take Action

Tell Congress: Stop SESTA.

Categories: Aggregated News

A Guide to Common Types of Two-Factor Authentication on the Web

eff.org - Sat, 23/09/2017 - 08:02

Two-factor authentication (or 2FA) is one of the biggest-bang-for-your-buck ways to improve the security of your online accounts. Luckily, it's becoming much more common across the web. With often just a few clicks in a given account's settings, 2FA adds an extra layer of security to your online accounts on top of your password.

In addition to requesting something you know to log in (in this case, your password), an account protected with 2FA will also request information from something you have (usually your phone or a special USB security key). Once you put in your password, you'll grab a code from a text or app on your phone or plug in your security key before you are allowed to log in. Some platforms call 2FA different things—Multi-Factor Authentication (MFA), Two Step Verification (2SV), or Login Approvals—but no matter the name, the idea is the same: Even if someone gets your password, they won't be able to access your accounts unless they also have your phone or security key.

There are four main types of 2FA in common use by consumer websites, and it's useful to know the differences. Some sites offer only one option; other sites offer a few different options. We recommend checking twofactorauth.org to find out which sites support 2FA and how, and turning on 2FA for as many of your online accounts as possible. For more visual learners, this infographic from Access Now offers additional information.

Finally, the extra layer of protection from 2FA doesn't mean you should use a weak password. Always make unique, strong passwords for each of your accounts, and then put 2FA on top of those for even better log-in security.

SMS 2FA

When you enable a site's SMS 2FA option, you'll often be asked to provide a phone number. Next time you log in with your username and password, you'll also be asked to enter a short code (typically 5-6 digits) that gets texted to your phone. This is a very popular option for sites to implement, since many people have an SMS-capable phone number and it doesn't require installing an app. It provides a significant step up in account security relative to just a username and password.

There are some disadvantages, however. Some people may not be comfortable giving their phone number—a piece of potentially identifying information—to a given website or platform. Even worse, some websites, once they have your phone number for 2FA purposes, will use it for other purposes, like targeted advertising, conversion tracking, and password resets. Allowing password resets based on a phone number provided for 2FA is an especially egregious problem, because it means attackers using phone number takeovers could get access to your account without even knowing your password.

Further, you can't log in with SMS 2FA if your phone is dead or can't connect to a mobile network. This can especially be a problem when travelling abroad. Also, it's often possible for an attacker to trick your phone company into assigning your phone number to a different SIM card, allowing them to receive your 2FA codes. Flaws in the SS7 telephony protocol can allow the same thing. Note that both of these attacks only reduce the security of your account to the security of your password.

Authenticator App / TOTP 2FA

Another phone-based option for 2FA is to use an application that generates codes locally based on a secret key. Google Authenticator is a very popular application for this; FreeOTP is a free software alternative. The underlying technology for this style of 2FA is called Time-Based One-Time Password (TOTP), and is part of the Open Authentication (OATH) architecture (not to be confused with OAuth, the technology behind "Log in with Facebook" and "Log in with Twitter" buttons).

If a site offers this style of 2FA, it will show you a QR code containing the secret key. You can scan that QR code into your application. If you have multiple phones you can scan it multiple times; you can also save the image to a safe place or print it out if you need a backup. Once you've scanned such a QR code, your application will produce a new 6-digit code every 30 seconds. Similar to SMS 2FA, you'll have to enter one of these codes in addition to your username and password in order to log in.
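To make the mechanism concrete, here is a minimal sketch of TOTP generation per RFC 6238, using only the Python standard library. Real authenticator apps first decode the secret from the QR code (usually base32); the `totp` helper and its parameter names here are illustrative, not any particular app's implementation.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Generate a Time-Based One-Time Password (RFC 6238, HMAC-SHA1)."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret (the ASCII bytes `12345678901234567890`) and a clock of 59 seconds, this yields `287082`, matching the specification's SHA-1 test vector truncated to six digits. Note that both the phone and the server derive the code from the shared secret and the current time, which is why no network connection is needed.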

This style of 2FA improves on SMS 2FA because you can use it even when your phone is not connected to a mobile network, and because the secret key is stored physically on your phone. If someone redirects your phone number to their own phone, they still won't be able to get your 2FA codes. It also has some disadvantages: If your phone dies or gets stolen, and you don't have printed backup codes or a saved copy of the original QR code, you can lose access to your account. For this reason, many sites will encourage you to enable SMS 2FA as a backup. Also, if you log in frequently on different computers, it can be inconvenient to unlock your phone, open an app, and type in the code each time.

Push-based 2FA

Some systems, like Duo Push and Apple's Trusted Devices method, can send a prompt to one of your devices during login. This prompt will indicate that someone (possibly you) is trying to log in, along with an estimated location for the login attempt. You can then approve or deny the attempt.

This style of 2FA improves on authenticator apps in two ways: Acknowledging the prompt is slightly more convenient than typing in a code, and it is somewhat more resistant to phishing. With SMS and authenticator apps, a phishing site can simply ask for your code in addition to your password, and pass that code along to the legitimate site when logging in as you. Because push-based 2FA generally displays an estimated location based on the IP address from which a login was originated, and most phishing attacks don't happen to be operated from the same IP address ranges as their victims, you may be able to spot a phishing attack in progress by noticing that the estimated location differs from your actual location. However, this requires that you pay close attention to a subtle security indicator. And since location is only estimated, it's tempting to ignore any anomalies. So the additional phishing protection provided by push-based 2FA is limited.

Disadvantages of push-based 2FA: It's not standardized, so you can't choose from a variety of authenticator apps, and can't consolidate all your push-based credentials in a single app. Also, it requires a working data connection on your phone, while authenticator apps don't require any connection, and SMS can work on an SMS-only phone plan (or in poor signal areas).

FIDO U2F / Security Keys

Universal Second Factor (U2F) is a relatively new style of 2FA, typically using small USB, NFC or Bluetooth Low Energy (BTLE) devices often called "security keys." To set it up on a site, you register your U2F device. On subsequent logins, the site will prompt you to connect your device and tap it to allow the login.

Like push-based 2FA, this means you don't have to type any codes. Under the hood, the U2F device recognizes the site you are on and responds with a code (a signed challenge) that is specific to that site. This means that U2F has a very important advantage over the other 2FA methods: It is actually phishing-proof, because the browser includes the site name when talking to the U2F device, and the U2F device won't respond to sites it hasn't been registered to. U2F is also well-designed from a privacy perspective: You can use the same U2F device on multiple sites, but you have a different identity with each site, so they can't use a single unique device identity for tracking.
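The origin binding that makes U2F phishing-proof can be sketched with a toy model. This is emphatically not the real protocol (actual U2F devices mint a per-site ECDSA key pair at registration and sign with it); the hypothetical `ToyU2FDevice` class below just illustrates why a device that keys its response to the browser-supplied site identity has nothing useful to say to a phishing page.

```python
import hashlib
import hmac
import os

class ToyU2FDevice:
    """Toy model of a U2F security key's origin binding.

    Hypothetical illustration only: real U2F uses per-site ECDSA key
    pairs; HMAC stands in for that machinery here.
    """

    def __init__(self):
        self._master = os.urandom(32)   # device-internal secret, never leaves the key
        self._registered = set()

    def register(self, origin):
        self._registered.add(origin)

    def sign(self, origin, challenge):
        # The browser, not the web page, supplies `origin`, so a phishing
        # site cannot claim to be the legitimate one.
        if origin not in self._registered:
            return None                  # device stays silent for unknown origins
        site_key = hmac.new(self._master, origin.encode(), hashlib.sha256).digest()
        return hmac.new(site_key, challenge, hashlib.sha256).digest()
```

Because the response is derived from the origin, a challenge answered for `https://example.com` is useless to an attacker on a lookalike domain, and the same device presents an unlinkable identity to each site.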

The main downsides of U2F are browser support, mobile support, and cost. Right now only Chrome supports U2F, though Firefox is working on an implementation. The W3C is working on further standardizing the U2F protocol for the web, which should lead to further adoption. Additionally, mobile support is challenging, because most U2F devices use USB.

There are a handful of U2F devices that work with mobile phones over NFC and BTLE. NFC is supported only on Android. On iOS, Apple does not currently allow apps to interact with the NFC hardware, which prevents effective use of NFC U2F. BTLE is much less desirable because a BTLE U2F device requires a battery, and the pairing experience is less intuitive than tapping an NFC device. However, poor mobile support doesn't mean that using U2F prevents you from logging in on mobile. Most sites that support U2F also support TOTP and backup codes. You can log in once on your mobile device using one of those options, while using your phishing-proof U2F device for logins on the desktop. This is particularly effective for mobile sites and apps that only require you to log in once, and keep you logged in.

Lastly, most other 2FA methods are free, assuming you already have a smartphone. Most U2F devices cost money. Brad Hill has put together a review of various U2F devices, which generally cost $10 to $20. GitHub has written a free, software-based U2F authenticator for macOS, but using this as your only U2F device would mean that losing your laptop could result in losing access to your account.

Bonus: Backup Codes

Sites will often give you a set of ten backup codes to print out and use in case your phone is dead or you lose your security key. Hard-copy backup codes are also useful when traveling, or in other situations where your phone may not have signal or reliable charging. No matter which 2FA method you decide is right for you, it's a good idea to keep these backup codes in a safe place to make sure you don't get locked out of your account when you need them.
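Minting such codes is simple on the server side. A hedged sketch of how a site might generate them with a cryptographically secure RNG (the `backup_codes` helper and the ten-codes-of-eight-digits format are illustrative conventions, not any particular site's scheme):

```python
import secrets

def backup_codes(n=10, length=8):
    """Mint n single-use backup codes of `length` decimal digits each.

    Uses the `secrets` module rather than `random`, since backup codes
    must be unpredictable to an attacker.
    """
    return ["".join(secrets.choice("0123456789") for _ in range(length))
            for _ in range(n)]
```

A real service would store only hashes of these codes and invalidate each one after use.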


Silicon Valley Should Just Say No to Saudi Arabia

eff.org - Fri, 22/09/2017 - 22:48

American companies face a difficult tradeoff when dealing with government requests, but they should just say no to Saudi Arabia, which is using social media companies to do its dirty work in censoring Qatari media. Over the past few weeks, both Medium and Snap have caved to Saudi demands to geoblock journalistic content in the kingdom.

The history of Silicon Valley companies’ compliance with requests from foreign governments is a sad one, and one that has undoubtedly led to more censorship around the world. While groups like EFF have been successful at pushing companies toward more transparency and at pushing back against domestic censorship in the United States, it seems that companies are unwilling or unable to see why protecting freedom of expression on their platforms abroad is important.

After Yahoo’s compliance with a user data request from the Chinese government in the early 2000s resulted in the imprisonment of two Chinese citizens, the digital rights community began to pressure companies to use more scrutiny when dealing with orders from foreign governments. The early work of scholars such as Rebecca MacKinnon led to widespread awareness amongst civil society groups and the eventual creation of the Global Network Initiative, which created standards guiding companies’ compliance with foreign requests. A push from advocacy groups resulted in Google issuing its first transparency report in 2010, with other companies following the Silicon Valley giant’s lead. Today—thanks to tireless advocacy and projects like EFF’s Who Has Your Back report—dozens of companies issue their own reports.

Transparency is vital. It helps users to understand who the censors are, and to make informed decisions about what platforms they use. But, as it turns out, transparency does not necessarily lead to less censorship.

Corporate complicity

The Kingdom of Saudi Arabia is one of the world’s most prolific censors, attacking everything from advertisements and album covers to journalistic publications. The government—an absolute monarchy—has in recent years implemented far-reaching surveillance, arrested bloggers and dissidents for their online speech, and allegedly deployed an online “army” against Al Jazeera and its supporters. Even before recent events, the country was known as the Arab world’s leader in Internet censorship, aggressively blocking a wide array of content from its citizens. American companies—including Facebook and Google—have at times in the past voluntarily complied with content restriction demands from Saudi Arabia, though we know little about their context.

Now, in the midst of Saudi Arabia’s sustained attack on Al Jazeera (and its host country, Qatar), the government is ramping up its takedown requests. In particular, the government of Saudi Arabia is going after the press, and disappointingly, Silicon Valley companies seem all too eager to comply.

In late June, Medium complied with requests from the government to restrict access to content from two publications: Qatar-backed Al Araby Al Jadeed (“The New Arab”) and The New Khaliji News. In the interest of transparency, the company sent both requests to Lumen.

Medium has faced government censorship before: in 2016, the Malaysian government blocked the popular blogging platform, while Egypt included the site in a long list of banned publications earlier this year. By complying with the orders of the Saudi government, Medium is less likely to face a full ban in the country.

This week, Snap disappointed free expression advocates by joining the list of companies willing to team up with Saudi Arabia against Qatar and its media outlets. The social media giant pulled the Al Jazeera Discover Publisher Channel from Saudi Arabia late last week. A company spokesperson told Reuters: “We make an effort to comply with local laws in the countries where we operate.”

Corporate responsibility

As we’ve argued in the past, companies should limit their compliance with foreign governments which are not democratic and where they do not have employees or other assets on the ground. By censoring at the behest of a government like Saudi Arabia’s, Medium and Snap have chosen to side with the Saudi regime in a dangerous political game—and by censoring the press, they have demonstrated a stunning lack of commitment to freedom of expression. While other companies like Facebook and Twitter may have set the precedent, it’s not one that other companies should be proud to follow.

We urge Medium and Snap to reconsider their decisions, and for other companies to strengthen their commitment to freedom of expression by refusing to bow to demands from authoritarian governments when they’re not legally bound to.


Appeals Court Rules Against Warrantless Cell-site Simulator Surveillance

eff.org - Fri, 22/09/2017 - 07:24

Law enforcement officers in Washington, D.C. violated the Fourth Amendment when they used a cell-site simulator to locate a suspect without a warrant, a D.C. appeals court ruled on Thursday. The court thus found that the resulting evidence should have been excluded from trial and overturned the defendant’s convictions.

EFF joined the ACLU in filing an amicus brief, arguing that the use of a cell-site simulator without a warrant constituted an illegal search. We applaud the court’s decision in applying long-established Fourth Amendment principles to the digital age.

Cell-site simulators (also known as “IMSI catchers” and “Stingrays”) are devices that emulate cell towers in order to gain information from a caller’s phone, such as location information. Police have acted with unusual secrecy regarding this technology, including taking extraordinary steps to ensure that its use does not appear in court filings and is not released through public records requests. Concerns over this secrecy and over privacy have led to multiple lawsuits and legal challenges, as well as legislation.

The new decision in Prince Jones v. U.S. is the latest to find that police are violating our rights when using this sophisticated spying technology without a warrant.

Jones was accused of sexual assault and burglary. Much of the evidence collected against him was derived from cell-site simulators targeting his phone. 

The court determined that the use of a cell-site simulator to track and locate Jones was in fact a “search,” despite claims to the contrary from the prosecution. As the court wrote: 

The cell-site simulator employed in this case gave the government a powerful person-locating capability that private actors do not have and that, as explained above, the government itself had previously lacked—a capability only superficially analogous to the visual tracking of a suspect. And the simulator's operation involved exploitation of a security flaw in a device that most people now feel obligated to carry with them at all times. Allowing the government to deploy such a powerful tool without judicial oversight would surely “shrink the realm of guaranteed privacy” far below that which “existed when the Fourth Amendment was adopted.” … It would also place an individual in the difficult position either of accepting the risk that at any moment his or her cellphone could be converted into tracking device or of forgoing “necessary use of” the cellphone… We thus conclude that under ordinary circumstances, the use of a cell-site simulator to locate a person through his or her cellphone invades the person's actual, legitimate, and reasonable expectation of privacy in his or her location information and is a search. 

The decision should serve as yet another warning to law enforcement that new technologies do not mean investigators can bypass the Constitution. If police want data from our devices, they should come back with a warrant. 


Appeals Court Limits Ability of Patent Trolls to File Suit in Far-Flung Districts

eff.org - Fri, 22/09/2017 - 05:02

In a closely watched case, the Court of Appeals for the Federal Circuit has issued an order that should see many more patent cases leaving the Eastern District of Texas. The order in In re Cray, together with the Supreme Court’s recent decision in TC Heartland v. Kraft Foods, should make it much more difficult for patent owners to pick and choose among various courts in the country. In particular, it should drastically limit the ability of patent trolls to file in their preferred venue: the Eastern District of Texas.

“Venue” is a legal doctrine that relates to where cases can be heard. Prior to 1990, the Supreme Court had long held that in patent cases, the statute found at 28 U.S.C. § 1400 controlled where a patent case could be filed. This statute says that venue in patent cases is proper either (1) where the defendant “resides” or (2) where the defendant has “committed acts of infringement and has a regular and established place of business.” However, in 1990, in a case called VE Holding, the Federal Circuit held that a small technical amendment to another statute—28 U.S.C. § 1391—abrogated this long line of cases. VE Holding, together with another case called Beverly Hills Fan, essentially meant that companies that sold products nationwide could be haled into any court in the country on charges of patent infringement, regardless of how tenuous the connection to that forum.

In May 2017, the Supreme Court reaffirmed that the more specific statute, 28 U.S.C. § 1400, controls where a patent case can be filed. TC Heartland ruled that the term “resides” referred to a historical meaning, and was limited to the state of the defendant’s incorporation. However, TC Heartland did not discuss what was meant by the second prong of the venue statute, i.e., when defendants could be considered to have a “regular and established place of business.”

In light of TC Heartland, many patent owners shifted their arguments, and pointed to the “regular and established place of business” in a district as the basis for bringing suit there. Because that term had not been applied for some time, courts have variously determined what, exactly, constitutes a “regular and established place of business.”

One decision, Raytheon Co. v. Cray, Inc., written by Judge Gilstrap (a judge who at one point had ~25% of all patent cases in the entire country before him) appeared to take a broad view of what it meant to have a “regular and established place of business.” Judge Gilstrap held that “a fixed physical location in the district is not a prerequisite to proper venue.” More concerningly, Judge Gilstrap announced his own four-factor “test” that created greater possibilities that venue would be proper in the Eastern District.

The Federal Circuit has now rejected both that test and Judge Gilstrap’s finding that a physical location in the district is not necessary. The Federal Circuit specifically noted that the venue statute “cannot be read to refer merely to a virtual space or to electronic communications from one person to another.” Importantly, the Federal Circuit also held that it is not enough that an employee may live in the district. What is important is whether the alleged infringer has itself (as opposed to the employee) established a place of business in the district. The Federal Circuit did stress, however, that every case should be judged on its own facts. Based on the facts of Cray’s relationship to the district, the Federal Circuit ordered Judge Gilstrap to transfer the case out of the Eastern District.

This is a good ruling for many defendants who may find themselves sued in the Eastern District or any other district they may be only loosely connected with. When patent owners can drag defendants into court in far-flung corners of the country it can cause significant harm, especially for those who are on the receiving end of a frivolous lawsuit. Patent owners can pick a forum that is less inclined to grant fees, keep costs down, or stay cases. As a result, oftentimes it is cheaper to settle even a frivolous case than to fight. Between TC Heartland and now In re Cray, the ability of patent trolls to extort settlements based on cost of litigation rather than merit has been curtailed.

Related Cases: TC Heartland v. Kraft Foods

.cat Domain a Casualty in Catalonian Independence Crackdown

eff.org - Fri, 22/09/2017 - 04:49

On October 1, a referendum will be held on whether Catalonia, an autonomous region of the northeast of Spain, should declare itself to be an independent country.  The Spanish government has ruled the referendum illegal, and is taking action on a number of fronts to shut it down and to censor communications promoting it. One of its latest moves in this campaign was a Tuesday police raid of the offices of puntCAT, the domain registry that operates the .cat top-level domain, resulting in the seizure of computers, the arrest of its head of IT for sedition, and the deletion of domains promoting the October 1 referendum, such as refoct1.cat (that website is now available at an alternate URL).

The .cat top-level domain was one of the earliest new top-level domains approved by ICANN in 2004, and is operated by a non-governmental, non-profit organization for the promotion of Catalan language and culture. Despite the seizure of computers at the puntCAT offices, because the operations of the domain registry are handled by an external provider, .cat domains not connected with the October 1 referendum (including eff.cat, EFF's little-known Catalan language website) have not been affected.

We have deep concerns about the use of the domain name system to censor content in general, even when such seizures are authorized by a court, as happened here. And there are two particular factors that compound those concerns in this case. First, the content in question here is essentially political speech, which the European Court of Human Rights has ruled as deserving of a higher level of protection than some other forms of speech. Even though the speech concerns a referendum that has been ruled illegal, the speech does not in itself pose any imminent threat to life or limb.

The second factor that especially concerns us here is that the seizure took place with only 10 days remaining until the scheduled referendum, making it unlikely that the legality of the domains' seizures could be judicially reviewed before the referendum is scheduled to take place. The fact that such mechanisms of legal review would not be timely accessible to the Catalan independence movement, and that the censorship of speech would therefore be de facto unreviewable, should have been another reason for the Spanish authorities to exercise restraint in this case.

Whether it's allegations of sedition or any other form of unlawful or controversial speech, domain name intermediaries should not be held responsible for the content of websites that utilize their domains. If such content is unlawful, a court order directed to the publisher or host of that content is the appropriate way for authorities to deal with that illegality, rather than the blanket removal of entire domains from the Internet. The seizure of .cat domains is a worrying signal that the Spanish government places its own interests in quelling the Catalonian independence movement above the human rights of its citizens to access a free and open Internet, and we join ordinary Catalonians in condemning it.


Apple does right by users and advertisers are displeased

eff.org - Thu, 21/09/2017 - 11:47

With the new Safari 11 update, Apple takes an important step to protect your privacy, specifically by limiting how your browsing habits are tracked and shared with parties other than the sites you visit. In response, Apple is being criticized by the advertising industry for "destroying the Internet's economic model." While the advertising industry tries to shift the conversation to what it calls the economic model of the Internet, the conversation must instead focus on the indiscriminate tracking of users and the violation of their privacy.

When you browse the web, you might think that your information lives only with the service you choose to visit. However, many sites load elements that share your data with third parties. First-party cookies are set by the domain you are visiting, allowing sites to recognize you from your previous visits but not to track you across other sites. For example, if you visit examplemedia.com and then socialmedia.com, each visit would be known only to that site. In contrast, third-party cookies are set by domains other than the one you are visiting, and were created to circumvent the original design of cookies. In this case, when you visit examplemedia.com and it also loads tracker.socialmedia.com, socialmedia.com would be able to track you on all sites you visit where its tracker is loaded.
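The first-party/third-party distinction can be sketched with a naive "last two host labels" comparison. This is a deliberate simplification: real browsers consult the Public Suffix List, so the hypothetical `is_third_party` helper below would misclassify domains such as example.co.uk.

```python
from urllib.parse import urlsplit

def is_third_party(page_url, cookie_domain):
    """Naive check: does the cookie's registrable domain differ from the page's?

    Real browsers use the Public Suffix List instead of the last two
    labels; this sketch only illustrates the idea.
    """
    def base(host):
        return ".".join(host.split(".")[-2:])
    return base(urlsplit(page_url).hostname) != base(cookie_domain.lstrip("."))
```

By this rule, a cookie for tracker.socialmedia.com loaded on examplemedia.com is third-party, while a cookie for examplemedia.com itself is first-party.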

Websites commonly use third-party tracking to allow analytics services, data brokerages, and advertising companies to set unique cookies. This data is aggregated into individual profiles and fed into a real-time auction process where companies bid for the right to serve an ad to a user when they visit a page. This mechanism can be used for general behavioral advertising but also for “retargeting.” In the latter case, the vendor of a product viewed on one site buys the chance to target the user later with ads for the same product on other sites around the web. As a user, you should be able to expect that you will be treated with respect and that your personal browsing habits will be protected. When websites share your behavior without your knowledge, that trust is broken.

Safari has blocked third-party cookies by default since Safari 5.1, released in 2010, a policy that has been key to Apple’s emerging identity as a defender of user privacy. Safari distinguished these seedy cookies from those placed on our machines by first parties, the sites we visit intentionally. From 2011 onwards, advertising companies have devised ways to circumvent these protections. One of the biggest retargeters, Criteo, even acquired a patent on a technique to subvert this protection.1 Criteo, however, was not the first company to circumvent Safari's user protection. In 2012, Google paid 22.5 million dollars to settle an action by the FTC after it used another workaround to track Safari users with cookies from the DoubleClick Ad Network. Safari had an exception to the third-party ban for submission forms where the user entered data deliberately (e.g. to sign up). Google exploited this loophole to set a unique cookie when Safari users visited sites participating in Google's advertising network.

The new Safari update, with Intelligent Tracking Prevention, closes loopholes around third-party cookie-blocking by using machine learning to distinguish the sites a user has a relationship with from those they don’t, and treating their cookies differently based on that. When you visit a site, any cookies it sets can be used in a third-party context for twenty-four hours. During that window the cookies can be used to track you; afterward they can only be used to log in, not to track. This means that sites you visit regularly are not significantly affected. The companies this will hit hardest are ad companies unconnected with any major publisher.
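The twenty-four-hour rule can be sketched as a simple decision function. This only illustrates the rule as described above; Safari's actual ITP logic is more elaborate (a machine-learned classifier decides which domains count as trackers, and there are longer purge windows).

```python
from datetime import datetime, timedelta

def third_party_cookie_policy(last_interaction: datetime, now: datetime) -> str:
    """Decide how a domain's cookies may be used in a third-party context.

    Within 24 hours of the user interacting with the domain as a first
    party, its cookies work normally; after that they are partitioned,
    so login flows still work but cross-site tracking does not.
    """
    if now - last_interaction <= timedelta(hours=24):
        return "allowed"      # full third-party use, including tracking
    return "partitioned"      # login still works, tracking does not

now = datetime(2017, 9, 23, 12, 0)
print(third_party_cookie_policy(now - timedelta(hours=3), now))  # allowed
print(third_party_cookie_policy(now - timedelta(days=5), now))   # partitioned
```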

At EFF we understand the need for sites to build a successful business model, but this should not come at the expense of people's privacy. This is why we launched initiatives like the EFF DNT Policy and tools like Privacy Badger. These initiatives and tools target tracking, not advertising. Rather than attacking Apple for serving its users, the advertising industry should treat this as an opportunity to change direction and develop advertising models that respect (rather than exploit) users.

Apple has been a powerful force in user privacy on a mass scale in recent years, as reflected by their support for encryption, the intelligent processing of user data on device rather than in the cloud, and limitations on ad tracking on mobile and desktop. By some estimates, Apple handles 30% of all pages on mobile. Safari's innovations are not the silver bullet that will stop all tracking, but by stepping up to protect their users’ privacy Apple has set a challenge for other browser developers. When the user's privacy interests conflict with the business models of the advertising technology complex, is it possible to be neutral? We hope that Mozilla, Microsoft and Google will follow Apple, Brave and Opera's lead.

  • 1. In order to present themselves as a first party, Criteo had host websites add code to their internal links that redirects clicks through Criteo. So if you click on a link to jackets in a clothes store, your click brings you for an instant to Criteo before forwarding you on to your intended destination. This trick makes Criteo appear as a first party to your browser, and a pop-up notification informs you that by continuing to click on the page you consent to Criteo storing a cookie. Once Safari accepted a first-party cookie, that site was allowed to set cookies even when it was a third party, so Criteo can then retarget you elsewhere. Other companies (AdRoll, for example) used the same trick.
Categories: Aggregated News

Attack on CCleaner Highlights the Importance of Securing Downloads and Maintaining User Trust

eff.org - Wed, 20/09/2017 - 05:16

Some of the most worrying kinds of attacks are ones that exploit users’ trust in the systems and software they use every day. Yesterday, Cisco’s Talos security team uncovered just that kind of attack in the computer cleanup software CCleaner. Download servers at Avast, the company that owns CCleaner, had been compromised to distribute malware inside CCleaner 5.33 updates for at least a month. Avast estimates that over 2 million users downloaded the affected update. Even worse, CCleaner’s popularity with journalists and human rights activists means that particularly vulnerable users are almost certainly among that number. Avast has advised CCleaner Windows users to update their software immediately.

This is often called a “supply chain” attack, referring to all the steps software takes to get from its developers to its users. As more and more users get better at bread-and-butter personal security like enabling two-factor authentication and detecting phishing, malicious hackers are forced to stop targeting users and move “up” the supply chain to the companies and developers that make software. This means that developers need to get into the practice of “distrusting” their own infrastructure to ensure safer software releases with reproducible builds, allowing third parties to double-check whether released binary and source packages correspond. The goal should be to secure internal development and release infrastructure to the point that no hijacking, even from a malicious actor inside the company, can slip through unnoticed.

The harms of this hack extend far beyond the 2 million users who were directly affected. Supply chain attacks undermine users’ trust in official sources, and take advantage of the security safeguards that users and developers rely on. Software updates like the one Avast released for CCleaner are typically signed with the developer’s un-spoof-able cryptographic key. But the hackers appear to have penetrated Avast’s download servers before the software update was signed, essentially hijacking Avast’s update distribution process and punishing users for the security best practice of updating their software.
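On the user side, one modest complement to signature checking is verifying a published checksum before running an installer. A minimal sketch in Python; note that, as the CCleaner case shows, this only helps when the published digest is distributed over a channel the attacker has not compromised, since a digest (or signature) computed after the compromise simply vouches for the malware.

```python
import hashlib

def verify_download(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded file's SHA-256 digest against a published value.

    Reads the file in chunks so large installers don't need to fit in memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()
```

In practice the expected digest would come from the vendor's website over HTTPS, or better, from a signed release manifest.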

Despite observations that these kinds of attacks are on the rise, the reality is that they remain extremely rare compared to other kinds of attacks users might encounter. This and other supply chain attacks should not deter users from updating their software. Like any security decision, this is a trade-off: for every attack that might take advantage of the supply chain, there are one hundred attacks that will take advantage of users not updating their software.

For users, sticking with trusted, official software sources and updating your software whenever prompted remains the best way to protect yourself from software attacks. For developers and software companies, the attack on CCleaner is a reminder of the importance of securing every link of the download supply chain.

Categories: Aggregated News

Live Blog: Senate Commerce Committee Discusses SESTA

eff.org - Tue, 19/09/2017 - 10:27

10:00 a.m.: In closing the hearing, Sen. Dan Sullivan speaks passionately about the need for the Department of Justice to invest more resources in prosecuting sex traffickers. Ms. Slater of the Internet Association echoes Sen. Sullivan, arguing that the Justice Department should have more resources to prosecute sex trafficking cases.

We could not agree more. Creating more liability for web platforms is, at best, a distraction. Experts in trafficking argue that, at worst, SESTA would do more harm than good.

Freedom Network USA, the largest network of anti-trafficking advocate organizations in the country, expresses grave concerns about lawmakers unwittingly compromising the very tools law enforcement needs to find traffickers (PDF): "Internet sites provide a digital footprint that law enforcement can use to investigate trafficking into the sex trade, and to locate trafficking victims. When websites are shut down, the sex trade is pushed underground and sex trafficking victims are forced into even more dangerous circumstances."

Thank you for following our live blog. Please take a moment to write to your members of Congress and ask them to defend the online communities that matter to you.

Take Action

Tell Congress: Stop SESTA.

____

9:37 a.m.: "We have tried to listen to the industry," Sen. Blumenthal claims. But listening to major Internet industry players is not enough. It's essential that lawmakers talk to the marginalized communities that would be silenced under SESTA. It's essential that lawmakers talk to community-based or nonprofit platforms that will be most hurt by the increased liability, platforms like Wikipedia and the Internet Archive. In a letter to the Committee, the Wikimedia Foundation says point blank that Wikipedia would not exist without Section 230.

In writing off small startups as "outliers," Blumenthal misunderstands something essential about the Internet: that any platform can compete. Liability protections in Section 230 have led to the explosion of successful Internet businesses. Blumenthal claims that SESTA will "raise the bar" in encouraging web platforms to adopt better measures for filtering content, but he's mistaken. The developments in content filtering that SESTA's proponents celebrate would not have taken place without the protections in Section 230.

There is no such thing as a perfect filter. Under SESTA, platforms would have little choice but to rely far too heavily on filters, clamping down on legitimate speech in the process.

____

9:24 a.m.: Prof. Goldman argues that adding enforcement of state criminal law as an exception to Section 230 would effectively balkanize the Internet. One state would have the ability to affect the entire Internet, so long as it can convince a judge that a state law targets sex trafficking. Goldman has written extensively on the problems that would arise from excluding state law from 230 immunity.

____

9:09 a.m.: The committee's discussion about expanding federal criminal law liability for "facilitating" sex trafficking (by amending 18 USC 1591) misses an important point: under SESTA, platforms would be liable not only if they knew sex trafficking was happening on their sites, but also if they should have known (the "reckless disregard" standard set in 1591).

____

9:00 a.m.: Xavier Becerra is correct that Section 230 blocks state criminal prosecutions against platforms for illegal user-generated content (but not federal prosecutions). However, state prosecutors are not prevented from going after the traffickers themselves. As California AG, he should do that.

In Kiersten DiAngelo's letter to the Commerce Committee, she discusses her organization's exasperation at trying to work with California law enforcement to prosecute traffickers. That should be an Attorney General's first priority, not prosecuting web platforms that don't break the law themselves.

____

8:55 a.m.: Yiota Souras from NCMEC says that there should be a legal barrier to enter the online ads marketplace.  There already is one: Congress passed the SAVE Act in 2015 to create express liability for platforms that knowingly advertise sex trafficking ads.

Souras says that there needs to be more community intervention into the lives of children before they end up in online sex ads. We couldn't agree more.

____

8:40 a.m.: When Abigail Slater of the Internet Association speaks to platforms' ability to filter content related to trafficking, she's talking about large web companies. Smaller platforms would be most at risk under SESTA: it would be very difficult for them to absorb the huge increase in legal exposure for user-generated content that SESTA would create.

____

8:32 a.m.: Yiota Souras is confusing the issue. Victims of sex trafficking today can hold a platform liable in civil court for ads their traffickers posted when there is evidence that the platform had a direct hand in creating the illegal content. And victims can directly sue their traffickers without bumping into Section 230.

____

8:25 a.m.: Professor Eric Goldman is now testifying on the importance of Section 230:

SESTA would reinstate the moderation dilemma that Section 230 eliminated. Because of Section 230, online services today voluntarily take many steps to suppress socially harmful content (including false and malicious content, sexual material, and other lawful but unwanted content) without fearing liability for whatever they miss. Post-SESTA, some services will conclude that they cannot achieve this high level of accuracy, or that moderation procedures would make it impossible to serve their community. In those cases, the services will reduce or eliminate their current moderation efforts.

Proponents of SESTA have tried to get around this dilemma by overstating the effectiveness of automated content filtering. In doing so, they really miss the point of filtering technologies. Automated filters can be very useful as an aid to human review, but they're not appropriate as the final arbiters of free expression online. Over-reliance on them will almost certainly result in silencing marginalized voices, including those of trafficking victims themselves.

____

8:15 a.m.: Contrary to what Xavier Becerra suggested, we're not opposed to amending statutes in general. But Section 230 strikes a reasonable policy balance, enabling culpable platforms to be held liable while allowing free speech and innovation to thrive online. Amending it is unnecessary and dangerous.

____

8:12 a.m.: Ms. Yvonne Ambrose, the mother of a trafficking victim, is now speaking on the horrors her daughter went through.

It's specifically because of the horror of trafficking that Congress must be wary of bills that would do more harm than good. To quote anti-trafficking advocate (and herself a trafficking victim) Kristen DiAngelo (PDF), "SESTA would do nothing to decrease sex trafficking; in fact, it would have the opposite effect. [...] When trafficking victims are pushed off of online platforms and onto the streets, we become invisible to the outside world as well as to law enforcement, thus putting us in more danger of violence."

In DiAngelo's letter, she tells the horrific story of a trafficking victim who was forced by her pimp to work the street when the FBI shut down a website where sex workers advertised:

Since she was new to the street, sexual predators considered her fair game. Her first night out, she was robbed and raped at gunpoint, and when she returned to the hotel room without her money, her pimp beat her. Over the next seven months, she was arrested seven times for loitering with the intent to commit prostitution and once for prostitution, all while she was being trafficked.

Freedom Network USA, the largest network of anti-trafficking service providers in the country, expresses grave concerns about any proposal that would shift more liability to web platforms (PDF): "The current legal framework encourages websites to report cases of possible trafficking to law enforcement. Responsible website administrators can, and do, provide important data and information to support criminal investigations. Reforming the CDA to include the threat of civil litigation could deter responsible website administrators from trying to identify and report trafficking."

____

8:05 a.m.: Sen. Wyden is right. Sec. 230 made the Internet a platform for free speech. It should remain intact.

Wyden makes it clear that by design, Section 230 does nothing to protect web platforms from prosecution for violations of federal criminal law. It also does nothing to shield platforms' users themselves from liability for their own actions in either state or federal court. Wyden speaks passionately on the need for resources to fight sex traffickers online. Reminder: SESTA would do nothing to fight traffickers.

____

7:57 a.m.: Sen. Blumenthal is wrong. Section 230 does not provide blanket immunity to platforms for civil claims. Platforms that have a direct hand in posting illegal sex trafficking ads can be held liable in civil court.

SESTA is not narrowly targeted. It would open up online platforms to a "deluge" (Sen. Blumenthal's words) of state criminal prosecutions and federal and state civil claims based on user-generated content.

____

7:45 a.m.: Sen. Nelson asks: why aren't we doing everything we can to fight sex trafficking?

We agree. That's why it's such a shame that Congress is putting its energy into enacting a measure that would not fight sex traffickers. In her letter to the Committee, anti-trafficking advocate (and herself a trafficking victim) Kristen DiAngelo outlines several proposals that Congress could take to fight trafficking: for example, enacting protective measures to make it easier for sex workers to report traffickers.

Undermining Section 230 is not the right response. It's a political bait-and-switch.

____

7:33 am: The hearing is beginning now. You can watch it at the Commerce Committee website.

____

There’s a bill in Congress that would be a disaster for free speech online. The Senate Committee on Commerce, Science, and Transportation is holding a hearing on that bill, and we’ll be blogging about it as it happens.

The Stop Enabling Sex Traffickers Act (SESTA) might sound virtuous, but it’s the wrong solution to a serious problem. The authors of SESTA say it’s designed to fight sex trafficking, but the bill wouldn’t punish traffickers. What it would do is threaten legitimate online speech.

Join us at 7:30 a.m. Pacific time (10:30 Eastern) on Tuesday, right here and on the @EFFLive Twitter account. We’ll let you know how to watch the hearing, and we’ll share our thoughts on it as it happens. In the meantime, please take a moment to tell your members of Congress to Stop SESTA.

Take Action

Tell Congress: Stop SESTA.

Categories: Aggregated News

The Cybercrime Convention's New Protocol Needs to Uphold Human Rights

eff.org - Tue, 19/09/2017 - 09:10

As part of an ongoing attempt to help law enforcement obtain data across international borders, the Council of Europe’s Cybercrime Convention—finalized in the weeks following 9/11, and ratified by the United States and over 50 countries around the world—is back on the global lawmaking agenda. This time, the Council’s Cybercrime Convention Committee (T-CY) has initiated a process to draft a second additional protocol to the Convention—a new text which could allow direct foreign law enforcement access to data stored in other countries’ territories. EFF has joined EDRi and a number of other organizations in a letter to the Council of Europe, highlighting some anticipated concerns with the upcoming process and seeking to ensure civil society concerns are considered in the new protocol. This new protocol needs to preserve the Council of Europe’s stated aim to uphold human rights, not undermine privacy or the integrity of our communication networks.

How the Long Arm of Law Reaches into Foreign Servers

Thanks to the internet, individuals and their data increasingly reside in different jurisdictions: your email might be stored on a Google server in the United States, while your shared Word documents might be stored by Microsoft in Ireland. Law enforcement agencies across the world have sought to gain access to this data, wherever it is held. That means police in one country frequently seek to extract personal, private data from servers in another.

Currently, the primary international mechanism for facilitating governmental cross border data access is the Mutual Legal Assistance Treaty (MLAT) process, a series of treaties between two or more states that create a formal basis for cooperation between designated authorities of signatories. These treaties typically include some safeguards for privacy and due process, most often the safeguards of the country that hosts the data.

The MLAT regime includes steps to protect privacy and due process, but frustrated agencies have increasingly sought to bypass it, by either cross-border hacking, or leaning on large service providers in foreign jurisdictions to hand over data voluntarily.

The legalities of cross-border hacking remain very murky, and its operation is the very opposite of transparent and proportionate. Meanwhile, voluntary cooperation between service providers and law enforcement occurs outside the MLAT process and without any clear accountability framework. The primary window of insight into its scope and operation is the annual Transparency Reports voluntarily issued by some companies such as Google and Twitter.

Hacking often blatantly ignores the laws and rights of a foreign state, but voluntary data handovers can be used to bypass domestic legal protections too.  In Canada, for example, the right to privacy includes rigorous safeguards for online anonymity: private Internet companies are not permitted to identify customers without prior judicial authorization. By identifying often sensitive anonymous online activity directly through the voluntary cooperation of a foreign company not bound by Canadian privacy law, law enforcement agents can effectively bypass this domestic privacy standard.

Faster, but not Better: Bypassing MLAT

The MLAT regime has been criticized as slow and inefficient. Law enforcement officers have claimed they have to wait anywhere from six to ten months (the reported average time frame for receiving data through an MLAT request) for data necessary to their local investigations. Much of this delay, however, is attributable to a lack of adequate resources, streamlining, and prioritization for the huge increase in MLAT requests for data held in the United States, plus the absence of adequate training for law enforcement officers seeking to rely on another state’s legal search and seizure powers.

Instead of just working to make the MLAT process more effective, the T-CY committee is seeking to create a parallel mechanism for cross-border cooperation. While the process is still in its earliest stages, many are concerned that the resulting proposals will replicate many of the problems in the existing regime, while adding new ones.

What the New Protocol Might Contain

The Terms of Reference for the drafting of this new second protocol reveal some areas that may be included in the final proposal.

Simplified mechanisms for cross border access

T-CY has flagged a number of new mechanisms it believes will streamline cross-border data access. The terms of reference mention a “simplified regime” for legal assistance with respect to subscriber data. Such a regime could be highly controversial if it compelled companies to identify anonymous online activity without prior judicial authorization. The terms of reference also envision the creation of “international production orders.” Presumably these would be orders issued by one court under its own standards, but that must be respected by Internet companies in other jurisdictions. Such mechanisms could be problematic where they do not respect the privacy and due process rights of both jurisdictions.

Direct cooperation

The terms of reference also call for "provisions allowing for direct cooperation with service providers in other jurisdictions with regard to requests for [i] subscriber information, [ii] preservation requests, and [iii] emergency requests." These mechanisms would be permissive, clearing the way for companies in one state to voluntarily cooperate with certain types of requests issued by another, and even in the absence of any form of judicial authorization.

Each of the proposed direct cooperation mechanisms could be problematic. Preservation requests are not controversial per se. Companies often have standard retention periods for different types of data sets. Preservation orders are intended to extend these so that law enforcement have sufficient time to obtain proper legal authorization to access the preserved data. However, preservation should not be undertaken frivolously. It can carry an accompanying stigma, and exposes affected individuals’ data to greater risk if a security breach occurs during the preservation period. This is why some jurisdictions require reasonable suspicion and court orders as requirements for preservation orders.

Direct voluntary cooperation on emergency matters is challenging as well. In such instances there is little time to engage the judicial apparatus, and most states recognize direct access to private customer data in emergency situations, but such access can still be subject to controversial overreach. This potential for overreach, and even abuse, becomes far higher where there is a disconnect between standards in the requesting and responding jurisdictions.

Direct cooperation in identifying customers can be equally controversial. Anonymity is critical to privacy in digital contexts. Some data protection laws (such as Canada’s federal privacy law) prevent Internet companies from voluntarily providing subscriber data to law enforcement.

Safeguards

The terms of reference also envision the adoption of “safeguards.” The scope and nature of these will be critical. Indeed, one of the strongest criticisms of the original Cybercrime Convention has been its lack of specific protections and safeguards for privacy and other human rights. The EDRi letter calls for adherence to the Council of Europe’s data protection regime, Convention 108, as a minimum prerequisite to participation in the envisioned regime for cross-border access, which would provide some basis for shared privacy protection. The letter also calls for detailed statistical reporting and other safeguards.

What’s next?

On 18 September, the T-CY Bureau will meet with European Digital Rights (EDRi) to discuss the protocol. The first meeting of the Drafting Group will be held on 19 and 20 September. The draft Protocol will be prepared and finalized by the T-CY, in closed session.

Law enforcement agencies are granted extraordinary powers to invade privacy in order to investigate crime. This proposed second protocol to the Cybercrime Convention must ensure that the highest privacy standards and due process protections adopted by signatory states remain intact.

We believe that the Council of Europe T-CY Committee — Netherlands, Romania, Canada, the Dominican Republic, Estonia, Mauritius, Norway, Portugal, Sri Lanka, Switzerland, and Ukraine — should concentrate first on fixes to the existing MLAT process, and they should ensure that this new initiative does not become an exercise in harmonization to the lowest common denominator of international privacy protection. We'll be keeping track of what happens next.

Categories: Aggregated News

EFF to Court: The First Amendment Protects the Right to Record First Responders

eff.org - Tue, 19/09/2017 - 08:57

The First Amendment protects the right of members of the public to record first responders addressing medical emergencies, EFF argued in an amicus brief filed in the federal trial court for the Northern District of Texas. The case, Adelman v. DART, concerns the arrest of a Dallas freelance press photographer for criminal trespass after he took photos of a man receiving emergency treatment in a public area.

EFF’s amicus brief argues that people frequently use electronic devices to record and share photos and videos. This often includes newsworthy recordings of on-duty police officers and emergency medical services (EMS) personnel interacting with members of the public. These recordings have informed the public’s understanding of emergencies and first responder misconduct.

EFF’s brief was joined by a broad coalition of media organizations: the Freedom of the Press Foundation, the National Press Photographers Association, the PEN American Center, the Radio and Television Digital News Association, Reporters Without Borders, the Society of Professional Journalists, the Texas Association of Broadcasters, and the Texas Press Association.

Our local counsel are Thomas Leatherbury and Marc Fuller of Vinson & Elkins L.L.P.

EFF’s new brief builds on our amicus brief filed last year before the Third Circuit Court of Appeals in Fields v. Philadelphia. There, we successfully argued that the First Amendment protects the right to use electronic devices to record on-duty police officers.

Adelman, a freelance journalist, has provided photographs to media outlets for nearly 30 years. He heard a call for paramedics to respond to a K2 overdose victim at a Dallas Area Rapid Transit (“DART”) station. When he arrived, he believed the incident might be of public interest and began photographing the scene. A DART police officer demanded that Adelman stop taking photos. Despite Adelman’s assertion that he was well within his constitutional rights, the DART officer, with approval from her supervisor, arrested Adelman for criminal trespass.

Adelman sued the officer and DART. EFF’s amicus brief supports his motion for summary judgment.

Categories: Aggregated News

Security Education: What's New on Surveillance Self-Defense

eff.org - Tue, 19/09/2017 - 06:36

Since 2014, our digital security guide, Surveillance Self-Defense (SSD), has taught thousands of Internet users how to protect themselves from surveillance, with practical tutorials and advice on the best tools and expert-approved best practices. After hearing growing concerns among activists following the 2016 US presidential election, we pledged to build, update, and expand SSD and our other security education materials to better advise people, both within and outside the United States, on how to protect their online digital privacy and security.

While there’s still work to be done, here’s what we’ve been up to over the past several months.

SSD Guide Audit

SSD is consistently updated based on evolving technology, current events, and user feedback, but this year our SSD guides are going through a more in-depth technical and legal review to ensure they’re still relevant and up-to-date. We’ve also put our guides through a "simple English" review in order to make them more usable for digital security novices and veterans alike. We've worked to make them a little less jargon-filled, and more straightforward. That helps everyone, whether English is their first language or not. It also makes translation and localization easier: that's important for us, as SSD is maintained in eleven languages.

Many of these changes are based on reader feedback. We'd like to thank everyone for all the messages you've sent and encourage you to continue providing notes and suggestions, which helps us preserve SSD as a reliable resource for people all over the world. Please keep in mind that some feedback may take longer to incorporate than others, so if you've made a substantive suggestion, we may still be working on it!

As of today, we’ve updated the following guides and documents:

Assessing your Risks

Formerly known as "Threat Modeling," our Assessing your Risks guide was updated to be less intimidating to those new to digital security. Threat modeling is the primary and most important thing we teach at our security trainings, and because it’s such a fundamental skill, we wanted to ensure all users were able to grasp the concept. This guide walks users through how to conduct their own personal threat modeling assessment. We hope users and trainers will find it useful.

SSD Glossary Updates

SSD hosts a glossary of technical terms that users may encounter when using the security guide. We’ve added new terms and intend to expand this resource over the coming months.

How to: Avoid Phishing Attacks

With new updates, this guide helps users identify phishing attacks when they encounter them and delves deeper into the types of phishing attacks that are out there. It also outlines five practical ways users can protect themselves against such attacks.

One new tip we added suggests using a password manager with autofill. Password managers that auto-fill passwords keep track of which sites those passwords belong to. While it’s easy for a human to be tricked by fake login pages, password managers are not tricked in the same way. Check out the guide for more details, and for other tips to help defend against phishing.
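The reason autofill resists phishing can be sketched in a few lines: the manager compares the page's exact origin against the origin it saved, a comparison a lookalike domain cannot pass. The vault contents and URLs below are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical vault mapping exact origins to saved credentials.
VAULT = {"https://bank.example.com": ("alice", "hunter2")}

def autofill(page_url: str):
    """Offer credentials only when the page's origin exactly matches the stored one.

    A human can be fooled by a lookalike such as bank.example.com.evil.net;
    an exact origin comparison is not.
    """
    parsed = urlparse(page_url)
    origin = f"{parsed.scheme}://{parsed.hostname}"
    return VAULT.get(origin)  # None for any phishing lookalike

print(autofill("https://bank.example.com/login"))           # ('alice', 'hunter2')
print(autofill("https://bank.example.com.evil.net/login"))  # None
```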

How to: Use Tor

We updated How to: Use Tor for Windows and How to: Use Tor for macOS and added a new How to: Use Tor for Linux guide to SSD. These guides all include new screenshots and step-by-step instructions for how to install and use the Tor Browser—perfect for people who might need occasional anonymity and privacy when accessing websites.

How to: Install Tor Messenger (beta) for macOS

We've added two new guides on installing and using Tor Messenger for instant communications. In addition to routing your chats over the Tor network, which hides your location and can protect your anonymity, Tor Messenger ensures messages are sent strictly with Off-the-Record (OTR) encryption. This means your chats with friends will be readable only by them—not a third party or service provider. Finally, we believe Tor Messenger employs security best practices where other XMPP messaging apps fall short. We plan to add installation guides for Windows and Linux in the future.

Other guides we've updated include circumventing online censorship, and using two-factor authentication.

What’s coming up?

Continuation of our audit: This audit is ongoing, so stay tuned for more security guide updates over the coming months, as well as new additions to the SSD glossary.

Translations: As we continue to audit the guides, we’ll be updating our translated content. If you’re interested in volunteering as a translator, check out EFF’s Volunteer page.

Training materials: Nothing gratifies us more than hearing that someone used SSD to teach a friend or family member how to make stronger passwords, or how to encrypt their devices. While SSD was originally intended to be a self-teaching resource, we're working towards expanding the guide with resources for users to lead their friends and neighbors in healthy security practices. We’re working hard to ensure this is done in coordination with the powerful efforts of similar initiatives, and we seek to support, complement, and add to that collective body of knowledge and practice.

To that end, we’ve interviewed dozens of US-based and international trainers about what learners struggle with, their teaching techniques, the types of materials they use, and the kinds of educational content and resources they want. We’re also conducting frequent critical assessments with learners and trainers, including regular live-testing of our workshop content and user-testing evaluations of the SSD website.

It’s been humbling to observe where beginners have difficulty learning concepts or tools, and to hear where trainers struggle using our materials. With their feedback fresh in mind, we continue to iterate on the materials and curriculum.

Over the next few months, we are rolling out new content for a teacher’s edition of SSD, intended for short awareness-raising sessions of one to four hours. If you’re interested in testing our early draft digital security educational materials and providing feedback on how they worked, please fill out this form by September 30. We can’t wait to share them with you.

 

Categories: Aggregated News

In A Win For Privacy, Uber Restores User Control Over Location-Sharing

eff.org - Tue, 19/09/2017 - 05:14

After Uber made an unfortunate change to its privacy settings last year, we are glad to see that the company has restored settings that empower its users to make choices about sharing their location information.

Last December, an Uber update restricted users' location-sharing choices to "Always" or "Never," removing the more fine-grained "While Using" setting. This meant that anyone who wanted to use Uber had to either share their location information with the app at all times or give up core functionality. In particular, it meant that riders would be tracked for five minutes after being dropped off.

Now, the "While Using" setting is back—and Uber says the post-ride tracking will end even for users who choose the "Always" setting. We are glad to see Uber reverting back to giving users more control over their location privacy, and hope it will stick this time. EFF recommends that all users manually check that their Uber location privacy setting is on "While Using"after they receive the update.

1. Open the Uber app, and press the three horizontal lines on the top left to open the sidebar.

2. Once the sidebar is open, press Settings.

3. Scroll to the bottom of the settings page to select Privacy Settings.

4. In your privacy settings, select Location.

5. In Location, check whether it says “Always.” If it does, click to change it.

6. Change your location setting to "While Using" or "Never". Note that "Never" will require you to manually enter your pickup address every time you request a ride.

Azure Confidential Computing Heralds the Next Generation of Encryption in the Cloud

eff.org - Tue, 19/09/2017 - 03:37

For years, EFF has commended companies who make cloud applications that encrypt data in transit. But soon, the new gold standard for cloud application encryption will be the cloud provider never having access to the user’s data—not even while performing computations on it.

Microsoft has become the first major cloud provider to offer developers the ability to build their applications on top of Intel’s Software Guard Extensions (SGX) technology, making Azure home to “the first SGX-capable servers in the public cloud.” Azure customers in Microsoft’s Early Access program can now begin to develop applications with the “confidential computing” technology.

Intel SGX uses protections baked into the hardware to ensure that data remains secure, even from the platform it’s running on. That means that an application that protects its secrets inside SGX is protecting them not just from other applications running on the system, but from the operating system, the hypervisor, and even Intel’s Management Engine, an extremely privileged coprocessor that we’ve previously warned about.

Cryptographic methods of computing on encrypted data are still an active area of research, with most methods still too inefficient, or leaking too much data, to see practical use in industry. Secure enclaves like SGX, also known as Trusted Execution Environments (TEEs), offer an alternative path for applications looking to compute over encrypted data. For example, a messaging service whose server uses secure enclaves offers guarantees similar to those of end-to-end encrypted services. But whereas an end-to-end encrypted messaging service would have to use client-side search, or accept either side channel leakage or inefficiency to implement server-side search, by using an enclave it can provide server-side search functionality with always-encrypted guarantees at little additional computational cost. The same is true for the classic challenge of changing the key a ciphertext is encrypted under without ever decrypting it, known as proxy re-encryption. Many problems that have challenged cryptographers for decades to solve efficiently and without leakage are solvable instead by a sufficiently robust secure enclave ecosystem.
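As a rough mental model of that trust boundary (a toy sketch only, not real SGX — actual enclaves involve hardware memory encryption, remote attestation, and sealed keys; the throwaway XOR-based cipher and all names here are invented for illustration), picture the enclave as a search routine holding a key the host process never sees, while the host stores only ciphertext:

```python
# Conceptual sketch of enclave-style server-side search over encrypted
# data. The "enclave" is a closure that captures the key; the host only
# ever handles ciphertext. Toy cipher -- never use for real secrecy.
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with a SHA-256-derived keystream; symmetric, so the same
    # function both encrypts and decrypts. Illustration only.
    ks = (hashlib.sha256(key + i.to_bytes(8, "big")).digest()[0]
          for i in range(len(data)))
    return bytes(b ^ k for b, k in zip(data, ks))

def make_enclave(key: bytes):
    # Everything inside this closure stands in for code running inside
    # the hardware-protected enclave.
    def search(ciphertext: bytes, term: str) -> bool:
        return term in toy_cipher(key, ciphertext).decode()
    return search

key = b"sealed-enclave-key"            # never visible to the host in real SGX
ciphertext = toy_cipher(key, b"meet at noon")
enclave_search = make_enclave(key)

assert ciphertext != b"meet at noon"   # the host sees only ciphertext
assert enclave_search(ciphertext, "noon")
assert not enclave_search(ciphertext, "midnight")
```

In a real deployment the key exists only inside the CPU's protected memory; the closure here merely stands in for that hardware-enforced boundary.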

While there is great potential here, SGX is still a relatively new technology, meaning that security vulnerabilities are still being discovered as more research is done. Memory corruption vulnerabilities within enclaves can be exploited by classic attack mechanisms like return-oriented programming (ROP). Various side channel attacks have been discovered, some of which are mitigated by a growing host of protective techniques. Promisingly, Microsoft’s press release teases that they’re “working with Intel and other hardware and software partners to develop additional TEEs and will support them as they become available.” This could indicate that they’re working on developing something like Sanctum, which isolates caches by trusted application, reducing a major side channel attack surface. Until these issues are fully addressed, a dedicated attacker could recover some or all of the data protected by SGX, but it’s still a massive improvement over not using hardware protection at all.

The technology underlying Azure Confidential Computing is not yet perfect, but it's efficient enough for practical usage, stops whole classes of attacks, and is available today. EFF applauds this giant step towards making encrypted applications in the cloud feasible, and we look forward to seeing cloud offerings from major providers like Amazon and Google follow suit. Secure enclaves have the potential to be a new frontier in offering users privacy in the cloud, and it will be exciting to see the applications that developers build now that this technology is becoming more widely available.

An open letter to the W3C Director, CEO, team and membership

eff.org - Tue, 19/09/2017 - 02:50

Dear Jeff, Tim, and colleagues,

In 2013, EFF was disappointed to learn that the W3C had taken on the project of standardizing “Encrypted Media Extensions,” an API whose sole function was to provide a first-class role for DRM within the Web browser ecosystem. By doing so, the organization offered the use of its patent pool, its staff support, and its moral authority to the idea that browsers can and should be designed to cede control over key aspects from users to remote parties.

When it became clear, following our formal objection, that the W3C's largest corporate members and leadership were wedded to this project despite strong discontent from within the W3C membership and staff, their most important partners, and other supporters of the open Web, we proposed a compromise. We agreed to stand down regarding the EME standard, provided that the W3C extend its existing IPR policies to deter members from using DRM laws in connection with the EME (such as Section 1201 of the US Digital Millennium Copyright Act or European national implementations of Article 6 of the EUCD) except in combination with another cause of action.

This covenant would have allowed the W3C's large corporate members to enforce their copyrights. Indeed, it would have kept intact every legal right to which entertainment companies, DRM vendors, and their business partners can otherwise lay claim. The compromise would merely have restricted their ability to use the W3C's DRM to shut down legitimate activities, like research and modifications, that require circumvention of DRM. It would have signaled to the world that the W3C wanted to make a difference in how DRM was enforced: that it would use its authority to draw a line between DRM as an acceptable optional technology and DRM as an excuse to undermine legitimate research and innovation.

More directly, such a covenant would have helped protect the key stakeholders, present and future, who both depend on the openness of the Web, and who actively work to protect its safety and universality. It would offer some legal clarity for those who bypass DRM to engage in security research to find defects that would endanger billions of web users; or who automate the creation of enhanced, accessible video for people with disabilities; or who archive the Web for posterity. It would help protect new market entrants intent on creating competitive, innovative products, unimagined by the vendors locking down web video.

Despite the support of W3C members from many sectors, the leadership of the W3C rejected this compromise. The W3C leadership countered with proposals — like the chartering of a nonbinding discussion group on the policy questions that was not scheduled to report back until long after the EME ship had sailed — that would still have left researchers, governments, archivists, and security experts unprotected.

The W3C is a body that ostensibly operates on consensus. Nevertheless, as the coalition in support of a DRM compromise grew and grew — and the large corporate members continued to reject any meaningful compromise — the W3C leadership persisted in treating EME as a topic that could be decided by one side of the debate. In essence, a core of EME proponents was able to impose its will on the Consortium, over the wishes of a sizeable group of objectors — and every person who uses the web. The Director decided to personally override every single objection raised by the members, articulating several benefits that EME offered over the DRM that HTML5 had made impossible.

But those very benefits (such as improvements to accessibility and privacy) depend on the public being able to exercise rights they lose under DRM law — which meant that without the compromise the Director was overriding, none of those benefits could be realized, either. That rejection prompted the first appeal against the Director in W3C history.

In our campaigning on this issue, we have spoken to many, many members' representatives who privately confided their belief that the EME was a terrible idea (generally they used stronger language) and their sincere desire that their employer wasn't on the wrong side of this issue. This is unsurprising. You have to search long and hard to find an independent technologist who believes that DRM is possible, let alone a good idea. Yet, somewhere along the way, the business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool's errand.

We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack-surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they’ll be able to ensure no one ever subjects them to the same innovative pressures.

So we'll keep fighting to keep the web free and open. We'll keep suing the US government to overturn the laws that make DRM so toxic, and we'll keep bringing that fight to the world's legislatures that are being misled by the US Trade Representative to instigate local equivalents to America's legal mistakes.

We will renew our work to battle the media companies that fail to adapt videos for accessibility purposes, even though the W3C squandered the perfect moment to exact a promise to protect those who are doing that work for them.

We will defend those who are put in harm's way for blowing the whistle on defects in EME implementations.

It is a tragedy that we will be doing that without our friends at the W3C, and with the world believing that the pioneers and creators of the web no longer care about these matters.

Effective today, EFF is resigning from the W3C.

Thank you,

Cory Doctorow
Advisory Committee Representative to the W3C for the Electronic Frontier Foundation

California Legislature Sells Out Our Data to ISPs

eff.org - Sun, 17/09/2017 - 00:10

In the dead of night, the California Legislature shelved legislation that would have protected every Internet user in the state from having their data collected and sold by ISPs without their permission. By failing to pass A.B. 375, the legislature demonstrated that they put the profits of Verizon, AT&T, and Comcast over the privacy rights of their constituents.

Earlier this year, the Republican majority in Congress repealed the strong privacy rules issued by the Federal Communications Commission in 2016, which required ISPs to get affirmative consent before selling our data. But while Congressional Democrats fought to protect our personal data, the Democratic-controlled California legislature did not follow suit. Instead, they kowtowed to an aggressive lobbying campaign from telecommunications corporations and Internet companies that included spurious claims and false social media advertisements about cybersecurity.

“It is extremely disappointing that the California legislature failed to restore broadband privacy rights for residents in this state in response to the Trump Administration and Congressional efforts to roll back consumer protection,” EFF Legislative Counsel Ernesto Falcon said. “Californians will continue to be denied the legal right to say no to their cable or telephone company using their personal data for enhancing already high profits. Perhaps the legislature needs to spend more time talking to the 80% of voters that support the goal of A.B. 375 and less time with Comcast, AT&T, and Google's lobbyists in Sacramento.” 

All hope is not lost, because the bill is only stalled for the rest of the year. We can raise it again in 2018.

A.B. 375 was introduced late in the session; that it made it so far in the process so quickly demonstrates that there are many legislators who are all-in on privacy.  In January, EFF will build off this year's momentum with a renewed push to move A.B. 375 to the governor's desk. Mark your calendar and join us. 

One Last Chance for Police Transparency in California

eff.org - Sat, 16/09/2017 - 01:52

As the days wind down for the California legislature to pass bills, transparency advocates have seen landmark measures fall by the wayside. Without explanation, an Assembly committee shelved legislation that would have shined light on police use of surveillance technologies, including a requirement that police departments seek approval from their city councils. The legislature also gutted a key reform to the California Public Records Act (CPRA) that would’ve allowed courts to fine agencies that improperly thwart requests for government documents. 

But there is one last chance for California to improve the public’s right to access police records. S.B. 345 would require every law enforcement agency in the state to publish on its website all “current standards, policies, practices, operating procedures, and education and training materials” by January 1, 2019. The legislation would cover all materials that would be otherwise available through a CPRA request.

S.B. 345 is now on Gov. Jerry Brown's desk, and he should sign it immediately. 

Take Action

Tell Gov. Brown to sign S.B. 345 into law

There are two main reasons EFF is supporting this bill. 

The first is obvious: in order to hold law enforcement accountable, we need to understand the rules that officers are playing by. For privacy advocates, access to materials about advanced surveillance technologies—such as automated license plate readers, facial recognition, drones, and social media monitoring—will lead to better and more informed debates over policy.  The bill also would strengthen the greater police accountability movement, by proactively releasing policies and training about use of force, deaths in custody, body-worn cameras, and myriad other controversial police tactics and procedures.  

The second reason is more philosophical: we believe that rather than putting the onus on the public to always file formal records requests, government agencies should automatically upload their records to the Internet whenever possible. S.B. 345 creates openness by default for hundreds of agencies across the state.

To think of it another way: S.B. 345 is akin to the legislature sending its own public records request to every law enforcement agency in the state. 

Unlike other measures EFF has supported this session, S.B. 345 has not drawn strong opposition from law enforcement. In fact, only the California State Sheriffs’ Association is in opposition, arguing that the bill could require the disclosure of potentially sensitive information. This is incorrect, since the bill would only require agencies to publish records that would already be available under the CPRA.  The claim is further undercut by the fact that eight organizations representing law enforcement have come out in support of the bill, including the California Narcotics Officers Association and the Association of Deputy District Attorneys. 

The bill isn’t perfect. As written, its enforcement mechanisms are vague, and it’s unclear what consequences, if any, agencies will face if they fail to post these records in a little more than a year. In addition, agencies may withhold or redact policies too aggressively, as is often the case with responses to traditional public records requests. Nevertheless, EFF believes that even the incremental measure contained in the bill will help pave the way for long-term transparency reforms.

Join us in urging Gov. Jerry Brown to sign this important bill. 

We're Asking the Copyright Office to Protect Your Right To Remix, Study, and Tinker With Digital Devices and Media

eff.org - Fri, 15/09/2017 - 04:39

Who controls your digital devices and media? If it's not you, why not? EFF has filed new petitions with the Copyright Office to protect people in the United States against legal threats when they take control of their devices and media. We’re also seeking broader, better protection for security researchers and video creators against threats from Section 1201 of the Digital Millennium Copyright Act.

DMCA 1201 is a deeply flawed and unconstitutional law. It bans “circumvention” of access controls on copyrighted works, including software, and bans making or distributing tools that circumvent such digital locks. In effect, it lets hardware and software makers, along with major entertainment companies, control how your digital devices are allowed to function and how you can use digital media. It creates legal risks for security researchers, repair shops, artists, and technology users.

We’re fighting DMCA 1201 on many fronts, including a lawsuit to have the law struck down as unconstitutional. We’re also asking Congress to change the law. And every three years we petition the U.S. Copyright Office for temporary exemptions for some of the most important activities this law interferes with. This year, we’re asking the Copyright Office, along with the Librarian of Congress, to expand and simplify the exemptions they granted in 2015. We’re asking them to give legal protection to these activities:

  • Repair, diagnosis, and tinkering with any software-enabled device, including “Internet of Things” devices, appliances, computers, peripherals, toys, vehicles, and environmental automation systems;
  • Jailbreaking personal computing devices, including smartphones, tablets, smartwatches, and personal assistant devices like the Amazon Echo and the forthcoming Apple HomePod;
  • Using excerpts from video discs or streaming video for criticism or commentary, without the narrow limitations on users (noncommercial vidders, documentary filmmakers, certain students) that the Copyright Office now imposes;
  • Security research on software of all kinds, which can be found in consumer electronics, medical devices, vehicles, and more;
  • Lawful uses of video encrypted using High-bandwidth Digital Content Protection (HDCP, which is applied to content sent over the HDMI cables used by home video equipment).

Over the next few months, we’ll be presenting evidence to the Copyright Office to support these exemptions. We’ll also be supporting other exemptions, including one for vehicle maintenance and repair that was proposed by the Auto Care Association and the Consumer Technology Association. And we’ll be helping you, digital device users, tinkerers, and creators, make your voice heard in Washington DC on this issue.

Shrinking Transparency in the NAFTA and RCEP Negotiations

eff.org - Fri, 15/09/2017 - 03:08

Provisions on digital trade are quietly being squared away in both of the two major trade negotiations currently underway—the North American Free Trade Agreement (NAFTA) renegotiation and the Regional Comprehensive Economic Partnership (RCEP) trade talks. But due to the worst-ever standards of transparency in both of these negotiations, we don’t know which provisions are on the table, which have been agreed, and which remain unresolved. The risk is that important and contentious digital issues—such as rules on copyright or software source code—might become bargaining chips in negotiation over broader economic issues including wages, manufacturing and dispute resolution, and that we would be none the wiser until after the deals have been done.

The danger of such bad compromises being made is especially acute because both deals are in trouble. Last month President Donald Trump targeted NAFTA, which includes Canada and Mexico, describing it in a tweet as "the worst trade deal ever made," which his administration "may have to terminate." At the conclusion of the second round of talks held last week in Mexico, the prospect of an agreement being concluded anytime soon seems remote. Even as a third round of talks is scheduled for Ottawa from September 23-27, 2017, concern about the agreement's future has prompted Mexico to step up efforts to boost commerce with Asia, South America, and Europe.

The same is true of the RCEP agreement, which is being spearheaded by the 10-member ASEAN bloc and its six FTA partners, and which was expected to be concluded by the end of this year. That now seems unlikely, as nations are far from agreement on key areas. Reports suggest, however, that the negotiators are targeting the e-commerce chapter as a priority area for early agreement. So far, the text specific to e-commerce has not been made publicly available, and the leaked Terms of Reference for the Working Group on e-commerce (WGEC) are the only indication of which issues could make an appearance in the RCEP. We have previously reported that the e-commerce chapter was expected to be shorter and less detailed than the chapters on goods and services. However, the secrecy of trade negotiations makes it very difficult to accurately track developments or the policy objectives being pushed through or prioritized in the negotiations.

Trade Negotiations Are Becoming Less Open, Not More

Far from adopting the enhanced measures of transparency and openness that EFF demanded and that U.S. Trade Representative Lighthizer promised to deliver, the NAFTA renegotiation process seems to be walking back from the minimal level of transparency that civil society fought hard for during the Trans-Pacific Partnership Agreement (TPP) talks. So far, the NAFTA process has had no open stakeholder meetings at its rounds to date. EFF has written a joint letter to negotiators [PDF] that has been endorsed by groups including Access Now, Creative Commons, Derechos Digitales, and OpenMedia, demanding that it reinstate stakeholder meetings, as an initial step in opening up the negotiations to greater public scrutiny.

The openness of the RCEP negotiation process has also been degrading. At a public event held during the Auckland round, the Trade Minister from New Zealand and members of the Trade Negotiating Committee (TNC) fielded questions from stakeholders using social media, and the event was live-streamed. Organizers of earlier rounds of RCEP held in South Korea and Indonesia had facilitated formal and informal meetings between negotiators and civil society organisations (CSOs). But at recent rounds the opportunities for interaction between negotiators and stakeholders have dropped. The hosting nations have also been much more restrained in their engagement and outreach. For example, at the Hyderabad round there was no press conference or official statement released by the government representatives or chapter negotiators.

A Broader Retreat from Stakeholder Inclusion?

This worrying retreat from democracy in trade negotiations mirrors a broader softening of support by governments for public participation in policy development. From a high point about a decade ago, when governments embraced a so-called “multi-stakeholder model” as the foundation of bodies such as the Internet Governance Forum (IGF), several countries that were previous supporters of this model seem to be much cooler towards it now. Consider the Xiamen Declaration which was adopted by consensus at the 9th BRICS (Brazil, Russia, India, China, South Africa) summit in China this month. Unlike previous BRICS declarations which supported a multi-stakeholder approach, the Xiamen declaration stresses the importance of state sovereignty throughout the document.

This trend is not confined to the BRICS bloc. Western governments, too, are excluding civil society voices from policy development, even while they experiment with methods for engaging directly with large corporations. In January this year, Denmark, recognizing that technological issues have become matters of foreign policy, appointed a "digitisation ambassador" to engage with tech companies such as Google and Facebook. This is a poor substitute for a fully inclusive, balanced, and accountable process that would also include Internet users and other civil society stakeholders.

Given the complexity of trade negotiations and the fast-changing pace of the digital environment, negotiators are not always equipped to negotiate fair trade deals without the means of having a broader public discussion of the issues involved. In particular, including provisions related to the digital economy in trade agreements can result in a push to negotiate on issues before they can form an understanding of potential consequences. A wide and open consultative process, ensuring a more balanced view of the issues at stake, could help.

The refusal of trade ministries such as the USTR to heed demands, whether from EFF or from Congress, to become more open and transparent suggests that it may be a long while before such an inclusive, balanced, and accountable process evolves out of trade negotiations as they exist now. But other venues for discussing digital trade, such as the IGF and the OECD, exist today and could be used rather than rushing into closed-door norm-setting. One advantage of preferring these more flexible, soft-law mechanisms for developing norms on Internet-related issues is that they provide a venue for cooperation and policy coordination without locking countries into a set of rules that may become outmoded as business models and technologies continue to evolve.

This is not the model that NAFTA or RCEP negotiators have chosen, preferring to open the door to corporate lobbyists while keeping civil society locked out. This week’s letter to the trade ministries of the United States, Canada, and Mexico calls them out on this and asks them to do better. If you are in the United States, you can also join the call for better transparency in trade negotiations by asking your representative to support the Promoting Transparency in Trade Act.

EFF Asks Court: Can Prosecutors Hide Behind Trade Secret Privilege to Convict You?

eff.org - Fri, 15/09/2017 - 00:57
California Appeals Court Urged to Allow Defense Review of DNA Matching Software

If a computer DNA matching program gives test results that implicate you in a crime, how do you know that the match is correct and not the result of a software bug? The Electronic Frontier Foundation (EFF) has urged a California appeals court to allow criminal defendants to review and evaluate the source code of forensic software programs used by the prosecution, to ensure that the wrong people don't end up behind bars, or worse, on death row.

In this case, a defendant was linked to a series of rapes by a DNA matching software program called TrueAllele. The defendant wants to examine how TrueAllele takes in a DNA sample and analyzes potential matches, as part of his challenge to the prosecution’s evidence. However, prosecutors and the manufacturers of TrueAllele’s software argue that the source code is a trade secret, and therefore should not be disclosed to anyone.

“Errors and bugs in DNA matching software are a known problem,” said EFF Staff Attorney Stephanie Lacambra. “At least two other programs have been found to have serious errors that could lead to false convictions. Additionally, different products used by different police departments can provide drastically different results. If you want to make sure the right person is imprisoned—and not running free while someone innocent is convicted—we can’t have software programs’ source code hidden away from stringent examination.”

The public has an overriding interest in ensuring the fair administration of justice, which favors public disclosure of evidence. However, in certain cases where public disclosure could be too financially damaging, the court could use a simple protective order so that only the defendant’s attorneys and experts are able to review the code. But even this level of secrecy should be the exception and not the rule.

“Software errors are extremely common across all kinds of products,” said EFF Staff Attorney Kit Walsh. “We can’t have someone’s legal fate determined by a black box, with no opportunity to see if it’s working correctly.”

For the full brief in California v. Johnson:
https://www.eff.org/document/amicus-brief-california-v-johnson

Contact:
Stephanie Lacambra, Criminal Defense Staff Attorney, stephanie@eff.org
Kit Walsh, Staff Attorney, kit@eff.org