Techie Feeds

Plugin vulnerabilities exploited in traffic monetization schemes

Malwarebytes - Tue, 03/26/2019 - 15:00

In their Website Hack Trend Report, web security company Sucuri noted that WordPress's share of the infected websites it analyzed rose to 90 percent in 2018. One aspect of Content Management System (CMS) infections that is sometimes overlooked is that attackers go after not only the CMSes themselves—WordPress, Drupal, etc.—but also their third-party plugins and themes.

While plugins are useful in providing additional features for CMS-run websites, they also increase the attack surface. Not all plugins are regularly maintained or secure, and some are even abandoned by their developers, leaving behind bugs that will never get fixed.

In the past few months, we have noticed threat actors leveraging several high-profile plugin vulnerabilities to redirect traffic toward various monetization schemes, depending on a visitor’s geolocation and other properties. The WordPress GDPR Compliance plugin vulnerability and the more recent Easy WP SMTP and Social Warfare vulnerabilities are a few examples of opportunistic attacks quickly adopted in the wild.

Redirection infrastructure

Hacked websites can be monetized in different ways, but one of the most popular is to hijack traffic and redirect visitors toward scams and exploits.

We started looking at the latest injection campaign following the notes from Sucuri’s blog post about the Social Warfare zero-day stored XSS. According to log data, the automated exploit attempts to load content from a Pastebin paste, which can be seen below. The obfuscated code reveals one of the domains used by the threat actors:

Pastebin code snippet used in automated attacks against vulnerable plugins

Our crawlers identified a redirection scheme via the same infrastructure related to these recent plugin hacks. Compromised websites are injected with heavily obfuscated code that decodes to setforconfigplease[.]com (the same domain as found in the Pastebin code).

Obfuscated code injected into hacked site
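Injections like this typically hide the callback domain behind simple string-building tricks such as character-code arrays or hex escapes. As a rough illustration, here is a minimal Python sketch of peeling back one such layer; the obfuscation scheme shown is hypothetical, not the exact code used in these attacks:

```python
# Hypothetical charcode-style obfuscation layer, similar in spirit to
# what injected scripts use to keep the callback domain out of plain text.
def decode_charcodes(codes):
    """Rebuild a string from a list of character codes."""
    return "".join(chr(c) for c in codes)

# The injected script might carry the domain as a numeric array so that
# a naive text scan of the page source never sees the string itself.
obfuscated = [115, 101, 116, 102, 111, 114, 99, 111, 110, 102, 105, 103,
              112, 108, 101, 97, 115, 101, 46, 99, 111, 109]

domain = decode_charcodes(obfuscated)  # setforconfigplease[.]com (defanged)
```

Real injections stack several such layers, which is why automated decoding, or simply observing the resulting network traffic, is often faster than manual analysis.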

The first layer of redirection goes to domains hosted on 176.123.9[.]52 and 176.123.9[.]53 that will perform the second redirect via a .tk domain. Denis from Sucuri has tracked the evolution and rotation of these domains during the past few days.

New domain used in the "Easy WP SMTP" and "Social Warfare" (and some other) attacks — redrentalservice[.]com — registered 2019-03-21. Replacement for setforconfigplease[.]com (registered on March 4). https://t.co/2RWVxhLrfb and https://t.co/lqse0IwR61

— Denis (@unmaskparasites) March 22, 2019

Based on our telemetry, the majority of users redirected in this campaign are from Brazil, followed by the US and France:

Top detections based on visitors’ country of origin

Scams, malvertising, and more

The goal of this campaign (and other similar ones) is traffic monetization. Threat actors get paid to redirect traffic from compromised sites to a variety of scams and other profit-generating schemes. Over the past few months, we have been following this active redirection campaign involving the same infrastructure described earlier.

Keeping track of any ongoing threat gives insight into the threat actor’s playbook—whether changes are big or small. Code may go through iterations, from clear text to obfuscated, or perhaps may contain new features.

While there are literally dozens of final payloads based on geolocation and browser type delivered in this campaign, we focused on a few popular ones that people are likely to encounter. By hijacking traffic from thousands of hacked websites, the crooks fingerprint and redirect their victims while trying to avoid getting blocked.

Traffic redirections by payload type

Browser lockers and tech support scams

Historically, we have seen this sub-campaign as one of the main purveyors of browser lockers used by tech support scammers. New domains with the .tk TLD are generated every few minutes to act as redirectors to browlocks. Back in October 2018, Sucuri mentioned this active campaign abusing old tagDiv themes and unpatched versions of the Smart Google Code Inserter plugin.

Browser lockers continue to be a popular social engineering tool used to scare people into thinking their computers are infected and locked up. While there is no real malware involved, there are clever bits of JavaScript that have given browser vendors headaches. The “evil cursor” is one of those tricks; it effectively prevents users from closing a tab or browser window, and was only recently fixed.

Browlock urging victims to call fake Microsoft support

Ad fraud and clickjacking

One particular case we documented deals with ad fraud via decoy sites that look like blogs to display Google Ads. This fraudulent scheme was exposed back in August, showing how traffic from hacked sites could generate $20,000 in ad revenue per month.

However, in a twist implemented shortly after, the fraudsters fooled users who attempted to close the ad, hijacking their mouse so that it clicked on the ad instead. As you move your mouse cursor toward the X, the ad banner shifts up; rather than closing the ad, your click opens it.

The crooks use CSS code dynamically appended to the page that monitors the mouse cursor and reacts when it hovers over the X. The timing is important: the click must be captured a few milliseconds later, once the ad banner has come into focus. These client-side tricks are implemented to maximize ad profits, since revenue generated from ad clicks is much higher than from impressions.

CSS code responsible for click fraud
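The underlying logic reduces to a geometric check. The Python sketch below models it; the real implementation is client-side CSS and JavaScript, and the function name, coordinates, and shift distance here are invented for illustration:

```python
def banner_offset(cursor_x, cursor_y, close_btn, shift_px=40):
    """Model of the click-fraud trick: when the cursor reaches the X,
    shift the banner up so the click lands on the ad instead.

    close_btn is (x, y, width, height) of the close button.
    Returns the vertical offset applied to the whole banner.
    """
    bx, by, bw, bh = close_btn
    over_x = bx <= cursor_x <= bx + bw
    over_y = by <= cursor_y <= by + bh
    # React only when the cursor is over the X; the page then moves the
    # banner up, putting the ad itself under the pointer for the click.
    return -shift_px if (over_x and over_y) else 0

# Cursor away from the X: the banner stays put.
assert banner_offset(10, 10, (300, 5, 20, 20)) == 0
# Cursor over the X: the banner shifts up 40px, so the click hits the ad.
assert banner_offset(305, 12, (300, 5, 20, 20)) == -40
```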

Malvertising and pop-ups

There is no end to the number of malvertising schemes criminals can deploy. A particularly sneaky one abuses Chrome’s push notifications, a feature that is a rogue advertiser’s dream: it allows websites to pop notifications in the bottom right corner of your screen even when you are not browsing the site in question. Those pop-ups tend to push snake oil PC optimizers and adult webcam solicitations.

Fake video player tricking users to accept notifications

Form scrapers and skimmers

For a brief period of time, we saw the addition of a JavaScript scraper and what appeared to be a rudimentary skimmer in some traffic chains. It is unclear what the purpose was, unless it was some kind of experiment coupled with the regular .tk redirects.

Skimmers are most commonly found on e-commerce sites, in particular those running the Magento CMS. They are probably the most lucrative way to monetize a hacked site, unless, of course, there’s no user data to steal, in which case malicious redirects are second best.

Form scraper and skimmer identified in redirection infrastructure

Website traffic as a commodity

Website security is similar to computer security in that site owners are also exposed to zero-day exploits and must patch continually. Yet without proactive protection (i.e., a web application firewall), and with site owners failing to roll out security updates in a timely manner, zero-days can be incredibly effective.

When critical vulnerabilities are discovered, it can be a matter of hours before exploitation in the wild is observed. Compromised websites turn into a commodity relied upon for various monetization schemes, which in turn feeds into the buying and selling of malicious traffic.

Malwarebytes users are protected against these scams thanks to our web-blocking capabilities. For additional protection against browser lockers, forced extensions, and other scams, we recommend our browser extension.

Indicators of compromise (IOCs)

176.123.9[.]52

redrentalservice[.]com
setforconfigplease[.]com
somelandingpage[.]com
setforspecialdomain[.]com
getmyconfigplease[.]com
getmyfreetraffic[.]com

176.123.9[.]53

verybeatifulpear[.]com
thebiggestfavoritemake[.]com
stopenumarationsz[.]com
strangefullthiggngs[.]com

simpleoneline[.]online
lastdaysonlines[.]com
cdnwebsiteforyou[.]biz

The post Plugin vulnerabilities exploited in traffic monetization schemes appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (March 18 – 24)

Malwarebytes - Mon, 03/25/2019 - 15:46

Last week on Malwarebytes Labs, we touched on the susceptibility of hospitals to phishing attacks, password reuse, the risk of side-channel attacks against interactive TV shows, and Facebook’s new and out-of-character plan to promote privacy on the platform.

Other cybersecurity news
  • A study highlighted that 20 percent of Americans do not trust anyone with the protection of their data, suffer security fatigue, and want tighter controls over how others handle and protect their personal data. (Source: Help Net Security)
  • Epic Games found themselves in hot water after multiple accusations that its Epic Games Launcher scanned and collected information about Steam users without their consent—a significant privacy red flag. They have promised to fix this. (Source: Bleeping Computer)
  • Miscreants used the tragic Boeing 737 Max crash to push spam containing a malicious .JAR file. This file installs a RAT called Houdini H-Worm and the Adwind information stealer. (Source: Bleeping Computer)
  • Meet Kiddle, the child-friendly search engine that is powered by Google Safe Search but, it turns out, is not affiliated with Google. (Source: Sophos’ Naked Security Blog)
  • A Google Photos vulnerability could have allowed hackers to track when, where, and with whom photos were taken. Good news: It’s now patched. (Source: Imperva Blog)
  • Formjacking, the stealing of information entered in forms, is on the rise. And companies should focus on it. (Source: IT World Canada)
  • Business email compromise (BEC)—or at least its core methodology—began moving from email to SMS. (Source: Agari Blog)
  • A malicious spam campaign pretending to originate from the Center for Disease Control and Prevention (CDC) contained news about a new flu pandemic. It also contained a GandCrab attachment. (Source: My Online Security)
  • Millions of users downloaded a compromised iPhone app that called out to nearly two dozen malicious servers to serve malvertising to devices. (Source: SC Magazine)
  • Learn4Life, a recovery program for at-risk teens, is teaching students about network security—something they likely wouldn’t learn in a traditional high school. (Source: PR Newswire)

Stay safe, everyone!

The post A week in security (March 18 – 24) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Researchers go hunting for Netflix’s Bandersnatch

Malwarebytes - Fri, 03/22/2019 - 15:00

A new research paper from the Indian Institute of Technology Madras explains how popular Netflix interactive show Bandersnatch could fall victim to a side-channel attack.

In 2016, Netflix began adding TLS (Transport Layer Security) to their video content to ensure strangers couldn’t eavesdrop on viewer habits. Essentially, now the videos on Netflix are hidden away behind HTTPS—encrypted and compressed.

Previously, Netflix had run into some optimization issues when trialing the new security boost, but they got there in the end—which is great for subscribers. However, this new research illustrates that even with such measures in place, snoopers can still make accurate observations about their targets.

What is Bandersnatch?

Bandersnatch is a 2018 film on Netflix that is part of the science fiction series Black Mirror, an anthology about the ways technology can have unforeseen consequences. Bandersnatch gives viewers a choose-your-own-adventure-style experience, repeatedly presenting options to perform task X or Y. Not all of them are important, but you’ll never quite be sure which will steer you to one of its 10 endings.

Charlie Brooker, the brains behind Bandersnatch and Black Mirror, was entirely aware of the new, incredibly popular wave of full motion video (FMV) games on platforms such as Steam [1], [2], [3]. Familiarity with Scott Adams text adventures and the choose your own adventure books of the ’70s and ’80s would also be a given.

No surprise, then, that Bandersnatch—essentially an interactive FMV game as a movie—became a smash hit. Also notable, continuing the video game link: It was built using Twine, a common method for piecing together interactive fiction in gaming circles.

What’s the problem?

Researchers figured out a way to determine which options were selected in any given play-through across multiple network environments. Browsers, networks, operating systems, connection types, and more were varied across 100 participants during testing.

Bandersnatch offers two choices at multiple places throughout the story. There’s a 10-second window to make that choice. If nothing is selected, it defaults to one of the options and continues on.

Under the hood, Bandersnatch is divided into multiple pieces, like a flowchart. Larger, overarching slices of script go about their business, while within those slices are smaller fragments where storyline can potentially branch out.

This is where we take a quick commercial break and introduce ourselves to JSON.

Who is JSON?

He won’t be joining us. However, JavaScript Object Notation will.

Put simply, JSON is an easily readable format for sending data between servers and web applications. In fact, it more closely resembles a notepad file than a pile of obscure code.

In Bandersnatch, there is a set of answers considered to be the default flow of the story. That data is prefetched, allowing users who choose the default option—or do nothing—to stream continuously.

When a viewer reaches the point in the story where they must make a choice, the browser sends a JSON file to let the Netflix server know. Do nothing in the 10-second window? Under the hood, the prefetched data continues to stream, and viewers continue their journey along the default storyline.

If the viewer chooses the other, non-default option, however, then the prefetched data is abandoned and a second, different type of JSON file is sent out requesting the alternate story path.

What we have here is a tale of two JSONs.

Although the traffic between the browser and Netflix’s servers is encrypted, the researchers in this latest study were able to determine which choices participants made 96 percent of the time by observing the number and type of JSON files sent.
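In simplified terms, the observation can be modeled as classifying the burst of requests at each choice point by size, since encryption hides content but not payload length. The Python sketch below is a toy version; the threshold and payload sizes are invented, and the paper's actual traffic fingerprinting is considerably more involved:

```python
def infer_choice(request_sizes, alt_threshold=4000):
    """Toy classifier for the Bandersnatch side channel.

    request_sizes: observed (encrypted) payload sizes, in bytes, sent by
    the browser at a choice point. All sizes here are hypothetical.

    The default path rides on prefetched data and produces only a small
    state callback; choosing the non-default branch additionally triggers
    a larger request for the alternate story segment.
    """
    if any(size >= alt_threshold for size in request_sizes):
        return "alternate"
    return "default"

assert infer_choice([900]) == "default"          # small callback only
assert infer_choice([900, 5200]) == "alternate"  # plus a large branch request
```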

Should we be worried?

This may not be a particularly big problem for Netflix viewers, yet. However, if threat actors could intercept and follow user choices using a similar side channel, they could build reasonable behavioral profiles of their victims.

For instance, viewers of Bandersnatch are asked questions like “Frosties or sugar-puffs?”, “Visit therapist or follow Colin?”, and “Throw tea over computer or shout at dad?”. The choices made could potentially reveal benign information, such as food and music preferences, or more sensitive intel, such as a penchant for violence or political leanings.

Just as we can’t second-guess everyone’s threat model (even for Netflix viewers), we also shouldn’t dismiss this. There are plenty of dangerous ways monitoring along these lines could be abused, encrypted traffic or not. Additionally, this is something most of us going about our business probably haven’t accounted for, much less know what to do about.

What we do know is that it’s important that content providers—such as gaming studios or streaming services—affected by this research account for it, and look at ways of obfuscating data still further.

After all, a world where your supposedly private choices are actually parseable feels very much like a Black Mirror episode waiting to happen.

The post Researchers go hunting for Netflix’s Bandersnatch appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Are hackers gonna hack anymore? Not if we keep reusing passwords

Malwarebytes - Thu, 03/21/2019 - 15:00

Enterprises have a password problem, and it’s one that is making the work of hackers a lot easier. From credential stuffing to brute force and password spraying attacks, modern hackers don’t have to do much hacking in order to compromise internal corporate networks. Instead, they log in using weak, stolen, or otherwise compromised credentials.

Take the recent case of Citrix as an example. The FBI informed Citrix that a nation-state actor had likely gained access to the company’s internal network, news that came only months after Citrix forced a password reset because it had suffered a credential-stuffing attack.

“While not confirmed, the FBI has advised that the hackers likely used a tactic known as password spraying, a technique that exploits weak passwords. Once they gained a foothold with limited access, they worked to circumvent additional layers of security,” Citrix wrote in a March 6th blog post.

Password problems abound

While a recent data privacy survey conducted by Malwarebytes found that an overwhelming majority (96 percent) of the 4,000 cross-generational respondents said online privacy is crucial, nearly a third (29 percent) admitted to reusing passwords across multiple accounts.

Survey after survey shows that passwords are the bane of enterprise security. In a recent survey conducted by Centrify, 52 percent of respondents said their organizations do not have a password vault, and one in five still aren’t using MFA for administrative privileged access.

“That’s too easy for a modern hacker,” said Torsten George, Cybersecurity Evangelist at Centrify. “Organizations can significantly harden their security posture by adopting a Zero Trust Privilege approach to secure the modern threatscape and granting least privilege access based on verifying who is requesting access, the context of the request, and the risk of the access environment.”

How hackers attack without hacking

The problem with password reuse is that malicious actors don’t need advanced tactics to gain a foothold in your network. “In many cases, first stage attacks are simple vectors such as password spraying and credential stuffing and could be avoided with proper password hygiene,” according to Daniel Smith, head of threat research at Radware.

When cybercriminals are conducting password spraying attacks, they typically scan an organization’s infrastructure for externally-facing applications and network services, such as webmail, SSO, and VPN gateways.

Because these interfaces typically have strict lockout features, malicious actors will opt for password spraying over brute force attacks, which allows them to avoid being locked out or triggering an alert to administrators.

“Password spraying is a technique that involves using a limited set of passwords like Unidesk1, test, C1trix32 or nsroot that are discovered during the recon phase and used in attempted logins for known usernames,” Smith said. “Once the user is compromised, the actors will then employ advanced techniques to deploy and spread malware to gain persistence in the network.”
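The difference between brute force and spraying comes down to loop order and volume. Below is a schematic Python sketch of the pattern Smith describes; `try_login`, the credential data, and the password list are all hypothetical stand-ins:

```python
def password_spray(usernames, passwords, try_login):
    """Schematic of password spraying: a SHORT password list is tried
    against MANY usernames, so no single account accumulates enough
    failures to trip a lockout or an alert.

    try_login(user, pw) -> bool stands in for the real auth check.
    """
    hits = []
    for pw in passwords:        # few passwords...
        for user in usernames:  # ...each tried against every account
            if try_login(user, pw):
                hits.append((user, pw))
    return hits

# Mock directory with one weak credential, echoing the article's examples.
creds = {"alice": "C1trix32", "bob": "hunter2!x9"}
found = password_spray(creds.keys(), ["Unidesk1", "C1trix32"],
                       lambda u, p: creds[u] == p)
assert found == [("alice", "C1trix32")]
```

Note the inverted loop order compared to brute force: each account sees at most one attempt per password, which is exactly what keeps per-account failure counters quiet.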

Cybercriminals have also been targeting cloud-based accounts by leveraging the Internet Message Access Protocol (IMAP) for password-spray attacks, according to Proofpoint. One tricky hitch with IMAP is that it inherently can’t support two-factor authentication, which is automatically bypassed when authenticating, said Justin Jett, director of audit and compliance for Plixer.

“Because password-spraying attacks don’t generate an alarm or lock out a user account, a hacker can continually attempt logging in until they succeed. Once they succeed, they may try to use the credentials they found for other purposes,” Jett said.
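Because each individual account sees only a failure or two, per-account lockout counters never fire. A detection heuristic instead looks at how many distinct accounts a single source fails against. Here is a hedged sketch, with an invented log format and threshold:

```python
from collections import defaultdict

def flag_spraying_sources(failed_logins, min_accounts=5):
    """failed_logins: iterable of (source_ip, username) pairs for failed
    authentication attempts. Flags sources that fail against many
    DISTINCT accounts -- the signature of spraying, which per-account
    lockout counters miss entirely.
    """
    targets = defaultdict(set)
    for ip, user in failed_logins:
        targets[ip].add(user)
    return {ip for ip, users in targets.items() if len(users) >= min_accounts}

events = [("203.0.113.9", f"user{i}") for i in range(8)]  # spraying source
events += [("198.51.100.4", "alice")] * 3                 # one user mistyping
assert flag_spraying_sources(events) == {"203.0.113.9"}
```

In practice the source address also rotates, so real detection correlates additional signals (timing, user agents, geolocation), but the distinct-account count is the core idea.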

Tightening up password security

The reality is that guessing passwords is easier for hackers than going up against technology. If we’re being honest, there is a strong chance that an attacker is already in your network, given the widespread problem of password reuse. Because passwords are used to authenticate users, any conversation about augmenting password security has to look at the bigger picture of authentication strategies.

On the one hand, it’s true that password length and complexity are critical to creating strong passwords, but making each password unique has its challenges. Password managers have proven to address the problem of remembering credentials for multiple accounts, and these tools are indeed an important piece of an overall password security strategy.
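Uniqueness is the part users skip because it is tedious by hand and trivial for software. The sketch below shows how a password manager-style tool generates a distinct random password per account; the length and alphabet are illustrative choices, not a specific product's algorithm:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a random password from a mixed alphabet using a CSPRNG,
    as a password manager would -- one unique value per account."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw1, pw2 = generate_password(), generate_password()
assert len(pw1) == 20
assert pw1 != pw2  # a collision is astronomically unlikely
```

The point is that no human memorizes these; the manager stores them, and the user remembers only one strong master passphrase.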

“The pervasiveness of password stuffing, brute force and other similar attacks shows that password length is no longer a deterrent,” said Fausto Oliveira, principal security architect at Acceptto.

Instead, Oliveira said enabling continuous authentication on privileged employee, client, and consumer accounts is one preemptive approach that can stop an attacker from gaining access to sensitive information—even if they breach the system with a brute force attack.

“It is not about a simple 123456, obvious P@55word password versus a complicated passphrase, but recognizing that all of your passwords are compromised. This includes those passwords you have not yet created, you just don’t know it yet.”

Passwords continue to be a problem because their creation and maintenance are largely the responsibility of the user. There’s no technology to change human behavior, which only exacerbates the issues of password reuse and overall poor password hygiene.

Organizations that want to tighten up their password security need to look seriously at more viable solutions than trusting users, which may include eliminating passwords altogether.

The post Are hackers gonna hack anymore? Not if we keep reusing passwords appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Facebook’s history betrays its privacy pivot

Malwarebytes - Wed, 03/20/2019 - 15:00

Facebook CEO Mark Zuckerberg proposed a radical pivot for his company this month: it would start caring—really—about privacy, building out a new version of the platform that turns Facebook less into a public, open “town square” and more into a private, intimate “living room.”

Zuckerberg promised end-to-end encryption across the company’s messaging platforms, interoperability, disappearing messages, posts, and photos for users, and a commitment to store less user data, while also refusing to put that data in countries with poor human rights records.

If carried out, these promises could bring user privacy front and center.

But Zuckerberg’s promises have exhausted users, privacy advocates, technologists, and industry experts, including those of us at Malwarebytes. Respecting user privacy makes for a better Internet, period. And Zuckerberg’s proposals are absolutely a step in the right direction. Unfortunately, there is a chasm between Zuckerberg’s privacy proposal and Facebook’s privacy success. Given Zuckerberg’s past performance, we doubt that he will actually deliver, and we blame no user who feels the same way.

The outside response to Zuckerberg’s announcement was swift and critical.

One early Facebook investor called the move a PR stunt. Veteran tech journalist Kara Swisher jabbed Facebook for a “shoplift” of a competitor’s better idea. Digital rights group Electronic Frontier Foundation said it would believe in a truly private Facebook when it sees it, and Austrian online privacy rights activist (and thorn in Facebook’s side) Max Schrems laughed at what he saw as hypocrisy: merging users’ metadata across WhatsApp, Facebook, and Instagram, and telling users it was for their own, private good.

The biggest obstacle to believing Zuckerberg’s words? For many, it’s Facebook’s history.

The very idea of a privacy-protective Facebook goes so against the public’s understanding of the company that Zuckerberg’s comments taste downright unpalatable. These promises are coming from a man whose crisis-management statements often lack the words “sorry” or “apology.” A man who, when his company was trying to contain its own understanding of a foreign intelligence disinformation campaign, played would-be president, touring America for a so-called “listening tour.”

Users, understandably, expect better. They expect companies to protect their privacy. But can Facebook actually live up to that?

“The future of the Internet”

Zuckerberg opens his appeal with a shaky claim—that he has focused his attention in recent years on “understanding and addressing the biggest challenges facing Facebook.” According to Zuckerberg, “this means taking positions on important issues concerning the future of the Internet.”

Facebook’s vision of the future of the Internet has, at times, been largely positive. Facebook routinely supports net neutrality, and last year, the company opposed a dangerous, anti-encryption, anti-security law in Australia that could force companies around the world to comply with secret government orders to spy on users.

But Facebook’s lobbying record also reveals a future of the Internet that is, for some, less secure.

Last year, Facebook supported one half of a pair of sibling bills that eventually merged into one law. The law followed a convoluted, circuitous route, but its impact today is clear: Consensual sex workers have found their online communities wiped out, and are once again pushed into the streets, away from guidance and support, and potentially back into the hands of predators.

“The bill is killing us,” said one sex worker to The Huffington Post.

Though the law was ostensibly meant to protect sex trafficking victims, it has only made their lives worse, according to some sex worker advocates.

On March 21, 2018, the US Senate passed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA). The bill was the product of an earlier version of its own namesake and a separate, related bill, the Stop Enabling Sex Traffickers Act (SESTA). Despite clear warnings from digital rights groups and sex positive advocates, Facebook supported SESTA in November 2017. According to the New York Times, Facebook made this calculated move to curry favor with some of its fiercest critics in US politics.

“[The] sex trafficking bill was championed by Senator John Thune, a Republican of South Dakota who had pummeled Facebook over accusations that it censored conservative content, and Senator Richard Blumenthal, a Connecticut Democrat and senior commerce committee member who was a frequent critic of Facebook,” the article said. “Facebook broke ranks with other tech companies, hoping the move would help repair relations on both sides of the aisle, said two congressional staffers and three tech industry officials.”

Last October, the bill came back to haunt the social media giant: a Jane Doe plaintiff in Texas sued Facebook for failing to protect her from sex traffickers.

Further in Zuckerberg’s essay, he promises that Facebook will continue to refuse to build data centers in countries with poor human rights records.

Zuckerberg’s concern is welcome and his cautions are well-placed. As the Internet has evolved, so has data storage. Users’ online profiles, photos, videos, and messages can travel across various servers located in countries around the world, away from a company’s headquarters. But this development poses a challenge. Placing people’s data in countries with fewer privacy protections—and potentially oppressive government regimes—puts everyone’s private, online lives at risk. As Zuckerberg said:

“[S]toring data in more countries also establishes a precedent that emboldens other governments to seek greater access to their citizen’s data and therefore weakens privacy and security protections for people around the world,” Zuckerberg said.

But what Zuckerberg says and what Facebook supports are at odds.

Last year, Facebook supported the CLOUD Act, a law that lowered privacy protections around the world by allowing foreign governments to request their citizens’ online data directly from companies. It is a law that, according to the Electronic Frontier Foundation, could result in UK police inadvertently getting their hands on Slack messages written by an American, and then forwarding those messages to US police, who could then charge that American with a crime—all without a warrant.

The same day that the CLOUD Act was first introduced as a bill, it received immediate support from Facebook, Google, Microsoft, Apple, and Oath (formerly Yahoo). Digital rights groups, civil liberties advocates, and human rights organizations directly opposed the bill soon after. None of their efforts swayed the technology giants. The CLOUD Act became law just months after its introduction.

While Zuckerberg’s push to keep data out of human-rights-abusing countries is a step in the right direction for protecting global privacy, his company supported a law that could result in the opposite. The CLOUD Act does not meaningfully hinge on a country’s human rights record. Instead, it hinges on backroom negotiations between governments, away from public view.

The future of the Internet is already here, and Facebook is partially responsible for the way it looks.

Skepticism over Facebook’s origin story 2.0

For years, Zuckerberg told anyone who would listen—including US Senators hungry for answers—that he started Facebook in his Harvard dorm room. This innocent retelling involves a young, doe-eyed Zuckerberg who doesn’t care about starting a business, but rather, about connecting people.

Connection, Zuckerberg has repeated, was the ultimate mission. This singular vision was once employed by a company executive to hand-wave away human death for the “de facto good” of connecting people.

But Zuckerberg’s latest statement adds a new purpose, or wrinkle, to the Facebook mission: privacy.

“Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks,” Zuckerberg said.

Several experts see ulterior motives.

Kara Swisher, the executive editor of Recode, said that Facebook’s re-steering is probably an attempt to remain relevant with younger users. Online privacy, data shows, is a top concern for that demographic. But caring about privacy, Swisher said, “was never part of [Facebook’s] DNA, except perhaps as a throwaway line in a news release.”

Ashkan Soltani, former chief technology officer of the Federal Trade Commission, said that Zuckerberg’s ideas were obvious attempts to leverage privacy as a competitive edge.

“I strongly support consumer privacy when communicating online but this move is entirely a strategic play to use privacy as a competitive advantage and further lock-in Facebook as the dominant messaging platform,” Soltani said on Twitter.

As to the commitment to staying out of countries that violate human rights, Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford Law School’s Center for Internet and Society, pressed harder.

“I don’t know what standards they’re using to determine who are human rights abusers,” Pfefferkorn said in a phone interview. “If it’s the list of countries that the US has sanctioned, where they won’t allow exports, that’s a short list. But if you have every country that’s ever put dissidents in prison, then that starts some much harder questions.”

For instance, what will Facebook do if it wants to enter a country that, on paper, protects human rights, but in practice, utilizes oppressive laws against its citizens? Will Facebook preserve its new privacy model and forgo the market entirely? Or will it bend?

“We’ll see about that,” Pfefferkorn said in an earlier email. “[Zuckerberg] is answerable to shareholders and to the tyranny of the #1 rule: growth, growth, growth.”

Asked whether Facebook’s pivot will succeed, Pfefferkorn said the company has definitely made some important hires to help out. In the past year, Facebook brought aboard three critics and digital rights experts—one from EFF, one from New America’s Open Technology Institute, and another from Access Now—into lead policy roles. Further, Pfefferkorn said, Facebook has successfully pushed out enormous, privacy-forward projects before.

“They rolled out end-to-end encryption and made it happen for a billion people in WhatsApp,” Pfefferkorn said. “It’s not necessarily impossible.”

WhatsApp’s past is now Facebook’s future

In looking to the future, Zuckerberg first looks back.

To lend some authenticity to this new-and-improved private Facebook, Zuckerberg repeatedly invokes a previously-acquired company’s reputation to bolster Facebook’s own.

WhatsApp, Zuckerberg said, should be the model for the all new Facebook.

“We plan to build this [privacy-focused platform] the way we’ve developed WhatsApp: focus on the most fundamental and private use case—messaging—make it as secure as possible, and then build more ways for people to interact on top of that,” Zuckerberg said.

The secure messenger, which Facebook purchased in 2014 for $19 billion, is a privacy exemplar. It developed default end-to-end encryption for users in 2016 (under Facebook’s ownership), refuses to store keys that would grant access to users’ messages, and tries to limit user data collection as much as possible.

Still, several users believed that WhatsApp joining Facebook represented a death knell for user privacy. One month after the sale, WhatsApp’s co-founder Jan Koum tried to dispel any misinformation about WhatsApp’s compromised vision.

“If partnering with Facebook meant that we had to change our values, we wouldn’t have done it,” Koum wrote.

Four years after the sale, something changed.

Koum left Facebook in March 2018, reportedly troubled by Facebook’s approach to privacy and data collection. Koum’s departure followed that of his co-founder Brian Acton the year before.

In an exclusive interview with Forbes, Acton explained his decision to leave Facebook. It was, he said, very much about privacy.

“I sold my users’ privacy to a larger benefit,” Acton said. “I made a choice and a compromise. And I live with that every day.”

Strangely, in defending Facebook’s privacy record, Zuckerberg avoids mentioning a recent pro-encryption episode: last year, Facebook fought—and prevailed—against a US government request that reportedly sought to “break the encryption” in its Facebook Messenger app. Zuckerberg also neglects to mention Facebook’s successful roll-out of optional end-to-end encryption in Messenger.

Further, relying so heavily on WhatsApp as a symbol of privacy is tricky. After all, Facebook didn’t purchase the company because of its philosophy. Facebook purchased WhatsApp because it was a threat. 

Facebook’s history of missed promises

Zuckerberg’s statement promises users an entirely new Facebook, complete with end-to-end encryption, ephemeral messages and posts, less intrusive and less permanent data collection, and no data storage in countries that have abused human rights.

These are strong ideas. End-to-end encryption is a crucial security measure for protecting people’s private lives, and Facebook’s promise to refuse to store encryption keys only further buttresses that security. Ephemeral messages, posts, photos, and videos give users the opportunity to share their lives on their own terms. Refusing to put data in known human-rights-abusing regimes could represent a potentially significant market share sacrifice, giving Facebook a chance to prove its commitment to user privacy.

But Facebook’s promise-keeping record is far lighter than its promise-making record. In the past, whether Facebook promised a new product feature or better responsibility to its users, the company has repeatedly missed its own mark.

In April 2018, TechCrunch revealed that, as far back as 2010, Facebook deleted some of Zuckerberg’s private conversations and any record of his participation—retracting his sent messages from both his inbox and from the inboxes of his friends. The company also performed this deletion, which is unavailable to users, for other executives.

Following the news, Facebook announced a plan to give its users an “unsend” feature.

But nearly six months later, the company had failed to deliver on its promise. It wasn’t until February of this year that Facebook produced a half-measure: instead of giving users the ability to actually delete sent messages, as Facebook did for Zuckerberg, users could “unsend” a message in the Messenger app within 10 minutes of sending it.

Gizmodo labeled it a “bait-and-switch.”

In October 2016, ProPublica purchased an advertisement in Facebook’s “housing categories” that excluded groups of users who were potentially African-American, Asian American, or Hispanic. One civil rights lawyer called this exclusionary function “horrifying.”

Facebook quickly promised to improve its advertising platform by removing exclusionary options for housing, credit, and employment ads, and by rolling out better auto-detection technology to stop potentially discriminatory ads before they published.

One year later, in November 2017, ProPublica ran its experiment again. Discrimination, again, proved possible. The anti-discriminatory tools Facebook announced the year earlier caught nothing.

“Every single ad was approved within minutes,” the article said.

This time, Facebook shut the entire functionality down, according to a letter from Chief Operating Officer Sheryl Sandberg to the Congressional Black Caucus. (Facebook also announced the changes on its website.)

More recently, Facebook failed to deliver on a promise that users’ phone numbers would be protected from search. Today, through a strange workaround, users can still be “found” through the phone number that Facebook asked them to provide specifically for two-factor authentication.

Away from product changes, Facebook has repeatedly told users that it would commit itself to user safety, security, and privacy. The actual track record following those statements tells a different story, though.

In 2013, an Australian documentary filmmaker met with Facebook’s public policy and communications lead and warned him of the rising hate speech problem on Facebook’s platform in Myanmar. The country’s ultranationalist Buddhists were making false, inflammatory posts about the local Rohingya Muslim population, sometimes demanding violence against them. Riots had taken 80 people’s lives the year before, and thousands of Rohingya were forced into internment camps.

Facebook’s public policy and communications lead, Elliot Schrage, sent the Australian filmmaker, Aela Callan, down a dead end.

“He didn’t connect me to anyone inside Facebook who could deal with the actual problem,” Callan told Reuters.

By November 2017, the problem had exploded, with Myanmar torn and its government engaging in what the United States called “ethnic cleansing” against the Rohingya. In 2018, investigators from the United Nations placed blame on Facebook.

“I’m afraid that Facebook has now turned into a beast,” said one investigator.

During the years before, Facebook made no visible effort to fix the problem. By 2015, the company employed just two content moderators who spoke Burmese—the primary language in Myanmar. By mid-2018, the company’s content reporting tools were still not translated into Burmese, handicapping the population’s ability to protect itself online. Facebook had also not hired a single employee in Myanmar at that time.

In April 2018, Zuckerberg promised to do better. Four months later, Reuters discovered that hate speech still ran rampant on the platform and that hateful posts as far back as six years had not been removed.

The international crises continued.

In March 2018, The Guardian revealed that a European data analytics company had harvested the Facebook profiles of tens of millions of users. This was the Cambridge Analytica scandal, and, for the first time, it directly implicated Facebook in an international campaign to sway the US presidential election.

Buffeted on all sides, Facebook released … an ad campaign. Drenched in sentimentality and barren of culpability, a campaign commercial vaguely said that “something happened” on Facebook: “spam, clickbait, fake news, and data misuse.”

“That’s going to change,” the commercial promised. “From now on, Facebook will do more to keep you safe and protect your privacy.”

Here’s what happened since that ad aired in April 2018.

The New York Times revealed that, throughout the past 10 years, Facebook shared data with at least 60 device makers, including Apple, Samsung, Amazon, Microsoft, and BlackBerry. The New York Times also published an investigatory bombshell into Facebook’s corporate culture, showing that, time and again, Zuckerberg and Sandberg responded to corporate crises with obfuscation, deflection, and, in the case of one transparency-focused project, outright anger.

A British parliamentary committee released documents that showed how Facebook gave some companies, including Airbnb and Netflix, access to its platform in exchange for favors. (More documents released this year showed prior attempts by Facebook to sell user data.) Facebook’s Onavo app got kicked off the Apple App Store for gathering user data. Facebook also reportedly paid users as young as 13 years old to install the “Facebook Research” app on their own devices, an app intended strictly for Facebook employee use.

Oh, and Facebook suffered a data breach that potentially affected up to 50 million users.

While the substance of Zuckerberg’s promises could protect user privacy, the execution of those promises is still up in the air. It’s not that users don’t want what Zuckerberg is describing—it’s that they’re burnt out on him. How many times will they be forced to hear about another change of heart before Facebook actually changes for good?

Tomorrow’s Facebook

Changing the direction of a multibillion-dollar, international company is tough work, though several experts sound optimistic about Zuckerberg’s privacy roadmap. But just as many experts have depleted their faith in the company. If anything, public pressure on Facebook might be at its lowest—detractors have removed themselves from the platform entirely, and supporters will continue to dig deep into their own good will.

What Facebook does with this opportunity is entirely under its own control. Users around the world will be better off if the company decides that, this time, it’s serious about change. User privacy is worth the effort.

The post Facebook’s history betrays its privacy pivot appeared first on Malwarebytes Labs.

Categories: Techie Feeds

New research finds hospitals are easy targets for phishing attacks

Malwarebytes - Tue, 03/19/2019 - 15:00

New research from Brigham and Women’s Hospital in Boston finds hospital employees are extremely vulnerable to phishing attacks. The study highlights just how effective phishing remains as a tactic—the need for defense against and awareness of email scams is more critical than ever.

The research was a multi-center exercise that looked at results of phishing simulations at six anonymous healthcare facilities in the US. Research coordinators ran phishing simulations for close to seven years and analyzed click rates for more than 2.9 million simulated emails. Results revealed that 422,052 of the simulated emails (14.2 percent) were clicked, a rate of roughly one in seven.
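As a quick sanity check on those figures, the arithmetic can be reproduced directly. The total below is back-calculated from the reported 14.2 percent, since the study only says “more than 2.9 million”:

```python
# Back-of-the-envelope check of the study's click-rate figures.
clicked = 422_052      # simulated phishing emails that were clicked (from the study)
sent = 2_971_945       # approximate total, back-calculated from the 14.2% figure

rate = clicked / sent
print(f"click rate: {rate:.1%}")            # -> click rate: 14.2%
print(f"roughly 1 in {round(1 / rate)}")    # -> roughly 1 in 7
```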

Patient data at risk

Security professionals are acutely aware of the intense scrutiny placed on patient data and the regulatory requirements around HIPAA (Health Insurance Portability and Accountability Act). This new research on phishing in healthcare puts a spotlight on the vulnerability of this kind of data.

“Patient data, patient care, patient trust and financial stability may be on the line,” said study author William Gordon, MD, MBI, of the Brigham’s Division of General Internal Medicine and Primary Care. “Understanding susceptibility, but also what steps can be taken to mitigate it, are critical as cyberattacks continue to rise.”

Odds of clicks decreased with time

There was a positive finding in the study. Researchers noted that clicks on phishing emails went down with increasing campaigns. After institutions had run 10 or more phishing simulation campaigns, the odds of users clicking on fraudulent emails went down by more than one-third.
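Note that the study reports the decline in odds, not in raw click rate. As a quick illustration of how an odds ratio of roughly two-thirds is computed (the numbers below are hypothetical and are not the study’s raw data):

```python
# Hypothetical numbers illustrating an odds ratio (not the study's raw data).
def odds(clicks, total):
    """Odds = clicks : non-clicks, the quantity behind the study's odds ratios."""
    return clicks / (total - clicks)

early = odds(142, 1000)   # ~14.2% click rate in early campaigns
late = odds(99, 1000)     # ~9.9% click rate after 10+ campaigns (hypothetical)

odds_ratio = late / early
print(f"odds ratio: {odds_ratio:.2f}")  # -> odds ratio: 0.66, i.e. odds down about one-third
```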

The findings make the case for solid awareness efforts to educate about the dangers of phishing, said Gordon.

“Things get better over time with awareness, education, and training,” he said. “Our study suggests that while the risk is high, there is an opportunity to mitigate it.”

Healthcare industry struggles with breach rate

Chris Carmody, senior vice president of enterprise technology and services at the University of Pittsburgh Medical Center (UPMC) and president of Clinical Connect Health Information Exchange, noted in an interview with Reuters Health News that phishing is a challenge in an increasingly digital healthcare environment.

“This is definitely a problem in all industries where people rely on e-communications, especially email,” Carmody said in the interview. “And health care is no different. We see clinical users whose primary focus is on patient care, and we’re trying to do our best to help them develop the knowhow to know what to look for so they can identify phishing attempts and report them to us.”

Carmody estimates that his security group at UPMC, which also runs phishing simulations, gets about 7,500 suspect emails forwarded to it each month, with about 12.5 percent of them being actually malicious.

But any number puts a healthcare facility at risk, as these kinds of institutions are particularly vulnerable to breach. A separate report from Beazley Breach Response finds that healthcare organizations suffered the highest number of data breaches in 2018 across any sector of the US economy. Healthcare institutions have a 41 percent reported breach rate, the highest of any industry.

Other figures from ratings firm SecurityScorecard find the healthcare industry is one of the lowest ranked industries when it comes to security practices. The report, titled SecurityScorecard 2018 Healthcare Report: A Pulse on The Healthcare Industry’s Cybersecurity Risk, looked at data from 1,200 healthcare entities and ranked healthcare 15th out of 17 industries for overall cybersecurity posture.

The SecurityScorecard report noted the healthcare industry is one of the lowest performing industries in terms of endpoint security, posing a threat to patient data and potentially patient lives. In addition, 60 percent of the most common cybersecurity issues in the healthcare industry relate to poor patching cadence.

Healthcare phishing in the headlines

Healthcare phishing attempts that devastate facilities and lead to patient data leaks regularly make news headlines. In December 2018, an employee of Memorial Hospital at Gulfport, Mississippi was tricked by a phishing scheme and the result was the breached data of 30,000 patients.

The breach was discovered when investigators noticed an unauthorized party had gained access to an employee email account earlier in the month. Among the patient data leaked were emails, names, dates of birth, health data, and information about services patients had received at MHG. Social Security numbers were also leaked on some patients.

Phishing on the rise all over

Massive malware campaigns like Emotet and TrickBot have pushed phishing levels higher this year in many industries. Kaspersky Lab’s most recent Spam and phishing in 2018 report finds the number of phishing attacks in 2018 more than doubled from the previous year.

Research from Sophos finds that 45 percent of UK businesses were hit by phishing attacks between 2016 and 2018. The study also revealed 54 percent had identified instances of employees replying to unsolicited emails or clicking the links in them.

The Malwarebytes 2019 State of Malware report finds all sectors are impacted by the kind of malware served up in phishing emails. Trojans like Emotet and TrickBot are particularly problematic in education, manufacturing, and retail. While healthcare fared poorly in the Brigham and Women’s study, every vertical is plagued by phishing.

How can business defend against phishing attacks?

Of all of the cybersecurity risks to organizations, the human element is always the toughest to mitigate. But, as the healthcare phishing study shows, user awareness does have a positive impact on click rates—the more campaigns were launched, the fewer employees who fell prey to fake emails.

There are plenty of free awareness and anti-phishing resources available that businesses can tap for training internally. For example, our anti-phishing guide offers suggestions and awareness tips for both employees and customers. And Google has an anti-phishing test you can access online to familiarize users with common phishing techniques. Of course, there are also many companies that offer training products for purchase.

However businesses choose to train employees, it’s important to have regular access to information and tools that promote awareness of evolving phishing techniques. In the healthcare industry, it’s not just about the bottom line—it could actually save lives.

The post New research finds hospitals are easy targets for phishing attacks appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (March 11 – 17)

Malwarebytes - Mon, 03/18/2019 - 14:57

Last week on Malwarebytes Labs, we looked at the Lazarus group in our series about APT groups, we discussed the introduction of Payment Service Directive 2 (PSD2) in the EU, we tackled Google’s Nest fiasco, and the launch of Mozilla’s Firefox Send. In addition, we gave you an overview of the pervasive threat, Emotet, and we discussed reputation management in the age of cyberattacks against businesses.

Other security news
  • A new phishing campaign targeting mainly iOS users is asking them to log in with their Facebook account and give away their credentials. The technique the threat actors are using can easily be ported over to scam Android users. (Source: SC Magazine)
  • Iranian hackers have stolen between six and 10 terabytes of data from Citrix. The hack was focused on assets related to NASA, aerospace contracts, Saudi Arabia’s state oil company, and the FBI. (Source: The Inquirer)
  • Up to 150 million users might have downloaded and installed an Android app on their phones that contained a new strain of adware named SimBad. The malicious advertising kit was found inside 210 Android apps that had been uploaded on the official Google Play Store. (Source: ZDNet)
  • The popularity of the Apex Legends game and its absence on the Android Play store have attracted the attention of many malware writers who exploited this opportunity to spread malicious versions for Android. (Source: Security Affairs)
  • A new insidious malware dubbed GlitchPOS, bent on siphoning credit card numbers from point-of-sale (PoS) systems, has recently been spotted on a crimeware forum. GlitchPOS joins other recently developed malware targeting the retail and hospitality space. (Source: ThreatPost)
  • A partial Facebook outage affecting users around the world and stretching beyond 14 hours is believed to be the biggest interruption ever suffered by the social network. (Source: CNN)
    Telegram reported it received 3 million signups during this Facebook outage. (Source: CNet)
  • A 21-year-old Australian man was arrested after earning over $200,000 from stolen Spotify and Netflix accounts. Allegedly, he sold the stolen accounts through an “account generator” website. (Source: TechSpot)
  • A code execution vulnerability in WinRAR (CVE-2018-20250) generated over a hundred distinct exploits in the first week since its disclosure, and the number of exploits keeps on swelling. (Source: BleepingComputer)
  • A new flaw in the content management software (CMS) WordPress has been discovered that could potentially lead to remote code execution attacks. Users are advised to update to the latest version, which was at 5.1.1 at the time of writing. (Source: The Hacker News)
  • The Chinese authorities are collecting DNA as a means to track their people. And it seems they got unlikely corporate and academic help from the United States. (Source: The New York Times)

Stay safe, everyone!

The post A week in security (March 11 – 17) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Reputation management in the age of cyberattacks against businesses

Malwarebytes - Fri, 03/15/2019 - 16:15

Avid readers of the Malwarebytes Labs blog would know that we strive to prepare businesses of all sizes for the inevitability of cyberattacks. From effectively training employees about basic cybersecurity hygiene to guiding organizations in formulating an incident response (IR) program, a cybersecurity policy, and introducing an intentional culture of security, we aim to promote proactive prevention.

However, there are times when organizations need to be reactive. One of these is business reputation management (BRM), the practice of ensuring that organizations always put their best foot forward, online and offline, by constantly monitoring and addressing the information and communications that shape public perception. This is a process executives must not miss out on, especially when the company finds itself at the center of a media storm after disclosing a cybersecurity incident that has potentially affected millions of customers.

In this post, we look at why companies of all sizes should have such a system in place by having a refresher on what forms a reputation and how much consumer trust and loyalty have evolved. We’ll also show you what proactive and reactive BRM would look like before, during, and after a cybersecurity fallout.

Reputation, like beauty, is in the eye of the beholder

A company’s reputation—how clients, investors, employees, suppliers, and partners perceive it—is its most valuable, intangible asset. Gideon Spanier, Global Head of Media at Campaign, has said in his Raconteur piece that it is built on three things: what you say, what you do, and what others say about you when you’re not in the room. Because of the highly digitized and networked world we live in, the walls of this room have become imaginary, with everyone now hearing what you have to say.

Looking up organizations and brands online has become part of a consumer’s decision-making process, so having a strong and positive online presence is more important than ever. Yet with only 15 percent of executives addressing the need to manage their business’s reputation, there’s clearly work to be done.

Consumer trust and loyalty evolved

Brand trust has grown up. Before, we relied on word of mouth—commendations and condemnations alike—from friends and family, the positivity or the negativity of our own and others’ experiences about a product or service, and endorsements from someone we look up to (like celebrities and athletes). Nowadays, many of us tend to believe what strangers say about a brand, product, or service; read the news about what is going on with institutions; and follow social media chatter about them.

The relationship between consumer trust and brand reputation has changed as well. While mainstream names are still favored over new or unfamiliar brands (even if they offer a similar product or service at a cheaper cost), connected consumers have learned the value of their data. Not only do they want their needs met, but they also expect companies to take care of them—and by extension, the information they choose to give away—so they can feel safe and happy.

Of course, with trust comes loyalty. Weber Shandwick, a global PR firm, found in its report, The Company behind the Brand: In Reputation We Trust [PDF], that consumers in the UK tend to associate themselves with a product; if the company behind that product falls short of what is expected of it, they bail in search of a better one, usually offered by a competing brand. It’s not hard to imagine the same reaction from consumers in the United States in the context of customer data stolen in a company-wide breach.

Business reputation management in action

Finding their business in the crosshairs of threat actors is no longer a remote possibility, but something executives should always be prepared for. The good news is that protecting your business reputation from these risks is far from impossible.

In this section, we outline what businesses can do in three phases—before, during, and after an attack—illustrated with a real-world scenario, to give organizations an idea of how they can formulate a game plan for managing their reputation now or in the future. Note that we have framed our pointers in the context of cybersecurity and privacy incidents.

Before an attack: Be prepared for a breach
  • Identify and secure your company’s most sensitive data. This includes intellectual property (IP) and your customers’ personally identifiable information (PII).
  • Back up your data. We have a practical guide for that.
  • Patch everything. It may take a while, and it may cause some disruption, but it’ll be worth it.
  • Educate employees on basic data security measures, social engineering tactics, and how to identify red flags of a potential breach.
  • Put together a team of incident responders. That is, if the company has decided to handle incidents in-house. If this is the case:
    • Provide them the tools they will need for the job.
    • Train them on how to use these tools and on established processes of proper evidence collection and storage.
  • Create a data breach response plan. This is a set of actions an organization takes to quickly and effectively address a security or privacy incident. Sadly, according to PwC’s 2018 Global Economic Crime and Fraud Survey, only 30 percent of companies have this plan in place.
    • Once created, make sure that all internal stakeholders—your employees, executives, business units, investors, and B2B contacts—are informed about this plan, so they know what to do and what to expect.
  • Learn the security breach notification laws in the state your business is based in. Make sure that your company complies with the legislation.
  • Establish an alert and follow-through process. This includes maintaining a communication channel that is accessible 24/7. In the event of an attack, internal stakeholders must be informed first.
  • On a similar note, create a notification process. Involve relevant key departments, such as marketing and legal, in coming up with what to say to customers (if the breach involves PII theft), regulators, and law enforcement, and how to best notify them.
  • Depending on the nature of your company and the potential assets that may be affected by a breach, prepare a list of possible special services your company can offer to affected clients. For example, if your company stores credit card information, you can provide identity protection to clients, with a contact number they can call to take advantage of the service. This was what Home Depot did when it was breached in 2014.

Read: How to browse the Internet safely at work

During an attack: Be strategic
  • Keep internal stakeholders updated on developments and steps your company has taken to mitigate and remedy the severity of the situation. Keep phone lines open, but it would be more efficient to send periodic email updates. Create a timeline of events as you go along.
  • Identify and document the following information and evidence as much as you can, as these are needed when the time comes to notify clients and the public about the breach:
    • Compromised systems, assets, and networks
    • Patient zero, or how the breach happened
    • Information in affected machines that has been disclosed, taken, deleted, or corrupted.
  • If your company has a blog or a page where you can post company news, draft up an account of the events from start to finish, along with what you plan to do in the weeks following the breach. Be transparent and effective. This is a good opportunity to show clients that the company is not just talking the talk but also walking the walk. The Chief Marketing Officer (CMO) should take the lead on this.

After an attack: Be excellent to your stakeholders
  • Notify your clients and other entities that may have been affected by the breach.
    • Put out the company news or blog post the company has drafted about the cybersecurity incident.
    • Send out breach notifications via email, linking back to the blog, and social media.
  • Prepare to receive questions from clients and anyone who is interested in learning more about what happened. Expect to have uncomfortable conversations.
  • Offer additional services to your clients, which you have already thought out and prepared for in the first phase of this BRM exercise.
  • Continue accepting and addressing concerns and questions from clients for an extended period after the incident.
  • Implement new processes and adopt new products based on post-incident discussions to further reduce the likelihood of future breaches.
  • Rejuvenate stakeholders’ confidence and trust by focusing on breach preparedness, containment, and mitigation strategies as proof of the company’s commitment to its clients. This can turn the stigma of data breaches on its head. Remember that a breach can happen to any company in any industry. How the company acted before, during, and after the incident is what will be remembered. So use that to your advantage.
  • Audit the information your company collects and stores to see if you have data that is not necessary to fulfill your product and service obligations to clients. The logic is simple: the less data you keep about customers, the less data is at risk. Make sure that all your stakeholders, especially your customers, know which data you will no longer be collecting and storing.
  • Recognize the hard work of your employees and reward them for it. Yes, they’re your stakeholders, too, and shouldn’t be forgotten, especially in the aftermath of a cybersecurity incident.

Business reputation management is the new black

Indeed, businesses remain a favorite target of today’s threat actors and nation states. It’s the new normal at this point—something that many organizations still choose to deny.

Knowing how to manage your business’s reputation is seen as a competitive advantage. Sure, it’s one thing to know how to recover from a cybersecurity incident. But it’s quite another to know what to do to keep the brand’s image intact amidst the negative attention and what to say to those who have been affected by the attack—your stakeholders—and to the public at large.

The post Reputation management in the age of cyberattacks against businesses appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Mozilla launches Firefox Send for private file sharing

Malwarebytes - Thu, 03/14/2019 - 17:37

Mozilla looks to reclaim some ground from the all-powerful Chrome with a new way to send and receive files securely from inside the browser. Firefox Send first emerged in 2017, promising an easy way to send documents without fuss. The training wheels have now come off, and Send is ready for prime time. Will it catch on with the masses, or will only a small, niche group use it to play document tennis?

How does it work?

Firefox Send allows for files up to 1GB to be sent to others via any web browser (2.5GB if you sign in with a Firefox account). The files are encrypted after a key is generated, at which point a URL is created containing said key. You send this URL to the recipient, who is able to then download and access the file securely. Mozilla can’t access the key, as the JavaScript code powering things only runs locally.
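The key-in-the-link design can be sketched as follows. This is an illustrative Python sketch, not Send’s actual code (the real service encrypts in the browser with the Web Crypto API, and send.example is a made-up domain); it shows why the server never sees the key: everything after the # in a URL is a fragment, which browsers do not transmit in HTTP requests.

```python
import base64
import secrets
from urllib.parse import urlsplit

def make_share_url(file_id: str):
    # 128-bit key generated client-side; the real Send encrypts with AES-GCM.
    key = secrets.token_bytes(16)
    fragment = base64.urlsafe_b64encode(key).rstrip(b"=").decode()
    # Everything after '#' is a URL fragment: browsers never include it in
    # HTTP requests, so the host serving the file never sees the key.
    return key, f"https://send.example/download/{file_id}#{fragment}"

def key_from_url(url: str) -> bytes:
    fragment = urlsplit(url).fragment
    padding = "=" * (-len(fragment) % 4)  # restore stripped base64 padding
    return base64.urlsafe_b64decode(fragment + padding)

key, url = make_share_url("abc123")
assert key_from_url(url) == key  # the recipient recovers the same key locally
```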

Before sending, a number of security settings come into play. You can set the link expiration to include number of downloads, from one to 200, or number of days the link is live (up to seven). Passwords are also available for additional security.
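A minimal sketch of how expiry rules like these might be enforced (the class and field names are made up for illustration; this is not Send’s server code):

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ShareLink:
    max_downloads: int                 # Send's UI allows 1 to 200
    max_age_days: int                  # up to 7 days
    created: float = field(default_factory=time.time)
    downloads: int = 0

    def usable(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        within_age = now - self.created < self.max_age_days * 86_400
        within_count = self.downloads < self.max_downloads
        return within_age and within_count

link = ShareLink(max_downloads=1, max_age_days=7)
assert link.usable()
link.downloads += 1
assert not link.usable()   # a one-download link is spent after the first download
```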

It’s not for everyone

The process isn’t 100 percent anonymous, as per the Send privacy page:

IP addresses: We receive IP addresses of downloaders and uploaders as part of our standard server logs. These are retained for 90 days, and for that period, may be connected to activity of a file’s download URL. Although we develop our services in ways that minimize identification, you should know that it may be possible to correlate the IP address of a Send user to the IP address of other Mozilla services with accounts; and if there is a match, this could identify the account email address.

Of course, there may be even less anonymity if you use the service while signed into a Firefox account to make use of the greater send allowance of 2.5GB.

As a result, this might not be something you wish to use if absolute anonymity is your primary concern.

Who is likely to make use of this?

Send is for situations where you need to get an important file to someone but:

  1. The recipient isn’t massively tech-savvy. If you’re dealing with applications involving a drip feed of documents over time, this can get messy. Eventually, the person at the other end will have had enough of multiple AES-256 encrypted zip files hosted on Box where the password never seems to work, or they don’t have the right zip tool to extract the file. Send will simplify that process.
  2. The person at the other end is tech-savvy. However, they’re not necessarily aware that sending bank details or passport photos in plaintext emails is a bad idea.

A Mozilla project manager mentioned issues involving visa-related documents in the cloud, and this is definitely where a service like Send can flourish. Multiple uploads over time usually end up in a game of “hunt the files.” Did you delete everything? Maybe you should leave some of it online in case a problem arises? Are the files really gone if you delete them all, or is it as simple as flipping a “Whoops, didn’t mean it” switch and watching them all come back?

These are real-world, practical problems that people run into on a daily basis. The duct tape, multiple service/program approach works up to a point—and then it doesn’t. Firefox Send is perhaps a bit niche, but there’s nothing wrong with that. Not everyone is a fan of leaving important documents scattered across Google Drive or Dropbox, and this is a handy alternative. We’ll have to see what impact this product has long-term, but having more privacy options available is never a bad thing.

The post Mozilla launches Firefox Send for private file sharing appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Emotet revisited: pervasive threat still a danger to businesses

Malwarebytes - Thu, 03/14/2019 - 15:00

One of the most common and pervasive threats for businesses today is Emotet, a banking Trojan turned downloader that has been on our list of top 10 detections for many months in a row. Emotet, which Malwarebytes detects as Trojan.Emotet, has been leveled at consumers and organizations across the globe, fooling users into infecting endpoints through phishing emails, and then spreading laterally through networks using stolen NSA exploits. Its modular, polymorphic form, and ability to drop multiple, changing payloads have made Emotet a thorn in the side of cybersecurity researchers and IT teams alike.

Emotet first appeared on the scene as a banking Trojan, but its effective combination of persistence and network propagation has turned it into a popular infection mechanism for other forms of malware, such as TrickBot and Ryuk ransomware. It has also earned a reputation as one of the hardest-to-remediate infections once it has infiltrated an organization’s network.

Emotet detections March 12, 2018 – February 23, 2019

In July 2018, the US Department of Homeland Security issued a Technical Alert through CISA (the Cybersecurity and Infrastructure Security Agency) about Emotet, warning that:

“Emotet continues to be among the most costly and destructive malware affecting SLTT governments. Its worm-like features result in rapidly spreading network-wide infection, which are difficult to combat. Emotet infections have cost SLTT governments up to $1 million per incident to remediate.”

From banking Trojan to botnet

Emotet started out in 2014 as an information-stealing banking Trojan that scoured sensitive financial information from infected systems (which is the reason why Malwarebytes detects some components as Spyware.Emotet). However, over time Emotet and its business model evolved, switching from a singular threat leveled at specific targets to a botnet that distributes multiple malware payloads to industry verticals ranging from governments to schools.

Emotet was designed to be modular, with each module having a designated task. One of its modules is a Trojan downloader that downloads and runs additional malware. At first, Emotet started delivering other banking Trojans on the side. However, its modular design made it easier for its authors—a group called Mealybug—to adapt the malware or swap functionality between variants. Later versions began dropping newer and more sophisticated payloads that held files for ransom, stole personally identifiable information (PII), spammed other users with phishing emails, and even cleaned out cryptocurrency wallets. All of these sidekicks were happy and eager to make use of the stubborn nature of this threat.

Infection mechanism

We have discussed some of the structure and flow of Emotet’s infection vectors in detail here and here by decoding an example. What most Emotet variants have in common is that the initial infection mechanism is malspam. At first, infections were initiated from JavaScript files attached to emails; later (and still true today), it was via infected Word documents that downloaded and executed the payload.

A considerable portion of Emotet malspam is generated by the malware’s own spam module that sends out malicious emails to the contacts it finds on an infected system. This makes the emails appear as though they’re coming from a known sender. Recipients of email from a known contact are more likely to open the attachment and become the next victim—a classic social engineering technique.

Besides spamming other endpoints, Emotet also propagates through EternalBlue, the notorious SMB exploit stolen from the NSA and released by the Shadow Brokers group. This functionality allows the infection to spread laterally across a network of unpatched systems, making it even more dangerous to businesses with hundreds or thousands of linked endpoints.

Difficult to detect and remove

Emotet has several methods for maintaining persistence, including auto-start registry keys and services, and it uses modular Dynamic Link Libraries (DLLs) to continuously evolve. Because Emotet is polymorphic and modular, it can evade typical signature-based detection.

In fact, not only is Emotet difficult to detect, but also to remediate.

A major factor that frustrates remediation is the aforementioned lateral movement via EternalBlue. This particular exploit requires that admins follow a strict policy: isolate infected endpoints from the network, patch, disable administrative shares, and remove the Trojan before reconnecting to the network—otherwise, cleaned endpoints will simply be re-infected over and over by their infected peers.

Add to that mix the ongoing development of new capabilities, including the ability to be VM-aware, evade spam filters, or uninstall security programs, and you’ll begin to understand why Emotet is every network administrator’s worst nightmare.

Recommended remediation steps

An effective, though time-consuming method for disinfecting networked systems has been established. The recommended steps for remediation are as follows:

  • Identify the infected systems by looking for Indicators of Compromise (IOCs)
  • Disconnect the infected endpoints from the network. Treat systems where you have even the slightest doubt as infected.
  • Patch the system for EternalBlue. Patches for many Windows versions can be found through this Microsoft Security Bulletin about MS17-010.
  • Disable administrative shares, because Emotet also spreads itself over the network through default admin shares. TrickBot, one of Emotet’s trusty sidekicks, also uses the Admin$ shares once it has brute forced the local administrator password. A file share server has an IPC$ share that TrickBot queries to get a list of all endpoints that connect to it.
  • Scan the system and clean the Emotet infection.
  • Change account credentials, including all local and domain administrator passwords, as well as passwords for email accounts to stop the system from being accessible to the Trojan.

Prevention

Obviously, it’s preferable for businesses to avoid Emotet infections in the first place, as remediation is often costly and time-consuming. Here are some things you can do to prevent getting infected with Emotet:

  • Educate users: Make sure end users are aware of the dangers of Emotet and know how to recognize malspam—its primary infection vector. Train users on how to detect phishing attempts, especially those that are spoofed or more sophisticated than, say, the Nigerian Prince.
  • Update software regularly: Applying the latest updates and patches reduces the chances of Emotet infections spreading laterally through networks via EternalBlue vulnerabilities. If not already implemented, consider automating those updates.
  • Limit administrative shares: to the absolute minimum for Emotet damage control.
  • Use safe passwords: Yes, it really is that important to use unique, strong passwords for each online account. Investigate, adopt, and roll out a single password manager for all of the organization’s users.
  • Back up files: Some variants of Emotet also download ransomware, which can hold now-encrypted files hostage, rendering them useless unless a ransom is paid. Since we and the FBI recommend never paying the ransom—as it simply finances future attacks and paints a target on an organization’s back—having recent and easy-to-deploy backups is always a good idea.

IOCs

Persistence

C:\Windows\System32\randomnumber\
C:\Windows\System32\tasks\randomname
C:\Windows\[randomname]
C:\users\[myusers]\appdata\roaming\[random]
%appdata%\Microsoft\Windows\Start Menu\Programs\Startup\[Randomname].LNK (a shortcut file in the Startup folder)

Registry keys

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\{Random Hexadecimal Numbers}
HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\{Random Names} with value c:\users\admin\appdata\roaming\{Random}\{Legitimate Filename}.exe

Filename examples

PlayingonaHash.exe
certapp.exe
CleanToast.exe
CciAllow.exe
RulerRuler.exe
connectmrm.exe

Strings

C:\email.doc
C:\123\email.doc
C:\123\email.docx
C:\a\foobar.bmp
X:\Symbols\a
C:\loaddll.exe
C:\email.htm
C:\take_screenshot.ps1
C:\a\foobar.gif
C:\a\foobar.doc

Subject Filters

“UPS Ship Notification, Tracking Number”
“UPS Express Domestic”
“Tracking Number *”

Trick to check whether a UPS tracking number is real: a legitimate UPS tracking number contains 18 alphanumeric characters, starts with “1Z,” and ends with a check digit.

A number matching this format may still be false, but one that doesn’t match is certainly not real.
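That format rule translates into a quick filter. The sketch below checks only the “1Z” shape and length, not UPS’s proprietary check-digit algorithm, so it can rule a number out but never confirm one as genuine:

```python
import re

# 18 characters: '1Z' + 15 alphanumerics + a trailing check digit.
UPS_1Z = re.compile(r"^1Z[0-9A-Z]{15}[0-9]$")

def plausible_ups_number(s):
    """True if s merely *looks* like a 1Z-style UPS tracking number."""
    return bool(UPS_1Z.fullmatch(s.strip().upper()))

assert plausible_ups_number("1Z12345E0205271688")   # well-formed sample
assert not plausible_ups_number("UPS-TRACK-000")    # malspam-style junk
```

A filter like this is useful for spotting the obviously fake tracking numbers that malspam subject lines often carry.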

The post Emotet revisited: pervasive threat still a danger to businesses appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Google’s Nest fiasco harms user trust and invades their privacy

Malwarebytes - Wed, 03/13/2019 - 16:30

Technology companies, lawmakers, privacy advocates, and everyday consumers likely disagree about exactly how a company should go about collecting user data. But, following a trust-shattering move by Google last month regarding its Nest Secure product, consensus on one issue has emerged: Companies shouldn’t ship products that can surreptitiously spy on users.

Failing to disclose that a product can collect information from users in ways they couldn’t have reasonably expected is bad form. It invades privacy, breaks trust, and robs consumers of the ability to make informed choices.

While collecting data on users is nearly inevitable in today’s corporate world, secret, undisclosed, or unpredictable data collection—or data collection abilities—is another problem.

A smart-home speaker shouldn’t be secretly hiding a video camera. A secure messaging platform shouldn’t have a government-operated backdoor. And a home security hub that controls an alarm, keypad, and motion detector shouldn’t include a clandestine microphone feature—especially one that was never announced to customers.

And yet, that is precisely what Google’s home security product includes.

Google fumbles once again

Last month, Google announced that its Nest Secure would be updated to work with Google Assistant software. Following the update, users could simply utter “Hey Google” to access voice controls on the product line-up’s “Nest Guard” device.

The main problem, though, is that Google never told users that its product had an internal microphone to begin with. Nowhere inside the Nest Guard’s hardware specs, or in its marketing materials, could users find evidence of an installed microphone.

When Business Insider broke the news, Google fumbled ownership of the problem: “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” a Google spokesperson said. “That was an error on our part.”

Customers, academics, and privacy advocates balked at this explanation.

“This is deliberately misleading and lying to your customers about your product,” wrote Eva Galperin, director of cybersecurity at Electronic Frontier Foundation.

“Oops! We neglected to mention we’re recording everything you do while fronting as a security device,” wrote Scott Galloway, professor of marketing at the New York University Stern School of Business.

The Electronic Privacy Information Center (EPIC) spoke in harsher terms: Google’s disclosure failure wasn’t just bad corporate behavior, it was downright criminal.

“It is a federal crime to intercept private communications or to plant a listening device in a private residence,” EPIC said in a statement. In a letter, the organization urged the Federal Trade Commission to take “enforcement action” against Google, with the hope of eventually separating Nest from its parent. (Google purchased Nest in 2014 for $3.2 billion.)

Days later, the US government stepped in. The Senate Select Committee on Commerce sent a letter to Google CEO Sundar Pichai, demanding answers about the company’s disclosure failure. Whether Google was actually recording voice data didn’t matter, the senators said, because hackers could still have taken advantage of the microphone’s capability.

“As consumer technology becomes ever more advanced, it is essential that consumers know the capabilities of the devices they are bringing into their homes so they can make informed choices,” the letter said.

This isn’t just about user data

Collecting user data is essential to today’s technology companies. It powers Yelp recommendations based on a user’s location, product recommendations based on an Amazon user’s prior purchases, and search results based on a Google user’s history. Collecting user data also helps companies find bugs, patch software, and retool their products to their users’ needs.

But some of that data collection is visible to the user. And when it isn’t, it can at least be learned by savvy consumers who research privacy policies, read tech specs, and compare similar products. Other home security devices, for example, advertise the ability to trigger alarms at the sound of broken windows—a functionality that demands a working microphone.

Google’s failure to disclose its microphone prevented even the most privacy-conscious consumers from knowing what they were getting in the box. It is nearly the exact opposite approach that rival home speaker maker Sonos took when it installed a microphone in its own device.

Sonos does it better

In 2017, Sonos revealed that its newest line of products would eventually integrate with voice-controlled smart assistants. The company opted for transparency.

Sonos updated its privacy policy and published a blog about the update, telling users: “The most important thing for you to know is that Sonos does not keep recordings of your voice data.” Further, Sonos eventually designed its speaker so that, if an internal microphone is turned on, so is a small LED light on the device’s control panel. These two functions cannot be separated—the LED light and the internal microphone are hardwired together. If one receives power, so does the other.

While this function has upset some Sonos users who want to turn off the microphone light, the company hasn’t budged.

A Sonos spokesperson said the company values its customers’ privacy because it understands that people are bringing Sonos products into their homes. Adding a voice assistant to those products, the spokesperson said, resulted in Sonos taking a transparent and plain-spoken approach.

Now compare this approach to Google’s.

Consumers purchased a product that they trusted—quite ironically—with the security of their homes, only to realize that, by purchasing the product itself, their personal lives could have become less secure. This isn’t just a company failing to disclose the truth about its products. It’s a company failing to respect the privacy of its users.

A microphone in a home security product may well be a useful feature that many consumers will not only endure but embrace. In fact, internal microphones are available in many competitor products today, proving their popularity. But a secret microphone installed without user knowledge instantly erodes trust.

As we showed in our recent data privacy report, users care a great deal about protecting their personal information online and take many steps to secure it. To win over their trust, businesses need to responsibly disclose features included in their services and products—especially those that impact the security and privacy of their customers’ lives. Transparency is key to establishing and maintaining trust online.

The post Google’s Nest fiasco harms user trust and invades their privacy appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Explained: Payment Service Directive 2 (PSD2)

Malwarebytes - Wed, 03/13/2019 - 15:00

Payment Service Directive 2 (PSD2) is the implementation of a European guideline designed to further harmonize money transfers inside the EU. The ultimate goal of this directive is to simplify payments across borders so that it’s as easy as transferring money within the same country. Since the EU was set up to diminish the borders between its member states, this makes sense. The implementation offers a legal framework for all payments made within the EU.

After the introduction of PSD in 2009, and with the Single Euro Payments Area (SEPA) migration completed, the EU introduced PSD2 on January 13, 2018. However, this new harmonizing plan came with a catch: new online payment and account information services provided by third parties, such as financial institutions, which need access to the bank accounts of EU users. While they must first obtain users’ consent to do so, we all know consent is not always given freely or with a full understanding of the implications. Still, it must be noted: Nothing will change if you don’t give your consent, and you are not obliged to do so.

Which providers

Before these institutions are allowed to ask for consent, they must be authorized and registered under PSD2. PSD2 sets out the information requirements both for applying as a payment institution and for registering as an account information service provider (AISP). The European Banking Authority (EBA) has published guidelines on the information to be provided by applicants intending to obtain authorization as payment and electronic money institutions, as well as to register as an AISP.

From the pages of the Dutch National Bank (De Nederlandsche Bank):

“In this register are also (foreign) Account information service providers based upon the European Passport. These Account information service providers are supervised by the home supervisor. Account information service providers from other countries of the European Economic Area (EEA) could issue Account information services based upon the European Passport through an Agent in the Netherlands. DNB registers these agents of foreign Account information service providers without obligation to register. The registration of these agents are an extra service to the public. However the possibility may exist that the registration of incoming agents differs from the registration of the home supervisor.”

So, an AISP can obtain a European Passport to conduct its services across the entire EU, while only being obligated to register in its country of origin. And even though the European Union is supposed to be equal across the board, the reality is, in some countries, it’s easier to worm yourself into a comfortable position than in others.

Access to bank account = more services

Wait a minute. What exactly does all of this mean? Third parties often live under a separate set of rules and are not always subject to the same scrutiny. (Case in point: AISPs can move to register in “easier” countries and get away with much more.) So while that offers an AISP better flexibility to provide smooth transfer services, it would also allow those payment institutions to offer new services based on their view into your bank account. That includes a wealth of information, such as:

  • How much money is coming into and out of the account each month
  • Spending habits: what you spend money on and where you spend it
  • Payment habits: Are you paying bills way ahead of deadline or tardy?

AISPs can check your balance, request your bank to initiate a payment (transfer) on your behalf, or create a comprehensive overview of your balances for you.

Simple example: There is an AISP service that keeps tabs on your payments and income and shows you how much you can spend freely until your next payment is expected to come in. This is useful information to have when you are wondering if you can make your money last until the end of the month if you buy that dress.

However, imagine this information in the hands of a commercial party that wants to sell you something. They would be able to figure out how much you are spending with their competitors and make you a better offer. Or pepper you with ads tailored to your spending habits. Is that a problem? Yes, because why did you choose your current provider in the first place? Better service or product? Customer friendliness? Exactly what you needed? In short, the competitor might use your information to help themselves, and not necessarily you.

What is worrying about PSD2?

Consumer consent is a good thing. But if we can learn from history, as we should, it will not be too long before consumers are being tricked into clicking a big green button that gives a less trustworthy provider access to their banking information. Maybe they don’t even have to click it themselves. We can imagine Man-in-the-Middle attacks that sign you up for such a service.

Any offer of a service that requires your consent to access banking information should be carefully examined. How will AISPs that work for free make money? Likely by advertising to you or selling your data.

And then there is the possibility of “soft extortion”: a mortgage provider, say, that doesn’t want to do business with you unless you provide access to your banking information, or that will offer you a better deal if you do.

In all of these scenarios, consent was given in one way or another, but is the deal really all that beneficial for the customer?

What we’d like to see

Some of the points below may already be under consideration in some or all of the EU member states, but we think they offer a good framework for the implementation of these new services.

  • We only want AISPs that work for the consumer, not for commercial third parties. In fairness, the consumer should pay the AISP for its services, so that the abuse or misuse familiar from free-product business models does not take place.
  • AISPs that want to do business in a country should be registered in that country, as well as in other countries where they want to do business.
  • AISPs should be constantly monitored, with the option to revoke their license if they misbehave. Note that GDPR already requires companies to delete data after services have stopped or when consent is withdrawn.
  • Access to banking information should not be used as a requirement for unrelated business models, or be traded for a discount on certain products.
  • GDPR regulations should be applied with extra care in this sensitive area. Some data- and privacy-related bodies have already expressed concerns about the discrepancies between GDPR and PSD2, even though they come from the same source.
  • An obligatory double-check by the AISP, through another medium, that the customer has signed up of their own free will, with a cooling-off period during which they can withdraw the permission.

Would anyone consent to PSD2 access?

For the moment, it’s hard to imagine a reason for allowing another financial institution or other business access to personal banking information. But despite the obvious red flags, it’s possible that people might be convinced with discounts, denials of service, or appealing benefits to give their consent.

And some of our wishes could very well be implemented as some kinks are still being ironed out. The Dutch Data Protection Authority (DPA) has pointed out that there are discrepancies between GDPR and PSD2 and expressed their concern about them. The DPA acknowledges this in their recommendation on the Implementation Act, and most recently in the Implementation Decree.

In both recommendations, the DPA concludes, in essence, that the GDPR has not been adequately taken into consideration in the course of the Dutch implementation of PSD2. The same may happen in other EU member states. Of course, the financial world tells us that licenses will not be issued to just anybody, but the public has not entirely forgotten the global 2008 banking crisis.

On top of that, there are major lawsuits in progress against insurance companies and other companies that sold products constructed in a way the general public could not possibly understand. These products are now considered misleading, and some even fraudulent. To put it mildly, the trust of the European public in financials is not high at the moment.

And we are not just looking at traditional financials.

Did you know that Google has obtained an eMoney license in Lithuania and that Facebook did the same in Ireland?

Are you worried now? Let me explain that all of these concerns have been brought up before, and the general consensus is that the regulations are strict enough to warrant an introduction of PSD2 that will only allow trustworthy partners that have been vetted and will be monitored by the authorities.

Nevertheless, you can rest assured that we will keep an eye on this development. When the time comes that PSD2 is introduced to the public, it might also turn out to be a subject that phishers are interested in. We can already imagine the “Thank you for allowing us to access your bank account; click here to revoke permission” email buried in junk mail.

Stay safe, everyone!

The post Explained: Payment Service Directive 2 (PSD2) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The Advanced Persistent Threat files: Lazarus Group

Malwarebytes - Tue, 03/12/2019 - 16:27

We’ve heard a lot about Advanced Persistent Threats (APTs) over the past few years. As a refresher, APTs are prolonged, aimed attacks on specific targets with the intention to compromise their systems and gain information from or about that target.

While the targets may be anyone or anything—a person, business, or other organization—APTs are often associated with government or military operations, as they tend to be the organizations with the resources necessary to conduct such an attack. Starting with Mandiant’s APT1 report in 2013, there’s been a continuous stream of exposure of nation-state hacking at scale.

Cybersecurity companies have gotten relatively good at observing and analyzing the tools and tactics of nation-state threat actors; they’re less good at placing these actions in sufficient context for defenders to make solid risk assessments. So we’re going to take a look at a few APT groups from a broader perspective and see how they fit into the larger threat landscape.

Today, we’re going to review the activities of Lazarus group, alternatively named Hidden Cobra and Guardians of Peace.

Who is Lazarus Group?

Lazarus Group is commonly believed to be run by the North Korean government, motivated primarily by financial gain as a method of circumventing long-standing sanctions against the regime. They first came to substantial media notice in 2013 with a series of coordinated attacks against an assortment of South Korean broadcasters and financial institutions using DarkSeoul, a wiper program that overwrites sections of the victims’ master boot record.

In November 2014, a large-scale breach of Sony Pictures was attributed to Lazarus. The attack was notable for its substantial penetration across Sony networks, the extensive amount of data exfiltrated and leaked, as well as the use of a wiper in a possible attempt to erase forensic evidence. Attribution for the attacks was largely hazy, but the FBI released a statement tying the Sony breach to the earlier DarkSeoul attack and officially attributed both incidents to North Korea.

Fast forward to May 2017 with the widespread outbreak of WannaCry, a piece of ransomware that used an SMB exploit as an attack vector. Attribution to North Korea rested largely on code reuse between WannaCry and previous North Korean attacks, but this was considered to be thin grounds given the common practice of tool sharing between regional threat groups. Western intelligence agencies released official statements to the public reaffirming the attribution, and on September 6, 2018, the US Department of Justice charged a North Korean national with involvement in both WannaCry and the Sony breach.

More recently, the financially-motivated arm of Lazarus Group has been garnering attention for attacks against financial institutions, as well as cryptocurrency exchanges. The latter are notable for involving Trojanized trading apps for both Windows and macOS.

Malware commonly deployed

Should you be worried?

Yes, but not to the degree you might think. Lazarus Group activities center on financial gain, as well as on achieving the political goals of the North Korean regime. Given that North Korea’s stated political objectives tend to hyperfocus on regional conflicts with South Korea and Japan, businesses outside that sphere are probably at low risk of politically-motivated attacks.

Financial motivations, however, do pose a significant risk to almost all organizations. Fortunately, defense against these types of attacks is largely the same whether they are state sponsored or not. Defenders should have robust log-monitoring capability, a patch management program, anti-phishing protection, and flags to distinguish legitimate communications from leadership from imposters.

What might they do next?

Attribution for Lazarus attacks is softer than with many other threat groups, and divining political motivations for North Korea has proven difficult for decades. As a result, it’s tough to project what their next targets might be. It’s a reasonable assumption, however, that while sanctions remain on North Korean leadership, the financial motivations of Lazarus will also remain. Organizations at particular risk of financially-motivated attacks should include Lazarus while considering security mitigations.

Additional resources

Comprehensive review of tactics, techniques, and procedures (TTPs) by Kaspersky

Extensive review of the Sony attack

The post The Advanced Persistent Threat files: Lazarus Group appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (March 4 – 11)

Malwarebytes - Mon, 03/11/2019 - 15:47

Last week, Malwarebytes Labs released its in-depth, international data privacy survey of nearly 4,000 individuals, revealing that every generation, including Millennials, cares about online privacy. We also covered a novel case of zombie email that involved a very much alive account user, delved into the typical data privacy laws a US startup might have to comply with on its journey to success, and spotlighted the Troldesh ransomware, also known as “Shade.”

Other security news

Stay safe, everyone!

The post A week in security (March 4 – 11) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Zombie email rises from grave after eight years of radio silence

Malwarebytes - Fri, 03/08/2019 - 16:00

In a novel twist on “What happens to our accounts when we die,” we have “what happens to our abandoned accounts while we’re still alive”. In this case, UK ISP TalkTalk kept an old customer’s email account alive some eight years after she closed it—which left it wide open for takeover by spammers.

If you’ve cancelled an account and wondered which bits of your digital data continue to live on, this story is for you.

I’ve talked in the past about how when loved ones die, their emails, social network accounts, and more keep on keeping on. Of course, this content is a prime target for cybercriminals, who can pilfer contacts and other data from long-dormant accounts.

There are typically three ways of “rezzing” a dormant account, aka bringing it back. They are:

Accidental: This is where a previously dormant account comes back to life, but with no malicious intent behind it. For example, critic Roger Ebert’s wife accidentally started sending public messages instead of direct messages via his inactive Twitter feed.

Targeted: This is when trolls or other ne’er-do-wells specifically target an account to cause distress or just get a cheap laugh. A victim of the 2012 Aurora, Colorado, cinema shooting randomly tweeted “I’m alive” some years after the event. This was, of course, enormously distressing for everyone involved.

Non-targeted: This is a deliberate hack, but it isn’t specifically about the victim. Rather, the account is just there to serve as a sock puppet/fake account to sell a scam or push a bogus product. It’s quite common on social media, and for the scammer, it’s “just business.”

What happened with TalkTalk?

While we often see accounts belonging to the dead compromised and dragged into all manner of dubious online activities, this situation is a little different. The outcome is the same—an account, long dormant, is harvested and brought back into action, zombie-style. However, in this case, the former account owner is still alive. It’s “non-targeted” if we’re going by the examples above, but, in contrast to those examples, it’s causing considerable headaches for the account owner.

Companies usually keep multiple pieces of data on former customers for a period after account cancellation—web browsing history, payment methods, or old addresses, for example. But to keep an email dormant while attached to someone’s identity—and for eight full years—is a bad idea, because at some point it’s probably going to be compromised.

The compromise doesn’t even have to be a database breach. It could be something as simple as the person having drastically improved their security practices over the years, yet old accounts are forever tied to something like “password123”.

In this case, the account was indeed hijacked somehow. (The Register article doesn’t go into detail on this, and frankly it’d be a minor miracle if the affected person had any idea what happened some eight years on).

Friends of the account owner became aware something was up when the account started sending them emails with suspicious links to .pdf and .img files. The scammers reused previous subject lines to make it all look a touch more above board. This is similar to how mail menaces will use “RE:…” in their subject titles to make the email look as though it’s part of an actual discussion.

Why is this a problem?

The former owner couldn’t get the account shut down due to a multi-tiered portal setup. It’s not uncommon for ISPs to have multiple login sections, some of which cater to generic items and others to specific account features, packages, and anything else you care to think of. This is especially common when an organisation offers television, phone, Internet, and other services.

While this wouldn’t ordinarily be a problem, in order to shut down the compromised account, the former owner needed access to a specific portal that required her to be a current customer. As she’s no longer a customer, TalkTalk requested two forms of identification to prove her identity. Given previous stories about TalkTalk’s data breaches, she may be reluctant to hand them over.

What happens now?

Nobody is quite sure. Even if the ex-customer weren’t asking for it to be shut down, one would imagine TalkTalk would see it being used for spam and disable it. That has to break a ToS somewhere along the line.

Most ISPs issue an ISP-branded email address regardless of whether you want one or not. With that in mind, it’s worth logging into whatever portal you have available and having a look around. If an email address exists for your ISP, and you’ve never used it, it could be a problem for you down the line—or even right now. The account email may reuse your main login password, or have something incredibly basic assigned as the default password, which could easily be cracked.

You don’t want to walk into a zombie email scenario like the one outlined above. Review any dormant accounts you might have attached to things like cloud services, mobile or IoT devices, or ISPs and shut them down if you can. If you can’t, you can at least pop in there and add a difficult password unlikely to be broken through brute force. And if you want to go the extra mile, contact the companies attached to the email addresses and find out what their policies are for shutting down email accounts after customers leave.

As for suspicious emails: Should you receive something from an email address you haven’t seen in a long time, be careful. If you have another way to contact the person supposedly sending the missive, do so. Otherwise, keep these tips in mind before you open any attachments or click any links. It’s just not worth letting curiosity get the better of you—or your PC.

The post Zombie email rises from grave after eight years of radio silence appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The not-so-definitive guide to cybersecurity and data privacy laws

Malwarebytes - Thu, 03/07/2019 - 16:00

US cybersecurity and data privacy laws are, to put it lightly, a mess.

Years of piecemeal legislation, Supreme Court decisions, and government surveillance crises, along with repeated corporate failures to protect user data, have created a legal landscape that is, for the American public and American businesses, confusing, complicated, and downright annoying.

Businesses are expected to comply with data privacy laws based on the data’s type. For instance, there’s a law protecting health and medical information, another law protecting information belonging to children, and another law protecting video rental records. (Seriously, there is.) Confusingly, though, some of those laws only apply to certain types of businesses, rather than just certain types of data.

Law enforcement agencies and the intelligence community, on the other hand, are expected to comply with a different framework that sometimes separates data based on “content” and “non-content.” For instance, there’s a law protecting phone call conversations, but another law protects the actual numbers dialed on the keypad.

And even when data appears similar, its protections may differ. GPS location data might, for example, receive different protection depending on whether it is held by a cell phone provider or was willfully uploaded through an online location “check-in” service or a fitness app that lets users share jogging routes.

Congress could streamline this disjointed network by passing comprehensive federal data privacy legislation; however, questions remain about regulatory enforcement and whether states’ individual data privacy laws will be either respected or steamrolled in the process.

To better understand the current field, Malwarebytes is launching a limited blog series about data privacy and cybersecurity laws in the United States. We will cover business compliance, sectoral legislation, government surveillance, and upcoming federal legislation.

Below is our first blog in the series. It explores data privacy compliance in the United States today from the perspective of a startup.

A startup’s tale—data privacy laws abound

Every year, countless individuals travel to Silicon Valley to join the 21st century Gold Rush, staking claims not along the coastline, but up and down Sand Hill Road, where striking it rich means bringing in some serious venture capital financing.

But before any fledgling startup can become the next Facebook, Uber, Google, or Airbnb, it must comply with a wide, sometimes-dizzying array of data privacy laws.

Luckily, there are data privacy lawyers to help.

We spoke with D. Reed Freeman Jr., the cybersecurity and privacy practice co-chair at the Washington, D.C.-based law firm Wilmer Cutler Pickering Hale and Dorr about what a hypothetical, data-collecting startup would need to become compliant with current US data privacy laws. What does its roadmap look like?

Our hypothetical startup—let’s call it Spuri.us—is based in San Francisco and focused entirely on a US market. The company developed an app that collects users’ data to improve the app’s performance and, potentially, deliver targeted ads in the future.

This is not an exhaustive list of every data privacy law that a company must consider for data privacy compliance in the US. Instead, it is a snapshot, providing information and answers to potentially some of the most common questions today.

Spuri.us’ online privacy policy

To kick off data privacy compliance on the right foot, Freeman said the startup needs to write and post a clear and truthful privacy policy online, as defined in the 2004 California Online Privacy Protection Act.

The law requires businesses and commercial website operators that collect personally identifiable information to post a clear, easily-accessible privacy policy online. These privacy policies must detail the types of information collected from users, the types of information that may be shared with third parties, the effective date of the privacy policy, and the process—if any—for a user to review and request changes to their collected information.

Privacy policies must also include information about how a company responds to “Do Not Track” requests, which are web browser settings meant to prevent a user from being tracked online. The efficacy of these settings is debated, and Apple recently decommissioned the feature in its Safari browser.

Freeman said companies don’t need to worry about honoring “Do Not Track” requests as much as they should worry about complying with the law.

“It’s okay to say ‘We don’t,’” Freeman said, “but you have to say something.”
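As a rough illustration, the policy requirements described above can be treated as a checklist. This is a sketch only; the section names are our own labels, not legal terms of art.

```python
# Hypothetical checklist for the CalOPPA points discussed above.
# The section names are our own labels, for illustration only.
REQUIRED_SECTIONS = {
    "categories_of_pii_collected",  # types of information collected
    "third_party_sharing",          # what may be shared, and with whom
    "effective_date",               # when the policy took effect
    "user_review_process",          # how users can review/request changes
    "do_not_track_response",        # how the site answers DNT signals
}

def missing_sections(policy: dict) -> set:
    """Return the required sections a draft privacy policy still lacks."""
    return REQUIRED_SECTIONS - policy.keys()
```

A draft policy that only states an effective date, for instance, would still be flagged as missing the other four items.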

The law covers more than what to say in a privacy policy. It also covers how prominently a company must display it. According to the law, privacy policies must be “conspicuously posted” on a website.

More than 10 years ago, Google tried to test that interpretation and later backed down. Following a 2007 New York Times report that revealed that the company’s privacy policy was at least two clicks away from the home page, multiple privacy rights organizations sent a letter to then-CEO Eric Schmidt, urging the company to more proactively comply.

“Google’s reluctance to post a link to its privacy policy on its homepage is alarming,” said the letter, which was signed by the American Civil Liberties Union, the Center for Digital Democracy, and the Electronic Frontier Foundation. “We urge you to comply with the California Online Privacy Protection Act and the widespread practice for commercial web sites as soon as possible.”

The letter worked. Today, users can click the “Privacy” link on the search giant’s home page.

What About COPPA and HIPAA?

Spuri.us, like any nimble Silicon Valley startup, is ready to pivot. At one point in its growth, it considered becoming a health tracking and fitness app, meaning it would collect users’ heart rates, sleep regimens, water intake, exercise routines, and even their GPS location for selected jogging and cycling routes. Spuri.us also once considered pivoting into mobile gaming, developing an app that isn’t made for children, but could still be downloaded onto children’s devices and played by kids.

Spuri.us’ founder is familiar with at least two federal data privacy laws—the Health Insurance Portability and Accountability Act (HIPAA), which regulates medical information, and the Children’s Online Privacy Protection Act (COPPA), which regulates information belonging to children.

Spuri.us’ founder wants to know: If her company starts collecting health-related information, will it need to comply with HIPAA?

Not so, Freeman said.

“HIPAA, the way it’s laid out, doesn’t cover all medical information,” Freeman said. “That is a common misunderstanding.”

Instead, Freeman said, HIPAA only applies to three types of businesses: health care providers (like doctors, clinics, dentists, and pharmacies), health plans (like health insurance companies and HMOs), and health care clearinghouses (like billing services that process nonstandard health care information).

Without fitting any of those descriptions, Spuri.us doesn’t have to worry about HIPAA compliance.

As for complying with COPPA, Freeman called the law “complicated” and “very hard to comply with.” Attached to a massive omnibus bill at the close of the 1998 legislative session, COPPA is a law that “nobody knew was there until it passed,” Freeman said.

That said, COPPA’s scope is easy to understand.

“Some things are simple,” Freeman said. “You are regulated by Congress and obliged to comply with its byzantine requirements if your website is either directed to children under the age of 13, or you have actual knowledge that you’re collecting information from children under the age of 13.”

That begs the question: What is a website directed to children? According to Freeman, the Federal Trade Commission created a rule that helps answer that question.

“Things like animations on the site, language that looks like it’s geared towards children, a variety of factors that are intuitive are taken into account,” Freeman said.

Other factors include a website’s subject matter, its music, the age of its models, the display of “child-oriented activities,” and the presence of any child celebrities.

Because Spuri.us is not making a child-targeted app, and it does not knowingly collect information from children under the age of 13, it does not have to comply with COPPA.

A quick note on GDPR

No concern about data privacy compliance is complete without bringing up the European Union’s General Data Protection Regulation (GDPR). Passed in 2016 and having taken effect last year, GDPR regulates how companies collect, store, use, and share EU citizens’ personal information online. On the day GDPR took effect, countless Americans received email after email about updated privacy policies, often from companies that were founded in the United States.

Spuri.us’ founder is worried. She might have EU users but she isn’t certain. Do those users force her to become GDPR compliant?

“That’s a common misperception,” Freeman said. He said one section of GDPR explains this topic, which he called “extraterritorial application.” Or, to put it a little more clearly, Freeman said: “If you’re a US company, when does GDPR reach out and grab you?”

GDPR affects companies around the world depending on three factors. First, whether the company is established within the EU, either through employees, offices, or equipment. Second, whether the company directly markets or communicates to EU residents. Third, whether the company monitors the behavior of EU residents.

“Number three is what trips people up,” Freeman said. He said that US websites and apps—including those operated by companies without a physical EU presence—must still comply with GDPR if they specifically track users’ behavior that takes place in the EU.

“If you have an analytics service or network, or pixels on your website, or you drop cookies on EU residents’ machines that tracks their behavior,” that could all count as monitoring the behavior of EU residents, Freeman said.

Because those services are rather common, Freeman said many companies have already found a solution. Rather than dismantling an entire analytics operation, companies can instead capture the IP addresses of users visiting their websites. The companies then perform a reverse geolocation lookup. If the companies find any IP addresses associated with an EU location, they screen out the users behind those addresses to prevent online tracking.
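A minimal sketch of that screening step, with a hypothetical lookup table standing in for a real reverse geolocation service such as a GeoIP database (the IPs below are documentation addresses, and the EU country list is abbreviated):

```python
# Illustrative sketch of the IP-screening approach described above.
# GEO_LOOKUP is a hypothetical stand-in for a real GeoIP query.

EU_COUNTRY_CODES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL"}  # abbreviated

GEO_LOOKUP = {
    "203.0.113.7": "US",
    "198.51.100.23": "DE",
}

def should_track(ip_address: str) -> bool:
    """Track a visitor only if they do not resolve to an EU country."""
    country = GEO_LOOKUP.get(ip_address)  # real code: GeoIP database query
    if country is None:
        return False  # unknown origin: err on the side of not tracking
    return country not in EU_COUNTRY_CODES
```

The conservative default for unknown addresses reflects the point Freeman makes next: the goal is demonstrating a good-faith effort to avoid the regulated behavior.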

Asked whether this setup has been proven to protect against GDPR regulators, Freeman instead said that these steps showcase an understanding and a concern for the law. That concern, he said, should hold up against scrutiny.

“If you’re a startup and an EU regulator initiates an investigation, and you show you’ve done everything you can to avoid tracking—that you get it, you know the law—my hope would be that most reasonable regulators would not take a Draconian action against you,” Freeman said. “You’ve done the best you can to avoid the thing that is regulated, which is the track.”

A data breach law for every state

Spuri.us has a clearly-posted privacy policy. It knows about HIPAA and COPPA and it has a plan for GDPR. Everything is going well…until it isn’t.

Spuri.us suffers a data breach.

Depending on which data was taken from Spuri.us and who it referred to, the startup will need to comply with the many requirements laid out in California’s data breach notification law. There are rules on when the law is triggered, what counts as a breach, who to notify, and what to tell them.

The law protects Californians’ “personal information,” which it defines as a combination of information. For instance, a first and last name plus a Social Security number count as personal information. So do a first initial and last name plus a driver’s license number, or a first and last name plus any past medical insurance claims, or medical diagnoses. A Californian’s username and associated password also qualify as “personal information,” according to the law.
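The “combination” logic above can be sketched as a simple classifier; the field names here are our own, for illustration only, and this is not an exhaustive reading of the statute:

```python
# Rough sketch of the combination test described above: a record counts
# as personal information when a name is paired with a sensitive
# identifier, or when login credentials appear together.

SENSITIVE_FIELDS = {"ssn", "drivers_license", "medical_info", "insurance_claims"}

def is_personal_information(record: dict) -> bool:
    has_name = ("first_name" in record or "first_initial" in record) \
        and "last_name" in record
    has_sensitive = any(field in record for field in SENSITIVE_FIELDS)
    has_credentials = "username" in record and "password" in record
    return (has_name and has_sensitive) or has_credentials
```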

The law also defines a breach as any “unauthorized acquisition” of personal information data. So, a rogue threat actor accessing a database? Not a breach. That same threat actor downloading the information from the database? Breach.

In California, once a company discovers a data breach, it next has to notify the affected individuals. These notifications must include details on which type of personal information was taken, a description of the breach, contact information for the company, and, if the company was actually the source of the breach, an offer for free identity theft prevention services for at least one year.

The law is particularly strict on these notifications to customers and individuals impacted. There are rules on font size and requirements for which subheadings to include in every notice: “What Happened,” “What Information Was Involved,” “What We Are Doing,” “What You Can Do,” and “More Information.”

After Spuri.us sends out its bevy of notices, it could still have a lot more to do.

As of April 2018, every single US state has its own data breach notification law. These laws, which can sometimes overlap, still include important differences, Freeman said.

“Some states require you to notify affected consumers. Some require you to notify the state’s Attorney General,” Freeman said. “Some require you to notify credit bureaus.”

For example, Florida’s law requires that, if more than 1,000 residents are affected, the company must notify all nationwide consumer reporting agencies. Utah’s law, on the other hand, only requires notifications if, after an investigation, the company finds that identity theft or fraud occurred, or likely occurred. And Iowa has one of the few state laws that protects both electronic and paper records.

Of all the data compliance headaches, this one might be the most time-consuming for Spuri.us.

In the meantime, Freeman said, taking a proactive approach—like posting the accurate and truthful privacy policy and being upfront and honest with users about business practices—will put the startup at a clear advantage.

“If they start out knowing those things on the privacy side and just in the USA,” Freeman said, “that’s a great start that puts them ahead of a lot of other startups.”

Stay tuned for our second blog in the series, which will cover the current fight for comprehensive data privacy legislation in the United States.

The post The not-so-definitive guide to cybersecurity and data privacy laws appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Spotlight on Troldesh ransomware, aka ‘Shade’

Malwarebytes - Wed, 03/06/2019 - 16:00

Despite the decline in the number of ransomware infections over the last year, there are several ransomware families that are still active. Ransom.Troldesh, aka Shade, is one of them. According to our product telemetry, Shade has experienced a sharp increase in detections from Q4 2018 to Q1 2019.

When we see a swift spike in detections of a malware family, that tells us we’re in the middle of an active, successful campaign. So let’s take a look at this “shady” ransomware to learn how it spreads, what its symptoms are, why it’s dangerous to your business, and how you can protect against it.

Troldesh spiked in February 2019

Infection vector

Troldesh, which has been around since 2014, is typically spread by malspam—specifically, malicious email attachments. The attachments are usually zip files presented to the recipient as something they “have to” open quickly. The extracted file is a JavaScript downloader that retrieves the malicious payload (aka the ransomware itself). The payload is often hosted on sites with a compromised Content Management System (CMS).

Part of the obfuscated Troldesh JavaScript

As the sender in Troldesh emails is commonly spoofed, we can surmise that the threat actors behind this campaign are phishing, hoping to pull the wool over users’ eyes in order to get them to open the attachment.

The origin of Troldesh is believed to be Russian because its ransom notes are written in both Russian and English.

Target systems run the Windows OS. To start the infection, victims must unzip the attachment and double-click the JavaScript file.
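A mail-gateway filter for the first stage of this chain can be sketched as follows (the extension list is our own, illustrative choice, not an exhaustive set):

```python
import zipfile

# Script file types commonly abused in malspam attachments; an
# abbreviated, illustrative list.
SCRIPT_EXTENSIONS = (".js", ".jse", ".wsf", ".vbs")

def zip_contains_script(path: str) -> bool:
    """Flag zip attachments carrying script files, the malspam pattern
    used in the Troldesh infection chain described above."""
    with zipfile.ZipFile(path) as archive:
        return any(name.lower().endswith(SCRIPT_EXTENSIONS)
                   for name in archive.namelist())
```

A real gateway would also need to handle nested archives and password-protected zips, which this sketch ignores.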

Ransomware behavior

Once the encryption routine is complete, the ransomware drops a large number of numbered readme#.txt files on the infected computer, most likely to make sure that the victim will read at least one of them. These text files contain the same message as the ransom note.

Targeted file extensions

Troldesh looks for files with these extensions on fixed, removable, and remote drives:

.1cd, .3ds, .3fr, .3g2, .3gp, .7z, .accda, .accdb, .accdc, .accde, .accdt, .accdw, .adb, .adp, .ai, .ai3, .ai4, .ai5, .ai6, .ai7, .ai8, .anim, .arw, .as, .asa, .asc, .ascx, .asm, .asmx, .asp, .aspx, .asr, .asx, .avi, .avs, .backup, .bak, .bay, .bd, .bin, .bmp, .bz2, .c, .cdr, .cer, .cf, .cfc, .cfm, .cfml, .cfu, .chm, .cin, .class, .clx, .config, .cpp, .cr2, .crt, .crw, .cs, .css, .csv, .cub, .dae, .dat, .db, .dbf, .dbx, .dc3, .dcm, .dcr, .der, .dib, .dic, .dif, .divx, .djvu, .dng, .doc, .docm, .docx, .dot, .dotm, .dotx, .dpx, .dqy, .dsn, .dt, .dtd, .dwg, .dwt, .dx, .dxf, .edml, .efd, .elf, .emf, .emz, .epf, .eps, .epsf, .epsp, .erf, .exr, .f4v, .fido, .flm, .flv, .frm, .fxg, .geo, .gif, .grs, .gz, .h, .hdr, .hpp, .hta, .htc, .htm, .html, .icb, .ics, .iff, .inc, .indd, .ini, .iqy, .j2c, .j2k, .java, .jp2, .jpc, .jpe, .jpeg, .jpf, .jpg, .jpx, .js, .jsf, .json, .jsp, .kdc, .kmz, .kwm, .lasso, .lbi, .lgf, .lgp, .log, .m1v, .m4a, .m4v, .max, .md, .mda, .mdb, .mde, .mdf, .mdw, .mef, .mft, .mfw, .mht, .mhtml, .mka, .mkidx, .mkv, .mos, .mov, .mp3, .mp4, .mpeg, .mpg, .mpv, .mrw, .msg, .mxl, .myd, .myi, .nef, .nrw, .obj, .odb, .odc, .odm, .odp, .ods, .oft, .one, .onepkg, .onetoc2, .opt, .oqy, .orf, .p12, .p7b, .p7c, .pam, .pbm, .pct, .pcx, .pdd, .pdf, .pdp, .pef, .pem, .pff, .pfm, .pfx, .pgm, .php, .php3, .php4, .php5, .phtml, .pict, .pl, .pls, .pm, .png, .pnm, .pot, .potm, .potx, .ppa, .ppam, .ppm, .pps, .ppsm, .ppt, .pptm, .pptx, .prn, .ps, .psb, .psd, .pst, .ptx, .pub, .pwm, .pxr, .py, .qt, .r3d, .raf, .rar, .raw, .rdf, .rgbe, .rle, .rqy, .rss, .rtf, .rw2, .rwl, .safe, .sct, .sdpx, .shtm, .shtml, .slk, .sln, .sql, .sr2, .srf, .srw, .ssi, .st, .stm, .svg, .svgz, .swf, .tab, .tar, .tbb, .tbi, .tbk, .tdi, .tga, .thmx, .tif, .tiff, .tld, .torrent, .tpl, .txt, .u3d, .udl, .uxdc, .vb, .vbs, .vcs, .vda, .vdr, .vdw, .vdx, .vrp, .vsd, .vss, .vst, .vsw, .vsx, .vtm, .vtml, .vtx, .wb2, .wav, .wbm, .wbmp, .wim, .wmf, .wml, .wmv, .wpd, .wps, .x3f, .xl, .xla, .xlam, .xlk, .xlm, .xls, .xlsb, .xlsm, .xlsx, .xlt, .xltm, .xltx, .xlw, .xml, .xps, .xsd, .xsf, .xsl, .xslt, .xsn, .xtp, .xtp2, .xyze, .xz, and .zip

Encryption

Files are encrypted using AES-256 in CBC mode. For each encrypted file, two random 256-bit AES keys are generated: one encrypts the file’s contents, while the other encrypts the file name. The ransom extensions listed in the IOCs section below are appended after the file name is encrypted.
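The per-file key handling described above can be sketched as follows. This is purely an illustration of the two-keys-per-file scheme, not the actual malware code: a toy XOR routine stands in for AES-256-CBC so the example stays dependency-free.

```python
import os
from itertools import cycle

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR stand-in for AES-256-CBC -- NOT real encryption,
    used here only to keep the sketch dependency-free."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def encrypt_file_entry(contents: bytes, name: bytes) -> dict:
    # Two independent random 256-bit keys per file: one for the contents,
    # one for the file name, mirroring the scheme described above.
    content_key = os.urandom(32)
    name_key = os.urandom(32)
    return {
        "contents": toy_cipher(contents, content_key),
        "name": toy_cipher(name, name_key),
        "keys": (content_key, name_key),  # real malware protects these keys
    }
```

Because both keys are random per file, recovering one file tells an analyst nothing about any other file on the disk, which is part of what makes bulk decryption hard without the attacker’s key material.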

Protect against Troldesh

Malwarebytes users can block Ransom.Troldesh through several different protection modules, which are able to stop the ransomware from encrypting files in real time.

Real-time protection against the files in our definitions stops the ransomware itself:

Our anti-exploit and anti-ransomware modules block suspicious behavior:

Meanwhile, Malwarebytes’ malicious website protection blocks compromised sites:

Other methods of protection

There are some security measures you can take to avoid getting to the phase where protection has to kick in or files need to be recovered.

  • Scan emails with attachments. These suspicious mails should not reach the end user.
  • User education. If suspicious emails do reach end users, they should know not to open attachments of this nature or run executable files from attachments. In addition, if your company has an anti-phishing plan, users should know whom in the organization to forward the email to for investigation.
  • Blacklisting. Most end users do not need to be able to run scripts. In those cases, you can blacklist wscript.exe.
  • Update software and systems. Updating software can plug up vulnerabilities and keep known exploits at bay.
  • Back up files. Reliable and easy-to-deploy backups can shorten the recovery time.
Remediation

If you should get to the point where remediation is necessary, these are the steps to follow:

  • Perform a full system scan. Malwarebytes can detect and remove Ransom.Troldesh without further user interaction.
  • Recover files. Removing Troldesh does not decrypt your files. You can only get your files back from backups you made before the infection happened or by performing a roll-back operation.
  • Get rid of the culprit. Delete the email that was the root cause.
Decryption

Even though AES-256 is a strong encryption algorithm, there are free decryption tools available for some of the Troldesh variants. You can find out more about these decryption tools at NoMoreRansom.org (look under “Shade” in the alphabetical list).

Victims of Troldesh are provided with a unique code, an email address, and a URL to an onion address. They are asked to contact the email address mentioning their code or go to the onion site for further instructions. It is not recommended to pay the ransomware authors, as you will be financing their next wave of attacks.

What sets Troldesh apart from other ransomware variants is the huge number of readme#.txt files with the ransom note dropped on the affected system, and the contact by email with the threat actor. Otherwise, it employs a classic attack vector that relies heavily on tricking uninformed victims. Nevertheless, it has been quite successful in the past, and in its current wave of attacks. The free decryptors that are available only work on a few of the older variants, so victims will likely have to rely on backups or roll-back features.

IOCs

Ransom.Troldesh has used the following extensions for encrypted files:

.xtbl
.ytbl
.cbtl
.no_more_ransom
.better_call_saul
.breaking_bad
.heisenberg
.da_vinci_code
.magic_software_syndicate
.windows10
.crypted000007
.crypted000078

Contacts:
Novikov.Vavila@gmail.com
Selenadymond@gmail.com
RobertaMacDonald1994@gmail.com

IP: TCP 154.35.32.5 port 443 (outgoing)

Bitcoin: 1Q1FJJyFdLwPt5yyZAQ8kfxfeWq8eoD25E

Domain: cryptsen7fo43rr6.onion
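For responders, the ransom extensions above lend themselves to a quick triage sweep across a suspect drive or share; a minimal sketch:

```python
from pathlib import Path

# Extensions Troldesh appends to encrypted files, from the IOC list above.
TROLDESH_EXTENSIONS = {
    ".xtbl", ".ytbl", ".cbtl", ".no_more_ransom", ".better_call_saul",
    ".breaking_bad", ".heisenberg", ".da_vinci_code",
    ".magic_software_syndicate", ".windows10", ".crypted000007",
    ".crypted000078",
}

def find_encrypted_files(root: str) -> list:
    """Return files under `root` whose final suffix matches a known
    Troldesh ransom extension."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.suffix.lower() in TROLDESH_EXTENSIONS]
```

A non-empty result is a strong signal to isolate the machine and begin the remediation steps described earlier.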

The post Spotlight on Troldesh ransomware, aka ‘Shade’ appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Labs survey finds privacy concerns, distrust of social media rampant with all age groups

Malwarebytes - Tue, 03/05/2019 - 13:00

Before Cambridge Analytica made Facebook an unwilling accomplice to a scandal by appropriating and misusing more than 50 million users’ data, the public was already living in relative unease over the privacy of their information online.

The Cambridge Analytica incident, along with other, seemingly day-to-day headlines about data breaches pouring private information into criminal hands, has eroded public trust in corporations’ ability to protect data, as well as their willingness to use the data in ethically responsible ways. In fact, the potential for data interception, gathering, collation, storage, and sharing is increasing exponentially in all private, public, and commercial sectors.

Concerns of data loss or abuse have played a significant role in the US presidential election results, the legal and ethical drama surrounding Wikileaks, Brexit, and the implementation of the European Union’s General Data Privacy Regulations. But how does the potential for the misuse of private data affect the average user in Vancouver, British Columbia; Fresno, California; or Lisbon, Portugal?

To that end, The Malwarebytes Labs team conducted a survey from January 14 to February 15, 2019 to inquire about the data privacy concerns of nearly 4,000 Internet users in 66 countries, including respondents from: Australia, Belgium, Brazil, Canada, France, Germany, Hong Kong, India, Iran, Ireland, Japan, Kenya, Latvia, Malaysia, Mexico, New Zealand, the Philippines, Saudi Arabia, South Africa, Taiwan, Turkey, the United Kingdom, the United States, and Venezuela.

The survey, which was conducted via SurveyMonkey, focused on the following key areas:

  • Feelings on the importance of online privacy
  • Rating trust of social media and search engines with data online
  • Cybersecurity best practices followed and ignored (a list of options was provided)
  • Level of confidence in sharing personal data online
  • Types of data respondents are most comfortable sharing online (if at all)
  • Level of consciousness of data privacy at home vs. the workplace

____________________________________________________________________________________________________________________________

For a high-level look at our analysis of the survey results, including an exploration of why there is a disconnect between users’ emotions and their behaviors, as well as which privacy tools Malwarebytes recommends for those who wish to do more to protect their privacy, download our report:

The Blinding Effect of Security Hubris on Data Privacy

____________________________________________________________________________________________________________________________

For this blog, we explored commonalities and differences among Baby Boomers (ages 56+), Gen Xers (ages 36 – 55), Millennials (ages 18 – 35), and Gen Zeds, or the Centennials (ages 17 and under) concerning feelings about privacy, level of confidence sharing information online, trust of social media and search engines with data, and which privacy best practices they follow.

Lastly, we delved into the regional data compiled from respondents in Europe, the Middle East, and Africa (EMEA) and compared it against North America (NA) to examine whether US users share common ground on privacy with other regions of the world.

Privacy is complicated

If, 10 years ago, someone had asked you to carry a device that could listen in on your conversations, broadcast your exact location to marketers, and track you as you moved between the grocery aisles (noting how long you lingered in front of the Cap’n Crunch cereal), you would likely have declined, dismissing it as a crazy joke. Of course, that was before the advent of smartphones, which today can do all that and more.

Many regard the public disclosure of surreptitious information-gathering programs conducted by the National Security Agency (NSA) here in the US as a watershed moment in the debate over government surveillance and privacy. Despite the outcry, experts noted that the disclosures hardly made a dent in US laws about how the government may monitor citizens (and non-citizens) legally.

Tech companies in Silicon Valley were equally affected (or unaffected, depending on how you look at it) by Edward Snowden’s actions. Over time, however, they have felt the effects of users’ changing behaviors and attitudes toward their services. In the face of increasing pressure from criminal actions and public perception in key demographics, companies like Google, Apple, and Facebook have taken steps to strengthen encryption and better secure user data. But is this enough to make people trust them again?

Challenge: Put your money where your mouth is

In reality, particularly in commerce, we may have reservations about allowing companies to collect data from us, especially because we have little influence over how they use it, but that doesn’t stop us from handing it over. The care for the protection of our own data, and that of others, may well be nonexistent—signed away in an End-User License Agreement (EULA) buried 18 pages deep.

Case in point: A 2017 study involving students of the Massachusetts Institute of Technology (MIT) revealed that, among other findings, there is a paradox between how people feel about privacy and their willingness to give away data easily, especially when enticed with rewards (in this case, free pizza).

Indeed, we have a complicated relationship with our data and online privacy. One minute, we’re declaring on Twitter how the system has failed us, and the next, we’re taking a big bite of a warm slice of BBQ chicken pizza after giving away our best friend’s email address.

This raises the question: Is getting something in exchange for data a square deal? More specifically, should we have to give something away to use free services? Has a scam just taken place? But more to the point: Do people really, really care about privacy? If they do, why, and to what extent?

In search of answers

Before we conducted our survey, we had theories of our own, and these were colored by many previous articles on the topic. We assumed, for example, that Millennials and Gen Zeds, having grown up with the Internet already in place, would be much less concerned about their privacy than Baby Boomers, who spent a few decades on the planet before ever having created an online account. Rather than further a bias, we started from scratch—we wanted to see for ourselves how people of different generations truly felt about privacy.

Privacy by generations: an overview

This section outlines the survey’s overall findings across generations and regions. A breakdown of each generation’s privacy profile follows, including some correlations from studies that tackled similar topics in the past.

  • An overwhelming majority of respondents (96 percent) feel that online privacy is crucial. And their actions speak for themselves: 97 percent say they take steps to protect their online data, whether they are on a computer or mobile device.
  • Among seven options provided, below are the top four cybersecurity and privacy practices they follow:
    • “I refrain from sharing sensitive personal data on social media.” (94 percent)
    • “I use security software.” (93 percent)
    • “I run software updates regularly.” (90 percent)
    • “I verify the websites I visit are secured before making purchases.” (86 percent)
  • Among seven options provided, below are the top four cybersecurity faux pas they admitted to:
    • “I skim through or do not read End User License Agreements or other consent forms.” (66 percent)
    • “I use the same password across multiple platforms.” (29 percent)
    • “I don’t know which permissions my apps have access to on my mobile device.” (26 percent)
    • “I don’t verify the security of websites before making a purchase. (e.g. I don’t look for “https” or the green padlock on sites.)” (10 percent)

This shows that while respondents feel the need to protect their privacy and data online, they manage to do so consistently only most of the time, not all of the time.

  • There is a near equal percentage of people who trust (39 percent) and distrust (34 percent) search engines across all generations.
  • Across the board, there is a universal distrust of social media (95 percent). We can then safely assume that respondents are more likely to trust search engines to protect their data than social media.
  • When asked to agree or disagree with the statement, “I feel confident about sharing my personal data online,” 87 percent of respondents disagree or strongly disagree.
  • On the other hand, confident data sharers—or those who give away information to use a service they need—would most likely share their contact info (26 percent), such as name, address, phone number, and email address; card details when shopping online (26 percent); and banking details (16 percent).
  • A small portion (2 percent) of highly confident sharers are also willing to share (or already have shared) their Social Security Number (SSN) and health-related data.
  • In practice, however, 59 percent of respondents said they don’t share any of the sensitive data we listed online.
  • When asked to rate the statement, “I am more conscious of data privacy when at work than I am at home,” a large share (84 percent) said “false.”

Breaking it down

There are many events that happened within this decade that have shaped the way Internet users across generations perceive privacy and how they act on that perception. The astounding number of breaches that have taken place since 2017 and the billions of records stolen, leaked, and bartered on the digital underground market—not to mention the seemingly endless opportunities for governments, institutions, and individuals to spy on people and harvest their data—can either drive Internet users with a modicum of interest in preserving privacy to (1) live off the grid or (2) completely change their perception of data privacy. The former is unlikely to happen for the majority of users. The latter, however, is already taking place. In fact, not only have perceptions changed but so has behavior, in some cases, almost instantly.

We profiled each age group in light of past and present privacy-related events and how these have changed their perceptions, feelings, and online practices. Here are some of the important findings that emerged from our survey.

Centennials are no noobs when it comes to privacy.*

It’s important to note that while many users who are 18 years old and under (83 percent) admit that privacy is important to them, even more (87 percent) are taking steps to ensure that their data is secure online. Ninety percent of them do this by making sure that the websites they visit are secure before making online purchases. They also refrain from sharing sensitive PII on social media (86 percent) and use security software (86 percent).

Jerome Boursier, security researcher and co-founder of AdwCleaner, is also a privacy advocate. He disagrees with Gen Zeds’ claims that they don’t disclose their personally identifiable information (PII) on social media. “I think most people in the survey would define PII differently. People—especially the younger ones—tend to have a blurry definition of it and don’t consider certain information as personally identifiable the same way older generations do.”

Other notable practices Gen Zeds admit to are borrowed from the Cybersecurity 101 handbook, such as using complicated passwords and tools like a VPN on their mobile devices. Others go above and beyond normal practices, such as checking whether a downloaded file is malicious using VirusTotal and modifying files to prevent telemetry logging or reporting—something Microsoft has been doing since the release of Windows 7.

They are also the generation least likely to update their software.

Contrary to public belief, Millennials do care about their privacy.

This bears repeating: Millennials do care about their privacy.

An overwhelming majority (93 percent) of Millennials admitted to caring about their privacy. On the other hand, a small portion of this age group, while disclosing that they aren’t that bothered about their privacy, also admit that they still take steps to keep their online data safe.

One reason we can cite why Millennials may care about their privacy is that they want to manage their online reputations, and they are the most active at it, according to the Pew Research Center. In the report “Reputation Management and Social Media,” researchers found that Millennials take steps to limit the amount of PII available online, are well-versed in personalizing their social media privacy settings, delete unwanted comments about them on their profiles, and un-tag themselves from photos they were tagged in by someone else. Given that many employers Google their prospective employees (and Millennials know this), they take a proactive role in putting their best foot forward online.

Like Centennials, Millennials also use VPNs and Tor to protect their anonymity and privacy. In addition, they regularly conduct security checks on their devices and account activity logs, use two-factor authentication (2FA), and do their best to get on top of news, trends, and laws related to privacy and tech. A number of Millennials also admit to not having a social media presence.

While a large share (92 percent) of Millennials polled distrust social media with their data (and 64 percent of them feel the same way about search engines), they continue to use Google, Facebook, and other social media and search platforms. Several Millennials also admit that they can’t seem to stop themselves from clicking links.

Lastly, only a little over half of the respondents (59 percent) are as conscious of their data privacy at home as they are at work. This means that a sizable chunk of Millennials are more conscious of their privacy at work than they are at home.

Gen Xers feel and behave online almost the same way as Baby Boomers.

Gen Xers are the youngest of the older generations, but their habits resemble those of their elder counterparts more than those of their younger compatriots. Call it coincidence or bad luck—depending on your predisposition—or even “wisdom in action.” Either way, being likened to Baby Boomers is a compliment when it comes to privacy and security best practices.

Respondents in this age group have the highest number of people who are privacy-conscious (97 percent), and they are no doubt deliberate (98 percent) in their attempts to secure and take control of their data. Abstaining from posting personal information on social media ranks high in their list of “dos” at 93 percent. Apart from using security software and regularly updating all programs they use, they also do their best to opt out of everything they can, use strong passwords and 2FA, install blocker apps on browsers, and surf the web anonymously.

On the flip side, they’re second only to Millennials for The Generation Good at Avoiding Reading EULAs (71 percent). Gen Xers also bagged The Least Number of People in a Generation to Reuse Passwords (24 percent) award.

When it comes to a search engine’s ability to secure their data, nearly two-thirds of Gen Xers (65 percent) distrust them, while nearly a quarter (24 percent) remain neutral in their stance.

Baby Boomers know more about protecting privacy online than other generations, and they act upon that knowledge.

Our findings of Baby Boomers have challenged the longstanding notion that they are the most clueless bunch when it comes to cybersecurity and privacy.

Of course, this isn’t to say that there are no naïve users in this generation—all generations have them—but our survey results contrast sharply with what most of us accepted as truth about how Boomers feel about privacy and how they behave online. They’re actually smarter and more prudent than we care to give them credit for.

Baby Boomers came out as the most distrustful generation (97 percent) of social media when it comes to protecting their data. Because of this, those who have a social media presence hardly disclose (94 percent) any personal information when active.

In contrast, a little over half (57 percent) of Boomers trust search engines, making them the most trusting of all the generations. This means that a Baby Boomer is far more likely to trust search engines with their data than social media.

Boomers are also the least confident (89 percent) generation in terms of sharing personal data online. This correlates with a nationwide study commissioned by Hide My Ass! (HMA), a popular VPN service provider, about Baby Boomers and their different approach to online privacy. According to their research, Boomers are likely to respond, “I only allow trusted people to see anything I post & employ a lot of privacy restrictions.”

Lastly, they’re also the most consistent in terms of guarding their data privacy both at home and at work (88 percent).

“I am immediately surprised that Baby Boomers are the most conscious about data privacy at work and at home. Anecdotally, I guess it makes sense, at least in work environments,” says David Ruiz, Content Writer for Malwarebytes Labs and a former surveillance activist for the Electronic Frontier Foundation (EFF). He further recalls: “I used to be a legal affairs reporter and 65-and-up lawyers routinely told me about their employers’ constant data security and privacy practices (daily, changing Wi-Fi passwords, secure portals for accessing documents, no support of multiple devices to access those secure portals).”

Privacy by region: an overview of EMEA and NA

A clear majority of survey respondents within the EMEA region are from countries in Europe. One would think that Europeans would be more versed in online privacy practices, given they are known for taking privacy and data protection more seriously than those in North America (NA). While that is visible in certain age groups in EMEA, our data shows that the privacy savviness of those in NA is not far off. In fact, certain age groups in NA match or even trump the numbers in EMEA.

Comparing and contrasting user perception and practice in EMEA and NA

There is no denying that those polled in EMEA and NA care about privacy and take steps to secure themselves, too. Most of them refrain from disclosing any information they deem sensitive on social media (an average of 89 percent of EMEA users versus 95 percent of NA users), verify that websites where they plan to make purchases are secure (an average of 90 percent of EMEA users versus 91 percent of NA users), and use security software (an average of 89 percent of EMEA users versus 94 percent of NA users).

However, like what we’ve seen in the generational profiles, they also recognize the weaknesses that dampen their efforts. All respondents are prone to skimming through or completely avoiding reading the EULA (an average of 77 percent of EMEA users versus 71 percent of NA users). This is the most prominent problem across generations, followed by reusing passwords (an average of 26 percent of EMEA users versus 38 percent of NA users) and not knowing which permissions their apps have access to on their mobile devices (an average of 19 percent of EMEA users versus 17 percent of NA users).

As you can see, more users in NA are embracing these top online privacy practices than in EMEA.

All respondents from EMEA and NA are significantly distrustful of social media—92 and 88 percent, respectively—when it comes to protecting their data. Those who are willing to disclose their data online usually share their credit card details (26 percent), contact info (26 percent), and banking details (16 percent): essentially, the most common pieces of information people give out when banking and shopping online.

Millennials in both EMEA and NA (61 percent) feel the least conscious about their data privacy at work vs. at home. On the other hand, Baby Boomers (85 percent) in both regions feel the most conscious about their privacy in said settings.

It’s also interesting to note that Baby Boomers in both regions appear to share a similar profile.

Privacy in EMEA and NA: notable trends

When it comes to knowing which permissions apps have access to on mobile devices, Gen Zeds in EMEA (90 percent) are the most aware, compared to Gen Zeds in NA (63 percent). In fact, Gen Zeds and Millennials (73 percent) are the only generations in EMEA that are conscious of app permissions. Not only that, they are the groups least likely to reuse passwords (at 20 and 24 percent, respectively) across generations in both regions. Gen Xers in EMEA, however, have the highest rate of users (31 percent) who recycle passwords.

It also appears that older respondents in both regions—Gen Xers (31 percent) and Baby Boomers (37 percent)—are more likely to read EULAs, or to take the time to do so, than Gen Zeds and Millennials (both at 18 percent).

Gen Zeds in NA are the most distrustful generation of search engines (75 percent) and social media (100 percent) when it comes to protecting their data. They’re also the most uncomfortable (100 percent) when it comes to sharing personal data online.

Among the Baby Boomers, those in NA are the most conscious (85 percent) when it comes to data privacy at work. However, Baby Boomers in EMEA are not far off (84 percent).

With privacy comes universal reformation, for the betterment of all

The results of our survey have merely provided a snapshot of how generations and certain regions perceive privacy and what steps they take (and don’t take) to control what information is made available online. Many might be surprised by these findings while others may correlate them with other studies in the past. However you take it, one thing is clear: Online privacy has become as important an issue as cybersecurity, and people are beginning to take notice.

In the current privacy climate, Internet users cannot be expected to do all the heavy lifting. Regulators play a part, and businesses should act quickly to guarantee that the data they collect from users is only what is reasonably needed to keep services running. In addition, they should secure the data they handle and store, and ensure that users are informed of changes to which data is collected and how it is used. We believe that this demand on businesses will continue for at least the next three years, and any plans or reforms that elevate the importance of the online privacy of user data will serve as cornerstones to future transformations.

At this point in time, there is no real way to have complete privacy and anonymity online. It’s a pipe dream in the current climate. Perhaps the best we can hope for is a society where businesses of all sizes recognize that the user data they collect has a real impact on their customers, and respect and secure that data accordingly. Users should not be treated as a collection of entries with names, addresses, and contact numbers in a huge database. They should be customers once again: people on the lookout for products and services that meet their needs.

The privacy advocate mantle would then be taken up by Centennials and “Alphas” (or the iGeneration), the first age group born entirely within the 21st century and considered the most technologically infused of us all. For those who wish to conduct future studies on privacy like this one, it would be really, really interesting to see how Alphas and Centennials would react to a free box of pizza in exchange for their mother’s maiden name.

[*] Malwarebytes Labs was only able to poll a total of 31 respondents in Gen Zed. This isn’t enough to create an accurate profile of this age group. However, this author believes that what we were able to gather is enough to give an informed assessment of this age group’s feelings and practices.

The post Labs survey finds privacy concerns, distrust of social media rampant with all age groups appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (February 25 – March 3)

Malwarebytes - Mon, 03/04/2019 - 18:03

Last week, we delved into the realm of K-12 schools and security, explored the world of compromised websites and Golang bruteforcers, and examined the possible realms of pay for privacy. We also looked at identity management solutions, Google’s Universal Read Gadget, and did the deepest of dives into the life of Max Schrems.

Other security news
  • Big coin, big problems: Founder of My Big Coin charged with seven counts of fraud (Source: The Register)
  • Another day, another exposed list: Specifically, the paid-for Dow Jones watchlist (Source: Security Discovery)
  • Mobile malware continues to rise: Mobile threats may have been a little quiet recently, but that certainly doesn’t mean they’ve gone away. Ignore at your peril (Source: CBR)
  • PDF tracking: Viewing some samples in Chrome can lead to tracking behaviour (Source: EdgeSpot)
  • Verification bait and switch: Instagram users who desire verification status should be wary of a phish currently in circulation (Source: PCMag)
  • Missile warning sent from hacked Twitter account: The dangers of not securing your social media profile take on a whole new terrifying angle (Source: Naked Security)
  • Graphics card security update: NVIDIA rolls out a fix patching no less than 8 flaws for their display driver (Source: NVIDIA)
  • Momo, oh no: The supposed Momo challenge has predictably turned out to be an urban myth; in fact, it was known to be a so-called creepypasta hoax for a long time (Source: IFLScience)
  • Police arrest supplier of radios: Turns out you really don’t want to install fraudulent software from someone Homeland Security considers a security threat (Source: CBC News)

Stay safe, everyone!

The post A week in security (February 25 – March 3) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Spectre, Google, and the Universal Read Gadget

Malwarebytes - Fri, 03/01/2019 - 16:43

Spectre, a seemingly never-ending menace to processors, is back in the limelight once again thanks to the Universal Read Gadget. First seen at the start of 2018, Spectre emerged alongside Meltdown as a major potential threat to system security.

Meltdown and Spectre

Meltdown targeted Intel processors and required a malicious process running on the system to interact with it. Spectre could be launched from browsers via a script. Because these threats target hardware flaws in the CPU, they were difficult to address, requiring BIOS and microcode updates alongside operating system patches to ensure a safe online experience. As per our original blog:

The core issue stems from a design flaw that allows attackers access to memory contents from any device, be it desktop, smart phone, or cloud server, exposing passwords and other sensitive data. The flaw in question is tied to what is called speculative execution, which happens when a processor guesses the next operations to perform based on previously cached iterations.

The Meltdown variant only impacts Intel CPUs, whereas the second set of Spectre variants impacts all vendors of CPUs with support of speculative execution. This includes most CPUs produced during the last 15 years from Intel, AMD, ARM, and IBM.

This was not a great situation for anyone to suddenly find themselves in. Manufacturers were caught on the back foot, and customers rightly demanded a solution.

If this is the part where you’re thinking, “What caused this again?” then you’re in luck.

Speculative patching woes

The issues came from something called “speculative execution.” As we said in this follow-up blog about patching difficulties:

Speculative execution is an effective optimization technique used by most modern processors to determine where code is likely to go next. Hence, when it encounters a conditional branch instruction, the processor makes a guess for which branch might be executed based on the previous branches’ processing history. It then speculatively executes instructions until the original condition is known to be true or false. If the latter, the pending instructions are abandoned, and the processor reloads its state based on what it determines to be the correct execution path.

The issue with this behaviour and the way it’s currently implemented in numerous chips is that when the processor makes a wrong guess, it has already speculatively executed a few instructions. These are saved in cache, even if they are from the invalid branch. Spectre and Meltdown take advantage of this situation by comparing the loading time of two variables, determining if one has been loaded during the speculative execution, and deducing its value.
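The behaviour described above can be sketched in a toy model (a deliberately simplified illustration, not working exploit code; the class and variable names are ours). The key point is that when a mispredicted branch is squashed, the processor rolls back architectural state such as registers, but not micro-architectural state such as the cache, which quietly records every speculative load:

```python
# Toy model of why speculative execution leaks. Architectural results of a
# mispredicted branch are discarded, but the cache footprint of the
# speculative loads survives, and that footprint depends on the secret.

class ToyCPU:
    def __init__(self, memory):
        self.memory = memory        # simulated RAM: a flat list of bytes
        self.cache = set()          # addresses touched by any load

    def load(self, addr):
        self.cache.add(addr)        # every load leaves a cache footprint
        return self.memory[addr]

    def victim(self, index, arr_base, arr_len, probe_base):
        # Suppose the branch predictor was trained to guess "in bounds,"
        # so both loads execute speculatively even when index >= arr_len.
        secret = self.load(arr_base + index)    # (out-of-bounds) array read
        self.load(probe_base + secret)          # secret-dependent probe load
        if index < arr_len:
            return secret           # architecturally visible result
        return None                 # speculation squashed: result discarded

memory = [7, 3, 9, 4] + [42] + [0] * 300   # the byte at address 4 is "secret"
cpu = ToyCPU(memory)
result = cpu.victim(index=4, arr_base=0, arr_len=4, probe_base=5)

assert result is None               # the program never "sees" the secret...
assert (5 + 42) in cpu.cache        # ...but the cache records it: 42 leaks
```

In a real attack, the attacker cannot read cache contents directly; instead, they time accesses to a probe array and infer the secret from which cache line loads fastest.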

Four variants existed across Spectre and Meltdown, with Intel, IBM, ARM, and AMD being snagged by Spectre and “just” Intel being caught up by Meltdown.

Because the vulnerabilities are rooted in the CPUs (central processing units) themselves, they were tricky to fix. Software alterations could cause performance snags, and hardware fixes could be even more complicated. A working group was formed to try to thrash out the incredibly complicated details of how this issue would be tackled.

In January 2018, researchers stressed the only real way to solve Spectre was redesigning computer hardware from the ground up. This is no easy task. Replace everything, or suffer the possible performance hit from any software fixes. Fairly complex patching nightmares abound, with operating systems, pre/post Skylake CPUs, and more needing tweaks or wholesale changes.

Additional complications

It wasn’t long before scams started capitalising on the rush to patch. Now people suddenly had to deal with unrelated fakes, malware, and phishes on top of actual Meltdown/Spectre threats.

Alongside the previously mentioned scams, fake websites started to pop up, too. Typically, they claimed to be official government portals or plain old download sites offering up a fix. They might also make use of SSL, because displaying a padlock is now a common trick of phishers. That padlock provides a false sense of security: just because it’s there doesn’t mean a site is safe. All it means is that traffic to the site is encrypted. Beyond that, you’re on your own.

The site in our example offered up a ZIP file. Contained within was SmokeLoader, malware well known for attempting to grab additional malicious downloads.


Eventually, the furore died down and people slowly forgot about Spectre. It’d pop up again in occasional news articles, but for the most part, people treated it as out of sight, out of mind.

Which brings us to last week’s news.

Spectre: What happened now?

What happened now is a reiteration of the “it’s not safe yet” message. The threat is mostly the same, and a lot of people may not need to worry about this. However, as The Register notes, the problem hasn’t gone away and some developers will need to keep it in mind.

Google has released a paper titled, unsurprisingly enough, “Spectre is here to stay: An analysis of side-channels and speculative execution.”

The Google paper

First things first: It’s complicated, and you can read the full paper [PDF] here.

There are a lot of moving parts to this, and frankly, nobody should be expected to understand everything in it unless they work in or around this field in some capacity. Some of this has already been mentioned, but that was about 700 words ago, so a short recap may be handy:

  1. Side channels are bad. Your computer may be doing a bunch of secure tasks, keeping your data safe. All those bits and pieces of hardware, however, are doing all sorts of things to make those secure processes happen. Side channel attacks come at the otherwise secure data from another angle, in the realm of the mechanical. Sound, power consumption, timing between events, electromagnetic leaks, cameras, and more. All of these provide a means for a clever attacker to exploit this leaky side channel and grab data you’d rather they didn’t.
  2. They do this, in Spectre’s case, by exploiting speculative execution. Modern processors make extensive use of speculative execution: it improves performance by guessing what a program will do next, executing those guesses ahead of time, and abandoning the results if the guess turns out to be wrong. Correctly guessed paths are kept, and everything gets a nice speed boost. Those speculatively executed paths are where Spectre comes in.
  3. As the paper says, “computations that should never have happened…allow for information to be leaked” via Spectre. It allows the attacker to inject “dangerously speculative behaviour” into trusted code, or untrusted code typically subjected to safety checks. Both are done through triggering “ordinarily impossible computations” through specific manipulations of the processor’s shared micro-architectural states.
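Timing, the side channel most relevant here, can be demonstrated without touching the CPU at all. Below is a minimal, hypothetical sketch (the secret value and function names are ours): a comparison that exits at the first mismatched byte runs longer the more leading bytes match, so an attacker who can measure response times can recover a secret one byte at a time. Python’s `hmac.compare_digest` is the standard constant-time alternative:

```python
import hmac

SECRET = b"hunter2"  # hypothetical secret a remote attacker wants to guess

def naive_check(guess: bytes) -> bool:
    # Early exit: runtime grows with the number of matching leading bytes,
    # leaking the secret's contents one position at a time via timing.
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
    return True

def safe_check(guess: bytes) -> bool:
    # compare_digest examines every byte regardless of where a mismatch
    # occurs, so its runtime does not depend on the contents of the guess.
    return hmac.compare_digest(guess, SECRET)
```

Both functions return the same answers; the difference lies only in how long they take to return them, which is exactly the kind of signal a side-channel attacker measures.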

Everything is a bit speed versus security, and security lost out. Manufacturers realised too late that the speed/security tradeoff came with a hefty price the moment Spectre arrived on the scene. Assuming bad actors couldn’t tamper with speculative execution—or worse, not considering the possibility in the first place—has turned out to be a bit of a disaster.

The paper goes on to list that Intel, ARM, AMD, MIPS, IBM, and Oracle have all reported being affected. It’s also clear that:

Our paper shows these leaks are not only design flaws, but are in fact foundational, at the very base of theoretical computation.

This isn’t great. Nor is the fact that they estimate it’s probably more widely distributed than any security flaw in history, affecting “billions of CPUs in production across all device classes.”

Spectre: no exorcism due

The research paper asserts that Spectre is going to be around for a long time. Software-based techniques to ward off the threat will never quite remove the issue. They may ward off the threat but add a performance cost, with more layers of defence potentially making things too much of a drag to consider them beneficial.

The fixes end up being a mixed bag of trade-offs and performance hits, and Spectre is so variable and evasive that it quickly becomes impossible to pin down a 100 percent satisfactory solution. At this point, Google’s “Universal Read Gadget” wades in and makes everything worse.

What is the Universal Read Gadget?

It’s a way to read data without permission that is, for all intents and purposes, unstoppable. By combining multiple vulnerabilities in how current languages run on the CPU, an attacker can construct such a read gadget, and that’s the real meat of Google’s research. Nobody is going to ditch speculative execution anytime soon, and nobody is going to magically solve the side channel issue, much less something like a Universal Read Gadget.
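To see why a “universal” read gadget is so alarming, it helps to sketch what it would do: repeat a speculative out-of-bounds read for every address of interest, recovering each byte through the cache side channel until arbitrary memory in the same address space has been reconstructed. The following is purely illustrative, with an invented cache model and helper names; a real gadget needs detailed micro-architectural knowledge, as the paper itself stresses.

```python
# Toy sketch of the "universal read gadget" idea: loop a speculative read
# primitive over addresses, decoding each byte from which cache line it warms.
# The cache model and function names are invented for illustration.

CACHE_LINE = 64
PROBE_BASE = 4096

def speculative_touch(memory, cache, addr):
    """Model of one speculative read: the byte at addr selects which
    probe cache line gets warmed, even though the read is unauthorized."""
    cache.add((PROBE_BASE + memory[addr] * CACHE_LINE) // CACHE_LINE)

def recover_byte(memory, addr):
    cache = set()                        # "flush" probe lines between reads
    speculative_touch(memory, cache, addr)
    # A real attacker times accesses to each probe line; here we just look:
    for value in range(256):
        if (PROBE_BASE + value * CACHE_LINE) // CACHE_LINE in cache:
            return value

secret = b"hunter2"
memory = bytearray(64) + bytearray(secret)   # secret sits past offset 64
dumped = bytes(recover_byte(memory, 64 + i) for i in range(len(secret)))
print(dumped.decode())   # prints: hunter2
```

One byte at a time, the whole address space falls: that is what the paper means by untrusted code defeating “all language-enforced confidentiality,” since no in-process bounds check or sandbox changes which cache lines get warmed.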

As the paper states,

We now believe that speculative vulnerabilities on today’s hardware defeat all language-enforced confidentiality with no known comprehensive software mitigations…as we have discovered that untrusted code can construct a universal read gadget to read all memory in the same address space through side-channels.

On the other hand, it’s clear we shouldn’t start panicking. It sounds bad, and it is bad, but it’s unlikely anyone is exploiting you using these techniques. Of course, unlikely doesn’t mean unfeasible, and this is why hardware and software organisations continue to wrestle with this particular genie.

The research paper stresses that the URG is very difficult to pull off.

The universal read gadget is not necessarily a straightforward construction. It requires detailed knowledge of the μ-architectural characteristics of the CPU and knowledge of the language implementation, whether that be a static compiler or a virtual machine. Additionally, the gadget might have particularly unusual performance and concurrency characteristics

Different scenarios require different approaches, and the paper lists multiple instances where the gadget may simply fail. In short, nobody is going to come along and Universal Read Gadget your computer tomorrow. For now, much of this remains theoretical. That doesn’t mean tech giants are becoming complacent, however, and hardware and software organisations have a long road ahead to finally lay this spectre to rest.

The post Spectre, Google, and the Universal Read Gadget appeared first on Malwarebytes Labs.
