Techie Feeds

A look inside the FBI’s 2018 IC3 online crime report

Malwarebytes - Wed, 04/24/2019 - 15:57

The FBI’s Internet Crime Complaint Center (IC3) has released its annual crime report, with the most recent edition focusing on 2018. While the contents may not surprise, the report definitely cements some of the bigger threats to consumers and businesses—and not all of them are particularly high tech. Sometimes less is most definitely more.

What is the Internet Crime Complaint Center?

Good question. For those not in the know, it’s the FBI’s way of allowing you to file a complaint about a computer crime. If the victim or alleged perpetrator is located in the US, you can file. The information is then handed to trained analysts who distribute the data as appropriate.

They eventually take all that information and turn it into a report. There’s a fair bit in there to chew on—here’s the report, in PDF format—but there are some prominent themes on display. Shall we take a look at what’s hot?

Business Email Compromise (BEC)

Business Email Compromise is something we mention on here fairly regularly. Someone typically pretends to be the CEO of an organisation and attempts to trick someone in finance into making a wire transfer. Cash is often routed through Hong Kong, where wires are common, so as not to attract attention.

It’s a straightforward attack, low risk, small overheads, and if you fire enough out, eventually someone will bite. You only need one successful attack to walk away with millions.
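The pattern lends itself to simple heuristics. As a toy illustration (the helper, domains, and addresses below are hypothetical, not anything from the IC3 report), a filter might flag two classic BEC tells: a Reply-To domain that differs from the sending domain, and an executive display name paired with a free webmail address:

```python
# Toy BEC red-flag check. Real email filters are far more involved;
# this only illustrates the two tells described above.
FREE_WEBMAIL = {"gmail.com", "outlook.com", "yahoo.com"}
EXEC_TITLES = ("ceo", "cfo", "president")

def bec_red_flags(display_name: str, from_addr: str, reply_to: str) -> list[str]:
    flags = []
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()
    if reply_domain != from_domain:
        flags.append("reply-to domain differs from sender domain")
    if any(t in display_name.lower() for t in EXEC_TITLES) and from_domain in FREE_WEBMAIL:
        flags.append("executive display name on a free webmail address")
    return flags

# A message claiming to be from the CEO, sent from webmail, replying elsewhere:
print(bec_red_flags("CEO Jane Smith", "jane@gmail.com", "payments@wires.example"))
```

Neither signal proves fraud on its own, but together they are exactly the kind of cheap check that catches the high-volume, low-effort attacks described here.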

In 2018, IC3:

  • Received just over 20,000 reports of BEC attacks
  • Declared adjusted losses of over $1.2 billion

Those are big numbers, but even bigger when you consider BEC reports the year before were 15,000, and adjusted losses were $675 million. One slightly peculiar twist to the usual “steal your money” approach is this:

In 2018, the IC3 received an increase in the number of BEC/EAC complaints requesting victims purchase gift cards. The victims received a spoofed email, a spoofed phone call or a spoofed text from a person in authority requesting the victim purchase multiple gift cards for either personal or business reasons.

Not quite as glamorous as Hong Kong wires, and in all honesty it sounds faintly ludicrous at first viewing, but it’s definitely working for somebody.
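The year-over-year jump cited above is easy to sanity-check with quick arithmetic:

```python
# Year-over-year growth in BEC reports and adjusted losses,
# using the IC3 figures quoted above.
bec_2017_reports, bec_2018_reports = 15_000, 20_000
bec_2017_losses, bec_2018_losses = 675e6, 1.2e9  # adjusted losses, USD

report_growth = (bec_2018_reports - bec_2017_reports) / bec_2017_reports
loss_growth = (bec_2018_losses - bec_2017_losses) / bec_2017_losses

print(f"Reports up {report_growth:.0%}, losses up {loss_growth:.0%}")
# Reports grew by about a third; losses grew nearly twice as fast.
```

In other words, losses per complaint are rising, not just the complaint count.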

Payroll diversion

This is an interesting twist on the BEC scams. The attackers don’t waste time pretending to be CEOs. Instead, they go for logins tied to payroll processing systems. Once they’re in, they change the account information and the money is diverted to somewhere controlled by the hacker. They’ll also hide the warnings that would have alerted admins to deposit information changes. The money will then typically be sent to a prepaid card—yes, prepaid cards are flavour of the month (year?) this time around. From the report:

Institutions most affected by this scam have been education, healthcare, and commercial airway transportation.

From just one hundred complaints, there was a combined reported loss of $100 million. This is frankly astonishing. Phishing can truly be devastating in the right hands.

Tech support fraud

Tech support scams feel as though they’ve been around forever, and they’re busy cementing their place in the top three table of awful things. The 2018 tally for these antics weighs in at 14,000 complaints from victims scattered across 48 countries. The losses almost hit $39 million, representing a 161 percent rise from the previous year. Most of the victims are over 60, which fits the general M.O. of going after older targets who may not be aware of the latest happenings in fraud land.

The full report covers topics such as top states divided by both number of victims and victim losses, breakdowns on target age groups, crime types, assets recovered, and much more.

One thing’s for sure: with over 900 complaints a day, roughly 300,000 complaints received per year on average, and something in the region of $2.71 billion in losses accounted for in 2018, online crime isn’t going away anytime soon.

The post A look inside the FBI’s 2018 IC3 online crime report appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Consumers have few legal options for protecting privacy

Malwarebytes - Tue, 04/23/2019 - 17:03

There are no promises in the words, “We care about user privacy.”

Yet, these words appear on privacy policy after privacy policy, serving as disingenuous banners to hide potentially invasive corporate practices, including clandestine data collection, sharing, and selling.

This is no accident. It is a strategy.

In the US, companies that break their own privacy policies can—and do—face lawsuits over misleading and deceiving their users, including making false statements about data privacy. But users are handicapped in this legal fight, as successful lawsuits and filings are rare.

Instead of relying on the legal system to assert their data privacy rights, many users turn to tech tools, installing various web browsers, browser extensions, and VPNs to protect their online behavior.

Luckily, users aren’t alone in this fight. A small number of companies, including Apple, Mozilla, Signal, WhatsApp, and others, are truly committed to user privacy. They stand up to overbroad government requests. They speak plainly about data collection. And they often disengage from practices that put user data in the hands of unexpected third parties.

In the latest blog in our series on data privacy and cybersecurity laws, we look at the options that consumers actually have in asserting their digital privacy rights today. In the US, it is an area of law that, unlike global data protection, is slim.

As Jay Stanley, senior policy analyst with the ACLU Speech, Privacy, and Technology Project, put it: “There’s a thin web of certain laws that exist out there [for digital consumer privacy], but the baseline default is that it’s kind of the Wild West.”

Few laws, few protections

For weeks, Malwarebytes Labs has delved into the dizzying array of global data protection and cybersecurity laws, exploring why, for instance, a data breach in one state requires a different response than a data breach in another, or why “personal information” in one country is not the same as “personal data” in another.

Despite the robust requirements for lawful data protection around the world, individuals in the United States experience the near opposite. In the US, there is no comprehensive federal data protection law, and thus, there is no broad legal protection that consumers can use to assert their data privacy rights in court.

“In the United States, the sort of default is: Consumer beware,” said Lee Tien, senior staff attorney with the digital rights nonprofit Electronic Frontier Foundation.

As we explored last month, US data protection law is split into sectors—there’s a law for healthcare providers, a law for video rental history, a law for children’s online information, and laws for other select areas. But user data that falls out of those narrow scopes has little protection.

If a company gives intimate menstrual tracking info to Facebook? Tough luck. If a flashlight app gathers users’ phone contacts? Too bad. If a vast network of online advertising companies and data brokers build a corporate surveillance regime that profiles, monitors, and follows users across websites, devices, and apps, delivering ads that never disappear? Welcome to the real world.

“In general, unless there is specific, sectoral legislation, you don’t have much of a right to do anything with respect to [data privacy],” Tien said.

There is one caveat, though.

In the US, companies cannot lie about their own business practices, data protection practices included. Consumer protection laws prohibit “unlawful, unfair, or fraudulent” business practices, along with “unfair, deceptive, untrue, or misleading” advertising. Whatever a company says it does should, legally, be what it actually does, Tien said.

“Most of consumer privacy that’s not already controlled by a statute lives in this space of ‘Oh, you made a promise about privacy, and then you broke it,’” Tien said. “Maybe you said you don’t share information, or you said that when you store information at rest, you store it in air-gapped computers, using encryption. If you say something like that, but it’s not true, you can get into trouble.”

This is where a company’s privacy policy becomes vital. Any company’s risk for legal liability is only as large as its privacy policy is detailed.

In fact, the fewer privacy promises made, the fewer opportunities to face a lawsuit, said ACLU’s Stanley.

“This is why all privacy policies are written to not make any promises, but instead have hand-wavy statements,” Stanley said. “What often follows a sweeping statement is 16 pages of fine print about privacy and how the company actually doesn’t make any promises to protect it.”

But what about a company that does make—and break—a promise?

Few laws, fewer successful assertions

Okay, so let’s say a company breaks its data privacy promise. It said it would not sell user data in its privacy policy and it undeniably sold user data. Time to go to court, right?

Not so fast, actually.

The same laws that prohibit unfair and deceitful business practices also often include a separate legal requirement for anyone that wants to use them in court: Individuals must show that the alleged misconduct personally harmed them.

Proving harm for something like a data breach is exceedingly difficult, Tien said.

“The mechanism of harm is more customized per victim than, say, an environmental issue,” Tien said, explaining that even the best data science can’t reliably predict an average person’s harm when subjected to a data breach the way that environmental science can predict an average person’s harm if they’ve been subjected to, for instance, a polluted drinking source.

In 2015, this difficulty bore out in court, when an Uber driver sued the ride-hailing company because of a data breach that affected up to 50,000 drivers. The breach, the driver alleged, led to a failed identity theft attempt and a fraudulent credit card application in his name.

Two years later, the judge dismissed the lawsuit. At a hearing she told the driver: “It’s not there. It’s just not what you think it is…It really isn’t enough to allege a case.”

There is, again, a caveat.

Certain government officials—including state Attorneys General, county District Attorneys, and city attorneys—can sue a company for its deceitful business practices without having to show personal harm. Instead, they can sue on behalf of the public.

In 2018, this method was also tested in court, with the exact same company. Facing pressure from 51 Attorneys General—one for each US state and one for Washington, D.C.—Uber paid $148 million to settle a lawsuit alleging the company’s misconduct when covering up a data breach two years earlier.

Despite this success, waiting around for overworked government attorneys to file a lawsuit on a user’s behalf is not a practical solution to protecting online privacy. So, many users have turned to something else—technology.

Consumer beware? Consumer prepared

As online tracking methods have evolved far past the simpler days of just using cookies, consumers have both developed and adopted a wide array of tools to protect their online behavior, hiding themselves from persistent advertisers.

Paul Stephens, director of policy and advocacy for Privacy Rights Clearinghouse, said that, while the technology of tracking has become more advanced, so have the tools that push back.

Privacy-focused web browsers, including Brave and Mozilla’s Firefox Focus, were released in the past two years, and tracking-blocking browser extensions like Ghostery, Disconnect, and Privacy Badger—which is developed by EFF—are all available, at least in basic models, for free to consumers. Even Malwarebytes has a browser extension for both Firefox and Chrome that, along with obstructing malicious content and scams, blocks third-party ads and trackers that monitor users’ online behavior.
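At their core, these tracker blockers work by matching each outgoing request against a list of known tracker domains, including their subdomains. A minimal sketch of that decision, with placeholder domains rather than any real extension’s list:

```python
# Minimal sketch of a tracker-blocking decision: block a request
# if its host is a listed tracker domain or a subdomain of one.
from urllib.parse import urlparse

TRACKER_DOMAINS = {"tracker.example", "ads.example"}  # placeholders

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Match the exact domain and any subdomain of a listed tracker.
    return any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS)

print(is_blocked("https://cdn.tracker.example/pixel.gif"))  # True
print(is_blocked("https://news.example/article"))           # False
```

Real extensions layer on far more (pattern rules, heuristics, allowlists), but the domain match is the basic mechanism.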

Stephens said he has another philosophy about protecting online privacy: Never trust an app.

“We have this naïve conception that the information we’re giving an app, that what we’re doing with that app, is staying with that app,” Stephens said. “That’s really not true in most situations.”

Stephens pointed to the example of a flashlight app that, for no discernible reason, collected users’ contact lists, potentially gathering the phone numbers and email addresses for every friend, family member, and met-once-at-a-party acquaintance.

“Quite frankly,” Stephens said, “I would not trust any app to not leak my data.”

Corporate respect for consumer privacy

There is one last pillar in defending consumer privacy, and, luckily for many users, it’s a sturdy one: corporations.

Yes, we earlier criticized the many nameless companies that window-dress themselves in empty privacy promises, but, for years, several companies have emerged as meaningful protectors of user privacy.

These companies include Apple, Signal, Mozilla, WhatsApp, DuckDuckGo, Credo Mobile, and several others. They all make explicit promises to users about not selling data or giving it to third parties that don’t need it, along with sometimes refusing to store any user data not fundamentally needed for corporate purposes. Signal, the secure messaging app, takes user privacy so seriously that the company cannot read users’ end-to-end encrypted messages to one another.

While many of these companies are household names, a smaller company is putting privacy front and center, and it’s doing it for a much-needed field—DNA testing.

Helix DNA not only tests people’s genetic data, but it also directs them to several partners who offer services that utilize DNA testing, such as The Mayo Clinic and National Geographic. Because Helix serves as a sort of hub for DNA testing services, and because it works so closely with so many companies and organizations that handle genetic data, it decided it was in the right position to set the tone for privacy, said Helix senior director of policy and clinical affairs Elissa Levin.

“It is incumbent on us to set the industry standards on privacy,” Levin said.

Last year, Helix worked with several other companies—including 23andMe, Ancestry, MyHeritage, and Habit—to release a set of industry “best practices,” providing guidance on how DNA testing companies should collect, store, share, and respect user data.

Among the best practices are several privacy-forward ideas not required by law, including the right for users to access, correct, and delete their data from company databases. Also included is a request to ban sharing any genetic data with third parties like employers and insurance companies. And, amidst recent headlines about captured serial killers and broad FBI access to genetic data, the best practices suggest that companies, when possible, notify individuals about government requests for their data.

Helix itself does not sell any user data, and it requires express user consent for any data sharing with third parties. Helix also brought in privacy executive and current head of data policy at the World Economic Forum Anne Toth to advise on its privacy practices before even launching, Levin said.

As to whether consumers appreciate having their privacy protected, Levin said the proof is not so much in what consumers say, but rather in what they don’t say.

“The best way to gauge that is in looking at the fact that we have not gotten negative feedback from users or concerns about our privacy practices,” Levin said. She said that any time a company is in the news for data misuse, there is never a large uptick in users reflexively walking away, even though Helix allows users to remove themselves from the platform.

Consumer privacy is the future

Online privacy matters, both to users and to companies. It should matter to lawmakers, too, but Congress only began to take substantial interest in the topic in the past year.

Until the US has a comprehensive data privacy law, consumers will find a way to protect themselves, legal framework or not. Companies should be smart and not get left behind. Not only is protecting user privacy the right thing to do—it’s the smart thing to do.

The post Consumers have few legal options for protecting privacy appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Of hoodies and headphones: a spotlight on risks surrounding audio output devices

Malwarebytes - Mon, 04/22/2019 - 18:15

More than a decade ago, cardiologists from the Beth Israel Medical Center in Boston presented their findings at the American Heart Association (AHA) Scientific Sessions 2008 about MP3 headphones causing disruptions with heart devices—such as the pacemaker and the implantable cardioverter defibrillator (ICD)—when the headphones were placed on their chests, directly over their devices’ location.

This interference can range from preventing a defibrillator from detecting abnormal heart rhythms, to deactivating the defibrillator temporarily (thus stopping it from delivering a life-saving shock), to forcing a pacemaker to send signals to the heart (making it beat regardless of the patient’s current rhythm), to fully reprogramming the heart device.

Experts named neodymium magnets, which are common in most headphones, as the culprit behind these potentially life-threatening disruptions. Doctors have repeatedly warned pacemaker and defibrillator patients about magnets and other devices that could accidentally interfere with their implants, but the warnings seem to have fallen on deaf ears.

Headphones, earphones, and headsets were never designed to interfere with heart devices—yet, interfere they did. While the interference was accidental, the curious among us may start to wonder: Can headphones be intentionally messed with to harm their users? What else can headphones do that they weren’t supposed to? Thankfully, the answer to the former is, “Not with life-endangering consequences.” However, audio output devices, including headphones, can pose a security and privacy risk to users, especially when abused by smart people with ill-intent.

Headphones, like webcams, are now suspect

It’s not just the webcam you should mind and secure. For years, researchers have been looking for and poking holes in our audio output devices, in the name of security and privacy. While the potential risks of headphones may be a new subject for our readers, the solutions for securing them are (thankfully) practical and familiar.

In the next few sections, we’ll cover various potential risks and vulnerabilities of headphones and other audio output devices, as well as any tech related to them—including the software that comes with some headphone sets.

From headphones to microphone to risk

YouTube houses a trove of videos on how one can turn their headphones and even ear buds into a microphone. This is possible because headphones and microphones share essentially the same construction; a speaker driver can act as a microphone in reverse. That makes it easy for anyone to MacGyver a microphone if all they have is a pair of headphones.

Exactly how do users transform their headphones into microphones? By physically plugging the headphone or earphone jack into the audio line-in port. Unfortunately, headphones aren’t optimized to be microphones and vice versa, which means the quality won’t be the same.

But can headphones used as a makeshift microphone be a risk to your privacy? Indeed they can, albeit a minor one. If you put one speaker really close to your mouth while pouring your heart out, vulnerabilities in the headphones can enable threat actors to record whatever it is you’re spouting to the mirror or to a room full of tipsy friends.

Spying without spyware

Improvising a microphone with headphones is not the only way to put oneself at risk. As this CNET video shows, the software that ships with headphones can also be abused to turn them into a microphone, opening them up to attack as well.

Researchers at Ben Gurion University (BGU) in Israel found a way to automate the physical task of switching the output to the input, and improve the headphones’ ability to capture sounds clearly from across the room in the process.

They did this by introducing a proof-of-concept malware they called SPEAKE(a)R to a Realtek audio sound card, which quietly re-tasked the output channel to an input channel of a headphone set connected to a PC or laptop, and recorded any sounds or conversations happening in the room. You can watch the video recording of the demo in their lab below, or read their paper on the subject here [PDF]:

The SPEAKE(a)R lab demo

In their tests, the researchers used a pair of Sennheiser headphones. This probably explains the clarity of the recorded sound even from 20 feet away. We suspect the sound quality depends on the quality of the headphones, as Sennheiser is one of a handful of brands known for high-fidelity headphones.

The only way to make the SPEAKE(a)R malware useless is to not physically attach the headphones to an affected system.

When headphone software opens systems to MITM attacks

Speaking of Sennheiser, the company found itself in security hot water after researchers at Secorvo found a vulnerability not in their headphones, but in their headphone software: HeadSetup.

According to Secorvo’s 16-page report [PDF], this flaw can affect users of both Windows and macOS systems who are using or have used the headphones software. The flaw stems from the way the software creates an encrypted Web Socket (a communications protocol) with the browser: It installs a self-signed TLS certificate in the OS’s Trusted Root CA certificate store (for Windows) and the macOS Trust Store (for macOS).

Since the TLS certificate and its associated private key are identical across every installation of the headphone software, a threat actor can extract the key from any copy of HeadSetup and use it to forge certificates. Forged certificates make fake sites appear trusted, which can be used to perform Man-in-the-Middle (MITM) attacks against target users. Yikes.
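One practical response after a disclosure like this is to check installed certificates against published fingerprints of the compromised one. A hedged sketch of that idea (the certificate bytes and fingerprint below are made up for illustration, not the actual HeadSetup values):

```python
# Flag a certificate by comparing its SHA-256 fingerprint against a
# set of fingerprints known to be compromised. All values here are
# placeholders, not real certificate data.
import hashlib

KNOWN_COMPROMISED = {
    hashlib.sha256(b"shared-headset-root-cert").hexdigest(),  # placeholder
}

def is_compromised(cert_der: bytes) -> bool:
    """Return True if the certificate's fingerprint is on the bad list."""
    return hashlib.sha256(cert_der).hexdigest() in KNOWN_COMPROMISED

print(is_compromised(b"shared-headset-root-cert"))  # True
print(is_compromised(b"some-other-cert"))           # False
```

Because every install shared the same certificate, a single fingerprint is enough to identify the rogue root on any affected machine.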

Sennheiser users can update the HeadSetup software to the latest version to protect themselves from future attacks.

Exploited USB headphone port in Nexus 9 can lead to data exfiltration

Aleph Security researchers, inspired by the work of Michael Ossmann and Kyle Osborn on multiplexed wired attack surfaces [PDF], experimented on and later discovered that the headphone jack of the Nexus 9 can be used to access and interact with its FIQ (Fast Interrupt Request) Debugger. The Debugger is a developer tool that is shipped with Google Nexus devices. The researchers were able to access it using a Universal Asynchronous Receiver/Transmitter (UART) debug cable that they built themselves.

More unfortunate still, the FIQ Debugger for the Nexus 9 could respond to commands that those with ill intent may find especially useful. These include unauthorized access to sensitive information in the Android OS, such as the stack canary value, registers, and the process list, as well as access to functionality, such as the bootloader, that could force the device into a factory reset.

FIQ Debugger interface with a list of help commands (Source: Aleph Security)

Fortunately, Google has fully patched the flaws the researchers reported.

Risks surrounding Bluetooth headphones, earphones, and headsets

BlueBorne is the name used to describe an attack method that uses Bluetooth technology to infiltrate and control Bluetooth-enabled devices. Since many wireless headphones, earbuds, and headsets use Bluetooth, they are susceptible to this attack.

Discovered in 2017 by IoT security company Armis, BlueBorne consists of eight related zero-day vulnerabilities that can compromise major OS platforms. Affected devices can cause all sorts of security problems to their users, including malware propagation, espionage, and information theft, to name a few.

Anyone can eavesdrop on users via Bluetooth-enabled headsets, even if they’re not in discoverable mode. All one needs is the known default PIN code of the headset, which for most is “0000” (without quotation marks), an external antenna (to extend the Bluetooth range), and a device to control it remotely. SANS Senior Instructor Joshua Wright showed how this can be done in the video “Eavesdropping on Bluetooth Headsets.”
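A fixed default PIN shrinks the attacker’s search space to a handful of guesses. A hypothetical audit of paired devices makes the point (the device names and PINs below are invented):

```python
# Illustration of why fixed default PINs are weak: an attacker only
# needs to try a handful of well-known values. Names/PINs are made up.
COMMON_DEFAULT_PINS = ["0000", "1234", "8888"]

def audit_pin(pin: str) -> bool:
    """Return True if the PIN is a well-known default an attacker would try first."""
    return pin in COMMON_DEFAULT_PINS

paired_devices = {"kitchen-headset": "0000", "office-earbuds": "7391"}
weak = [name for name, pin in paired_devices.items() if audit_pin(pin)]
print(weak)  # ['kitchen-headset']
```

Many headsets don’t let users change the PIN at all, which is why the other mitigations below matter.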

Users can avoid falling victim to BlueBorne attacks and eavesdropping by ensuring that their device’s firmware is up to date and turning off their device’s Bluetooth when not in use.

From audiophile to…paranoiac?

Covering your laptop’s built-in webcam is a common and effective security practice to deter potential voyeurs from clandestinely watching you. This is also why users are advised to disconnect external cameras from desktop computers when not in use.

In terms of headphones, headsets, and earphones, a different set of approaches is needed. While securing webcams is easy, securing audio inputs is not. In fact, putting tape over a laptop’s microphone input—even a thick piece doubled up à la Mark Zuckerberg—simply wouldn’t work. Securing audio inputs takes knowledge of how your device’s audio technology works, a bit of patience, and, in extreme cases, destroying a good pair of ear plugs.

If you’re worried that your headphones, earphones, or headset could be used to invade your privacy, you don’t have to go to extremes. Applying basic cybersecurity hygiene to how you use your audio listening devices, such as updating all software and hardware, including firmware and the apps you use with the device, is a good place to start.

But if you absolutely and undoubtedly don’t want your headphones snooping on you in any way, here’s a simple, low-cost way of doing it: disconnect them from your computing or mobile device.

Happy, and safe, listening!


The post Of hoodies and headphones: a spotlight on risks surrounding audio output devices appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (April 15 – 21)

Malwarebytes - Mon, 04/22/2019 - 15:47

Last week, Malwarebytes Labs revealed multiple giveaway online scam campaigns banking on the popularity (and generosity) of Ellen DeGeneres, weighed in on the hack that compromised legacy Microsoft email service accounts like Hotmail and MSN, explained what “like-farming” means and how to spot it on social media, and spotlighted the uncharacteristic executable file formats one of our researchers presented at the SAS conference.

We also exposed persistent phishing campaigns that target Electrum wallet users to defraud them of Bitcoins, and explained how malware can pose a physical threat to those inside industrial plants and to residents nearby.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (April 15 – 21) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Ellen DeGeneres giveaway scam spreading on social media

Malwarebytes - Mon, 04/15/2019 - 16:14

Scammers are pushing multiple fake Facebook profiles of Ellen DeGeneres, popular US TV show host and producer, with the goal of tricking people into jumping through a few money-making hoops. This isn’t a sophisticated scam. It isn’t hacking the Gibson. It won’t be the focus of a cutting edge infosec talk. However, it’s certainly doing some damage—up to a point.

This scam is a victim of its own ambition.

What are they doing?

The profiles all have one main promotion point, claiming that Ellen has a competition on-the-go, and people entering will be fortunate enough to win all manner of cool prizes. One profile touts pictures of Ellen standing next to a car; in another, she holds aloft a giant VISA card. Many of the fakes push genuine video clips of the TV host talking about charity drives to add a little more credibility to their fakeout.

The scammers somehow managed to make clips of Ellen talking about donation efforts from viewers sound like she’s giving things away. The illusion falls apart with a little bit of thought, but as with most scams, the allure of something for nothing proves too good to resist.

What do potential victims have to do?

Some of the pages deviate from the template a little, but for the most part, the thing that gets this scam moving is the below text. It’s your standard plea to overshare the bogus offer to friends, family, and other contacts across the social network:


Surprise in the next 24 hours, I will randomly select people on Facebook, everyone who *shares* will receive a gift card, cash, and a big winner can win a car & house “Share now” don’t miss! We are watching!!! I will choose 500 lucky people, $5,000,000 each only follows instructions

Step 1- Love it

Step 2- Share

Step 3- Comment on “DONE”

I’ve shared the scam, what next?

Good question, potential fake Ellen giveaway victim.

What happens next is you’re directed to the comments section of the various posts floating around. You’ll then see one of half a dozen or so messages, roughly along the same lines:

“Hi all, you must register your name by downloading my movie click here and your name will automatically be registered”


Downloading…your movie?

Well, this took a weird turn. A few of the links lead to a blogspot page touting “Ellen Degeneres givaways 2019.”


“To become a winner, by downloading one movie, you have been registered as a winner”

Uh-huh. Weirdly, the site also claims to offer up John Wick 3, Hellboy, and Shazam, which don’t feel very Ellen-ish. Speaking of not very Ellen-ish, one of the other sites offers up those other well-known Ellen Degeneres classics: Glass and Escape Room.


Yet another site, which appears to have fallen out of a late-1990s design wormhole, sends you elsewhere when clicking the register button.


Where to next?

All of these blogs send clickers to the kind of movie sign-up portal we’ve been seeing online for some time. Suffice to say, we won’t go over old ground, but you are absolutely not going to win any Ellen competitions by registering on any of the below sites. At best, you’ll end up with a one-off membership fee or a rolling subscription.

That’s quite a scam daisy-chain

It is! It’s such a weirdly specific target, and so poorly thought out. Are the core demographic of Ellen fans really going to start with a cookie-cutter chain letter spam missive on Facebook, get caught up in a maze of confusing “Ellen starred in Batman Returns, you know” blogger pages, before ending up on a variety of utterly unrelated “sign up to watch this movie” portals—and then actually sign up?

Generally, most scams that have a movie sign-up site as a destination are a lot more straightforward than this: one click, BAM. Done. Even when these scams cross into strange realms, such as the fake John Wick ebooks from February, they tend to settle on a simpler process, which makes it easier to ensnare users.

This scam has more twists and turns than Ellen popping up unannounced at the end of Usual Suspects. If we had to guess, we’d say “strong opening performance, closely followed by a viewing figures nosedive.”

A captive audience?

From a cursory glance at stats available for the blogspot websites via the Bitly links, this theory would appear to be borne out. There’s a lot of sharing and commenting apparently taking place on Facebook itself, but in terms of translating to actual movie spam page clickthroughs?


Not so much. Only one of the three sites has anything approaching a regular flow of traffic, and those are small numbers. The second site has about 1,400 clicks, but that’s spread across two spikes in February and April. The third site has a grand total of 48 clicks at time of writing.

When the daisy chain snaps

Someone had a clever idea here: focus a scam around a celebrity you wouldn’t perhaps think of being the bait, and wrap it across multiple social media profiles. In theory, it could have been a winner for the individuals behind it. However, all inventiveness began and ended with the inclusion of Ellen. In the same way innovative online fakeouts gave way to endless, dreary years of “here’s a survey scam,” those seem to have been replaced by “here’s a movie sign-up scam” instead.

What you tend to see now is the movie sign-up scams jammed into almost every social engineering con trick around. They are—just like Ellen playing Agent Smith in The Matrix—inevitable.

Cancelling the show

Ultimately, then, this is a good example of a low-level scam gone utterly off the rails. Overloading something like this with needless complexity and multiple steps sounds cool on paper, but what this actually does is help potential victims steer clear. When they get bored, or confused, or drift off, that’s bad news for the scammers, and great news for everyone else.

If you’re behind this, please: Keep up the terrible work.

The post Ellen DeGeneres giveaway scam spreading on social media appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (April 8 – 14)

Malwarebytes - Mon, 04/15/2019 - 14:42

Last week on Labs, we said hello to Baldr, a new stealer on the market, we wondered who is managing the security of medical management apps, discussed the different perceptions of personal information, and we looked at fake Instagram assistance apps found on Google Play that are stealing passwords.

Other cybersecurity news
  • German pharmaceuticals giant Bayer says it has been hit by malware, possibly from China, but that none of its intellectual property has been accessed. (Source: The Register)
  • Canadian police last week raided the residence of a Toronto software developer behind “Orcus RAT,” a product that has been used in countless malware attacks. (Source: Krebs on Security)
  • In response to concerns raised by the European Commission, Facebook has agreed to update its terms and conditions in the EU to make it clear to users how their personal data is used. (Source: BetaNews)
  • Three vulnerabilities have been discovered in the Verizon Fios Quantum Gateway, a very popular router which, when exploited together, could give an attacker complete control of a victim’s network. (Source: ThreatPost)
  • New variants of the sextortion scams are now attaching password-protected zip files that contain alleged proof that the sender has a video recording of the recipient. (Source: BleepingComputer)
  • Chamois, the botnet you probably never heard about before, is losing ground again after having controlled some 20 million devices at its peak. (Source: Duo Security)
  • A global Amazon team listens to what we tell Alexa and reviews audio clips in an effort to help the voice-activated assistant respond to commands. (Source: Bloomberg)
  • An attacker gained access to the servers hosting The intruder potentially had access to unencrypted message data, password hashes, and access tokens. (Source:
  • US-Cert issued a warning that Multiple Virtual Private Network (VPN) applications store the authentication and/or session cookies insecurely in memory and/or log files. (Source:
  • Fake news peddlers have devised a cunning new way to prevent their posts from getting removed from social media. Instead of linking to fake news, bad actors are now linking to posts promoting older news articles that may no longer be accurate, but won’t be reported as fake since they were once legitimate news. (Source: ThreatPost)

Stay safe, everyone!

The post A week in security (April 8 – 14) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Fake Instagram assistance apps found on Google Play are stealing passwords

Malwarebytes - Fri, 04/12/2019 - 17:40

We all want those Instagram likes and followers. Many apps on Google Play claim they can assist you with that effort. But what if the app that’s supposed to be helping you is also stealing your username and password? 

As a matter of fact, that’s exactly what we found in three fake Instagram assistance apps still available on Google Play at the time of this writing. Moreover, these fake apps are targeting Iranian users. Malwarebytes already detects the malicious apps as Android/Trojan.Spy.FakeInsta.

What’s in a like?

As the psychology of social media reveals how addicting it can be to receive likes and even better, followers, on platforms such as Instagram, users often look for shortcuts or other ways to game the system in order to get that rush of dopamine. 

That’s where Instagram assistance apps come into play—Google Play, that is! Apps that claim to boost your likes and increase your followers are an attractive notion, especially when building a thriving Instagram account organically can take months or even years. Malware authors are great opportunists, and there is certainly a lot of opportunity to exploit when it comes to creating account-stealing fake apps.

InstaStolen account

Let’s use an app named Followkade as a case study of this new-found Instagram credential stealer.

App Name: Followkade

Package Name: com.followkade.insta

Installs: 50,000+

Rating: 4.0 stars from 6,999 reviews

As you can see, it’s a highly-rated app with thousands of downloads and reviews. Customers on Google Play looking to determine the app’s legitimacy would be none the wiser.

After install, the app opens to a splash page, and then a page asking for Instagram credentials.

I used the following to log in:

Username: test_username

Password: test_password

After opening a network scanner, I pressed Login. Along with normal login traffic to Instagram, there was some additional network traffic going on here. Take a look at the screenshots below with proof of the stolen credentials.

There it is in plain text: my test username and password being sent to a known malicious website.
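
One simple way to reproduce this kind of check yourself is to log in with planted “canary” credentials, as done above, and then search the captured traffic for the plaintext password—it should never leave the device unencrypted. A minimal sketch (the captured requests and the `evil.example` domain are hypothetical stand-ins for real network-scanner output):

```python
# Scan captured HTTP request bodies for a planted "canary" password.
# Legitimate Instagram logins encrypt the password field, so the plaintext
# value appearing in any request body indicates credential theft.

CANARY_PASS = "test_password"

def find_credential_leaks(requests):
    """Return the destination hosts whose request bodies contain the canary password."""
    return [host for host, body in requests if CANARY_PASS in body]

captured = [
    ("i.instagram.com", "username=test_username&enc_password=..."),  # normal login traffic
    ("evil.example", "u=test_username&p=test_password"),             # plaintext exfiltration
]

print(find_credential_leaks(captured))  # ['evil.example']
```

The same idea scales to proxy logs or pcap exports: plant credentials you never use anywhere else, and any appearance of them outside the expected endpoint is proof of a leak.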

Insta targets

There are many apps that pose as so-called helpers piggybacking off the social media craze. Some of them are legitimate apps that might be able to help users boost likes and followers as advertised. However, malware authors can too easily mimic these aboveboard apps, and they bank on users’ desire to find fast validation through social media acceptance.  

The other two apps that we found, LikeBegir and Aseman Security, also target Iranian users, as does Followkade. LikeBegir claims it will increase likes, help users buy cheap coins, and provide daily gifts. Aseman Security, ironically, boasts that it will boost security for your Instagram page and prevent it from being hacked.

I would imagine there aren’t a lot of Iranian Instagram assistance apps on Google Play, so it’s an easy target for malware authors of that region. In these cases, even picking a highly-rated app with many installs isn’t enough to keep you safe.

Acknowledgement and tips

Many thanks to Malwarebytes Forum patron AmirGooran for tipping us off about the fake apps. 

If you’re looking to boost your Instagram community, it’s a lot safer to do it the old-fashioned way: by creating quality content with well-edited, creative photos. Take the time to write engaging captions with appropriate hashtags to attract others. And build your community by following and interacting with other top content creators you truly appreciate—not just using the follow for a follow model.

And if you’re interested in securing your Instagram account, once again, the old-fashioned ways win out. Be sure to use strong password credentials, which means long passwords that don’t have easily guessable information such as birthdays or family names, and nothing that has been used for another account. We typically recommend folks use a password manager so they needn’t worry about remembering 27 different passwords. In addition, avoid using the Insta Messages function for communicating any confidential, important information, because it has no end-to-end encryption option whatsoever.
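
For the “long, unguessable password” advice above, Python’s standard `secrets` module is one straightforward way to generate one—a minimal sketch, useful if you want to seed a password manager entry yourself:

```python
import secrets
import string

def make_password(length=20):
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(len(make_password()))  # 20
```

Because `secrets` draws from the operating system’s cryptographically secure randomness source, the result contains no guessable structure—no birthdays, no family names, nothing reused from another account.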

Read more: How do I secure my social media profile?

Like anything in life, building a respectable social media following takes work. Avoid the shortcuts: Not only do they fail at doing the things they promise—they may also take away much more than you would receive. After all, are fake likes really worth getting your personal information stolen? Stay safe out there!

The post Fake Instagram assistance apps found on Google Play are stealing passwords appeared first on Malwarebytes Labs.

Categories: Techie Feeds

What is personal information? In legal terms, it depends

Malwarebytes - Thu, 04/11/2019 - 17:03

In early March, cybersecurity professionals around the world filled the San Francisco Moscone Convention Center’s sprawling exhibition halls to discuss and learn about everything infosec, from public key encryption to incident response, and from machine learning to domestic abuse.

It was RSA Conference 2019, and Malwarebytes showed up to attend and present. Our Wednesday afternoon session—“One person can change the world—the story behind GDPR”—explored the European Union’s new, sweeping data privacy law which, above all, protects “personal data.”

But the law’s broad language—and finite, severe penalties—left audience members with a lingering question: What exactly is personal data?

The answer: It depends.

Personal data, as defined by the EU’s General Data Protection Regulation, is not the same as “personally identifiable information,” as defined by US data protection and cybersecurity laws, or even “personal information” as defined by California’s recently-signed data privacy law. Further, in the US, data protection laws and cybersecurity laws serve separate purposes and, likewise, bestow slightly separate definitions to personal data.

Complicating the matter is the public’s instinctual approach to personal information, personal data, and online privacy. For everyday individuals, personal information can mean anything from telephone numbers to passport information to postal codes—legal definitions be damned.

Today, in the latest blog for our cybersecurity and data privacy series, we discuss the myriad conditions and legal regimes that combine to form a broad understanding of personal information.

Companies should not overthink this. Instead, data privacy lawyers said businesses should pay attention to what information they collect and where they operate to best understand personal data protection and compliance.

As Duane Morris LLP intellectual property and cyber law partner Michelle Donovan said:

“What it comes down to, is, it doesn’t matter what the rules are in China if you’re not doing business in China. Companies need to figure out what jurisdictions apply, what information are they collecting, where do their data subjects reside, and based on that, figure out what law applies.”

What law applies?

The personal information that companies need to protect changes from law to law. However, even though global data protection laws define personal information in diverse ways, the definitions themselves are not important to every business.

For instance, a small company in California that has no physical presence in the European Union and makes no concerted efforts to market to EU residents does not have to worry about GDPR. Similarly, a Japanese startup that does not collect any Californians’ data does not need to worry about that state’s recently-signed data privacy law. And any company outside the US that does not collect any US personal data should not have to endure the headaches of complying with 50 individual state data breach notification laws.

Baker & McKenzie LLP of counsel Vincent Schroeder, who advises companies on privacy, data protection, information technology, and e-commerce law, said that the various rules that determine which laws apply to which businesses can be broken down into three basic categories: territorial rules, personal rules, and substantive rules.

Territorial rules are simple—they determine legal compliance based on a company’s presence in a country, state, or region. For instance, GDPR applies to companies that physically operate in any of the EU’s 28 member-states, along with companies that directly market and offer their products to EU citizens. That second rule of direct marketing is similar to another data privacy law in Japan, which applies to any company that specifically offers its products to Japanese residents.

“That’s the ‘marketplace rule,’ they call it,” Schroeder said. “If you’re doing business in that market, consciously, then you’re affecting the rights of the individuals there, so you need to adhere to the local regulatory law.” 

Substantive rules, on the other hand, determine compliance based on a company’s characteristics. For example, the newly-passed California Consumer Privacy Act applies to companies that meet any single one of the following three criteria: pull in annual revenue of $25 million, derive 50 percent or more of that annual revenue from selling consumers’ personal information, or buy, receive, sell, or share the personal information of 50,000 or more consumers, households, or devices.
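
Because the CCPA test is “any single one” of those three criteria, it can be sketched as a simple disjunction. This is an illustrative reading of the thresholds quoted above, not legal advice, and the function name and parameters are made up:

```python
# Sketch of the CCPA applicability test: a business falls under the law if it
# meets ANY one of the three statutory criteria described in the text.

def ccpa_applies(annual_revenue, revenue_from_selling_pi, consumers_pi_count):
    meets_revenue = annual_revenue >= 25_000_000
    meets_pi_revenue = (
        annual_revenue > 0
        and revenue_from_selling_pi / annual_revenue >= 0.5
    )
    meets_volume = consumers_pi_count >= 50_000
    return meets_revenue or meets_pi_revenue or meets_volume

print(ccpa_applies(1_000_000, 600_000, 10_000))   # True: 60% of revenue from selling PI
print(ccpa_applies(1_000_000, 100_000, 10_000))   # False: no threshold met
```

Note that a small company can still be covered purely on volume: handling the personal information of 50,000 consumers, households, or devices triggers the law regardless of revenue.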

Businesses that want to know what personal information to legally protect should look first to which laws apply. Only then should they move forward, because “personal information” is never just one thing, Schroeder said.

“It’s an interplay of different definitions of the territorial, personal, and substantive scopes of application, and for definitions of personal data,” Schroeder said.

Personal information—what’s included?

The meaning of personal information changes depending on who you ask and which law you read. Below, we focus on five important interpretations. What does personal information mean to the public? What does it mean according to GDPR? And what does it mean according to three state laws in California—the country’s legislative vanguard in protecting its residents’ online privacy and personal data.

The public

Let’s be clear: Any business concerned with legal obligations to protect personal information should not start a compliance journey by, say, running an employee survey on Slack and getting personal opinions.

That said, public opinions on personal data are important, as they can influence lawmakers into drafting new legislation to better protect online privacy.

Jovi Umawing, senior content writer for Malwarebytes Labs who recently compiled nearly 4,000 respondents’ opinions on online privacy, said that personal information is anything that can define one person from another.

“Personal information for me is relevant data about a person that makes them unique or stand out,” Umawing wrote. “It’s something intangible that one owns or possesses that (when combined with other information) points back to the person with very high or unquestionable accuracy.”

Pieter Arntz, malware intelligence researcher for Malwarebytes, provided a similar view. He said he considers “everything that can be used to identify me or find more specific information about me as personal information.” That includes addresses, phone numbers, Social Security numbers, driver’s license info, passport info, and, “also things like the postal code,” which, for people who live in very small cities, can be revealing, Arntz said.

Interestingly, some of these definitions overlap with some of the most popular data privacy laws today.


The General Data Protection Regulation

In 2018, the General Data Protection Regulation took effect, granting EU citizens new rights to access, transport, and delete personal data. In 2019, companies are still figuring out what that personal data encompasses.

The text of the law offers little clarity, instead providing this ocean-wide ideology: “Personal data should be as broadly interpreted as possible.”

According to GDPR, the personal data that companies must protect includes any information that can “directly or indirectly” identify a person—or subject—to whom the data belongs or describes. Included are names, identification numbers, location data, online identifiers like screen names or account names, and even characteristics that describe the “physical, physiological, genetic, mental, commercial, cultural, or social identity of a person.”

That last piece could include things like an employee’s performance record, a patient’s medical diagnosis history, a user’s specific anarcho-libertarian political views, and even a person’s hair color and length, if it is enough to determine that person’s identity.

Donovan, the attorney from Duane Morris, said that GDPR’s definition could include just about any piece of information about a person that is not anonymized.

“Even if that information is not identifying [a person] by name, if it identifies by a number, and that number is known to be used to identify that person—either alone or in combination—it could still associate with that person,” Donovan said. “You should assume that if you have any data about an individual that is not anonymized when you get it, it’s likely going to be covered.”

The California Consumer Privacy Act

In June 2018, California became the first state in the nation to respond to frequent online privacy crises by passing a comprehensive, statewide data privacy law. The California Consumer Privacy Act, or CCPA, places new rules on companies that collect California residents’ personal data.

The law, which will go into effect in 2020, calls this type of data “personal information.”

“Personal information,” according to the CCPA, is “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”

What that includes in practice, however, is a broad array of data points, including a person’s real name, postal address, and online IP address, along with biometric information—like DNA and fingerprint data—and even their browsing history, education history, and what the law vaguely describes as “audio, electronic, visual, thermal, olfactory, or similar information.”

Aside from protecting several new data types, the CCPA also makes a major change to how Californians can assert their data privacy rights in court. For the first time ever, a statewide data privacy law details “statutory damages,” which are legislatively-set, monetary amounts that an individual can ask to recover when filing a private lawsuit against a company for allegedly violating the law. Under the CCPA, people who believe their data privacy rights were violated can sue a company and ask for up to $750.

This is a huge shift in data privacy law, Donovan said.

“For the first time, there’s a real privacy law with teeth,” Donovan said.

Previously, if individuals wanted to sue a company for a data breach, they needed to prove some type of economic loss when asking for monetary damages. If, say, a fraudulent credit card was created with stolen data, and then fraudulent charges were made on that card, monetary damages might be easy to figure out. But it’s rarely that simple.  

“Now, regardless of the monetary damage, you can get this statutory damage of $750 per incident,” Donovan said.

California’s data breach notification law and data protection law

If we stay in California but go back in time several years, we see the start of a trend—California has been the first state, more than once, to pass data protection legislation.

In 2002, California passed its data breach notification law. The first of its kind in the United States, the law forced companies to notify California residents about unauthorized access to their “personal information.”

The previous definitions of personal information and data that we’ve covered—GDPR’s broad, anything-goes approach, and CCPA’s inclusion of heretofore unimagined “olfactory,” smell-based personal data—do not apply here.

Instead, personal information in the 17-year-old law—which received an update five years ago—is defined as a combination of types of information. The necessary components include a Californian’s first and last name, or first initial and last name, paired up with things like their Social Security number, driver’s license number, and credit card number and corresponding security code, along with an individual’s email address and password.

So, if a company suffers a data breach of a California resident’s first and last name plus their Social Security number? That’s considered personal information. If a data breach compromises another California resident’s first initial, last name, and past medical insurance claims? Once again, that data is considered personal information, according to the law.
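
That “combination” definition lends itself to a simple check: a record only counts once a name is paired with at least one sensitive element. A minimal sketch of that reading—the field names are illustrative, not drawn from the statute:

```python
# Sketch of California's breach-notification "combination" definition:
# personal information = (first name or first initial) + last name,
# PLUS at least one sensitive data element.

SENSITIVE_FIELDS = {"ssn", "drivers_license", "card_number_with_cvv", "medical_info"}

def is_personal_information(record):
    """Return True if the breached record meets the combination definition."""
    has_name = bool(record.get("last_name")) and (
        bool(record.get("first_name")) or bool(record.get("first_initial"))
    )
    has_sensitive = any(record.get(f) for f in SENSITIVE_FIELDS)
    return has_name and has_sensitive

print(is_personal_information({"first_name": "A", "last_name": "B", "ssn": "..."}))  # True
print(is_personal_information({"first_name": "A", "last_name": "B"}))                # False
```

A Social Security number on its own, with no name attached, would not meet this particular definition—which is exactly the contrast with GDPR’s “as broad as possible” approach described earlier.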

In 2014, this definition carried somewhat over into California’s data protection law. That year, then-California governor Jerry Brown signed changes to the state’s civil code that created data protection requirements for any company that owns, licenses, or maintains the “personal information” of California residents.

According to Assembly Bill No. 1710, “personal information” is, once again, the combination of information that includes a first name and last name (or first initial and last name), plus a Social Security number, driver’s license number, credit card number and corresponding security number, and medical information and health information.

The definitions are not identical, though. California’s data protection law, unlike its data breach notification law, does not cover data collected by automated license plate readers, or ALPRs. ALPRs can indiscriminately—and sometimes disproportionately—capture the license plate numbers of any vehicles that cross into their field of vision.

Roughly one year later, California passed a law to strengthen protections of ALPR-collected data.

The takeaway

By now, it’s probably easier to define what personal information isn’t rather than what it is (obviously, there is a legal answer to that, too, but we’ll spare the details). These evolving definitions point to a changing legal landscape, where data is not protected solely because of its type, but because of its inherent importance to people’s privacy.

Just as there is no one-size-fits-all definition to personal information, there is no one-size-fits-all to personal data protection compliance. If a company finds itself wondering what personal data it should protect, may we suggest something we have done for every blog in this series: Ask a lawyer.

Join us again soon for the next blog in our series, in which we will discuss consumer protections for data breaches and online privacy invasions.  

The post What is personal information? In legal terms, it depends appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Who is managing the security of medical management apps?

Malwarebytes - Wed, 04/10/2019 - 15:00

One truth that is consistent across every sector—be it technology or education—is that software is vulnerable, which means that any device running software applications is also at risk. While virtually any application-running device could be compromised by an attacker, vulnerabilities in medical management apps pose a unique and more dangerous set of problems.

Now add to vulnerabilities the issue of data privacy, especially that of sensitive medical information, and you have a perfect storm.

In a recent report, Data sharing practices of medicines related apps and the mobile ecosystem: traffic, content, and network analysis, published by BMJ, researchers analyzed the top-rated Android apps for medicine management and found that 19 out of the 24 tested apps shared user data outside of the app.

Because medical records are such a lucrative data set, attackers often target the healthcare industry, seeking out and eventually finding the weakest link in the supply chain. That’s why it’s important for stakeholders to consider the broader implications of weaknesses in health and medical apps.

According to the US Food & Drug Administration (FDA), medical apps that pose risks to patient health and safety have been regulated since 1997. “While many mobile apps carry minimal risk, those that can pose a greater risk to patients will require FDA review.”

As medical management apps offer the convenience of care at home, some devices have become directly intertwined with patient care. While some apps may only offer benign image-processing services, others may include data on test results, appointments, drug refills, and more. This is why the FDA categorizes medical apps by risk.

What could go wrong?

Security concerns come not necessarily from the app itself, but from third parties that are creating the apps that interface with that data. “Developers relied on the services of infrastructure related third parties to securely store or process user data, thus the risks to privacy are lower. However, sharing with infrastructure related third parties represents additional attack surfaces in terms of cybersecurity,” the BMJ report said.

“Furthermore, the presence of trackers for advertising and analytics, uses additional data and processing time and could increase the app’s vulnerability to security breaches.”

Data that sits on any app or database can be compromised, but medical management apps are home to a trove of private information and different types of proprietary data, as well as whatever the healthcare provider has interfacing with that app, according to penetration tester, Mike Jones.

“From what I’ve experienced with medical management apps, the risks are through the roof because the apps are not under the same regulations as the Health Insurance Portability and Accountability Act (HIPAA). When you look at the amount of data that any kind of home health or medical service offers, if it is managed through an app, one of the biggest concerns is data leakage.”

Sharing and selling data might be a new reality in today’s digital, research-driven world, but it’s important to first strip the data of its context so that patient privacy is not interfered with. Yet, sharing and securing data don’t have to be mutually exclusive concepts, said Warren Poschman, senior solutions architect at comforte AG.

“Want to know what meds I’m taking or what procedures I’ve had so it can be cross referenced and insights gained? Absolutely! Want to know that it was me specifically that takes that medication or has had those procedures? Absolutely not! Regulatory bodies need to start ensuring that companies anonymize the data so that it can be safely used no matter where it travels to.”
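
One common building block for the de-identification Poschman describes is keyed pseudonymization: replace the patient identifier with an HMAC so records can still be cross-referenced without naming the patient. A minimal sketch under stated assumptions—the key and identifiers are made up, and real-world anonymization requires considerably more than this (re-identification risk from the remaining fields still has to be assessed):

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # hypothetical; store in a KMS, not in code

def pseudonymize(patient_id):
    """Map a patient ID to a stable token that can't be reversed without the key."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

records = [
    {"patient": "jane.doe", "med": "metformin"},
    {"patient": "jane.doe", "med": "lisinopril"},
]
tokens = {pseudonymize(r["patient"]) for r in records}
print(len(tokens))  # 1: both records still link to the same (unnamed) patient
```

Using a keyed HMAC rather than a plain hash matters here: patient identifiers are low-entropy, so an unkeyed hash could be reversed by brute force, while the HMAC stays opaque to anyone without the key.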

Risk extends beyond the medical data

Perhaps even more concerning than an attacker being able to access the data collected or stored on these apps is the reality that if a malicious actor tampers with them, patients can get the wrong medications or medications could be diverted to different places, Jones said.

In Hacking the Hospital, a two-year study that evaluated cybersecurity risks in hospitals, Independent Security Evaluators (ISE) found two different web applications through which an adversary could remotely “deploy attacks that target and compromise patient health. We demonstrated that a variety of deadly remote attacks were possible within these facilities,” the report said. That was in 2016.

Fast forward three years, and ISE executive partner Ted Harrington remains concerned about the risks medical management apps pose to patient safety.

“What is critically important is that these solutions ensure that the appropriate amount of medicine goes to the right patient.”

When it comes to patient safety, the healthcare industry has established practices of redundancies, but these practices have largely been influenced by regulations. Highly-regulated industries are motivated to make changes in order to be compliant, but compliance isn’t synonymous with security, Harrington said.

Though many medical apps are regulated by the FDA, medical management apps don’t fall under HIPAA regulations, and those established practices that ensure patient safety among the providers and staff aren’t usually extended to software.

Still, there are a variety of direct and indirect implications for those that are responsible for delivering care if medical apps are compromised in any way.

“The delivery of care relies heavily on technology, which needs to be accurate,” Harrington said. “If there were instances that demonstrated these solutions are inaccurate, that could undermine faith in technology, and that can negatively impact things like the speed at which professionals can deliver care. Speed is second only to accuracy in the delivery of care.”

Where do apps go from here?

It’s a question to which there is no single, clear answer. The complexities and speed of innovation have created formidable obstacles when it comes to the security of medical and health apps.

As technology advances, more developers are relying on artificial intelligence and machine learning in software, “deriving new and important insights from the vast amount of data generated during the delivery of health care every day. Medical device manufacturers are using these technologies to innovate their products to better assist health care providers and improve patient care,” according to the FDA.

These changes in technology also drive the evolution of regulations, which Jones said have to ensure security throughout the development lifecycle. The FDA is, in fact, “considering a total product lifecycle-based regulatory framework for these technologies that would allow for modifications to be made from real-world learning and adaptation, while still ensuring that the safety and effectiveness of the software as a medical device is maintained.”

Greater than good intentions

Without falling victim to fear, uncertainty, and doubt, there is reality to the belief that medical management apps can be the difference between life and death. To shift the focus from compliance to security, Harrington said, “We need to understand technology the way an attacker would understand it. How would a hacker exploit this technology? So, you start with building out a threat model.”

Not all hackers are financially motivated, which is why it’s also important to perform a security assessment that goes beyond running a scanner. “That’s ineffective,” said Harrington. “You need to go deeper, as deep as an attacker would.”

Increasingly, more security-minded professionals are advocating for developers to take more personal responsibility. I am the Cavalry, for example, recently published The Case for a Hippocratic Oath for Connected Medical Devices: Viewpoint in the Journal of Medical Internet Research (JMIR), in which the authors ask whether manufacturers and adopters of these connected technologies should be governed by the symbolic spirit of the Hippocratic Oath.

“The idea of holding developers responsible is in the right spirit,” Harrington said. After all, if a bridge collapses and an investigation finds that it was structurally deficient, contractors, inspectors, maintenance, and even the engineers who designed the bridge can be charged with negligence. Should not the same be true of those that build the technology that bridges the gap between medical professionals and patients?

The post Who is managing the security of medical management apps? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Say hello to Baldr, a new stealer on the market

Malwarebytes - Tue, 04/09/2019 - 15:00

By William Tsing, Vasilios Hioureas, and Jérôme Segura

Over the past few months, we have noticed increased activity and development of new stealers. Unlike many banking Trojans that wait for the victim to log into their bank’s website, stealers typically operate in grab-and-go mode. This means that upon infection, the malware will collect all the data it needs and exfiltrate it right away. Because such stealers are often non-resident (meaning they have no persistence mechanism), unless they are detected at the time of the attack, victims will be none the wiser that they have been compromised.

This type of malware is popular among criminals and covers a greater surface than more specialized bankers. On top of capturing browser history, stored passwords, and cookies, stealers will also look for files that may contain valuable data.

In this blog post, we will review the Baldr stealer, which first appeared on underground forums in January 2019 and was later seen in the wild by Microsoft in February.

Baldr on the market

Baldr is likely the work of three threat actors: Agressor for distribution, Overdot for sales and promotion, and LordOdin for development. Appearing first in January, Baldr quickly generated many positive reviews on most of the popular clearnet Russian hacking forums.

Previously associated with the Arkei stealer (seen below), Overdot posts a majority of advertisements across multiple message boards, provides customer service via Jabber, and addresses buyer complaints in the reputational system used by several boards.

Of interest is a forum post referencing Overdot’s previous work with Arkei, in which he claims that the developers of Baldr and Arkei are in contact and collaborate on occasion.

Unlike most products posted on clearnet boards, Baldr has a reputation for reliability, and it also offers relatively good communication with the team behind it.

LordOdin, also known as BaldrOdin, has a significantly lower profile in conjunction with Baldr, but will monitor and like posts surrounding it.

He primarily posts to differentiate Baldr from competitor products like Azorult, and vouches that Baldr is not simply a reskin of Arkei:

Agressor/Agri_MAN is the final player appearing in Baldr’s distribution:

Agri_MAN has a history of selling traffic on Russian hacking forums dating back roughly to 2011. In contrast to LordOdin and Overdot, he has a more checkered reputation, showing up on a blacklist for chargebacks, as well as getting called out for using sock puppet accounts to generate good reviews.

Using the alternate account Agressor, he currently maintains an automated shop to generate Baldr builds at service-shop[.]ml. Interestingly, Overdot makes reference to an automated installation bot that is not connected to them, and is generating complaints from customers:

This may indicate Agressor is an affiliate and not directly associated with Baldr development. At press time, Overdot and LordOdin appear to be the primary threat actors managing Baldr.


Distribution

In our analysis of Baldr, we collected a few different versions, indicating that the malware has short development cycles. The latest version analyzed for this post is version 2.2, announced March 20:

We captured Baldr via different distribution chains. One of the primary vectors is the use of Trojanized applications disguised as cracks or hack tools. For example, we saw a video posted to YouTube offering a program to generate free Bitcoins, but it was in fact the Baldr stealer in disguise.

We also caught Baldr via a drive-by campaign involving the Fallout exploit kit:

Technical analysis (Baldr 2.2)

Baldr’s high-level functionality is relatively straightforward, providing a small set of malicious capabilities in the version under analysis. There is nothing groundbreaking in what it tries to do on the user’s computer; where this threat differentiates itself is in its extremely complicated implementation of that logic.

Typically, it is quite apparent when a malware is thrown together for a quick buck vs. when it is skillfully crafted for a long-running campaign. Baldr sits firmly in the latter category—it is not the work of a script kiddie. Whether we are talking about its packer usage, payload code structure, or even its backend C2 and distribution, it’s clear Baldr’s authors spent a lot of time developing this particular threat.

Functionality overview

Baldr’s main functionality can be broken down into five steps, which are completed in chronological order.

Step 1: User profiling

Baldr starts off by gathering a list of user profiling data. Everything from the user account name to disk space and OS type is enumerated for exfiltration.
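As a rough illustration, the kind of host profile a stealer builds can be sketched in a few lines of Python (Baldr itself is a .NET binary; the field names here are our own):

```python
import getpass
import platform
import shutil

def profile_host():
    """Gather the sort of profiling data a stealer enumerates up front.
    Illustrative sketch only; Baldr is written in C#, not Python."""
    total, used, free = shutil.disk_usage("/")
    return {
        "user": getpass.getuser(),                        # user account name
        "os": f"{platform.system()} {platform.release()}",  # OS type/version
        "arch": platform.machine(),
        "disk_free_gb": free // 2**30,                    # disk space
    }

profile = profile_host()
```

Everything returned here is available to any unprivileged process, which is why this stage needs no exploits at all.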

Step 2: Sensitive data exfiltration

Next, Baldr begins cycling through all files and folders within key locations of the victim computer. Specifically, it looks in the user AppData and temp folders for information related to sensitive data. Below is a list of key locations and application data it searches:

AppData\Local\Google\Chrome\User Data\Default
AppData\Local\Google\Chrome\User Data\Default\Login Data
AppData\Local\Google\Chrome\User Data\Default\Cookies
AppData\Local\Google\Chrome\User Data\Default\Web Data
AppData\Local\Google\Chrome\User Data\Default\History
AppData\Roaming\Exodus\exodus.wallet
AppData\Roaming\Ethereum\keystore
AppData\Local\ProtonVPN
Wallets\Jaxx Liberty\
NordVPN\
Telegram
Jabber
TotalCommander
Ghisler

Many of these data files range from simple SQLite databases to various custom formats. The authors have detailed knowledge of these target formats, as only the key data from these files is extracted and loaded into a series of arrays. After all the targeted data has been parsed and prepared, the malware continues on to its next functionality set.
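Chrome’s Login Data file, for instance, is a plain SQLite database. Cherry-picking just the valuable columns looks roughly like this; we build a mock file using Chrome-like table and column names (in real files, the password_value blobs are encrypted):

```python
import os
import sqlite3
import tempfile

# Build a mock "Login Data" file with a Chrome-like schema (sketch only).
path = os.path.join(tempfile.mkdtemp(), "Login Data")
db = sqlite3.connect(path)
db.execute("CREATE TABLE logins (origin_url TEXT, username_value TEXT, password_value BLOB)")
db.execute("INSERT INTO logins VALUES ('https://example.com', 'alice', x'00')")
db.commit()
db.close()

def extract_logins(login_data_path):
    """Pull only the key columns, the way a stealer cherry-picks fields."""
    con = sqlite3.connect(login_data_path)
    rows = con.execute("SELECT origin_url, username_value FROM logins").fetchall()
    con.close()
    return rows

loot = extract_logins(path)  # [('https://example.com', 'alice')]
```

No browser APIs are involved; the stealer simply opens the database file directly from AppData.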

Step 3: ShotGun file grabbing

DOC, DOCX, LOG, and TXT files are the targets in this stage. Baldr begins in the Documents and Desktop directories and recursively iterates all subdirectories. When it comes across a file with any of the above extensions, it simply grabs the entire file’s contents.
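The pattern is a plain recursive walk. A minimal Python sketch, with the extension list taken from the behavior above and everything else our own:

```python
import os
import tempfile
from pathlib import Path

TARGET_EXTS = {".doc", ".docx", ".log", ".txt"}

def grab_files(root):
    """Recursively collect the full contents of files with targeted
    extensions, mirroring Baldr's shotgun approach (illustrative sketch)."""
    loot = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() in TARGET_EXTS:
                full = os.path.join(dirpath, name)
                with open(full, "rb") as fh:
                    loot[full] = fh.read()
    return loot

# Demo against a throwaway directory tree
root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "sub" / "notes.txt").write_text("secret")
(root / "sub" / "image.png").write_text("skipped")
loot = grab_files(root)
```

Note there is no size filtering or content inspection in this stage; if the extension matches, the whole file is taken.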

Step 4: ScreenCap

In this last data-gathering step, Baldr gives the controller the option of grabbing a screenshot of the user’s computer.

Step 5: Network exfiltration

After all of this data has been loaded into organized and categorized arrays/lists, Baldr flattens the arrays and prepares them for sending through the network.
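The flattening step itself is trivial; conceptually it amounts to something like this (the separator characters are assumptions for illustration, not Baldr’s actual format):

```python
def flatten(records, field_sep="|", record_sep="\n"):
    """Flatten categorized lists into one string ready for exfiltration.
    Sketch of the pattern only; separators here are invented."""
    return record_sep.join(field_sep.join(map(str, rec)) for rec in records)

profile = [("user", "alice"), ("os", "Windows 7 x64")]
blob = flatten(profile)
# blob -> 'user|alice\nos|Windows 7 x64'
```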

One interesting note is that there is no attempt to make the data transfer more inconspicuous. On our analysis machine, we purposely provided an extreme number of files for Baldr to grab, wondering whether the malware would slowly exfiltrate this large amount of data or just blast it back to the C2.

The result was one large and obvious network transfer. The malware does not have built-in functionality to remain resident on the victim’s machine. It has already harvested the data it desires and does not care to re-infect the same machine. In addition, there is no spreading mechanism in the code, so in a corporate environment, each employee would need to be manually targeted with a unique attempt.

Packer code level analysis

We will begin with the payload obfuscation and packer usage. This version of Baldr starts off as an AutoIt script built into an exe. Using a freely available AutoIt decompiler, we got to the first stage of the packer below.

As you can see, this code is heavily obfuscated. The first two functions are the main workhorse of that obfuscation. What is going on here is simply reordering of the provided string, according to the indexes passed in as the second parameter. This, however, does not pose much of a problem as we can easily extract the strings generated by simply modifying this script to ConsoleWrite out the deobfuscated strings before returning:
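Before moving on, here is a minimal Python sketch of what that reordering helper does; the scrambled string and index list below are invented for illustration:

```python
def reorder(scrambled, indexes):
    """Rebuild a string whose characters were stored out of order.
    The second parameter says where each character belongs, mimicking
    the behavior of the obfuscated AutoIt helper."""
    out = [""] * len(scrambled)
    for src, dst in enumerate(indexes):
        out[dst] = scrambled[src]
    return "".join(out)

# 'xEeuetc' with these indexes reorders to 'Execute'
print(reorder("xEeuetc", [1, 0, 2, 4, 6, 5, 3]))  # prints 'Execute'
```

Because the transformation is a pure permutation, dumping every deobfuscated string is as simple as logging each call’s return value, which is exactly the ConsoleWrite trick described above.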

The resulting strings extracted are below:

Execute BinaryToString @TempDir @SystemDir @SW_HIDE @StartupDir @ScriptDir @OSVersion @HomeDrive @CR @ComSpec @AutoItPID @AutoItExe @AppDataDir WinExists UBound StringReplace StringLen StringInStr Sleep ShellExecute RegWrite Random ProcessExists ProcessClose IsAdmin FileWrite FileSetAttrib FileRead FileOpen FileExists FileDelete FileClose DriveGetDrive DllStructSetData DllStructGet DllStructGetData DllStructCreate DllCallAddress DllCall DirCreate BinaryLen TrayIconHide :Zone.Identifier kernel32.dll handle CreateMutexW struct* FindResourceW kernel32.dll dword SizeofResource kernel32.dll LoadResource kernel32.dll LockResource byte[ VirtualAlloc byte shellcode [

In addition to these obvious function calls, we also have a number of binary blobs that get deobfuscated. We have included only a limited set of these strings so as not to overload this analysis with long runs of data.

We can see that it is pulling and decrypting a resource DLL from within the main executable, which will be loaded into memory. This makes sense after analyzing a previous version of Baldr that did not use AutoIt as its first stage. Prior versions of Baldr required a secondary file named Dulciana. So, instead of using AutoIt, the previous versions used this file containing the encrypted bytes of the same DLL we see here:

Moving forward to stage two, all things essentially remain equal throughout all versions of the Baldr packer. We have the DLL loaded into memory, which creates a child process of the main Baldr executable in a suspended state and proceeds to hollow this process, eventually replacing it with the main .NET payload. This makes manual unpacking with OllyDbg convenient, because after we break on the child Baldr.exe load, we can step through the remaining code of the parent, which writes to process memory and eventually calls ResumeThread().

As you can see, once the child process is loaded, the functions it has set up to call include VirtualAlloc, WriteProcessMemory, and ResumeThread, which gives us an idea of what to look out for. If we dump this written memory right before ResumeThread is called, we can then easily extract the main payload.

Our colleague @hasherezade has made this step-by-step video of unpacking Baldr:

Payload code analysis

Now that we have unpacked the payload, we can see the actual malicious functionality. However, this is where our troubles began. For the most part, malware written in any interpreted language is a relief for a reverse engineer as far as ease of analysis goes. Baldr, on the other hand, managed to make the debugging and analysis of its source code a difficult task, despite being written in C#.

The code base of this malware is not straightforward. All functionality is heavily abstracted, encapsulated in wrapper functions, and spread across a ton of utility classes. Going through a code base of around 80 separate classes and modules, it is not easy to see where the key functionality lies; multiple static passes over the code are necessary to begin making sense of it all. Add in the fact that the function names have been mangled and junk instructions are inserted throughout, and the next step is to start debugging the exe with dnSpy.

Now we get to our next problem: threads. Every minute action this malware performs is executed in a separate thread. This was obviously done to complicate the life of the analyst. It would be accurate to say that there are over 100 unique functions being called inside of threads throughout the code base. This does not include threads called recursively, which could number in the thousands.
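The pattern, one thread per trivial action, looks something like this in miniature (Python rather than C#, with placeholder actions standing in for Baldr’s real ones):

```python
import threading

results = {}
lock = threading.Lock()

def run_threaded(name, fn):
    """Run even a trivial action on its own thread, as Baldr does to
    frustrate step-through debugging (pattern sketch; actions are fake)."""
    def worker():
        value = fn()
        with lock:
            results[name] = value
    t = threading.Thread(target=worker)
    t.start()
    return t

threads = [
    run_threaded("cpu", lambda: "x64"),
    run_threaded("user", lambda: "alice"),
]
for t in threads:
    t.join()  # threads converge before the single-threaded network stage
```

With a hundred of these in flight, a debugger’s thread list becomes noise, which is precisely the point.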

Luckily, we can view local data as it is being written, and eventually we are able to locate the key sections of code:

The function pictured above gathers the user’s profile, as mentioned previously. This includes the CPU type, computer name, user accounts, and OS.

After the entire process is complete, it flattens the arrays storing this data, resulting in a string like this:

The next section of code shows one of the many enumerator classes used to cycle directories, looking for application data, such as stored user accounts, which we purposely saved for testing.

The data retrieved was saved into lists in the format below:

In the final stage of data collection, we have the threads below, which cycle the key directories looking for txt and doc files. It will save the filename of each txt or doc it finds, and store the file’s contents in various arrays.

Finally, before we proceed to the network segment of the malware, we have the code section performing the screen captures:

Class 2d10104b function 1b0b685() is one of the main modules that branches out to do the majority of the functionality, such as looping through directories. Once all data has been gathered, the threads converge and the remaining lines of code continue single threaded. It is then that the network calls begin and all the data is sent back to the C2.

The zipped data is encrypted via XOR with a 4-byte key and a version number, both obtained from the C2 in a first network request. A second request then sends the ciphered data back to the C2.
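A repeating-key XOR like this is symmetric, so applying the same key twice round-trips the data. A minimal sketch (the key value below is invented; Baldr’s real key comes from the C2):

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key. Encryption and decryption are
    the same operation, since (b ^ k) ^ k == b."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"\x13\x37\xbe\xef"       # hypothetical 4-byte key
zipped = b"stolen data blob"     # stand-in for the zipped loot
wire = xor_cipher(zipped, key)   # what goes over the network
assert xor_cipher(wire, key) == zipped
```

This also means that anyone who captures both C2 requests can recover the key and decrypt the stolen data.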


The Baldr panel

Like other stealers, Baldr comes with a panel that allows the customers (criminals who buy the product) to see high-level stats, as well as retrieve the stolen information. Below is a panel login page:

And here, in a screenshot posted by the threat actor on a forum, we see the inside of the panel:

Final analysis

Baldr is a solid stealer that is being distributed in the wild. Its author and distributor are active in various forums to promote and defend their product against critics. During a short time span of only a few months, Baldr has gone through many versions, suggesting that its author is fixing bugs and interested in developing new features.

Baldr will have to compete against other stealers and differentiate itself. However, the demand for such products is high, so we can expect to see many distributors use it as part of several campaigns.

Malwarebytes users are protected against this threat, detected as Spyware.Baldr.

Thanks to S!Ri for additional contributions.

Indicators of compromise

Baldr samples

5464be2fd1862f850bdb9fc5536eceafb60c49835dd112e0cd91dabef0ffcec5 -> version 1.2
1cd5f152cde33906c0be3b02a88b1d5133af3c7791bcde8f33eefed3199083a6 -> version 2.0
7b88d4ce3610e264648741c76101cb80fe1e5e0377ea0ee62d8eb3d0c2decb92 -> version 2.2
8756ad881ad157b34bce011cc5d281f85d5195da1ed3443fa0a802b57de9962f (2.2 unpacked)

Network traces

hwid={redacted}&os=Windows%207%20x64&file=0&cookie=0&pswd=0&credit=0&autofill=0&wallets=0&id=BALDR&version=v1.2.0
hwid={redacted}&os=Windows%207%20x64&file=0&cookie=0&pswd=0&credit=0&autofill=0&wallets=0&id=BALDR&version=v2.0
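These check-ins are ordinary percent-encoded query strings, so they can be picked apart with standard tooling. A quick sketch using Python’s urllib, with the hwid placeholder standing in for the redacted value above:

```python
from urllib.parse import parse_qsl

trace = ("hwid=REDACTED&os=Windows%207%20x64&file=0&cookie=0&pswd=0"
         "&credit=0&autofill=0&wallets=0&id=BALDR&version=v2.0")
fields = dict(parse_qsl(trace))
# fields["os"] == 'Windows 7 x64'; fields["version"] == 'v2.0'
```

The fixed `id=BALDR` marker and the version field make these beacons easy to flag in proxy or IDS logs.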

The post Say hello to Baldr, a new stealer on the market appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (April 1 – 7)

Malwarebytes - Mon, 04/08/2019 - 15:52

Last week, Malwarebytes Labs took readers on a brief tour of some of the world’s most notable data privacy laws, explored how gamers can protect themselves against cyberthreats, and offered thoughts about the reports that a 23-year-old Chinese woman gained access to President Donald Trump’s Mar-a-Lago resort while carrying four cellphones, a hard drive, a laptop, and a thumb drive that was “infected” with malware.

We also provided an in-depth look into the importance of cybersecurity in critical public infrastructure, like water management plants and power plants.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (April 1 – 7) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Was this really an attempt by the Chinese?

Malwarebytes - Wed, 04/03/2019 - 15:43

Last weekend, during President Trump’s visit to the Mar-a-Lago resort, a 23-year-old Chinese woman attempted to gain access to the Florida resort by lying and bluffing her way in. After some discussion at the gate, she was escorted to the reception of the resort where it was found out that she was not on the list of people that were allowed to enter.

According to the report, a search of her belongings showed she was carrying four cellphones, a hard drive, a laptop, and a thumb drive that was found to be infected with malware.

The word infected was emphasized by us because it raises an important question. A thumb drive can carry malware that is inactive, deployed only when the carrier manages to connect it to a target system. It can also carry malware that deploys automatically once connected to a system, as we have seen with USB drives dropped in the parking lot of a corporation that a threat actor wants to infiltrate. The third option is that the thumb drive is infected without the carrier’s knowledge. We sometimes see an old worm resurface that has infected the root of a thumb drive and consequently infects any system it is connected to. These are usually older worms that were once widely spread and get a second chance when someone finds and uses an old USB stick.

As you can see, it is very important to know which of these scenarios is true here. Given the circumstances we are led to believe that the first scenario might be true.

But even if that is true, this seems an amateurish attempt that we should not be too quick to attribute to the Chinese government or one of its APT groups. While it is true that Russian and Chinese attempts to gain access to important information are getting more overt, this one appears to be of a less professional nature. We will have to wait and see: Ms. Zhang has a detention hearing on April 8 and an arraignment on April 15, so hopefully we will learn more then.

According to Malwarebytes’ expert on China and APT groups William Tsing:

Although China has a long history of manipulating members of the Chinese diaspora towards espionage goals, we lack sufficient information at this time to conclude definitively that Zhang was engaged as an intelligence collector.  What we can say for sure is that businesses at high risk of cyber attack – such as Mar a Lago – can take measures to lower their risk profile.  Knowing your customers, and what legitimate business activity looks like, can assist in spotting fraudulent or dangerous behavior.  Empowering employees to challenge or alert to suspicious activity can stop an attack in its tracks.  Lastly, hotels of any sort are functionally impossible to secure well due to their transient population, and should not be the location of any sensitive or significant business transactions.

What we do know is that Secret Service agents at the gate verified that the last name on the passport she presented matched that of one of the club members, so when she claimed she wanted to use the pool, she was escorted to the front desk. There she showed an invitation, written in Chinese, for a United Nations friendship event. No one present could read the invitation, but no such event was scheduled, so Ms. Zhang was questioned and eventually detained.

President Trump was not at the resort when this went down; he was playing golf at a nearby facility.

The post Was this really an attempt by the Chinese? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

How gamers can protect against increasing cyberthreats

Malwarebytes - Wed, 04/03/2019 - 15:00

A few years ago, cybersecurity scryers predicted that the video gaming industry would be the next big target of cybercriminals. Whether this will come true in the future or not, the average gamer may have little to no idea of what awaits them, much less be prepared for it.

In fact, while generally more technically adept than the average Joe, most gamers lack familiarity with risks they could encounter while gaming or browsing the web for game-related content. For the majority of US households, this takes place on devices such as the personal computer, smartphone, and the dedicated gaming console.

Factoring in the gaming industry’s steady growth since 2011—the changes in consumer gaming perception, habits, and appetite for new content, tech, and accessories—and the expectation that, despite a foreseen nominal dip, the industry will still hit high marks on sales at end of year, it is more crucial than ever to educate gamers on cybersecurity best practices. This includes the various threats gamers may encounter online, their real-world consequences, and what they can do to protect themselves.

While a lot has changed in the gaming industry in the last five years, most of the tried-and-tested tactics of ensnaring the unfamiliar (and oftentimes, the experienced) are still around, causing panic and making headlines.

So, without further ado, here are the risks every gamer—on a PC, mobile, or gaming console—should keep an eye out for.

Malware and potentially unwanted programs (PUPs)

Malware and PUPs have been the top-of-mind threats to online gamers, and for a good reason. They come in many, many forms—key generators; game cracks; trainers; fake mobile game apps [1][2], game installers, clients/launchers, and audio protocol; game hacks; cheat files [1][2]; infected or risky mods; unofficial game patches; bogus emulators—you name it. At this point, it won’t be a surprise to consider that every conceivable software related to gaming might have a malicious equivalent in the wild.

Malware doesn’t only appear as applications, but can also be embedded in image files. In 2016, cybercriminals were found to have hidden a Trojan in image files in over 60 Android apps using steganography. Perhaps even more surprising, cryptomining code was included in Abstractism, a platformer that was once peddled on Steam and was eventually pulled after a flood of complaints.
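Steganography of that sort boils down to smuggling secret bits inside innocuous-looking data. Here is a toy sketch of the idea, hiding one secret bit in the least-significant bit of each cover byte; real image steganography manipulates pixel data, and this is an illustration of the technique, not the code those apps used:

```python
def embed(cover: bytes, secret: bytes) -> bytes:
    """Hide secret bits in the LSB of each cover byte (toy sketch)."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, set secret bit
    return bytes(out)

def extract(stego: bytes, length: int) -> bytes:
    """Recover `length` hidden bytes from the LSBs."""
    secret = bytearray()
    for j in range(length):
        byte = 0
        for i in range(8):
            byte |= (stego[j * 8 + i] & 1) << i
        secret.append(byte)
    return bytes(secret)

cover = bytes(range(64))
assert extract(embed(cover, b"payload"), 7) == b"payload"
```

Because only the lowest bit of each byte changes, the carrier looks essentially unchanged to casual inspection, which is what makes this technique attractive to malware authors.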

Malicious binaries can also exploit software vulnerabilities, the way the TeslaCrypt ransomware did when, with the aid of several known exploit kits, it took advantage of unpatched Adobe Flash Player programs.

Lastly, malware can affect gamers when they connect to infected servers. In the report, Study of the Belonard Trojan, exploiting zero-day vulnerabilities in Counter-Strike 1.6, security experts at Russian antivirus firm Doctor Web investigated Belonard, a Trojan that takes advantage of weaknesses in both Steam and pirated versions of Counter-Strike 1.6 (CS 1.6).

Once infected with Belonard, gamers are then made part of a botnet, which can further propagate the promotion and marketing of other potentially malicious servers.

Survey scams

We sometimes wonder how a tactic this old can stick around for so long, and we find the answer in a longstanding phishing truism: It works.

Survey scammers immediately jumped on the Far Cry 5 craze by offering “free” copies of the game after its release in Q2 2018. Led more by the desire to get a free Triple-A title than to protect their data, users signed up for a service purporting to offer “unlimited movies,” only to give away their email addresses, receive even more offers they didn’t want, and realize in the end that they never got what was promised.

A similar rush happened when Grand Theft Auto 5 (GTA V) came out in Q3 2017. Many scammers used YouTube to market their so-called money generators, which are survey scams, nudging gamers to give away their personally identifiable information (PII) or download a potentially malicious file.

Let’s also not forget the amount of scammery that went down when Pokemon Go reached peak hype.

Phishing scams

Steam users are probably more than familiar with the times when phishers used squatted domains to lure them into giving out their credentials for Steam or their favorite third-party trading site, like CS:GO Lounge. Several new domains popped up, made to look like a Steam Community page, and were used in campaigns aiming to harvest Steam accounts. We believed that the stolen accounts could then be used to lead even more Steam users into giving away their credentials.

Similarly, a fake CS:GO Lounge domain was registered that mimicked the real trading and bidding site. The criminals behind it were also after Steam credentials. To rub salt in the wound, they even added a Trojan that pretended to be a Steam activation file.

A phishing campaign targeting PS4 users. Not a particularly good one.

Account takeover (ATO)

An account takeover is the result of credential fraud caused by phishing, hacking, or a data breach. Anyone maintaining an account online is at risk.

Ubisoft, the company behind Assassin’s Creed and the Tom Clancy brand, was compromised in 2013. While the company was mum about how it happened, one of our experts hinted that an employee may have been spear-phished, allowing the criminals to gain access to their internal network. Ubisoft prompted its users to quickly change their passwords.

Employees aren’t the only likely targets of those with nefarious intentions. Game developer’s forums are also at risk. Bohemia Interactive’s DayZ had theirs compromised, with hackers accessing and downloading usernames, passwords, and email addresses.

Ad flooding and malvertising

Ads, whether showcased on websites or apps, are perceived as more of an annoyance than a threat by normal users. But when they become too aggressive, Malwarebytes characterizes them as adware.

Mobile users who enjoy playing free games can probably attest that they can tolerate ads—they’re usually not in the way of the game they’re playing anyway. But if ads are more prevalent than the actual game, then expect to hear users complain. A lot.

Of course, some ads also contain malvertising, which opens up the angle that ads can be used as infection vectors to reach users who aren’t usually bothered by them.


Not all threats video gamers encounter online are after their information or their money. Some are after them, their reputation, their peace of mind. We implore every gamer to be wary of the items below as much as the items above because they can cause mental and emotional damage, rather than financial.

Cyberbullying

According to the US Department of Health and Human Services, which maintains the federal government’s anti-bullying website, online bullying includes flaming, harassment, exclusion, denigration, outing/shaming caused by deception or pretension, and doxing. Nude photo sharing, or revenge porn, can also be considered a form of cyberbullying.

Cyberbullying can happen to gamers while interacting online, whether that’s using voice features of multiplayer games, or in forums or other chat functions of gaming platforms.

We’ve covered the topic of cyberbullying on several occasions, especially during events like the National Cybersecurity Awareness Month (NCSAM). We shared tech that could help curb cyberbullying, statistics on online bullying trends, and demystified the myths surrounding this act. It pays to go back and read these posts.


Trolling and griefing

Trolling could be both fun and funny. At least at first. But after the raucous laughter dies down to a chuckle, gamers eventually decide to get serious and carry on.

Except they can’t.

Because sometimes that troll continues to stand in the open doorway doing jumping jacks, preventing gamers from going to the next room and advancing in gameplay.

This was what happened after Ubisoft officially released Tom Clancy’s The Division.

On the other hand, griefing (deliberately ruining other players’ overall experience) is alive and well in Elite Dangerous, a space exploration simulation. For one of its players, Commander DoveEnigma13, the end goal was to reach a distant star system called Colonia. It might have been his last chance to make the trip, as he had been battling a terminal illness for at least three years. So, with other Elite players, his daughter, and Frontier (the game’s developer) helping make the voyage a success, the Enigma Expedition was born.

However, there were reports that other Elite Dangerous griefers were sabotaging the expedition by attacking the final waypoint, a mega-ship called Dove Enigma that Frontier created as an homage to the Commander. Without it, the Enigma fleet, more than 560 players strong, would struggle to reach Colonia due to fuel shortages. That said, in an interview with Polygon, one of the players in the fleet said that “the threat is minor at best.”

Read: When trolls come in a three-piece suit


Stalking

Thanks to Pokemon Go, augmented reality (AR) has become part of the modern gamer’s vocabulary. It’s the future of interactive and immersive gaming, bringing the experience to new heights. Unfortunately, AR games like Ingress have also given gamers with questionable intent a way to use unofficial tools to stalk other gamers, visit their real-life homes, and leave creepy messages on doorsteps for the homeowners to see.

Intoku, an Ingress gamer, admitted to Kotaku in an interview: “Players on both sides have stalked and been stalked.” With a game that is based around real-world locations, players shouldn’t be surprised, nor should they expect little or no risk when playing such games.


Swatting

Swatting might start off as a prank call to emergency services, but the results—a dispatch of a large number of armed police officers to a particular address—can quickly become deadly, as we’ve seen in the Andrew Finch case. And yet, Peter “Rolly Ranchers” Varady, a then 12-year-old YouTube streamer, was swatted less than a month after Finch’s death. This happened days after Cizzorz, a renowned YouTube streamer with millions of subscribers, helped him dramatically increase his subscriber count from 400 to almost 100,000.

In another story, a gamer with the pseudonym “Obnoxious” used swatting to get back at mostly young and female gamers who ignored or declined his friend requests on League of Legends (LoL).

In response to numerous swatting stories, some local US law enforcement agencies offer an anti-swatting service to video gamers and YouTubers.


Grooming

Probably the worst risk young gamers can encounter online is grooming: a pedophile preparing a child for a meeting with the intention of committing a sexual offense. Not only is grooming a targeted act, it is also premeditated. Sometimes it can be stopped, if a parent happens to be in the same room as their child or law enforcement is already tailing a suspect. Other times, it can lead to tragedy beyond words.

Breck Bednar was 14 years old when he met Lewis Daynes online. Daynes was the ringmaster of the “virtual clubhouse” where Bednar and his friends from school would hang out. He claimed to be a computer engineer running a multimillion-pound company. Daynes groomed Bednar into tricking his parents in order to arrange a meeting, inviting him to his flat in Essex one Sunday in February 2014. Bednar texted his father that he’d be spending the night at a friend’s (who wasn’t Daynes). That was the last time they spoke.

There’s another side of grooming that is built around the highly popular game, Fortnite: the cybercrime kind. According to the BBC, teenagers as young as 14 admit to stealing private gaming accounts and reselling them online. Experts say that organized crime is linked to these activities, and that cybercrime grooming is taking place behind the scenes by dangerous persons or groups.

Play it safe. Always.

With a myriad of risks in online gaming, from financial to physical, it’s especially important to adhere to cybersecurity best practices. The gaming community is active, engaged, and passionate—and criminals will take advantage of that to the best of their ability. Head them off at the pass by following our advice:

  • Explore your options. Regardless of your gaming platform, it always pays to know how it works. Since a lot of PC-based games use launchers, acquaint yourself with their settings and customize them with security and privacy in mind.
  • Take advantage of additional security and privacy options when available. These launchers may have some form of two-factor authentication (2FA) to ensure that a user who asserts they own the account can verify this claim easily.
  • Update all software installed on your gaming rig or, if you’re a console gamer, the firmware and the games installed in it.
  • Always treat links sent your way—either by someone you’ve known for a long time or by someone you just met—as suspect. Because of the number of ways gaming accounts can be taken over by miscreants—and most of the time, victimized gamers are not aware of this—it’s wise to handle links with caution. It would be easier if you have other means to contact the link sender other than the gaming platform to verify that indeed it was them who messaged you. Ideally, if you and your friends and family members play games to bond, establish amongst yourselves a verification process, like a keyword/phrase you can mention or type up in chat. Not saying the keyword/phrase can denote that you’re not talking to the person they claim to be.
  • Use a form of password management that works for you. We know it causes fatigue just to remember all those username and password combinations. Based on some comments we’ve received on the Malwarebytes Labs blog, we also know that not everyone is using password managers, but instead have created their own way of managing and storing passwords. Go with what works, as long as your passwords are kept safe and secure. Most of all, avoid reusing passwords.
  • Manage your gaming profiles. These days, gaming profiles should be treated the way a regular social media profile and feed should be. Don’t reveal information about yourself that is deemed sensitive. You can pick and choose who sees your gaming activities and who doesn’t. Use your options wisely.
  • Keep your shields up. If suspicious files claim they can help you in your gaming, but you must first disable your antivirus or turn off your firewall, that’s a major red flag. If a piece of software wants to have free rein in your system without your security protections on, you better find safer alternatives.
  • Play games in the presence of or within earshot of your parents/carer. Grown-ups living with minors who are into gaming are always advised to get involved this way. They don’t have to breathe down their children’s necks, but they should at least pop in from time to time and make sure nothing nefarious is taking place—whether that’s the content of the game itself or the conversations happening amongst players.
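As a side note on the 2FA mentioned in the list above: many launchers that offer it rely on time-based one-time passwords. As a minimal sketch, not tied to any particular launcher or vendor, here is how an RFC 6238 TOTP code is derived from a shared secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Derive an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of time steps elapsed since the Unix epoch
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # → 287082
```

Because the code changes every 30 seconds and is derived from a secret only you and the service hold, a stolen password alone is not enough to hijack the account.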
Game over

We can say with confidence that many of the risks online gamers faced a few years ago are still the risks they face today. Nowadays, however, news of gamers behaving badly toward other gamers is on equal footing with news about malware and online criminals targeting gamers. Because of the real-world, life-changing impact these threats have on the people behind the avatars, and on their families and loved ones, more is at stake now than just playing along in a computer-generated world. Gamers are called not only to take cybersecurity seriously, but also to be responsible digital citizens.

Playing video games is meant to be fun; a way for us to relax, blow off steam, and de-stress. However, let’s also recognize that gaming is already part of the overall threat landscape. Make sure that your information—and your person—are safe from harm in the digital world and beyond.

Game on!

The post How gamers can protect against increasing cyberthreats appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The global data privacy roadmap: a question of risk

Malwarebytes - Tue, 04/02/2019 - 15:00

For most American businesses, complying with US data privacy laws follows a somewhat linear, albeit lengthy, path. Set up a privacy policy, don’t lie to the consumer, and check the specific rules if you’re a health care provider, video streaming company, or kids’ app maker.

For American businesses that want to expand to a new market, though, complying with global data privacy laws is more akin to finding dozens of forks in the road, each one marked with an indecipherable signpost.

Should a company expand to China? That depends on whether the company wants to have its source code potentially analyzed by the Chinese government. Okay, what about South Korea? Well, is the company ready to pay three percent of its revenue for a wrongful data transfer, or to have one of its executives spend time behind bars?

Europe is an obvious market to capture, right? That’s true, but, depending on which country, the local data protection authorities could issue enormous fines for violating the General Data Protection Regulation.

What if a company just follows in the footsteps of the more established firms, like Google, Amazon, or Microsoft, which all opened data centers in Singapore in the past two years? Once again, the answer depends on the company. If it’s providing a service that Singapore considers “essential,” it will have to heed a new cybersecurity law there.

At this point, a company might think about entering a country with no data privacy laws. No laws, no getting in trouble, right? Wrong. Data privacy laws can sprout up seemingly overnight, and future compliance costs could severely cut into a company’s budget.

While this may appear overcomplicated, one guiding principle helps: If a company cannot afford to comply with a country’s data privacy laws, it probably should not expand to that country. The risk, which could be millions in penalties, might not outweigh the reward.

Today, for the third piece in our data privacy and cybersecurity blog series, which also took a look at current US data privacy laws and federal legislation on the floor, we explore the decision-making process of a mid-market-sized company that wants to expand its business outside the United States.

With the help of Reed Smith LLP counsel Xiaoyan Zhang, we looked at several notable data privacy laws in Europe, Asia, Latin America, the Middle East, and Africa.

Issue-spotting within a culturally-crafted landscape

Before a company expands into a new country, it should try to truly comprehend the data privacy laws located within, Zhang said. She said this involves more than just reading the law; it requires training one’s thinking into an entirely different culture.

Unlike crimes such as manslaughter and robbery, which have near-universal definitions, Zhang said data privacy violations fluctuate from region to region, with interpretations rooted in a country’s history, economy, public awareness, and opinions on privacy.

“Data privacy is not like murder, which is much more straightforward,” Zhang said. “Privacy law is very intimately tied into culture.”

So, while overseas concepts might appear familiar—like protecting “personally identifiable information” in the US and protecting “personal information” in the European Union—the culture behind those concepts varies.

For example, in the European Union, a history of fierce antitrust regulation and government enforcement helped usher GDPR’s passage. In fact, Austrian online privacy advocate Max Schrems—whose legal complaints against Facebook heavily influenced the final text of GDPR—remarked years ago that he was surprised at the lack of tall garden hedges around Americans’ homes. The country’s understanding of privacy, Schrems realized, was different than that of Austria, and so, too, are its data privacy laws.

Similarly, Zhang said she has fielded many questions from EU lawyers who assume that data privacy regulations around the world are similar to those in GDPR.

“EU lawyers are used to thinking that, for every data collection, there must be a legitimate purpose, and they insist on asking the same questions,” Zhang said. “When I’m talking about legal advice in China, they’ll say ‘Oh, our medical device needs to collect data from users, does China have any law or statutes that give us a legitimate business purpose to collect that data?’”

Zhang continued: “No. In China, you don’t need that. It’s totally different.”

The differences can be managed with the right help, though.

The safest path for market expansion is to rely on a global data privacy lawyer to “issue-spot” any obvious global compliance issues, Zhang said. These experts will look at what type of data a company handles—including medical, financial, geolocation, biometric, and others—what type of service the company performs, and whether the company will need to perform frequent cross-border data transfers. Depending on all these factors, each company’s individual roadmap for data privacy compliance will be unique.

However, Zhang led us on a bit of a world tour, detailing some of the notable data privacy laws in Europe, Asia, Africa, the Middle East, and Latin America. Company expansion into these markets, Zhang emphasized, depends on whether a company is ready for compliance.

Many countries, many laws

Europe

Starting with Europe, there is, of course, GDPR. Complying with the sweeping set of provisions is tricky because GDPR gives each EU member-state the authority to enforce the new data protection law on its own turf.

This enforcement is done through Data Protection Authorities (DPAs), which oversee, investigate, and issue fines for GDPR violation. Each member-state has its own DPA, and, in the months before GDPR’s implementation, the DPAs gave mixed signals about what local enforcement would look like.

France’s DPA, the National Data Protection Commission (CNIL), said that companies that are at least trying to comply with GDPR “can expect to be treated leniently initially, provided that they have acted in good faith.”

Less than one year later, though, that leniency met its limit. CNIL hit Google with the largest GDPR-violation fine on record, at roughly $57 million.

The best defense to these penalties, Zhang said, is to consult with local legal experts who know the region’s enforcement history and details.

“You cannot just seek consultation from a GDPR expert. If you want to go specifically to Germany, you need German lawyers who can offer insight on things that are specific to Germany,” Zhang said. “That’s for all of Europe.”

Latin America

Outside of Europe—but still inspired by GDPR—is Latin America. Zhang said several Latin American countries have enacted, or are considering, legislation that protects the data privacy rights of individuals.

In 2018, Brazil passed its comprehensive data protection law, which protects people’s personal information and includes tighter protections for sensitive information that discloses race, ethnicity, religion, political affiliation, and biometrics. Argentina also advanced privacy protections for its citizens, and it earned special clearance under GDPR as a “whitelisted” party, meaning that personal data can be moved to Argentina from the EU without extra safeguards.

Asia
Moving to China, a whole new risk factor comes into play—surveillance.

China’s cybersecurity law grants the Chinese government broad, invasive powers to spy on Internet-related businesses that operate within the country. Implemented in 2017, the law allows China’s foreign intelligence agency to perform “national security reviews” on technology that foreign companies want to sell or offer in China.

This authority raised alarm bells for the researchers at Recorded Future, who attributed past cyberattacks directly to the Chinese government. Researchers said the law could give the Chinese government the power to both find and exploit zero-day vulnerabilities in foreign companies’ products, all for the price of admission into the Chinese market.

“China’s law has a hidden angle for government control and monitoring,” Zhang said. “It has a different rationale.”

Outside of China, Singapore has garnered the attention of Google, Microsoft, and Amazon, which all built data centers in the country in the past few years. The country passed its Personal Data Protection Act in 2012 and its Cybersecurity Act in 2018, the latter of which sets up a framework for monitoring cybersecurity threats in the country.

The law has a narrow scope, as it only applies to companies and organizations that control what the Singaporean government calls “critical information infrastructure,” or CII. This includes computer systems that manage banking, government, healthcare, and aviation services, among others. The law also includes data breach notification requirements.

Moving to South Korea, the risk for organizations goes up dramatically, Zhang said. The country’s Personal Information Protection Act preserves the privacy rights of its citizens, and its penalties include criminal and regulatory fines, and even jail time. Cross-border data transfers, in particular, are strictly guarded. One wrongful transfer can result in a fine of up to three percent of a company’s revenue.

Africa
Traveling once again, expansion into Africa requires an understanding of the continent’s burgeoning, or sometimes non-existent, data privacy laws. Zhang said that, of Africa’s more than 50 countries, only about 15 have data protection laws, and even fewer have the regulators necessary to enforce those laws.

“Among [the countries], nine have no regulators to enforce the law, and five have a symbolic law but it’s not enforced,” Zhang said.

So, that invites the question: What exactly does happen if a company expands into a country that doesn’t have any data privacy laws?

What happens is potentially more risk.

First, a country could actually develop and pass a data privacy law within years of a company’s expansion into its borders. It’s not unheard of—less than one year after Amazon announced its rollout into Bahrain, the country introduced its first comprehensive data privacy law. Second, compliance with the new data privacy law could be expensive, Zhang said, forcing a company into a tough situation where it might have to withdraw entirely from the new market.

“One common misconception is that if a country doesn’t have a law at all, it’s a good country to go to,” Zhang said. “You should think twice about whether that’s the case.”

Expand or not? It’s up to each company

There is no single roadmap for companies entering new markets outside the United States. Instead, there are multiple paths a company can take, depending on its product, its services, the data it collects, the data it needs to move across borders, and its tolerance for risk.

The safest path, Zhang said, is to ask questions upfront. It is far better to make an informed decision about how to enter a market—even if compliance is costly—than to be surprised with fines or penalties later on.

The post The global data privacy roadmap: a question of risk appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Compromising vital infrastructure: water management

Malwarebytes - Mon, 04/01/2019 - 15:00

It’s probably unnecessary to explain why water management is considered part of our vital infrastructure, but it’s a wider field than you might expect—and almost every one of its components can be integral to our survival.

We all need clean water to drink. As much as I like my coffee, I can’t make it with contaminated liquids. And the farmers that grow our coffee need water to irrigate their land. On top of that, the water we use in our households and workplaces needs to be cleaned before it goes back into nature.

In some countries, and especially in large river delta areas, we need a high level of control over the water level to prevent flooding. Other areas need methods to retain water to avoid droughts or to keep vital transportation methods that depend on rivers and canals on the move.

We also use water to generate energy, for example, through dams and mills. In the first decade of this millennium, hydropower accounted for about 20 percent of the world’s electricity, and with the increasing need for clean energy, we can expect this percentage to rise.

Water management is considered so critical that tampering with a water system is a US Federal Offense (42 U.S.C. § 300i-1).

Yet, cybercriminals have found ways to compromise these vital systems as well. Let’s take a look at their methods of attack.


Despite the diversity of water management plants, the Supervisory Control and Data Acquisition (SCADA) architecture they use is for the most part consistent. There are only so many companies that produce Programmable Logic Controllers (PLCs). In the past, vulnerabilities have been found in widely-used PLCs made by General Electric, Rockwell Automation, Schneider Modicon, Koyo Electronics, and Schweitzer Engineering Laboratories. And I would dare to wager that some have been found that we haven’t been made aware of.

One of the best-organized safety aspects of water and sewage plants is physical access control (which is not always easy to secure either, if only because of the sheer size of some of these installations). But, according to the 2018 Cybersecurity Risk and Responsibility in the Water Sector report by the American Water Works Association (AWWA):

“Cybersecurity is a top priority for the water and wastewater sector. Entities, and the senior individuals who run them, must devote considerable attention and resources to cybersecurity preparedness and response, from both a technical and governance perspective. Cyber risk is the top threat facing business and critical infrastructure in the United States.”

The report goes on to say that getting cybersecurity right is not an easy mission and many organizations have limited budgets, aging computer systems, and personnel who may lack the knowledge and experience for building robust cybersecurity defenses and responding effectively to cyberattacks.

In cyberwarfare, a mass shutdown of computers controlling waterworks and dams could result in flooding, power outages, and shortage of clean water. In the long run, this could lead to famine and disease. In March and April 2018, the US Department of Homeland Security and Federal Bureau of Investigation warned that the Russian government is specifically targeting the water sector and other critical infrastructure sectors as part of a multi-stage intrusion campaign.


One of the major threats to water-energy plants is Industroyer, aka CrashOverride, an adaptable malware that can automate and orchestrate mass power outages. The most dangerous component of CrashOverride is its ability to manipulate the settings on electric power control systems. It also has the capability of erasing the software on the computer system that controls circuit breakers. CrashOverride clearly was not designed for financial gain; it is purely a destructive tool.

Stuxnet is another piece of malware that threatens industrial plants. It is designed to spread through Windows systems and go after certain programmable controllers by seeking out their related software. More recently, near the end of 2018, the Onslow Water and Sewer Authority (ONWASA) said it would have to completely restore a number of its internal systems after an outbreak of Emotet and one of the ransomware variants it is known to deliver.

Earlier in 2018, the first cryptocurrency mining malware impacting industrial controls systems and SCADA servers was found in the network of a water utility provider in Europe. This was not seen as a targeted attack, but rather the result of an operator accessing the Internet on a legacy Human Machine Interface (HMI).

Not that SCADA systems are free of targeted attacks. A honeypot that mimicked a water-pump SCADA network was found by hackers within days and soon became the target of a dozen serious attacks.

Insider threats are another cause for concern. In 2007, headlines told of an intruder who installed unauthorized software and damaged the computer used to divert water from the Sacramento River. In hindsight, this turned out to be a former, and probably disgruntled, employee.

An infected laptop PC gave hackers access to computer systems at a Harrisburg, PA, water treatment plant. An employee’s laptop was compromised via the Internet, likely through a watering hole attack, and then used as an entry point to install a virus and spyware on the plant’s computer system.


A lot of what we can learn from these incidents will already sound familiar to most of our readers. Countermeasures that security teams in water management plants and organizations can apply follow many of the same cybersecurity best practices as corporations protecting against a breach. Some of our recommendations include the following:

  • A clear and strict Bring Your Own Device (BYOD) policy can help prevent staff bringing in unwanted threats to the network.
  • A strict and sensible password regime can hinder brute force attacks and should lock out employees who have left the firm.
  • Legacy systems that serve as human interfaces should not have Internet access.
  • Easy backup and restore should be made possible to keep any disruption limited in time and impact. Needless to say, this is imperative for critical systems.
  • Software running on industrial controls systems and SCADA servers should not give away the nature of the plant or the underlying hardware. This makes it harder for attackers to find out which exploits will be successful.
  • Use secure software, even though you cannot control or check the security of your hardware.
  • Monitor the processors and servers that are vital to the infrastructure constantly so any abnormal behavior will be flagged immediately.
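The last recommendation, continuous monitoring, can start out simple. As a minimal, hypothetical sketch, where the window size and threshold are illustrative assumptions rather than values from any real SCADA deployment, a rolling z-score check can flag sensor readings that deviate sharply from their recent baseline:

```python
from collections import deque
from statistics import mean, stdev

class SensorMonitor:
    """Flag readings that deviate sharply from a rolling baseline.

    window and z_threshold are illustrative assumptions, not values
    taken from any real water-management deployment.
    """
    def __init__(self, window=60, z_threshold=4.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def check(self, value):
        alert = False
        if len(self.readings) >= 10:  # need a minimal baseline first
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                alert = True  # abnormal reading: flag for operator review
        self.readings.append(value)
        return alert

monitor = SensorMonitor()
for v in [50.1, 49.8, 50.0, 50.2, 49.9, 50.1, 50.0, 49.7, 50.3, 50.0]:
    monitor.check(v)          # build the baseline from normal readings
print(monitor.check(120.0))   # → True (sudden spike flagged)
```

Real plants would feed this kind of check from PLC telemetry and route alerts to an operator console; the point is that even a crude statistical baseline catches the gross manipulations that malware like CrashOverride attempts.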
Water and power

As you can see, there are many similarities between water management plants and power plants. While water management may be even more vital to our existence, many of the threats are basically the same. This is due to the similarities in plant infrastructure and hardware.

And when the threats are the same, you will see that the countermeasures are also similar. What’s strange, however, is that despite both water and power being vital to the country’s infrastructure, their cybersecurity budgets are quite limited, and they often have to work with legacy systems.

When the city of Atlanta was crippled by a ransomware attack in March 2018, city utilities were also disrupted. For roughly a week, employees with the Atlanta Department of Watershed Management were unable to turn on their work computers or gain wireless Internet access. Two weeks after the attack, Atlanta completely took down its water department website “for server maintenance and updates” until further notice.

Instead of systems backing each other up, they brought each other down like dominoes—an almost perfect example of Murphy’s Law, or the “butter side down” rule, as my grandma used to call it. It doesn’t have to be that way, and when it comes to our vital infrastructure, it shouldn’t.

Stay safe and hydrated, everybody!

The post Compromising vital infrastructure: water management appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (March 25 – 31)

Malwarebytes - Mon, 04/01/2019 - 08:24

Last week, we looked at plugin vulnerabilities, location tracking app problems, and talked about plain text password woes. We also looked at federal data privacy regulation and took a deep dive into BatMobi adware.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (March 25 – 31) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Awakening the beast: BatMobi adware

Malwarebytes - Fri, 03/29/2019 - 15:00

On February 12, a patron of the Malwarebytes Forum alerted us to an issue with ad redirects that seemed to come out of nowhere. An outcry from other commenters filled the forum thread, all experiencing the same redirects to the same exact websites. Our web protection team traced the offending websites back to the culprit: the adware known as BatMobi.

What is BatMobi?

BatMobi is an Advertisement Software Development Kit (Ad SDK), which is essentially a software library that connects applications to ad networks. Developers insert Ad SDKs into their apps’ code to gain revenue through ads. Thus, they can offer their apps for free and still make money. Most variants of BatMobi were clean and safe to use—until recently.

Based on a Reddit post about the sudden web redirects on January 21, it appears these “clean” versions of BatMobi turned into mobile adware around mid-January. Adware is a subcategory of Potentially Unwanted Programs (PUPs), which means it hangs around the fringes of bad behavior and often results in poor user experiences. Furthermore, BatMobi has always had a slightly more aggressive version that we consider low-level adware. We detect this as Android/Adware.BatMobi.

Triggered by Google Play

An interesting component of this newly seen BatMobi variant is the location in which it was popping up ads: Google Play. Forum patrons verified the ads were popping up whenever an app was updating or installing in Google Play. BatMobi uses Chrome Custom Tabs within its code to open websites whenever it is triggered by these events. Although the websites being redirected to are relatively safe sites, they are an unwanted nuisance for the user, which is exactly what we consider adware.

Tracking down the beast

Usually, pinpointing the source of an adware app on a customer’s device is simple, especially when knowing the adware variant, as in this case. Thanks to all the great Malwarebytes forum participants, I had a large set of data to work with in the form of what we call Apps Reports.

This is a list of apps along with data about their MD5, package name, and other components to assist in tracking down infections. Even with all the data, finding BatMobi was a nightmare: it hides deep within an app’s code, in different apps on each user’s device, and no other mobile anti-malware vendors detect it. Nevertheless, I was able to make some headway and find a couple of patterns of infection. Here were my findings.


The search started with the third-party app store Uptodown. More specifically, apps that download videos from YouTube, such as Videoder, Video Downloader, Snaptube, and TubeMate, were delivering the most ads to users. These apps all come with hidden versions of BatMobi. Removing them solved the issue for many users, but it persisted for others.

Mi Mobile

Another component that further complicates detecting and removing BatMobi is that we found it in apps pre-installed on Mi Mobile devices, specifically the Xiaomi Redmi Note 5. The infected apps are listed below:

Package name:
App name: App vault 

Package name:       
App name: Downloads

Please note that not all versions of these apps have BatMobi, nor do all Xiaomi Redmi Note 5 devices; only a select few do. Detections are in place in Malwarebytes for Android to alert users of its presence.

If you are having issues with adware on pre-installed apps, you can follow our removal instructions for disabling or uninstalling.

Warning: Make sure to read Restoring apps onto the device (without factory reset) in the rare case you need to revert/restore apps.

Use the following commands during step 7 under Uninstalling Adups via ADB command line to remove them:

adb shell pm uninstall -k --user 0
adb shell pm uninstall -k --user 0

Still unknowns

Even after finding two dominant sources of the BatMobi infection, there are still cases left unsolved. You see, as suddenly as the ads appeared, they abruptly stopped in early March. Without active cases to confirm whether removing apps remediates the issue, finding these deeply hidden BatMobi variants has become nearly impossible. I’m confident that there are versions still on Google Play, but finding them now is like searching for a needle in millions of haystacks.

The scary reality of Ad SDKs

Technically, since these hidden BatMobi variants no longer trigger ads inappropriately, they are no longer considered adware. I suppose that’s the good news. My assumption is that BatMobi made a change on their servers without warning, thus triggering the ads in January. But we don’t know why there was an abrupt stop in March. What happened? Maybe an overwhelming number of complaints to BatMobi caused a change of heart?

This all leaves us with an uneasy feeling about Ad SDKs. It highlights their power to switch from clean and safe to adware overnight. It’s a scary reality to have code lay dormant in legitimate apps that can turn malicious so quickly. I reiterate that yes, these website redirects were to relatively safe sites, but the potential for worse is present.

Developers beware

The last thing a developer wants is for their app to end up on an anti-malware scanner’s adware list without warning. In the past, we have seen ad companies clearly move from legitimate behavior to serving adware, becoming overly aggressive with data collection and/or aggressively pushing ad content, as in the case above. However, in those cases it was easy to make a clear-cut distinction about the cause of infection. This time, it’s much less clear which components were causing the issue, and so much is still left unknown.

Unfortunately, finding an Ad SDK that developers can trust is an ongoing challenge. All we can say is do your research and choose wisely. If an Ad SDK has any variants that are considered adware, as with BatMobi, it’s wise to steer clear.

Stay safe out there!

The post Awakening the beast: BatMobi adware appeared first on Malwarebytes Labs.

Categories: Techie Feeds

US Congress proposes comprehensive federal data privacy legislation—finally

Malwarebytes - Thu, 03/28/2019 - 15:00

The United States might be the only country of its size—both in economy and population—to lack a comprehensive data privacy law protecting its citizens’ online lives.

That could change this year.

Never-ending cybersecurity breaches, recently-enacted international privacy laws, public outrage, and crisis after crisis from the world’s largest social media company have pushed US Senators and Representatives into rarely-charted territory: regulation.

Before Congressmembers’ desks are at least four federal bills that would change how companies handle and protect Americans’ private data. The bills seek better user privacy through increased transparency, oversight, fines, and liability, and, in the case of one bill, the possibility of jail time for dishonest tech executives.

Several US states are also considering comprehensive data privacy bills, taking inspiration from California, which passed its own law last year. If those state laws pass, a new wrinkle will be added to the broader country-wide debate: Should state privacy protections be respected or should one federal law supersede those rules?

This month, Malwarebytes Labs launched its limited blog series about data privacy and cybersecurity laws. In this second blog in the series, we explore five federal data privacy bills.

How we got here

For decades, Congress regulated data privacy based on single, sector-specific issues. Rather than writing laws to protect all types of data, they instead wrote laws to combat individual crises.

In the late 80s, that crisis was a Supreme Court nominee’s video rental history being leaked to the press, resulting in the Video Privacy Protection Act. In the late 90s, that crisis was the potential targeting of children online, resulting in the Children’s Online Privacy Protection Act. In the mid-2000s, the kidnapping and murder of a Kansas teenager prompted lawmakers to discuss lowering protections on GPS data held by cell phone providers. (The proposed bill failed passage multiple times.)

This reactive approach is just how Congress works, said Michelle Richardson, director of the data and privacy project at Center for Democracy and Technology (CDT).

“This country has generally allowed companies to do their thing until something goes quite wrong,” Richardson said. “It has to get worse before the US and its decision-makers and its cowboy personality feel ready to intervene.”

Today, Congress is again ready to intervene. The crisis at hand is two-fold.

First, data breaches of Yahoo, Uber, Equifax, Marriott, Target, the Sony PlayStation Network, Facebook, Anthem, JPMorgan Chase, and many more have resulted in Americans’ personally identifiable information being stolen or accessed by cybercriminals. This PII includes names, Social Security numbers, credit card numbers, passport numbers, dates of birth, account passwords, physical and email addresses, and even employment histories.

Second, even when a company hasn’t suffered a breach, Americans’ personal data has been misused or mishandled. The FBI searched private companies’ DNA databases. A period-tracking app shared its users’ pregnancy decisions and menstrual tracking information with Facebook. And political beliefs were harvested in an effort to sway a US presidential election.

Congress has concluded that user privacy can no longer be solely entrusted to America’s technology companies.

“The digital space can’t keep operating like the Wild West at the expense of our privacy,” said Amy Klobuchar, Democratic Senator of Minnesota and presidential candidate.

Data privacy legislation has huge support outside of Capitol Hill, too—from the public. Richardson said that, thanks to the work of researchers, journalists, and civil liberties advocates, the public better understands how their data moves from company to company.

“We don’t give nearly enough credit to civil media [outlets] and civil society [groups] for the research they’ve done into data practices and for giving people cold, hard facts about how their data is collected,” Richardson said.

That research has exposed not just personal data misuse, but also corporate irresponsibility.

Last year, Reuters showed that Facebook failed to fulfill its promise to control the wildfire-like spread of hate speech on its platform in Myanmar. The Intercept exposed Google’s plans to build a censored version of its online search tool in China, resulting in several employee departures and renewed questions about Google’s removal of its “Don’t Be Evil” tagline. And the ACLU exposed failures in Amazon’s facial recognition software, revealing that the technology falsely matched 28 members of Congress with mugshots of arrestees.

Some US states have already responded.

Last year, Vermont passed a law regulating data brokers, and California passed its California Consumer Privacy Act. The law gives Californians the right to know which data is collected on them, whether that data is sold, the option to opt out of those sales, and the right to access that data. The law will take effect at the start of 2020.

In the meantime, other states are aiming to follow suit. Washington, Utah, and New York legislatures are all considering new laws that could give their residents better access to, and control over, the information that companies collect on them.

International data privacy law is even further ahead.

Last year, the European Union successfully completed its effort to pull together the data privacy laws of its 28 member-states into one cohesive package. The General Data Protection Regulation came into effect on May 25, 2018, and since then, it has produced lawsuits against Facebook and a record fine out of France against Google.

At home and abroad, regulation is in the air.

The proposals

Since last April, multiple US Senators have tried to take on the mantle of the public’s chief data privacy protector. Some tried to show their commitment to data privacy by asking Facebook CEO Mark Zuckerberg pointed questions during his Congressional testimony regarding the Cambridge Analytica scandal. One Senator—and presidential candidate—made a direct public appeal to break up Amazon, Google, and Facebook.

But in putting actual ideas onto paper, four Senators have emerged as frontrunners in America’s data privacy debate. Senators Klobuchar, Ron Wyden of Oregon, Marco Rubio of Florida, and Brian Schatz of Hawaii have each sponsored separate bills to protect Americans from opaque and unfair data collection.

Google, Facebook, Amazon, Apple, Microsoft, Yahoo, Uber, Netflix, and countless others could be affected by these proposals.

The bills ask for essentially the same thing: tighter controls on user data. Consequences often include higher fines from the Federal Trade Commission (FTC), which currently serves as the country’s primary data misuse regulator.

Sen. Klobuchar’s bill—the first of the four to be formally introduced in April 2018—would require certain companies to write their terms of service agreements in “language that is clear, concise, and well-organized.” It would also require companies to give users the right to access data collected on them (similar to California’s state bill and to GDPR), along with notifying users about a data breach within 72 hours.

Sen. Rubio’s bill—the American Data Dissemination Act (ADD)—would require the FTC to write its own privacy recommendations for Congress to later approve. The ADD asks that the FTC’s rules closely align with the Privacy Act of 1974, which restricts how federal agencies collect, store, and share Americans’ personal information. If passed, the FTC would have up to 27 months to get its own recommendations approved.

The ADD would also “preempt”—meaning, it would nullify—current and upcoming state data privacy laws. If passed, companies would only need to comply with the FTC’s federal rules that Congress would later approve. California and Vermont would wave goodbye to their newly-passed laws, and Utah, Washington, and New York would likely shut down their own efforts.

But preemption could be a deal-breaker for free speech advocates, digital rights groups, and government representatives.

“Under the Rubio bill, Americans would not have their privacy protected,” said Center for Digital Democracy Executive Director Jeff Chester, in speaking to Bloomberg. “State preemption is a non-starter as far as the consumer and privacy groups community and their allies in Congress are concerned.”

In California, the state’s attorney general also pushed back.

“For those of you following debate over data #privacy, note: We oppose any attempt to pre-empt #California’s privacy laws…” wrote Sarah Lovenheim, communications advisor to California Attorney General Xavier Becerra.

The opposition to Sen. Rubio’s bill is compounded by its slow timeline, making it impossible for lawmakers to know what specific rules they could be asked to approve in two years’ time.

The ADD demands Congress make an unknown, gameshow-style choice: Keep the data privacy protections you have, or choose what’s behind Door Number Two?

Sen. Wyden’s bill—the Consumer Data Protection Act—sets itself apart as the only bill that includes jail time consequences.

Sen. Wyden’s bill would require data-collecting companies to deliver annual reports that detail their internal privacy-protecting efforts. Those reports would need to be signed and confirmed by a high-level company executive, like a CEO or CTO. But if those executives confirm a false report, they could face jail time, the bill proposes.

The Consumer Data Protection Act would also require the FTC to set up a “Do Not Track” website where Americans could register to opt out of online tracking and third-party data sharing. Companies that fail to comply with consumers’ wishes would face fines.

This “Do Not Track” proposal is far from perfect. If a company’s requirement to get user consent clashes with that user’s Do Not Track preferences, the bill proposes a harmful compromise: Put the services behind a price tag. Paying for privacy is wrong, and, even if the bill passes, companies should refuse to engage in such a dangerous practice.

Finally, there is Sen. Schatz’s Data Care Act, which relies on a novel interpretation of corporate responsibility. The bill likens the responsibility that doctors have for their patients’ information to the responsibility that technology companies should have for user data.

“Just as doctors and lawyers are expected to protect and responsibly use the personal data they hold, online companies should be required to do the same,” Sen. Schatz said in a press release.

The bill creates rules under five broad umbrellas—the “duty to care,” the “duty of loyalty,” the “duty of confidentiality,” federal and state enforcement, and rulemaking authority by the FTC to enforce the bill.

Fifteen Senators from both parties have signed on as co-sponsors, including Sen. Klobuchar. (Sens. Rubio and Wyden have not.) Several civil rights organizations, including Free Press, EFF, and CDT, have voiced support.

“We commend Senator Schatz for tackling the difficult task of drafting privacy legislation that focuses on routine data processing practices instead of consumer data self-management,” said CDT’s Richardson in a press release.

Here, Richardson is talking about something that she and the policy team at CDT find particularly important: consent. Many of today’s data privacy bills lean heavily on the idea that clearer terms of service and more notifications and more annual reports will somehow empower consumers to make the right choices for themselves when consenting to use online platforms.

But that’s unfair, Richardson said.

“[CDT’s] biggest concern is that a lot of these proposals are a notice-and-consent model. They look at these agreements we sign and say, ‘Maybe make them clearer,’ for example,” Richardson said. “That’s doubling down on our existing system, where it’s up to individuals to micromanage their relationships with hundreds, if not thousands of companies that touch their data every day.”

So, CDT—which routinely discusses already-authored legislation with Congressmembers—took a different approach. The organization wrote its own bill.

The bill’s rules are not built on consent. Instead, CDT’s bill focuses, Richardson said, on “what are the things you can’t sign away? What are your digital civil rights?”

CDT’s bill would give US persons—including residents—the rights to access, correct, and delete data that is collected on them, along with the right to take their personal data and move it somewhere else (which is similar to a right granted in the European Union’s GDPR). The bill would also require the FTC to investigate and write rules barring discriminatory practices in online advertising.

Companies affected by CDT’s bill would be given 30 days to put mechanisms in place for users to exercise the above rights. Also, if those companies license or sell personal information to third parties, they would need to ensure that their third-party partners honor the same privacy commitments as the companies themselves.

Similar to Sen. Rubio’s bill, CDT’s bill would pre-empt state laws, but only those that focus on data privacy. Laws that deal with, say, consumer protection or data breaches, would remain intact.

As to which federal bill will prevail—it’s a bit of a tossup. Passing a bill into law is never as easy as getting the best idea forward. Big Tech is sure to lobby against any bill that would cut into its business model, and civil liberties groups could, depending on the legislation, disagree with one another about the best path forward.

Until then, CDT thinks it is taking the right approach: removing the burden from users and instead defining what their rights should look like in the future.

Richardson put it plainly: “This is a moment about having corporations treat us better.”

In our next blog in the series, we will look at data privacy compliance for businesses seeking to expand outside the US market.

The post US Congress proposes comprehensive federal data privacy legislation—finally appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Location data leaks from family tracking app database

Malwarebytes - Wed, 03/27/2019 - 16:00

An app called Family Locator, which allows family members to keep track of one another, recently experienced an exposed database issue of the worst kind. Specifically: the MongoDB database was left exposed with no password, like so many other recent infosec tales of woe. The end result was the real-time locations of about 280,000 users leaking.

For a location-tracking app that also holds information about children, this is quite the error. Map views, family maps, and push notifications to let you know where everybody is all sound great—until random people also potentially have access to them. This is the fate handed to Family Locator these past few days, although nobody knows how long the sensitive data was exposed.

What was leaked?

The Family Locator database records held names, email addresses, plaintext passwords, and photographs, along with coordinates tied to user-assigned labels such as office, home, and condo. As per the TechCrunch report, none of it was encrypted, a misstep Facebook repeated last week.

On a related note, the app’s privacy policy is rather short and to the point:

What information do we collect and how we use it

Contact information:

When you create an account, we may collect your personal information such as your username, first and last name and email address.

We may send important or promotional information about our products.

Geolocation data:

We collect your location through GPS, WiFi, or phone network in order to provide our Service.

Do we disclose any information to outside parties?

No. We do not sell, trade, or otherwise transfer to outside parties any of your personally identifiable information.

Changes to our privacy policy

We may update this policy at any time by posting changes on this page.

It seems the most urgently required change to the page is the addition of the word “whoops.”

Was there a real-world impact to this?

There absolutely was. After setting up a dummy account and verifying the accuracy of their coordinates against what was listed in the database, TechCrunch contacted one user at random, who confirmed that the location exposed in the database was correct, and that one of the family members using the app was their child.

This is, frankly, terrible, especially as TechCrunch found numerous other parent/child combinations in the database.

Did it all go wrong at this point?

You bet it did. I’ve reported hundreds of security fails over the years. I’ve had data exposure issues fixed on image hosting websites, exploits on social networking portals patched up, data hauls taken offline, outbreaks on instant messaging platforms shut down, and much more besides.

Many people working in infosec do the same thing, all the time. Security awareness, even among developers, was pretty bad a decade or more ago; reporting a problem was pretty much a case of throwing a paper plane and hoping it landed somewhere.

Things are supposed to be much better now, right?

In the case of Family Locator, they aren’t.

What happened next sounds like one of my wild goose chases from yesteryear. No useful information could be found on the site’s WHOIS record or privacy policy page (as you can see above), and zero contact information was listed on the website. TechCrunch bought business records to finally obtain a name tied to the business, but that still didn’t get them any further.

Microsoft, which hosts the MongoDB database in question, was contacted, and the database was eventually taken offline. Presumably Microsoft also contacted the app developer, but either way, the developer still hasn’t acknowledged the leaky database.

Are MongoDB breaches a thing?

Sadly, yes. MongoDB is wonderful to deploy, but people seem to lose interest at the “locking it down” stage [1], [2], [3]. Sometimes, deviations from the default configuration cause the problem; other times, nobody set a password at all. This is disappointing, given the security documentation available to help ensure everything on the server stays secure.
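For MongoDB specifically, the “locking it down” stage can be as simple as two settings in the server’s configuration file. This is an illustrative sketch of a `mongod.conf` fragment, not a complete hardening guide; exact paths and addresses will vary by deployment:

```yaml
# mongod.conf (illustrative fragment): require authentication and
# stop listening on every network interface
security:
  authorization: enabled
net:
  bindIp: 127.0.0.1   # replace with a private address as needed; never expose 0.0.0.0 unauthenticated
```

With `authorization` enabled and the listener bound to a private interface, an anonymous crawler on the public internet can no longer simply connect and dump the data.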

What now?

If you’re one of the app users caught up in these events, try not to panic. While the data was exposed, it’s most likely to be abused by marketers and scrapers, not hardened criminals. While this isn’t exactly great, it’s still better (and more probable) than “dubious stalker character uses this data to lurk near my home.” The chances of someone like that not only finding the data, but also being close enough to your location to do something with it, are remote.

It’s also a good reminder that we can’t possibly predict how secure a service is when signing up for it. The more access you give to your personal life, the more damage can be done should something go wrong later. This may not be massively reassuring, but it’s sadly where we’re at. It’s up to app developers to step up and do a better job of it.

The post Location data leaks from family tracking app database appeared first on Malwarebytes Labs.


Facebook’s plain text misstep, and other password sins

Malwarebytes - Wed, 03/27/2019 - 15:00

Two days after an article by Brian Krebs disclosed that hundreds of millions of Facebook account passwords had been stored in plain text for years, Facebook released a statement indicating they hash and salt passwords, more or less in accordance with industry best practice.

Plaintext storage of credentials is a fairly egregious security misstep, but there are plenty of other ways credential security can fail. Given the sharp increase in third-party credential mishandling in recent years, we’d like to take a look at some of the other ways companies handling user data can leave users insecure, irrespective of password length, complexity, or obscure security questions.

Outdated hashes

Not every hashing algorithm provides the same degree of security, and many large-scale third-party breaches have occurred at companies using deprecated algorithms, or in some instances, none at all. The most common algorithm for many years, MD5, was shown in 2010 to be vulnerable to single-block collisions, and is therefore inappropriate for most security uses. MD5’s weaknesses were further underlined when Microsoft disclosed that the authors of the Flame malware exploited MD5 vulnerabilities to forge a Windows certificate.

Despite these issues, MD5 remained widely used in password hashing and allowed attackers to easily crack exfiltrated credentials, as seen with the 2013 Yahoo breach. Institutional inertia in updating hashes can dramatically increase the end-user impact of a breach, as the frequent appearances of MD5 in the Have I Been Pwned list of data breaches attest. Organizations with a serious commitment to data-handling best practices should be using either bcrypt or scrypt for secure credential storage.
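To illustrate the difference, here is a minimal sketch of salted, memory-hard password hashing using Python’s standard-library `hashlib.scrypt`. The cost parameters (`n`, `r`, `p`) and example passwords are illustrative, not a tuned recommendation for production:

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Derive a slow, salted hash; each password gets its own random salt."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("letmein", salt, digest))                       # False
```

Unlike a fast, unsalted MD5 digest, the per-hash salt defeats precomputed lookup tables, and scrypt’s work factors make bulk cracking of exfiltrated hashes dramatically more expensive.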

Misconfigured servers

Cloud servers are typically a great way for enterprises to reduce costs and speed up infrastructure rollout. Security on an assortment of cloud services, however, can have sub-optimal default settings, or rely on users to implement their own protections. As a result, misconfigured servers have caused large-scale data loss in a variety of settings. (While typically referred to as a breach after the fact, that’s an inappropriate descriptor, given the lack of any fortifications to be breached.)

In February of this year, Dow Jones exposed 4.4GB of sensitive human-targeting data simply by leaving the AWS server it was stored on publicly accessible. While not indexed by search engines, threat actors with dedicated crawlers could trivially locate and exfiltrate the data. This sort of vulnerability by inaction is more common than most users think, and can be as impactful as an actual breach.

GoDaddy exposed 31,000 of its own server configurations this way, and personal information from voter databases has been repeatedly exposed, most recently in 2017. With third-party service providers forming an integral part of most organizations’ infrastructure, we should expect this sort of “breach” to increase in frequency.

Security questions (with no other validation)

Common in enterprise financial platforms, questions about a user’s personal life started as a well-meaning account validation practice that was quickly proven impractical for securing data.

Cementing its reputation for poor security, Experian was noted for an exceptionally poor implementation of security questions for account validation. While these questions are typically defined by end users and/or take unstructured input for the answers, Experian used unchangeable questions it set itself, based on its own holdings of users’ personal information (regardless of accuracy), with answers, such as past addresses or dates of birth, that were trivially searchable.

Security questions might still hold a modicum of security if they allow free input for both the question and the answer, and are not used as the primary account validation method. However, the proliferation of personal information online makes relying on security questions largely a bad call.

Using passwords at all?

What if the fundamental problem is not how passwords are handled, but that passwords are used at all? Some researchers think long-standing password security problems are baked into password-based authentication itself. Hardware-based, passwordless authentication can sidestep issues of secure credential storage and transmission by requiring a physical device to generate a time-bounded authentication token.

These schemes are typically multi-factor designs involving smart cards, USB keys, QR codes, or biometrics. While none of them resolves authentication issues entirely (how do you replace a compromised thumbprint?), they can be significantly more secure than transmitting and storing a credential pair.
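As a concrete example of a time-bounded token, here is a minimal sketch of an RFC 6238-style TOTP generator using only Python’s standard library. The shared secret below is illustrative, and a production system would use a vetted library rather than hand-rolled crypto:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Sketch of an RFC 6238-style time-based one-time password."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)          # 8-byte big-endian time counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()  # HOTP layer uses HMAC-SHA1
    offset = mac[-1] & 0x0F                                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Server and device share the secret and independently compute the same
# short-lived code, so no long-term password ever crosses the wire.
print(totp(b"shared-secret-key"))
```

Because each code is valid only for one 30-second window, an intercepted value is far less useful to an attacker than a stolen, reusable password.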

It’s not just about breaches

As seen above, password handling by third parties carries more risk than simple breach and exfiltration. Credential mishandling is often more problematic than direct data loss, and it points to fundamental design flaws in an organization’s infrastructure.

Reducing organizations’ attack surface should include a serious look at how passwords are stored, the appropriateness of an authentication scheme to a given use case, and whether your company may have outgrown passwords entirely.

The next time you hear about a third-party breach and wonder, “Is my password secure?”, you might also ask, “What was this organization doing with it in the first place?”

The post Facebook’s plain text misstep, and other password sins appeared first on Malwarebytes Labs.


