Techie Feeds

Ellen DeGeneres giveaway scam spreading on social media

Malwarebytes - Mon, 04/15/2019 - 16:14

Scammers are pushing multiple fake Facebook profiles of Ellen DeGeneres, popular US TV show host and producer, with the goal of tricking people into jumping through a few money-making hoops. This isn’t a sophisticated scam. It isn’t hacking the Gibson. It won’t be the focus of a cutting edge infosec talk. However, it’s certainly doing some damage—up to a point.

This scam is a victim of its own ambition.

What are they doing?

The profiles all have one main promotion point, claiming that Ellen has a competition on-the-go, and people entering will be fortunate enough to win all manner of cool prizes. One profile touts pictures of Ellen standing next to a car; in another, she holds aloft a giant VISA card. Many of the fakes push genuine video clips of the TV host talking about charity drives to add a little more credibility to their fakeout.

The scammers somehow managed to make clips of Ellen talking about donation efforts from viewers sound like she’s giving things away. The illusion falls apart with a little bit of thought, but as with most scams, the allure of something for nothing proves too good to resist.

What do potential victims have to do?

Some of the pages deviate from the template a little, but for the most part, the thing that gets this scam moving is the below text. It’s your standard plea to overshare the bogus offer to friends, family, and other contacts across the social network:

Surprise in the next 24 hours, I will randomly select people on Facebook, everyone who *shares* will receive a gift card, cash, and a big winner can win a car & house “Share now” don’t miss! We are watching!!! I will choose 500 lucky people, $5,000,000 each only follows instructions

Step 1- Love it

Step 2- Share

Step 3- Comment on “DONE”

I’ve shared the scam, what next?

Good question, potential fake Ellen giveaway victim.

What happens next is you’re directed to the comments section of the various posts floating around. You’ll then see one of half a dozen or so messages, roughly along the same lines:

“Hi all, you must register your name by downloading my movie click here and your name will automatically be registered”

Downloading…your movie?

Well, this took a weird turn. A few of the links lead to a blogspot page touting “Ellen Degeneres givaways 2019.”

“To become a winner, by downloading one movie, you have been registered as a winner”

Uh-huh. Weirdly, the site also claims to offer up John Wick 3, Hellboy, and Shazam, which don’t feel very Ellen-ish. Speaking of not very Ellen-ish, one of the other sites offers up those other well-known Ellen DeGeneres classics: Glass and Escape Room.

Yet another site, which appears to have fallen out of a late-1990s design wormhole, sends you elsewhere when you click the register button.

Where to next?

All of these blogs send clickers to the kind of movie sign-up portal we’ve been seeing online for some time. Suffice it to say, we won’t go over old ground, but you are absolutely not going to win any Ellen competitions by registering on any of these sites. At best, you’ll end up paying a one-off membership fee or getting locked into a rolling subscription.

That’s quite a scam daisy-chain

It is! It’s such a weirdly specific target, and so poorly thought out. Is the core demographic of Ellen fans really going to start with a cookie-cutter chain letter spam missive on Facebook, get caught up in a maze of confusing “Ellen starred in Batman Returns, you know” blogger pages, before ending up on a variety of utterly unrelated “sign up to watch this movie” portals—and then actually sign up?

Generally, most scams that use a movie sign-up site as a destination are a lot more straightforward than this: one click, BAM. Done. Even when these scams cross into strange realms, such as the fake John Wick ebooks from February, they tend to net out to a simpler process, which makes it easier to ensnare users.

This scam has more twists and turns than Ellen popping up unannounced at the end of Usual Suspects. If we had to guess, we’d say “strong opening performance, closely followed by a viewing figures nosedive.”

A captive audience?

From a cursory glance at stats available for the blogspot websites via the Bitly links, this theory would appear to be borne out. There’s a lot of sharing and commenting apparently taking place on Facebook itself, but in terms of translating to actual movie spam page clickthroughs?

Not so much. Only one of the three sites has anything approaching a regular flow of traffic, and even those are small numbers. The second site has about 1,400 clicks, but that’s spread across two spikes in February and April. The third site has a grand total of 48 clicks at the time of writing.
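For anyone who wants to sanity-check similar campaigns, public click counts for a shortened link can be pulled programmatically. Here is a minimal, hypothetical sketch against the Bitly v4 API; the access token and the bitlink below are placeholders, not anything tied to this campaign.

```python
# Hypothetical sketch: pull click totals for a shortened link via the Bitly v4 API.
# BITLY_TOKEN and BITLINK are placeholders, not values from this investigation.
import requests

BITLY_TOKEN = "YOUR_API_TOKEN"        # assumption: a valid Bitly access token
BITLINK = "bit.ly/example-link"       # assumption: the shortened link under review

resp = requests.get(
    f"https://api-ssl.bitly.com/v4/bitlinks/{BITLINK}/clicks/summary",
    headers={"Authorization": f"Bearer {BITLY_TOKEN}"},
    params={"unit": "month", "units": 3},  # roughly the window discussed above
)
resp.raise_for_status()
print(resp.json()["total_clicks"])
```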

When the daisy chain snaps

Someone had a clever idea here: focus a scam around a celebrity you wouldn’t perhaps think of being the bait, and wrap it across multiple social media profiles. In theory, it could have been a winner for the individuals behind it. However, all inventiveness began and ended with the inclusion of Ellen. In the same way innovative online fakeouts gave way to endless, dreary years of “here’s a survey scam,” those seem to have been replaced by “here’s a movie sign-up scam” instead.

What you tend to see now is the movie sign-up scams jammed into almost every social engineering con trick around. They are—just like Ellen playing Agent Smith in The Matrix—inevitable.

Cancelling the show

Ultimately, then, this is a good example of a low-level scam gone utterly off the rails. Overloading something like this with needless complexity and multiple steps sounds cool on paper, but what this actually does is help potential victims steer clear. When they get bored, or confused, or drift off, that’s bad news for the scammers, and great news for everyone else.

If you’re behind this, please: Keep up the terrible work.

The post Ellen DeGeneres giveaway scam spreading on social media appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (April 8 – 14)

Malwarebytes - Mon, 04/15/2019 - 14:42

Last week on Labs, we said hello to Baldr, a new stealer on the market; wondered who is managing the security of medical management apps; discussed the different perceptions of personal information; and looked at fake Instagram assistance apps found on Google Play that are stealing passwords.


Other cybersecurity news
  • German pharmaceuticals giant Bayer says it has been hit by malware, possibly from China, but that none of its intellectual property has been accessed. (Source: The Register)
  • Canadian police last week raided the residence of a Toronto software developer behind “Orcus RAT,” a product that has been used in countless malware attacks. (Source: Krebs on Security)
  • In response to concerns raised by the European Commission, Facebook has agreed to update its terms and conditions in the EU to make it clear to users how their personal data is used. (Source: BetaNews)
  • Three vulnerabilities have been discovered in the Verizon Fios Quantum Gateway, a very popular router which, when exploited together, could give an attacker complete control of a victim’s network. (Source: ThreatPost)
  • New variants of the sextortion scams are now attaching password-protected zip files that contain alleged proof that the sender has a video recording of the recipient. (Source: BleepingComputer)
  • Chamois, the botnet you probably never heard about before, is losing ground again after having controlled some 20 million devices at its peak. (Source: Duo Security)
  • A global Amazon team listens to what we tell Alexa and reviews audio clips in an effort to help the voice-activated assistant respond to commands. (Source: Bloomberg)
  • An attacker gained access to the servers hosting Matrix.org. The intruder potentially had access to unencrypted message data, password hashes, and access tokens. (Source: Matrix.org)
  • US-CERT issued a warning that multiple Virtual Private Network (VPN) applications store authentication and/or session cookies insecurely in memory and/or log files. (Source: Cert.org)
  • Fake news peddlers have devised a cunning new way to prevent their posts from getting removed from social media. Instead of linking to fake news, bad actors are now linking to posts promoting older news articles that may no longer be accurate, but won’t be reported as fake since they were once legitimate news. (Source: ThreatPost)

Stay safe, everyone!

The post A week in security (April 8 – 14) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Fake Instagram assistance apps found on Google Play are stealing passwords

Malwarebytes - Fri, 04/12/2019 - 17:40

We all want those Instagram likes and followers. Many apps on Google Play claim they can assist you with that effort. But what if the app that’s supposed to be helping you is also stealing your username and password? 

As a matter of fact, that’s exactly what we found in three fake Instagram assistance apps still available on Google Play at the time of this writing. Moreover, these fake apps are targeting Iranian users. Malwarebytes already detects the malicious apps as Android/Trojan.Spy.FakeInsta.

What’s in a like?

As the psychology of social media reveals how addicting it can be to receive likes and even better, followers, on platforms such as Instagram, users often look for shortcuts or other ways to game the system in order to get that rush of dopamine. 

That’s where Instagram assistance apps come into play—Google Play, that is! Apps that claim to boost your likes and increase your followers are an attractive notion, especially when building a thriving Instagram account organically can take months or even years. Malware authors are great opportunists, and there is certainly a lot of opportunity to exploit when it comes to creating account-stealing fake apps.

InstaStolen account

Let’s use an app named Followkade as a case study of this new-found Instagram credential stealer.

App Name: Followkade

Package Name: com.followkade.insta

Installs: 50,000+

Rating: 4.0 from 6,999 total reviews

As you can see, it’s a highly-rated app with thousands of downloads and reviews. Customers on Google Play looking to determine the app’s legitimacy would be none the wiser.

After install, the app opens to a splash page, and then a page asking for Instagram credentials.

I used the following to log in:

Username: test_username

Password: test_password

After opening a network scanner, I pressed Login. Along with normal login traffic to Instagram, there was some additional network traffic going on here. Take a look at the screenshots below with proof of the stolen credentials.

There it is in plain text: my test username and password being sent to a known malicious website.
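If you want to reproduce this kind of test yourself, one rough approach (a sketch, not necessarily the tooling used here) is a small mitmproxy addon that watches for planted dummy credentials leaving the device:

```python
# cred_watch.py - rough sketch of a mitmproxy addon that flags outbound requests
# containing planted dummy credentials. Run with: mitmproxy -s cred_watch.py
# The watch values mirror the throwaway test account used above.
from mitmproxy import http

WATCH_VALUES = {"test_username", "test_password"}

def request(flow: http.HTTPFlow) -> None:
    body = flow.request.get_text(strict=False) or ""
    haystack = body + flow.request.pretty_url
    for value in WATCH_VALUES:
        if value in haystack:
            print(f"[!] '{value}' is leaving the device, destination: {flow.request.pretty_host}")
```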

Insta targets

There are many apps that pose as so-called helpers piggybacking off the social media craze. Some of them are legitimate apps that might be able to help users boost likes and followers as advertised. However, malware authors can too easily mimic the above-board apps, and they bank on users’ desire to find fast validation through social media acceptance.

The other two apps that we found, LikeBegir and Aseman Security, also target Iranian users, as does Followkade. LikeBegir claims it will increase likes, help users buy cheap coins, and provide daily gifts. Aseman Security, ironically, boasts that it will boost security for your Instagram page and prevent it from being hacked.

I would imagine there aren’t a lot of Iranian Instagram assistance apps on Google Play, so it’s an easy target for malware authors from that region. In cases like these, picking a highly-rated app with a large install base isn’t enough to keep you safe.

Acknowledgement and tips

Many thanks to Malwarebytes Forum patron AmirGooran for tipping us off about the fake apps. 

If you’re looking to boost your Instagram community, it’s a lot safer to do it the old-fashioned way: by creating quality content with well-edited, creative photos. Take the time to write engaging captions with appropriate hashtags to attract others. And build your community by following and interacting with other top content creators you truly appreciate—not just using the follow for a follow model.

And if you’re interested in securing your Instagram account, once again, the old-fashioned ways win out. Be sure to use strong password credentials, which means long passwords that don’t contain easily guessable information such as birthdays or family names, and nothing that has been used for another account. We typically recommend folks use a password manager so they needn’t worry about remembering 27 different passwords. In addition, avoid using Instagram’s direct message function for communicating any confidential, important information, because it has no end-to-end encryption option whatsoever.
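As a small illustration of what “strong” means in practice, here is a sketch of the kind of long, random password a manager will generate for you, using nothing but Python’s standard library:

```python
# Minimal sketch: generate a long, random password of the sort a password manager
# would create and remember for you. Length and character set are arbitrary choices.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # unique per account; store it in a manager, not your head
```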

Read more: How do I secure my social media profile?

Like anything in life, building a respectable social media following takes work. Avoid the shortcuts: Not only do they fail at doing the things they promise—they may also take away much more than you would receive. After all, are fake likes really worth getting your personal information stolen? Stay safe out there!

The post Fake Instagram assistance apps found on Google Play are stealing passwords appeared first on Malwarebytes Labs.

Categories: Techie Feeds

What is personal information? In legal terms, it depends

Malwarebytes - Thu, 04/11/2019 - 17:03

In early March, cybersecurity professionals around the world filled the San Francisco Moscone Convention Center’s sprawling exhibition halls to discuss and learn about everything infosec, from public key encryption to incident response, and from machine learning to domestic abuse.

It was RSA Conference 2019, and Malwarebytes showed up to attend and present. Our Wednesday afternoon session—“One person can change the world—the story behind GDPR”—explored the European Union’s new, sweeping data privacy law which, above all, protects “personal data.”

But the law’s broad language—and finite, severe penalties—left audience members with a lingering question: What exactly is personal data?

The answer: It depends.

Personal data, as defined by the EU’s General Data Protection Regulation, is not the same as “personally identifiable information,” as defined by US data protection and cybersecurity laws, or even “personal information” as defined by California’s recently-signed data privacy law. Further, in the US, data protection laws and cybersecurity laws serve separate purposes and, likewise, bestow slightly separate definitions to personal data.

Complicating the matter is the public’s instinctual approach to personal information, personal data, and online privacy. For everyday individuals, personal information can mean anything from telephone numbers to passport information to postal codes—legal definitions be damned.

Today, in the latest blog for our cybersecurity and data privacy series, we discuss the myriad conditions and legal regimes that combine to form a broad understanding of personal information.

Companies should not overthink this. Instead, data privacy lawyers said businesses should pay attention to what information they collect and where they operate to best understand personal data protection and compliance.

As Duane Morris LLP intellectual property and cyber law partner Michelle Donovan said:

“What it comes down to, is, it doesn’t matter what the rules are in China if you’re not doing business in China. Companies need to figure out what jurisdictions apply, what information are they collecting, where do their data subjects reside, and based on that, figure out what law applies.”

What law applies?

The personal information that companies need to protect changes from law to law. However, even though global data protection laws define personal information in diverse ways, the definitions themselves are not important to every business.

For instance, a small company in California that has no physical presence in the European Union and makes no concerted efforts to market to EU residents does not have to worry about GDPR. Similarly, a Japanese startup that does not collect any Californians’ data does not need to worry about that state’s recently-signed data privacy law. And any company outside the US that does not collect any US personal data should not have to endure the headaches of complying with 50 individual state data breach notification laws.

Baker & McKenzie LLP of counsel Vincent Schroeder, who advises companies on privacy, data protection, information technology, and e-commerce law, said that the various rules that determine which laws apply to which businesses can be broken down into three basic categories: territorial rules, personal rules, and substantive rules.

Territorial rules are simple—they determine legal compliance based on a company’s presence in a country, state, or region. For instance, GDPR applies to companies that physically operate in any of the EU’s 28 member-states, along with companies that directly market and offer their products to EU citizens. That second rule of direct marketing is similar to another data privacy law in Japan, which applies to any company that specifically offers its products to Japanese residents.

“That’s the ‘marketplace rule,’ they call it,” Schroeder said. “If you’re doing business in that market, consciously, then you’re affecting the rights of the individuals there, so you need to adhere to the local regulatory law.” 

Substantive rules, on the other hand, determine compliance based on a company’s characteristics. For example, the newly-passed California Consumer Privacy Act applies to companies that meet any single one of the following three criteria: pull in annual revenue of $25 million, derive 50 percent or more of that annual revenue from selling consumers’ personal information, or buy, receive, sell, or share the personal information of 50,000 or more consumers, households, or devices.
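To make the “any single one of the three criteria” logic concrete, here is a toy sketch; the field names are invented for illustration, and real applicability analysis belongs with a lawyer.

```python
# Toy illustration of the CCPA applicability thresholds described above.
# Field names are invented; this is not legal advice.
def ccpa_applies(annual_revenue_usd: float,
                 share_of_revenue_from_selling_pi: float,
                 consumers_households_or_devices: int) -> bool:
    return (
        annual_revenue_usd >= 25_000_000
        or share_of_revenue_from_selling_pi >= 0.50
        or consumers_households_or_devices >= 50_000
    )

# A small business below the revenue bar can still be covered by the volume criterion.
print(ccpa_applies(3_000_000, 0.10, 60_000))  # True
```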

Businesses that want to know what personal information to legally protect should look first to which laws apply. Only then should they move forward, because “personal information” is never just one thing, Schroeder said.

“It’s an interplay of different definitions of the territorial, personal, and substantive scopes of application, and for definitions of personal data,” Schroeder said.

Personal information—what’s included?

The meaning of personal information changes depending on who you ask and which law you read. Below, we focus on five important interpretations. What does personal information mean to the public? What does it mean according to GDPR? And what does it mean according to three state laws in California—the country’s legislative vanguard in protecting its residents’ online privacy and personal data.

The public

Let’s be clear: Any business concerned with legal obligations to protect personal information should not start a compliance journey by, say, running an employee survey on Slack and getting personal opinions.

That said, public opinions on personal data are important, as they can influence lawmakers into drafting new legislation to better protect online privacy.

Jovi Umawing, senior content writer for Malwarebytes Labs who recently compiled nearly 4,000 respondents’ opinions on online privacy, said that personal information is anything that can define one person from another.

“Personal information for me is relevant data about a person that makes them unique or stand out,” Umawing wrote. “It’s something intangible that one owns or possesses that (when combined with other information) points back to the person with very high or unquestionable accuracy.”

Pieter Arntz, malware intelligence researcher for Malwarebytes, provided a similar view. He said he considers “everything that can be used to identify me or find more specific information about me as personal information.” That includes addresses, phone numbers, Social Security numbers, driver’s license info, passport info, and, “also things like the postal code,” which, for people who live in very small cities, can be revealing, Arntz said.

Interestingly, some of these definitions overlap with some of the most popular data privacy laws today.

GDPR

In 2018, the General Data Protection Regulation took effect, granting EU citizens new rights to access, transport, and delete personal data. In 2019, companies are still figuring out what that personal data encompasses.

The text of the law offers little clarity, instead providing this ocean-wide ideology: “Personal data should be as broadly interpreted as possible.”

According to GDPR, the personal data that companies must protect includes any information that can “directly or indirectly” identify a person—or subject—to whom the data belongs or describes. Included are names, identification numbers, location data, online identifiers like screen names or account names, and even characteristics that describe the “physical, physiological, genetic, mental, commercial, cultural, or social identity of a person.”

That last piece could include things like an employee’s performance record, a patient’s medical diagnosis history, a user’s specific anarcho-libertarian political views, and even a person’s hair color and length, if it is enough to determine that person’s identity.

Donovan, the attorney from Duane Morris, said that GDPR’s definition could include just about any piece of information about a person that is not anonymized.

“Even if that information is not identifying [a person] by name, if it identifies by a number, and that number is known to be used to identify that person—either alone or in combination—it could still associate with that person,” Donovan said. “You should assume that if you have any data about an individual that is not anonymized when you get it, it’s likely going to be covered.”

The California Consumer Privacy Act

In June 2018, California became the first state in the nation to respond to frequent online privacy crises by passing a comprehensive, statewide data privacy law. The California Consumer Privacy Act, or CCPA, places new rules on companies that collect California residents’ personal data.

The law, which will go into effect in 2020, calls this type of data “personal information.”

“Personal information,” according to the CCPA, is “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”

What that includes in practice, however, is a broad array of data points, including a person’s real name, postal address, and online IP address, along with biometric information—like DNA and fingerprint data—and even their browsing history, education history, and what the law vaguely describes as “audio, electronic, visual, thermal, olfactory, or similar information.”

Aside from protecting several new data types, the CCPA also makes a major change to how Californians can assert their data privacy rights in court. For the first time ever, a statewide data privacy law details “statutory damages,” which are legislatively-set, monetary amounts that an individual can ask to recover when filing a private lawsuit against a company for allegedly violating the law. Under the CCPA, people who believe their data privacy rights were violated can sue a company and ask for up to $750.

This is a huge shift in data privacy law, Donovan said.

“For the first time, there’s a real privacy law with teeth,” Donovan said.

Previously, if individuals wanted to sue a company for a data breach, they needed to prove some type of economic loss when asking for monetary damages. If, say, a fraudulent credit card was created with stolen data, and then fraudulent charges were made on that card, monetary damages might be easy to figure out. But it’s rarely that simple.  

“Now, regardless of the monetary damage, you can get this statutory damage of $750 per incident,” Donovan said.

California’s data breach notification law and data protection law

If we stay in California but go back in time several years, we see the start of a trend—California has been the first state, more than once, to pass data protection legislation.

In 2002, California passed its data breach notification law. The first of its kind in the United States, the law forced companies to notify California residents about unauthorized access to their “personal information.”

The previous definitions of personal information and data that we’ve covered—GDPR’s broad, anything-goes approach, and CCPA’s inclusion of heretofore unimagined “olfactory,” smell-based personal data—do not apply here.

Instead, personal information in the 17-year-old law—which received an update five years ago—is defined as a combination of types of information. The necessary components include a Californian’s first and last name, or first initial and last name, paired up with things like their Social Security number, driver’s license number, and credit card number and corresponding security code, along with an individual’s email address and password.

So, if a company suffers a data breach of a California resident’s first and last name plus their Social Security number? That’s considered personal information. If a data breach compromises another California resident’s first initial, last name, and past medical insurance claims? Once again, that data is considered personal information, according to the law.
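The “combination” logic is easy to express in code. The sketch below only illustrates the pattern described above; the field names are invented, and the statute’s full list of data elements is longer.

```python
# Toy sketch of the combination rule: a name element plus at least one sensitive element.
# Field names are invented for illustration; consult the statute for the full list.
SENSITIVE_FIELDS = {"ssn", "drivers_license", "card_number_with_security_code",
                    "medical_info", "email_with_password"}

def is_personal_information(breached_fields: set) -> bool:
    has_name = "last_name" in breached_fields and (
        "first_name" in breached_fields or "first_initial" in breached_fields)
    return has_name and bool(breached_fields & SENSITIVE_FIELDS)

print(is_personal_information({"first_initial", "last_name", "medical_info"}))  # True
print(is_personal_information({"first_name", "last_name", "favorite_color"}))   # False
```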

In 2014, this definition carried somewhat over into California’s data protection law. That year, then-California governor Jerry Brown signed changes to the state’s civil code that created data protection requirements for any company that owns, licenses, or maintains the “personal information” of California residents.

According to Assembly Bill No. 1710, “personal information” is, once again, the combination of information that includes a first name and last name (or first initial and last name), plus a Social Security number, driver’s license number, credit card number and corresponding security number, and medical information and health information.

The definitions are not identical, though. California’s data protection law, unlike its data breach notification law, does not cover data collected by automated license plate readers, or ALPRs. ALPRs can indiscriminately—and sometimes disproportionately—capture the license plate numbers of any vehicles that cross into their field of vision.

Roughly one year later, California passed a law to strengthen protections of ALPR-collected data.

The takeaway

By now, it’s probably easier to define what personal information isn’t rather than what it is (obviously, there is a legal answer to that, too, but we’ll spare the details). These evolving definitions point to a changing legal landscape, where data is not protected solely because of its type, but because of its inherent importance to people’s privacy.

Just as there is no one-size-fits-all definition to personal information, there is no one-size-fits-all to personal data protection compliance. If a company finds itself wondering what personal data it should protect, may we suggest something we have done for every blog in this series: Ask a lawyer.

Join us again soon for the next blog in our series, in which we will discuss consumer protections for data breaches and online privacy invasions.  

The post What is personal information? In legal terms, it depends appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Who is managing the security of medical management apps?

Malwarebytes - Wed, 04/10/2019 - 15:00

One truth that is consistent across every sector—be it technology or education—is that software is vulnerable, which means that any device running software applications is also at risk. While virtually any application-running device could be compromised by an attacker, vulnerabilities in medical management apps pose a unique and more dangerous set of problems.

Now add to vulnerabilities the issue of data privacy, especially that of sensitive medical information, and you have a perfect storm.

In a recent report, Data sharing practices of medicines related apps and the mobile ecosystem: traffic, content, and network analysis, published by BMJ, researchers analyzed the top-rated Android apps for medicine management and found that 19 out of the 24 tested apps shared user data outside of the app.

Because medical records are such a lucrative data set, attackers often target the healthcare industry, seeking out and eventually finding the weakest link in the supply chain. That’s why it’s important for stakeholders to consider the broader implications of weaknesses in health and medical apps.

According to the US Food & Drug Administration (FDA), medical apps that pose risks to patient health and safety have been regulated since 1997. “While many mobile apps carry minimal risk, those that can pose a greater risk to patients will require FDA review.”

As medical management apps offer the convenience of care at home, some devices have become directly intertwined with patient care. While some apps may only offer benign image-processing services, others may include data on test results, appointments, drug refills, and more. This is why the FDA categorizes medical apps by risk.

What could go wrong?

Security concerns come not necessarily from the app itself, but from third parties that are creating the apps that interface with that data. “Developers relied on the services of infrastructure related third parties to securely store or process user data, thus the risks to privacy are lower. However, sharing with infrastructure related third parties represents additional attack surfaces in terms of cybersecurity,” the BMJ report said.

“Furthermore, the presence of trackers for advertising and analytics, uses additional data and processing time and could increase the app’s vulnerability to security breaches.”

Data that sits on any app or database can be compromised, but medical management apps are home to a trove of private information and different types of proprietary data, as well as whatever the healthcare provider has interfacing with that app, according to penetration tester Mike Jones.

“From what I’ve experienced with medical management apps, the risks are through the roof because the apps are not under the same regulations as the Health Insurance Portability and Accountability Act (HIPAA). When you look at the amount of data that any kind of home health or medical service offers, if it is managed through an app, one of the biggest concerns is data leakage.”

Sharing and selling data might be a new reality in today’s digital, research-driven world, but it’s important to first strip the data of its context so that patient privacy is not interfered with. Yet, sharing and securing data don’t have to be mutually exclusive concepts, said Warren Poschman, senior solutions architect at comforte AG.

“Want to know what meds I’m taking or what procedures I’ve had so it can be cross referenced and insights gained? Absolutely! Want to know that it was me specifically that takes that medication or has had those procedures? Absolutely not! Regulatory bodies need to start ensuring that companies anonymize the data so that it can be safely used no matter where it travels to.”
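One common way to “strip the data of its context” is pseudonymization: replace direct identifiers with a keyed hash so records can still be linked for research without naming the patient. A minimal sketch, with a made-up key and record, is below; real deployments also have to handle key management and re-identification risk.

```python
# Minimal pseudonymization sketch: swap the patient identifier for a keyed hash before
# sharing. The key and record are made up; key handling and re-identification risk
# are real problems this toy example does not address.
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-data-custodian"

def pseudonymize(patient_id: str) -> str:
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "jane.doe.1984", "medication": "metformin", "dose_mg": 500}
shared = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(shared)  # same medical facts, no direct identifier
```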

Risk extends beyond the medical data

Perhaps even more concerning than an attacker being able to access the data collected or stored on these apps is the reality that if a malicious actor tampers with them, patients can get the wrong medications or medications could be diverted to different places, Jones said.

In Hacking the Hospital, a two-year study that evaluated cybersecurity risks in hospitals, Independent Security Evaluators (ISE) found two different web applications through which an adversary could remotely “deploy attacks that target and compromise patient health. We demonstrated that a variety of deadly remote attacks were possible within these facilities,” the report said. That was in 2016.

Fast forward three years, and ISE executive partner Ted Harrington remains concerned about the risks to patient safety with medical management apps.

“What is critically important is that these solutions ensure that the appropriate amount of medicine goes to the right patient.”

When it comes to patient safety, the healthcare industry has established practices of redundancies, but these practices have largely been influenced by regulations. Highly-regulated industries are motivated to make changes in order to be compliant, but compliance isn’t synonymous with security, Harrington said.

Though many medical apps are regulated by the FDA, medical management apps don’t fall under HIPAA regulations, and those established practices that ensure patient safety among the providers and staff aren’t usually extended to software.

Still, there are a variety of direct and indirect implications for those that are responsible for delivering care if medical apps are compromised in any way.

“The delivery of care relies heavily on technology, which needs to be accurate,” Harrington said. “If there were instances that demonstrated these solutions are inaccurate, that could undermine faith in technology, and that can negatively impact things like the speed at which professionals can deliver care. Speed is second only to accuracy in the delivery of care.”

Where do apps go from here?

It’s a question to which there is no single, clear answer. The complexities and speed of innovation have created formidable obstacles when it comes to the security of medical and health apps.

As technology advances, more developers are relying on artificial intelligence and machine learning in software, “deriving new and important insights from the vast amount of data generated during the delivery of health care every day. Medical device manufacturers are using these technologies to innovate their products to better assist health care providers and improve patient care,” according to the FDA.

These changes in technology also drive the evolution of regulations, which Jones said have to ensure security throughout the development lifecycle. The FDA is, in fact, “considering a total product lifecycle-based regulatory framework for these technologies that would allow for modifications to be made from real-world learning and adaptation, while still ensuring that the safety and effectiveness of the software as a medical device is maintained.”

Greater than good intentions

Without falling victim to fear, uncertainty, and doubt, there is reality to the belief that medical management apps can be the difference between life and death. To shift the focus from compliance to security, Harrington said, “We need to understand technology the way an attacker would understand it. How would a hacker exploit this technology? So, you start with building out a threat model.”

Not all hackers are financially motivated, which is why it’s also important to perform a security assessment that goes beyond running a scanner. “That’s ineffective,” said Harrington. “You need to go deeper, as deep as an attacker would.”

Increasingly, more security-minded professionals are advocating for developers to take more personal responsibility. I am the Cavalry, for example, recently published The Case for a Hippocratic Oath for Connected Medical Devices: Viewpoint in the Journal of Medical Internet Research (JMIR), in which the authors ask whether manufacturers and adopters of these connected technologies should be governed by the symbolic spirit of the Hippocratic Oath.

“The idea of holding developers responsible is in the right spirit,” Harrington said. After all, if a bridge collapses and an investigation finds that it was structurally deficient, contractors, inspectors, maintenance, and even the engineers who designed the bridge can be charged with negligence. Should not the same be true of those that build the technology that bridges the gap between medical professionals and patients?

The post Who is managing the security of medical management apps? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Say hello to Baldr, a new stealer on the market

Malwarebytes - Tue, 04/09/2019 - 15:00

By William Tsing, Vasilios Hioureas, and Jérôme Segura

Over the past few months, we have noticed increased activity and development of new stealers. Unlike many banking Trojans that wait for the victim to log into their bank’s website, stealers typically operate in grab-and-go mode. This means that upon infection, the malware will collect all the data it needs and exfiltrate it right away. Because such stealers are often non-resident (meaning they have no persistence mechanism), unless they are detected at the time of the attack, victims will be none the wiser that they have been compromised.

This type of malware is popular among criminals and covers a greater surface than more specialized bankers. On top of capturing browser history, stored passwords, and cookies, stealers will also look for files that may contain valuable data.

In this blog post, we will review the Baldr stealer which first appeared in underground forums in January 2019, and was later seen in the wild by Microsoft in February.

Baldr on the market

Baldr is likely the work of three threat actors: Agressor for distribution, Overdot for sales and promotion, and LordOdin for development. Appearing first in January, Baldr quickly generated many positive reviews on most of the popular clearnet Russian hacking forums.

Previously associated with the Arkei stealer (seen below), Overdot posts a majority of advertisements across multiple message boards, provides customer service via Jabber, and addresses buyer complaints in the reputational system used by several boards.

Of interest is a forums post referencing Overdot’s previous work with Arkei, where he claims that the developers of both Baldr and Arkei are in contact and collaborate on occasion.

Unlike most products posted on clearnet boards, Baldr has a reputation for reliability, and it also offers relatively good communication with the team behind it.

LordOdin, also known as BaldrOdin, has a significantly lower profile in conjunction with Baldr, but will monitor and like posts surrounding it.

He primarily posts to differentiate Baldr from competitor products like Azorult, and vouches that Baldr is not simply a reskin of Arkei:

Agressor/Agri_MAN is the final player appearing in Baldr’s distribution:

Agri_MAN has a history of selling traffic on Russian hacking forums dating back roughly to 2011. In contrast to LordOdin and Overdot, he has a more checkered reputation, showing up on a blacklist for chargebacks, as well as getting called out for using sock puppet accounts to generate good reviews.

Using the alternate account Agressor, he currently maintains an automated shop to generate Baldr builds at service-shop[.]ml. Interestingly, Overdot makes reference to an automated installation bot that is not connected to them, and is generating complaints from customers:

This may indicate Agressor is an affiliate and not directly associated with Baldr development. At press time, Overdot and LordOdin appear to be the primary threat actors managing Baldr.

Distribution

In our analysis of Baldr, we collected a few different versions, indicating that the malware has short development cycles. The latest version analyzed for this post is version 2.2, announced March 20:

We captured Baldr via different distribution chains. One of the primary vectors is the use of Trojanized applications disguised as cracks or hack tools. For example, we saw a video posted to YouTube offering a program to generate free Bitcoins, but it was in fact the Baldr stealer in disguise.

We also caught Baldr via a drive-by campaign involving the Fallout exploit kit:

Technical analysis (Baldr 2.2)

Baldr’s high-level functionality is relatively straightforward, providing a small set of malicious abilities in the version analyzed here. There is nothing groundbreaking as far as what it’s trying to do on the user’s computer; where this threat differentiates itself, however, is in its extremely complicated implementation of that logic.

Typically, it is quite apparent when a malware is thrown together for a quick buck vs. when it is skillfully crafted for a long-running campaign. Baldr sits firmly in the latter category—it is not the work of a script kiddie. Whether we are talking about its packer usage, payload code structure, or even its backend C2 and distribution, it’s clear Baldr’s authors spent a lot of time developing this particular threat.

Functionality overview

Baldr’s main functionality can be broken down into five steps, which are completed in chronological order.

Step 1: User profiling

Baldr starts off by gathering a list of user profiling data. Everything from the user account name to disk space and OS type is enumerated for exfiltration.

Step 2: Sensitive data exfiltration

Next, Baldr begins cycling through all files and folders within key locations of the victim computer. Specifically, it looks in the user AppData and temp folders for information related to sensitive data. Below is a list of key locations and application data it searches:

AppData\Local\Google\Chrome\User Data\Default
AppData\Local\Google\Chrome\User Data\Default\Login Data
AppData\Local\Google\Chrome\User Data\Default\Cookies
AppData\Local\Google\Chrome\User Data\Default\Web Data
AppData\Local\Google\Chrome\User Data\Default\History
AppData\Roaming\Exodus\exodus.wallet
AppData\Roaming\Ethereum\keystore
AppData\Local\ProtonVPN
Wallets\Jaxx Liberty\
NordVPN\
Telegram
Jabber
TotalCommander
Ghisler

Many of these data files range from simple sqlite databases to other types of custom formats. The authors have a detailed knowledge of these target formats, as only the key data from these files is extracted and loaded into a series of arrays. After all the targeted data has been parsed and prepared, the malware continues onto its next functionality set.
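For defenders, the list above doubles as an audit checklist. The sketch below (our own illustration, not Baldr’s code) simply checks which of those artifact locations exist on a Windows machine, which is roughly what a stealer like this would find if it landed there; the paths are approximations of the list above.

```python
# Defender-side sketch (not Baldr's code): check which stealer-targeted artifacts exist
# on this Windows machine. Paths approximate the locations listed above.
import os
from pathlib import Path

TARGETS = [
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Login Data",
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Cookies",
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Web Data",
    r"%APPDATA%\Exodus\exodus.wallet",
    r"%APPDATA%\Ethereum\keystore",
]

for raw in TARGETS:
    path = Path(os.path.expandvars(raw))
    print(("present  " if path.exists() else "absent   ") + str(path))
```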

Step 3: ShotGun file grabbing

DOC, DOCX, LOG, and TXT files are the targets in this stage. Baldr begins in the Documents and Desktop directories and recursively iterates all subdirectories. When it comes across a file with any of the above extensions, it simply grabs the entire file’s contents.

Step 4: ScreenCap

In this last data-gathering step, Baldr gives the controller the option of grabbing a screenshot of the user’s computer.

Step 5: Network exfiltration

After all of this data has been loaded into organized and categorized arrays/lists, Baldr flattens the arrays and prepares them for sending through the network.

One interesting note is that there is no attempt to make the data transfer more inconspicuous. In our analysis machine, we purposely provided an extreme number of files for Baldr to grab, wondering if the malware would slowly exfiltrate this large amount of data, or if it would just blast it back to the C2.

The result was one large and obvious network transfer. The malware does not have built-in functionality to remain resident on the victim’s machine. It has already harvested the data it desires and does not care to re-infect the same machine. In addition, there is no spreading mechanism in the code, so in a corporate environment, each employee would need to be manually targeted with a unique attempt.

Packer code level analysis

We will begin with the payload obfuscation and packer usage. This version of Baldr starts off as an AutoIt script built into an exe. Using a freely available AutoIt decompiler, we got to the first stage of the packer below.

As you can see, this code is heavily obfuscated. The first two functions are the main workhorse of that obfuscation. What is going on here is simply a reordering of the provided string according to the indexes passed in as the second parameter. This, however, does not pose much of a problem, as we can easily extract the generated strings by modifying this script to ConsoleWrite out the deobfuscated strings before returning:

The resulting strings extracted are below:

Execute BinaryToString @TempDir @SystemDir @SW_HIDE @StartupDir @ScriptDir @OSVersion @HomeDrive @CR @ComSpec @AutoItPID @AutoItExe @AppDataDir WinExists UBound StringReplace StringLen StringInStr Sleep ShellExecute RegWrite Random ProcessExists ProcessClose IsAdmin FileWrite FileSetAttrib FileRead FileOpen FileExists FileDelete FileClose DriveGetDrive DllStructSetData DllStructGet DllStructGetData DllStructCreate DllCallAddress DllCall DirCreate BinaryLen TrayIconHide :Zone.Identifier kernel32.dll handle CreateMutexW struct* FindResourceW kernel32.dll dword SizeofResource kernel32.dll LoadResource kernel32.dll LockResource byte[ VirtualAlloc byte shellcode [
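To illustrate the reordering scheme in plainer terms, here is one plausible reading of it in Python; the exact index convention in the real script may differ, and the sample values are invented.

```python
# One plausible reading of the index-based string deobfuscation described above:
# the obfuscated string is the real string with its characters shuffled, and the
# second parameter lists where each output character comes from. Values are invented.
def reorder(scrambled: str, indexes: list) -> str:
    return "".join(scrambled[i] for i in indexes)

print(reorder("cuteExe", [4, 5, 6, 0, 1, 2, 3]))  # -> 'Execute'
```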

In addition to these obvious function calls, we also have a number of binary blobs which get deobfuscated. We have included only a limited set of these strings as to not overload this analysis with long sets of data.

We can see that it is pulling and decrypting a resource DLL from within the main executable, which will be loaded into memory. This makes sense after analyzing a previous version of Baldr that did not use AutoIt as its first stage. The prior versions of Baldr required a secondary file named Dulciana. So, instead of using AutoIt, the previous versions used this file containing the encrypted bytes of the same DLL we see here:

Moving forward to stage two, all things essentially remain equal throughout all versions of the Baldr packer. We have the DLL loaded into memory, which creates a child process of the main Baldr executable in a suspended state and proceeds to hollow this process, eventually replacing it with the main .NET payload. This makes manual unpacking with OllyDbg easier, because after we break on the child Baldr.exe load, we can step through the remaining code of the parent, which writes to process memory and eventually calls ResumeThread().

As you can see, once the child process is loaded, the functions that it has set up to call contain VirtualAlloc, WriteProcessMemory, and ResumeThread, which gives us an idea of what to look out for. If we dump this written memory right before ResumeThread is called, we can then easily extract the main payload.

Our colleague @hasherezade has made this step-by-step video of unpacking Baldr:

Payload code analysis

Now that we have unpacked the payload, we can see the actual malicious functionality. However, this is where our troubles began. For the most part, malware written in any interpreted language is a relief for a reverse engineer as far as ease of analysis goes. Baldr, on the other hand, managed to make the debugging and analysis of its source code a difficult task, despite being written in C#.

The code base of this malware is not straightforward. All functionality is heavily abstracted, encapsulated in wrapper functions, and spread across a ton of utility classes. Going through this code base of around 80 separate classes and modules, it is not easy to see where the key functionality lies. Multiple static passes over the code base are necessary to begin making sense of it all. Add in the fact that the function names have been mangled and junk instructions are inserted throughout the code, and the next step would be to start debugging the exe with DnSpy.

Now we get to our next problem: threads. Every minute action that this malware performs is executed through a separate thread. This was obviously done to complicate the life of the analyst. It would be accurate to say that there are over 100 unique functions being called inside of threads throughout the code base. This does not include the threads being called recursively, which could become thousands.

Luckily, we can view local data as it is being written, and eventually we are able to locate the key sections of code:

The function pictured above gathers the user’s profile, as mentioned previously. This includes the CPU type, computer name, user accounts, and OS.

After the entire process is complete, it flattens the arrays storing this data, resulting in a string like this:

The next section of code shows one of the many enumerator classes used to cycle directories, looking for application data, such as stored user accounts, which we purposely saved for testing.

The data retrieved was saved into lists in the format below:

In the final stage of data collection, we have the threads below, which cycle the key directories looking for txt and doc files. It will save the filename of each txt or doc it finds, and store the file’s contents in various arrays.

Finally, before we proceed to the network segment of the malware, we have the code section performing the screen captures:

Class 2d10104b function 1b0b685() is one of the main modules that branches out to do the majority of the functionality, such as looping through directories. Once all data has been gathered, the threads converge and the remaining lines of code continue single threaded. It is then that the network calls begin and all the data is sent back to the C2.

The zipped data is encrypted via XOR with a 4-byte key and a version number obtained by contacting the C2 in a first network request. The second request sends the ciphered data back to the C2.
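For analysts working from a packet capture, undoing that XOR layer is straightforward once the 4-byte key is known. A minimal sketch is below; the key bytes and file names are placeholders, and the output still has to be unzipped to reach the stolen files.

```python
# Minimal sketch: strip the repeating 4-byte XOR layer from a captured exfiltration blob.
# Key bytes and file names are placeholders; the result is still a zip archive.
from itertools import cycle

def xor_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, cycle(key)))

key = bytes.fromhex("1a2b3c4d")              # placeholder 4-byte key recovered from the C2 exchange
with open("exfil_blob.bin", "rb") as f:      # placeholder blob carved out of the pcap
    plaintext_zip = xor_decrypt(f.read(), key)

with open("exfil_blob.zip", "wb") as f:
    f.write(plaintext_zip)
```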

Panel

Like other stealers, Baldr comes with a panel that allows the customers (criminals that buy the product) to see high-level stats, as well as retrieve the stolen information. Below is a panel login page:

And here, in a screenshot posted by the threat actor on a forum, we see the inside of the panel:

Final analysis

Baldr is a solid stealer that is being distributed in the wild. Its author and distributor are active in various forums to promote and defend their product against critics. During a short time span of only a few months, Baldr has gone through many versions, suggesting that its author is fixing bugs and interested in developing new features.

Baldr will have to compete against other stealers and differentiate itself. However, the demand for such products is high, so we can expect to see many distributors use it as part of several campaigns.

Malwarebytes users are protected against this threat, detected as Spyware.Baldr.

Thanks to S!Ri for additional contributions.

Indicators of compromise

Baldr samples

5464be2fd1862f850bdb9fc5536eceafb60c49835dd112e0cd91dabef0ffcec5 -> version 1.2
1cd5f152cde33906c0be3b02a88b1d5133af3c7791bcde8f33eefed3199083a6 -> version 2.0
7b88d4ce3610e264648741c76101cb80fe1e5e0377ea0ee62d8eb3d0c2decb92 -> version 2.2
8756ad881ad157b34bce011cc5d281f85d5195da1ed3443fa0a802b57de9962f -> version 2.2 (unpacked)

Network traces

hwid={redacted}&os=Windows%207%20x64&file=0&cookie=0&pswd=0&credit=0&autofill=0&wallets=0&id=BALDR&version=v1.2.0
hwid={redacted}&os=Windows%207%20x64&file=0&cookie=0&pswd=0&credit=0&autofill=0&wallets=0&id=BALDR&version=v2.0

The post Say hello to Baldr, a new stealer on the market appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (April 1 – 7)

Malwarebytes - Mon, 04/08/2019 - 15:52

Last week, Malwarebytes Labs took readers on a brief tour of some of the world’s most notable data privacy laws, explored how gamers can protect themselves against cyberthreats, and offered thoughts about the reports that a 23-year-old Chinese woman gained access to President Donald Trump’s Mar-a-Lago resort while carrying four cellphones, a hard drive, a laptop, and a thumb drive that was “infected” with malware.

We also provided an in-depth look into the importance of cybersecurity in critical public infrastructure, like water management plants and power plants.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (April 1 – 7) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Was this really an attempt by the Chinese?

Malwarebytes - Wed, 04/03/2019 - 15:43

Last weekend, during President Trump’s visit to the Mar-a-Lago resort, a 23-year-old Chinese woman attempted to gain access to the Florida resort by lying and bluffing her way in. After some discussion at the gate, she was escorted to the resort’s reception, where it was discovered that she was not on the list of people allowed to enter.

According to the report, a search of her belongings showed she was carrying four cellphones, a hard drive, a laptop, and a thumb drive that was found to be infected with malware.

The word infected was emphasized by us because it raises an important question. A thumb drive can have malware on it that is inactive; the malware can be deployed when the carrier is able to connect it to a target system. But it can also have malware on it that will deploy automatically once it is connected to a system, as we have seen with USB drives dropped in the parking lot of a corporation that a threat actor wants to infiltrate. The third option is that the thumb drive is infected without the knowledge of the carrier. We sometimes see an old worm resurface that has infected the root of a thumb drive and consequently infects the system it is connected to. These are usually older worms that were widely spread and get a second chance when someone finds and uses an old USB stick.

As you can see, it is very important to know which of these scenarios is true here. Given the circumstances, we are led to believe that the first scenario might be true.

But even if this is true, this seems like an amateur attempt that we should not be too quick to attribute to the Chinese government or one of its APT groups. While it is true that Russian and Chinese attempts to gain access to important information are getting more overt, this one seems to be of a less professional nature. We will have to wait and see. Ms. Zhang has a detention hearing on April 8 and an arraignment on April 15, so hopefully we will learn more then.

According to Malwarebytes’ expert on China and APT groups William Tsing:

Although China has a long history of manipulating members of the Chinese diaspora towards espionage goals, we lack sufficient information at this time to conclude definitively that Zhang was engaged as an intelligence collector.  What we can say for sure is that businesses at high risk of cyber attack – such as Mar a Lago – can take measures to lower their risk profile.  Knowing your customers, and what legitimate business activity looks like, can assist in spotting fraudulent or dangerous behavior.  Empowering employees to challenge or alert to suspicious activity can stop an attack in its tracks.  Lastly, hotels of any sort are functionally impossible to secure well due to their transient population, and should not be the location of any sensitive or significant business transactions.

What we do know is that Secret Service agents at the gate verified that the last name on the passport she presented matched that of one of the club members, so when she claimed she wanted to use the pool, she was escorted to the front desk. There she showed an invitation – in Chinese – for a United Nations friendship event. No one present could read the invitation, but no such event was scheduled, so Ms. Zhang was questioned and eventually detained.

President Trump was not at the resort at the moment this went down, but he was playing golf at a nearby facility.

The post Was this really an attempt by the Chinese? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

How gamers can protect against increasing cyberthreats

Malwarebytes - Wed, 04/03/2019 - 15:00

A few years ago, cybersecurity scryers predicted that the video gaming industry would be the next big target of cybercriminals. Whether this will come true in the future or not, the average gamer may have little to no idea of what awaits them, much less be prepared for it.

In fact, while generally more technically adept than the average Joe, most gamers lack familiarity with risks they could encounter while gaming or browsing the web for game-related content. For the majority of US households, this takes place on devices such as the personal computer, smartphone, and the dedicated gaming console.

Factoring in the gaming industry’s steady growth since 2011—the changes in consumer gaming perception, habits, and appetite for new content, tech, and accessories—and the expectation that, despite a foreseen nominal dip, the industry will still hit high marks on sales at end of year, it is more crucial than ever to educate gamers on cybersecurity best practices. This includes the various threats gamers may encounter online, their real-world consequences, and what they can do to protect themselves.

While a lot has changed in the gaming industry in the last five years, most of the tried-and-tested tactics of ensnaring the unfamiliar (and oftentimes, the experienced) are still around, causing panic and making headlines.

So, without further ado, here are the risks every gamer—on a PC, mobile, or gaming console—should keep an eye out for.

Malware and potentially unwanted programs (PUPs)

Malware and PUPs have been the top-of-mind threats to online gamers, and for good reason. They come in many, many forms—key generators; game cracks; trainers; fake mobile game apps [1][2], game installers, clients/launchers, and audio protocol; game hacks; cheat files [1][2]; infected or risky mods; unofficial game patches; bogus emulators—you name it. At this point, it should come as no surprise that every conceivable piece of software related to gaming might have a malicious equivalent in the wild.

Malware doesn’t appear only as applications; it can also be embedded in image files. In 2016, cybercriminals were found to have hidden a Trojan in image files in over 60 Android apps using steganography. Perhaps even more surprising, cryptomining code was included in Abstractism, a platformer once peddled on Steam that was eventually pulled from the store after a flood of complaints.
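For readers unfamiliar with the term, here is a minimal, purely illustrative sketch of least-significant-bit (LSB) steganography in Python, the textbook idea behind hiding data inside an image. It assumes the Pillow library is available and is in no way a reconstruction of the code used in those Android apps.

from PIL import Image  # assumes the Pillow library is installed

def embed_lsb(cover_path: str, payload: bytes, out_path: str) -> None:
    """Hide each payload bit in the least significant bit of a pixel's red channel."""
    img = Image.open(cover_path).convert("RGB")
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    pixels = list(img.getdata())
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this cover image")
    stego = [((r & ~1) | bits[i], g, b) if i < len(bits) else (r, g, b)
             for i, (r, g, b) in enumerate(pixels)]
    img.putdata(stego)
    img.save(out_path, "PNG")  # a lossless format, so the hidden bits survive

To the naked eye the saved image looks identical to the original; only code that knows where to look can pull the hidden bytes back out.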

Malicious binaries can also exploit software vulnerabilities, the way the TeslaCrypt ransomware did when, with the aid of several known exploit kits, it took advantage of unpatched Adobe Flash Player programs.

Lastly, malware can affect gamers when they connect to infected servers. In the report, Study of the Belonard Trojan, exploiting zero-day vulnerabilities in Counter-Strike 1.6, security experts at Russian antivirus firm Doctor Web investigated Belonard, a Trojan that takes advantage of weaknesses in both Steam and pirated versions of Counter-Strike 1.6 (CS 1.6).

Once infected with Belonard, gamers’ systems become part of a botnet, which is then used to promote and market other potentially malicious servers.

Survey scams

We sometimes wonder how a tactic this old can stick around for so long, and we find the answer in a longstanding phishing truism: It works.

Survey scammers immediately jumped on the Far Cry 5 craze by offering “free” copies of the game after it was released in Q2 2018. Users who are led more by their desire to score a free triple-A title than to protect their data unknowingly sign up for a service that purports to offer “unlimited movies,” only to give away their email addresses, receive even more offers they don’t want, and realize in the end that they never got what was promised.

A similar flocking happened when Grand Theft Auto V (GTA V) came out. Many scammers used YouTube to market their so-called money generators, which are actually survey scams, to nudge gamers into giving away their personally identifiable information (PII) or downloading a potentially malicious file.

Let’s also not forget the amount of scammery that went down when Pokemon Go reached peak hype.

Phishing scams

Steam users are probably more than familiar with the times when phishers used squatted domains to lure them into giving out their credentials to Steam or their favorite third-party trading site, like CS:GO Lounge.

sleamcummunity.com and steamcornmunity.com were just two of several new domains that popped up, made to look like a Steam Community page, and used in several campaigns aiming to harvest Steam accounts. We believed that the stolen accounts could be used to lead more Steam users into giving away their credentials as well.

Similarly, a fake CS:GO Lounge domain was registered that mimicked the real trading and bidding site. The criminals behind it were also after Steam credentials. To rub salt in the wound, they even added a Trojan that pretended to be a Steam activation file.
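Lookalike domains of this kind can often be caught automatically. Below is a minimal sketch of one way a defender might flag names that are close to, but not exactly, a known-good domain; the allowlist and similarity threshold are placeholders for illustration, not a vetted detection rule.

from difflib import SequenceMatcher

KNOWN_GOOD = ["steamcommunity.com", "csgolounge.com"]  # illustrative allowlist

def looks_like_lookalike(domain: str, threshold: float = 0.85) -> bool:
    """Return True for domains that nearly, but not exactly, match a known-good name."""
    domain = domain.lower().rstrip(".")
    for good in KNOWN_GOOD:
        similarity = SequenceMatcher(None, domain, good).ratio()
        if domain != good and similarity >= threshold:
            return True
    return False

for candidate in ["sleamcummunity.com", "steamcornmunity.com", "example.com"]:
    print(candidate, looks_like_lookalike(candidate))  # True, True, False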

A phishing campaign targeting PS4 users. Not a particularly good one.

Account takeover (ATO)

An account takeover is the result of credential fraud caused by phishing, hacking, or a data breach. Anyone maintaining an account online is at risk.

Ubisoft, the company behind Assassin’s Creed and the Tom Clancy brand, was compromised in 2013. While the company was mum about how it happened, one of our experts hinted that an employee may have been spear-phished, allowing the criminals to gain access to their internal network. Ubisoft prompted its users to quickly change their passwords.

Employees aren’t the only likely targets of those with nefarious intentions. Game developers’ forums are also at risk. Bohemia Interactive’s DayZ forum was compromised, with hackers accessing and downloading usernames, passwords, and email addresses.

Ad flooding and malvertising

Ads, whether showcased on websites or apps, are perceived as more of an annoyance than a threat by normal users. But when they become too aggressive, Malwarebytes characterizes them as adware.

Mobile users who enjoy playing free games can probably attest that they can tolerate ads—they’re usually not in the way of the game they’re playing anyway. But if ads are more prevalent than the actual game, then expect to hear users complain. A lot.

Of course, some ads also contain malvertising, which opens up the angle that ads can be used as infection vectors to reach users who aren’t usually bothered by them.

Cyberbullying

Not all threats video gamers encounter online are after their information or their money. Some are after them, their reputation, their peace of mind. We implore every gamer to be wary of the items below as much as the items above because they can cause mental and emotional damage, rather than financial.

According to the US Department of Health and Human Services, the division that maintains the stopbullying.gov website, online bullying includes flaming, harassment, exclusion, denigration, outing/shaming caused by deception or pretension, and doxing. Nude photo sharing or revenge porn can also be considered a form of cyberbullying.

Cyberbullying can happen to gamers while interacting online, whether that’s using voice features of multiplayer games, or in forums or other chat functions of gaming platforms.

We’ve covered the topic of cyberbullying on several occasions, especially during events like the National Cybersecurity Awareness Month (NCSAM). We shared tech that could help curb cyberbullying, statistics on online bullying trends, and demystified the myths surrounding this act. It pays to go back and read these posts.

Trolling/griefing

Trolling can be both fun and funny. At least at first. But after the raucous laughter dies down to a chuckle, gamers eventually decide to get serious and carry on.

Except they can’t.

Because sometimes that troll continues to stand in the open doorway doing jumping jacks, preventing gamers from going to the next room and advancing in gameplay.

This was what happened after Ubisoft officially released Tom Clancy’s The Division.

On the other hand, griefing—deliberately bringing grief to other players by ruining their overall experience—is alive and well in Elite Dangerous, a space exploration simulation. For one of its players, Commander DoveEnigma13, the end goal was to reach a distant star system called Colonia. It may have been his last chance to make the trip, as he had been battling a terminal illness for at least three years. So, with other Elite players, his daughter, and Frontier (the game’s developer) helping make the voyage a success, the Enigma Expedition was born.

However, reports emerged that other Elite Dangerous griefers were sabotaging the expedition by attacking the final waypoint, a mega-ship called the Dove Enigma, which Frontier created as an homage to the Commander. Without it, the Enigma fleet of more than 560 players would struggle to reach Colonia due to fuel shortages. Still, in an interview with Polygon, one of the players in the fleet said that “the threat is minor at best.”

Read: When trolls come in a three-piece suit

Stalking

Thanks to Pokemon Go, augmented reality (AR) has become part of the modern gamer’s vocabulary. It’s the future of interactive and immersive gaming, bringing the experience to new heights. Unfortunately, AR games like Ingress have also given gamers with questionable intent a way to use unofficial tools to stalk other players, visit their real-life homes, and leave creepy messages on doorsteps for the homeowners to see.

Intoku, an Ingress gamer, admitted to Kotaku in an interview: “Players on both sides have stalked and been stalked.” With a game that is based around real-world locations, players shouldn’t be surprised, nor should they expect little or no risk when playing such games.

Swatting

Swatting might start off as a prank call to emergency services, but the results—a dispatch of a large number of armed police officers to a particular address—can quickly become deadly, as we’ve seen in the Andrew Finch case. And yet, Peter “Rolly Ranchers” Varady, a then 12-year old YouTube streamer, was swatted less than a month after Finch’s death. This happened days after Cizzorz, a renowned YouTube streamer with millions of subscribers, helped him dramatically increase his subscriber count from 400 to almost 100,000.

In another story, a gamer with the pseudonym “Obnoxious” used swatting to get back at mostly young and female gamers who ignored or declined his friend requests on League of Legends (LoL).

In response to numerous swatting stories, some local US law enforcement agencies offer an anti-swatting service to video gamers and YouTubers.

Grooming

Probably the worst risk young gamers can encounter online is grooming, which is when a pedophile prepares a child for a meeting with the intention of committing a sexual offense. Not only is grooming a targeted act, but it’s also premeditated. Sometimes, it can be stopped if a parent happens to be in the same room as their child, or law enforcement is already tailing a suspect. Other times, it can lead to tragedy beyond words.

Breck Bednar was 14 years old when he met Lewis Daynes online. Daynes was the ringmaster of the “virtual clubhouse” where Bednar and his friends from school would hang out. He claimed to be a computer engineer running a multimillion-pound company. Daynes groomed Bednar into tricking his parents in order to arrange a meeting. He invited Bednar to his flat in Essex one Sunday in February 2014. Bednar texted his father that he’d be spending the night at a friend’s (who wasn’t Daynes). That was the last time they spoke.

There’s another side of grooming that is built around the highly popular game, Fortnite: the cybercrime kind. According to the BBC, teenagers as young as 14 admit to stealing private gaming accounts and reselling them online. Experts say that organized crime is linked to these activities, and that cybercrime grooming is taking place behind the scenes by dangerous persons or groups.

Play it safe. Always.

With a myriad of risks in online gaming, from financial to physical, it’s especially important to adhere to cybersecurity best practices. The gaming community is active, engaged, and passionate—and criminals will take advantage of that to the best of their ability. Head them off at the pass by following our advice:

  • Explore your options. Regardless of your gaming platform, it always pays to know how it works. Since a lot of PC-based games use launchers, acquaint yourself with their settings and customize them with security and privacy in mind.
  • Take advantage of additional security and privacy options when available. These launchers may have some form of two-factor authentication (2FA) to ensure that a user who asserts they own the account can verify this claim easily (see the sketch after this list for a rough idea of how those one-time codes are generated).
  • Update all software installed on your gaming rig or, if you’re a console gamer, the firmware and the games installed in it.
  • Always treat links sent your way—whether by someone you’ve known for a long time or by someone you just met—as suspect. Gaming accounts can be taken over in many ways, and most of the time victimized gamers aren’t aware of it, so it’s wise to handle links with caution. It helps to have a way to contact the sender outside the gaming platform to verify that it was indeed them who messaged you. Ideally, if you and your friends and family members play games to bond, establish amongst yourselves a verification process, such as a keyword or phrase you can mention or type in chat. Not knowing the keyword or phrase suggests the sender isn’t the person they claim to be.
  • Use a form of password management that works for you. We know it causes fatigue just to remember all those username and password combinations. Based on some comments we’ve received on the Malwarebytes Labs blog, we also know that not everyone is using password managers, but instead have created their own way of managing and storing passwords. Go with what works, as long as your passwords are kept safe and secure. Most of all, avoid reusing passwords.
  • Manage your gaming profiles. These days, gaming profiles should be treated the way a regular social media profile and feed should be. Don’t reveal information about yourself that is deemed sensitive. You can pick and choose who sees your gaming activities and who doesn’t. Use your options wisely.
  • Keep your shields up. If suspicious files claim they can help you in your gaming, but you must first disable your antivirus or turn off your firewall, that’s a major red flag. If a piece of software wants to have free rein in your system without your security protections on, you better find safer alternatives.
  • Play games in the presence of or within earshot of your parents/carer. Grown-ups living with minors who are into gaming are always advised to get involved this way. They don’t have to breathe down their children’s necks, but they should at least pop in from time to time and make sure nothing nefarious is taking place—whether that’s the content of the game itself or the conversations happening amongst players.
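To give a rough idea of what happens behind the scenes when a launcher asks for that six-digit code, here is a minimal sketch of the time-based one-time password (TOTP) scheme most authenticator apps implement (RFC 6238). The secret shown is a placeholder; a real service generates and stores its own.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret and the clock."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                       # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, for illustration only

Both the service and your authenticator app compute the same value, so a stolen password alone is no longer enough to take over the account.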
Game over

We can say with confidence that many of the risks to online gamers a few years ago are still the risks they face today. Nowadays, though, news of gamers behaving badly toward other gamers is on equal footing with news about malware and online criminals targeting gamers. Because of the real-world and life-changing impact they present to the people behind the avatars—and to their families and loved ones—more is at stake now than just playing along in a computer-generated world. Gamers are not only called to take cybersecurity seriously, but they’re also called to be responsible digital citizens.

Playing video games is meant to be fun: a way for us to relax, blow off steam, and de-stress. However, let’s also recognize that gaming is already part of the overall threat landscape. Make sure that your information—and your person—are safe from harm in the digital world and beyond.

Game on!

The post How gamers can protect against increasing cyberthreats appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The global data privacy roadmap: a question of risk

Malwarebytes - Tue, 04/02/2019 - 15:00

For most American businesses, complying with US data privacy laws follows a somewhat linear, albeit lengthy, path. Set up a privacy policy, don’t lie to the consumer, and check the specific rules if you’re a health care provider, video streaming company, or kids’ app maker.

For American businesses that want to expand to a new market, though, complying with global data privacy laws is more akin to finding dozens of forks in the road, each one marked with an indecipherable signpost.

Should a company expand to China? That depends on whether the company wants to have its source code potentially analyzed by the Chinese government. Okay, what about South Korea? Well, is the company ready to pay three percent of its revenue for a wrongful data transfer, or to have one of its executives spend time behind bars?

Europe is an obvious market to capture, right? That’s true, but, depending on which country, the local data protection authorities could issue enormous fines for violating the General Data Protection Regulation.

What if a company just follows in the footsteps of the more established firms, like Google, Amazon, or Microsoft, which all opened data centers in Singapore in the past two years? Once again, the answer depends on the company. If it’s providing a service that Singapore considers “essential,” it will have to heed a new cybersecurity law there.

At this point, a company might think about entering a country with no data privacy laws. No laws, no getting in trouble, right? Wrong. Data privacy laws can sprout up seemingly overnight, and future compliance costs could severely cut into a company’s budget.

While this may appear overcomplicated, one guiding principle helps: If a company cannot afford to comply with a country’s data privacy laws, it probably should not expand to that country. The risk, which could be millions in penalties, might not outweigh the reward.

Today, for the third piece in our data privacy and cybersecurity blog series, which also took a look at current US data privacy laws and federal legislation on the floor, we explore the decision-making process of a mid-market-sized company that wants to expand its business outside the United States.

With the help of Reed Smith LLP counsel Xiaoyan Zhang, we looked at several notable data privacy laws in Europe, Asia, Latin America, the Middle East, and Africa.

Issue-spotting within a culturally-crafted landscape

Before a company expands into a new country, it should try to truly comprehend the data privacy laws within that country, Zhang said. She said this involves more than just reading the law; it requires retraining one’s thinking for an entirely different culture.

Unlike crimes including manslaughter and robbery—which have near-universal definitions—Zhang said data privacy violations fluctuate from region to region, with interpretations rooted in a country’s history, economy, public awareness, and opinions on privacy.

“Data privacy is not like murder, which is much more straightforward,” Zhang said. “Privacy law is very intimately tied into culture.”

So, while overseas concepts might appear familiar—like protecting “personally identifiable information” in the US and protecting “personal information” in the European Union—the culture behind those concepts varies.

For example, in the European Union, a history of fierce antitrust regulation and government enforcement helped usher GDPR’s passage. In fact, Austrian online privacy advocate Max Schrems—whose legal complaints against Facebook heavily influenced the final text of GDPR—remarked years ago that he was surprised at the lack of tall garden hedges around Americans’ homes. The country’s understanding of privacy, Schrems realized, was different than that of Austria, and so, too, are its data privacy laws.

Similarly, Zhang said she has fielded many questions from EU lawyers who assume that data privacy regulations around the world are similar to those in GDPR.

“EU lawyers are used to thinking that, for every data collection, there must be a legitimate purpose, and they insist on asking the same questions,” Zhang said. “When I’m talking about legal advice in China, they’ll say ‘Oh, our medical device needs to collect data from users, does China have any law or statutes that give us a legitimate business purpose to collect that data?’”

Zhang continued: “No. In China, you don’t need that. It’s totally different.”

The differences can be managed with the right help, though.

The safest path for market expansion is to rely on a global data privacy lawyer to “issue-spot” any obvious global compliance issues, Zhang said. These experts will look at what type of data a company handles—including medical, financial, geolocation, biometric, and others—what type of service the company performs, and whether the company will need to perform frequent cross-border data transfers. Depending on all these factors, each company’s individual roadmap for data privacy compliance will be unique.

However, Zhang led us on a bit of a world tour, detailing some of the notable data privacy laws in Europe, Asia, Africa, the Middle East, and Latin America. Company expansion into these markets, Zhang emphasized, depends on whether a company is ready for compliance.

Many countries, many laws

Europe

Starting with Europe there is, of course, GDPR. Complying with the sweeping set of provisions is tricky because GDPR gives each EU member-state the authority to enforce the new data protection law on its own turf.

This enforcement is done through Data Protection Authorities (DPAs), which oversee, investigate, and issue fines for GDPR violation. Each member-state has its own DPA, and, in the months before GDPR’s implementation, the DPAs gave mixed signals about what local enforcement would look like.

France’s DPA, the National Data Protection Commission (CNIL), said that companies that are at least trying to comply with GDPR “can expect to be treated leniently initially, provided that they have acted in good faith.”

Less than one year later, though, that leniency met its limit. CNIL hit Google with the largest GDPR-violation fine on record, at roughly $57 million.

The best defense to these penalties, Zhang said, is to consult with local legal experts who know the region’s enforcement history and details.

“You cannot just seek consultation from a GDPR expert. If you want to go specifically to Germany, you need German lawyers who can offer insight on things that are specific to Germany,” Zhang said. “That’s for all of Europe.”

Latin America

Outside of Europe—but still inspired by GDPR—is Latin America. Zhang said several Latin American countries have enacted, or are considering, legislation that protects the data privacy rights of individuals.

In 2018, Brazil passed its comprehensive data protection law, which protects people’s personal information and includes tighter protections for sensitive information that discloses race, ethnicity, religion, political affiliation, and biometrics. Argentina also advanced privacy protections for its citizens, and it earned special clearance under GDPR as a “whitelisted” country, meaning that personal data can be moved to Argentina from the EU without extra safeguards.

Asia

Moving to China, a whole new risk factor comes into play—surveillance.

China’s cybersecurity law grants the Chinese government broad, invasive powers to spy on Internet-related businesses that operate within the country. Implemented in 2017, the law allows China’s foreign intelligence agency to perform “national security reviews” on technology that foreign companies want to sell or offer in China.

This authority raised alarm bells for the researchers at Recorded Future, who attributed past cyberattacks directly to the Chinese government. Researchers said the law could give the Chinese government the power to both find and exploit zero-day vulnerabilities in foreign companies’ products, all for the price of admission into the Chinese market.

“China’s law has a hidden angle for government control and monitoring,” Zhang said. “It has a different rationale.”

Outside of China, Singapore has garnered the attention of Google, Microsoft, and Amazon, which all built data centers in the country in the past few years. The country passed its Personal Data Protection Act in 2012 and its Cybersecurity Act in 2018, the latter of which sets up a framework for monitoring cybersecurity threats in the country.

The law has a narrow scope, as it only applies to companies and organizations that control what the Singaporean government calls “critical information infrastructure,” or CII. This includes computer systems that manage banking, government, healthcare, and aviation services, among others. The law also includes data breach notification requirements.

Moving to South Korea, the risk for organizations goes up dramatically, Zhang said. The country’s Personal Information Protection Act preserves the privacy rights of its citizens, and its penalties include criminal and regulatory fines, and even jail time. Cross-border data transfers, in particular, are strictly guarded. One wrongful transfer can result in a fine of up to three percent of a company’s revenue.

Africa

Traveling once again, expansion into Africa requires an understanding of the continent’s burgeoning, or sometimes non-existent, data privacy laws. Zhang said that, of Africa’s more than 50 countries, only about 15 have data protection laws, and even fewer have the regulators necessary to enforce those laws.

“Among [the countries], nine have no regulators to enforce the law, and five have a symbolic law but it’s not enforced,” Zhang said.

So, that invites the question: What exactly does happen if a company expands into a country that doesn’t have any data privacy laws?

What happens is potentially more risk.

First, a country could actually develop and pass a data privacy law within years of a company’s expansion into its borders. It’s not unheard of—less than one year after Amazon announced its rollout into Bahrain, the country introduced its first comprehensive data privacy law. Second, compliance with the new data privacy law could be expensive, Zhang said, forcing a company into a tough situation where it might have to withdraw entirely from the new market.

“One common misconception is that if a country doesn’t have a law at all, it’s a good country to go to,” Zhang said. “You should think twice about whether that’s the case.”

Expand or not? It’s up to each company

There is no single roadmap for companies entering new markets outside the United States. Instead, there are multiple paths a company can take depending on its product, services, the data it collects, data it will need to move between borders, and its tolerance for risk.

The safest path, Zhang said, is to ask questions upfront. It is far better to make an informed decision about how to enter a market—even if compliance is costly—than to be surprised with fines or penalties later on.

The post The global data privacy roadmap: a question of risk appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Compromising vital infrastructure: water management

Malwarebytes - Mon, 04/01/2019 - 15:00

It’s probably unnecessary to explain why water management is considered part of our vital infrastructure, but it’s a wider field than you might expect—and almost every one of its components can be integral to our survival.

We all need clean water to drink. As much as I like my coffee, I can’t make it with contaminated liquids. And the farmers that grow our coffee need water to irrigate their land. On top of that, the water we use in our households and workplaces needs to be cleaned before it goes back into nature.

In some countries, and especially in large river delta areas, we need a high level of control over the water level to prevent flooding. Other areas need methods to retain water to avoid droughts or to keep vital transportation methods that depend on rivers and canals on the move.

We also use water to generate energy, for example, through dams and mills. In the first decade of this millennium, hydropower accounted for about 20 percent of the world’s electricity, and with the increasing need for clean energy, we can expect this percentage to rise.

Water management is considered so critical that tampering with a water system is a US Federal Offense (42 U.S.C. § 300i-1).

Yet, cybercriminals have found ways to compromise these vital systems as well. Let’s take a look at their methods of attack.

Hardware

The Supervisory Control and Data Acquisition (SCADA) architecture in use across various water management plants is, despite the plants’ diversity, for the most part consistent. There are only so many companies that produce Programmable Logic Controllers (PLCs). In the past, vulnerabilities have been found in widely-used PLCs made by General Electric, Rockwell Automation, Schneider Modicon, Koyo Electronics, and Schweitzer Engineering Laboratories. And I would dare to wager that some have been found that we haven’t been made aware of.

One of the best-organized safety aspects of water and sewage plants is physical access control (which is not always easy to secure either, if only because of the size of some of these installations). But, according to the 2018 Cybersecurity Risk and Responsibility in the Water Sector report by the American Water Works Association (AWWA):

“Cybersecurity is a top priority for the water and wastewater sector. Entities, and the senior individuals who run them, must devote considerable attention and resources to cybersecurity preparedness and response, from both a technical and governance perspective. Cyber risk is the top threat facing business and critical infrastructure in the United States.”

The report goes on to say that getting cybersecurity right is not an easy mission and many organizations have limited budgets, aging computer systems, and personnel who may lack the knowledge and experience for building robust cybersecurity defenses and responding effectively to cyberattacks.

In cyberwarfare, a mass shutdown of computers controlling waterworks and dams could result in flooding, power outages, and shortage of clean water. In the long run, this could lead to famine and disease. In March and April 2018, the US Department of Homeland Security and Federal Bureau of Investigation warned that the Russian government is specifically targeting the water sector and other critical infrastructure sectors as part of a multi-stage intrusion campaign.

Malware

One of the major threats to water-energy plants is Industroyer, aka CrashOverride, an adaptable malware that can automate and orchestrate mass power outages. The most dangerous component of CrashOverride is its ability to manipulate the settings on electric power control systems. It also has the capability to erase the software on the computer systems that control circuit breakers. CrashOverride clearly was not designed for financial gain; it is purely a destructive tool.

Another piece of malware that threatens many industrial plants is Stuxnet. This threat is designed to spread through Windows systems and go after certain programmable controllers by seeking out their related software. Near the end of 2018, the Onslow Water and Sewer Authority (ONWASA) said it would have to completely restore a number of its internal systems thanks to an outbreak of Emotet and one of the ransomware variants it is known to deliver.

Earlier in 2018, the first cryptocurrency mining malware impacting industrial controls systems and SCADA servers was found in the network of a water utility provider in Europe. This was not seen as a targeted attack, but rather the result of an operator accessing the Internet on a legacy Human Machine Interface (HMI).

Not that SCADA systems are free of targeted attacks. A honeypot that mimicked a water-pump SCADA network was found by hackers within days and soon became the target of a dozen serious attacks.

Insider threats are another cause for concern. In 2007, headlines told of an intruder who installed unauthorized software and damaged the computer used to divert water from the Sacramento River. In hindsight, this turned out to be a former, and probably disgruntled, employee.

An infected laptop PC gave hackers access to computer systems at a Harrisburg, PA, water treatment plant. An employee’s laptop was compromised via the Internet, likely through a watering hole attack, and then used as an entry point to install a virus and spyware on the plant’s computer system.

Countermeasures

A lot of what we can learn from these incidents will already sound familiar to most of our readers. Countermeasures that security teams in water management plants and organizations can apply follow many of the same cybersecurity best practices as corporations protecting against a breach. Some of our recommendations include the following:

  • A clear and strict Bring Your Own Device (BYOD) policy can help prevent staff bringing in unwanted threats to the network.
  • A strict and sensible password regime can hinder brute force attacks and should lock out employees who have left the firm.
  • Legacy systems that serve as human interfaces should not have Internet access.
  • Easy backup and restore should be made possible to keep any disruption limited in time and impact. Needless to say, this is imperative for critical systems.
  • Software running on industrial controls systems and SCADA servers should not give away the nature of the plant or the underlying hardware. This makes it harder for attackers to find out which exploits will be successful.
  • Use secure software, even though you cannot control or check the security of your hardware.
  • Monitor the processors and servers that are vital to the infrastructure constantly so any abnormal behavior will be flagged immediately (a simple sketch of this idea follows below).
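To make that last point a little more concrete, here is a minimal, hypothetical sketch of the kind of baseline check a monitoring script might run on a stream of sensor or load readings. Real industrial monitoring products are far more sophisticated; the window size and threshold here are arbitrary.

from collections import deque
from statistics import mean, stdev

class ReadingMonitor:
    """Flag readings that deviate sharply from the recent baseline."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling window of recent readings
        self.threshold = threshold           # deviation, in standard deviations, that counts as abnormal

    def check(self, reading: float) -> bool:
        alert = False
        if len(self.history) >= 10:          # wait until a minimal baseline exists
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                alert = True                 # hand off to a human for review
        self.history.append(reading)
        return alert

monitor = ReadingMonitor()
for value in [5.0] * 30 + [5.2, 4.9, 42.0]:  # the last value should trip the alert
    if monitor.check(value):
        print(f"Abnormal reading flagged: {value}")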
Water and power

As you can see, there are many similarities between water management plants and power plants. While water management may be even more vital to our existence, many of the threats are basically the same. This is due to the similarities in plant infrastructure and hardware.

And when the threats are the same, you will see that the countermeasures are also similar. What’s strange, however, is that despite both water and power being vital to the country’s infrastructure, their cybersecurity budgets are quite limited, and they often have to work with legacy systems.

When the city of Atlanta was crippled by a ransomware attack in March 2018, city utilities were also disrupted. For roughly a week, employees with the Atlanta Department of Watershed Management were unable to turn on their work computers or gain wireless Internet access. Two weeks after the attack, Atlanta completely took down its water department website “for server maintenance and updates” until further notice.

Instead of systems backing each other up, they brought each other down like dominoes—an almost perfect example of Murphy’s Law, or the “butter side down” rule, as my grandma used to call it. It doesn’t have to be that way, and when it comes to our vital infrastructure, it shouldn’t.

Stay safe and hydrated, everybody!

The post Compromising vital infrastructure: water management appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (March 25 – 31)

Malwarebytes - Mon, 04/01/2019 - 08:24

Last week, we looked at plugin vulnerabilities, location tracking app problems, and talked about plain text password woes. We also looked at federal data privacy regulation and took a deep dive into BatMobi adware.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (March 25 – 31) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Awakening the beast: BatMobi adware

Malwarebytes - Fri, 03/29/2019 - 15:00

On February 12, a patron of the Malwarebytes Forum alerted us to an issue with ad redirects that seemed to come out of nowhere. An outcry from other commenters filled the forum thread, all experiencing the same redirects to the same exact websites. Our web protection team traced the offending websites back to the culprit—the adware known as BatMobi.

What is BatMobi?

BatMobi is an Advertisement Software Development Kit (Ad SDK), which is essentially a software library that connects applications to ad networks. Developers insert Ad SDKs into their apps’ code to gain revenue through ads. Thus, they can offer their apps for free and still make money. Most variants of BatMobi were clean and safe to use—until recently.

Based on a Reddit post about the sudden web redirects on January 21, it appears these “clean” versions of BatMobi turned into mobile adware around mid January. Adware is a subcategory of Potentially Unwanted Programs (PUPs), which means it hangs around the fringes of bad behavior and often results in poor user experiences. Furthermore, BatMobi has always had a slightly more aggressive version we consider low-level adware. We detect this as Android/Adware.BatMobi.

Triggered by Google Play

An interesting component of this newly seen BatMobi variant is the location in which it pops up ads—Google Play. Forum patrons verified the ads appeared whenever an app was updating or installing in Google Play. BatMobi uses Chrome Custom Tabs within its code to open websites whenever it is triggered by these events. Although the websites being redirected to are relatively safe sites, they are an unwanted nuisance for the user—exactly what we consider adware.

Tracking down the beast

Usually, pinpointing the source of an adware app on a customer’s device is simple, especially when knowing the adware variant, as in this case. Thanks to all the great Malwarebytes forum participants, I had a large set of data to work with in the form of what we call Apps Reports.

This is a list of apps along with data about their MD5 hashes, package names, and other components to assist in tracking down infections. Even with all the data, finding BatMobi was a nightmare: It hides deep within an app’s code, in different apps on each user’s device, and no other mobile anti-malware vendors detect it. Nevertheless, I was able to make some headway and find a couple of patterns of infection, which I’ll walk through below. First, though, a rough idea of how the hashes in such a report can be put to work.
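This is a minimal, hypothetical sketch only: the folder name and the single hash in the known-bad set are placeholders, not real indicators of compromise.

import hashlib
from pathlib import Path

KNOWN_BAD_MD5 = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder value, not a real indicator

def md5_of(path: Path) -> str:
    """Hash a file in chunks so large APKs never need to fit in memory."""
    digest = hashlib.md5()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

for apk in Path("apps_report").glob("*.apk"):  # hypothetical folder of collected samples
    if md5_of(apk) in KNOWN_BAD_MD5:
        print(f"Flagged as known bad: {apk.name}")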

Uptodown

The search started with the third-party app store Uptodown. More specifically, apps that download videos from YouTube, such as Videoder, Video Downloader, Snaptube, and TubeMate, were delivering ads to users the most. These apps all come with hidden versions of BatMobi. Removing them solved the issue for many, but for others it persisted.

Click to view slideshow.

Mi Mobile

Another component that further complicates detecting and removing BatMobi is that we found it on apps pre-installed on Mi Mobile devices—specifically, the Xiaomi Redmi Note 5. The infected apps are as listed:

Package name: com.mi.android.globalpersonalassistant
App name: App vault 

Package name: com.android.providers.downloads.ui       
App name: Downloads

Please note that not all versions of these apps have BatMobi, nor do all Xiaomi Redmi Note 5 devices—only a select few. Detections are in place in Malwarebytes for Android to alert users of its presence.

If you are having issues with adware on pre-installed apps, you can follow our removal instructions for disabling or uninstalling.

Warning: Make sure to read Restoring apps onto the device (without factory reset) in the rare case you need to revert/restore apps.

Use the following commands during step 7 under Uninstalling Adups via ADB command line to remove the apps:

adb shell pm uninstall -k --user 0 com.mi.android.globalpersonalassistant
adb shell pm uninstall -k --user 0 com.android.providers.downloads.ui

Still unknowns

Even after finding two dominant sources of the BatMobi infection, there are still cases left unsolved. You see, as suddenly as the ads appeared, they abruptly stopped in early March. Without active cases to test whether removing apps remediates the issue, finding these deeply hidden BatMobi variants has become nearly impossible. I’m confident that there are versions still on Google Play, but finding them now is like searching for a needle in millions of haystacks.

The scary reality of Ad SDKs

Technically, since these hidden BatMobi variants no longer trigger ads inappropriately, they are no longer considered adware. I suppose that’s the good news. My assumption is that BatMobi made a change on its servers without warning, thus triggering the ads in January. But we don’t know why there was an abrupt stop in March. What happened? Maybe an overwhelming number of complaints to BatMobi caused a change of heart?

This all leaves us with an uneasy feeling about Ad SDKs. It highlights their power to switch from clean and safe to adware overnight. It’s a scary reality to have code lay dormant in legitimate apps that can turn malicious so quickly. I reiterate that yes, these website redirects were to relatively safe sites, but the potential for worse is present.

Developers beware

The last thing a developer wants is for their app to land on an anti-malware scanner’s adware list without warning. In the past, we have seen ad companies clearly move from legitimate behavior to serving adware, becoming overly aggressive with data collection and/or aggressively pushing ad content, as in the case above. However, in those cases it was easy to make a clear-cut determination of the cause of infection. This time, it’s much more unclear which components were causing the issue, and so much is still left unknown.

Unfortunately, finding an Ad SDK that developers can trust is an ongoing challenge. All we can say is do your research and choose wisely. If an Ad SDK has any variants that are considered adware, as with BatMobi, it’s a wise decision to steer clear.

Stay safe out there!

The post Awakening the beast: BatMobi adware appeared first on Malwarebytes Labs.

Categories: Techie Feeds

US Congress proposes comprehensive federal data privacy legislation—finally

Malwarebytes - Thu, 03/28/2019 - 15:00

The United States might be the only country of its size—both in economy and population—to lack a comprehensive data privacy law protecting its citizens’ online lives.

That could change this year.

Never-ending cybersecurity breaches, recently-enacted international privacy laws, public outrage, and crisis after crisis from the world’s largest social media company have pushed US Senators and Representatives into rarely-charted territory: regulation.

Before Congressmembers’ desks are at least four federal bills that would change how companies handle and protect Americans’ private data. The bills seek better user privacy through increased transparency, oversight, fines, and liability, and, in the case of one bill, the possibility of jail time for dishonest tech executives.

Several US states are also considering comprehensive data privacy bills, taking inspiration from California, which passed its own law last year. If those state laws pass, a new wrinkle will be added to the broader country-wide debate: Should state privacy protections be respected or should one federal law supersede those rules?

This month, Malwarebytes Labs launched its limited blog series about data privacy and cybersecurity laws. In this second blog in the series, we explore five federal data privacy bills.

How we got here

For decades, Congress regulated data privacy based on single, sector-specific issues. Rather than writing laws to protect all types of data, they instead wrote laws to combat individual crises.

In the late 80s, that crisis was a Supreme Court nominee’s video rental history being leaked to the press, resulting in the Video Privacy Protection Act. In the late 90s, that crisis was the potential targeting of children online, resulting in the Children’s Online Privacy Protection Act. In the mid-2000s, the kidnapping and murder of a Kansas teenager prompted lawmakers to discuss lowering protections on GPS data held by cell phone providers. (The proposed bill failed passage multiple times.)

This reactive approach is just how Congress works, said Michelle Richardson, director of the data and privacy project at Center for Democracy and Technology (CDT).

“This country has generally allowed companies to do their thing until something goes quite wrong,” Richardson said. “It has to get worse before the US and its decision-makers and its cowboy personality feel ready to intervene.”

Today, Congress is again ready to intervene. The crisis at hand is two-fold.

First, data breaches of Yahoo, Uber, Equifax, Marriott, Target, the Sony PlayStation Network, Facebook, Anthem, JPMorgan Chase, and many more have resulted in Americans’ personally identifiable information being stolen or accessed by cybercriminals. This PII includes names, Social Security numbers, credit card numbers, passport numbers, dates of birth, account passwords, physical and email addresses, and even employment histories.

Second, even when a company hasn’t suffered a breach, Americans’ personal data has been misused or left astray. The FBI searched private company DNA databases. A period-tracking app shared its users’ pregnancy decisions and menstrual tracking information with Facebook. And political beliefs were reaped in an effort to sway a US presidential election.

Congress has concluded that user privacy can no longer be solely entrusted to America’s technology companies.

“The digital space can’t keep operating like the Wild West at the expense of our privacy,” said Amy Klobuchar, Democratic Senator of Minnesota and presidential candidate.

Data privacy legislation has huge support outside of Capitol Hill, too—from the public. Richardson said that, thanks to the work of researchers, journalists, and civil liberties advocates, the public better understands how their data moves from company to company.

“We don’t give nearly enough credit to civil media [outlets] and civil society [groups] for the research they’ve done into data practices and for giving people cold, hard facts about how their data is collected,” Richardson said.

That research has exposed not just personal data misuse, but also corporate irresponsibility.

Last year, Reuters showed that Facebook failed to fulfill its promise to control the wildfire-like spread of hate speech on its platform in Myanmar. The Intercept exposed Google’s plans to build a censored version of its online search tool in China, resulting in several employee departures and renewed questions about Google’s removal of its “Don’t Be Evil” tagline. ACLU showcased the failures in Amazon’s facial recognition software, revealing that the technology falsely matched 28 members of Congress with mugshots of arrestees.

Some US states have already responded.

Last year, Vermont passed a law regulating data brokers, and California passed its California Consumer Privacy Act. The law gives Californians the right to know which data is collected on them, whether that data is sold, the option to opt out of those sales, and the right to access that data. The law will take effect at the start of 2020.

In the meantime, other states are aiming to follow suit. Washington, Utah, and New York legislatures are all considering new laws that could give their residents better access and control to the information that companies collect on them.

International data privacy law is even further ahead.

Last year, the European Union successfully completed its effort to pull together the data privacy laws of its 28 member-states into one cohesive package. The General Data Protection Regulation came into effect on May 25, 2018, and since then, it has produced lawsuits against Facebook and a record fine out of France against Google.

At home and abroad, regulation is in the air.

The proposals

Since last April, multiple US Senators have tried to take on the mantle of the public’s chief data privacy protector. Some tried to show their commitment to data privacy by asking Facebook CEO Mark Zuckerberg pointed questions during his Congressional testimony regarding the Cambridge Analytica scandal. One Senator—and presidential candidate—made a direct public appeal to break up Amazon, Google, and Facebook.

But in putting actual ideas onto paper, four Senators have emerged as frontrunners in America’s data privacy debate. Senators Klobuchar, Ron Wyden of Oregon, Marco Rubio of Florida, and Brian Schatz of Hawaii have directly sponsored individual, separate bills to protect Americans from opaque and unfair data collection.

Google, Facebook, Amazon, Apple, Microsoft, Yahoo, Uber, Netflix, and countless others could be affected by these proposals.

The bills ask for essentially the same thing: tighter controls on user data. Consequences often include higher fines from the Federal Trade Commission (FTC), which currently serves as the country’s primary data misuse regulator.

Sen. Klobuchar’s bill—the first of the four to be formally introduced in April 2018—would require certain companies to write their terms of service agreements in “language that is clear, concise, and well-organized.” It would also require companies to give users the right to access data collected on them (similar to California’s state bill and to GDPR), along with notifying users about a data breach within 72 hours.

Sen. Rubio’s bill—the American Data Dissemination Act (ADD)—would require the FTC to write its own privacy recommendations for Congress to later approve. The ADD asks that the FTC’s  rules closely align with the Privacy Act of 1974, which restricts how federal agencies collect, store, and share Americans’ personal information. If passed, the FTC would have up to 27 months to get its own recommendations approved.

The ADD would also “preempt”—meaning, it would nullify—current and upcoming state data privacy laws. If passed, companies would only need to comply with the FTC’s federal rules that Congress would later approve. California and Vermont would wave goodbye to their newly-passed laws, and Utah, Washington, and New York would likely shut down their own efforts.

But preemption could be a deal-breaker for free speech advocates, digital rights groups, and government representatives.

“Under the Rubio bill, Americans would not have their privacy protected,” said Center for Digital Democracy Executive Director Jeff Chester, in speaking to Bloomberg. “State preemption is a non-starter as far as the consumer and privacy groups community and their allies in Congress are concerned.”

In California, the state’s attorney general also pushed back.

“For those of you following debate over data #privacy, note: We oppose any attempt to pre-empt #California’s privacy laws…” wrote Sarah Lovenheim, communications advisor to California Attorney General Xavier Becerra.

The opposition to Sen. Rubio’s bill is compounded by its slow timeline, making it impossible for lawmakers to know what specific rules they could be asked to approve in two years’ time.

The ADD demands Congress make an unknown, gameshow-style choice: Keep the data privacy protections you have, or choose what’s behind Door Number Two?

Sen. Wyden’s bill—the Consumer Data Protection Act—sets itself apart as the only bill that includes jail time consequences.

Sen. Wyden’s bill would require data-collecting companies to deliver annual reports that detail their internal privacy-protecting efforts. Those reports would need to be signed and confirmed by a high-level company executive, like a CEO or CTO. But if those executives confirm a false report, they could face jail time, the bill proposes.

The Consumer Data Protection Act would also require the FTC to set up a “Do Not Track” website where Americans could register to opt out of online tracking and third-party data sharing. Companies that fail to comply with consumers’ wishes would face fines.

This “Do Not Track” proposal is far from perfect. If a company’s requirement to get user consent clashes with that user’s Do Not Track preferences, the bill proposes a harmful compromise: Put the services behind a price tag. Paying for privacy is wrong, and, even if the bill passes, companies should refuse to engage in such a dangerous practice.

Finally, there is Sen. Schatz’s Data Care Act, which relies on a novel interpretation of corporate responsibility. The bill equates the responsibility that doctors have to their patients’ information as the same responsibility that technology companies should have to user data.

“Just as doctors and lawyers are expected to protect and responsibly use the personal data they hold, online companies should be required to do the same,” Sen. Schatz said in a press release.

The bill creates rules under five broad umbrellas—the “duty to care,” the “duty of loyalty,” the “duty of confidentiality,” federal and state enforcement, and rulemaking authority by the FTC to enforce the bill.

Fifteen Senators from both parties have signed on as co-sponsors, including Sen. Klobuchar. (Sens. Rubio and Wyden have not.) Several civil rights organizations, including Free Press, EFF, and CDT, have voiced support.

“We commend Senator Schatz for tackling the difficult task of drafting privacy legislation that focuses on routine data processing practices instead of consumer data self-management,” said CDT’s Richardson in a press release.

Here, Richardson is talking about something that she and the policy team at CDT find particularly important: consent. Many of today’s data privacy bills lean heavily on the idea that clearer terms of service and more notifications and more annual reports will somehow empower consumers to make the right choices for themselves when consenting to use online platforms.

But that’s unfair, Richardson said.

“[CDT’s] biggest concern is that a lot of these proposals are a notice-and-consent model. They look at these agreements we sign and say, ‘Maybe make them clearer,’ for example,” Richardson said. “That’s doubling down on our existing system, where it’s up to individuals to micromanage their relationships with hundreds, if not thousands of companies that touch their data every day.”

So, CDT—which routinely discusses already-authored legislation with Congressmembers—took a different approach. The organization wrote its own bill.

The bill’s rules are not built on consent. Instead, CDT’s bill focuses, Richardson said, on “what are the things you can’t sign away? What are your digital civil rights?”

CDT’s bill would give US persons—including residents—the rights to access, correct, and delete data that is collected on them, along with the right to take their personal data and move it somewhere else (which is similar to a right granted in the European Union’s GDPR). The bill would also require the FTC to investigate and write rules barring discriminatory practices in online advertising.

Companies affected by CDT’s bill would be given 30 days to put into place mechanisms for users to exercise their above rights. Also, if those companies license or sell personal information to third parties, they would need to assure that their third-party partners are practicing the same privacy commitments as the companies themselves.

Similar to Sen. Rubio’s bill, CDT’s bill would pre-empt state laws, but only those that focus on data privacy. Laws that deal with, say, consumer protection or data breaches, would remain intact.

As to which federal bill will prevail—it’s a bit of a tossup. Passing a bill into law is never as easy as getting the best idea forward. Big Tech is sure to lobby against any bill that would cut into its business model, and civil liberties groups could, depending on the legislation, disagree with one another about the best path forward.

Until then, CDT thinks it is taking the right approach, removing the burden from users and instead protecting what their rights should look like in the future.

Richardson put it plainly: “This is a moment about having corporations treat us better.”

In our next blog in the series, we will look at data privacy compliance for businesses seeking to expand outside the US market.

The post US Congress proposes comprehensive federal data privacy legislation—finally appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Location data leaks from family tracking app database

Malwarebytes - Wed, 03/27/2019 - 16:00

An app called Family Locator, which allows family members to keep track of one another, recently experienced an exposed database issue of the worst kind. Specifically: the MongoDB database was left exposed with no password, like so many other recent infosec tales of woe. The end result was the locations of about 280,000 users leaking in real time.

For a location tracking app that also includes information about children, this is quite the error. Map views, family maps, and push notifications to let you know where everybody is all sound great—until random people also potentially have access to it. This is the fate handed to Family Locator these past few days, although nobody knows how long the sensitive data has been exposed.

What was leaked?

The Family Locator database records held names, email addresses, plaintext passwords, and photographs, along with coordinates tied to user-assigned location names, such as office, home, and condo. As per the TechCrunch report, none of it was encrypted, a misstep Facebook was also caught making last week.
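Storing passwords in plain text is never necessary. As a point of contrast, here is a minimal sketch, using only Python’s standard library, of the salted, slow hashing approach a service would be expected to use instead. The iteration count is a ballpark figure, not a tuned recommendation.

import hashlib
import hmac
import os

ITERATIONS = 200_000  # ballpark figure; tune for your own hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a random salt and the derived hash; store these, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False

Had the app stored something like this instead, the exposed records would at least not have handed over working credentials.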

On a related note, the app’s privacy policy is rather short and to the point:

What information do we collect and how we use it

Contact information:

When you create an account, we may collect your personal information such as your username, first and last name and email address.

We may send important or promotional information about our products.

Geolocation data:

We collect your location through GPS, WiFi, or phone network in order to provide our Service.

Do we disclose any information to outside parties?

No. We do not sell, trade, or otherwise transfer to outside parties any of your personally identifiable information.

Changes to our privacy policy

We may update this policy at any time by posting changes on this page.

It seems the most urgently required change to the page is the addition of the word “whoops.”

Was there a real-world impact to this?

There absolutely was. After setting up a dummy account and verifying that its coordinates matched what was listed in the database, TechCrunch contacted a user at random, who confirmed that the location exposed for them in the database was accurate, and that one of the family members using the app was their child.

This is, frankly, terrible, especially as TechCrunch found numerous other parent/child combinations in the database.

Did it all go wrong at this point?

You bet it did. I’ve reported hundreds of security fails down the years. I’ve had data exposure issues fixed on image hosting websites, exploits on social networking portals patched up, data hauls taken offline, outbreaks on instant messaging platforms shut down, and much more besides.

Many people working in infosec do the same thing, all the time. Security awareness, even among developers, was pretty bad a decade or more ago; reporting an issue was mostly a case of throwing a paper plane over the wall and hoping it landed somewhere useful.

Things are supposed to be much better now, right?

In the case of Family Locator, they aren’t.

What happened next sounds like one of my wild goose chases from yesteryear. No useful information could be found on the site’s WHOIS record or privacy policy page (as you can see above), and zero contact information was listed on the website. TechCrunch bought business records to finally obtain a name tied to the business, but that still didn’t get them any further.

Microsoft, which hosts the MongoDB database in question, was contacted, and the database was eventually taken offline. Presumably Microsoft reached out to the app’s developer but, either way, the developer still hasn’t acknowledged the leaky database.

Are MongoDB breaches a thing?

Sadly, yes. MongoDB is wonderful to deploy, but people seem to lose interest at the “locking it down” stage [1], [2], [3]. Sometimes, deviations from the default configuration cause the problem. Other times, nobody set a password at all. This is disappointing, given the security documentation available for keeping a MongoDB server locked down.
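
For the curious, here is a minimal sketch (assuming the pymongo driver and placeholder hostnames and credentials, not Family Locator’s actual setup) of how an unauthenticated MongoDB instance betrays itself, and what a credentialed connection looks like instead:

from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def is_wide_open(host, port=27017):
    """Return True if an anonymous client can list databases on the server."""
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        client.list_database_names()  # requires auth on a locked-down server
        return True
    except (OperationFailure, ServerSelectionTimeoutError):
        return False  # auth required, or server unreachable

# The locked-down alternative: credentials supplied, access scoped to one database.
# Hostname, user, and password below are placeholders, not real values.
secure_client = MongoClient(
    "mongodb://app_user:app_password@db.example.internal:27017/familydata?authSource=admin"
)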

What now?

If you’re one of the app users caught up in these events, try not to panic. While the data was exposed, it’s most likely to be abused by marketers and scrapers rather than hardened criminals. While this isn’t exactly great, it’s still better (and more probable) than “dubious stalker character uses this data to lurk near my home.” The chances of someone like that not only finding the data, but also being close enough to your location to do something with it, are remote.

It’s also a good reminder that we can’t possibly predict how secure a service is when signing up for it. The more access you give to your personal life, the more damage can be done should something go wrong afterwards. This may not be massively reassuring, but it’s sadly where we’re at. It’s up to app developers to step up and do a better job.

The post Location data leaks from family tracking app database appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Facebook’s plain text misstep, and other password sins

Malwarebytes - Wed, 03/27/2019 - 15:00

Two days after an article by Brian Krebs disclosed that hundreds of millions of Facebook account passwords had been stored in plain text for years, Facebook released a statement indicating that it hashes and salts passwords, more or less in accordance with industry best practice.

Plain text storage of credentials is a fairly egregious security misstep, but there are a variety of other ways credential security can fail. Given the sharp increase in third-party credential mishandling in recent years, we’d like to take a look at some of the other ways companies handling user data can leave users insecure, irrespective of password length, complexity, or obscure security questions.

Outdated hashes

Not every hashing algorithm provides the same degree of security, and many large-scale third-party breaches have occurred at companies using deprecated algorithms or, in some instances, none at all. The most common algorithm for many years, MD5, was shown to be vulnerable to a single-block collision in 2010, making it inappropriate for most security uses. MD5’s weaknesses were further underlined when Microsoft disclosed that the authors of the Flame malware exploited MD5 collisions to forge a Windows certificate.

Despite these issues, MD5 remained widely used for password hashing, allowing attackers to easily crack exfiltrated credentials, as seen with the 2013 Yahoo breach. Institutional inertia in updating hashes can dramatically increase the end-user impact of a breach, as seen in the Have I Been Pwned list of data breaches, which shows frequent use of MD5. Organizations with a serious commitment to data handling best practices should be using either bcrypt or scrypt for secure credential storage.
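
As a rough illustration, assuming the third-party bcrypt package for Python and an indicative cost factor, storing and checking a password with bcrypt looks something like this:

import bcrypt

def hash_password(password):
    # gensalt() embeds a random per-password salt and a work factor in the hash itself
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

def verify_password(password, stored_hash):
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("123456", stored)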

Misconfigured servers

Cloud servers are typically a great way for enterprises to reduce cost and speed up infrastructure rollout. An assortment of cloud services, however, ship with sub-optimal default security settings or rely on users to implement their own safeguards. As a result, misconfigured servers have caused large-scale data loss in a variety of settings. (While these incidents are typically referred to as breaches after the fact, the term is a poor fit, since there were no fortifications to breach.)

In February of this year, Dow Jones exposed 4.4GB of sensitive human-targeting data simply by leaving the AWS server it was stored on publicly accessible. While the data wasn’t indexed by search engines, threat actors with dedicated crawlers could trivially locate and exfiltrate it. This sort of vulnerability-by-inaction is more common than most users think, and can be just as impactful as an actual breach.

GoDaddy exposed 31,000 of its own server configurations in this way, and personal information from voter databases has been repeatedly exposed, most recently in 2017. With third-party service providers forming an integral part of most organizations’ infrastructure, we should expect this sort of “breach” to increase in frequency.
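
The specifics of each incident differ, but as a sketch of one common hardening step (assuming the boto3 SDK and a hypothetical bucket name), blocking public access on an S3 bucket and auditing its ACL might look like this:

import boto3

s3 = boto3.client("s3")
BUCKET = "example-research-data"  # hypothetical bucket name

# Deny any public ACLs or public bucket policies at the bucket level.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Audit: flag any ACL grant to the AllUsers group (i.e., the whole internet).
acl = s3.get_bucket_acl(Bucket=BUCKET)
for grant in acl["Grants"]:
    uri = grant["Grantee"].get("URI", "")
    if uri.endswith("/global/AllUsers"):
        print(f"Public grant found: {grant['Permission']}")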

Security questions (with no other validation)

Common in enterprise financial platforms, questions about a user’s personal life started as a well-meaning account validation practice that quickly proved impractical for securing data.

Cementing its reputation for poor security, Experian was noted for an exceptionally poor implementation of security questions for account validation. Typically, these questions are defined by end users and/or take unstructured input for the answers. Experian instead used unchangeable questions of its own choosing, based on its own holdings of users’ personal information (regardless of accuracy), with answers that are trivially searchable, such as past addresses or dates of birth.

Security questions might still hold a modicum of security if they allow free input for both the question and the answer, and are not used as a primary account validation method. However, the proliferation of personal information online makes relying on security questions largely a bad call.

Using passwords at all?

What if the fundamental problem is not how passwords are handled, but that passwords are used at all? Some researchers think that the long-standing security problems involving passwords are baked into the design of password authentication itself. Hardware-based, passwordless authentication can sidestep issues involving secure credential storage and transmission by requiring a physical device to generate a time-bounded authentication token.

These schemes are typically multi-factor designs involving smart cards, USB keys, QR codes, or biometrics. While none of them resolves authentication issues entirely (how do you replace a compromised thumbprint?), they can be significantly more secure than transmitting and storing a credential pair.
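
As a rough sketch of the time-bounded token idea (using software TOTP via the pyotp library for illustration; real hardware-backed schemes such as smart cards or FIDO2 keys work differently under the hood), generation and verification look something like this:

import pyotp

secret = pyotp.random_base32()          # provisioned once, e.g. onto a token or phone
totp = pyotp.TOTP(secret, interval=30)

code = totp.now()                       # six-digit code, valid for roughly 30 seconds
print("Current code:", code)

# Server side: verify the submitted code against the same shared secret,
# allowing one interval of clock drift either way.
assert totp.verify(code, valid_window=1)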

It’s not just about breaches

As seen above, password handling by third parties carries more risk than simply a breach and exfiltration. Quite often, mishandled credential management is not only more problematic than direct data loss; it also points to fundamental design flaws in an organization’s infrastructure.

Reducing an organization’s attack surface should include a serious look at how passwords are stored, whether the authentication scheme fits a given use case, and whether your company may have outgrown passwords entirely.

The next time you hear about a third-party breach and wonder, “Is my password secure?”, you might also ask, “What was this organization doing with it in the first place?”

The post Facebook’s plain text misstep, and other password sins appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Plugin vulnerabilities exploited in traffic monetization schemes

Malwarebytes - Tue, 03/26/2019 - 15:00

In its Website Hack Trend Report, web security company Sucuri noted that WordPress’s share of infected CMS sites rose to 90 percent in 2018. One aspect of Content Management System (CMS) infections that is sometimes overlooked is that attackers go after not only the CMSes themselves—WordPress, Drupal, etc.—but also third-party plugins and themes.

While plugins are useful in providing additional features for CMS-run websites, they also increase the attack surface. Not all plugins are regularly maintained or secure, and some are even abandoned by their developers, leaving behind bugs that will never get fixed.

In the past few months, we have noticed threat actors leveraging several high-profile plugin vulnerabilities to redirect traffic toward various monetization schemes, depending on a visitor’s geolocation and other properties. The WordPress GDPR Compliance plugin vulnerability, and the more recent Easy WP SMTP and Social Warfare vulnerabilities, are a few examples of opportunistic attacks quickly adopted in the wild.

Redirection infrastructure

Hacked websites can be monetized in different ways, but one of the most popular is to hijack traffic and redirect visitors toward scams and exploits.

We started looking at the latest injection campaign following the notes from Sucuri’s blog post about the Social Warfare zero-day stored XSS. According to log data, the automated exploit attempts to load content from a Pastebin paste, which can be seen below. The obfuscated code reveals one of the domains used by the threat actors:

Pastebin code snippet used in automated attacks against vulnerable plugins

Our crawlers identified a redirection scheme via the same infrastructure related to these recent plugin hacks. Compromised websites are injected with heavily obfuscated code that decodes to setforconfigplease[.]com (the same domain as found in the Pastebin code).

Obfuscated code injected into hacked site
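
To give a flavor of what unwinding this kind of injection involves (using a made-up character-code payload, not the actual script found on compromised sites), decoding might look like this:

# The character codes below are fabricated for illustration; real injections
# rotate domains and encodings constantly.
sample = [104, 116, 116, 112, 115, 58, 47, 47, 101, 120, 97, 109, 112,
          108, 101, 91, 46, 93, 99, 111, 109]

decoded = "".join(chr(c) for c in sample)
print(decoded)  # https://example[.]com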

The first layer of redirection goes to domains hosted on 176.123.9[.]52 and 176.123.9[.]53, which then perform a second redirect via a .tk domain. Denis from Sucuri has tracked the evolution and rotation of these domains over the past few days.

New domain used in the "Easy WP SMTP" and "Social Warfare" (and some other) attacks — redrentalservice[.]com — registered 2019-03-21. Replacement for setforconfigplease[.]com (registered on March 4). https://t.co/2RWVxhLrfb and https://t.co/lqse0IwR61

— Denis (@unmaskparasites) March 22, 2019

Based on our telemetry, the majority of users redirected in this campaign are from Brazil, followed by the US and France:

Top detections based on visitors’ country of origin

Scams, malvertising, and more

The goal of this campaign (and other similar ones) is traffic monetization. Threat actors get paid to redirect traffic from compromised sites to a variety of scams and other profit-generating schemes. Over the past few months, we have been following this active redirection campaign involving the same infrastructure described earlier.

Keeping track of any ongoing threat gives insight into the threat actor’s playbook—whether changes are big or small. Code may go through iterations, from clear text to obfuscated, or perhaps may contain new features.

While there are literally dozens of final payloads based on geolocation and browser type delivered in this campaign, we focused on a few popular ones that people are likely to encounter. By hijacking traffic from thousands of hacked websites, the crooks fingerprint and redirect their victims while trying to avoid getting blocked.

 

Traffic redirections by payload type

Browser lockers and tech support scams

Historically, we have seen this sub-campaign as one of the main purveyors of browser lockers, used by tech support scammers. New domains with the .tk TLD are generated every few minutes to act as redirectors to browlocks. Back in October 2018, Sucuri mentioned this active campaign abusing old tagDiv themes and unpatched versions of the Smart Google Code Inserter plugin.

Browser lockers continue to be a popular social engineering tool to scare people into thinking their computers are infected and locked up. While there is no real malware involved, there are clever bits of JavaScript that have given browser vendors headaches. The “evil cursor” is one of those tricks; it effectively prevents users from closing a tab or browser window, and has only recently been fixed.

Browlock urging victims to call fake Microsoft support

Ad fraud and clickjacking

One particular case we documented deals with ad fraud via decoy sites that look like blogs to display Google Ads. This fraudulent scheme was exposed back in August, showing how traffic from hacked sites could generate $20,000 in ad revenue per month.

However, in a twist implemented shortly afterward, the fraudsters turned the tables on users who attempted to close the ad, hijacking their mouse so that it clicks on the ad instead. Indeed, as you move your cursor toward the X, the ad banner shifts up, and rather than closing the ad, your click opens it.

The crooks use CSS code, dynamically appended to the page, that monitors the mouse cursor and reacts when it moves over the X. Timing is important: the click is captured a few milliseconds later, once the ad banner has shifted into place. These client-side tricks are implemented to maximize ad profits, since revenue generated from ad clicks is much higher than from impressions alone.

CSS code responsible for click fraud

Malvertising and pop-ups

There is no end to the number of malvertising schemes criminals can deploy. A particularly sneaky one abuses Chrome’s push notifications, a feature that is a rogue advertiser’s dream. It allows websites to pop notifications in the bottom-right corner of your screen even while you are not browsing the site in question. Those pop-ups tend to push snake-oil PC optimizers and adult webcam solicitations.

Fake video player tricking users to accept notifications

Form scrapers and skimmers

For a brief period of time, we saw the addition of a JavaScript scraper and what appeared to be a rudimentary skimmer in some traffic chains. It is unclear what the purpose was, unless it was some kind of experiment coupled with the regular .tk redirects.

Skimmers are most commonly found on e-commerce sites, in particular those running the Magento CMS. They are probably the most lucrative way to monetize a hacked site, unless, of course, there’s no user data to steal, in which case malicious redirects are second best.

Form scraper and skimmer identified in redirection infrastructure

Website traffic as a commodity

Website security is similar to computer security in that site owners are also exposed to zero-day exploits and must always patch. Yet without proactive protection (e.g., a web application firewall), and with site owners failing to roll out security updates in a timely manner, zero-days can be incredibly effective.

When critical vulnerabilities are discovered, it can be a matter of hours before exploitation in the wild is observed. Compromised websites turn into a commodity relied upon for various monetization schemes, which in turn feeds into the buying and selling of malicious traffic.

Malwarebytes users are protected against these scams, thanks to our web-blocking capabilities. For additional protection against browser lockers, forced extensions, and other scams, we recommend our browser extension.

Indicators of compromise (IOCs)

176.123.9[.]52

redrentalservice[.]com
setforconfigplease[.]com
somelandingpage[.]com
setforspecialdomain[.]com
getmyconfigplease[.]com
getmyfreetraffic[.]com

176.123.9[.]53

verybeatifulpear[.]com
thebiggestfavoritemake[.]com
stopenumarationsz[.]com
strangefullthiggngs[.]com

simpleoneline[.]online
lastdaysonlines[.]com
cdnwebsiteforyou[.]biz

The post Plugin vulnerabilities exploited in traffic monetization schemes appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (March 18 – 24)

Malwarebytes - Mon, 03/25/2019 - 15:46

Last week on Malwarebytes Labs, we touched on the susceptibility of hospitals to phishing attacks, password reuse, the risk of side-channel attacks against interactive TV shows, and Facebook’s new and out-of-character plan to promote privacy on its platform.

Other cybersecurity news
  • A study highlighted that 20 percent of Americans do not trust anyone with the protection of their data, suffer security fatigue, and want tighter controls over how others handle and protect their personal data. (Source: Help Net Security)
  • Epic Games found itself in hot water after multiple accusations that its Epic Games Launcher was scanning and collecting information about Steam users without their consent—a significant privacy red flag. The company promised to fix this. (Source: Bleeping Computer)
  • Miscreants used the tragic Boeing 737 Max crash to push spam containing a malicious .JAR file. This file installs a RAT called Houdini H-Worm and the Adwind information stealer. (Source: Bleeping Computer)
  • Meet Kiddle, a child-friendly search engine that is powered by Google Safe Search but is not affiliated with Google. (Source: Sophos’ Naked Security Blog)
  • A Google Photos vulnerability could have allowed hackers to track when, where, and with whom photos were taken. Good news: It’s now patched. (Source: Imperva Blog)
  • Formjacking, the stealing of information entered in forms, is on the rise. And companies should focus on it. (Source: IT World Canada)
  • Business email compromise (BEC)—or at least its core methodology—began moving from email to SMS. (Source: Agari Blog)
  • A malicious spam campaign pretending to originate from the Center for Disease Control and Prevention (CDC) contained news about a new flu pandemic. It also contained a GandCrab attachment. (Source: My Online Security)
  • Millions of users downloaded a compromised iPhone app that called to nearly two dozen malicious servers to serve malvertising to devices. (Source: SC Magazine)
  • Learn4Life, a recovery program for at-risk teens, is teaching students about network security—something they likely wouldn’t learn in a traditional high school. (Source: PR Newswire)

Stay safe, everyone!

The post A week in security (March 18 – 24) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Researchers go hunting for Netflix’s Bandersnatch

Malwarebytes - Fri, 03/22/2019 - 15:00

A new research paper from the Indian Institute of Technology Madras explains how popular Netflix interactive show Bandersnatch could fall victim to a side-channel attack.

In 2016, Netflix began adding TLS (Transport Layer Security) to its video streams to ensure strangers couldn’t eavesdrop on viewing habits. Essentially, videos on Netflix are now hidden away behind HTTPS—encrypted and compressed.

Previously, Netflix had run into some optimisation issues when trialling the new security boost, but they got there in the end—which is great for subscribers. However, this new research illustrates that even with such measures in place, snoopers can still make accurate observations about their targets.

What is Bandersnatch?

Bandersnatch is a 2018 film on Netflix that is part of the science fiction series Black Mirror, an anthology about the ways technology can have unforeseen consequences. Bandersnatch gives viewers a choose-your-own-adventure-style experience, repeatedly offering the option to do X or Y. Not all of the choices matter, but you’ll never be quite sure which ones will steer you to one of its 10 endings.

Charlie Brooker, the brains behind Bandersnatch and Black Mirror, was entirely aware of the new, incredibly popular wave of full motion video (FMV) games on platforms such as Steam [1], [2], [3]. Familiarity with Scott Adams text adventures and the choose your own adventure books of the ’70s and ’80s would also be a given.

No surprise, then, that Bandersnatch—essentially an interactive FMV game as a movie—became a smash hit. Also notable, continuing the video game link: It was built using Twine, a common method for piecing together interactive fiction in gaming circles.

What’s the problem?

Researchers figured out a way to determine which options were selected in any given play-through across multiple network environments. Browsers, networks, operating systems, connection types, and more were varied across the 100 people who took part in testing.

Bandersnatch offers two choices at multiple places throughout the story. There’s a 10-second window to make that choice. If nothing is selected, it defaults to one of the options and continues on.

Under the hood, Bandersnatch is divided into multiple pieces, like a flowchart. Larger, overarching slices of script go about their business, while within those slices are smaller fragments where storyline can potentially branch out.

This is where we take a quick commercial break and introduce ourselves to JSON.

Who is JSON?

He won’t be joining us. However, JavaScript Object Notation will.

Put simply, JSON is an easily readable format for sending data between servers and web applications. In fact, it more closely resembles a notepad file than a pile of obscure code.

In Bandersnatch, there is a set of answers considered to be the default flow of the story. That data is prefetched, allowing viewers who choose the default option, or do nothing, to stream continuously.

When a viewer reaches the point in the story where they must make a choice, a JSON file is triggered from the browser to let the Netflix server know. Do nothing in the 10-second window? Under the hood, the prefetched data continues to stream, and viewers continue their journey with the default storyline.

If the viewer chooses the other, non-default option, however, then the prefetched data is abandoned and a second, different type of JSON file is sent out requesting the alternate story path.

What we have here is a tale of two JSONs.

Although the traffic between the browser and Netflix’s servers is encrypted, researchers in this latest study were able to decipher which choices participants made 96 percent of the time by determining the number and type of JSON files sent.
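
As a toy illustration of the side channel (the request counts and byte sizes below are invented; the paper’s fingerprinting is considerably more sophisticated), an on-path observer could classify a choice roughly like this:

# Request counts and byte sizes are invented for illustration only.
OBSERVED_PROFILES = {
    "default_choice": {"requests": 1, "approx_bytes": 600},
    "non_default":    {"requests": 2, "approx_bytes": 1400},
}

def guess_choice(request_count, total_bytes):
    """Pick the profile closest to what was observed on the wire."""
    def distance(profile):
        return (abs(profile["requests"] - request_count) * 1000
                + abs(profile["approx_bytes"] - total_bytes))
    return min(OBSERVED_PROFILES, key=lambda name: distance(OBSERVED_PROFILES[name]))

print(guess_choice(request_count=2, total_bytes=1350))  # "non_default"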

Should we be worried?

This may not be a particularly big problem for Netflix viewers, yet. However, if threat actors could intercept and follow user choices using a similar side channel, they could build reasonable behavioral profiles of their victims.

For instance, viewers of Bandersnatch are asked questions like “Frosties or sugar-puffs?”, “Visit therapist or follow Colin?”, and “Throw tea over computer or shout at dad?”. The choices made could potentially reveal benign information, such as food and music preferences, or more sensitive intel, such as a penchant for violence or political leanings.

Just as we can’t second-guess everyone’s threat model (even for Netflix viewers), we also shouldn’t dismiss this. There are plenty of dangerous ways monitoring along these lines could be abused, whether the data is encrypted or not. Additionally, this is something most of us going about our business probably haven’t accounted for, much less know what to do about.

What we do know is that it’s important that content providers—such as gaming studios or streaming services—affected by this research account for it, and look at ways of obfuscating data still further.

After all, a world where your supposedly private choices are actually parseable feels very much like a Black Mirror episode waiting to happen.

The post Researchers go hunting for Netflix’s Bandersnatch appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Are hackers gonna hack anymore? Not if we keep reusing passwords

Malwarebytes - Thu, 03/21/2019 - 15:00

Enterprises have a password problem, and it’s one that is making the work of hackers a lot easier. From credential stuffing to brute force and password spraying attacks, modern hackers don’t have to do much hacking in order to compromise internal corporate networks. Instead, they log in using weak, stolen, or otherwise compromised credentials.

Take the recent case of Citrix as an example. The FBI informed Citrix that a nation-state actor had likely gained access to the company’s internal network, news that came only months after Citrix forced a password reset because it had suffered a credential-stuffing attack.

“While not confirmed, the FBI has advised that the hackers likely used a tactic known as password spraying, a technique that exploits weak passwords. Once they gained a foothold with limited access, they worked to circumvent additional layers of security,” Citrix wrote in a March 6th blog post.

Password problems abound

While a recent data privacy survey conducted by Malwarebytes found that an overwhelming majority (96 percent) of the 4,000 cross-generational respondents said online privacy is crucial, nearly a third (29 percent) admitted to reusing passwords across multiple accounts.

Survey after survey shows that passwords are the bane of enterprise security. In a recent survey conducted by Centrify, 52 percent of respondents said their organizations do not have a password vault, and one in five still aren’t using MFA for administrative privileged access.

“That’s too easy for a modern hacker,” said Torsten George, Cybersecurity Evangelist at Centrify. “Organizations can significantly harden their security posture by adopting a Zero Trust Privilege approach to secure the modern threatscape and granting least privilege access based on verifying who is requesting access, the context of the request, and the risk of the access environment.”

How hackers attack without hacking

The problem with password reuse is that attackers don’t need advanced tactics to gain a foothold in your network. “In many cases, first stage attacks are simple vectors such as password spraying and credential stuffing and could be avoided with proper password hygiene,” according to Daniel Smith, head of threat research at Radware.

When cybercriminals are conducting password spraying attacks, they typically scan an organization’s infrastructure for externally-facing applications and network services, such as webmail, SSO, and VPN gateways.

Because these interfaces typically have strict timeout features, malicious actors will opt for password spraying over brute force attacks, which allows them to avoid being timed out or triggering an alert to administrators.

“Password spraying is a technique that involves using a limited set of passwords like Unidesk1, test, C1trix32 or nsroot that are discovered during the recon phase and used in attempted logins for known usernames,” Smith said. “Once the user is compromised, the actors will then employ advanced techniques to deploy and spread malware to gain persistence in the network.”

Cybercriminals have also been targeting cloud-based accounts by leveraging the Internet Message Access Protocol (IMAP) for password-spray attacks, according to Proofpoint. One tricky hitch with IMAP is that it doesn’t support two-factor authentication, so that layer of protection is effectively bypassed when authenticating over the protocol, said Justin Jett, director of audit and compliance for Plixer.

“Because password-spraying attacks don’t generate an alarm or lock out a user account, a hacker can continually attempt logging in until they succeed. Once they succeed, they may try to use the credentials they found for other purposes,” Jett said.
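
From the defender’s side, a very rough sketch of spotting that pattern in failed-login logs (many accounts, one source, roughly one attempt each; the log format and threshold here are assumptions) could look like this:

from collections import defaultdict

# Hypothetical failed-login records: (source_ip, username)
failed_logins = [
    ("203.0.113.9", "alice"), ("203.0.113.9", "bob"),
    ("203.0.113.9", "carol"), ("203.0.113.9", "dave"),
    ("198.51.100.4", "alice"), ("198.51.100.4", "alice"),
]

accounts_by_source = defaultdict(set)
for ip, user in failed_logins:
    accounts_by_source[ip].add(user)

# Brute force hammers one account; spraying touches many accounts once each.
for ip, users in accounts_by_source.items():
    if len(users) >= 4:  # threshold is illustrative; tune to your environment
        print(f"Possible password spraying from {ip}: {len(users)} accounts targeted")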

Tightening up password security

The reality is that guessing passwords is easier for hackers than going up against technology. If we’re being honest, there is a strong chance that an attacker is already in your network, given the widespread problem of password reuse. Because passwords are used to authenticate users, any conversation about augmenting password security has to look at the bigger picture of authentication strategies.

On the one hand, it’s true that password length and complexity are critical to creating strong passwords, but making each password unique has its challenges. Password managers have proven effective at addressing the problem of remembering credentials for multiple accounts, and these tools are an important piece of an overall password security strategy.

“The pervasiveness of password stuffing, brute force and other similar attacks shows that password length is no longer a deterrent,” said Fausto Oliveira, principal security architect at Acceptto.

Instead, Oliveira said enabling continuous authentication on privileged employee, client, and consumer accounts is one preemptive approach that can stop an attacker from gaining access to sensitive information—even if they breach the system with a brute force attack.

“It is not about a simple 123456, obvious P@55word password versus a complicated passphrase, but recognizing that all of your passwords are compromised. This includes those passwords you have not yet created, you just don’t know it yet.”

Passwords continue to be a problem because their creation and maintenance is largely the responsibility of the user. There’s no technology to change human behavior, which only exacerbates the issues of password reuse and overall poor password hygiene.

Organizations that want to tighten up their password security need to look seriously at more viable solutions than trusting users, which may include eliminating passwords altogether.

The post Are hackers gonna hack anymore? Not if we keep reusing passwords appeared first on Malwarebytes Labs.

Categories: Techie Feeds
