Techie Feeds

Hacking with AWS: incorporating leaky buckets into your OSINT workflow

Malwarebytes - Fri, 09/13/2019 - 20:44

Penetration testing is often conducted by security researchers to help organizations identify holes in their security and fix them before cybercriminals have the chance. While the researcher has no malicious intent, part of the job is to think and act like a cybercriminal would when hacking, or attempting to breach, an enterprise network.

Therefore, in this article, I will review Amazon AWS buckets as an avenue for successful penetration tests. The case study I’m using is from a reconnaissance engagement I conducted against a business. I will specifically focus on how I was able to use AWS buckets as an additional avenue to enrich my results and obtain more valuable data during this phase.

NOTE: For the safety of the company, I will not be using real names, domains, or files obtained. However, the concept will be clearly illustrated despite the lack of specifics.

The goal of this article is to present an alternative or an additional method for professionals conducting pen-tests against an organization. In addition, I hope that it may also serve as a warning for companies deciding to use AWS to host private data and a reminder to secure potentially leaky buckets.

What is an AWS bucket?

Amazon Simple Storage Service (S3) gives an individual or business the ability to store and access content from Amazon’s cloud. The concept is not new. However, because businesses use AWS buckets not only to store and share files between employees but also to host Internet-facing services, we have seen a wealth of private data exposed publicly.

The types of data we have discovered range from server backups and backend web scripts to company documents and contracts. Files within S3 are organized into “buckets,” which are named logical containers accessible by a static URL.

A bucket is typically considered public if any user can list the contents of the bucket, and private if the bucket’s contents can only be listed or written by certain S3 users.

Checking if a bucket is public or private is easy. All buckets have a predictable and publicly accessible URL. By default this URL will be either of the following:

s3.amazonaws.com/[bucket_name]/
or
[bucket_name].s3.amazonaws.com/
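Because those URLs are predictable, the public/private check can be scripted. Here is a minimal sketch in Python using only the standard library; the bucket name and status labels are illustrative, and real-world responses can vary by region and configuration:

```python
import urllib.request
import urllib.error

def bucket_urls(bucket_name):
    """The two default, publicly predictable URL styles for an S3 bucket."""
    return [
        f"https://s3.amazonaws.com/{bucket_name}/",
        f"https://{bucket_name}.s3.amazonaws.com/",
    ]

def check_bucket(bucket_name, timeout=5):
    """Classify a bucket from its anonymous HTTP response.

    200 = anyone can list the contents (public);
    403 = the bucket exists but denies anonymous listing (private);
    anything else = treat as missing/unreachable.
    """
    try:
        with urllib.request.urlopen(bucket_urls(bucket_name)[0], timeout=timeout):
            return "public"
    except urllib.error.HTTPError as err:
        return "private" if err.code == 403 else "missing"
    except urllib.error.URLError:
        return "missing"
```

A pen-tester would typically feed `check_bucket` a list of candidate names rather than calling it one bucket at a time.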

Pen-test workflow: hacking phases

Let’s begin by talking about the first phase of a penetration test: reconnaissance. The purpose of this phase is to gather as much information about a target as possible in order to build an organized list of data that will be used in future hacking phases (scanning, enumeration, and gaining access).


In general, some of the data which pen-testers hope to obtain during this phase is as follows:

  • Names, email addresses, and phone numbers of employees (to be used for phishing)
  • Details of systems used by the business (domains, IPs, and other services to be enumerated in future phases)
  • Files containing usernames, passwords, leaked data, or other access-related items
  • Spec sheets, contracts, vendor documents, infrastructure outlines, notes
  • Customer data (side channel compromise of a trusted third party can be just as valuable as the target themselves)
Alternate infrastructure

All of the items above are examples of things we can and have found while scouring AWS buckets. But first, before we get into the information discovered, let’s talk about finding the buckets themselves and the role that an S3 bucket search can play in a pen-test.

An easy first step in any recon phase is to enumerate the primary domain via brute force or any other method, hoping to find subdomains that may be hosting services within a company. This is pretty standard procedure, but a business does not always host all of its data or resources internally. Often there are unofficial IPs hosted offsite, serving a secondary role to the primary business or possibly hosting developer resources or even file storage.
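As a sketch of what that first enumeration step might look like, the following uses only Python's standard library; the wordlist and domain are purely illustrative, and real engagements use much larger lists and purpose-built tools:

```python
import socket

# A tiny illustrative wordlist; real brute-force lists contain thousands of entries.
COMMON_SUBDOMAINS = ["www", "mail", "vpn", "dev", "staging", "files", "backup"]

def candidates(domain, words=COMMON_SUBDOMAINS):
    """Build candidate hostnames to test against the target's DNS."""
    return [f"{word}.{domain}" for word in words]

def resolve(hostname):
    """Return the hostname's IP address, or None if it does not resolve."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# Usage sketch: live = {h: resolve(h) for h in candidates("example.com") if resolve(h)}
```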

While the company’s internal services, such as mail, websites, firewalls, security, and documentation, may be hosted within a subdomain, there are several reasons a company might still use offsite or separate servers. For this reason, it is a good idea to expand your search beyond the primary domain, using Google hacks and keywords to look for related services or domains.

External resource example

One specific example of this was a PACS server data leak from one of UCLA’s medical centers. While this server was technically operated by a UCLA med center, it was not an official service of UCLA, so to speak.

Translation: This server was not part of the UCLA domain. It happened to be independently hosted by one of the residing med center employees, yet in a more obscure way, it was still related to the company. This is an example of the sort of side channel opportunities available to criminals.

Finding leaky buckets

Moving forward, an Amazon S3 bucket is a prime example of one such “unrelated” service not directly tied to the business’ infrastructure. My main purpose for introducing this is to give pen-testers a new hacking avenue in addition to Google hacks. Although a Google search on a company can lead you to its AWS bucket, it is more effective to search the open buckets directly.

There are a number of tools that can be used to discover wide-open buckets. The one I’d like to highlight is the web application Grayhat Warfare. Of all the tools I use, it is the most user friendly and most easily accessible.

The application is quite intuitive to use.

Let’s take a look at this application and see how a pen-tester might try to use it to discover a bucket owned by an organization.

There are a few ways in which pen testers can uncover unsecured data belonging to a target company. One is by searching for filenames you might expect to find from the target organization. For example, knowing some of the services or products the enterprise produces, you might search those specific product names, or company_name.bak.

Additionally, having completed some other recon, incorporating usernames into this search can lead to results. In general, this is the part of the process that requires creativity and thinking outside of the box.
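As a rough illustration, recon findings can be combined into candidate search terms programmatically; the company name, product name, and suffix list below are all hypothetical examples:

```python
from itertools import product

def search_terms(company, products=(), usernames=()):
    """Combine recon findings into keyword candidates for bucket search tools.

    Pairs every base term (company name, products, usernames) with common
    file suffixes seen in exposed buckets.
    """
    bases = [company, *products, *usernames]
    suffixes = ["", ".bak", ".zip", ".sql", "-backup", "-dev"]
    return sorted({f"{base}{suffix}" for base, suffix in product(bases, suffixes)})

# Hypothetical usage:
terms = search_terms("acme", products=["tunefinder"], usernames=["jdoe"])
```

Each generated term would then be fed into a tool like Grayhat Warfare by hand or via script.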

Hacking with AWS case study

Now let’s dig into the case study to see these recon methods in action. In this specific case, the target was an entertainment company that produces content for the music industry. From some Google searching, I happened to come across the fact that they had developed an Android app. The app name in this case had no relation to the actual company name; these are exactly the discoveries you need to make to expand your searches of leaky buckets.

Using the name of the app and searching it within Grayhat Warfare, I was lucky enough to find an AWS bucket containing a file with the name of the app. One important thing to note is that the bucket name and URL were also completely different and unrelated to the name of the company.

This unrelated naming scheme is often by design. Rather than creating an obvious name, business infrastructure architects often name servers according to themes. You might see planets, Greek gods, Star Wars characters, or favorite bands. This makes searching for services a bit more obscure for a hacker.

Once the app filename was found within a bucket, it was simply a matter of manually looking through the rest of the files to verify that the bucket in fact belonged to the target organization. This led me to find even more info to use for a deeper recon search.

One amazing find on this server was actually a zip file with the app’s source code. This contained database IPs, usernames, and passwords. This is something that may never have been discovered using traditional recon methods since the IP happened to be an offsite DreamHost account. It was completely untied to any of the company’s resources.

OSINT standards plus AWS buckets = more data

The main point I wanted to illustrate from my test case is how hacking with AWS can be incorporated into the pen-test workflow as an iterative fingerprinting cycle. Using Google hacks, Shodan, and social networks is standard practice for open source intelligence (OSINT). We use these traditional methods to gather as much data as possible; then, once we have found as much as we can, we can blast that data against bucket search tools to retrieve deeper info.
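That iterative cycle can be sketched as a simple loop in which each round's discoveries feed the next round's searches. The `search` and `extract` callbacks here are placeholders for whatever tools you actually use (Google hacks, Shodan, bucket search):

```python
def iterative_recon(seed_terms, search, extract, max_rounds=3):
    """Recursive recon loop: discoveries become next round's search terms.

    `search(term)` returns findings for a term (e.g. matching bucket files);
    `extract(finding)` returns new terms discovered inside a finding.
    """
    known = set(seed_terms)
    frontier = set(seed_terms)
    findings = []
    for _ in range(max_rounds):
        new_terms = set()
        for term in frontier:
            for finding in search(term):
                findings.append(finding)
                new_terms.update(extract(finding))
        frontier = new_terms - known   # only chase terms we haven't tried
        if not frontier:
            break
        known |= frontier
    return findings
```

The `max_rounds` cap simply keeps the recursion from running forever if every finding keeps producing new terms.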

From that point, pen-testers can restart the whole search process with this new data. Recon can often be recursive, one result leading to another, leading to another. However, incorporating AWS bucket searching into your pen-test workflow can provide data that may not have been obtained using the other methods.

If any readers have other hacking or search tools they have come across or alternative methods for recon, please feel free to mention them below in the comments.

The post Hacking with AWS: incorporating leaky buckets into your OSINT workflow appeared first on Malwarebytes Labs.

Categories: Techie Feeds

YouTube ordered to cough up $170M settlement over COPPA infraction

Malwarebytes - Thu, 09/12/2019 - 20:15

Last week, the Federal Trade Commission (FTC) announced that it has required Google and YouTube to pay a settlement fee totaling $170 million after its video-sharing platform was found violating the Children’s Online Privacy Protection Act (COPPA). The complaint was filed by the FTC and the New York Attorney General, with the former set to receive the penalty amounting to $136 million and the latter $34 million.

According to the FTC’s press release, this penalty “is by far the largest amount the FTC has ever obtained in a COPPA case since Congress enacted the law in 1998.”

Comparison of privacy cases won against Google (Courtesy: The FTC)

Note that the complaint doesn’t involve the YouTube Kids app, a YouTube service dedicated to showcasing child-directed content only. Although the app still displays ads, albeit to a limited degree, it doesn’t track child data for this purpose.

This win over Google and YouTube follows on the heels of several complaints filed by the FTC in 2019 aiming to protect children’s privacy online. In May, three dating apps were removed from both the Apple and Google marketplaces after the FTC alleged that the apps allowed 12-year-old kids to access them.

i-Dressup.com, a dress-up game website, agreed to settle charges in April after the FTC filed a complaint, alleging that the operators of the website failed to ask for parental consent when collecting data from children under 13.

In a similar case in February, the FTC reached a settlement with Musical.ly, now popularly known as TikTok, after its operators were found to collect data from young children, including their full names, email addresses, and other personally identifiable information (PII), without their parents’ consent.

A summary of YouTube’s violation

Since its inception, YouTube has touted itself as a video-sharing platform for general audience content. It was created for adults and not intended for children under 13 years of age.

Through the years, however, YouTube has become a constant companion of young children. In fact, according to market research company Smarty Pants, YouTube has been recognized as the most beloved brand among US kids aged 6–12 for four straight years since 2016. [1][2][3][4]

It’s also exceedingly difficult to defend the argument that “YouTube is not for children” when a sizable amount of child-directed content is already present—and continues to grow and rake in billions of views—on the platform.

YouTube’s business model is dependent on collecting personal information and persistent identifiers (i.e., cookies) from users for behavioral or personalized advertising. Child-directed channel owners who chose to monetize their content allowed YouTube to collect data of their target audience: children under 13, the age group YouTube said it wasn’t built for.

It’s easy to assume that YouTube may not have the means to know which data belongs to which age group, else they would have acted on it. However, according to the complaint [PDF], YouTube did know that they were collecting data from children.

More surprising, even, is the fact that Google used YouTube’s brand popularity among young kids as part of their marketing tactic, selling themselves to manufacturers and brands of child-centric products and services as “the new Saturday Morning Cartoons” among others.

How Google sells YouTube to third-parties offering goods and services to children. (Courtesy: The FTC)

Despite this knowledge, YouTube never attempted to notify parents about their data collection process, nor did they ask parents for consent in the data collection. In COPPA’s eyes, these are enormous red flags.

Good news: positive change is at hand

The settlement agreed upon between the FTC and Google/YouTube includes monetary relief—the $170 million payout in this case—and three injunctive reliefs, defined as acts or prohibitions the companies must complete as ordered by the court. As per the press release, the injunctions are as follows:

  • Google and YouTube must “develop, implement, and maintain a system that permits channel owners to identify their child-directed content on the YouTube platform so that YouTube can ensure it is complying with COPPA.”
  • Google and YouTube must “notify channel owners that their child-directed content may be subject to the COPPA Rule’s obligations and provide annual training about complying with COPPA for employees who deal with YouTube channel owners.”
  • Google and YouTube are prohibited from violating the COPPA Rule, and the injunction “requires them to provide notice about their data collection practices and obtain verifiable parental consent before collecting personal information from children.”
A bird’s-eye view of what Google and YouTube should be doing in the months to come (Courtesy: The FTC)

Content creators are also culpable, as they are responsible for letting the platform know what kind of content they’re producing and posting. The FTC has noted that creators who fail to inform YouTube that their content is aimed at children could see that content removed and could face other civil penalties.

Susan Wojcicki, CEO of YouTube, took to its official blog to personally update readers by expanding on these changes and reminding users that the company has been actively making changes within the video platform from Q4 2017.

“We’ve been significantly investing in the policies, products, and practices to help us do this,” wrote Wojcicki. “From its earliest days, YouTube has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased. We’ve been taking a hard look at areas where we can do more to address this…”

Here is a list of expanded changes that YouTube will undergo in the coming months:

  • In four months, data from anyone viewing content directed at children will be treated as coming from a child regardless of the viewer’s actual age.
  • Personalized ads will no longer be served within child-directed content. (Note that this doesn’t mean that no ads will be shown.)
  • Some YouTube features, like comments and notifications, will be unavailable on such content.
  • YouTube will be using machine learning to find child-directed content.
  • YouTube will further promote their YouTube Kids app to parents by running a campaign on YouTube itself and creating a desktop version of the app.
  • YouTube will be providing support to family and kid content creators during the transition phase.
  • YouTube is launching a $100 million fund for creators to make content that is both original and thoughtful. The fund will be disbursed over the next three years.
General dissatisfaction with results

While some may see this as a historic win for the FTC and the New York Attorney General, others view it as another exercise in avoiding due punishment for another big company breaking the law.

Case in point: Commissioner Rohit Chopra, one of the two commissioners who voted against the settlement, pointed out in his statement [PDF] the same mistakes the Commission made in a similar Facebook case: “[There is] no individual accountability, insufficient remedies to address the company’s financial incentives, and a fine that still allows the company to profit from its lawbreaking.”

Chopra also noted inconsistencies in the way the FTC handles cases involving small companies versus large firms: the former are penalized excessively while the latter get off easy. James P. Steyer, founder and CEO of Common Sense Media, agrees with this point.

“The settlement is nothing more than a slap on the wrist for a company as large as Google, and does not enforce meaningful change to truly protect children’s data and privacy,” Steyer said in an official statement.

However, he also recognized that YouTube’s stated reforms are moving the dialogue forward. “YouTube’s commitment to enacting specific reforms on the platform is also a step in the right direction, but they must now put resources behind their statement. Kids and families must be a top priority in both Washington, DC, and in Silicon Valley.”

Commissioner Rebecca Slaughter, the other dissenting party, raised in her own statement [PDF] that the injunctions are incomplete, for they lack the orders and/or mechanisms—a “technological backstop”—that ensure content creators are telling the truth in properly designating channels of child-directed content.

Slaughter isn’t the only one to mention what’s lacking in the settlement. Speaking to Angelique Carson, editor for the International Association of Privacy Professionals (IAPP), on The Privacy Advisor Podcast, Linnette Attai, president of global compliance firm PlayWell and a COPPA expert, expressed her concerns.

“We’re not seeing the rigorous third-party auditing that we’ve seen, traditionally, in COPPA settlements. We’re not seeing requirements to delete data, which is something that you will see in very early COPPA settlements but seems to have fallen off as an option for the FTC in recent years,” she said. “It’s one thing to say, ‘You cannot use this data.’ It’s quite another to say, ‘You have to delete it,’ which ensures that you cannot accidentally use it.”

V for vigilance

Every person has data, and in this day and age, it’s passed around regularly, oftentimes nonchalantly, to those who may or may not appreciate its value. If users are unfazed about big and small companies crossing lines to monetize personal data, perhaps a stark reminder that cybercriminals are after your PII, too, will make you seriously consider how you approach your data privacy.

Children’s data is being targeted by threat actors, too. In fact, when it comes to fraud, cybercriminals prefer data belonging to kids over adults. This is why parents and carers must be extra vigilant in keeping their data—and the data of their little ones—secure. Talking to your kids about what it’s like online [PDF], knowing the ways you can protect your child’s privacy, and reporting companies to the FTC for potential COPPA rights violations are three big steps you can take to not only improve your child’s privacy posture but their security posture as well.

Stay safe, everyone! Especially your kids.

The post YouTube ordered to cough up $170M settlement over COPPA infraction appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Five years later, Heartbleed vulnerability still unpatched

Malwarebytes - Thu, 09/12/2019 - 15:00

The Heartbleed vulnerability was introduced into the OpenSSL crypto library in 2012. It was discovered and fixed in 2014, yet today—five years later—there are still unpatched systems.

This article will provide IT teams with the information they need to decide whether or not to apply the Heartbleed fix. We caution, however: leaving it unpatched could leave your users’ data exposed to future attacks.

What is the Heartbleed vulnerability?

Heartbleed is a code flaw in the OpenSSL cryptography library. This is what it looks like:

memcpy(bp, pl, payload); /* copies 'payload' bytes from the request into the response, trusting the attacker-supplied length */

In 2014, a vulnerability was found in OpenSSL, which is a popular cryptography library. OpenSSL provides developers with tools and resources for the implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. 

Websites, emails, instant messaging (IM) applications, and virtual private networks (VPNs) rely on SSL and TLS protocols for security and privacy of communication over the Internet. Applications with OpenSSL components were exposed to the Heartbleed vulnerability. At the time of discovery, that was 17 percent of all SSL servers.

Upon discovery, the vulnerability was given the official identifier CVE-2014-0160, but it’s more commonly known by the name Heartbleed. The name was invented by an engineer from Codenomicon, one of the parties that discovered the vulnerability.

The name Heartbleed is derived from the source of the vulnerability—a buggy implementation of the RFC 6520 Heartbeat extension to the SSL and TLS protocols in OpenSSL.

Heartbleed vulnerability behavior

The Heartbleed vulnerability weakens the security of the most common Internet communication protocols (SSL and TLS). Websites affected by Heartbleed allow potential attackers to read their memory, which means encryption keys could be found by savvy cybercriminals.

With the encryption keys exposed, threat actors could gain access to the credentials—such as names and passwords—required to hack into systems. From within the system, depending on the authorization level of the stolen credentials, threat actors can initiate more attacks, eavesdrop on communications, impersonate users, and steal data.

How Heartbleed works


The Heartbleed vulnerability damages the security of communication between SSL and TLS servers and clients because it weakens the Heartbeat extension.

Ideally, the Heartbeat extension is supposed to secure the SSL and TLS protocols by validating requests made to the server. It allows a computer on one end of the communication to send a Heartbeat Request message.

Each message contains a payload—a text string that contains the transmitted information—and a number that represents the memory length of the payload—usually as a 16-bit integer. Before providing the requested information, the heartbeat extension is supposed to do a bounds check that validates the input request and returns the exact payload length that was requested.

The flaw in the OpenSSL heartbeat extension created a vulnerability in the validation process. Instead of doing a bounds check, the Heartbeat extension allocated a memory buffer without going through the validation process. Threat actors could send a request and receive up to 64 kilobytes of any of the information available in the memory buffer.

Memory buffers are temporary memory storage locations, created for the purpose of storing data in transit. They may contain batches of data types, which represent different stores of information. Essentially, a memory buffer keeps information before it’s sent to its designated location. 

A memory buffer doesn’t organize data—it stores it in batches. One memory buffer may contain sensitive and financial information, as well as credentials, cookies, website pages and images, digital assets, and any data in transit. When threat actors exploit the Heartbleed vulnerability, they trick the Heartbeat extension into providing them with all of the information available within the memory buffer.
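A toy model in Python, not OpenSSL's actual code, can make the over-read concrete. The payload and "adjacent memory" contents below are invented for illustration:

```python
def heartbeat_response(payload, claimed_length, adjacent_memory, patched=False):
    """Toy model of the Heartbeat echo.

    `adjacent_memory` stands in for whatever happened to sit next to the
    request in the server's memory buffer: keys, cookies, credentials.
    """
    if patched and claimed_length > len(payload):
        return b""  # bounds check: silently discard, per RFC 6520
    buffer = payload + adjacent_memory  # request stored beside other data
    return buffer[:claimed_length]     # trusts the claimed length

secret = b"user=admin&pass=hunter2"

# An honest request echoes back exactly what was sent...
assert heartbeat_response(b"hat", 3, secret) == b"hat"
# ...but an inflated length over-reads into adjacent memory.
leak = heartbeat_response(b"hat", 26, secret)
```

With the bounds check in place, the same inflated request returns nothing at all, which is exactly what the real fix enforces.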

The Heartbleed fix

Bodo Moeller and Adam Langley of Google created the fix for Heartbleed. They wrote code that tells the Heartbeat extension to ignore any Heartbeat Request message asking for more data than the payload actually contains.

Here’s an example of a Heartbleed fix:

if (1 + 2 + payload + 16 > s->s3->rrec.length) return 0; /* silently discard per RFC 6520 sec. 4 */

How the Heartbleed vulnerability shaped OpenSSL as we know it

The discovery of the Heartbleed vulnerability created worldwide panic. Once the fixes were applied, attention turned to the causes of the incident. Close scrutiny of OpenSSL revealed that this widely popular library was maintained solely by two men with a shockingly low budget.

This finding spurred two positive initiatives that changed the landscape of open-source:

  • Organizations realized the importance of supporting open-source projects. There’s only so much two people can do with their personal savings. Organizations, on the other hand, can provide the resources needed to maintain the security of open-source projects.
  • To help finance important open-source projects, the Linux Foundation started the Core Infrastructure Initiative (CII). The CII chooses the most critical open-source projects, those deemed essential to the vitality of the Internet and other information systems. It receives donations from large organizations and offers them to open-source initiatives in the form of programs and grants.

As with any change-leading crisis, the Heartbleed vulnerability also carried a negative side-effect: the rise of vulnerability brands. The Heartbleed vulnerability was discovered at the same time by two entities—Google and Codenomicon.

Google chose to disclose the vulnerability privately, sharing the information only with OpenSSL contributors. Codenomicon, on the other hand, chose to spread the news to the public. They named the vulnerability, created a logo and a website, and approached the announcement like a well-funded marketing event.

In the following years, many of the disclosed vulnerabilities were given an almost celebrity-like treatment, with PR agencies building them up into brands, and marketing agencies deploying branded names, logos, and websites. While this can certainly help warn the public against zero-day vulnerabilities, it can also create massive confusion.

Nowadays, security experts and software developers are dealing with vulnerabilities in the thousands. To properly protect their systems, they need to prioritize vulnerabilities. That means deciding which vulnerability requires patching now, and which could be postponed. Sometimes, branded vulnerabilities are marketed as critical when they aren’t.

When that happens, not all affected parties have the time, skills, and resources to determine the true importance of the vulnerability. Instead of turning vulnerabilities into buzzwords, professionals could better serve the public by creating fixes.

Heartbleed today

Today, five years after the disclosure of the Heartbleed vulnerability, it still exists in many servers and systems. Current versions of OpenSSL, of course, were fixed. However, systems that didn’t (or couldn’t) upgrade to the patched version of OpenSSL are still affected by the vulnerability and open to attack.

For threat actors, a system still affected by Heartbleed is a prize, and the work of finding one is easily automated. Once a threat actor finds a vulnerable system, exploiting the vulnerability is relatively simple. When that happens, the threat actor gains access to information and/or credentials that can be used to launch further attacks.

To patch or not to patch

The Heartbleed vulnerability is a security bug that was introduced into OpenSSL due to human error. Due to the popularity of OpenSSL, many applications were impacted, and threat actors were able to obtain a huge amount of data. 

Following the discovery of the vulnerability, Google employees found a solution and provided OpenSSL contributors with the code that fixed the issue. OpenSSL users were then instructed to upgrade to the latest OpenSSL version. 

Today, however, the Heartbleed vulnerability can still be found in applications, systems, and devices, even though fixing it is a matter of upgrading the OpenSSL version rather than editing the codebase. If you are concerned that you may be affected, you can test your systems for the Heartbleed vulnerability, then patch to eliminate the risk, or mitigate if the device is unable to support patching.
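As a first-pass triage, you can compare a server's reported OpenSSL version against the affected range: 1.0.1 through 1.0.1f shipped with the bug, and 1.0.1g fixed it. A sketch follows; note that many Linux distributions backport fixes without changing the version string, so an actual crafted-heartbeat test (for example, nmap's ssl-heartbleed script) is more reliable than version matching alone:

```python
# OpenSSL 1.0.1 through 1.0.1f shipped with Heartbleed; 1.0.1g fixed it.
VULNERABLE_VERSIONS = {"1.0.1"} | {f"1.0.1{letter}" for letter in "abcdef"}

def openssl_vulnerable(version):
    """Rough triage from a version string, e.g. parsed from `openssl version`."""
    return version in VULNERABLE_VERSIONS
```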

Any server or cloud platform should be relatively easy to patch. However, IoT devices may require more advanced mitigation techniques, because they are sometimes unable to be patched. At this point, we recommend speaking with your sysadmin to determine how to mitigate the issue.

The post Five years later, Heartbleed vulnerability still unpatched appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Vital infrastructure: emergency services

Malwarebytes - Wed, 09/11/2019 - 19:29

Organizations in the emergency services sector are there to help the public when situations get out of hand or are too much to handle, whether because the problem requires special tools and the skills to use them, or because assistance is needed at short notice. We are all familiar with the three main types of organizations that fall into this category:

  • Police departments
  • Fire departments
  • Emergency medical services

But there are other similar organizations that can be put in the same category, for example, bomb squads, SWAT teams, HAZMAT teams, and sea rescue teams. These and similar groups exist in both the public and private sectors.

One of the prerequisites for these types of first responders is that they react swiftly, accurately, and with coordinated effort. Besides regular drills, this requires a lot of automation and computerized equipment, which is what makes it all the worse when one of these organizations gets hindered by malware.

Ransomware doesn’t care whether it’s locking up a system full of family pictures or one that is filled with police files. And some malware authors have shown that they make use of the urgency to get certain systems back online, and up the ante accordingly.

Police

Police departments and sheriff’s offices alike store a lot of confidential information about victims and suspects, information that could give threat actors a good angle for a phishing campaign or extortion. Another delicate matter on police records is evidence. Evidence could become inadmissible if there is even a suspicion of illegal access to the system it was stored on. So these systems should at all times be kept inaccessible from the Internet to ward off information stealers, ransomware, and remote access Trojans (RATs).

A Texas police department learned this the hard way when it lost 1TB of critical CCTV data to a ransomware attack. The chief of police decided not to pay the ransom even though the department did not have adequate backups, which led to a total loss of the data.

In 2017, ransomware infected 70 percent of storage devices that held recorded data from D.C. police surveillance cameras eight days before President Trump’s inauguration, forcing major citywide re-installation efforts.

Another law enforcement agency hit by a ransomware attack was the Lauderdale County Sheriff’s Department in Meridian, Mississippi, on May 28, 2018. It fell victim to a variant of the Dharma/Crysis ransomware, and most of its systems were taken down by the attack. In Lauderdale County’s case, attackers exploited an old, forgotten password to deliver the ransomware.

Emergency medical services

When you are in urgent need of medical attention or need to be transported to a medical facility in a hurry, you count on emergency medical services to come to the rescue. What the paramedics need most in such cases is trustworthy lines of communication to provide and receive updates about the medical emergency or the traffic conditions. The communications equipment in question can be diverse and include phones, radios, computers, and dispatch systems.

What you don’t want is some unnamed malware to cripple your communications systems. This happened to the St John Ambulance service in New Zealand. Mobile data and paging services were worst affected by the problem, suggesting that some sort of bandwidth-hogging worm overloaded the system. Dispatch staff normally send information on jobs to the ambulance crew via on-board mobile data terminals. Because of the malware, they had to call ambulance stations or the mobile phones of crew members instead.

Fire departments

The same communications dependency is certainly true for fire departments, whether they are a public fire department, or a company fire brigade trained to deal with specific dangers. They need to know all the relevant information about the situation, and they want to know it before they get there so they can anticipate and plan their actions accordingly.

One small slip, however, and an entire fire department can fall victim to a malware attack, which can cripple internal communications and data storage or compromise sensitive information for both department members and everyday citizens.

The Honolulu Fire Department personnel inadvertently downloaded a ransomware computer virus that infected about 20 of their computers in 2016, forcing the department to temporarily shut down all its administrative computers. The department’s emergency response was thankfully not affected because their computer-aided dispatch system and the computers in the firetrucks operate on a separate network.

Emergency services infrastructure

In some countries all the public emergency services use the same overhead infrastructure to communicate with each other and to receive calls. You really want these systems to be robust and redundant, but nevertheless sometimes they fail or get compromised.

Not attributed to malware but to a software bug, the Dutch emergency number—112, the Dutch equivalent of 911—was unreachable for hours. As it turned out, the backup system ran the exact same software, including the bug, which rendered the backup quite useless in this scenario. The individual services responded quickly by providing the public with alternatives, but in retrospect, the service interruption was held responsible for two deaths.

In 2017, hackers managed to set off emergency sirens throughout the city of Dallas on a very early Saturday morning. Not only does the public lose trust in the system when false alarms occur, the consequences of a false alarm coinciding with a real emergency could have been disastrous. The mayor used the hack as a reason to upgrade and better safeguard the city’s technology infrastructure.

Precautions

What can we take away from the examples we mentioned?

  • Backup systems not only need to be easily deployed, but also need to be truly independent.
  • Even when the budget is tight, these systems need to be prioritized.
  • Separated networks can save your bacon, especially if you can keep them detached from the world wide web.
  • Backup systems are not the only backups you need. Important files need to be backed up as well.

And when it concerns sensitive and important data like evidence or investigation records, extra care is needed.

  • Systems should always be up and running so they are available for queries, but only to those with the proper authority. Backup systems should be adequate and separate.
  • Apply the principle of least privilege, making sure that users, systems, and processes only have access to those resources that are necessary to perform their duties.
  • The systems need a form of guaranteed integrity to ensure that the data entered into a system remain untampered with, and it should be possible to trace back any changes when needed.

A problem that most emergency services have in common is a limited budget and often the lack of a dedicated staff to handle IT security. That money is often spent on other necessary means—all understandable in a sector where human lives are regularly at stake. But recent events in the US have demonstrated all too well that emergency services need to be well orchestrated. There is no lack of dedication from the people doing these jobs, so they should be allowed to work with the best—and safest—equipment. Equipment that we can trust to be secured.

Stay safe, everyone!

The post Vital infrastructure: emergency services appeared first on Malwarebytes Labs.

Categories: Techie Feeds

300 shades of gray: a look into free mobile VPN apps

Malwarebytes - Tue, 09/10/2019 - 16:41

The times, they are a changin’. Where users once felt free to browse the Internet anonymously, post about their innermost lives on social media, and download apps with frivolity, folks are playing things a little closer to the vest these days.

Nowadays, users are paying more attention to privacy and how their personal information is transmitted, processed, stored, and shared. Nearly every day, they are bombarded with news of data breaches, abuses or neglect of personal information by tech giants, and the growing sophistication of cybercriminal tactics and scams.

No wonder Internet users are on a hunt for certain tools that will give them added privacy—and not just security—while surfing the web, either at home, in the office, or on the go.

While some might go for Tor or a proxy server to address their need for privacy, many users today embrace virtual private networks, or VPNs.

Depending on who you ask, a VPN is any and all of these:

  • a tunnel that sits between your computing device and the Internet
  • a tool that helps you stay anonymous online, preventing government surveillance, spying, and excessive data collection by big companies
  • a service that encrypts your connection and masks your true IP address with one belonging to your VPN provider
  • a piece of software or app that lets you access private resources (like company files on your work intranet) or sites that are usually blocked in your country or region

Not all VPNs are created equal, however, and this is true regardless of which platform you use. Of the hundreds of VPN apps already on the market, a notable number are categorized as unsafe—especially those that are free.

In this post, we’ll take a closer look at free VPNs for mobile devices—a category many say has the highest number of unsafe apps.

But first, the basics.

How do VPNs work?

Rob Mardisalu of TheBestVPN illustrated a quick diagram of how VPNs work—and it’s pretty much as simple as it looks.

A simple VPN illustration (Courtesy of TheBestVPN)

Normally, using a VPN requires the download and installation of an app or file that we call a VPN client. Installing and running the client creates an encrypted tunnel that connects the user’s computing device to the network.

Most VPN providers ask users to register with an email address and password, which would be their account credentials, and offer a method of authentication—either via SMS, email, or QR code scanning—to verify that the user is indeed who they say they are.

Once fully registered and set up, the user can now browse the public Internet as normal, but with enhanced security and privacy.

Let’s say the user conducts a search on their browser or directly visits their bank’s official website. The VPN client then encrypts the query or data the user enters. From there, the encrypted data goes to the user’s Internet Service Provider (ISP) and then to the VPN server. The server then connects to the public Internet, pointing the user to the query results or banking website.

Regardless of which data is sent, the destination website always sees the origin of the data as the VPN server and its location—and not the user’s own IP address and location. Neat, huh?
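The flow above can be sketched as a toy simulation. This is purely illustrative: the names, addresses, and the SHA-256-based keystream are all made up for demonstration and are not real cryptography; actual VPNs use vetted protocols such as OpenVPN or WireGuard.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a toy keystream by chaining SHA-256 (NOT real crypto)."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor(data: bytes, key: bytes) -> bytes:
    """XOR data with the keystream; applying it twice restores the input."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The client encrypts the request before it ever reaches the ISP.
key = b"shared-session-key"
request = b"GET https://bank.example/balance"
ciphertext = xor(request, key)

# The ISP only forwards opaque bytes addressed to the VPN server.
seen_by_isp = {"source": "user-ip", "dest": "vpn-server-ip", "payload": ciphertext}

# The VPN server decrypts and forwards, substituting its own address.
seen_by_site = {"source": "vpn-server-ip", "payload": xor(seen_by_isp["payload"], key)}

assert seen_by_isp["payload"] != request          # ISP can't read the query
assert seen_by_site["source"] == "vpn-server-ip"  # site sees the VPN's IP
assert seen_by_site["payload"] == request         # content arrives intact
```

The assertions capture the three properties described above: the ISP sees only ciphertext, the destination sees only the VPN server’s address, and the content itself arrives unchanged.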

What VPNs don’t do

However comforting using VPNs can be, realize that they can’t be all things privacy and security for all users. There are certain functions they cannot or will not complete—and this is not limited to the kind of VPN you use.

Here are some restrictions to be aware of. VPNs don’t:

  • Offer full anonymity. Keeping you anonymous should be inherent in all available VPNs on the market. However, achieving full anonymity online using VPNs is nearly impossible. There will always be traces of data from you that VPNs collect, even those that don’t keep logs—and by logs, we mean browsing history, IP address(es), timestamps, and bandwidth.
  • Connect you to the dark web. A VPN in and of itself won’t connect you to the dark web should you wish to explore it. An onion browser, like the Tor browser, can do this for you. And many are espousing the use of both technologies—with the VPN masking the Tor traffic, so your ISP won’t know that you’re using Tor—when surfing the web.
  • Give users full access to their service for free. Forever. Some truly legitimate VPNs offer their services for free for a limited time. And once the trial phase expires, users must decide on whether they would pay for this VPN or look for something else free.
  • Protect you from law enforcement when subpoenaed. VPNs will not allow themselves to be dragged into court if law enforcement has reason to believe that you are engaging in unlawful activities online. When VPN providers are summoned to provide evidence of their user activities, they have zero compelling reason not to comply.
  • Protect you from yourself. No anti-malware company worth its salt would recommend users visit any website they want, open every email attachment, or click all the links under the sun because their security product protects them. Being careful online and avoiding risky behaviors, even when using a security product, is still an important way to protect against malware infection or fraud attempts. Users should apply the same security vigilance when using VPNs.

Who uses VPNs and why?

What started out as an exclusive product for businesses to ensure the security of files shared among colleagues from different locations has become one of the world’s go-to tools for personal privacy and anonymity.

Average Internet users now have access to more than 300 VPN brands on the market, and they can be used for various purposes.

According to the latest findings on VPN usage by market research company GlobalWebIndex, the top three reasons why Internet users around the world would use a VPN service are:

  1. to access location-restricted entertainment content
  2. to use social networks and/or news services (which may also have location restrictions)
  3. to maintain anonymity while browsing the web

Mind you, these aren’t new. These reasons have consistently scored high in many VPN usage studies published before.

What motivates you to use a VPN? Here are the top reasons. (Courtesy of GlobalWebIndex)

Users from emerging markets are the top users of VPN worldwide, particularly Indonesia at 55 percent, India at 43 percent, the UAE at 38 percent, Thailand at 38 percent, Malaysia at 38 percent, Saudi Arabia at 37 percent, the Philippines at 37 percent, Turkey at 36 percent, South Africa at 36 percent, and Singapore at 33 percent.

The report also noted that among the 40 countries studied, motivational factors for using VPNs vary. Below is a summary table of this relationship:

The majority of the countries, including the US, use VPNs to access better entertainment content. While this reveals that not every VPN user is concerned about their privacy, we can glean from the graph which ones are. (Courtesy of GlobalWebIndex)

Mobile VPN apps are most popular

A couple more interesting takeaways from the report: A majority of younger users are surfing the Internet with VPNs, especially on mobile devices. The details are as follows:

  • A vast majority of Internet users aged 16-24 (74 percent) and 25-34 (67 percent) use VPNs.
  • Users access the Internet using VPNs on mobile devices, which in this case includes smart phones (69 percent) and tablets (33 percent).
  • 32 percent use VPNs on mobile devices nearly daily compared to 29 percent at this frequency on a PC or laptop.

With so many (mostly younger) users adopting both mobile and desktop VPNs to view paid content or beef up privacy, it’s no wonder that Android and iOS users often opt for free mobile VPN apps instead of paid products belonging to more established names.

But parsing through hundreds of brands is no easy feat. And the more you investigate, the more difficult it is to choose. For the average user, this is too much work when all they want to do is watch Black Mirror on Netflix. And that’s likely why so many unsafe apps make their way onto the market and are installed on users’ mobile devices.

“Free” doesn’t mean “risk-free”

When it comes to free stuff on the Internet, the majority of us know that we don’t really get something for nothing. Most of the time, we pay with our data and information. If you think this doesn’t apply to free mobile VPN apps, think again.

“There is a significant problem with free VPN apps in Google Play and Apple’s App Store,” says Simon Migliano, head of research at Top10VPN, in an email interview. He further explains: “[V]ery few of the VPN providers offer any transparency about their fitness to operate such a sensitive service. The privacy policies are largely junk, while 25 percent of apps suffer DNS leaks and expose your identity. The majority are riddled with ad trackers and are glorified adware at best, spyware at worst.”

In December 2018, Migliano published an investigation report on the top 20 free VPN apps on Android that appear in UK and US Google Play searches. According to the report, the vast majority (86 percent) of these VPN apps, which have millions of downloads, have privacy policies that are deemed unacceptable: they use generic privacy policy templates without VPN-specific clauses; they track user activity or share user data with third parties; and they offer little detail on logging policies, an absence that “could lull people into [a] false sense of security,” to name a few issues. Privacy is just one of the many concerns Top10VPN unearthed.

Top free VPN apps aimed at iPhone users have similar problems. In fact, in a follow-up investigation report, 80 percent of these were considered non-compliant with Guideline 5.4, a new addition to Apple’s App Store Review Guidelines introduced a month prior.

Apple dedicated this subsection in the updated guide for VPN apps (emphasis ours).

Top10VPN also noted that several of the top 20 VPN apps on both Android and iOS have ties to China.

There were other investigations in the past about mobile VPN apps, both free and commercial. Thanks to them, we’ve seen improvements over the years, yet some of these concerns persist. Also note the severe lack of user awareness, which has helped such questionable free VPN apps earn high ratings, encouraging more downloads and possibly keeping them at the top of the rankings.

In a 2016 in-depth research report [PDF] published by the Commonwealth Scientific and Industrial Research Organization (CSIRO) along with the University of New South Wales and UC Berkeley, researchers revealed that some mobile VPN apps, both paid and free, leak user traffic (84 percent for IPv6 and 66 percent for DNS), request sensitive data from users (80+ percent), employ zero traffic encryption (18 percent), and, in more than a third of cases (38 percent), use malware or malvertising.

Traffic leaking was a problem not exclusive to free VPN apps. Researchers from Queen Mary University of London and Sapienza University of Rome had found that even commercial VPN apps were guilty of the same problem. They also found that the DNS configurations of these VPN apps could be bypassed using DNS hijacking tactics. Details of their study can be viewed in this Semantic Scholar page.

Free VPNs behaving badly

Research findings are one thing, but organizations and individuals finding and sharing their experiences of the problems surrounding free VPNs makes all the technical stuff on paper become real. Here are examples of events where free VPNs were (or continue to be) under scrutiny and called out for their misbehavior.

The Hotspot Shield complaint. Mobile VPN app developer AnchorFree, Inc. was in the limelight a couple of years ago—and not for a good reason. The Center for Democracy & Technology (CDT), a digital rights advocacy group, had filed a complaint [PDF] with the FTC for “undisclosed and unclear data sharing and traffic redirection occurring in Hotspot Shield Free VPN that should be considered unfair and deceptive trade practices under Section 5 of the FTC Act.”

The complaint said that Hotspot Shield was found to inject JavaScript code into users’ browsers for advertising and tracking, thus, also exposing them to monitoring from law enforcement and other entities. CDT’s complaint led to a denial of the claims by the app developer and the ensuing formation of its annual transparency report.

HolaVPN caught red-handed. HolaVPN is one of the more recognizable free mobile VPN apps. In 2015, a spammer going by the pseudonym Bui launched a spam attack against 8chan; it later emerged that they were able to do so with the help of Luminati, a known network of proxies and a sister company to HolaVPN. Lorenzo Franceschi-Bicchierai noted in his Motherboard piece that Luminati’s website boasted of having “millions” of exit nodes. Of course, these nodes were all free HolaVPN users.

In December 2018, AV company Trend Micro revealed that they had found evidence of the former KlipVip cybercrime gang (known for spreading fake AV software, or rogueware) using Luminati to conduct what researchers believe is a massive-scale ad click fraud campaign.

Innet VPN and Secnet VPN malvertising. Last April, Lawrence Abrams of BleepingComputer alerted iPhone users of some mobile VPNs taking a page out of fake AV’s book in ad promotion: scare tactics. Users clicking a rogue ad on popular sites found themselves faced with pop-up messages claiming that their mobile device was either infected or they were being tracked.

Unfortunately, that was not the first time this happened—and may not be the last. Our own Jérôme Segura saw first-hand a similar campaign exactly a year before the Bleeping Computer report, but it was pushing users to download a VPN called MyMobileSecure.

VPNs are not inherently evil

In spite of inarguable evidence of the shady side of free mobile VPN apps, the fact is not all of them are bad. This is why it’s crucial for mobile users who are currently using or looking into using a free VPN service to conduct research on which brands they can trust with their data and privacy. No one wants an app that promises one thing but does the complete opposite.

When users insist on using a free VPN service, Migliano suggests that they sign up for a service based on the freemium model, as these platforms don’t carry advertising, which keeps privacy intact. He also offered helpful questions users should ask themselves when picking the best VPN for their needs.

“Look for information about the company. Are they are [sic] a real company with real people, with an address, phone number and all the things that normal companies tend to have? Do they have a VPN-specific privacy policy that explains their logging and data retention policies? Have they taken steps to minimize the risk of data misuse, such as deleting all server logs in real time for example? Do they have proper customer support?”

Also watch out for VPN reviews. They can be disguised adverts.

Finally, users have the choice to go for a paid service, which is a business model a majority of well-established and legitimate mobile VPN services follow. Or they can create their own. As not everyone is savvy enough to do the latter, the former is the next logical choice. Migliano agrees.

“The best thing you can do is pay for a VPN,” he said. “It costs money to operate a VPN network and so if you aren’t paying directly, your browsing data is being monetized. This is clearly a cruel irony given that a VPN is intended to protect a user’s privacy.”

The post 300 shades of gray: a look into free mobile VPN apps appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (September 2 – 8)

Malwarebytes - Mon, 09/09/2019 - 16:01

Last week on Malwarebytes Labs, we looked at a smart social engineering toolkit, delved into TrickBot tampering with trusted texts, and explained five ways to help keep remote workers safe.

Other cybersecurity news

Stay safe!

The post A week in security (September 2 – 8) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

When corporate communications look like a phish

Malwarebytes - Mon, 09/09/2019 - 15:36

Many organizations will spend significant sums of money on phishing training for employees. Taking the form of regular awareness training, or even simulated phishes to test employee awareness, this is a common practice at larger companies.

However, even after training, a consistent baseline of employees will still click a malicious link from an unknown sender. Today, we’ll look at a potential reason why that might be: corporate communications often look like phishes themselves, causing confusion between legitimate and illegitimate senders.

Corporate communications templates

Below is an email template found on a Microsoft technet blog, used as an example of how a sysadmin can communicate with users.

https://blogs.technet.microsoft.com/smeems/2017/12/13/protecting-email-ios-android/

While well meaning, and providing users with pretty good instructions, this template falls afoul of phishing design in a few ways.

  • The large “Action Required” in red with an exclamation point creates a false sense of urgency disproportionate to the information presented.
  • There is no way provided to authenticate the message as legitimate corporate communications.
  • The email presents all information at once on the same page, irrespective of relevance to an individual user.
  • The link for assistance is at the bottom and suggests a generic mailbox rather than referencing a person to contact.

So what’s the harm here? Surely a user can ignore some over-the-top design and take in the intended message? One problem is that, per Harvard Business Review, the average office worker receives 120 emails per day. When operating under consistent information overload, that worker is going to take cognitive shortcuts to reduce interactions with messages not relevant to them.

So training the employee to respond reflexively to “Action Required” can cue them to do the same with malicious emails. Including walls of texts in the body of the email reinforces scanning for a call to action (especially links to click), and a lack of message authentication or human assistance ensures that if there’s any confusion about safety, the employee will err on the side of not asking for help.

Essentially, well-meaning communications with these design flaws train an overloaded employee to exhibit bad behaviors—despite anti-phishing training—and discourage seeking help. It’s no wonder that, according to the FBI, losses from business email compromise (BEC) have increased by 1,300 percent since January 2015, and now total over $3 billion worldwide.

With this background in mind, what happens when the employee gets a message like this?

Of note is that both phishes are more accessible to a skimming reader than the Microsoft corporate notification, and the calls to action are less dramatic. The PayPal phish in particular has a passable logo and mimics the language of an actual account alert reasonably well.

A closer reader would spot incongruities right away: the first phish would be caught instantly. For the second, the sender domain does not belong to PayPal, the link (if you copy it and paste it into a text editor) goes to an infected WordPress site rather than PayPal, and the boxed numbers with instructions look weird. But an employee receiving 120 emails a day is not a close reader. The phishes are “good enough.”
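The sender-domain check described above is also easy to automate. Below is a hedged sketch of such a check; the paypal.com allow-list and the example URLs are purely illustrative:

```python
from urllib.parse import urlparse

def link_matches_brand(url: str, brand_domain: str) -> bool:
    """Return True only if the link's host is the brand's domain
    or a subdomain of it (e.g. www.paypal.com for paypal.com)."""
    host = (urlparse(url).hostname or "").lower()
    brand = brand_domain.lower()
    return host == brand or host.endswith("." + brand)

# A real PayPal link passes; a lookalike prefix or hijacked blog does not.
print(link_matches_brand("https://www.paypal.com/signin", "paypal.com"))               # True
print(link_matches_brand("https://paypal.com.account-check.example/x", "paypal.com"))  # False
print(link_matches_brand("https://hacked-blog.example/wp/paypal", "paypal.com"))       # False
```

Note that the second URL starts with "paypal.com" but its actual host is an unrelated domain, which is exactly the trick a skimming reader misses.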

A safer alternative

So how do we do better? Let’s look at a notification email from AirBnB.

First and foremost, the notification is brief. The entire content of relevance to the user is communicated in a single sentence, made large and bold for readability up front. What follows are details that let the end user authenticate the transaction, listed in probable descending order of interest to the user.

Next is a clear path to obtain assistance, voiced in language suggestive of a person at the other end. Last is a brief explanation of why the user should consider the communication legitimate, with multiple use cases provided to set expectations.

The myth of the stupid user

Industry discussion of phishing and click-through rates centers largely around how awful and ignorant users are. Solutions proffered generally concern themselves with restricting email functionality, “effective” shame-and-blame punishments for clicking the malicious link, and repetitive phish training that neither aligns with how users engage with email, nor provides appropriate tools for responding to ambiguous emails, like the notification template above.

All of this is a waste of time and budget.

If an organization has a “stupid user” problem, a more effective start to address it would be looking at design cues in that user’s environment. How many emails are they getting a day, and of those, how many look functionally identical? How many aren’t really relevant or useful to their job?

When network defenders send out communications to the company, do they look or feel like phishes? If the user gets a sketchy email, who’s available to help them? Do they know who that person is, if anyone? Structuring employees’ email loads along the lines below will both “smarten up” employees quickly and cost nothing. Employees should therefore:

  • Have a light enough burden to engage critically with messages
  • Get corporate comms tailored to their job requirements
  • Have an easy way to authenticate that trusted senders are who they say they are
  • Be able to get help with zero friction

So before organizations engage in more wailing and gnashing of teeth over the “stupid user” and the cost of training and prevention, think for a long while about how communication happens in your company, where the pain points are, and how you can optimize that workflow.

After all, wouldn’t you like to get less email, too?

The post When corporate communications look like a phish appeared first on Malwarebytes Labs.

Categories: Techie Feeds

5 simple steps to securing your remote employees

Malwarebytes - Wed, 09/04/2019 - 14:06

As remote working has become standard practice, employees are working from anywhere and using any device they can to get the job done. That means repeated connections to unsecured public Wi-Fi networks—at a coffee shop or juice bar, for example—and higher risks for data leaks from lost, misplaced, or stolen devices.

Think about it.

Let’s say your remote employee uses his personal smart phone to access the company’s cloud services, where he can view, share, and make changes to confidential documents like financial spreadsheets, presentations, and marketing materials. Let’s say he also logs into company email on his device, and he downloads a few copies of important files directly onto his phone.

Now, imagine what happens if, by accident, he loses his device. Worse, imagine if he doesn’t use a passcode to unlock his phone, making his device a treasure trove of company data with no way to secure it.

Recent data shows these scenarios aren’t just hypotheticals—they’re real risks. According to a Ponemon Institute study, from 2016 through 2018, the average number of cyber incidents involving employee or contractor negligence has increased by 26 percent.

To better understand the challenges and best practices for businesses with remote workforces, Malwarebytes teamed up with IDG Connect to produce the white paper, “Lattes, lunch, and VPNs: securing remote workers the right way.” In the paper, we show how modern businesses require modern cybersecurity, and how modern cybersecurity means more than just implementing the latest tech. It also means implementing good governance.

Below are a few actionable tips from our report, detailing how companies should protect both employer-provided and personal devices, along with securing access to company networks and cloud servers.  

If you want to dive deeper and learn about segmented networks, VPNs, security awareness trainings, and how to choose the right antivirus solution, you can read the full report here.

1. Provide what is necessary for an employee to succeed—both in devices and data access.

More devices means more points of access, and more points of access means more vulnerability. While it can be tempting to offer every new employee the perks of the latest smart phone—even if they work remotely—you should remember that not every employee needs the latest device to succeed in their job.

For example, if your customer support team routinely assists customers outside the country, they likely need devices with international calling plans. If your sales representatives are meeting clients out in the field, they likely need smart devices with GPS services and mapping apps. Your front desk staff, on the other hand, might not need smart devices at all.

To ensure that your company’s sensitive data is not getting inadvertently accessed by more devices than necessary, provide your employees with only the devices they need.

Also, in the same way that not every employee needs the latest device, not every employee needs wholesale access to your company’s data and cloud accounts, either.

Your marketing team probably doesn’t need blanket access to your financials, and the majority of your employees don’t need to rifle through your company’s legal briefs—assuming you’re not in any kind of legal predicament, that is.

Instead, evaluate which employees need to access what data through a “role-based access control” (RBAC) model. The most sensitive data should only be accessible on a need-to-know basis. If an employee has no use for that data, or for the platform it is shared across, then they don’t need the login credentials to access it.
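As an illustration, the core of an RBAC check is just a role-to-resources mapping with deny-by-default semantics. The roles and resource names below are hypothetical examples, not a prescription:

```python
# Hypothetical role -> resources mapping; anything not listed is denied.
ROLE_PERMISSIONS = {
    "finance":   {"financials", "invoices"},
    "marketing": {"campaign-assets", "brand-guides"},
    "legal":     {"legal-briefs", "contracts"},
}

def can_access(role: str, resource: str) -> bool:
    """Grant access only if the role explicitly lists the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("finance", "financials"))    # True
print(can_access("marketing", "financials"))  # False: need-to-know only
```

The deny-by-default design matters: an unknown role or an unlisted resource gets no access, which is exactly the need-to-know posture described above.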

Remember, the more devices you offer and the more access that employees are given, the easier it is for a third party or a rogue employee to inappropriately acquire data. Lower your risk of misplaced and stolen data by giving your employees only the tools and access they need.

2. Require passcodes and passwords on all company-provided devices.

Just like you use passcodes and passwords to protect your personal devices—your laptop, your smart phone, your tablet—you’ll want to require any employee that uses an employer-provided device to do the same.

Neglecting this simple security step produces an outsized vulnerability. If an unsecured device is lost or stolen, every confidential piece of information stored on that device, including human resources information, client details, presentations, and research, is now accessible by someone outside the company.

If your employees also use online platforms that keep them automatically logged in, then all of that information becomes vulnerable, too. Company emails, worktime Slack chats, documents created and shared on Dropbox, even employee benefits information, could all be wrongfully accessed.

To keep up with the multitude of workplace applications, software, and browser-based utilities, we recommend organizations use password managers with two-factor authentication (2FA). This not only saves employees from having to remember dozens of passwords, but also provides more secure access to company data.

3. Use single sign-on (SSO) and 2FA for company services.

Like we said above, the loss of a company device sometimes leaks more than just locally-stored data; network and/or cloud-based data that can be accessed from the device is also at risk.

To limit this vulnerability, implement an SSO solution when employees want to access the variety of your available platforms.

Single sign-on offers two immediate benefits. One, your employees don't need to remember a series of passwords for every application, from the company's travel request service to its intranet homepage. Two, you can set up an SSO service to require a secondary form of authentication—often a text message sent to a separate mobile device with a unique code—when employees sign in.

By utilizing these two features, even if your employee has their company device stolen, the thief won’t be able to log into any important online accounts that store other sensitive company data.

Two of the most popular single sign-on providers for small and medium businesses are Okta and OneLogin.

4. Install remote wiping capabilities on company-provided devices.

So, your devices have passwords required, and your company’s online resources also have two-factor authentication enabled. Good.

But what happens if an employee goes turncoat? The above security measures help when a device is stolen or lost, but they do little when the threat comes from inside and the insider already has all the necessary credentials to plunder company files.

It might sound like an extreme case, but you don’t have to scroll far down the Google search results of “employee steals company data” to find how often this happens.

To limit this threat, you should install remote-wiping capabilities on your company-provided devices. This type of software often enables companies to not just wipe a device that is out of physical reach, but also to locate it and lock out the current user.

Phone manufacturer-provided options, like Find my iPhone on Apple devices and Find my Mobile on Samsung devices, let device owners locate a device, lock its screen, and erase all the data stored locally.

5. Implement best practices for a Bring Your Own Device (BYOD) policy.

When it comes to remote workers, implementing a Bring Your Own Device policy makes sense. Employees often prefer using mobile devices and laptops that they already know how to use, rather than having to learn a new device and perhaps a new operating system. Further, the hardware costs to your business are clearly lower.

But you should know the risks of having your employees only accomplish their work on their personal devices.

Like we said above, if your employee loses a personal device that they use to store and access sensitive company data, then that data is at risk of theft and wrongful use. Also, when employees rely on their personal machines to connect to public, unsecured Wi-Fi networks, they could be vulnerable to man-in-the-middle attacks, in which unseen threat actors can peer into the traffic that is being sent and received by their machine.

Further, while the hardware costs for using BYOD are lower, sometimes a company spends more time ensuring that employees’ personal devices can run required software, which might decrease the productivity of your IT support team.

Finally, if a personal device is used by multiple people—which is not uncommon between romantic partners and family members—then a non-malicious third party could accidentally access, distribute, and delete company data.

To address these risks, you could consider implementing some of the following best practices for the personal devices that your employees use to do their jobs:

  • Require the encryption of all local data on personal devices.
  • Require a passcode on all personal devices.
  • Enable “Find my iPhone,” “Find my Mobile,” or similar features on personal devices.
  • Disallow jailbreaking of personal devices.
  • Create an approved device list for employees.

It’s up to you which practices you want to implement. You should find a balance between securing your employees and preserving the trust that comes with a BYOD policy.

Takeaways

Securing your company’s remote workforce requires a multi-pronged approach that takes into account threat actors, human error, and simple forgetfulness. By using some of the methods above, we hope you can keep your business, your employees, and your data that much safer.

The post 5 simple steps to securing your remote employees appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (August 26 – September 1)

Malwarebytes - Tue, 09/03/2019 - 19:02

Last week on Malwarebytes Labs, we analysed the Android xHelper trojan, we wondered why the Nextdoor app would send out letters on behalf of their customers, reported about a study that explores the clickjacking problem across top Alexa-ranked websites, wondered how to get the board to invest in higher education cybersecurity, and shared our view on the discovery of unprecedented new iPhone malware.

Other cybersecurity news
  • Malware was discovered in a Google Play listed PDF-maker app that had over 100 million downloads. (Source: Techspot)
  • Insurance companies are fueling a rise in ransomware attacks by telling their customers to take the easy way to solve their problems. (Source: Pro Publica)
  • Hackers are actively trying to steal passwords from two widely used VPNs using unfixed vulnerabilities. (Source: ArsTechnica)
  • A new variant of the Asruex Backdoor targets vulnerabilities that were discovered more than six years ago in Adobe Acrobat, Adobe Reader, and Microsoft Office software. (Source: DarkReading)
  • In a first-ever crime committed from space, a NASA astronaut has been accused of accessing mails and bank accounts of her estranged spouse while aboard the International Space Station (ISS). (Source: TechWorm)
  • Command and control (C2) servers for the Emotet botnet appear to have resumed activity and deliver binaries once more. (Source: BleepingComputer)
  • A security researcher has found a critical vulnerability in the blockchain-based voting system Russian officials plan to use next month for the 2019 Moscow City Duma election. (Source: ZDNet)
  • The French National Gendarmerie announced the successful takedown of the wide-spread RETADUP botnet, remotely disinfecting more than 850,000 computers worldwide. (Source: The Hacker News)
  • The developers behind TrickBot have modified the banking trojan to target customers of major mobile carriers, researchers have reported. (Source: SCMagazine)
  • A coin-mining malware infection previously only seen on Arm-powered IoT devices has made the jump to Intel systems. (Source: The Register)

Stay safe!

The post A week in security (August 26 – September 1) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

TrickBot adds new trick to its arsenal: tampering with trusted texts

Malwarebytes - Tue, 09/03/2019 - 15:26

Researchers from Dell Secureworks saw a new feature in TrickBot that allows it to tamper with the web sessions of users who have certain mobile carriers. According to a blog post that they published early last week, TrickBot can do this by “intercepting network traffic before it is rendered by a victim’s browser.”

As you may recall, TrickBot, a well-known banking Trojan we detect as Trojan.TrickBot, was born from the same threat actors behind Dyreza, the credential-stealing malware our own researcher Hasherazade dissected back in 2015. Secureworks named the developers behind TrickBot as Gold Blackburn.

TrickBot rose to prominence when it rivaled Emotet and became the number one threat for businesses in the last quarter of 2018.

Before it took yet another step up its evolutionary ladder, TrickBot already had an impressive repertoire of features, such as a dynamic webinject it uses against financial institution websites; a worm module; a persistence technique using Windows' Scheduled Tasks; the ability to steal data from Microsoft Outlook, cookies, and browsing history; the means to target point-of-sale (PoS) systems; and the capability to spread via spam messages and move laterally within an affected network via the EternalBlue, EternalRomance, or EternalChampion exploits.

Now, more recently, the same webinject feature is used against the top three US-based mobile carriers: Verizon Wireless, T-Mobile, and Sprint. Augmentation to accommodate attacks against users of these companies was added to TrickBot on August 5, August 12, and August 19, according to Dell Secureworks.

How does the attack work?

When users of affected systems visit the legitimate websites of Verizon Wireless, T-Mobile, or Sprint, TrickBot intercepts the response from the official servers and passes it on to the threat actors' command-and-control (C&C) server, jump-starting its dynamic webinject feature. The C&C server then injects scripts—specifically, HTML and JavaScript (JS)—into the affected user's web browser, altering what the user sees and doesn't see before the web page is rendered. For example, certain text, warning indicators, and form fields may be removed or added, depending on what the threat actors are trying to achieve.
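To make the mechanics concrete, here is a deliberately simplified, defanged sketch of what such an injection amounts to: splicing an extra field into a form's HTML before the browser renders it. This is our illustration, not TrickBot's actual code, and the field name is invented:

```javascript
// Defanged illustration of a webinject: splice an extra form field into page
// HTML before it reaches the browser. The field name is a made-up example.
function injectPinField(html) {
  const pinField = '<label>PIN</label><input type="password" name="account_pin">';
  // Insert the new field just before the form's closing tag.
  return html.replace("</form>", pinField + "</form>");
}
```

A page that originally asked only for a username and password now also asks for a PIN, which is precisely the kind of change the researchers observed on the carrier sites.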

Dell Secureworks researchers were able to capture proof of certain changes TrickBot makes on the original pages of mobile carrier sites.

One can tell the differences here, but users who are not aware and in the middle of signing in may not be able to tell that something is amiss. (Courtesy: Dell Secureworks)

Above is a side-by-side comparison of Verizon Wireless's sign-in page before (image on the right) and after (image on the left) TrickBot tampered with it. Aside from some missing text, notice the newly added fields, specifically those asking for PINs.

In the case of Sprint, the change is more subtle and quite seamless: an additional PIN form displays once users successfully sign in with their username and password.

The sudden targeting of mobile phone PINs suggests that threat actors using TrickBot are showing interest in getting involved with certain fraud tactics like port-out fraud and SIM swap, according to the researchers.

Port-out fraud happens when threat actors call their target's mobile carrier to request that the target's number be switched, or ported, over to a new network provider. SIM swapping, or SIM hijacking, works in a similar fashion, but instead of changing providers, the threat actor requests a new SIM card from the carrier that they can put in their own device.

Either attack causes all calls, MMS, and SMS messages intended for you to be sent to the threat actor instead. And if the target uses text-based two-factor authentication (2FA) on their online accounts, the threat actor can easily intercept company-generated messages to gain access to those accounts. This results in account takeover (ATO) fraud.

Such a scam is typically carried out when threat actors have already gotten hold of their target's credentials and wish to circumvent 2FA.

How to protect yourself from TrickBot?

So as not to reinvent the wheel, we implore you, dear Reader, to go back and check our post entitled TrickBot takes over as top business threat wherein we outlined remediation steps that businesses (and consumers alike) can follow. This post also has a section on preventative measures—ways one can lessen the likelihood of TrickBot infection in endpoints—starting with regular employee education and awareness campaigns on the latest tactics and trends about the threat landscape.

Note that Malwarebytes automatically detects and removes TrickBot without user intervention.

I think I may have fallen victim to this. What now?

The best action to take is to call your mobile carrier to report the fraud, have your number blocked, and consider requesting a new number. You can also report the scammers or fraudsters to the FTC.

Go ahead and change the passwords of all online accounts that are tied to your phone number.

You might also want to consider using stronger authentication methods, such as time-based one-time password (TOTP) 2FA—Authy and Google Authenticator come to mind—for accounts that hold extremely sensitive information about you, your loved ones and friends, and your business or employees.

Enable a PIN on your mobile carrier account as well.

Lastly, familiarize yourself with the ways you can limit the possibility of a port out or SIM swap attack happening again. WIRED produced a brilliant story on how to protect yourself against a SIM swap attack while Brian Krebs over at KrebsOnSecurity has a piece on how to fight port out scams.

As always, stay safe, everyone!

The post TrickBot adds new trick to its arsenal: tampering with trusted texts appeared first on Malwarebytes Labs.

Categories: Techie Feeds

New social engineering toolkit draws inspiration from previous web campaigns

Malwarebytes - Tue, 09/03/2019 - 15:15

Some of the most common web threats we track have a social engineering component. Perhaps the more popular ones are those encountered via malvertising, or hacked websites that push fraudulent updates.

We recently identified a website compromise with a scheme we had not seen before; it’s part of a campaign using a social engineering toolkit that has drawn over 100,000 visits in the past few weeks.

The toolkit, which we dub Domen, is built around a detailed client-side script that acts as a framework for different fake update templates, customized for both desktop and mobile users in up to 30 languages.

Loaded as an iframe from compromised websites (most of them running WordPress) and displayed on top of the page as an additional layer, it entices victims to install so-called updates that instead download the NetSupport remote administration tool. In this blog, we describe its tactics, techniques, and procedures (TTPs), which remind us of some past and current social engineering campaigns.

Fake Flash Player update

The premise looks typical of many other social engineering toolkit templates we’ve come across before. Here, users are tricked into downloading and running a Flash Player update:

Figure 1: Fake Flash Player update notification

Note that the domain wheelslist[.]net belongs to a legitimate website that has been hacked and where an iframe from chrom-update[.]online is placed as a layer above the normal page:

Figure 2: Deobfuscated code found on compromised site that loads malicious iframe

Clicking the UPDATE or LATER button downloads a file called ‘download.hta’, indexed on Atlassian’s Bitbucket platform and hosted on an Amazon server (bbuseruploads.s3.amazonaws.com):

Figure 3: Bitbucket project from user ‘Garik’

Upon execution, that HTA script will run PowerShell and connect to xyxyxyxyxy[.]xyz in order to retrieve a malware payload.

Figure 4: Malicious mshta script retrieves payload from external domain

That payload is a package that contains the NetSupport RAT:

Figure 5: Process tree showing execution flow

Figure 6: Observed HTTP traffic confirming NetSupport RAT infection

Link with “FakeUpdates” aka SocGholish

In late 2018, we documented a malicious redirection campaign that we dubbed FakeUpdates, also known as SocGholish based on a ruleset from EmergingThreats. It leverages compromised websites and performs some of the most creative fingerprinting checks we’ve seen, before delivering its payload (NetSupport RAT).

We recently noticed a tweet that reported SocGholish via the compromised site fistfuloftalent[.]com, although the linked sandbox report shows the same template we described earlier, which is different from the SocGholish one:

Figure 7: New theme erroneously associated with SocGholish

The sandbox flags SocGholish because the compromised site contains artifacts related to it and does, in some circumstances, actually redirect to it:

Figure 8: SocGholish template

This hacked site actually hosts two different campaigns, and based on some browser and network fingerprinting, you might be served one or the other. This can be confirmed by looking at the injected code in two different pieces of JavaScript, the first of which is flagged by the EmergingThreats ruleset.

Figure 9: Comparing two campaigns by looking at the injected JavaScript

Although the templates for SocGholish and the new campaign are different, they both:

  • can occasionally be found on the same compromised host
  • abuse or abused a cloud hosting platform (Bitbucket, Dropbox)
  • download a fake update as ‘download.hta’
  • deliver the NetSupport RAT

Side note: A publicly saved VirusTotal graph (saved screenshot here) shows that the threat actors also used Dropbox at some point to host the NetSupport RAT. They compressed the file twice, first as a zip and then as a rar.

Similarities with SocGholish could simply be due to the threat actor taking inspiration from what has been done before. However, the fact that both templates deliver the same RAT is noteworthy.

Link with EITest

At about the same time as we were reviewing this new redirection chain, we saw this other one identified by @tkanalyst tagged as FontPack that is reminiscent of the HoeflerText social engineering toolkit reported by Proofpoint in early 2017.

Figure 10: New ‘FontPack’ social engineering scheme

Going back to the traffic capture we collected before, we immediately notice the same infrastructure that includes a JavaScript template (template.js) and a panel (.xyz domain):

Figure 11: Web traffic reveals same artifacts used in fake Flash Player theme

A closer look at the template.js file confirms they are practically identical except for a different payload URL and some unique identifiers:

Figure 12: Template.js is the social engineering framework

Domen social engineering kit

The template.js file is a beautiful piece of work that goes beyond fake fonts or Flash Player themes. While we initially detected this redirection snippet under the FontPack label, we decided to call this social engineering framework Domen, based on a string found within the code.

The single JavaScript file controls a variety of templates depending on the browser, operating system, and locale. For instance, the same fake error message is translated into 30 different languages.

Figure 13: Customized templates based on operating system’s language

One particular variable called “banner” sets the type of social engineering theme:

var banner = '2'; // 1 - Browser Update | 2 - Font | 3 - Flash

Figure 14: Customized templates based on operator’s choice

We already documented the Flash Player one, while the Font template (a HoeflerText copycat) and some of its variations (Chrome, Firefox) were also observed. Here's the third one, a browser update:

Browser update

Figure 15: Internet Explorer template
Figure 16: Chrome template
Figure 17: Firefox template
Figure 18: Edge template
Figure 19: Other browsers’ template

There is also a template for mobile devices (which again is translated into 30 languages) that instructs users how to download and run a (presumably malicious) APK:

Figure 20: Instructions on how to install APK files for Android users

Scope and stats

The scope of this campaign remains unclear, but it has been fairly active in the past few weeks. Every time a user visits a compromised site that has been injected with the Domen toolkit, communication takes place with a remote server hosted at asasasqwqq[.]xyz:

Figure 20: Connection to panel seen in template.js script

The page will create a GET request that returns a number:

Figure 21: Network traffic showing number of visits

If we trust those numbers (a subsequent visit increments it by 1), it means this particular campaign has received over 100,000 views in the past few weeks.

Over time, we have seen a number of different social engineering schemes. For the most part, they are served dynamically based on a user’s geolocation and browser/operating system type. This is common, for example, with tech support scam pages (browlocks) where the server will return the appropriate template for each victim.

What makes the Domen toolkit unique is that it offers the same fingerprinting (browser, language) and choice of templates thanks to a client-side (template.js) script which can be tweaked by each threat actor. Additionally, the breadth of possible customizations is quite impressive since it covers a range of browsers, desktop, and mobile in about 30 different languages.

Protection

Malwarebytes users were already protected against this campaign thanks to our anti-exploit protection that thwarts the .hta attack before it can even retrieve its payload.

Note: We shared a traffic capture with the folks at EmergingThreats who created a new set of rules for it.

Indicators of compromise

Domen social engineering kit host

chrom-update[.]online

Malicious .HTA

bitbucket[.]org/execuseme1/1312/downloads/download.hta

NetSupport loader

xyxyxyxyxy[.]xyz/wwwwqwe/11223344.exe
mnmnmnmnmnmn[.]club/qweeewwqe/112233.exe

Panels

asasasqwqq[.]xyz
sygicstyle[.]xyz
drumbaseuk[.]com

NetSupport RAT

9c69a1d81133bc9d87f28856245fbd95bd0853a3cfd92dc3ed485b395e5f1ba0
58585d7b8d0563611664dccf79564ec1028af6abb8867526acaca714e1f8757d
b832dc81727832893d286decf50571cc740e8aead34badfdf1b05183d2127957

The post New social engineering toolkit draws inspiration from previous web campaigns appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Making the case: How to get the board to invest in higher education cybersecurity

Malwarebytes - Wed, 08/28/2019 - 17:31

Security leaders in institutions of higher education face unique challenges, as they are charged with keeping data and the network secure, while also allowing for a culture of openness, sharing, and communication—all cornerstones of the academic community. And depending on the college or university, concerns such as tight budgets and staffing shortages can also make running a successful security program difficult. So how do CISOs get their boards to invest in higher education cybersecurity?

In the second part of our series of posts about CISO communication, we look at the considerations and skills required for presenting to the board on higher education cybersecurity, including which tactics will increase their understanding and financial support.

This month, I asked David Escalante, Director of Computer Policy & Security at Boston College and a veteran information security leader, for his perspective on what it takes to advocate for security in this environment.


What unique challenges do CISOs/security managers working in higher education have that differ from their peers in the public sector?

Many large universities are best thought of as small cities. Frequently, an organization is able to focus on a few products, or a range of products in its given industry space. Because of the diversity of things a university does, the variety of software and hardware required to run everything is huge, and this, in turn, means that security teams are stretched thin across all those systems, versus being able to focus on a smaller number of critical systems.

University environments have a culture of openness, and that can conflict culturally with a least privilege or zero trust security model.

Without getting into detail, risk trade-offs in higher education aren’t as well understood as in many other sectors. And because of the diverse systems alluded to above, balancing those trade-offs is complex.

What do education CISOs need to keep in mind when they communicate with either the board or other governing bodies in their organization?

Boards in education, in non-profits, and for state entities don’t tend to have the same makeup as public company boards do. For a non-profit example, think of the opera company whose board members are the big donors. As a result of this, we’ve noted that the “standard” templates for cybersecurity communication with the board tend not to strike the right notes, since they’re pitched for a public company board made up largely of senior corporate officers. So don’t just go “grab a template.” 

The trend we’ve seen, advice-wise, of “tell the board stories” seems to resonate better than, say, a color-coded risk register. The scope of the systems running at a big university that need to be secured, plus the board’s limited detailed knowledge, makes substantive conversations about specific security approaches difficult. It’s better to highlight things both good and bad than to try to be comprehensive.

It’s very hard to balance being technical or not. Use a mix. On the one hand, board members have probably read about ransomware bringing organizations to their knees, and may even have read up on ransomware to prep for the board meeting, and will expect some technical material on the subject. On the other hand, almost all board members will not be technical, so overdoing the technical component will lose them.

Don’t directly contradict your own management chain—if you’ve asked for more staff and haven’t gotten it, don’t ask the board for it.

What other advice would you give higher ed CISOs when it comes to communication?

On the non-board management side, if you aren’t already, it’s time to emphasize that security is everyone’s responsibility. The days when you could “set and forget” antivirus and be secure are long gone. 

Now social engineering and credential theft are rampant, and management is consuming information on personal mobile devices. Non-IT management needs to be clear that securing campuses is a team effort, not just an IT one. 

At BC, we have been having the CIO, versus the security team, communicate personally with senior management a couple times a year on specific cyberattacks we’ve seen to emphasize that they need to be vigilant partners, and not to assume that IT will catch all threats in advance.

The post Making the case: How to get the board to invest in higher education cybersecurity appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Study explores clickjacking problem across top Alexa-ranked websites

Malwarebytes - Tue, 08/27/2019 - 17:36

Clickjacking has been around for a long time, working hand-in-hand with the unwitting person doing the clicking to send them to parts unknown—often at the expense of site owners. Scammers achieve this by hiding the page object the victim thinks they’re clicking on under a layer (or layers) of obfuscation. Invisible page elements like buttons, translucent boxes, invisible frames, and more are some of the ways this attack can take place.

Despite being an old tool, clickjacking is becoming a worsening problem on the web. Let’s explore how clickjacking works, recent research on clickjacking, including the results of a study examining the top 250,000 Alexa-ranked websites, and other ways in which researchers and site owners are trying to better protect users from this type of attack.

Laying the groundwork

There are many targets of clickjacking: cursors, cookies, files, even your social media likes. Traditionally, an awful lot of clickjacking relates to adverts and fraudulent money making. In the early days of online ad programs, certain keywords that brought a big cash return for clicks became popular targets for scammers. Where they couldn’t get people to unintentionally click on an ad, they’d try to automate the process instead.

Here’s an example from 2016, playing on the seemingly never-ending European cookie law messages on every website ever. Pop a legitimate ad, make it invisible, and overlay it across a Cookie pop-up. At that point, it’s unintentional advert time.

This is not to say clickjacking techniques are stagnant; here’s a good example of how these attacks are tough to deal with.

Clickjacking: back in fashion

There’s a lot of clickjack-related activity taking place at the moment, so researchers are publishing their works and helping others take steps to secure browsers.

One of those research pieces is called All your clicks belong to me: investigating click interception on the web, focusing on JavaScript-centric URL access. I was hoping the recording of the talk from USENIX Security Symposium would be available to link in this blog, but it’s not currently online yet—when it is, I’ll add it. The talk is all about building a way to observe possible clickjack activity on some of the most popular websites in the world and reporting back with the findings.

Researchers from a wide variety of locations and organisations pooled resources and came up with something called “Observer,” a customised version of Chromium, the open-source browser. With it, they can essentially see under the hood of web activity and tell at a glance the point of origin of URLs from every link.

As per the research paper, Observer focuses on three actions JavaScript code may perform in order to intercept a click:

  • Modifying existing links on a page
  • Creating new links on a page
  • Registering event handlers to HTML elements to “hook” a click
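The first two checks in that list lend themselves to a simple mental model: snapshot each link's destination at load time, then flag any link whose href a script later rewrites (or that didn't exist at all when the page loaded). Here's a toy version of that bookkeeping, with plain objects standing in for DOM anchor elements; this is our illustration, not Observer's actual code:

```javascript
// Toy model of Observer-style link auditing: record hrefs at load time,
// then report any link whose destination a script has since rewritten.
// Plain objects ({id, href}) stand in for DOM anchor elements.
function snapshotLinks(links) {
  return new Map(links.map((link) => [link.id, link.href]));
}

// Links created after the snapshot also show up here, since they have
// no recorded href to match against.
function findRewrittenLinks(snapshot, links) {
  return links.filter((link) => snapshot.get(link.id) !== link.href);
}
```

In a real browser the snapshot would be taken once the page finishes rendering, and the comparison would run at click time to decide whether a navigation was intercepted.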

All such events are identified and tagged with a unique ID for whichever script kicked the process into life, alongside logging page navigation to accurately record where an intercepted click is trying to direct the victim.

Observer logs two states of each webpage tested: the page fully rendered up to a 45-second time limit, and then interaction data, where they essentially see what a site does when used. It also checks if user clicks update the original elements in any way.

Some of the specific techniques Observer looks for:

  • Visual deception tricks from third parties, whether considered to be malicious or accidental. This is broken down further into page elements which look as though they’re from the site, but are simply mimicking the content. A bogus navigation bar on a homepage is a good example of this. They also dig into the incredibly common technique of transparent overlays, a perennial favourite of clickjackers the world over.
  • Hyperlink interception. Third party scripts can overwrite the href attribute of an original website link and perform a clickjack. They detect this, as well as keeping an eye out for dubious third-party scripts performing this action on legitimate third-party scripts located on the website. Observer also checks for another common trick: large clickable elements on a page, where any interaction with the enclosed element is entirely under its control.
  • Event handler interception. Everything you do on a device is an event. Event handlers are routines which exist to deal with those events. As you can imagine, this is a great inroad for scammers to perform some clickjacking shenanigans. Observer looks for specific API calls and a few other things to determine if clickjacking is taking place. As with the large clickable element trick up above, it checks for large elements from third parties.
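A transparent overlay has a recognizable shape: a nearly invisible element stacked above everything else, covering a large share of the page. A rough heuristic for spotting one might look like this, with plain objects standing in for computed styles and geometry (an assumption for illustration; a real implementation would read getComputedStyle and element bounds from the live DOM):

```javascript
// Rough heuristic for a clickjacking overlay: nearly transparent, stacked
// above other content, and covering a large share of the viewport.
// `el` is a plain object standing in for an element's style and geometry.
function looksLikeOverlay(el, viewportArea) {
  const area = el.width * el.height;
  return el.opacity < 0.1 && el.zIndex >= 1000 && area / viewportArea > 0.5;
}
```

The exact thresholds are arbitrary; the point is that each property on its own is innocent, and it's the combination that gives the technique away.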

Study results

Observer crawled the Alexa top 250,000 websites from May 2018, ending up with valid data from 91.45 percent of the sites checked, after accounting for timeouts and similar errors. From 228,614 websites, they collected 2,065,977 unique third-party navigation URLs corresponding to 427,659 unique domains, for an average of 9.04 third-party navigation URLs pointing to 1.87 domains per site.

Checking for the three main types of attacks listed above, they found no fewer than 437 third-party scripts intercepting user clicks on 613 websites. Collectively, those sites receive about 43 million visitors daily. Additionally, a good slice of the sites were deliberately working with dubious scripts to monetize the stolen clicks: 36 percent of interception URLs were related to online advertising.

The full paper is a fascinating read, and well worth digging through [PDF].

Plans for the future

Researchers point out that there’s room for improvement in their analysis—this is more of a “getting to know you” affair than a total deep dive. For example, with so many sites to look at, they only analyzed each site’s main page. If there were nasties lurking on subpages, they wouldn’t have seen them.

They also point out that their scripted website interaction quite likely isn’t how actual flesh-and-blood people would use the websites. All the same, this is a phenomenal piece of work and a great building block for further studies.

What else is happening in clickjacking?

Outside of conference talks and research papers, there’s also word that a three-year-old suggestion for combating iFrame clickjacking has been revived and expanded for Chrome. Elsewhere, Facebook is suing app developers for click injection fraud.

As you can see from a casual check of Google/Yahoo news, clickjacking isn’t a topic perhaps covered as often as it should be. Nevertheless, it’s still a huge problem, generates massive profits for people up to no good, and deserves to hog some of the spotlight occasionally.

How can I avoid clickjacking?

This is an interesting one to ponder, as this isn’t just an end-user thing. Website owners need to do their bit, too, to ensure visitors are safe and sound on their travels [1], [2], [3]. As for the people sitting behind their keyboards, the advice is pretty similar to other security precautions.

Given how much of clickjacking is based around bogus advertising cash, consider what level of exposure to ads you’re comfortable with. Deploying a combination of adblockers and script control extensions, especially where JavaScript is concerned, will work wonders.

Those plugins could easily break the functionality of certain websites, though, and that’s before we stop to consider that many sites won’t even give you access if ads are blocked entirely. It comes down, as it often does, to ad networks waging war on your desktop. How well you fare against the potential risks of clickjacking could well depend on where exactly you plant your flag with regard to advertiser access to your system.

Whatever you choose, we wish you safe surfing and a distinct lack of clickjacking. With any luck, we’ll see more research and solutions proposed to combat this problem in the near future.

The post Study explores clickjacking problem across top Alexa-ranked websites appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Nextdoor neighborhood app sends letters on its users’ behalf

Malwarebytes - Tue, 08/27/2019 - 16:35

Dutch police departments and consumer organizations issued warnings about the use of the Nextdoor neighborhood app after people received letters (yes, as in snail mail) purporting to come from someone in their neighborhood, which the alleged senders never actually sent. So everyone figured there must be some kind of scam going on and decided to warn the public.

Nextdoor is an app that you can use to stay informed about what’s going on in your neighborhood. It can be used to find last-minute babysitters, share safety tips, or simply communicate with neighbors. The app ties people together based on their location, so in this way, it is different from many other apps where people can form their own groups.

We talked to a woman whom we’ll refer to as W.H., as she wishes to remain anonymous. Letters in her neighborhood were delivered with her as the sender. The letters were asking the receivers to install the app and join the community. W.H. did not send those letters, but she was a user of the Nextdoor app. And she remembered receiving an email from Nextdoor asking whether she would like to invite the people in her neighborhood.

“Hi W.

Invite your neighbors to help grow your Nextdoor neighborhood. This are [sic] 100 extra invitations to send to your neighbors!

Click the button below and we will automatically and completely free of charge send 100 personalized invitations to your closest neighbors by mail.

The invitation will have your name and street on them and contain information about Nextdoor.

Kind regards,

Michel on behalf of Nextdoor Netherlands”

W.H. clicked the button expecting to get the option to select a number of her neighbors that she wanted to invite, but all she got was a notice that the link had expired. She didn’t think about it again until one of her neighbors showed her the letter they received and informed her about the warnings that had already started to circulate by then.

This is an example of the letter that was sent out in her name.

“Howdy neighbor,

Our neighborhood uses the free and invitation-only neighborhood app Nextdoor. It is our hope that you will join as well. In this neighborhood app we share local tips and recommendations……

It is 100% free and invitation only – for our neighbors only.

Download the Nextdoor app ….. and enter this invitation code to sign up for our neighborhood.

(this code expires in 7 days)

Your neighbor from [redacted]”

In a blog where Nextdoor explains (in Dutch) how this invitation model came to be, they point out that when you first register with the app, it also asks for your permission to send out invitations to your neighbors. This may indicate that there are members who didn’t even get the email W.H. received to ask whether she wanted to invite 100 extra neighbors. So to these users, a query from their neighbors about the letter may come as an even bigger surprise.   

Privacy policy

One effect of the commotion about the letters is that the Nextdoor privacy policy was put under the microscope by consumer organizations. The Dutch “Consumentenbond” finds the policy leaves too much room for privacy infringements and expects a tough battle in court for all those who feel let down or even betrayed by the company. W.H. let us know she finds the app useful and will continue to use it.

To be fair, we should expect an amount of targeted advertising when we sign up for free apps like these. It is important to remember that when it comes to free apps, there is a good possibility that you and your personal data are the commodities.

Not a scam, but…

Neighborhood apps are becoming more popular because people want to be more involved with their communities, and because they provide a feeling of enhanced security.

Although the method Nextdoor used to reach new customers is questionable, we can’t deny that the company did inform its customers and ask for their permission. However, sending out snail mail in someone’s name is unorthodox and should therefore have been communicated much more clearly. The method has backfired for Nextdoor, drawing negative media attention, and may have scared away more customers than it gained.

From the reaction on their own blog, where they explain the how and why behind this method, we learned that Nextdoor intends to keep mailing out letters on its users’ behalf, which is another reason we felt we should raise awareness about this matter.

Like many other apps of this kind, Nextdoor gathers information about their customers and uses it for targeted marketing. Given the type of data—community information, locations, names—this is extremely valuable for marketing purposes, but could also be a security issue.

Sharing your information with people who live in your neighborhood but whom you don’t really know very well can have its drawbacks, too. We advise against asking everyone to keep an eye out while you share your vacation plans: you may also be informing the resident burglar.

Stay vigilant, everyone!

The post Nextdoor neighborhood app sends letters on its users’ behalf appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Mobile Menace Monday: Android Trojan raises xHelper

Malwarebytes - Mon, 08/26/2019 - 19:04

Back in May, we classified what we believed was just another generic Android/Trojan.Dropper, and moved on. We didn’t give this particular mobile malware much thought until months later, when we started noticing it had climbed onto our top 10 list of most detected mobile malware.

We feel a piece of mobile malware with such a high number of detections warrants a proper name and classification. Therefore, we now call it Android/Trojan.Dropper.xHelper. Furthermore, this prominent piece of malware deserves a closer look. Let’s discuss the finer points of this not-so-helpful xHelper.

Package name stealer

The first noticeable characteristic of xHelper is its use of stolen package names. It isn’t unusual for mobile malware to use the same package name as other legitimate apps. After all, the definition of a Trojan, as it relates to mobile malware, is an app pretending to be legitimate. However, the package names this Trojan has chosen are unusual.

To demonstrate, xHelper uses package names starting with “com.muf.” This package name is associated with a number of puzzle games found on Google Play, including a puzzle called New2048HD with the package name com.mufc.fireuvw. This simple game only had a little more than 10 installs at the time of this writing. Why this mobile malware is ripping off package names from such low-profile Android apps is a puzzle in itself. In contrast, most mobile Trojans rip off highly-popular package names.

Full-stealth vs semi-stealth

xHelper comes in two variants: full-stealth and semi-stealth.  The semi-stealth version is a bit more intriguing, so we’ll start with this one. On install, the behavior is as follows:

  1. Creates an icon in notifications titled “xhelper”
  2. Does not create an app icon or a shortcut icon
  3. After a couple of minutes, starts adding more icons to notifications: [GameCenter] Free Game
    1. Press on either of these notifications, and it directs you to a website that allows you to play games directly via browser.
    2. These websites seem harmless, but surely the malware authors are collecting pay-for-click profit on each redirect.

The full-stealth version also avoids creating an app icon or shortcut icon, but it hides almost all traces of its existence otherwise. There are no icons created in notifications. The only evidence of its presence is a simple xhelper listing in the app info section.

Digging deeper into the dropper

Mobile Trojan droppers typically contain an APK within the original app that is dropped, or installed, onto the mobile device. The most common place these additional APKs are stored is within the Assets Directory.

In this case, xHelper does not use an APK file stored in the Assets Directory. Instead, it carries a file that claims to be a JAR file, usually named xhelperdata.jar or firehelper.jar. However, try to open this JAR file in a Java decompiler/viewer, and you will receive an error.

The error has two causes. First, it’s actually a DEX file, a format that holds compiled Android code unreadable to humans; to read it, you would need to convert the DEX file to a JAR. Second, the file is encrypted.
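A quick triage step an analyst might take here (our own illustration, not part of xHelper) is to check the file’s magic bytes: a genuine JAR is a ZIP archive beginning with “PK”, while a DEX file begins with the ASCII bytes “dex” followed by a newline. An encrypted payload like xHelper’s matches neither until it has been decrypted.

```python
def identify_android_payload(first_bytes: bytes) -> str:
    """Rough file-type triage by magic bytes, as an analyst might do
    when a dropper's 'JAR' refuses to open in a Java decompiler."""
    if first_bytes.startswith(b"PK\x03\x04"):
        return "zip/jar"  # a genuine JAR is just a ZIP archive
    if first_bytes.startswith(b"dex\n"):
        return "dex"      # Dalvik executable, despite the .jar filename
    return "unknown (possibly encrypted)"

print(identify_android_payload(b"dex\n035\x00"))  # dex
print(identify_android_payload(b"\x8f\x11\xa2"))  # unknown (possibly encrypted)
```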

Considering that the hidden file is encrypted, we assume that the first step xHelper takes upon opening is decryption. After decryption, it uses an Android tool known as dex2oat, which takes an APK or DEX file and generates compilation artifact files that the runtime loads. In other words, it loads and runs the hidden DEX file on the mobile device. This is a clever workaround to simply installing another APK, and it obfuscates the malware’s true intentions.

What’s in a DEX

Every variant of xHelper uses this same method of disguising an encrypted file and loading it at runtime onto a mobile device. In order to further analyze xHelper, we needed to grab a decrypted version of the file caught during runtime. In this case, we were able to do so by running xHelper on a mobile device. Once it finished loading, it was easy to export the DEX file from storage. 

However, even the decrypted version is an obfuscated tough nut to crack. In addition, each variant has slightly different code, making it difficult to pinpoint exactly what the objective of the mobile malware is.

Nevertheless, it’s my belief that its main function is to allow remote commands to be sent to the mobile device, aligning with its behavior of hiding in the background like a backdoor. Regardless of its true intentions, the clever attempt to obfuscate its dropper behavior is enough to classify this as a nasty threat.

High probability of infection

With xHelper being on our top 10 most detected list, there is a good chance Android users might come across it. Since we added the detection in mid-May 2019, it has been removed from nearly 33,000 mobile devices running Malwarebytes for Android. That number continues to rise by the hundreds daily.

The big question: What is the source of infection that is making this Trojan so prominent? Obviously this amount of traffic wouldn’t come from carelessly installing third-party apps alone. Further analysis shows that xHelper is being hosted on IP addresses in the United States, one in New York City and another in Dallas, Texas. Therefore, it’s safe to say this is an attack targeting the United States.

We can, for the most part, conclude that this mobile infection is being spread by web redirects, perhaps via the game websites mentioned above, which are hosted in the US as well, or other shady websites.

If confirmed to be true, our theory highlights the need to be cautious of the mobile websites you visit. Also, if your web browser redirects you to another site, be extra cautious about clicking anything. In most cases, simply backing out of the website using the Android back key will keep you safe.

Stay safe out there!

The post Mobile Menace Monday: Android Trojan raises xHelper appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (August 19 – 25)

Malwarebytes - Mon, 08/26/2019 - 15:38

Last week on Malwarebytes Labs, we reported on the presence of Magecart on a type of poker software; outlined how the Key Negotiation of Bluetooth (KNOB) attack works; followed the money on a Bitcoin sextortion campaign; looked back at DEF CON 27; and reported on continuing ransomware attacks on several US cities.

Other cybersecurity news
  • After turning away two vulnerability reports brought about by the same independent security researcher, Valve Corporation, the company behind the Steam video gaming platform admitted its mistake and updated its policies. (Source: Ars Technica)
  • The Security Service of Ukraine (SBU) arrested power plant operators after finding cryptominers in Ukraine’s Yuzhnoukrainsk nuclear power plant, which compromised its security. (Source: Coin Telegraph)
  • A couple of spyware apps built based on an open-sourced espionage tool called AhMyth were found in the Google Play Store. The company has since removed these apps. (Source: ESET’s WeLiveSecurity Blog)
  • Google is the latest company to join Twitter and Facebook to clean up their backyard of hundreds of YouTube channels spreading misinformation about protests in Hong Kong. (Source: CNBC)
  • According to a report, Facebook phishing attacks surged in Q2 of this year, and Microsoft remained the most phished brand for five consecutive quarters. (Source: Help Net Security)
  • NordVPN, a popular VPN service, was found to be one of the many brands cloned by cybercriminals in a malware campaign to spread the Bolik banking Trojan. (Source: HackRead)
  • State-sponsored espionage teams from China, Russia, and Vietnam are now targeting medical research, report says. (Source: Dark Reading)
  • Syrk ransomware was found masquerading as an “aimbot” targeting Fortnite players. (Source: Cyren Blog)
  • A fresh Facebook hoax about making private content public flooded the social platform. (Source: Sophos’s Naked Security Blog)
  • In the same vein, an old Instagram hoax resurfaced and fooled several celebrities and politicians. (Source: WIRED)

Stay safe!

The post A week in security (August 19 – 25) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Ransomware continues assault against cities and businesses

Malwarebytes - Fri, 08/23/2019 - 15:00

Ransomware continues to make waves in the US, forcing multiple cities and organizations into tough choices. Pressed for cash and time, local government organizations are left with few options: Either pay the ransom as soon as possible and encourage criminals to continue bringing essential services to their knees, or refuse and be left with a massive cleanup bill.

When a $50,000 ransom becomes millions of dollars in cleanup, forensics, external tech assistance, and more, sadly more and more organizations are throwing up their hands and paying the ransom.

Doing so almost certainly encourages the same or similar threat actor groups to come back around at a later date for their next dose of extortion money. So what should these cities do?

We take a look at the most recent attacks, how US and international cities have handled them, and our advice for dealing with the aftermath.

A cone of silence: Texas

Twenty-three (23) local government organizations in Texas were recently hit by a coordinated attack likely from a single threat actor. Unlike some previous assaults on city infrastructure where information was released quickly, here officials are keeping their cards close to their chest. No word yet as to which networks, devices, or other technological infrastructure were affected, which family of ransomware was behind the attack, how defenses were penetrated, or if a ransom was paid.

According to WIRED, response teams from “TDIR, the Texas Division of Emergency Management, Texas Military Department, Department of Public Safety, and the Texas A&M University System’s Security Operations Center/Critical Incident Response Team SOC/CIRT” are all working to bring systems back online. This may suggest they held out on paying the ransom, and either the scam pages were taken down (meaning no ransom could be paid), or they missed a deadline and all systems were permanently locked out.

Either way, it could be that Texas is trying a new tactic: regardless of outcome, prevent the endgame of the attack from gaining oxygen. Simply hearing that someone paid or held off and had their network crushed makes it a lot easier for future potential attackers to figure out what worked, what didn’t, who paid up, and who is more likely to give nothing in return.

We’ll likely hear more eventually and at least find out which files were used in the attack, but it will be interesting to see whether this tactic pays off for at-risk organizations or simply digs them a deeper hole.

Paying up: Florida

Florida has been hit particularly hard by ransomware attacks, and in just one month no less than three Florida municipal governments have been dumped on by the triple threat of Emotet, TrickBot, and Ryuk ransomware. Sadly, all three cases were triggered by the age-old trick of a booby-trapped attachment sent via email. Lake City was for all intents and purposes knocked out of digital commission, having to revert to pen and paper in place of locked-out computer systems. Emergency services remained untouched, but everywhere else—from email and land lines to credit card payments and city departments—chaos reigned.

Eventually, they ended up paying some US$460,000 in Bitcoin to the ransomware authors to release compromised systems. Riviera Beach, struck by a similar attack, ended up paying a cool US$600,000 to fix their hijack. These are incredible amounts of money to send to attackers who may simply have lucked out getting their infection files on the networks of big fish targets, but a drop in the ocean compared to the clean up costs—and that’s why cybercriminals keep getting away with it.

Some of these payments are covered by insurers, with many offering ransom protection as part of their services. As many have noted, paying the ransom is bad enough in that it essentially encourages attackers to keep going. Turning payments into an accepted cost of doing business removes much of the threat from organizations and probably means many simply won’t bother to spend on upgrading their network protection. After all, if the insurance companies are going to pay, then why bother?

However, complacency from organizations will only result in bigger and bigger fines from emboldened cybercriminals, who will most certainly capitalize on the opportunity to squeeze more money out of cities and companies. Eventually, insurance companies will drop organizations or require excessive monthly payments if the attacks keep happening.

Anthony Dagostino, global head of cyber risk at Willis Towers Watson, told Insurance Journal magazine, “We’re already getting word that some insurance companies are not providing the coverage or are adding to the deductibles.”

A state of emergency: Louisiana

Regardless of who pays and who doesn’t, make no mistake: People are taking these attacks seriously. We’re at the point where governors are declaring a state of emergency when these assaults on crucial infrastructure take place. After attacks on multiple school districts, Louisiana Governor John Bel Edwards called it in. Prior to that, Colorado gained some level of cybersecurity fame by issuing the first-ever state of emergency executive order for a computer-centric attack.

A global threat: Johannesburg

The US may be grappling with the lion’s share of ransomware attacks, but let’s not forget this is a truly worldwide problem. In July, Johannesburg in South Africa found itself unable to respond to power failures after a successful ransomware attack. It potentially affected up to a quarter million people, preventing customers from buying electricity, causing issues with electrical supplies, and even stopping energy firms from dealing with localized blackouts.

Businesses under ransomware threat

Unlike the hugely popular band Radiohead, who could choose to give away their ransomed music instead of succumbing to extortion attempts, organizations faced with a ransomware attack have no similar alternative. The only options are to pay up or deal with the mess left behind. And as attacks ramp up, if organizations don’t look at preventative action, they may be forced to choose between bad and worse.

It’s not just hospitals under attack from ransomware: all manner of healthcare can be impacted. No fewer than 400 dental offices were recently brought to their knees by what is claimed to be Sodinokibi. With the attack in full swing, payments, patient charts, and the ability to perform x-rays were all unavailable. Around 100 practices were able to get back online, and there’s some debate as to whether some organizations actually paid the ransom. Given that emergency patients in severe pain would need an x-ray to proceed with treatment, this is quite a nasty attack to contemplate.

As our most recent quarterly report highlights,

Over the last year, we’ve witnessed an almost constant increase in business detections of ransomware, rising a shocking 365 percent from Q2 2018 to Q2 2019.

That’s quite a bump. Some other key findings:

  • Ransomware families such as Ryuk and RobinHood are mostly to blame for targeted attacks, though SamSam and Dharma also made appearances. 
  • The ransomware families causing the most trouble for businesses this quarter were Ryuk and Phobos, which increased by an astonishing 88 percent and 940 percent over Q1 2019, respectively. GandCrab and Rapid business detections both increased year over year, with Rapid gaining on Q2 2018 by 319 percent.
  • Where leading ransomware countries are concerned, the United States took home the gold with 53 percent of all detections from June 2018 through June 2019. Canada came in a distant second with 10 percent, and the United Kingdom and Brazil followed closely behind, at 9 percent and 7 percent, respectively.
  • Texas, California, and New York were the top three states infected with ransomware, ganged up on with a combination of GandCrab, Ryuk, and Rapid, which made up more than half of the detections in these states. Interestingly, the states with the most ransomware detections were not always the most populous. North Carolina and Georgia rounded out our top five ransomware states, but they are not as heavily-populated as Florida or Pennsylvania, neither of which made our list.
Where to go from here

The pressure is most definitely on. Businesses and local governments must ensure they not only have a recovery plan for ransomware attacks, but a solid line of layered defense, complete with a smattering of employee training in the bargain. When so many attacks begin with a simple email attachment, it’s frustrating to think how many major incidents could’ve been avoided by showing employees how to recognize phishing attempts or other malicious emails.

Of course, securing the line of defense and taking preventative action is just one part. The growing willingness to pay the ransom and on some fundamental level encourage threat actors to do it all over again is not helping. However, with the ever-present threat of budget cuts and a lack of funding/security resources in general, it’s difficult to pass judgment.

More and more government officials will need to make their case to the board on why cybersecurity is an important business investment. And the board will need to listen. Otherwise, ransomware authors will continue to dine like kings. 

Think you may have been hit with Ryuk ransomware? Download our Ryuk Emergency Kit to learn how to remediate the infection.

The post Ransomware continues assault against cities and businesses appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The lucrative business of Bitcoin sextortion scams (updated)

Malwarebytes - Thu, 08/22/2019 - 15:00

Update (2019-09-04): A new wave of sextortion emails purporting to have originated from a group of hackers called ChaosCC—a play on the legitimate European white hat hacking community, Chaos Computer Club (CCC)—has recently caught the attention of the security world. Below is a sample email we captured in our honeypot.

Reports against the Bitcoin address, 1KE1EqyKLPzLWQ3BhRz2g1MHh5nws2TRk, started pouring in to Bitcoin abuse sites in late August. As of this writing, this address has received roughly US$2,500 worth of Bitcoin.

After a quiet period following a surge in late 2018 to early 2019, the online blackmail schemes known as sextortion scams are back on the radar and on the uptick.

According to a report from Digital Shadows, a leading UK-based cybersecurity company that monitors potential threats against businesses, there are several resources available to embolden novice criminals to a life of extortion. These resources include: access to credentials leaked from past breaches, tools and technologies that aid in creating campaigns, training from online extortionists, and a trove of DIY extortion guides that exist on the dark web.

The report also finds that these fledgling extortionists and accomplices are incentivized with high salaries if they are able to hook high-earning targets, such as doctors, lawyers, or company executives—information that can be gleaned by scouring LinkedIn profiles or other social media accounts.

With a number of creative ways to wring money out of Internet users, the high potential of a hefty payout, and many helping hands from professional criminals, we shouldn’t expect online sextortion scams to stop (permanently) any time soon. Just ask the Nigerian Prince how well his retirement is going.

To look at what motivates threat actors to adopt sextortion scams as part of their criminal repertoire, we did what all good detectives do when trying to break open a case: We followed the money. Find out what we discovered on the trail begun by a single sextortion campaign.

The spam

We were able to determine several Bitcoin sextortion schemes being implemented in the wild, but for this post, we looked at its most common distribution form: email spam.

The sextortion email, with its message embedded as an image file—a common tactic to avoid spam filters.

The full text of this email reads:

Hi, this account is now hacked! Change your password right now!
You do not heard about me and you may not be most likely surprised for what reason you are reading this letter, is it right?
I’mhacker who crackedyour email boxand devicesnot so long ago.
You should not make an attempt to contact me or try to find me, it’s hopeless, since I sent you this message using YOUR account that I’ve hacked.
I set up special program on the adult videos (porn) site and guess that you enjoyed this site to have a good time (you understand what I mean).
When you have been taking a look at vids, your internet browser started out to act like a RDP (Remote Control) that have a keylogger that granted me ability to access your desktop and web camera.
Consequently, my softwareobtainedall info.
You have typed passwords on the online resources you visited, I already caught them.
Of course, you could possibly change them, or perhaps already changed them.
But it doesn’t matter, my malware updates needed data every time.
What actually I have done?
I generated a backup of your every system. Of all files and personal contacts.
I formed a dual-screen video recording. The first screen demonstrates the film you were watching (you’ve got a very good taste, haha…), and the 2nd screen shows the movie from your webcam.
What should you do?
Great, in my opinion, 1000 USD will be a reasonable price for your little secret. You’ll make your deposit by bitcoins (if you don’t recognize this, search “how to purchase bitcoin” in Google).
My bitcoin wallet address:
163qcNngcPxk7njkBGU3GGtxdhi74ycqzk
(It is cAsE sensitive, so just copy and paste it).
Warning:
You have 2 days to perform the payment. (I have an exclusive pixel to this e-mail, and right now I understand that you’ve read this email).
To monitorthe reading of a letterhead the actionsin it, I installeda Facebook pixel. Thanks to them. (Anything thatis usedfor the authorities may helpus.)
 
In the event I do not get bitcoins, I shall undoubtedly offer your video files to each of your contacts, including relatives, colleagues, etcetera?

There are many variations of this spam content, but they all follow a similar template: We’ve hacked your account, we have video proof of you visiting porn sites and watching sexual content, and we now demand payment or we’ll release the video of you to the public. In fact, Cisco’s Talos Security Intelligence & Research Group was able to retrieve an email spam template, which they said the extortionists mistakenly sent out to their targets.

The Electronic Frontier Foundation (EFF) also keeps an updated record of variants of Bitcoin sextortion messages that you can look up in this blog post.

As we followed the money in this investigation, the only relevant piece of information we needed from the sextortion email was the Bitcoin address, which in this case is 163qcNngcPxk7njkBGU3GGtxdhi74ycqzk. This is our starting point.
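Extracting that address from a batch of spam samples is easy to automate. The helper below is our own illustrative sketch, using a deliberately loose base58 pattern; a production workflow would also verify each candidate’s Base58Check checksum before trusting it.

```python
import re

# Loose pattern for legacy (P2PKH/P2SH) Bitcoin addresses: starts with 1 or 3,
# base58 alphabet (no 0, O, I, or l). Illustrative only: it can match some
# non-address strings, so real pipelines should checksum-verify candidates.
BTC_ADDR = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

def extract_btc_addresses(text: str) -> list:
    """Pull candidate Bitcoin addresses out of raw email text."""
    return BTC_ADDR.findall(text)

email_body = ("My bitcoin wallet address:\n"
              "163qcNngcPxk7njkBGU3GGtxdhi74ycqzk\n"
              "(It is cAsE sensitive, so just copy and paste it).")
print(extract_btc_addresses(email_body))
# ['163qcNngcPxk7njkBGU3GGtxdhi74ycqzk']
```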

The investigation

To better understand the next steps in our investigation, readers should first grasp the basics of how cryptocurrency and the blockchain work.

Paper money and coins are to the real, material world as digital currency is to the online, electronic world.

Bitcoin is one of thousands of digital currencies available online to date. Specifically, it is a virtual currency—because it is controlled by its creators and used and embraced by a virtual community—and at the same time a cryptocurrency—because it uses strong encryption algorithms and cryptographic schemes to ensure it is resistant to forgery and cryptanalysis.

The blockchain, as the name suggests, is a collection of data blocks that are linked together to form a chain. This system, commonly likened to a ledger, is used by several cryptocurrencies—Bitcoin is one of them. Each block in a chain contains information on multiple transactions. And each transaction has a transaction ID, or TXID. Because of the way cryptocurrency wallets and sites record Bitcoin inputs to addresses, a single TXID may contain multiple entries in its record.

While real-world ledgers are private and exclusive only to organizations and individuals that keep financial records, the blockchain Bitcoin operates in is not. This makes it easy for anyone, including security researchers, to look up cryptocurrency transactions online using publicly available tools, such as a block explorer.

In a Bitcoin block, transaction information includes the sender and receiver—all identified by Bitcoin addresses—and the amount paid in Bitcoin.

Keep these concepts in mind as we go back to the sextortion campaign at hand and navigate the trenches of Bitcoin transactions.

Going with the (Bitcoin through the blockchain) flow

The Bitcoin address in our sextortion email, 163qcNngcPxk7njkBGU3GGtxdhi74ycqzk, actually has a small transaction history.

The brief transaction history of 163qcNngcPxk7njkBGU3GGtxdhi74ycqzk

However, we were able to take a closer look at these transactions and uncover additional addresses, giving us further insight into this particular campaign.

According to TXID 94c86a55bb3081312d6020e67202e8c93a43d897f4a289cc655c0e9e6d9e31b4, the balance of 0.25924622 BTC was sent to another Bitcoin address, 3HXdb3HAw1wVzU9b7ZSigvGaStd8KoZ3zJ on March 13, 2019. During that time, this BTC value was worth approximately US$1,000, which is the amount demanded in the ransom email.

This TXID also contains 23 additional inputs from other Bitcoin addresses, which are likely also under the control of the same actor(s) behind the sextortion campaign, to 3HXdb3HAw1wVzU9b7ZSigvGaStd8KoZ3zJ. Naturally, all BTC values from these inputs were combined, totaling 4.16039634 BTC (approximately US$16,100 at time of investigation).
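The consolidation arithmetic above is easy to reproduce. The sketch below sums the inputs of a transaction the way a block explorer totals the BTC flowing into one address; the input list is hypothetical (one real-looking value plus 23 made-up ones roughly matching the ransom amount), not data pulled from the actual blockchain.

```python
# Sketch: totaling the inputs of a Bitcoin transaction. Block explorers
# report input values in satoshis (1 BTC = 100,000,000 satoshis).

SATOSHIS_PER_BTC = 100_000_000

def total_input_btc(inputs_satoshis):
    """Sum a list of input values given in satoshis and return BTC."""
    return sum(inputs_satoshis) / SATOSHIS_PER_BTC

# Hypothetical example: 24 inputs, most close to the ~US$1,000 ransom.
inputs = [25_924_622] + [17_000_000] * 23
print(f"Consolidated: {total_input_btc(inputs):.8f} BTC")
```

In the real transaction, each of the 24 inputs would correspond to a payment made by a separate victim before the funds were swept into the consolidation address.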

The 24 total inputs from other Bitcoin addresses in one TXID. Notice that most input values are similar to the ransom amount actors extorted from targets.

Looking closely at 3HXdb3HAw1wVzU9b7ZSigvGaStd8KoZ3zJ, we found it has 11 other transactions that follow a pattern similar to the transaction we just reviewed.

3HXdb3HAw1wVzU9b7ZSigvGaStd8KoZ3zJ and its transaction history. Highlighted here is the aforementioned TXID 94c86a55bb3081312d6020e67202e8c93a43d897f4a289cc655c0e9e6d9e31b4.

We can confirm that each of these transactions contains extorted funds. Take, for example, TXID b8ae16d604947f67d2b27774e6cfa7afcdb7ede651bdd539b5a5dc555be302aa:

Inputs within TXID b8ae16d604947f67d2b27774e6cfa7afcdb7ede651bdd539b5a5dc555be302aa

All Bitcoin addresses in this TXID have been reported as associated with criminal activity on Bitcoin-Spam, a public database of crypto-addresses used by hackers and criminals. Here are links to their respective scam reports and the amount of money they received based on the Bitcoin price as of this writing:

Further analysis past the consolidation address becomes difficult as the thieves begin a laundering process to hide their illicit gains by splitting and mixing the stolen funds.

This particular scam campaign appears to have been most active between February 1 and March 13, 2019, collecting a total of 21.6847451 BTC, which is a little over US$220,000 at current exchange rates.

Money, money, money

When it comes to email sextortion scams, suffice it to say, business is unfortunately incredibly good. While the simplicity and profitability of the scam may serve as an invitation for would-be criminals, the more users become aware of the scheme, the less we’ll be lining the bad guys’ pockets with our cryptocash.

But more importantly, this should be a wake-up call for users. A lot of people, even those who consider themselves Internet-savvy, are falling for or are rattled by the extortion messaging, especially those emails that make use of old passwords to scare innocent people into parting with their money.

If you or someone you know may have received sextortion emails, know that it’s highly likely they’re not watching you. What threat actors describe in their emails is not actually taking place.

Furthermore, don’t panic. Do your due diligence and secure accounts that have been affected by massive breaches in the past (if you haven’t already). And lastly, if you want to do as little hoop-jumping as possible, just delete the email and file it away in your mind as harmless spam.

The post The lucrative business of Bitcoin sextortion scams (updated) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Bluetooth vulnerability can be exploited in Key Negotiation of Bluetooth (KNOB) attacks

Malwarebytes - Wed, 08/21/2019 - 15:56

Those who are familiar with Bluetooth BR/EDR technology (aka Bluetooth Classic, from 1.0 to 5.1) can attest that it is not perfect. Like any other piece of hardware or software technology already on the market, its usefulness comes with flaws.

Early last week, academics at Singapore University of Technology, the CISPA Helmholtz Center for Information Security, and University of Oxford released their research paper [PDF] on a type of brute-force attack called Key Negotiation of Bluetooth, or KNOB. KNOB targets and exploits a weakness in the firmware of a device’s Bluetooth chip that allows hackers to perform a Man-in-the-Middle (MiTM) attack via packet injection and disclose or leak potentially sensitive data.

The Bluetooth vulnerability that KNOB targets is identified as CVE-2019-9506. According to the paper, Bluetooth chips manufactured by Intel, Broadcom, Apple, and Qualcomm are vulnerable to KNOB attacks.

What causes a KNOB attack?

The researchers have identified two circumstances of Bluetooth programming that allow KNOB attacks to be successful.

Firstly, Bluetooth inherently allows the use of keys that have a minimum length of 1 byte, which may hold 1 character. Think of this as a one-character password. Such a password would have a low entropy—meaning it would be easily predictable or guessed. Although keys with low entropy can still keep a Bluetooth-paired connection secure, hackers can easily circumvent them with a brute-force attack.
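The brute-force math behind this is straightforward to sketch. The snippet below (an illustration, not part of the researchers' tooling) compares the keyspace of a 1-byte key against the 7-byte minimum later recommended by the Bluetooth SIG and a full 16-byte key:

```python
# Sketch: why a 1-byte encryption key is trivially brute-forceable.
# An n-byte key has a keyspace of 2**(8*n) candidates; an attacker who
# can test keys offline simply tries each one. A 1-byte key has only
# 256 possibilities, while a 16-byte key has ~3.4e38.

def keyspace(n_bytes):
    """Number of possible keys for a key of n_bytes length."""
    return 2 ** (8 * n_bytes)

for n in (1, 7, 16):
    print(f"{n:2d}-byte key: {keyspace(n):.3e} possible keys")
```

At 256 candidates, even a modest machine can exhaust the 1-byte keyspace essentially instantly, which is what makes the downgrade attack worthwhile.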

Researchers said that the 1-byte lower limit was put in place to follow international encryption regulations.

And, secondly, Bluetooth inherently does not check changes in entropy, which occurs when two devices start to “negotiate” the key length they will be using to encrypt their connection. Worse, this pre-pairing phase isn’t encrypted. The device receiving the pairing request will have no choice but to accept the low-entropy key.

Essentially, this leaves users expecting that they can safely exchange potentially sensitive data with a trusted paired device over what they thought was a secure connection—but it is not. And there is no way for them to know this.

How does it work?

The researchers illustrated their attack using three people named Alice, Bob, and Charlie, with the first two as potential targets and the last as the attacker.

  1. Alice, who in this example is the owner of the master device—the Bluetooth device trying to establish a secure connection with another Bluetooth device—sends a pairing request to Bob, who is the owner of the slave device—the Bluetooth device receiving the request. A master can be paired with many slaves, but for this example, we’ll only use one, which is Bob’s.
  2. Before the two devices are paired, Alice and Bob must first agree on an encryption key to use to secure their connection. This is where the negotiation takes place. Alice would like her and Bob to use an encryption key with 16 bytes of entropy.
  3. Charlie, the man-in-the-middle attacker, intercepts this proposal and changes the entropy value of 16 bytes to 1 byte before sending it off to Bob.
  4. Bob receives the modified request for the use of an encryption key with 1 byte of entropy and sends an acceptance message back to Alice.
  5. Charlie intercepts the acceptance message and changes it to a proposal to use an encryption key with 1 byte of entropy.
  6. Alice receives the modified proposal and accepts the use of an encryption key with 1 byte of entropy and sends an acceptance message back to Bob.
  7. Charlie drops the acceptance message from Alice because, to the best of Bob’s knowledge, he didn’t send any proposal to Alice that would merit an acceptance.
  8. The pairing between Alice’s and Bob’s devices is successful.

Unfortunately, Alice and Bob would have no idea that they are relying on a poorly-encrypted Bluetooth connection that Charlie can easily infiltrate while they exchange data.
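The negotiation steps above can be sketched as a simple message exchange. All names and message shapes here are illustrative; real Bluetooth LMP packets are considerably more involved:

```python
# Sketch of the KNOB entropy-negotiation downgrade. Charlie, the
# man-in-the-middle, rewrites the proposed key entropy in transit;
# neither endpoint can detect the change because the negotiation
# phase is unencrypted and unauthenticated.

def alice_propose():
    # Alice proposes a strong 16-byte-entropy encryption key.
    return {"type": "propose", "entropy_bytes": 16}

def charlie_tamper(msg):
    # The attacker intercepts the proposal and lowers the entropy to 1 byte.
    tampered = dict(msg)
    tampered["entropy_bytes"] = 1
    return tampered

def bob_accept(msg):
    # Bob has no way to verify the original value and accepts what arrives.
    return {"type": "accept", "entropy_bytes": msg["entropy_bytes"]}

proposal = alice_propose()
forwarded = charlie_tamper(proposal)
accepted = bob_accept(forwarded)
print(f"Negotiated key entropy: {accepted['entropy_bytes']} byte(s)")
```

The end state is a paired connection encrypted with a key Charlie can brute-force in moments, while Alice and Bob both believe they negotiated a strong key.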

While this may sound simple enough, it’s highly unlikely that we’ll see someone performing this kind of attack—random or targeted—in watering holes like coffee shops and airports. Implementing a successful KNOB attack in the wild and over the air requires expensive equipment, such as a Bluetooth protocol analyzer, and a finely-tuned brute-force script. It is also exceedingly difficult to implement an over-the-air attack, which is why the researchers admitted to opting for a simpler, cheaper, and more reliable means of testing the effectiveness of a KNOB attack in their simulations.

Does KNOB affect me?

Researchers surmised that, as KNOB attacks Bluetooth at the architectural level, its vulnerability “endangers potentially all standard compliant Bluetooth devices, regardless [of] their Bluetooth version number and implementation details.”

Fortunately, the team already disclosed the vulnerability to the Bluetooth Special Interest Group (SIG)—the organization responsible for maintaining the technology and overseeing its standards—the International Consortium for Advanced Cybersecurity on the Internet, and the CERT Coordination Centre in Q4 2018.

In a security notice, SIG announced that it has remedied the vulnerability by updating the Bluetooth Core Specification to recommend the use of encryption keys with a minimum of 7 bytes of entropy for BR/EDR connections.

To know if your Bluetooth devices are vulnerable to the KNOB attack, recall if you have updated them since late 2018. If you haven’t, chances are that your devices are vulnerable. The researchers were positive that updates after that date fixed the vulnerability.

If you’re still unsure, Carnegie Mellon University put together information on systems that KNOB can affect.

How to protect your Bluetooth devices

Patching all your Bluetooth devices is the logical next step, especially if you’re unsure whether you’ve done so since late last year.

Here is a concise list of security update notices from product vendors of Bluetooth-enabled devices you might want to check out:

When it comes to sharing potentially sensitive data with someone else, Bluetooth isn’t the best technology that truly guarantees a safe and secure exchange. So as a final note, you’re better off using other more secure methods of sharing data.

As for your Bluetooth headphones, should you be worried? Maybe not so much. But you might want to think about your IoT devices, mobile phones, and smart jewelry.

Stay informed and stay safe!

The post Bluetooth vulnerability can be exploited in Key Negotiation of Bluetooth (KNOB) attacks appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Magecart criminals caught stealing with their poker face on

Malwarebytes - Tue, 08/20/2019 - 15:00

Earlier in June, we documented how Magecart credit card skimmers were found on Amazon S3. This was an interesting development, since threat actors weren’t actively targeting specific e-commerce shops, but rather were indiscriminately injecting any exposed S3 bucket.

Ever since then, we’ve monitored other places where we believe a skimmer might be found next. However, we were somewhat intrigued when we received a report from one of our customers saying that they were getting a Magecart-related alert when they ran their poker software.

Typically, skimmers such as those used by Magecart criminals operate within the web browser by using malicious bits of JavaScript to steal personal details, including payment data, from victims. In this blog, we review the curious poker case that started with a detection for a Windows program but was also tied to a website compromise.

Software application connects to Magecart domain

Poker Tracker is a software suite for poker enthusiasts that aims to help players improve their game and make the online gaming experience smoother. The Holdem and Omaha versions retail from $59.99 to $159.99 and can be purchased directly from the vendor’s website.

From the customer’s report, we saw that Malwarebytes was blocking the connection to the domain ajaxclick[.]com when the poker software application Poker Tracker 4 (PokerTracker4.exe) was launched.

Our first step was to try and reproduce this behavior to have a better understanding of what was going on behind the scenes. Sure enough, after the installation process was complete and we launched the program, we also noticed the same web connection block (Figure 1).

Figure 1: Malwarebytes stopped the connection to a malicious domain when we launched the poker application.

Traffic analysis reveals web skimmer

In order to find out more about the data this application may be requesting or sending to ajaxclick[.]com, we inspected the network traffic and in particular any communications with the 172.93.103[.]94 IP address. The interesting bit is this HTTP GET request that retrieves a JavaScript file (click.js) from the aforementioned domain name.

Figure 2: Network traffic capture reveals the full URL path for the malicious domain

If we take a closer look, we recognize the typical attributes of a credit card skimmer. (As a side note, another JavaScript snippet also hosted on ajaxclick[.]com was recently identified by a security researcher.) After decoding the entire script, we can see in greater detail the data exfiltration process:

Figure 3: Code snippet showing how the skimmer collects and exfiltrates the stolen data

The skimmer was customized for the pokertracker.com site, as not only do the variable names match its input form fields, but the data portion of the skimmer script has the site’s name hardcoded as well.

Figure 4: Checkout page and credit card number field targeted by the skimmer

Based on our observations, ajaxclick[.]com includes different skimmers that have each been customized for individual victim websites. To prevent security researchers from scrutinizing each skimmer, in some instances the threat actors have implemented server-side code that ensures a unique referer is passed with the HTTP request headers.

By enumerating the ajaxclick[.]com/ajax/libs/x.x.x/click.js URL path, we can check if a skimmer script exists at that particular location. If it does, the server will return the 200 HTTP status code. If it doesn’t, it will return a 404 instead. This process allowed us to discover several other skimmers, including another, more detailed one for the pokertracker.com site located at ajaxclick[.]com/ajax/libs/1.3.6/click.js.
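The enumeration described above can be sketched in a few lines. The HTTP check is left injectable so the URL-building logic stands on its own; in practice, a `requests.head()` call supplying the status code would fill that role. The version ranges and placeholder domain below are illustrative (the real domain is defanged in this article as ajaxclick[.]com):

```python
# Sketch: enumerating ajax/libs/x.x.x/click.js paths and treating an
# HTTP 200 response as evidence that a skimmer script exists there.
from itertools import product

def candidate_urls(base="http://ajaxclick.invalid/ajax/libs"):
    """Yield candidate skimmer URLs for a range of x.x.x version strings."""
    for major, minor, patch in product(range(2), range(5), range(10)):
        yield f"{base}/{major}.{minor}.{patch}/click.js"

def find_skimmers(check, urls):
    """check(url) should return an HTTP status code (e.g. via
    requests.head(url, timeout=5).status_code); 200 means a hit,
    404 means no script at that path."""
    return [u for u in urls if check(u) == 200]
```

With a live target, `find_skimmers(lambda u: requests.head(u, timeout=5).status_code, candidate_urls())` would surface every hosted skimmer variant, which mirrors how we discovered the additional scripts listed in the Indicators of Compromise below.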

Figure 5: More skimmer scripts hosted on the same malicious domain

Drupal site hack behind incident

For a minute, we thought the poker application might have been Trojanized. However, when using the software, we noticed that the program also acts as a browser by displaying web pages within its user interface. In this case, content is retrieved from pt4.pokertracker.com:

Figure 6: Web traffic revealing the sub-domain that the poker application loads internally

This sub-domain, as well as the root domain (main website at pokertracker.com), are both running Drupal version 6.3x, which is outdated and vulnerable. They were both injected with the skimmer. This is the type of activity we are accustomed to with Magecart, although the fact that the site was running Drupal instead of Magento (the most targeted platform by web skimmers) was a bit of a surprise.

Figure 7: The main website pokertracker.com was also hacked with the same skimmer.

Every time users were launching PokerTracker 4, it would load the compromised web page within the application, which would trigger a block notification from Malwarebytes as the skimming script attempted to load. However, it’s worth noting that users going directly to the poker website were also exposed to the skimmer.

We reported this incident to the owners of PokerTracker and they rapidly identified the issue and removed the offending Drupal module. They also told us that they tightened their Content Security Policy (CSP) to help mitigate future attacks via harmful external scripts.
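A Content Security Policy restricting where scripts may load from is an effective mitigation against this class of injection. As a hypothetical illustration (not PokerTracker's actual policy), a response header along these lines would stop the browser from fetching click.js from an unapproved third-party host:

```
Content-Security-Policy: script-src 'self' https://trusted-cdn.example.com
```

With such a policy in place, an injected script tag pointing at an attacker-controlled domain is refused by the browser even if the markup itself is compromised.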

What this incident tells us is that users might encounter web skimmers in unexpected locations—and not just in online shopping checkout pages. At the end of the day, anything that will load unvalidated JavaScript code is susceptible to being caught in the crosshairs. As a result, the Magecart robbers have a nice, wide playing field in front of them. Of course, they’ve got to get through defenders first.

Indicators of Compromise

Skimmer domain and IP address

ajaxclick[.]com
172.93.103[.]194

Known skimming scripts

ajaxclick[.]com/ajax/libs/1.0.2/click.js
ajaxclick[.]com/ajax/libs/1.1.2/click.js
ajaxclick[.]com/ajax/libs/1.1.3/click.js
ajaxclick[.]com/ajax/libs/1.2.1/click.js
ajaxclick[.]com/ajax/libs/1.3.2/click.js
ajaxclick[.]com/ajax/libs/1.3.4/click.js
ajaxclick[.]com/ajax/libs/1.3.6/click.js
ajaxclick[.]com/ajax/libs/1.3.9/click.js
ajaxclick[.]com/ajax/libs/1.4.0/click.js
ajaxclick[.]com/ajax/libs/1.4.1/click.js

Exfiltration gate

www-trust[.]com

The post Magecart criminals caught stealing with their poker face on appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Pages

Subscribe to Furiously Eclectic People aggregator - Techie Feeds