Techie Feeds

Maine inches closer to shutting down ISP pay-for-privacy schemes

Malwarebytes - Wed, 06/05/2019 - 15:00

Maine residents are one step closer to being protected from the unapproved use, sharing, and sale of their data by Internet service providers (ISPs). A new state bill, already approved by the state House of Representatives and Senate, awaits the governor’s signature.

If signed, the bill would provide some of the strongest data privacy protections in the United States, putting a latch on emails, online chats, browser history, IP addresses, and geolocation data collected and stored by ISPs like Verizon, Comcast, and Spectrum. The bill goes further: Unlike a data privacy proposal in the US and a new data privacy law in California, the Maine bill explicitly shuts down any pay-for-privacy schemes.

The Act to Protect the Privacy of Online Customer Information (or LD 946 for short) would go into effect on July 1, 2020. It is, with minor exception, widely supported, even among its intended targets.

“We sell Internet access, and we know that if people can’t trust the Internet, then the value of the Internet is significantly lessened, as it will be used less for sensitive applications,” wrote Fletcher Kittredge and Kerem Durdag, CEO and COO of Maine-based ISP GWI. “Even if government regulation blocks us from making money selling customer data (something we never ever do), we still benefit because a trusted Internet is more valuable to all our customers.”

Not everyone agrees, though.

The Maine State Chamber of Commerce opposes the bill and, following the Senate’s unanimous approval last week (35–0), has vowed to “ensure that this harmful bill does not become law.”

The Chamber’s arguments have puzzled the ACLU of Maine, a supporter of LD 946. According to the nonprofit, the Chamber has engaged in “gaslighting” and “disingenuous” advertising, serving as a mouthpiece for the region’s big ISPs.

The Chamber did not respond to requests for comment.

Further, the Chamber commissioned a public survey that handwaves away the actual matter at hand: Should ISPs be restricted from selling user data?

To the ACLU of Maine, that answer is clear: Yes.

“This bill protects Mainers from having their ISPs sell their data without their knowledge and consent,” said Oamshri Amarasingham, advocacy director of ACLU of Maine.

The bill

Sponsored by Maine state Democratic Senator Shenna Bellows, LD 946 would prohibit ISPs from using, disclosing, selling, or allowing access to customers’ “personal information.” That includes the content of online communications, web browsing history, app usage history, “precise geolocation information,” and health and financial information.

This bill does not exist in a vacuum. In February, Motherboard revealed that, for years, actual, honest-to-God bounty hunters could access the location data of AT&T, T-Mobile, and Sprint customers. It gets better (worse): The location data was initially intended for 911 operators, but was sold to data aggregators by the telecom companies themselves.

Away from bounty hunter headlines, The Verge last month also spotlighted AT&T’s plans to profit from nearly every piece of its customers’ data.

Under LD 946, that activity would be regulated.

The bill allows for some exceptions. An ISP could sell user data so long as the user consents to that sale, and ISPs could also use and disclose user data when complying with court orders, rendering bills, protecting users from fraud and abuse, and providing their services, so long as the user data is necessary to those services. Further, ISPs could disclose geolocation data in the case of emergencies, like dispatching 911 services.

The bill also closes a few potential loopholes, prohibiting ISPs from requiring that users consent to the sale of their data in order to use their services. The bill also states that ISPs must provide “clear, conspicuous, and nondeceptive notice” when users consent to sell their data.

Finally, the bill shuts down any “pay-for-privacy” schemes that have already proved popular. According to the bill, ISPs cannot “charge a customer a penalty or offer a customer a discount based on the customer’s decision to provide or not provide consent” to having their data sold, shared, or accessed by third parties.

Good.

As we previously wrote about Sen. Ron Wyden’s data privacy proposal, which includes a pay-for-privacy stipulation:

“[Pay-for-privacy] casts privacy as a commodity that individuals with the means can easily purchase. But a move in this direction could further deepen the separation between socioeconomic classes. The ‘haves’ can operate online free from prying eyes. But the ‘have nots’ must forfeit that right.”

The Maine state bill does its part to prevent that unequal outcome.

Maine Governor Janet Mills has until June 11 to sign the bill and turn it into law. If she misses the deadline, the bill automatically becomes law.

Amarasingham of ACLU of Maine expects a positive outcome.

“We are optimistic that [Governor Mills] will sign this bill,” Amarasingham said. “I know ISPs and the Chamber of Commerce are exerting a lot of pressure, but I’m proud to say Maine legislators didn’t cave to that. I hope the governor’s office won’t either.”

The opposition

The challenge to LD 946 includes claims of insufficiency, unproven rhetoric, misleading statistics, and a question as to what legislation should accomplish.

As Amarasingham said, one of the bill’s main opponents is the Maine State Chamber of Commerce. In recent months, the Chamber funded a 30-second video ad criticizing the bill, hired a research firm to conduct public surveys about data privacy, and launched a website that asked Maine residents to tell their representatives to vote against the bill.

That website labeled LD 946 as “harmful to Maine’s consumers,” because, allegedly, the bill “will create greater consumer confusion and undermine consumers’ confidence in their online activities—a risk to the continued growth of the digital economy.”

That confusion argument showed up in a Central Maine opinion piece written by Mid-Maine Chamber of Commerce president and CEO Kimberly Lindlof. Lindlof wrote that a “patchwork” of state data privacy laws—with different standards across different state lines—could create a scenario where Maine residents “might have to opt in to a privacy setting in Maine but opt out of that setting if you go into another state for vacation.”

But the Mid-Maine Chamber of Commerce and the Maine State Chamber of Commerce both oppose LD 946 for another reason: The bill does not go far enough.

According to both agencies, LD 946 should apply not just to companies that provide Internet service, but also to companies that operate their businesses online, such as Google and Facebook. The Chamber’s video ad, which it posted on Facebook, said that “it doesn’t make sense” to leave out these big Silicon Valley tech companies, which have repeatedly failed to protect user data. (The video ad also claims that LD 946 “exempts Facebook,” which is flatly untrue—it simply does not apply to Facebook. There are no written exemptions for the company.)

Boiled down, the Chamber wants a stronger bill.

However, this is an ideological argument about policy: Should legislation immediately achieve broad goals, or should it take individual steps towards those goals?

According to Amarasingham, the reality of policy-making is the latter.

“The nature of legislation and law reform is that it is incremental,” she said. “There is no one bill on any issue that solves an entire problem. This bill is an enormous first step and it is very important.”

Following the Senate’s approval of LD 946 last week, the Chamber responded on its website:

“Today the State Senate failed to protect the online privacy of all Maine consumers in passing LD 946, a fundamentally flawed bill that will do little to make Mainers’ personal privacy more secure on the Internet. Despite the fact that 87% [nearly 90%] of Mainers believe a state law should apply to all companies on the Internet according to a recent survey, senators chose to pass a bill that leaves consumers’ personal data unprotected when they are using websites, search engines, and social media apps.”

Those statistics deserve scrutiny.

The statement cites a Chamber-funded survey by David Binder Research, in which the firm conducted 600 telephone interviews between May 9 and May 11. The statistic referenced by the Chamber pertains to this question:

“If the Maine state legislature were to pass a law today to protect your personal privacy, should this law apply to just a few companies on the Internet, with the idea of passing more law [sic] in the future to cover additional companies on the Internet, or should this law apply to all companies?”

According to the survey, 87 percent of respondents answered “All companies.”

But that question asks respondents to make a choice between two entirely different things—one of them literally exists and the other does not.

LD 946, which applies to a “few companies,” is written. A bill that applies to “all companies” is not. This is a choice between reality and possibility.

Further, the question’s language obfuscates a core difference between “companies on the Internet”—like Google and Facebook—and companies that provide the Internet. These are not the same.

The Maine State Chamber of Commerce did not respond to emailed questions about when it last created a website campaign against a bill, or about why it believes the potential for broader privacy protections supersedes the current bill’s incremental protections. The Chamber also did not reply to a voicemail providing similar questions.

If at this point, you’re confused about how incremental protections against sneaky ISP behavior could be seen as “harmful,” you’re not alone. Tracking the Chamber’s privacy-protective messaging against its anti-ISP-protection messaging can make anyone’s head spin.

“I can’t say that I fully understand why the Chamber is carrying Spectrum and AT&T’s water on this,” Amarasingham said. “Their top line, outward-facing message was Mainers deserve privacy protections, which is also our top line message.”

Amarasingham continued: “This is real privacy protection.”

Data privacy shoulds and should-nots

Should rules be written to stop Facebook and Google and dozens of Silicon Valley tech companies from profiting off your data? That depends on several factors, like what those rules would look like, how they would be implemented and enforced, and what exemptions would apply, not to mention whether those rules would nullify current state rules that are being pushed forward today.

But should ISPs be allowed to sell user data without consent when there is already a widely-supported plan in place to stop them? Absolutely not.

The post Maine inches closer to shutting down ISP pay-for-privacy schemes appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Magecart skimmers found on Amazon CloudFront CDN

Malwarebytes - Tue, 06/04/2019 - 15:00

Update (06-08-2019): The compromises of Amazon S3 buckets continue and some large sites are being affected. Our crawler spotted a malicious injection that loads a skimmer for the Washington Wizards page on the official NBA.com website.

The skimmer was inserted in this JavaScript library:

hxxps://s3[.]amazonaws[.]com/wsaimages/js/wizards[.]js

Interestingly, this same library had already been altered once before (loading content from installw[.]com), back in January of this year. We have reported this incident to Amazon. A complete archived scan of the page can be found here.

Late last week, we observed a number of compromises on Amazon CloudFront – a Content Delivery Network (CDN) – where hosted JavaScript libraries were tampered with and injected with web skimmers.

Although attacks that involve CDNs usually affect a large number of web properties at once via their supply chain, this isn’t always the case. Some websites either use Amazon’s cloud infrastructure to host their own libraries or link to code developed specifically for them and hosted on a custom AWS S3 bucket.

Without properly validating content loaded externally, these sites are exposing their users to various threats, including some that pilfer credit card data. After analyzing these breaches, we found that they are a continuation of a campaign from Magecart threat actors attempting to cast a wide net around many different CDNs.

The ideal place to conceal a skimmer

CDNs are widely used because they provide great benefits to website owners, including optimizing load times and cost, as well as helping with all sorts of data analytics.

The sites we identified during a crawl had nothing in common other than the fact they were all using their own custom CDN to load various libraries. In effect, the only resulting victims of a compromise on their CDN repository would be themselves.

This first example shows a JavaScript library that is hosted on its own dedicated AWS S3 bucket. The skimmer can be seen appended to the original code and using obfuscation to conceal itself.

Site loading a compromised JavaScript library from its own AWS S3 bucket

This second case shows the skimmer injected not just in one library, but several contained within the same directory, once again part of an S3 bucket that is only used by this one website.

Fiddler traffic capture showing multiple JavaScript files on AWS injected with skimmer

Finally, here’s another example where the skimmer was injected in various scripts loaded from a custom CloudFront URL.

Fiddler traffic capture showing skimmer injected in a custom CloudFront repository

Exfiltration gate

This skimmer uses two levels of encoding (hex followed by Base64) to hide some of its payload, including the exfiltration gate (cdn-imgcloud[.]com). The stolen form data is also encoded before being sent back to the criminal infrastructure.
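As a sketch of that double-encoding scheme—assuming the string is Base64-encoded first and the result hex-encoded, so decoding peels hex and then Base64 (the exact layering is our reading of the skimmer)—a few lines of Python can recover a value hidden this way; the sample below is built for illustration, not taken from the skimmer itself:

```python
import base64

def decode_hidden(obfuscated: str) -> str:
    """Peel the two encoding layers: hex on the outside, Base64 underneath."""
    inner = bytes.fromhex(obfuscated).decode("ascii")  # layer 1: hex
    return base64.b64decode(inner).decode("ascii")     # layer 2: Base64

# Hypothetical sample wrapped the same way (Base64, then hex) for illustration
sample = base64.b64encode(b"cdn-imgcloud[.]com").hex()
print(decode_hidden(sample))  # cdn-imgcloud[.]com
```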

While we would have expected to see many Magento e-commerce shops, some of the victims included a news portal, a lawyer’s office, a software company, and a small telecom operator, all running a variety of Content Management Systems (CMSes).

Snippet of the skimmer code showing functions used to exfiltrate data

As such, many did not even have a payment form within their site. Most simply had a sign up or login form instead. This makes us believe that Magecart threat actors may be conducting “spray and pray” attacks on the CDNs they are able to access. Perhaps they are hoping to compromise libraries for sites with high traffic or tied to valuable infrastructure from which they can steal input data.

Connection with existing campaign

The skimmer used in this attack looked eerily familiar. Indeed, by going back in time, we noted it used to have the same exfiltration gate (font-assets[.]com) identified by Yonathan Klijnsma in RiskIQ’s report on several recent supply-chain attacks.

RiskIQ, in partnership with Abuse.ch and the Shadowserver Foundation, sinkholed both that domain and another (ww1-filecloud[.]com) in an effort to disrupt the criminal’s infrastructure.

Comparison snapshots: the exfiltration gate changing after original domain gets sinkholed

A cursory look at this new cdn-imgcloud[.]com gate shows that it was registered just a couple of days after the RiskIQ blog post came out and uses Carbon2u (which has a certain history) as nameservers.

Creation Date: 2019-05-16T07:12:30Z
Registrar: Shinjiru Technology Sdn Bhd
Name Server: NS1.CARBON2U.COM
Name Server: NS2.CARBON2U.COM

The domain resolves to the IP address 45.114.8[.]160 that belongs to ASN 55933 in Hong Kong. By exploring the same subnet, we can find other exfiltration gates also registered recently.

VirusTotal graph showing new gates and revealing that old gates are back online

What we can also see from the above VirusTotal graph is that the two domains (font-assets[.]com and ww1-filecloud[.]com) that were previously sinkholed to 179.43.144[.]137 (a server in Switzerland) came back into the hands of the criminals.

Historical passive DNS records show that on 05-25-2019, font-assets[.]com started resolving to 45.114.8[.]161. The same thing happened for ww1-filecloud[.]com, which ended up resolving to 45.114.8[.]159 after a few swaps.

Finding and exploiting weaknesses

This type of attack on private CDN repositories is not new, but reminds us that threat actors will look to exploit anything that is vulnerable to gain entry into systems. Sometimes, coming in from the front door might not be a viable option, so they will look for other ways.

While this example is not a third-party script supply-chain attack, it is served from third-party infrastructure. Beyond applying the same level of access control to your own CDN-hosted repositories as your actual website, other measures—such as validation of any externally loaded content (via Subresource Integrity checks, for example)—can save the day.
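As one example of such validation, a site can pin a CDN-hosted script to a known hash via an integrity attribute; the sketch below (hypothetical file content and URL) computes the sha384 value a browser would verify before executing the script:

```python
import base64
import hashlib

def sri_value(content: bytes) -> str:
    """Compute a Subresource Integrity value (sha384) over a script's exact bytes."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# If the bytes served by the CDN ever change (e.g., a skimmer is appended),
# the hash no longer matches and the browser refuses to run the script.
script = b"console.log('hello');"  # hypothetical library content
tag = ('<script src="https://cdn.example.com/lib.js" '
       f'integrity="{sri_value(script)}" crossorigin="anonymous"></script>')
print(tag)
```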

We reached out to the victims we identified in this campaign and several have already remediated the breach. In other cases, we filed an abuse report directly with Amazon. Malwarebytes users are protected against the skimmers mentioned in this blog and the new ones we discover each day.

Indicators of Compromise (IoCs)

ww1-filecloud[.]com,45.114.8[.]159
cdn-imgcloud[.]com,45.114.8[.]160
font-assets[.]com,45.114.8[.]161
wix-cloud[.]com,45.114.8[.]162
js-cloudhost[.]com,45.114.8[.]163

The post Magecart skimmers found on Amazon CloudFront CDN appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (May 27 – June 2)

Malwarebytes - Mon, 06/03/2019 - 17:09

Last week on Malwarebytes Labs, we took readers through a deep dive—way down the rabbit hole—into the novel malware called “Hidden Bee.” We also looked at the potential impact of a government agency’s privacy framework, and delivered to readers everything they needed to know about ATM attacks and fraud. Lastly, amidst continuing news about the City of Baltimore suffering a ransomware attack, we told readers what they should do to prepare themselves against similar threats.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (May 27 – June 2) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Leaks and breaches: a roundup

Malwarebytes - Mon, 06/03/2019 - 16:47

It’s time for one of our semi-regular breach/data exposure roundup blogs, as the last few days have brought us a few monsters. If you use any of the below sites, or if you think some of your data has been sitting around exposed, we’ll hopefully give you a better idea of what the issue is.

Seeing so many services be compromised or simply exposed for all to see without being secured is rather fatiguing, and we’d hate for the end result to be hands thrown in the air with a cry of “Why bother!” Without further ado, then, let’s take a look at breach number one.

Canva: Breached

Something in the region of 139 million users of graphic design service Canva had their data swiped by a hacker known for many other large compromises. Usernames, emails, real names, and cities were amongst the data swiped. A big chunk of users had a combination of password hashes and Google tokens grabbed, too.

There are some issues with how Canva initially reported this. The “we’ve been hacked” message, followed by a short email ramble about free images, led to concerns that many users may have ignored it completely. However, Canva has been quick to deal with the problem at hand and even has—shock, horror, and amazement—a good slice of information about it on its status page. In fact, there is even more information on a dedicated update portal.

In a nutshell, Canva states that your login passwords are unreadable, other credentials are similarly secure, your designs are safe, your card details haven’t been grabbed, and you should change your login as a precautionary measure.

Phew.

Flipboard: breached

Breach number two: Massively-successful news aggregator Flipboard was also hit by an attack, according to a statement released on May 28. The attack took place sometime between June 2018 and March 2019. They haven’t said how many accounts were breached, but as with Canva, they were careful to stress that stolen logins would be incredibly difficult to break into, thanks to the fact that they didn’t store passwords in plain text. Additionally, they’ve reset everybody’s login credentials as a precautionary security response.

The attackers grabbed the usual collection of valuables: usernames, hashed/salted passwords, some email addresses, and third-party digital tokens. As with the Canva breach, Flipboard has been upfront about the whole fiasco and are being a lot more proactive than many companies faced with similar situations.

Amazingco: exposed data

Next up, we have another example of an utterly unsecured database full of information readily available to anyone with a web browser. This is incredibly common, and a major source of data breaches/leaks. Hacking into servers, exploiting databases, phishing logins from admins? Too much hard work. Criminals need only go looking for wide-open goals instead.

In this case, the open goal belonged to an Australian marketing company called Amazingco. 174,000 records were there for the taking, containing everything from names and addresses to phone numbers, event types, and even IP addresses and ports.

We don’t know how long the data was sitting there, and we also don’t know if this information was meant to be sitting on the open Internet, or if someone possibly misconfigured something. What we do know is that this database has now been taken offline.

At this stage, there’s no real way to know if someone up to no good has grabbed it. However, if people with good intentions could find it, then so, too, could bad ones. Customers of Amazingco should be wary of follow-up attacks, as spear phishing will likely now be the order of the day.

First American Financial Corp: exposed data

Possibly the largest and most damaging of the bunch, our fourth incident is another one where data is freely available to someone sporting a web browser. First American Financial Corp had “hundreds of millions of documents related to mortgage deals, going back to 2003” digitised and ready to view without authentication.

Social security numbers, drivers licenses, account statements, wire transaction records, bank account numbers, and much more were all lurking in the pile. That pile was estimated to weigh in at around 885 million files, and as security researcher Brian Krebs notes, this would be an absolute gold mine for phishers and purveyors of Business Email Compromise scams. The data has now been taken offline, but that’s scant consolation for anyone affected.

What’s the upshot?

Don’t panic, but do be cautious. According to security firm Mimecast, 65 percent of organisations saw an increase in impersonation attempts year over year. Some of the above leaks could be extremely useful to scammers wanting to muscle in on victims, and you never know when someone’s going to try it on. The slightest bit of inattentiveness could lead to a spectacular mishap, and we don’t want that taking place.

The post Leaks and breaches: a roundup appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Hidden Bee: Let’s go down the rabbit hole

Malwarebytes - Fri, 05/31/2019 - 17:32

Some time ago, we discussed an interesting piece of malware, Hidden Bee. It is a Chinese miner, composed of userland components as well as a bootkit part. One of its unique features is a custom format used for some of the high-level elements (this format was featured in my recent presentation at SAS).

Recently, we stumbled upon a new sample of Hidden Bee. As it turns out, its authors decided to redesign some elements, as well as the formats used. In this post, we will take a deep dive into the functionality of the loader and the included changes.

Sample

831d0b55ebeb5e9ae19732e18041aa54 – shared by @James_inthe_box

Overview

Hidden Bee runs silently—only increased processor usage hints that the system is infected. More can be revealed with the help of tools inspecting the memory of running processes.

Initially, the main sample installs itself as a Windows service:

Hidden Bee service

However, once the next component is downloaded, this service is removed.

The payloads are injected into several applications, such as svchost.exe, msdtc.exe, dllhost.exe, and WmiPrvSE.exe.

If we scan the system with hollows_hunter, we can see that there are some implants in the memory of those processes:

Results of the scan by hollows_hunter

Indeed, if we take a look inside each process’ memory (with the help of Process Hacker), we can see atypical executable elements:

Hidden Bee implants are placed in RWX memory

Some of them are lacking typical PE headers, for example:

Executable in one of the multiple customized formats used by Hidden Bee

But in addition to this, we can also find PE files implanted at unusual addresses in the memory:

Manually-loaded PE files in the memory of WmiPrvSE.exe

Those manually-loaded PE files turned out to be legitimate DLLs: OpenCL.dll and cudart32_80.dll (NVIDIA CUDA Runtime, version 8.0.61). CUDA is a technology for NVIDIA graphics cards, so the DLLs’ presence suggests that the malware uses the GPU to boost its mining performance.

When we inspect the memory even closer, we see that within the executable implants there are some strings referencing Lua components:

Strings referencing the Lua scripting language, used by Hidden Bee components

Those strings are typical for the Hidden Bee miner, and they were also mentioned in the previous reports.

We can also see the strings referencing the mining activity, i.e. the Cryptonight miner.

List of modules:

bin/i386/coredll.bin
dispatcher.lua
bin/i386/ocl_detect.bin
bin/i386/cuda_detect.bin
bin/amd64/coredll.bin
bin/amd64/algo_cn_ocl.bin
lib/amd64/cudart64_80.dll
src/cryptonight.cl
src/cryptonight_r.cl
bin/i386/algo_cn_ocl.bin
config.lua
lib/i386/cudart32_80.dll
src/CryptonightR.cu
bin/i386/algo_cn.bin
bin/amd64/precomp.bin
bin/amd64/ocl_detect.bin
bin/amd64/cuda_detect.bin
lib/amd64/opencl.dll
lib/i386/opencl.dll
bin/amd64/algo_cn.bin
bin/i386/precomp.bin

And we can even retrieve the miner configuration:

Inside

Hidden Bee has a long chain of components that finally leads to loading the miner. Along the way, we will find a variety of customized formats: data packages, executables, and filesystems. The filesystems are mounted in the memory of the malware, and additional plugins and configuration are retrieved from them. Hidden Bee communicates with the C&C to retrieve the modules, also using its own TCP-based protocol along the way.

The first part of the loading process is described by the following diagram:

Each of the .spk packages contains a custom ‘SPUTNIK’ filesystem, containing more executable modules.

Starting the analysis from the loader, we will go down to the plugins, showing the inner workings of each element taking part in the loading process.

The loader

In contrast to most of the malware that we see nowadays, the loader is not packed with any crypter. According to the header, it was compiled in November 2018.

While in the former edition the modules in the custom formats were dropped as separate files, this time the next stage is unpacked from inside the loader.

The loader is not obfuscated. Once we load it with typical tools (IDA), we can clearly see how the new format is loaded.

The loading function

Section .shared contains the configuration:

Encrypted configuration. The last 16 bytes after the data block is the key.

The configuration is decrypted with the help of XTEA algorithm.

Decrypting the configuration
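The decryption routine can be sketched as follows. This is a minimal reimplementation assuming standard XTEA (32 rounds, 64-bit blocks), little-endian word order, and the key taken from the trailing 16 bytes; those parameters are our assumptions beyond what the disassembly screenshots show:

```python
import struct

DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF

def xtea_decrypt_block(v0, v1, key, rounds=32):
    """Decrypt one 64-bit XTEA block (two 32-bit words) with a 4-word key."""
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1

def decrypt_config(blob: bytes) -> bytes:
    """Decrypt a config laid out as: encrypted data block, then a 16-byte key."""
    data, raw_key = blob[:-16], blob[-16:]
    key = struct.unpack("<4I", raw_key)          # four little-endian key words (assumed)
    out = bytearray()
    for off in range(0, len(data), 8):
        v0, v1 = struct.unpack_from("<2I", data, off)
        out += struct.pack("<2I", *xtea_decrypt_block(v0, v1, key))
    assert out[:2] == b"pZ", "decryption failed: missing 'pZ' magic"
    return bytes(out)
```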

The decrypted configuration must start with the magic WORD “pZ.” It contains the C&C address and the name under which the service will be installed:

Unscrambling the NE format

The NE format was seen before, in former editions of Hidden Bee. It is just a scrambled version of the PE format. By observing which fields have been misplaced, we can easily reconstruct the original PE.

The loader, unpacking the next stage

NE is one of two similar formats used by this malware. The other starts with the DWORD 0x0EF1FAB9 and is used later in the chain to load components. Both of them have an analogous structure, derived from a slightly modified PE format:

Header:

WORD magic; // 'NE'
WORD pe_offset;
WORD machine_id;

The conversion back to PE format is trivial: It is enough to add the erased magic numbers: MZ and PE, and to move displaced fields to their original offsets. The tool that automatically does the mentioned conversion is available here.
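The magic-restoration step can be sketched like this. The snippet only puts back the MZ/PE signatures and points e_lfanew at the header offset stored in the NE header; the full converter linked above also moves the other displaced fields, whose layout is not reproduced here:

```python
import struct

def restore_pe_magics(data: bytes) -> bytes:
    """Put back the erased 'MZ' and 'PE' magics using the NE header's pe_offset."""
    magic, pe_offset, machine_id = struct.unpack_from("<3H", data, 0)
    assert magic == 0x454E, "not an 'NE' blob"      # b'NE' read as a little-endian WORD
    buf = bytearray(data)
    buf[0:2] = b"MZ"                                # restore the DOS header magic
    struct.pack_into("<I", buf, 0x3C, pe_offset)    # e_lfanew -> offset of the PE header
    buf[pe_offset:pe_offset + 4] = b"PE\x00\x00"    # restore the NT signature
    return bytes(buf)
```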

In the previous edition, the parts of Hidden Bee with analogical functionality were delivered in a different, more complex proprietary format than the one currently being analyzed.

Second stage: a downloader (in NE format)

As a result of the conversion, we get the following PE (fddfd292eaf33a490224ebe5371d3275). This module is a downloader of the next stage. The interesting thing is that the subsystem of this module is set as a driver; however, it is not loaded like a typical driver. The custom loader loads it into user space just like any typical userland component.

The function at the module’s Entry Point is called with three parameters. The first is a path of the main module. Then, the parameters from the configuration are passed. Example:

0012FE9C 00601A34 UNICODE "\"C:\Users\tester\Desktop\new_bee.exe\""
0012FEA0 00407104 UNICODE "NAPCUYWKOxywEgrO"
0012FEA4 00407004 UNICODE "118.41.45.124:9000"

Calling the Entry Point of the manually-loaded NE module

The execution of the module can take one of the two paths. The first one is meant for adding persistence: The module installs itself as a service.

If the module detects that it is already running as a service, it takes the second path. In such a case, it proceeds to download the next module from the server. The next module is packed as a Cabinet file.

The downloaded Cabinet file is being passed to the unpacking function

It is first unpacked into a file named “core.sdb”. The unpacked module is in a customized format based on PE. This time, the format has a different signature, “NS”, and it differs from the aforementioned “NE” format (a detailed explanation is given further on).

It is loaded by the proprietary loader.

The loader enumerates all the executables in the directory %Systemroot%\Microsoft.NET\ and selects the ones with compatible bitness (in the analyzed case, it was selecting 32-bit PEs). Once it finds a suitable PE, it runs it and injects the payload there. The injected code is run by adding its entry point to the APC queue.

Hidden Bee component injecting the next stage (core.sdb) into a new process

If it fails to find a suitable executable in that directory, it performs the injection into dllhost.exe instead.
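The bitness selection boils down to reading the Machine field from each candidate’s COFF header. A sketch of that check, using the standard PE layout (this is our illustration, not Hidden Bee’s own code):

```python
import struct

IMAGE_FILE_MACHINE_I386 = 0x014C
IMAGE_FILE_MACHINE_AMD64 = 0x8664

def is_32bit_pe(data: bytes) -> bool:
    """Return True when a PE file's COFF Machine field marks it as 32-bit x86."""
    if data[:2] != b"MZ":
        return False
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]   # offset of the PE header
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        return False
    machine = struct.unpack_from("<H", data, e_lfanew + 4)[0]
    return machine == IMAGE_FILE_MACHINE_I386
```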

Unscrambling the NS format

As mentioned before, core.sdb is in yet another format, named NS. It is also a customized PE; however, this time the conversion is more complex than for the NE format because more structures are customized. It looks like the next step in the evolution of the NE format.

Header of the NS format

We can see that the changes to the PE headers are bigger and more lossy—only minimal information is maintained. Only a few Data Directories are left. The sections table is also shrunk: Each section header contains only four of the nine fields present in the original PE.

Additionally, the format allows the loader to pass a runtime argument to the payload via the header: The pointer is saved into an additional field (marked “Filled Data” in the picture).

Not only is the PE header shrunk. Similar customization is done on the Import Table:

Customized part of the NS format’s import table

This custom format can also be converted back to the PE format with the help of a dedicated converter, available here.

Third stage: core.sdb

The core.sdb module converted to PE format is available here: a17645fac4bcb5253f36a654ea369bf9.

The interesting part is that the external loader does not complete the full loading process of the module: it only copies the sections. The rest of the loading, such as applying relocations and filling imports, is done internally by core.sdb.

The loading function is just at the Entry Point of core.sdb

The previous component was supposed to pass core.sdb an additional buffer with data about the installed service: its name and path. During its execution, core.sdb looks up this data. If found, it deletes the previously-created service, along with the initial file that started the infection:

Removing the initial service

Getting rid of the previous persistence method suggests that it will be replaced by a different technique. Knowing previous editions of Hidden Bee, we suspect it may be a bootkit.

After locking a mutex in the format Global\SC_{%08lx-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x}, the module proceeds to download another component. But before the download starts, a few things are checked.

Checks done before download of the next module

First of all, there is a defensive check for any known debuggers or sniffers running. If one is found, the function quits.

The blacklist

Also, there is a check whether the application can open the file ‘\??\NPF-{0179AC45-C226-48e3-A205-DCA79C824051}’.

If all the checks pass, the function proceeds and queries the following URL, where GET variables contain the system fingerprint:

sltp://bbs.favcom.space:1108/setup.bin?id=999&sid=0&sz=a7854b960e59efdaa670520bb9602f87&os=65542&ar=0

The hash (sz=) is an MD5 generated from the VolumeIDs. It is followed by (os=), identifying the version of the operating system, and the architecture identifier (ar=), where 0 means 32-bit and 1 means 64-bit.
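
For illustration, the fingerprinting query could be reconstructed roughly like this. The exact concatenation and encoding of the VolumeIDs that feed the MD5 are assumptions, and the helper name is hypothetical; only the URL shape comes from the sample above.

```python
import hashlib

def build_beacon_url(volume_ids, os_id, arch):
    # Hypothetical reconstruction: the sz= value is an MD5 over the
    # VolumeIDs (their exact concatenation/encoding is an assumption).
    sz = hashlib.md5("".join(volume_ids).encode()).hexdigest()
    # ar=0 means 32-bit, ar=1 means 64-bit
    return ("sltp://bbs.favcom.space:1108/setup.bin"
            f"?id=999&sid=0&sz={sz}&os={os_id}&ar={arch}")
```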

The content downloaded from this URL (starting with the magic DWORD 0xFEEDFACE – 79e851622ac5298198c04034465017c0) contains the encrypted package (in the !rbx format) and a shellcode that will be used to unpack it. The shellcode is loaded into the current process and then executed.

The ‘FEEDFACE’ module contains the shellcode to be loaded

The shellcode’s start function takes three parameters: a pointer to the functions in the previous module (core.sdb), a pointer to the buffer with the encrypted data, and the size of the encrypted data.

The loader calling the shellcode

Fourth stage: the shellcode decrypting !rbx

The beginning of the loaded shellcode:

The shellcode does not fill any imports by itself. Instead, it fully relies on the functions from the core.sdb module, to which it receives a pointer. It makes use of the following functions: malloc, memcpy, memfree, and VirtualAlloc.

Example: calling malloc via core.sdb

Its role is to reveal the next part, which comes in an encrypted package starting with the marker !rbx. The decryption function is called right at the beginning:

Calling the decrypting function (at Entry Point of the shellcode)

First, the function checks the !rbx marker and the checksum at the beginning of the encrypted buffer:

Checking marker and then checksum

It is decrypted with the RC4 algorithm and then decompressed.

After decryption, the markers at the beginning of the buffer are checked. The expected format must start with the predefined magic DWORDs: 0xCAFEBABE, 0, 0xBABECAFE:

The !rbx package format

The !rbx is also a custom format with a consistent structure.

DWORD magic; // "!rbx"
DWORD checksum;
DWORD content_size;
BYTE rc4_key[16];
DWORD out_size;
BYTE content[];
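
Based on the structure above, a minimal !rbx parser could look like this. It is a sketch only: it performs just the RC4 step (the subsequent decompression and the checksum verification are omitted), and the little-endian field packing is an assumption.

```python
import struct

def rc4(key: bytes, data: bytes) -> bytes:
    # Standard RC4: key scheduling followed by the PRGA keystream XOR
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def parse_rbx(buf: bytes):
    # Field offsets follow the structure listed above (assumed packed,
    # little-endian): magic, checksum, content_size, rc4_key[16], out_size
    magic, _checksum, content_size = struct.unpack_from("<4sII", buf, 0)
    assert magic == b"!rbx", "bad magic"
    rc4_key = buf[12:28]
    (out_size,) = struct.unpack_from("<I", buf, 28)
    content = buf[32:32 + content_size]
    # Decryption only; the decompression to out_size bytes is omitted here
    return rc4(rc4_key, content), out_size
```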

The custom file system (BABECAFE)

The full decrypted content has a consistent structure, reminiscent of a file system. According to previous reports, earlier versions of Hidden Bee adapted the ROMFS filesystem, adding a few modifications; they called their customized version “Mixed ROM FS”. Now it seems that their customization has progressed further, and the keywords suggesting ROMFS can no longer be found. The header starts with markers in the form of three DWORDs: { 0xCAFEBABE, 0, 0xBABECAFE }.
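
A minimal check for these three markers might look like this (a sketch; the helper name is ours, and little-endian layout is assumed):

```python
import struct

def is_babecafe(buf: bytes) -> bool:
    # The decrypted filesystem is expected to start with three DWORD
    # markers: 0xCAFEBABE, 0, 0xBABECAFE
    if len(buf) < 12:
        return False
    return struct.unpack_from("<III", buf, 0) == (0xCAFEBABE, 0, 0xBABECAFE)
```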

The layout of BABECAFE FS:

We notice that it differs at many points from the ROMFS from which it evolved.

The structure contains the following files:

/bin/amd64/coredll.bin
/bin/i386/coredll.bin
/bin/i386/preload
/bin/amd64/preload
/pkg/sputnik.spk
/installer/com_x86.dll (6177bc527853fe0f648efd17534dd28b)
/installer/com_x64.dll
/pkg/plugins.spk

The files /pkg/sputnik.spk and /pkg/plugins.spk are both compressed packages in a custom !rsi format.


Beginning of the !rsi package in the BABECAFE FS

Each of the spk packages contains another custom filesystem, identified by the keyword SPUTNIK (possibly the extension ‘spk’ is derived from the SPUTNIK format). They will be unpacked during the next steps of execution.

Unpacked plugins.spk: 4c01273fb77550132c42737912cbeb36
Unpacked sputnik.spk: 36f3247dad5ec73ed49c83e04b120523.

Selecting and running modules

Some executables stored in the filesystem come in two versions: 32- and 64-bit. Only the modules relevant to the current architecture are loaded. So, in the analyzed case, the loader first chooses /bin/i386/preload (a shellcode) and /bin/i386/coredll.bin (a module in the NS custom format). The names are hardcoded in the loading shellcode:

Searching the modules in the custom file system

After the proper elements are fetched (preload and coredll.bin), they are copied together into a newly-allocated memory area, with coredll.bin placed just after preload. Then, the preload module is called:

Redirecting execution to preload

The preload is position-independent, and its execution starts from the beginning of the page.

Entering ‘preload’

The only role of this shellcode is to prepare and run coredll.bin. It contains a custom loader for the NS format that allocates another memory area and loads the NS file there.

Fifth stage: preload and coredll

After loading coredll, preload redirects the execution there.

coredll at its Entry Point

The coredll patches a function inside NTDLL, KiUserExceptionDispatcher, redirecting one of its inner calls to its own code:

A patch inside KiUserExceptionDispatcher

Depending on which process the coredll was injected into, it can take one of a few paths of execution.

If it is running for the first time, it will try to inject itself again, this time into rundll32. For the purpose of the injection, it will again unpack the original !rbx package and use the original copy stored there.

Entering the unpacking function

Inside the unpacking function: checking the magic “!rbx”

Then it will choose the modules depending on the bitness of the rundll32:

It selects the pair of modules (preload/coredll.bin) appropriate for the architecture, either from the directory amd64 or from i386:
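
The architecture-based selection can be sketched as follows (a hypothetical helper; the path layout comes from the file list above):

```python
def module_paths(is_64bit: bool):
    # The loader picks the preload/coredll.bin pair matching the target
    # process architecture, from either amd64 or i386
    arch = "amd64" if is_64bit else "i386"
    return (f"/bin/{arch}/preload", f"/bin/{arch}/coredll.bin")
```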

If the injection failed, it makes another attempt, this time trying to inject into dllhost:

Each time it uses the same, hardcoded parameter (/Processid: {...}) that is passed to the created process:

The thread context of the target process is modified, and then the thread is resumed, running the injected content:

Now, when we look inside the memory of rundll32, we can find the preload and coredll being mapped:

Inside the injected part, the execution follows a similar path: preload loads the coredll and redirects to its Entry Point. But then, another path of execution is taken.

The parameter passed to the coredll decides which round of execution it is. On the second round, another injection is made: this time to dllhost.exe. And finally, it proceeds to the final round, when other modules are unpacked from the BABECAFE filesystem.

Parameter deciding which path to take

The unpacking function first searches by name for two more modules: sputnik.spk and plugins.spk. They are both in the mysterious !rsi format, which reminds us of !rbx, but has a slightly different structure.

Entering the function unpacking the first !rsi package:

The function unpacking the !rsi format is structured similarly to the !rbx unpacking. It also starts from checking the keyword:

Checking “!rsi” keyword

As mentioned before, both !rsi packages are used to store filesystems marked with the keyword “SPUTNIK”. This is another custom filesystem, invented by the Hidden Bee authors, that contains additional modules.

The “SPUTNIK” keyword is checked after the module is unpacked

Unpacking the sputnik.spk resulted in getting the following SPUTNIK module: 455738924b7665e1c15e30cf73c9c377

It is worth noting that the unpacked filesystem contains four executables: two pairs, each consisting of an NS and a PE module, in 32- and 64-bit versions. In the currently-analyzed setup, the 32-bit versions are deployed.

The NS module will be the next to be run. First, it is loaded by the current executable, and then the execution is redirected there. Interestingly, both !rsi modules are passed as arguments to the entry point of the new module. (They will be used later to retrieve more components.)

Calling the newly-loaded NS executable

Sixth stage: mpsi.dll (unpacked from SPUTNIK)

Entering into the NS module starts another layer of the malware:

Entry Point of the NS module: the !rsi modules, prepended with their size, are passed

The analyzed module, converted to PE is available here: 537523ee256824e371d0bc16298b3849

This module is responsible for loading plugins. It also creates a named pipe through which it will communicate with other modules, and it sets up the commands that are going to be executed on demand.

This is how the beginning of the main function looks:

As in previous cases, it starts by finishing its own loading (relocations and imports). Then, it patches the function in NTDLL. This is a common prologue in many Hidden Bee modules.

Then comes another phase of loading elements from the supplied packages. The path taken depends on the runtime arguments. If the function received both !rsi packages, it starts by parsing one of them, retrieving and loading submodules.

First, the SPUTNIK filesystem must be unpacked from the !rsi package:

After being unpacked, it is mounted. The filesystems are mounted internally in memory: a global structure is filled with pointers to the appropriate elements of the filesystem.

At the beginning, we can see the list of the plugins that are going to be loaded: cloudcompute.api, deepfreeze.api, and netscan.api. These names are appended to the modules’ root path.

Each module is fetched from the mounted filesystem and loaded:

Calling the function to load the plugin

Consecutive modules are loaded one after another into the same executable memory area. After a module is loaded, its header is erased, a common technique used to make dumping the payload from memory more difficult.

The cloudcompute.api is a plugin that will load the miner. More about the plugins will be explained in the next section of this post.

Reading its code, we find that the SPUTNIK modules are filesystems that can be mounted and dismounted on demand. This module communicates with other modules through a named pipe, receiving commands and executing the appropriate handlers.

Initialization of the commands’ parser:

The function setting up the commands: For each name, a handler is registered. (This is probably the Lua dispatcher, first described here.)

When the plugins are run, we can see some additional child processes created by the process running the coredll (in the analyzed case, rundll32):

It also triggers a firewall alert, which means the malware requested that some ports be opened (triggered by the netscan.api plugin):

We can see that it started listening on one TCP and one UDP port:

The plugins

As mentioned in the previous section, the SPUTNIK filesystem contains three plugins: cloudcompute.api, deepfreeze.api, and netscan.api. If we convert them to PE, we can see that all of them import an unknown DLL: mpsi.dll. Looking at the filled import table, we find that the addresses have been filled in to redirect to functions from the previous NS module:

So we can conclude that the previous element is mpsi.dll. Although its export table has been destroyed, the functions are fetched by the custom loader and filled into the import tables of the loaded plugins.

First the cloudcompute.api is run.

This plugin retrieves from the filesystem a file named “/etc/ccmain.json” that contains the list of URLs:

Those are addresses from which another set of modules is going to be downloaded:

["sstp://news.onetouchauthentication.online:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.club:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.icu:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.xyz:443/mlf_plug.zip.sig"]

It also retrieves another component from the SPUTNIK filesystem: /bin/i386/ccmain.bin. This time, it is an executable in the NE format (a version converted to PE is available here: 367db629beedf528adaa021bdb7c12de).

This is the component that is injected into msdtc.exe.

The HiddenBee module mapped into msdtc.exe

The configuration is also copied into the remote process and is used to retrieve an additional package from the C&C:

This is the plugin responsible for downloading and deploying the Mellifera miner, the core component of Hidden Bee.

Next, the netscan.api plugin loads the module /bin/i386/kernelbase.bin (converted to PE: d7516ad354a3be2299759cd21e161a04).

The miner in APT-style

Hidden Bee is an eclectic piece of malware. Although it is commodity malware used for cryptocurrency mining, its design reminds us of espionage platforms used by APTs. Going through all its components is exhausting, but also fascinating. The authors are highly professional, not only as individuals but also as a team, because the design is consistent in all its complexity.

Appendix

https://github.com/hasherezade/hidden_bee_tools – helper tools for parsing and converting Hidden Bee custom formats

https://www.bleepingcomputer.com/news/security/new-underminer-exploit-kit-discovered-pushing-bootkits-and-coinminers/

Articles about the previous version (in Chinese):

Our first encounter with the Hidden Bee:

https://blog.malwarebytes.com/threat-analysis/2018/07/hidden-bee-miner-delivered-via-improved-drive-by-download-toolkit/

The post Hidden Bee: Let’s go down the rabbit hole appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Ransomware isn’t just a big city problem

Malwarebytes - Fri, 05/31/2019 - 15:00

This month, one ransomware story has been making a lot of waves: the attack on Baltimore city networks. This attack has been receiving more press than normal, which could be due to the actions taken (or not taken) by the city government, as well as rumors about the ransomware infection mechanism.

Regardless, the Baltimore story inspired us to investigate other cities in the United States, identifying which have had the most detections of ransomware this year. While we did pinpoint numerous cities whose organizations had serious ransomware problems, neither Baltimore nor any of the other high-profile victims, such as Atlanta or Greenville, was among them. This follows a trend of increasing ransomware infections on organizational networks that we’ve been watching for a while now.

To curb this, we are providing our readers with a guide on how to not only avoid being hit with ransomware, but deal with the ransomware fallout. Basically, this is a guide on how not to be the next Baltimore. While many of these attacks are targeted, cybercriminals are opportunistic—if they see an organization has vulnerabilities, they will swoop in and do as much damage as they can. And ransomware is about as damaging as it gets.

Baltimore ransomware attack

As of press time, Baltimore city servers are still down. The original attack occurred on May 7, 2019, and as soon as it happened, the city shut down numerous servers on its networks to keep them secure from the possible spread of the ransomware.

The ransomware that infected Baltimore is called RobinHood. When a ransom note was discovered, it demanded a payment of $100,000, or about 13 Bitcoins. Much like other ransomware, it came with a timer, demanding that the victims pay by a certain date or the cost of recovering files would go up by $10,000 a day.

RobinHood ransom note, Courtesy Lawrence Abrams & Bleeping Computer

RobinHood ransomware is a newer malware family but has already made a name for itself infecting other city networks, as it did for the City of Greenville. According to a report from the New York Times, some malware researchers have claimed that the NSA-leaked exploit EternalBlue is involved in the infection process; however, analysis by Vitali Kremez at SentinelOne does not show any sign of EternalBlue activity. Rather, the ransomware spreads from system to system by manipulating the PsExec tool.

This is not the first cyberattack Baltimore has dealt with recently. In fact, last year its 911 dispatch systems were compromised by attackers, leaving dispatchers to conduct their work with pen and paper. Some outlets have blamed the city’s historically inefficient network design on its previous Chief Information Officers (CIOs), of whom there have been many; two of its CIOs resigned in this decade alone amid allegations of fraud and ethical violations.

Trends

Baltimore aside, ransomware aimed at organizations has been active in the United States over the course of the last six months, with periodic spurts and massive spikes that represent a new approach to corporate infection by cybercriminals.

The below heat map shows a compounding effect of ransomware detections in organizations across the country from the beginning of 2019 to now.

A heat map of ransomware detections in organizations from January 2019 to present day

Primary areas of heavy detection include regions around larger cities, for example, Los Angeles and New York, but we also see heavy detections in less populated areas as well. The below diagram further illustrates this trend: Color depth represents the overall detection amount for the state, while the size of the red circles represents the number of detections for various cities. The deeper the color, the more detections the state contains. The larger the circle, the higher number of detections in the city.

US map of overall state and city detections of organization-focused ransomware in 2019

When we take an even deeper look and identify the top 10 cities in 2019 (so far) with heavy ransomware detections, we see that none of them are cities we’ve read about in the news recently. This trend supports the theory that you don’t need to be surrounded by ransomware victims to become one.

Wherever ransomware shows up, it is going to take advantage of weak infrastructure, configuration issues, and unaware users to break into the network. Ransomware is becoming a more common weapon deployed against businesses than it was in years past. The below chart expresses the massive spike of ransomware detections we saw earlier in the year.

January and February are shining examples of the heavy push we saw from families like Troldesh earlier in the year. However, while it may seem that ransomware died off after March, we think of it more as the criminals taking a breather. When we dig into weekly trends, we can see specific spikes due to heavy detections of particular ransomware families.

Unlike what we’ve observed in the past with consumer-focused ransomware, where a wide net was cast and we saw a near-constant flood of detections, ransomware focused on the corporate world attacks in short pulses. This may be because certain time frames are best for attacking organizations, or because of the time required to plan an attack against corporate users, which calls for collecting corporate emails and contact info before launching.

Regardless, ransomware activity in 2019 has already hit a record number, and while we have only seen a few spikes in the last couple of months, you can consider these road bumps between two big walls. We just haven’t hit the second wall yet.

Observations

Despite an increase in ransomware targeting organizational networks, city networks that have been impacted by ransomware do not show up on our list of top infected cities. This leads us to believe that ransomware attacks on city infrastructure, like what we are seeing in Baltimore, do not occur because of widespread outbreaks, but rather are targeted and opportunistic.

In fact, most of these attacks are due to vulnerabilities, gaps in operational security, and overall weak infrastructure discovered and exploited by cybercriminals. They often gain a foothold in the organization by ensnaring employees in phishing campaigns that infect endpoints, or by launching spear phishing campaigns against high-profile targets in the organization.

Real spear phishing email (Courtesy of Lehigh University lts.lehigh.edu)

There is also always a case to be made about misconfigurations, slow updating or patching, and even insider threats being the cause of some of these attacks. Security researchers and city officials still do not have a concrete answer for how RobinHood infected Baltimore systems in the first place.

Avoidance

There are multiple answers to the question, “How do I beat ransomware?” and unfortunately, none of them apply 100 percent of the time. Cybercriminals spent the better part of 2018 experimenting with novel methods of breaking through defenses with ransomware, and it looks like they’re putting those experiments to the test in 2019. Even if organizations follow “all the rules,” there are always new opportunities for infection. However, there are ways to get ahead of the game and avoid worst-case scenarios. Here are four areas to consider when planning for ransomware attacks:

Patches

While we did say that EternalBlue likely did not play a part in the spread of RobinHood ransomware, it has been used by other ransomware and malware families in the past. To this end, patching systems is becoming more important every day, because developers aren’t just fixing usability bugs or adding new features; they are closing holes that can be exploited by the bad guys.

While patching quickly is not always possible on an enterprise network, identifying which patches are required to avoid a potential disaster and deploying them within a limited scope (that is, to systems that are most vulnerable or contain high-priority data) is necessary. In most cases, patches should be inventoried and audited, regardless of whether they can be rolled out across the org or not.

Upgrades

For the last seven or so years, many software developers, including those of operating systems, have created tools to help fight cybercrime within their own products. These tools are often not offered as an update to existing software, but are included in upgraded versions. Windows 10, for example, has anti-malware capabilities built into the operating system, making it a more difficult target for cybercriminals than Windows XP or Windows 7. Look to see which software and systems are nearing the end of their support life cycle; if they have been phased out of support, it’s a good idea to upgrade altogether.

In addition to operating systems, it’s important to at least consider and test an upgrade of other resources on the network. This includes various enterprise-grade tools, such as collaboration and communication platforms, cloud services, and in some cases hardware.

Email

Today, email attacks are the most common method of spreading malware, using either widespread phishing attacks that dupe whomever they can, or specially-crafted spear phishing attacks, where a particular target is fooled.

Therefore, there are three areas that organizations can focus on when it comes to avoiding ransomware infections, or any malware for that matter. This includes email protection tools, user education and security awareness training, and post email execution blocking.

There are numerous tools that provide additional security and potential threat identification for email servers. These tools reduce the number of potential attack emails your employees receive; however, they may slow down email sending and receiving because they check all the mail coming in and out of a network.

User education involves teaching your users what a phishing attack looks like. Employees should be able to identify a threat based on appearance rather than functionality and, at the least, know what to do if they encounter such an email. Instruct users to forward suspicious emails to the in-house security or IT teams to investigate further.

Finally, using endpoint security software will block many attempts at infection via email, even if the user ends up opening a malicious attachment. The most effective endpoint solution should include technology that blocks exploits and malicious scripts, as well as real-time protection against malicious websites. While some ransomware families have decryptors available that help organizations retrieve their files, remediation of successful ransomware attacks rarely returns lost data.

Following the tips above will provide a better layer of defense against the primary methods of infection today, and can empower your organization to repel cyberattacks beyond ransomware.

Preparation

Being able to avoid infection in the first place is obviously preferable for organizations, however, as mentioned before, many threat actors develop novel attack vectors to penetrate enterprise defenses. This means that you need to not only establish protection to prevent a breach, but ready your environment for an infection that will get through.

Preparing your organization for a ransomware attack shouldn’t be treated as an “if” but a “when” if you expect it to be useful.

To that end, here are four steps for making your organization ready for “when” you experience a ransomware attack.

Step 1: Identify valuable data

Many organizations segment their data access based on need. This is called compartmentalization, and it means that no single entity within the organization can access all data. To that end, you need to compartmentalize your data, and how it’s stored, in the same spirit. The point of doing this is to keep your most valuable data (and your biggest problem if lost) segmented from systems, databases, and users who don’t need to access it on a regular basis, making it more difficult for criminals to steal or modify said data.

Customers’ personally identifiable information, intellectual property, and financial information are three types of data that should be identified and segmented from the rest of your network. What does Larry the intern need access to customer data for? Why is the secret formula for the product you sell on the same server as employee birthdays?

Step 2: Segment that data

If needed, you should roll out additional servers or databases that you can put behind additional layers of security, be it another firewall, multi-factor authentication, or just limiting how many users can have access. This is where the data identified in the previous step is going to live. 

Depending on your operational needs, some of this data might need to be accessed more than others and, in that case, you’ve got to set up your security to account for it, otherwise you might hurt operational efficiency beyond the point where the risk is worth the reward.

Some general tips on segmenting data:

  • Keep the system with this data far away from the open Internet
  • Require additional login requirements, like a VPN or multi-factor authentication to access the data
  • Keep a list of which users have access to data on which systems. If a system is somehow breached, that is where you start.
  • If you have the time and resources, roll out a server with minimal protection, add data that looks legitimate but is actually bogus, and make sure it’s vulnerable and easy for an attacker to identify. In some cases, criminals will take the low-hanging fruit and leave, ensuring your actual valuable data remains untouched.

Step 3: Data backup

Now your data has been segmented based on how important it is, and it’s sitting behind a greater layer of security than before. The next step is to once again identify and prioritize important data to determine how much of it can be backed up (hopefully all the important data, if not all the company data).  There are some things to consider when deciding on which tools to use to establish a secure backup:

  • Does this data need to be frequently updated?
  • Does this data need to remain in my physical security?
  • How quickly do I need to be able to back up my data?
  • How easy should it be to access my backups?

When you can answer these questions, you’ll be able to determine which type of long-term storage solution you need. There are three options: online, local, and offsite.
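
As a rough illustration, the answers to the questions above could be mapped onto the three options like this. The mapping is our own simplification for illustration, not a prescription; real decisions will weigh more factors.

```python
def backup_option(frequent_updates, must_stay_onsite, fast_restore_needed):
    # Hypothetical decision helper mapping the four questions above to
    # the three storage options discussed below (illustrative only)
    if must_stay_onsite:
        return "local"
    if frequent_updates or fast_restore_needed:
        return "online"
    return "offsite"
```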

Online

Using an online backup solution is likely going to be the fastest and easiest option for your employees and/or IT staff. You can access it from anywhere, use multi-factor authentication, and rest easy knowing it’s secured by people who secure data for a living. Backing up can be quick and painless with this method; however, the data is outside of the organization’s physical control, and if the backup service is breached, your data might be compromised.

Overall, online backup solutions are likely going to be the best option for most organizations, because of how easy they are to set up and utilize.

Local

Perhaps your organization requires local storage backups. This process can range from incredibly annoying and difficult to super easy and insecure.

Local storage allows you to store offline, yet onsite, maintaining a physical security presence. However, you are limited by your staff, resources, and space on how you can establish a backup operation locally. In addition, operational data that needs to be used daily may not be a candidate for this type of backup method.

Offsite

Our last option is storing data on removable hard drives or tapes and then having them stored in an offsite location. This might be preferable if data is especially sensitive and needs to be kept away from the location at which it was created or used. Offsite storage will ensure that your data is safe if the building explodes or is raided, but the process can be slow and tedious. You also are unlikely to use this method for operational data that requires regular access and backups.

Offsite backups are only needed when storing extremely sensitive information, such as government secrets, or when the data needs to be maintained and kept for records but regular access isn’t required.

Step 4: Create an isolation plan

Our last step in preparing your organization for a ransomware attack is to know exactly how you will isolate an infected system. The speed and method in which you do this could save the entire organization’s data from an actively-spreading ransomware infection.

A good isolation plan takes into consideration as many factors as possible:

  • Which systems can be isolated quickly, and which need more time (e.g., endpoints vs. servers)?
  • Can you isolate the system locally or remotely?
  • Do you have physical access?
  • How quickly can you isolate systems connected to the infected one?

Ask yourself these questions about every system in your network. If the answer to how quickly you can isolate a system is “not fast enough,” then it’s time to consider reconfiguring your network to speed up the process.

Luckily, there are tools that provide network administrators with the ability to remotely isolate a system once an infection is detected. Investing time and resources into ensuring you have an effective plan for protecting the other systems on your network is paramount with the type of threats we see today.
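Under the hood, many such isolation tools simply push a host-based firewall policy that blocks all traffic while leaving the machine powered on, preserving memory and disk artifacts for forensics. A minimal sketch of that idea, assuming a Windows endpoint and the built-in `netsh` utility (the function names here are hypothetical, not from any particular product):

```python
import subprocess

def isolation_commands():
    """Firewall commands that cut a Windows endpoint off the network
    while keeping it powered on, so forensic artifacts survive."""
    return [
        # Block all inbound and outbound traffic on every firewall profile.
        ["netsh", "advfirewall", "set", "allprofiles",
         "firewallpolicy", "blockinbound,blockoutbound"],
    ]

def isolate_host(dry_run=True):
    """Apply the isolation policy; with dry_run, just return the plan."""
    cmds = isolation_commands()
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

Rehearsing the dry-run version of a script like this against each class of system in your inventory is one concrete way to answer the “not fast enough” question above.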

Ransomware resilience

As we’ve covered, there has been a bumpy increase in organization-focused ransomware in 2019, and we expect to see more spikes in the months to come, but not necessarily in the cities you might expect. The reality is that the big headline cities hit by ransomware make up only a few of the hundreds of ransomware attacks that occur every single day against organizations across the country.

Cybercriminals will not obey the rules for how to conduct attacks. In fact, they are constantly looking for new opportunities, especially in places security teams are not actively covering. Therefore, spending all your resources on avoidance measures is going to leave your organization in a bad place. 

Taking the time to establish a plan for when you do get attacked, and building your networks, policies, and culture around that concept of resilience will prevent your organization from becoming another headline.

The post Ransomware isn’t just a big city problem appeared first on Malwarebytes Labs.

Categories: Techie Feeds

NIST’s privacy framework lets privacy tell its own story

Malwarebytes - Wed, 05/29/2019 - 18:51

Online privacy remains unsolved. Congress prods at it, some companies fumble with it (while a small handful excel), and the public demands it. But one government agency is trying to bring everyone together to fix it.

As the Senate sits on no fewer than four data privacy bills that their own members wrote—with no plans to vote on any—and as the world’s largest social media company braces for an anticipated multibillion-dollar privacy blunder, the US National Institute of Standards and Technology (NIST) has published what it calls a “privacy framework” draft.

Non-binding, unenforceable, and entirely voluntary to adopt, the NIST privacy framework draft serves mainly as a roadmap. Any and all companies, organizations, startups, and agencies can look to it for advice in managing the privacy risks of their users.

The framework draft offers dozens of actions that a company can take on to investigate, mitigate, and communicate its privacy risks, both to users and to executives within the company. Nearly no operational stone is left unturned.

Have a series of third-party vendors in a large supply chain? The NIST framework has a couple of ideas on how to secure that. What about countless employees with just as many logins and passwords? The framework considers that, too. Ever pondered the enormous meaning of “data security” for your company? The NIST framework has a couple of entry points for how to protect data at rest and in transit.

Though wrought in government-speak and at times indecipherable nomenclature (suggested company actions are called “subcategories”), the 37-page privacy framework, according to one of its authors, has a simple and equally elegant purpose: It could finally let privacy tell its own story.

“To date, security [professionals] are telling a dramatic story. ‘We had these threats. Look what happened to these companies here,’” said NIST Senior Privacy Policy Advisor Naomi Lefkovitz. “But privacy [professionals] are over here saying ‘Privacy is a very important value,’ which is true, but it’s not quite as compelling when resources are being allocated.”

Lefkovitz continued: “We want privacy to be able to tell an equally compelling story.”

If successful, the NIST privacy framework could improve user privacy within organizations across the United States. It could better equip privacy officers to convince their companies to bulk up internal controls. And it could create an agreed-upon direction for privacy.

There are, of course, obstacles. A voluntary framework is only as successful as it is attractive—overly ambitious guidelines could turn the framework into a dud, tossed aside by the companies that handle the most user data.

Also, the framework should work in coordination with current data protection laws, rather than trying to overwrite those laws’ requirements. For example, as companies have built up their internal controls to comply with the European Union’s sweeping data protection law, the General Data Protection Regulation, a new approach to privacy could be seen as time-consuming, costly, and unnecessary.

Despite the potential roadblocks, NIST has been here before. Six years ago, the government agency was tasked with making a separate framework—one for cybersecurity.

The NIST cybersecurity framework

In 2013, through Executive Order 13636, President Barack Obama asked NIST to develop a strategy on securing the nation’s critical infrastructure from cyberattacks. This strategy, or framework, would include “standards, methodologies, procedures, and processes that align policy, business, and technological approaches to address cyber risks.” It would be voluntary, flexible, repeatable, and cost-effective for organizations to take on.

On February 12, 2014, NIST published the first version of its cybersecurity framework. The framework’s so-called “core” includes five functions that a company can take on to manage cybersecurity risks. Those functions are:

  • Identify
  • Protect
  • Detect
  • Respond
  • Recover

Each function includes “categories” and “subcategories,” the latter of which are actually outcomes that a company can try to achieve. It may sound confusing, but the framework simply organizes potential cybersecurity goals based on their purpose, whether that means identifying cybersecurity risks, protecting against those risks, detecting problems when they arise, or responding and recovering from them later on.
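In code terms, the core is just a small hierarchy: functions contain categories, which in turn contain outcome-oriented subcategories. An illustrative, heavily abbreviated sketch (the real framework defines far more categories and many subcategories under each, which are omitted here):

```python
# A tiny slice of the NIST cybersecurity framework core as nested data.
framework_core = {
    "Identify": ["Asset Management", "Risk Assessment"],
    "Protect":  ["Access Control", "Data Security"],
    "Detect":   ["Anomalies and Events", "Continuous Monitoring"],
    "Respond":  ["Response Planning", "Mitigation"],
    "Recover":  ["Recovery Planning", "Improvements"],
}

def categories_for(function):
    """Look up the categories organized under one core function."""
    return framework_core.get(function, [])
```

Seen this way, the framework is less a checklist than a directory: a company picks the function matching its current goal, then drills down to the outcomes beneath it.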

Several years, multiple workshops, more than 120 submitted comments, and one major update later, the framework has proved largely popular.

According to annual surveys of cybersecurity professionals by the Information Systems Security Association and Enterprise Strategy Group, the NIST cybersecurity framework has taken hold. In 2018, 46 percent of the survey’s 267 respondents said that they had “adopted some portions or all of the NIST cybersecurity framework” in the past two years. That same response showed up as a top five cybersecurity measure in 2017 and 2016.

In April 2018, when NIST released the cybersecurity framework’s Version 1.1 update, the US Chamber of Commerce, the Business Roundtable, and the Information Technology Industry Council all spoke in favor, with the Chamber of Commerce calling the framework “a pillar for managing enterprise cyber risks and threats.”

For NIST, the challenge will be translating these successes to privacy.

“Privacy is, if anything, more contextual than security, and therefore, it makes it very difficult to make one-size-fits-all rules and expect to get effective privacy solutions,” said Lefkovitz. “You can certainly get a checklist of solutions, but that doesn’t mean you’re providing any privacy benefits.”

The NIST privacy framework

The NIST privacy framework draft, published last month after a 48-day open comment period, is modeled closely on NIST’s cybersecurity framework. The privacy framework, just like the cybersecurity framework, has a core that includes five functions, each with its own categories and subcategories, the latter of which, again, actually describe outcomes. The privacy framework’s five core functions are:

  • Identify
  • Protect
  • Control
  • Inform
  • Respond

Again, companies can voluntarily use the framework as a tool, choosing the areas of privacy risk management where they need support.

For example, a company that wants to identify the privacy risks to its users can explore its inventory and mapping processes, supply chain risk management, and governance, which covers a company’s policies and regulatory and legal requirements. A company that wants to protect against privacy risks can look at achieving a number of outcomes, including ensuring that both remote access and physical access to data and devices are managed. Companies could also, for example, make sure that data is destroyed according to company policy.
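An outcome like “data is destroyed according to company policy” ultimately boils down to a retention check. A minimal sketch, assuming a hypothetical one-year retention window (the policy period and function name are illustrative, not anything the framework prescribes):

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # hypothetical policy window

def records_to_destroy(records, now=None):
    """Return IDs of records whose retention window has lapsed.
    `records` is a list of (record_id, created_at) tuples."""
    now = now or datetime.now()
    return [rid for rid, created in records if now - created > RETENTION]
```

A routine like this, run on a schedule and logged, is the kind of demonstrable control the framework nudges companies toward.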

The privacy framework has been well received, but there are improvements to be made.

“I think the draft is good as a starting point,” said Amie Stepanovich, US policy manager for Access Now, a digital rights and free expression advocacy group that submitted comments to NIST about the privacy framework. “It is a draft, though.”

Stepanovich said she liked that the privacy framework draft will be revisited in the future, and that it does not try to present a “one-size-fits-all” solution to privacy. She also said that she hopes the privacy framework can dovetail with current data protection laws, and not serve as a replacement to much-needed data privacy legislation.

Stepanovich added that the privacy framework’s focus on the user represents a potentially enormous shift for privacy risk management for many companies. Currently, Stepanovich said, privacy risk operates on three levers—legal liability risks, public relations risks, and future regulatory risks. Basically, companies calculate their privacy risk based on whether they’ll face a lawsuit, look bad in the newspaper, or look so bad in front of Congress that an entirely new law is crafted to rein them in.

The focus on the user, Stepanovich said, could meaningfully communicate to the public that their data is being protected in an entirely new way.

“The trust that people can have in companies—or data processors—will not come from legal compliance, because nobody says ‘Trust me, I do exactly what I have to do to not be sued,’” Stepanovich said. “If [data processors] are going beyond what needs to be done to serve interests of people who may be put at risk through their behavior, that starts to look like something people will pay attention to.”

But going above and beyond the current legal compliance landscape could actually be a roadblock for some companies.

When NIST opened its email box up for public comments, one major lobbying group suggested a list of “minimum attributes” to be included. The Internet Association, which represents the public policy interests of Google, Facebook, Uber, Airbnb, Amazon, and Twitter, just to name a few, asked that the framework have “compatibility with other privacy approaches.”

For many of the group’s represented companies, legal compliance is part of their privacy approach, and NIST’s privacy framework draft proposes a few outcomes that do not entirely line up with current legal requirements in the US.

For example, the privacy framework suggests that companies could structure their data management to “protect individuals’ privacy and increase manageability.” Some of the ways to do that, the privacy framework suggests, are by giving users the control to access, alter, and delete the data stored about them.

But a company that adheres to those suggestions could potentially face questions about how to fulfill certain government requests in which US intelligence agencies demand a user’s online messages or activity as part of an investigation.

Another “minimum attribute” proposed by the Internet Association is also missing from the draft: “Common and Accessible Language.”

A similar matter proved a pain point for Stepanovich, who is not associated with the Internet Association.

“This is not a draft document that people can easily understand,” Stepanovich said. She compared the privacy framework draft to, somewhat surprisingly, the hit ABC drama “Lost,” a circuitous six-season television show that included a disappearing island, time travel, and storytelling techniques such as flashbacks, flash-forwards and, remarkably, what can only be described as “flash-sideways” moments into a parallel, maybe-Heaven dimension.

“This is the ‘Lost’ problem,” Stepanovich said. “’Lost’ lost viewers every season because you couldn’t start watching it in season three and have any clue—it required watching every episode, and it kept getting more complicated, providing no entry point.”

TV analogies aside, Stepanovich’s bigger point is this: With no entry point for non-techies, the individuals who could be most impacted by this privacy framework will miss out on the opportunity to shape it.

“It shouldn’t just be cybersecurity, those who focus on tech, because tech is not necessarily the most at-risk community here. LGBT [individuals], civil rights [defenders], immigrants—populations who have a higher stake in the privacy conversation,” Stepanovich said. “If it is too difficult for us to understand, it is impossible for those groups to get in there and have the resources to devote to this issue. They need to be there.”

Beyond the draft

NIST’s privacy framework draft is just that, a draft. The agency scheduled a webinar for May 28 and a public workshop in Boise, Idaho, on July 8 and 9. Registration is free. A preliminary draft is expected in the summer, with Version 1.0 to be published in October.

Until then, everyone is invited to share their thoughts with NIST about what they expect to see from the privacy framework. We at Malwarebytes know you care about privacy—you’ve told us before. Feel free to tell your story about privacy. It could help shape the topic’s future.

The post NIST’s privacy framework lets privacy tell its own story appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Everything you need to know about ATM attacks and fraud: Part 1

Malwarebytes - Wed, 05/29/2019 - 15:00

Flashback to two years ago. At exactly 12:33 a.m., a solitary ATM somewhere in Taichung City, Taiwan, spewed out 90,000 TWD (New Taiwan Dollar)—about US$2,900 today—in bank notes.

No one was cashing out money from the ATM at the time. In fact, this seemingly odd system glitch was actually a test: The culprit who successfully infiltrated one of First Commercial Bank’s London branch servers issued an instruction to a cashpoint 7,000 miles away to check if a remote-controlled heist was possible.

Within 15 hours, 41 more ATM machines in other branches in Taichung and Taipei cities doled out an accumulated 83.27 million TWD, which is worth US$2.7 million today.

And while the Taiwan police were able to successfully nab the suspects of this million-dollar heist—most of whom hailed from European countries—many unnamed criminals taking advantage of flaws and weaknesses in ATM machines (even financial systems as a whole) are still at large.

Normal ATM users cannot fight against such an attack on their banks. But with vigilance, they can help bring criminals to justice. The catalyst for the successful capture of the heisters was intel provided by citizens who, after noticing the foreign nationals acting suspiciously and gathering relevant information—in this case, a car license plate number, and a credit card one of the suspects accidentally left behind—reported it to law enforcement.

Why ATMs are vulnerable

Just like any computing device, ATMs have vulnerabilities. And one way to understand why bad guys are drawn to them is to understand their components and the way they communicate with the bank network.

An ATM is composed of a computer (and its peripherals) and a safe. The former is enclosed in a cabinet. This is the same whether the ATM is a stand-alone kiosk or a “hole in the wall.” The cabinet itself isn’t particularly secure or sturdy, which is why criminals can use simple tools and a lock key purchasable online to break into it to gain access either to the computer or the safe.

The computer usually runs on Windows—a version specifically created for ATMs. ATM users don’t see the familiar Windows desktop interface because access is restricted. What we do see are user-facing applications that aid us in making transactions with the machine.

A rough list of attacks on ATMs (Source: Positive Technologies)

Up to now, many ATMs have still been running on Windows XP. If the OS is outdated, expect that the software in them may need some upgrading as well. That means criminals can take their pick of which exploits to use to make the most of software vulnerabilities and take control of the remote system.

In some cases, interfaces like a publicly visible USB port can encourage users with ill intent to introduce malware to the machine via a portable device. Since security software may not be installed on ATMs, and there is a significant absence of authentication between peripherals and the OS, there is an increased likelihood of infection. To date, 20 strains of ATM malware have already been discovered.

The cash dispenser is directly attached to the safe where the cash is stored. A compromised computer can easily give criminals access to the interface between the computer and the safe to command it to dispense cash without using stolen customer card information.

The traffic between the ATM computer and the transaction processing server is usually not encrypted, making it a breeze for hackers to intercept transmitted data from customer to bank server. Worse, ATMs are notorious for poor firewall protection, which makes them susceptible to network attacks.

Other notable weaknesses are missing or misconfigured Application Control software, lack of hard drive encryption, and little-to-no protection against users accessing the Windows interface and introducing other hardware devices to the ATM.

Types of ATM attacks and scams

Hacking a bank’s server is only one of the many known ways criminals can get their hands on account holders’ card details and their hard-earned cash. Some methods are clever and tactical. Some can be crude. And others are more destructive and dangerous. Whatever the cost, we can bet that criminals will do whatever it takes to pull off a successful heist.

We have highlighted types of ATM attacks here based on their classification: terminal tampering, physical attacks, logical attacks, and social engineering.

For this post, we’ll be delving deep into the first two. (The latter two will be featured in Part 2 of this series.)

Terminal tampering

The majority of ATM fraud campaigns involve a level of physically manipulating parts of the machine or introducing devices to it to make the scheme work.

Skimming. This is a type of fraud where a skimming device, usually a tandem of a card reader (skimmer) and keypad overlay or pinhole camera, is introduced to the machine by placing it over the card slot and keypad, respectively. The more closely it resembles that of the machine’s, the better it’ll work (and less likely to draw suspicion).

The purpose of the second reader is to copy data from the card’s magnetic stripe and PIN so the criminal can make forgeries of the card.

ATM skimmer devices (Source: The Hacker News)

Of course, there are many ways to capture card and PIN data surreptitiously. They all fall under this scheme. Examples include skimming devices that tap into the ATM’s network cables, which can intercept data in transit.

Criminals can up the ante of their skimming campaign by purchasing a second-hand ATM (at a bargain price) and then rigging it to record data. These do not dispense cash. This, by far, is the most convincing method because wary account holders wouldn’t think that an entire ATM is fake. Unfortunately, no amount of card slot jiggling can save their card details from this.

Skimming devices can also be slotted at point-of-sale (POS) terminals in shops or inside gas pumps. Some skimmers are small enough to be concealed in one’s hand so that, if someone with ill intent is handed a payment card, they can quickly swipe it with their skimmer after swiping it at a POS terminal. This is a video of a former McDonald’s employee manning the drive-thru window caught doing just that.

Shimming. One may refer to shimming as an upgraded form of skimming. While it still targets cards, its focus is recording or stealing sensitive data from their embedded chips.

A paper-thin shimming device is inserted in the ATM’s card slot, where it sits between the card and the ATM’s chip reader. This way, the shimmer records data from the card chip while the machine’s chip reader is reading it. Unlike earlier skimming devices, shimmers can be virtually invisible if inserted perfectly, making them difficult to detect. However, one sign that an ATM could have a shimming device installed is a tight slot when you insert your bank card.

Data stolen from chip cards (also known as EMV cards) can be converted to magnetic stripe data, which in turn can be used to create fake-out versions of our traditional magnetic stripe cards.

If you may recall, issuers once said that EMV cards offer better protection against fraud compared to traditional bank cards.

With more users and merchants now favoring chip cards, either due to convenience or industry compliance, it was expected that criminals would eventually find a way to circumvent the chip’s security and read data from it. Regrettably, they didn’t disappoint.

Card trapping. Although not as broadly reported as other ATM attack schemes, card trapping is alive and, unfortunately, kicking. Of late, it has victimized a 17-year-old who lost her life savings and a friend of a former detective from Dundee, Scotland.

Card trapping is a method wherein criminals physically capture their target’s debit or credit card via an ATM. They do this by introducing a device, usually a Lebanese loop, that prevents the card from getting ejected once a transaction is completed. Criminals steal their target’s PIN by shoulder surfing or by using a small hidden camera similar to those used in skimming.

A card trap (Source: Police Service of Northern Ireland)

Another known card trap is called a spring trap.

Cash trapping. This is like card trapping, only criminals are after the cash their target just withdrew. A tool, such as a claw-like device, a large fork-like device that keeps the cash slot open after a withdrawal, or a “glue trap,” is introduced to the ATM cash slot to trap some or all of the cash.

Cash trapping that doesn’t involve pincer implements normally uses what is called a false ATM presenter. This is a fake ATM cash dispenser placed in front of the real one.

A false ATM presenter used to capture cash (Source: ENISA)

Physical attacks

If wit isn’t enough to pull off a successful ATM heist, brute force might. As rough, unsophisticated, and sloppy as they look, criminals have achieved some success going this route.

Crooks who’d rather be loud than quiet in their activities have opted to use explosives (solid and gas alike), lasso the machine with a chain so it can be uprooted and dragged away, ram-raid, or use a digger to dig a wall-mounted ATM out of a building. A sledgehammer, crowbar, and hammer have worked wonders, too.

This ATM theft took less than four minutes (Source: The Guardian)

For the fourth consecutive year, overall physical attacks on ATMs in Europe have increased, according to the European Association for Secure Transactions (EAST). From 2017 to 2018, there was a 16 percent increase in total losses (from US$34.6 million to US$40.2 million) and a 27 percent increase in reported incidents (from 3,584 to 4,549). The bulk of losses was due to explosive or gas attacks, followed by robbery and ram-raiding.

“The success rate for solid explosive attacks is of particular concern,” said EAST Executive Director Lachlan Gunn in the report. “Such attacks continue to spread geographically with two countries reporting them for the first time in early 2019.”

The ATM Security Working Group (ATMSWG) published a document on best practices against ATM physical attacks [PDF] that financial institutions, banks, and ATM merchants can refer to and use in their planning to beef up the physical security of their machines. Similarly, the ATM Industry Association (ATMIA) has a handy guide on how to prevent ATM gas and explosive attacks [PDF].

Know when you’re dealing with a tampered ATM

Financial institutions and ATM providers know that they have a long way to go to fully address fraud and theft, and they have been finding and applying ways to step up their security measures. Of course, ATM users shouldn’t let their guard down, too.

Minimize the likelihood of facing a tampered ATM—or other dangers lurking about—by reading through and following these tips:

Before visiting an ATM
  • Pick an ATM that appears safe to use. It’s well lit, passers-by can see it, and it has a CCTV camera pointed at it. Ideally, go for an indoor ATM, which can be found in bank branches, shopping malls, restaurants, convenience shops, and others. Avoid machines that have been neglected or vandalized.
  • If you find yourself in an area you’re not familiar with, try going for ATMs that meet most of the physical requirements we mentioned.
  • Limit going to ATM locations alone, especially when you’re doing it outside of normal banking hours. Your friend or relative might help if the transaction goes awry. Also, their mere presence could fend off muggers and/or strangers.
  • Check over the ATM. Look for devices that may be sticking out from behind it or from any of its peripherals. Look for false fronts (over card and money slots, the keypad, or, worse, the entire face of the machine), tiny holes where cameras could be watching, cracks, mismatched key colors, etc. Report any of these signs to the bank, then look for another ATM.
  • Lastly, spot anyone you think is loitering around your vicinity or acting suspiciously. Do not confront them. Instead, if their behavior is disturbing enough, report them to the police.
While using the ATM
  • Put away anything that will distract you while you use the ATM. Yes, we mean your phone and your Nintendo Switch. These can make you lose awareness of your surroundings, which criminals can use to their advantage.
  • Tug on the card reader and cash dispenser to make sure there are no extra devices attached.
  • It pays to cover the keypad when entering your PIN, whether you’re alone in the queue or not. You may have checked the ATM for signs of physical tampering, but it’s always possible to miss things. Also, if the person next in the queue is too close, either remind them to move further back or cover the PIN pad as much as you can.
  • Always print a copy of your ATM receipt and put it away for safety. This way, you have something to refer to and compare against your bank statement.
After withdrawing from an ATM
  • Put your card, cash, and receipt away quickly and discreetly.
  • If the ATM didn’t return your card, stay beside the machine and call your bank’s 24/7 support number to cancel the card so it won’t work if criminals try to use it. Tell others behind you in the queue that your card is stuck and that they won’t be able to use the machine until you get it out.
  • If the ATM didn’t dispense any money, record the exact date, time, and ATM location where this happened, and call the bank’s 24/7 support number. Snap a few pictures using your phone, too, and send a copy of it to yourself (either via SMS or email) so you have a digital record.
  • Also call your card issuer or bank to file a claim, explaining that the ATM you used didn’t dispense the cash. Claiming will be a lot easier and faster if you used your own bank’s ATM inside the bank’s building.
  • If this happened at a convenience store, alert an employee at once. They may have a process to move things along. They can also stop other store shoppers from using the problematic machine.
  • Regularly review your bank statement for potential withdrawals and/or card use that you didn’t do yourself. Report the fraud to your bank if you spot any.
Considering other payment options

One doesn’t always have to keep withdrawing money from the ATM. If there are ways consumers can pay for goods without using cash from an ATM, they should at least consider these options.

Many claim that using the contactless or tap-and-pay feature of your card or smartphone is an effective way to avoid ATM (and account) fraud entirely.

For Part 2 of this series, we’ll be looking at logical and social engineering attacks criminals use against ATMs and their users. Until then, be on the lookout and stay safe!

The post Everything you need to know about ATM attacks and fraud: Part 1 appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Employee education strategies that work to change behavior

Malwarebytes - Tue, 05/28/2019 - 15:25

When people make the decision to get in shape, they have to commit the time and energy to do so. Going to the gym once isn’t going to cut it. The same is true when it comes to changing the culture of an organization. In order to be effective in changing employee behavior, training needs to be on-going and relevant.

Technology is rapidly evolving. Increasingly, new solutions are able to better defend the enterprise against malicious actors from the inside and out, but tools alone cannot protect against cyberattacks.

Verizon’s 2019 Data Breach Investigations Report (DBIR) found that:

While hacking and malicious code may be the words that resonate most with people when the term “data breach” is used, there are other threat action categories that have been around much longer and are still ubiquitous. Social engineering, along with misuse, error, and physical, do not rely on the existence of cyberstuff.

In short, people matter. Employee education matters.

Taking a technological approach to securing the enterprise has started to unravel over the last decade, according to Lance Spitzner, director, research and community at SANS Institute. “The challenge we are facing is that we have always perceived cybersecurity as a technical problem. Bad guys are using technology to attack technology, so let’s focus on using technology to secure technology,” Spitzner said.

Increasingly, organizations have come to understand that we have to address the human problem also. The findings from this year’s DBIR are evidence that human behavior is a problem for enterprise security. According to the report:

  • 33 percent of data breaches included social attacks
  • 21 percent of breaches had errors as causal events
  • 15 percent of breaches were caused because of misuse by authorized users
  • 32 percent of breaches involved phishing
  • 29 percent of breaches involved the use of stolen credentials
Calling all stakeholders

Some organizations are still implementing the antiquated annual computer-based-training and wondering why their security awareness program isn’t working. Despite the security team’s understanding that they must do more, creating an effective employee education program takes buy-in from a variety of different stakeholders, said Perry Carpenter, chief evangelist and strategy officer of KnowBe4 and author of Transformational Security Awareness: What Neuroscientists, Storytellers, and Marketers Can Teach Us About Driving Secure Behaviors.

“If they are stuck in the once a year, they have to find a way to justify moving past that, so there is some selling they have to do to their executive team in order to get support for more frequent communications and more budget. It’s essentially the higher touch that they have to sell,” Carpenter said.

Even those organizations that don’t have the budget to use an outside vendor can find ways to create compelling content, which means that security teams are tasked with the burden of having to justify the need for more employee engagement.

One way to sell that need, according to Carpenter, is to leverage the psychological effect known as the decay of knowledge. “We go to something and two days later, we forget most of the content. The further away we get from it, the more irrelevant, disconnected, and invisible it becomes.”
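That decay is often modeled with an Ebbinghaus-style forgetting curve, in which retention falls off exponentially with time since training. A toy sketch (the stability parameter is a hypothetical illustration, not a measured value for security training):

```python
import math

def retention(days_elapsed, stability=7.0):
    """Ebbinghaus-style forgetting curve: the fraction of material
    retained decays exponentially with days since training.
    Higher `stability` means more memorable, reinforced content."""
    return math.exp(-days_elapsed / stability)
```

With a curve like this, retention from a single annual session is effectively zero within weeks, which is the quantitative version of Carpenter’s argument for more frequent touchpoints.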

Evidence shows that a greater frequency of security education is the first step toward creating a more engaging awareness program. “In all things that you do, you are either building strength or allowing atrophy,” Carpenter said.

Once you have the buy-in to be able to really grow the company’s security awareness program, you need to figure out how to connect with people. That’s why Carpenter is a fan of a marketing approach that uses several channels.

Given that some people learn best visually while others prefer in-person instruction, identifying which content forms are most engaging to different employees will inform the types of training needed for the program to succeed.

No more death by PowerPoint

The old computer-based training programs developed by auditors have done little to defend the enterprise against sophisticated phishing attacks. If you want people to care about security, you need to build a bridge between technology and people.

Sometimes, those who are highly technically skilled aren’t adept at communicating with people. “Traditionally, some of the biggest blockers to awareness programs were security people who believed if the content wasn’t technical that it wasn’t security,” Spitzner said.

Now, security professionals are starting to realize that employees respond differently to a variety of attack vectors, which is why Omer Taran, co-founder and CTO at CyberReady said that collecting and analyzing performance data in real time is crucial to building a better awareness education program.

“Specially designed ‘treatment plans’ should include an adjusted frequency, timely reminders, custom simulations, and training content that helps to reform this particularly susceptible group,” Taran said.

Empowering employees

In order for companies to stay a step ahead of cybercriminals, their employee education programs need to be engaging. That’s why building a security-aware culture is one of the most important steps the organization can take.

“Processes and policies are fine, but if you’re not winning hearts and minds and gaining buy-in from employees, it’s probably a non-starter. The bad guys don’t care how well-written your policies are, or even if you have any,” said Lisa Plaggemier, chief evangelist at Infosec.

It’s also important not to play the blame game. Rather, Plaggemier said, “empower employees with awareness campaigns and good quality training, delivered through a program that influences behavior.”

To make cybercrime and fraud protection key parts of your company culture, Plaggemier recommended that leaders and managers consider these tips:

  • Be an example. Leaders have the ability to shift attitudes, beliefs, and ultimately, employee behavior. If leaders are taking security shortcuts that put the company at risk, employees will not believe the company is serious about doing everything it can to keep a secure workplace.
  • Be clear. Where confusion can create a culture of reactive rather than proactive behaviors, clarity helps prioritize the work. Make it clear that protecting the business is a top priority by creating written policies and having clear processes and procedures in place.
  • Be repetitive. Repetition is key for instilling good security habits in your employees. Human beings create new habits over time by repeating their actions. Encourage employees to make out-of-the-ordinary tasks, such as calling a vendor to confirm it’s really them asking you to change their “pay to” account, become routine.
  • Be positive. Fear, uncertainty, and doubt are not good motivators. Instead, use language that empowers your employees. Make people feel like they matter in the information you share with them so that they can be better, smarter, and more confident in their choices when faced with something potentially malicious.



The post Employee education strategies that work to change behavior appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (May 20 – 26)

Malwarebytes - Mon, 05/27/2019 - 07:03

Last week on Malwarebytes Labs, we took a look at a skimmer pretending to be a payment service provider, gave an overview of what riskware is, took a deep dive into concerns about PACS leaks, and dug around in the land of “These Governments said fix it…hurry up”.

Other cybersecurity news
  • Changes inbound for Microsoft network admins: If you’re managing Windows 10 updates, you’ll need to make some tweaks to System Center Configuration Manager (Source: Microsoft)
  • AI animates static images: First we had Deepfakes, now we have the Mona Lisa’s eyes following you around the room in a more literal way than you may be accustomed to (Source: The Register)
  • Baltimore Ransomware woes: An update on how Baltimore is coping two weeks after a devastating ransomware attack (Source: New York Times)
  • Huge title insurance leak: First American Financial Corp. find themselves in the middle of a story involving unsecured documents dating back to 2003 (Source: Krebs on Security)
  • Trouble for T-Mobile: The telecommunications giant ran into an issue that allowed people in the know to potentially view customer names and account numbers freely (Source: Daley Bee)
  • Security pros on the way out: Large numbers of pros have considered quitting the field due to a lack of resources (Source: Help Net Security)
  • Party political security: Security Scorecard take a look at how robust political parties’ security is ahead of major elections (Source: Security Scorecard)
  • Canada, popular with phishers: Why is Canada a favourite for people launching fake mail campaigns? (Source: Tech Republic)
  • But is it art: Is this laptop containing some of the most notorious pieces of malware around worth one million dollars? (Source: BBC)
  • School’s out: TrickBot gives the kids an early recess as a school’s IT infrastructure can’t cope with the attack (Source: ZDNet)

Stay safe, everyone!

The post A week in security (May 20 – 26) appeared first on Malwarebytes Labs.


Medical industry struggles with PACS data leaks

Malwarebytes - Fri, 05/24/2019 - 18:05

In the medical world, sharing patient data between organizations and specialists has always been an issue. X-rays, notes, CT scans, and other related data and files have traditionally existed and been shared in their physical forms (slides, paperwork).

When a patient needed to take results of a test to another practice for a second opinion or to a specialist for a more detailed look, it would require them to get copies of the documents and physically deliver them to the receiving specialists. Even with the introduction of computers into the equation, this manual delivery in some cases still remains common practice today.

In the medical field, data isn’t stored and accessed in the same way that it is in governments and private businesses. There is no central repository for a doctor to see the history of a patient, as there would be for a police officer accessing the criminal history of a given citizen or vehicle. Because of this, even with the digitization of records, sharing data has remained a problem.

The medical industry has stayed a decade behind the rest of the modern world when it comes to information sharing and technology. Doctors took some of their first steps into the tech world by digitizing images into a format called DICOM. But even with these digital formats, it still was, and sometimes still is, necessary for a patient to bring a CD with data to another specialist for analysis.

Keeping with the tradition of staying 10 years behind, only recently has this digital data been stored and shared in an accessible way. What we see today is individual practices hosting patient medical data on private, often in-house systems called PACS (Picture Archiving and Communication System) servers. These servers are brought onto the public Internet in order to allow other “trusted parties” to instantly access the data, rather than using the old manual sharing methods.

The problem is, while the medical industry finally joined the 21st century in info-TECH, it still remains a decade behind in info-SEC, resulting in patients’ private data being exposed and ripe for the picking by hackers. This is the issue we’ll be exploring in this case study.

It’s in the setup

While there are hundreds of examples of exploitable medical devices and services that have been publicly exposed so far, I will focus in detail on one specific case involving a PACS server framework, a system with great prevalence in the industry that deserves attention because it can expose private patient data if not set up correctly.

The servers I chose to analyze are built off of a framework called Dicoogle. While the setup of Dicoogle I discovered was insecure, the framework itself is not problematic. In fact, I have respect for the developers, who have done a great job creating a way for the medical world to share data. As with any technology, security often comes down to how the individual company decides to implement it. This case is no exception.

Technical details

Let’s start with discovery and access. Generally speaking, anything that exists on the Internet can be searched for and found. A server cannot hide; as far as the Internet is concerned, it is just an IP address, nothing more. So, using Shodan and some Google search terms, it was not difficult to find a live server running Dicoogle in the wild.

The problem begins when we look at its access control. The specific server I reviewed simply allowed access to its front-end web panel: there were absolutely no IP or MAC address restrictions. There is a good argument that this database should not have been exposed to the Internet in the first place; rather, it should run on a local network accessible only by VPN. But since security was likely not considered during setup, I did not need any of the more difficult targeted reconnaissance that more secured servers require just to find the front page.

Now, we could give them the benefit of the doubt and say, “Maybe there are just so many people from all over the world legitimately needing access, so they purposely left it open but secured it in other ways.”

After we continue on to look at the remaining OPSEC fails, we can strike this “benefit of the doubt” from our minds. I should note that I did come across implementations of Dicoogle that were not susceptible and remained intact. That fact serves as confirmation that, in this case, we are indeed looking at an implementation error.

Moving on, just as a burglar trying to break into a house will not pull out his lock pick set before simply turning the door handle, we do not need to try any sophisticated hacks if the default credentials still exist in the system being audited.

Sadly, this was the case here. The server had the default creds, which are built into Dicoogle when first installed.

USERNAME: dicoogle
PASSWORD: dicoogle

This type of security fail is all too common throughout any industry.

However, our job is not yet done. I wanted to assess this setup in as many ways as possible to see if there were any other security fails. Default creds is just too lame of a bypass to stop there, and the problem is obviously easy enough to fix. So I began looking into Dicoogle’s developer documentation.

I realized that there are a number of API calls that were created for developers to build custom software interacting with Dicoogle. These APIs are either JavaScript, Python, or REST based. Although there are modules for authentication available for this server, they are not activated by default and require some setup. So, even if this target had removed the default credentials to begin with, they could be easily circumvented because all of the patient data can still be accessed via the API—without any authentication necessary.

This blood is not just on the hands of the team who set up the server; unfortunately, part of the blame also lies with Dicoogle. When you develop software, especially software that is almost guaranteed to contain sensitive data, security should be implemented by design and should not require the user to take additional actions. That being said, the majority of the blame belongs to the host of this service, as they are the ones handling clients’ sensitive data.

Getting into a bit of detail now, you can use any of the following commands via programming or REST API to access this data and circumvent authentication.

[SERVER_IP]?query=StudyDate:[20141101 TO 20141103]
Using the results from this query, the attacker can obtain individual user IDs, then perform the following call:

/dump?uid=[retrievedID]
All of the internal data and metadata from the DICOM image can be pulled.

We can access all information contained within the databases using a combination of these API calls, again, without needing any authentication.
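As a hedged sketch, the two calls above can be combined programmatically. The URL shapes below simply mirror the examples in this article and should not be read as authoritative Dicoogle API documentation:

```python
# Sketch of the unauthenticated access pattern described above. The URL
# shapes mirror this article's examples; treat the exact paths and
# parameters as assumptions, not verified Dicoogle API documentation.

def build_query_url(server_ip: str, start: str, end: str) -> str:
    """Search for studies within a StudyDate range."""
    return f"http://{server_ip}/?query=StudyDate:[{start} TO {end}]"

def build_dump_url(server_ip: str, uid: str) -> str:
    """Dump all data and metadata for a study UID returned by the search."""
    return f"http://{server_ip}/dump?uid={uid}"

# No credentials, tokens, or session cookies appear anywhere in either
# call -- which is precisely the problem. (IP and UID are illustrative.)
print(build_query_url("203.0.113.10", "20141101", "20141103"))
print(build_dump_url("203.0.113.10", "1.2.840.0000"))
```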

Black market data

“So what’s the big deal?” you might ask. “This data does not contain a credit card, and sometimes not even a Social Security number.” But we have seen that on the black market, medical data is much more valuable to criminals than a credit card, or even a Social Security number alone. We have seen listings showing medical data selling for sometimes 10 times what a credit card can go for.

So why is this type of info so valuable to criminals? What harm can criminals do with a breach of this system?

For starters, a complete patient file will contain everything from SSN to addresses, phone numbers, and all related data, making it a complete package for identity theft. These databases contain full patient data and can easily be turned around and sold on the black market. Selling to another criminal may be less money, but it is easier money. Now, aside from basic ID theft and resale, let’s talk about some more targeted and interesting use cases.

The simplest case: vandalism and ransom. In this specific case, since the hacker has access to the portal, deleting this data or holding it for ransom is definitely a possibility.

The next potential crime is more interesting and could be a lot more lucrative for criminals. As I have described in this article, medical records are stored in silos, and it is not possible for one medical professional to cross check patient data with any kind of central database. So, two scenarios emerge.

Number one is modification of patient data for tax fraud. A criminal could take individual patient records, complete with CT scan images or X-rays, and, using freely available DICOM image editors and related software, modify legitimate patient files to contain imposter information. When the imposter takes a CD to a doctor to become a new patient, the doctor will be none the wiser. It then becomes quite feasible for the imposter to claim Medicare benefits or tax refunds based on a disease they do not actually have.

Number two is even more extreme and lucrative. There have been documented cases where criminals create fake clinics, and submit this legitimate but stolen data to their own fake clinic on behalf of the compromised patient, unbeknownst to them. They then can receive the medical payouts from insurance companies without actually having a patient to work on.

Takeaways

There are three major takeaways from this research. The first is for the patient of a medical clinic. Given how much known and proven insecurity there is in the medical world, a patient who is concerned about identity theft may be wise to ask how their data is stored at any medical facility they visit. If the facility cannot give details on how that data is being safely stored, you are probably better off asking for your data the old-fashioned way: on a CD. Although this may be inconvenient in some ways, at least it will keep your identity safe.

The second takeaway is for medical clinics or practices. If you are not prepared to invest the time and money into proper security, it is irresponsible for you to offer this type of storage. Either stick to the old school patient data methods or spend the time making sure you keep your patients’ identities safe.

At the bare minimum, if you insist on rolling out your own service, keep it local to your organization and allow access only to pre-defined machines. A username and password is not enough of a security measure, let alone the default one. Alternatively, if you do not have the technical staff to properly implement PACS servers, it is best to pay for a reputable cloud-based service who has a good record and documented security practices. You should not jump into the modern information world if you are not prepared to understand the necessary constraints that go along with it.

And finally, the last takeaway is for developers. There have been enough examples over the last five years to prove that users either do not know enough or do not care enough about security. Part of the responsibility lies with you to create software that cannot easily be abused or put users in danger.

The post Medical industry struggles with PACS data leaks appeared first on Malwarebytes Labs.


Knowing when it’s worth the risk: riskware explained

Malwarebytes - Thu, 05/23/2019 - 19:22

If there’s one thing I like more than trivia quizzes, it’s quotes. Positive, inspirational, and motivational quotes. Quotes that impart a degree of ancient wisdom, or those that make you stop and consider. Reading them melts our fears, sorrows, and feelings of inadequacy away.

Some of the most inspiring quotes urge us to take risks in order to find meaning. If you don’t take risks, they say, you won’t be able to achieve remarkable things. The biggest risk, they say, is not taking a risk at all.

But when it comes to computer security, all that goes out the window. Taking risks on software you download onto your devices is not a recipe for success. Even if the programs are inherently benign, some may have features that can be used against you by those with malicious intent. No good can come of that.

What are these risky programs you’re talking about?

Did I lose you at “quotes”? That’s alright. These software programs that contain features that can easily be abused are known as riskware. They may come pre-installed on your computing device, or they may be downloaded and installed by malware.

How can something legit be a risk?

Such software is designed with powerful features so it can do what it was programmed to do. Unfortunately, those same features can be used and abused by threat actors as part of a wider attack or campaign against a target. Riskware may also contain loopholes or vulnerabilities that can be exploited by cybercriminals and the threats they develop.

For example, there are monitoring apps available in the market that private individuals, schools, and businesses use to look after their loved ones, watch what their students are doing, or check employee activities. Those with ill intent could take over these apps to stalk certain individuals or capture sensitive information via logging keystrokes.

Read: When spyware goes mainstream

Riskware can be on mobile devices, too. On Android, there are apps created with an auto-install feature that have system-level rights and come pre-installed on devices; therefore, they cannot be removed (but can be disabled). The auto-installer we detect as Android/PUP.Riskware.Autoins.Fota, however, cannot be manually deactivated. Once exploited, it can be used to secretly auto-install malware onto susceptible devices.

Note that if you install software that your anti-malware program detects as riskware, then you need only make sure your security program is updated to stay safe.

How can you tell which software is riskware?

There are varying levels of malicious intent and capabilities for all software. In fact, any program should be assumed to have potential flaws and vulnerabilities that can be exploited. However, there are criteria for determining what is considered malware vs. riskware, and which software is deemed “safe.”

Pieter Arntz, malware intelligence researcher and riskware expert, made this clear when he said that riskware can be classified based on the risks it poses to the data and devices involved.

“In my opinion, there are a few major categories of riskware, and you can split them up by type of risk they introduce,” Arntz said. “Some bring risk to the system because they introduce extra vulnerabilities, such as unlicensed Windows with updates disabled. Some bring risk to the user because having them is forbidden by law in some countries, such as hacking tools.”

Arntz continues: “Some monitor user behavior. When this is by design, a software may be labelled as riskware rather than spyware. Some bring risk to the system because they are usually accompanied by real malware, and their presence can be indicative of an infection. [And] some bring risk to the user because their use is against the Terms of Service of other software on the system, such as cracks.”

What’s the difference between riskware and PUPs?

Riskware and potentially unwanted programs (PUPs) are similar in that their mere presence could open systems up to exploitation. So, it’s no surprise that users might liken one to the other. However, there are different criteria for classifying riskware and PUPs.

Programs might be termed riskware because they put the user at risk in some way by:

  • Violating the terms of service (ToS) of other software or a user platform on the device.
  • Blocking another application or software from being updated and patched.
  • Being illegal in the user’s country.
  • Potentially being used as a backdoor for other malware.
  • Being indicative of the presence of other malware.

Whereas programs might be considered PUPs because:

  • They may have been installed without the user’s consent.
  • They may be supported by aggressive advertisements.
  • They may be bundlers or part of a bundle.
  • They may be misleading or offer a false sense of security.
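To make the distinction concrete, the two lists above can be read as a crude triage, sketched below. The flag names are invented for illustration, and real security products weigh far more signals than this:

```python
# Toy triage of the criteria listed above. Flag names are invented;
# riskware criteria are checked first, mirroring the order of the lists.

RISKWARE_FLAGS = {
    "violates_tos", "blocks_updates", "illegal_in_region",
    "usable_as_backdoor", "indicates_other_malware",
}

PUP_FLAGS = {
    "installed_without_consent", "aggressive_ads",
    "bundled", "misleading_claims",
}

def triage(flags: set) -> str:
    """Label a program based on which criteria it matches."""
    if flags & RISKWARE_FLAGS:
        return "riskware"
    if flags & PUP_FLAGS:
        return "PUP"
    return "clean"

print(triage({"violates_tos"}))               # riskware
print(triage({"bundled", "aggressive_ads"}))  # PUP
```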

Regardless of whether a program is a PUP or riskware, it’s important to critically evaluate whether the software’s usefulness and relevance outweigh the nuisance or potential risk it introduces.

Should I keep quarantined riskware or remove it?

If your anti-malware program detects and quarantines riskware, you likely have a choice whether or not to keep it. Our advice is to make a decision based on whether or not you installed the riskware yourself and then, if you did, weighing the benefits of the app against the risks outlined in the detection.

If riskware was installed without the user’s knowledge, it’s possible the software is part of an attack ensemble delivered by malware. I’d be more worried about the presence of malware in this case, and would delete the offending riskware.

If you want your anti-malware to stop detecting software you use that is classified as riskware, see if you can configure your security solution to exclude the file or whitelist it. That way, the software won’t be detected in the future. Want to know how to do this with your Malwarebytes product? Go here.

Stay safe out there!

The post Knowing when it’s worth the risk: riskware explained appeared first on Malwarebytes Labs.


Governments increasingly eye social media meltdown

Malwarebytes - Wed, 05/22/2019 - 16:10

These are trying times for social networks, with endless reports of harassment and abuse going untackled and many users leaving platforms forever. Major sites such as Facebook and Twitter do what they can, but sheer userbase volume and erroneous automated feedback leave people cold. Bugs such as location data potentially being shared when users enable it for one account alongside other accounts on the same phone are something we’ve come to expect.

Just recently, Twitter and Instagram started trying to filter out what they consider to be erroneous information about vaccines, displaying links to medical information sites amongst search results.

Elsewhere, major portals are trying to establish frameworks for hate speech, or fight the rising tide of trolls and fake news. One of Facebook’s co-founders recently had some harsh words for Zuckerberg, claiming the recent catalogue of issues make a good case for essentially breaking the site up. It’s not that long ago since the so-called Cambridge Analytica scandal came to light, which continues to reverberate about security and privacy circles.

It was always burning

Wherever you look, there’s a whole lot of firefighting going on and no real solutions on the horizon. In many ways, the sites are too big to fail, and the various alternatives that spring up never quite seem to catch on. Mastodon made a huge splash at launch and seems to have far more success at tackling abuse than the big guns, but by the same token, lots of people tried it for a month and never went back. Smaller, decentralised instances with dedicated admins/moderators went a long way toward keeping things usable, but ultimately it seems it was just too niche for people more versed in the familiar sights and sounds of Twitter.

Dogpiling from other regions focused on destabilizing the very platforms being used at any given time only adds fuel to the fire.

To summarise: not the greatest of times being had by social media portals. Bugs will come and go, and sneaky individuals will always try to game systems with spam or political propaganda. Most people would (probably?) agree that abuse is where the bulk of the issues and concerns lie. There’s nothing more frustrating than seeing people hounded on a service they use, with no tools available to fight back.

The fightback begins

Here are some of the ways sites are looking to tackle the abuse challenge before them.

1) Twitter has sanitised the way accounts interact for some time. The quality filters make it more difficult to witness drive-by abuse postings sent your way via a few changes of the settings. Assuming you don’t follow people sending you nonsense, then you’ll miss most of the barrage.

No confirmed email address? No confirmed phone number? Sporting a default profile avatar? Brand new accounts? All of these points and more will help keep the bad tweets at bay. Muting is also a lot more reliable than it used to be. A good opening salvo, but still more can be done.

Using this as a starting point, Twitter is now finding ways to combine these outliers of registration with actual user behaviour and the networks in which they operate to weed out bad elements. For example,

“We’re also looking at how accounts are connected to those that violate our rules and how they interact with each other,” Twitter executives wrote in the post. “These signals will now be considered in how we organize and present content in communal areas like conversation and search.”

This would suggest that even if, say, your account ticks enough boxes to avoid the quality filter, you may not escape the algorithm hiding your tweets if it feels you spend a fair portion of time interacting with abusers. Those abusive accounts may also be discreetly hidden from view when browsing popular hashtags in an effort to prevent them from gaming the system.

It seems Twitter needs to balance out hiding abusive messages and clever, sustained trolling versus simply removing content from plain view that one may disagree with but isn’t actually abusive. This will be quite a challenge.

2) Political shenanigans cover the full range of social media sites, coming to prominence in the 2016 US election and beyond. Once large platforms began digging into their data in this realm, they found a non-stop stream of social engineering and manipulation alongside flat out lies. 100,000 political images were shared on open WhatsApp groups in the run-up to the 2018 Brazilian elections, and more than half contained mistruths or lies.

Facebook is commissioning studies into human rights impacts on places like Myanmar due to how the platform is used there. Last year, Oxford University found evidence of high-level political manipulation on social media in 48 countries.

These are serious problems. How are they being addressed long term?

Fix it…or else?

In a word, slowly.

Lawmaker pressure has resulted in some changes at the top. People and organisations wanting to place political ads on Facebook or Google must now supply the identity behind the ad in some regions. This is to combat the dubious tactic known as “dark advertising,” where only the intended target of an ad can see it—usually with zero indication as to who made it in the first place. WhatsApp is cutting down on message forwarding to try and prevent the spread of political misinformation.

Right now, the biggest players are gearing up for the European Parliament Elections—again, with the possible threat of action hanging over them should they fail to do an acceptable job. If they don’t remove bogus accounts quickly enough, if they fail to be rigorous and timely with fact checking and bad article deletion, then regulators could turn up the pressure on the Internet giants.

Canning the spam with a banhammer

It’s not all doom and slow-moving gloom, though. It’s now more common to see platforms making regular public-facing statements about how the war on fakery is going.

Facebook recently made an announcement that they’ve had to remove:

265 Facebook and Instagram accounts, Facebook Pages, Groups and events involved in coordinated inauthentic behavior. This activity originated in Israel and focused on Nigeria, Senegal, Togo, Angola, Niger and Tunisia along with some activity in Latin America and Southeast Asia. The people behind this network used fake accounts to run Pages, disseminate their content and artificially increase engagement. They also represented themselves as locals, including local news organizations, and published allegedly leaked information about politicians.

That’s a significant amount of time sunk into one coordinated campaign. Of course, there are many others and Facebook can only do so much at a time; all the same, this is encouraging. While it may be a case of too little, too late for social media platforms as a whole to start cracking down on abusive patterns now accepted as norms, they’re finally doing something to tackle the rot. All the while, governments are paying close attention.

UKGov steps up to the plate

The Jo Cox Foundation will work alongside politicians of all parties to tackle aspects of abuse online, which can lead to catastrophic circumstances. Their paper [PDF format] is released today, and has a particularly lengthy section on social media.

It primarily looks at how people seeking to work in public office are being hammered on all sides by online abuse, and how that abuse then filters down into various online communities. It weighs up the realities of current UK lawmaking…

The posting of death threats, threats of violence, and incitement of racial hatred directed towards anyone (including Parliamentary candidates) on social media is unambiguously illegal. Many other instances of intimidation, incitement to violence and abuse carried out through social media are also likely to be illegal.

…with the reality of the message volume people in public-facing roles are left with:

Some MPs receive an average of 10,000 messages per day

Where do you begin with something like that?
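Back-of-the-envelope arithmetic shows the scale of that moderation problem:

```python
messages_per_day = 10_000

per_hour = messages_per_day / 24      # ~417 messages every hour
per_minute = per_hour / 60            # ~7 messages every minute, 24/7

# Even a cursory 10 seconds of human review per message adds up fast.
review_hours_per_day = messages_per_day * 10 / 3600

print(f"{per_minute:.1f} messages per minute")
print(f"{review_hours_per_day:.0f} person-hours of review per day")
```

At roughly 28 person-hours of review per day for a single account, no individual recipient can realistically triage this by hand, which is why the report leans on platform-level tooling.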

Fix it or else, part 2

Slap bang in the middle of multiple quoted comments from social media sites explaining how they’re tackling online abuse/trolling/political dark money campaigns, we have this:

It is clear to us that the social media companies must take more responsibility for the content posted and shared on their sites. After all, it is these companies which profit from that content. However, it is also clear that those companies cannot and should not be responsible for human pre-moderation of all of the vast amount of content uploaded to their sites.

Doesn’t sound too bad for the social media companies, right? Except they also go on to say this:

Government should bring forward legislation to shift the liability of illegal content online towards social media companies.

Make no mistake, what social media platforms want versus what they’re able to realistically achieve may be at odds with this timetable:


The battle lines, then, are set. Companies know there’s a problem, and it’s become too big to hope for some form of self-resolution. Direct, hands-on action and more investment in abuse/reporting methods, along with more employees to handle such reports, are sorely needed. At this point, if social media organisations can’t put this one to rest, then it looks as though someone else may go and do it for them. Depending on how that pans out, we may feel the aftereffects for some time to come.

The post Governments increasingly eye social media meltdown appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Skimmer acts as payment service provider via rogue iframe

Malwarebytes - Tue, 05/21/2019 - 15:38

Criminals continue to target online stores to steal payment details from unaware customers at a rapid pace. There are many different ways to go about it, from hacking the shopping site itself to compromising its supply chain.

A number of online merchants externalize the payment process to a payment service provider (PSP) for various reasons, including peace of mind that transactions will be handled securely. Since some stores will not process payments on their own site, one might think that even if they were compromised, attackers wouldn’t be able to steal customers’ credit card data.

But this isn’t always true. RiskIQ previously detailed how Magecart’s Group 4 was using an overlay technique that would search for the active payment form on the page and replace it with one prepped for skimming.

The one we are looking at today adds a bogus iframe that asks unsuspecting customers to enter their credit card information. The irony here is that the shopping site itself wouldn’t even ask for it, since visitors are normally redirected to the external PSP.

Skimmer injects its own credit card fields

Small and large online retailers alike must adhere to the Payment Card Industry Data Security Standard (PCI DSS), whose security requirements go well beyond using SSL for payment forms. Failing to do so can lead to large fines and even the cancellation of their merchant accounts.

One of the most popular e-commerce platforms, Magento, can help merchants achieve PCI compliance via its Magento Commerce cloud product, or via integrated payment gateways and hosted forms, so that sensitive data never flows through or is stored on the Magento application server itself.

During one of our web crawls, we spotted suspicious activity from a Magento site and decided to investigate further. The following image depicts two slightly different checkout pages based on the same platform, with the one on the right being the suspicious site we had identified.

On the left, the expected payment form; on the right the one with a rogue iframe.

What we notice are new fields for entering credit card data that did not exist on the left (the untampered form). By itself, this may not be out of the ordinary, since online merchants do use such forms (including iframes) as part of their checkout pages.

But there are some things that just don’t add up here. For example, right below the credit card fields is text that says, “Then you will be redirected to PayuCheckout website when you place an order.” Why would a merchant ask customers to type in their credit card details, only to redirect them to a payment provider where they must enter them again, hurting the conversion rate?

And indeed, the unsuspecting shopper will then be taken to another payment form (legitimate, this time) to re-enter their credit card details. Having to type in your information twice should be an immediate red flag; it is the kind of scenario we typically see with phishing sites as well.

The legitimate (external) payment form

At this point, we know that this e-commerce site is yet another victim that fell into the hands of one of the Magecart groups. In the following section, we look at how this attack works.


A three-step exfiltration process

The Magento site has been hacked and malicious code injected into all of its pages. However, the most important one that we are going to look at is the actual checkout page.

The crooks first load their own innocuous-looking iframe to collect the credit card data, which is then validated before being exfiltrated.

Traffic capture showing the steps involved in credit card theft

As we mentioned, injected code is present in all the PHP pages of that site, but it will only trigger if the current URL in the address bar is the shopping cart checkout page (onestepcheckout). Some extra checks (screen dimensions and presence of a web debugger) are also performed before continuing.
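A minimal Python sketch of that gating logic follows. The real snippet is JavaScript; the function name, thresholds, and URL here are illustrative, not taken from the actual skimmer.

```python
# Hypothetical reconstruction of the skimmer's gating checks described above:
# only activate on the checkout page, and try to dodge analysis environments.

def should_load_skimmer(url: str, screen_width: int, screen_height: int,
                        debugger_open: bool) -> bool:
    """Return True only when the page looks like a real victim's checkout."""
    if "onestepcheckout" not in url:
        return False   # not the shopping cart checkout page
    if screen_width < 800 or screen_height < 600:
        return False   # tiny viewport suggests a crawler or sandbox
    if debugger_open:
        return False   # web debugger detected; stay dormant
    return True

# The full skimmer script is only fetched when all conditions are met.
print(should_load_skimmer("https://shop.example/onestepcheckout/", 1920, 1080, False))
```

Only when every check passes does the snippet proceed to load the external script.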

Injected snippet that checks for certain elements before loading the full skimmer

If the right conditions are met, an external piece of JavaScript is loaded from thatispersonal[.]com, a domain registered with REGISTRAR OF DOMAIN NAMES REG.RU LLC and hosted in Russia.

It’s worth noting that directly browsing to this URL without the correct referer (one of the hacked Magento sites) will return a decoy script instead. The complete script is heavily obfuscated and creates the iframe box we saw above, harvesting credit card details at the right place on screen.

The rogue, previously non-existent credit card fields

It also loads another long and yet again obfuscated script ([hackedsite]_iframe.js) where “hackedsite” is the name of the e-commerce site that was hacked. Its job is to process, validate, and then exfiltrate the user data.

A familiar sight, with data elements to be scraped and exfiltrated

That data is sent via a POST request to the same malicious domain in a custom encoded format.

The network request that exfiltrates the stolen data

The diversity of skimmers and attacks

This particular skimmer evolved slightly over time and wasn’t always used for the rogue iframe technique. Historical scans archived on urlscan.io show some changes with obfuscation going from a hex encoded array to string manipulation using split and join methods.
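To illustrate the difference, here is a hedged Python rendition of the two obfuscation styles; the strings and delimiter are invented for the example, and the real skimmer implements this in JavaScript.

```python
# Illustrative sketch (not the actual skimmer code) of the two obfuscation
# styles mentioned above, translated into Python for readability.

# Style 1: a hex-encoded array, decoded element by element.
hex_array = ["74", "68", "61", "74"]
decoded_hex = "".join(chr(int(h, 16)) for h in hex_array)

# Style 2: string manipulation with split/join, hiding the string
# behind a junk delimiter that is stripped at runtime.
obfuscated = "tZZhZZaZZt"
decoded_split = "".join(obfuscated.split("ZZ"))

print(decoded_hex, decoded_split)  # both decode to the same string
```

Both styles yield identical plaintext at runtime; only the static appearance of the script changes, which is exactly what trips up signature-based detection.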

Criminals have many different ways of stealing data from online shoppers with web skimmers. While supply-chain attacks are the most damaging because they usually affect a larger number of stores, they are also more difficult to pull off.

Compromising vulnerable e-commerce sites via automated attacks is the most common approach. Once the skimmer is injected into the payment page, it can steal any data that is entered and immediately send it to the crooks. As we have seen in this article, even e-commerce sites that do not collect payment data themselves can be affected when the attackers inject previously non-existent credit card fields into the checkout page.

For online shoppers, this trick will be difficult to spot early on; perhaps only after being prompted for the same information a second time will they become suspicious.

While it is important for e-commerce sites to get remediated in order to prevent further theft, we know this process can be delayed for one reason or another. This is why we focus on the exfiltration gates to protect our customers in the event that they happen to be shopping on a compromised store.
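As a rough sketch of what blocking at the exfiltration gate looks like, the following Python function (hypothetical, not our actual protection code) checks outbound request URLs against the skimmer domains from this article’s IoCs:

```python
from urllib.parse import urlparse

# Hedged sketch of an "exfiltration gate" block: outbound requests whose
# host matches a known skimmer domain are flagged even if the compromised
# store itself has not yet been cleaned up. Domains are the article's IoCs
# with defanging brackets removed.
BLOCKED_HOSTS = {"thatispersonal.com", "top5value.com", "voodoo4tactical.com"}

def is_exfiltration_attempt(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Match the domain itself and any subdomain of it.
    return host in BLOCKED_HOSTS or any(host.endswith("." + b) for b in BLOCKED_HOSTS)

print(is_exfiltration_attempt("https://thatispersonal.com/collect"))
```

Blocking at this layer protects the shopper even while the store remains compromised, which is why exfiltration domains are worth tracking alongside the injected scripts.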

Indicators of Compromise (IoCs)

thatispersonal[.]com
82.146.50[.]133
top5value[.]com
212.109.222[.]250
voodoo4tactical[.]com
212.109.222[.]249

The post Skimmer acts as payment service provider via rogue iframe appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (May 13 – 19)

Malwarebytes - Mon, 05/20/2019 - 15:57

Last week, Malwarebytes Labs reviewed active and unique exploit kits targeting consumers and businesses alike, reported on a flaw in WhatsApp used to target a human rights lawyer, and wrote about an important Microsoft patch that aimed to prevent a “WannaCry level” attack. We also profiled the Dharma ransomware (aka CrySIS) and imparted four lessons from the DDoS attack, reported to the US Department of Energy, that disrupted a power utility’s operations.

Other cybersecurity news
  • Cybersecurity agencies from Canada and Saudi Arabia issued advisories about hacking groups actively exploiting Microsoft SharePoint server vulnerabilities to gain access to private business and government networks. A patch for the flaw, officially designated CVE-2019-0604, has been available since February this year. (Source: ZDNet)
  • Nefarious actors behind adware try hard to be legit—or at least look the part. A recent discovery of a pseudo-VPN called Pirate Chick VPN in an adware bundle was one of the ways they attempted to do this. However, the software is actually a Trojan that pushes malware, particularly the AZORult information stealer. (Source: Bleeping Computer)
  • SIM-swapping, the fraudulent act of convincing a mobile carrier to swap a target’s phone number over to a SIM card owned by the criminal, doubled in South Africa. This scam is used to divert incoming SMS-based tokens used in 2FA-enabled accounts. (Source: BusinessTech)
  • Ransomware attacks on US cities are on the uptick. So far, there have been 22 known attacks this year. (Source: ABC Action News)
  • Typosquatting is back on the radar, and it’s mimicking major online news websites to push out fake news or disinformation reports, according to a report from The Citizen Lab. Some of the sites copied were Politico, Bloomberg, and The Atlantic. The group behind this campaign is Endless Mayfly, an Iranian “disinformation supply chain.” (Source: The Citizen Lab)
  • No surprise here: Researchers from Charles III University of Madrid (Universidad Carlos III de Madrid) and Stony Brook University in the US found that Android smartphones are riddled with bloatware, which creates hidden privacy and security risks to users. (Source: Sophos’s Naked Security Blog)
  • Organizations that use the cloud to store PII are considering moving data back to on-premises storage due to cloud security concerns, according to a survey. (Source: Netwrix)
  • The Office of the Australian Information Commissioner (OAIC) recently released a report on its findings about breaches in healthcare, which remain an ongoing problem. It found that such breaches were caused mainly by human error. (Source: CRN)
  • Retail websites face billions of hacking attempts every year, according to an Akamai Technologies report. Consumers should take this as a wake-up call to stop reusing credentials across their online accounts. (Source: BizTech Magazine)
  • After the discovery of Meltdown and Spectre, security flaws found in Intel and AMD chips, researchers have uncovered another flaw that could allow attackers to eavesdrop on every piece of user data that a processor touches. Intel collectively refers to attacks against this flaw as Microarchitectural Data Sampling (MDS). (Source: Wired)

Stay safe, everyone!

The post A week in security (May 13 – 19) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

4 lessons to be learned from the DOE’s DDoS attack

Malwarebytes - Fri, 05/17/2019 - 15:59

Analysts, researchers, industry professionals, and pundits alike have all posited the dangers of the next-generation “smart grid,” particularly when it comes to cybersecurity. They warn that without the right measures in place, unscrupulous parties could essentially wreak havoc on the bulk of society by causing severe outages or worse.

It is a real possibility, but up until now, it’s been something that’s largely hypothetical in nature. In March, an unidentified power company reported a “cyber event” to the Department of Energy (DOE) that caused major disruptions in its operations. While the event did not cause a blackout or power shortage, its impact was likened to that of a major interruption, such as a severe storm, physical attack, or fuel shortage.

It’s easy to dismiss this as a one-off event, especially since there was no energy disruption to the public as a result. But, in fact, the exact opposite should be inferred from this. It’s merely the first toe over the line in a world where cyberattacks are consistently growing more dangerous, highlighting the need to understand and improve security moving forward.

What lessons can be learned from this attack, and what can hopefully be done to mitigate risk in the future?

1. Disruption comes in many forms

Almost immediately, the attack could be dismissed because it didn’t cause power outages or severe disruptions, but that’s the kind of head-in-the-sand approach that leads to vulnerability in the future. Disruptions or delays can come in many forms, especially for utility providers.

When an attack is identified, the appropriate response teams must dedicate resources to dealing with the oncoming wave. That costs valuable hours and money, and it also takes those teams away from other important tasks. A particularly nasty attack could force crews to pause or delay certain activities simply to cooperate with an investigation. That, in turn, could result in a provider losing efficiency, capabilities, or worse.

At the very least, providers that incur significant costs would need to recoup the money somehow, and that will most likely roll into pricing. It’s hard to imagine a minor cyberattack having such an impact on the market, but it’s a definite possibility.

2. Many cyberattacks are easily preventable

Sophisticated cyberattacks can cause a lot of damage, but many of them can be easily prevented with the right security in place. According to an official, the DoS event reported to the DOE happened because of a known software vulnerability for which a patch had already been published. Hitting “update” would have thwarted the attack.

There’s no further information about what, specifically, was attacked. It could have been computers or workstations, or other Internet-facing devices or network tools. Attackers could have stolen data, proprietary files, or held systems up for ransom. Whatever the damage done, it could have easily been prevented.

A recent study revealed that 87 percent of all focused attacks from January to mid-March 2018 were prevented. This was achieved through a combination of measures, the first being the adoption of breakthrough technologies.

But just as important to stopping attacks is building a strong and proactive security foundation. The latter requires vigilant maintenance of the systems and devices in question, which includes updating the tech and applying security patches for known exploits.

3. DDoS attacks should be taken seriously

Today’s DoS and DDoS attacks are different: they are more vicious, pointed, and capable. Originally, launching a DDoS attack meant sending a huge bulk of requests to an IP address to overload the related systems and lock out legitimate requests. Generally, while these attacks do come from a few different computers and sources, they use less complex request methods.

The problem with the current landscape is not just that the attacks have become more sophisticated themselves, but that there are so many more potential channels. The Mirai botnet, for example, took advantage of IoT devices such as security cameras, smart home tech, and more. In turn, this makes the scale and capability of the attack much stronger because there are so many more devices involved, and there’s so much more data flowing into the targeted systems.

A massive distributed-denial-of-service attack can take down company websites, entire networks, or, in the case of Mirai, a large swath of the Internet. For utility providers, this kind of attack could prove disastrous to operations, inundating network servers and equipment with requests and blocking out official communications.

DDoS attacks should be taken more seriously, and today’s enterprise world should be focused on preventing and protecting from them as much as any other threat. Most cloud service providers already do a great job protecting against these attacks. It becomes a real issue when hackers can take advantage of existing vulnerabilities, just as they did with the DOE event.
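As a simplified illustration of one building block behind such protections, here is a token-bucket rate limiter in Python. Real DDoS mitigation involves far more (scrubbing centers, anycast, upstream filtering), and the rates below are arbitrary.

```python
import time

# Hedged sketch: a token-bucket rate limiter, one basic mechanism for
# absorbing request floods. Requests spend tokens; tokens refill at a
# steady rate, so sustained floods get dropped while bursts within the
# bucket's capacity pass through.
class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False              # bucket empty: request dropped

bucket = TokenBucket(rate=10, capacity=5)
allowed = sum(bucket.allow() for _ in range(100))  # most of the flood is rejected
print(allowed)
```

In a flood of 100 instantaneous requests, only roughly the bucket’s capacity gets through; everything else is shed before it can exhaust the server behind it.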

4. They aren’t time-limited

In the TechCrunch report about the incident, it’s revealed that the attack caused “interruptions of electrical system operation” for a period of over 10 hours. Ten hours is a decent amount of time, and it provides a glimpse of just how prolonged these threats can be. Network layer attacks can last longer than 48 hours, application layer attacks can go on for days, and infiltration of systems and networks for spying can persist for weeks or months.

It adds another layer to the problem, beyond general security. These attacks can last for increasingly long periods of time, and when it comes to utility providers and the smart grid, that could potentially mean lengthy service disruptions.

Imagine being without power or water for over 60 days because of a sophisticated DDoS attack. While not likely, such a scenario highlights the need for backup solutions to the problem.

What, for instance, are these providers doing to ensure services are properly backed up and supported during large-scale cyberevents?

Cybersecurity should be a priority

The key takeaway here is that cybersecurity, in general, should be one of the highest priorities for all entities operating in today’s landscape, utility providers included. These attacks have grown more sophisticated, targeted, capable, and rampant.

The argument to be made isn’t necessarily that protecting from any one form of attack should be more important than others. It’s that all threats should be taken seriously, including DDoS attacks, which are growing more common. To make matters worse, there’s a much larger pool of channels and devices from which attacks can originate, and they can be carried out over long periods of time.

This increased risk poses some additional questions. Is the smart grid truly ready for primetime? Can it hope to compete against such threats? If cybersecurity is baked into its design, it has a fighting chance.

The post 4 lessons to be learned from the DOE’s DDoS attack appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Threat spotlight: CrySIS, aka Dharma ransomware, causing a crisis for businesses

Malwarebytes - Wed, 05/15/2019 - 16:02

CrySIS, aka Dharma, is a family of ransomware that has been evolving since 2016. We have noticed that this ransomware has become increasingly active lately, with detections increasing by 148 percent from February through April 2019. The uptick may be due to CrySIS’ effective use of multiple attack vectors.

Profile of the CrySIS ransomware

CrySIS/Dharma, which Malwarebytes detects as Ransom.Crysis, targets Windows systems, and this family primarily targets businesses. It uses several methods of distribution:

  • CrySIS is distributed as malicious attachments in spam emails. Specific to this family is the use of malicious attachments that use double file extensions, which under default Windows settings may appear to be non-executable, when in reality they are.
  • CrySIS can also arrive disguised as installation files for legitimate software, including AV vendors’ products. CrySIS operators offer up these harmless-looking installers for various legitimate applications as downloadable executables, distributing them through various online locations and shared networks.
  • Most of the time, CrySIS/Dharma is delivered manually in targeted attacks by exploiting leaked or weak RDP credentials. This means a human attacker is accessing the victim machines prior to the infection by brute-forcing the Windows RDP protocol on port 3389.
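To see why the double-extension trick from the first bullet works, consider this rough Python approximation of what Windows Explorer displays when “hide extensions for known file types” is enabled (the filename is a made-up example):

```python
import os

# Sketch of the double-extension lure: with "hide known extensions"
# enabled (the Windows default), only the final extension is hidden,
# so an executable named like a document looks harmless.

def displayed_name(filename: str, hide_known_extension: bool = True) -> str:
    """Approximate what Windows Explorer shows for a filename."""
    if hide_known_extension:
        stem, _ext = os.path.splitext(filename)  # strips only the last extension
        return stem
    return filename

print(displayed_name("invoice.pdf.exe"))  # shown as "invoice.pdf"
```

The user sees what appears to be a PDF, while the operating system happily executes the `.exe` on double-click.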

In a recent attack, CrySIS was delivered as a download link in a spam email. The link pointed to a password-protected, self-extracting bundle installer. The password was given to the potential victims in the email and, besides the CrySIS/Dharma executable, the installer contained an outdated removal tool issued by a well-known security vendor.

This social engineering strategy worked to bring down user defenses. Seeing a familiar security solution in the installation package tricked users into believing the downloadable was safe, and the attack was successful.

The infection

Once CrySIS has infected a system, it creates registry entries to maintain persistence and encrypts practically every file type, while skipping system and malware files. It performs the encryption routine using a strong encryption algorithm (AES-256 combined with RSA-1024 asymmetric encryption), which is applied to fixed, removable, and network drives.

Before the encryption routine, CrySIS deletes all the Windows Restore Points by running the vssadmin delete shadows /all /quiet command.
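Because many ransomware families run this same command before encrypting, watching process-creation logs for it is a cheap early-warning signal. A minimal, hypothetical detection sketch in Python:

```python
import re

# Hedged sketch: flag the shadow-copy deletion command that CrySIS (and
# many other ransomware families) run before encryption. Matching is kept
# loose: optional .exe suffix, flexible whitespace, case-insensitive.
SHADOW_DELETE = re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE)

def is_shadow_copy_deletion(command_line: str) -> bool:
    return bool(SHADOW_DELETE.search(command_line))

print(is_shadow_copy_deletion("vssadmin delete shadows /all /quiet"))
```

Alerting on this pattern will not stop the encryption by itself, but it can surface an infection minutes before files start disappearing.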

The Trojan that drops the ransomware collects the computer’s name and the number of files it encrypted in certain formats, sending them to a remote C2 server controlled by the threat actor. On some Windows versions, it also attempts to run itself with administrator privileges, thus extending the list of files that can be encrypted.

After a successful RDP-based attack, CrySIS has been observed uninstalling security software from the system before executing the ransomware payload.

The ransom

When CrySIS has completed the encryption routine, it drops a ransom note on the desktop for the victim, providing two email addresses the victim can use to contact the attackers and pay the ransom. Some variants include one of the contact email addresses in the encrypted file names.

The ransom demand is usually around 1 Bitcoin, but there have been cases where pricing appears to have been adapted to the revenue of the affected company: financially sound companies are often asked to pay a larger ransom.

Some of the older variants of CrySIS can be decrypted using free tools that have been made available through the NoMoreRansom project.

Countermeasures

While you could deploy other software to remotely operate your work computers, RDP is fundamentally a safe and easy-to-use protocol, with a client that comes pre-installed on Windows systems and clients available for other operating systems. There are a few measures you can take to make it a lot harder for attackers to gain access to your network over unauthorized RDP connections:

  • Change the RDP port so port-scanners looking for open RDP ports will miss yours. By default, the server listens on port 3389 for both TCP and UDP.
  • Or use a Remote Desktop Gateway Server, which also gives you some additional security and operational benefits like 2FA. The logs of the RDP sessions can prove especially useful when you are trying to figure out what might have happened. As these logs are not on the compromised machine, they are harder to falsify by intruders.
  • Limit access to specific IPs, if possible. There should be no need for many IPs to have RDP access.
  • There are several ways to elevate user privileges on Windows computers, even over RDP, but all of the known methods have been patched. So, as always, make sure your systems are fully up-to-date and patched to prevent privilege elevation and other exploits from being used.
  • Use an effective and easy-to-deploy backup strategy. Relying on Restore Points doesn’t qualify as such, and it is utterly useless when the ransomware deletes the restore points first, as CrySIS does.
  • Train your staff on the dangers of email attachments and downloading files from unofficial sources.
  • Finally, use a multi-layered, advanced security solution to protect your machines against ransomware attacks.

IOCs

Ransom.Crysis has been known to append these extensions for encrypted files:

.crysis, .dharma, .wallet, .java, .adobe, .viper1, .write, .bip, .zzzzz, .viper2, .arrow, .gif, .xtbl, .onion, .bip, .cezar, .combo, .cesar, .cmb, .AUF, .arena, .brrr, .btc, .cobra, .gamma, .heets, .java, .monro, .USA, .bkp, .xwx, .btc, .best, .bgtx, .boost, .heets, .waifu, .qwe, .gamma, .ETH, .bet, ta, .air, .vanss, .888, .FUNNY, .amber, .gdb, .frend, .like, .KARLS, .xxxxx, .aqva, .lock, .korea, .plomb, .tron, .NWA, .AUDIT, .com, .cccmn, .azero, .Bear, .bk666, .fire, .stun, .myjob, .ms13, .war, .carcn, .risk, .btix, .bkpx, .he, .ets, .santa, .gate, .bizer, .LOVE, .LDPR, .MERS, .bat, .qbix, .aa1, and .wal
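For quick triage, a script can flag filenames carrying these extensions. The sketch below uses only a small, hand-picked subset of the list above; a real scan would load the full set.

```python
# Hedged triage sketch: flag files whose names end in one of the
# encrypted-file extensions appended by Ransom.Crysis. Only a subset
# of the published extension list is included here for brevity.
CRYSIS_EXTENSIONS = {
    ".crysis", ".dharma", ".wallet", ".arena",
    ".arrow", ".combo", ".gamma", ".bip",
}

def looks_encrypted(filename: str) -> bool:
    name = filename.lower()
    return any(name.endswith(ext) for ext in CRYSIS_EXTENSIONS)

print(looks_encrypted("report.docx.dharma"))
```

Walking a file share with such a check gives a fast estimate of how far an infection has spread before restoration work begins.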

The following ransom note names have been found:

  • README.txt
  • HOW TO DECRYPT YOUR DATA.txt
  • Readme to restore your files.txt
  • Decryption instructions.txt
  • FILES ENCRYPTED.txt
  • Files encrypted!!.txt
  • Info.hta

Common file hashes:

  • 0aaad9fd6d9de6a189e89709e052f06b
  • bd3e58a09341d6f40bf9178940ef6603
  • 38dd369ddf045d1b9e1bfbb15a463d4c

The post Threat spotlight: CrySIS, aka Dharma ransomware, causing a crisis for businesses appeared first on Malwarebytes Labs.

Categories: Techie Feeds

WhatsApp fix goes live after targeted attack on human rights lawyer

Malwarebytes - Tue, 05/14/2019 - 16:46

If you use WhatsApp, you’ll want to update both the app and your device as soon as possible due to a freshly discovered exploit. The vulnerability was found in the Google Android, Apple iOS, and Microsoft Windows Phone builds of the app.

Unlike many mobile attacks, potential victims aren’t required to install or click on anything—they may not even be aware something malicious has taken place.

This attack came to light after Citizen Lab suspected a human rights lawyer was being targeted; observation confirmed the attempts, which were blocked by the fixes WhatsApp put in place.

We should stress these are smart, high-level attacks and not typically rolled out to target random people. No need to start panicking. Just apply fixes as required, and go about your day.

What typically happens with a mobile attack?

A large portion of mobile attacks usually involve some form of social engineering. Mobile manufacturers insist customers use their own closed ecosystem store to lessen the risk of becoming infected by something out in the wild.

For example, iPhone users can only download apps from the App Store. And Android devices have installs from third parties or unknown sources switched off by default. This means if your child ends up on a fake Angry Birds website offering up a bogus installer, they won’t be able to install the app because the device won’t allow it (unless you’ve changed the default settings).

While bad files can and do lurk on official mobile stores, ignoring unknown source installs definitely helps keep infection numbers down.

This sounds like a non-typical mobile hijack

That would definitely be the case.

The WhatsApp team worked out that a simple missed call was all it took to inject commercial spyware into the device. The call, made using WhatsApp’s voice call function, would lead to the infection being installed on the phone silently. It appears all record of the call log would be scrubbed too, so the victim wouldn’t even be aware something was amiss.

This is similar to how malware on the desktop will often delete files after the event to remain as stealthy as possible. When this happens, it can take a long time before someone realises what’s up. When they do, it’s usually too late, and the attackers have already reached their chosen objective.

What is the impact?

Whether your mobile device is used for something important or you do little beyond making calls, this exploit could do some serious damage. The spyware can scan messages and emails, alongside grabbing location data. Even if you think malware on your phone isn’t a big deal because you don’t do anything important on it, the attackers have something for everyone. Namely, the ability to turn on a phone’s microphone and camera, access photos, contacts, and more.

Given the stealthy way the attack was attempted, it’s impressive that WhatsApp caught it as quickly as they did. Engineers at Facebook have been busy sorting this one out over the weekend.

Is there an advisory?

There sure is. Named CVE-2019-3568, the advisory reads as follows:

Description: A buffer overflow vulnerability in WhatsApp VOIP stack allowed remote code execution via specially crafted series of SRTCP packets sent to a target phone number.

Affected Versions: The issue affects WhatsApp for Android prior to v2.19.134, WhatsApp Business for Android prior to v2.19.44, WhatsApp for iOS prior to v2.19.51, WhatsApp Business for iOS prior to v2.19.51, WhatsApp for Windows Phone prior to v2.18.348, and WhatsApp for Tizen prior to v2.18.15.

Last Updated: 2019-05-13
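For illustration only, here is how such a version cutoff could be checked in Python against the Android fix version from the advisory. This is a hypothetical helper, not an official tool; in practice, simply update.

```python
# Hedged sketch: compare a dotted version string against the advisory's
# Android fix version (v2.19.134). Versions below the cutoff are affected.

def parse_version(v: str) -> tuple:
    """Turn 'v2.19.134' or '2.19.134' into a comparable tuple (2, 19, 134)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_vulnerable_android(version: str, fixed: str = "2.19.134") -> bool:
    return parse_version(version) < parse_version(fixed)

print(is_vulnerable_android("2.19.100"))
```

Tuple comparison handles the dotted numbering correctly (2.19.9 sorts below 2.19.134), which naive string comparison would get wrong.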

What do we do now?

In a word, update. If your apps and devices are set to update automatically, you should be good to go. If not, go and update manually as soon as possible. As mentioned earlier, you probably shouldn’t worry about having been infected, as it seems to have been a carefully targeted attack. There’s an excellent chance you’re not on the radar.

In fact, if your updates aren’t set to automatic, your immediate concerns should be about more mundane security threats. Please consider switching to automatic and save yourself needless worries.

For more information on general mobile security, feel free to check out our guide to spotting mobile phishes, and some simple tips for good mobile hygiene. With that, plus Malwarebytes’ security apps for Android and iOS, you should be good to go.

The post WhatsApp fix goes live after targeted attack on human rights lawyer appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Exploit kits: spring 2019 review

Malwarebytes - Tue, 05/14/2019 - 15:57

Exploit kit activity remains fairly unchanged since our last winter review in terms of active distribution campaigns. But this spring edition will feature a new exploit kit and another atypical EK, in that it specifically goes after routers.

The main drivers behind these drive-by download attacks are various malvertising chains with strong geolocation filtering. This explains why some exploit kits are less visible than others.

According to our telemetry, the US is by far the country most affected by exploit kits, while Spain and South Korea are leading in Europe and Asia, respectively.

Spring 2019 overview
  • Spelevo EK
  • Fallout EK
  • Magnitude EK
  • RIG EK
  • Underminer EK
  • Router EK
Vulnerabilities

Internet Explorer’s CVE-2018-8174 and Flash Player’s CVE-2018-15982 are the most common vulnerabilities, while the older CVE-2018-4878 (Flash) is still used by some EKs.

Spelevo EK

Spelevo EK is a new exploit kit that was identified in March 2019 and features the most recent Flash exploit (CVE-2018-15982). Based on our internal tests, Spelevo’s Flash exploit will check for and avoid virtual machines before delivering its payload.

Payloads seen: PsiX Bot, IcedID

Fallout EK

Fallout EK is one of the more active exploit kits with some of the more intricate URI patterns. For a while, Fallout was loading its IE exploit via a GitHub PoC, but it eventually switched back to self-hosting.

Payloads seen: GandCrab, Raccoon Stealer, Baldr

Magnitude EK

Not a lot has changed for Magnitude EK during the past few months, as it continues to target a few Asia Pacific (APAC) countries, and exclusively drops its own Magniber ransomware.

Payload seen: Magniber ransomware

RIG EK

RIG EK is also one of the popular exploit kits enjoying a wide distribution via malvertising campaigns, such as Fobos. RIG still uses Flash’s CVE-2018-4878, which comes with its own artifacts.

Payloads seen: AZORult, Pitou, ElectrumDoSMiner

Underminer EK

Underminer EK is distinct from its counterparts for its overkill obfuscation of Internet Explorer and Flash exploits, but more importantly for its unorthodox Hidden Bee payload.

Payload seen: Hidden Bee

Router EK

Router exploit kits are not new (see DNSChanger EK), but they are quite dangerous, as they are part of drive-by attacks that alter your router’s DNS settings via cross-site request forgery (CSRF). The particular one we show here (Novidade) targets Brazilian users. The end goal is typically to redirect users to phishing websites with victims being none the wiser.

Payload seen: DNS changer

Mitigation

Malwarebytes users are protected against these exploit kits, thanks to our anti-exploit and web protection technologies. The animation below features Malwarebytes Endpoint Protection and Response, one of our business products, and shows how it blocks each of these attacks.

The post Exploit kits: spring 2019 review appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (May 6 – 12)

Malwarebytes - Mon, 05/13/2019 - 15:55

Last week on Labs, we discussed what to do when you discover a data breach, how 5G could impact cybersecurity strategy, the top six takeaways for user privacy, and vulnerabilities in financial mobile apps that put consumers and businesses at risk. In our series on vital infrastructure, we highlighted threats that target financial institutions, fintech, and cryptocurrencies.


Other cybersecurity news
  • Mozilla announced its new add-on policies, which will go into effect June 10, 2019. The emphasis is that add-ons must inform users about their intentions and may not contain obfuscated code. (Source: Mozilla)
  • The FBI, working in conjunction with authorities in multiple nations, has arrested several individuals in connection with Deep Dot Web, a website that allegedly profiteered by taking commissions on referral links to dark web markets. (Source: Gizmodo)
  • An international malvertiser was extradited from the Netherlands to face hacking charges in New Jersey. The defendant conspired to expose millions of web users to malicious advertisements designed to hack and infect victims’ computers with malware. (Source: US Department of Justice)
  • In an attempt to let users block online tracking, Google has announced two new features, Improved SameSite Cookies and Fingerprinting Protection, which will be previewed in the Chrome web browser later this year. (Source: The Hacker News)
  • A slew of high-severity flaws have been disclosed in the PrinterLogic printer management service, which could enable a remote attacker to execute code on workstations running the PrinterLogic agent. (Source: ThreatPost)
  • On Monday, May 6, accounting firm Wolters Kluwer started seeing technical anomalies in a number of their platforms and applications. After investigating, they discovered the installation of malware. As a precaution, they decided to take a broader range of platforms and applications offline. (Source: Wolters Kluwer)
  • After being hit with ransomware and with malware that deploys distributed denial-of-service (DDoS) attacks, unpatched Confluence servers are now being compromised to mine cryptocurrency. (Source: Bleeping Computer)
  • The FBI is investigating a ransomware attack on Baltimore City’s network that shut down some of the city services. (Source: CBS Baltimore)
  • The Dharma ransomware tries to divert victims' attention by using an old ESET tool. While the user is dealing with the installation of the ESET Remover, Dharma runs in the background. (Source: TechNadu)
  • The FBI and Department of Homeland Security have jointly issued a new Malware Analysis Report warning of the dangers of ELECTRICFISH, a tunneling tool used for traffic funneling and data exfiltration by a North Korean government hacking group. (Source: SCMagazine)

Stay safe, everyone!

The post A week in security (May 6 – 12) appeared first on Malwarebytes Labs.

Categories: Techie Feeds
