Techie Feeds

A week in security (December 16 – 22)

Malwarebytes - Mon, 12/23/2019 - 17:40

Last week on Malwarebytes Labs, we signalled that Mac threat detections have been on the rise in 2019, discussed how a new Consumer Online Privacy Rights Act (COPRA) would empower American users, warned that the Spelevo exploit kit debuts a new social engineering trick, and let our own Statler and Waldorf take you through a decade in cybersecurity fails: the top breaches, threats, and ‘whoopsies’ of the 2010s.

Other cybersecurity news
  • In line with our own findings, Amazon’s Ring security was found to be below par, awful even. (Source:
  • A Canadian clinical laboratory services provider has suffered a data breach that exposed sensitive information and admitted to paying the hackers to retrieve the stolen data. (Source: TechSpot)
  • 22-year-old Londoner Kerem Albayrak was sentenced after attempting to blackmail Apple by threatening to factory reset 319 million iCloud accounts and sell the users’ data. (Source: BleepingComputer)
  • Hackensack Meridian Health paid an undisclosed ransom to stop a cyberattack that disrupted the hospital owner’s computer network. (Source:
  • If you stopped at a Wawa mini mart recently, your payment card details may have been snatched. (Source: TheVerge)
  • Contractor admits planting logic bombs in his software to ensure he would get new work. (Source: ArsTechnica)
  • Frankfurt, one of the largest financial hubs in the world, had to shut down its IT network following an infection with the Emotet malware. (Source: ZDNet)
  • The Maze ransomware gang started a campaign to pressure victims into paying ransom by publicly listing successful attacks and threatening to leak data. (Source: TechTarget)
  • Every minute of every day, everywhere on the planet, dozens of companies are logging the movements of millions of people with mobile phones and storing the information in gigantic data files. (Source: The New York Times)
  • A United Kingdom national appeared today in federal court on charges related to his role in a computer hacking collective known as The Dark Overlord. (Source: Department of Justice)

Stay safe, everyone!

The post A week in security (December 16 – 22) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A decade in cybersecurity fails: the top breaches, threats, and ‘whoopsies’ of the 2010s

Malwarebytes - Thu, 12/19/2019 - 18:03

This post was co-authored by Wendy Zamora and Chris Boyd. All opinions expressed belong to your mom.

Back in the days before climate change stretched frigid winter months directly into the insta-sweat of summer, there was a saying about March: in like a lamb, out like a lion. The same might be said about the last decade in cybersecurity fails.

What kicked off with a handful of stories about niche hacks ballooned into daily splashy headlines about massive data breaches, dangerous outbreaks, and increasingly sophisticated attack campaigns. The game has truly changed, generating a multi-billion-dollar industrial complex, and inspiring millions to stock up on tinfoil hats while saving trendy rumpus room designs to their Pinterest boards.

To comment on the sweeping changes brought on by the last 10 years of hacks, breaches, privacy debates, and evolutions in malware, Malwarebytes researchers Wendy Zamora and Chris Boyd take a look at the most noteworthy, mind-blowing, and sometimes chuckle-inducing cybersecurity fails that defined the decade.

2011: Game over, PlayStation

WZ: It all started with the gamers. In my mind, gaming is nearly as genre-defining as porn when it comes to testing, adopting, and embracing early tech evolutions. The two go hand-in-hand, so to speak.

I’ll just give you a minute to wipe that last image out of your head before proceeding.

Great. So, in 2011 the world got its first glimpse of the power of a good hack to not only steal data, but also bring operations to a grinding halt. The 77 million members of the Sony PlayStation Network, including minors, had their personal data exposed to hackers. But worse for the gamers, they were locked out of their accounts for 23 days, unable to play online, purchase, or otherwise indulge in their favorite pastime.

For the sheer number of users alone, this hack is noteworthy, but more, it was a foreshadowing of the ways in which cybersecurity fails could do more than just steal information—they could disrupt lives.

2012: Mat Honan’s digital life torched

CB: PlayStation was significant for sheer cultural impact, if not actual affected numbers, given the size of recent breaches. I usually groan when looking at yearly lists of cybersecurity fails because I know 90 percent of it is going to be the same generic breach we’ve all seen a hundred times over. Yes, it’s bad that six million customer records were swiped from a web-facing database. No, it doesn’t make for interesting reading.

Instead, I’m much more interested in specific examples of personal ruination. One such example is from 2012, when technology writer Mat Honan found his entire digital world torn in half. I’d argue this is one of the most spectacular digital demolition jobs I’ve ever seen. The crooks had no interest in him, his data, or his devices. They just wanted that sweet, sweet three-character Twitter handle. If everything important to him was torched along the way? Too bad, so sad.

This guy pretty much lost everything of real, singular importance to him in the attack. All those photos of his kid as a baby? Bam, gone. Google account taken over and deleted. iPhone and iPad data erased. Anything still on his MacBook drive was locked away behind features designed to make his life more secure, like the four-digit PIN. The worst feeling in the world isn’t just the compromise; it’s knowing that those helpful systems are a gigantic pain in the backside once someone who isn’t you is in the driving seat.

Some basic actions—enabling 2FA on Gmail and making backups—would have essentially made this a non-event. Did Honan miraculously manage to get his photographs back? Sure. It was a lucky escape, and we generally don’t get that lucky. This was one of those landmark, hot-knife-through-butter cybersecurity fails. I double dare you to top it.

2013: Snowed under

WZ: Sure, sure, Honan’s digital demise uncovered many holes in security processes we previously thought were failsafe, and maybe taught Apple customer service a valuable lesson in active listening. But as you yourself noted—I don’t think anyone learned anything from it. In contrast, Edward Snowden jolted the world out of its collective ostrich pose and demonstrated how very much 1984 got it right.

Depending on which side of democracy you stand on, Snowden, a former CIA contractor-turned-whistleblower, is either a hero or a war criminal for his 2013 revelations about the extent and reach of NSA-sponsored surveillance systems set up in the aftermath of 9/11. Global telecommunications systems, Internet watch lists, international cooperation, the works. In the list of cybersecurity fails, this may be the Holy Grail.

Regardless of political stance, Snowden’s reveal was a real eye-opener for the public, and it sparked a massive worldwide debate that rages on to this day. They call it “the Snowden effect.”

Just ask anyone what’s more important to them: national security or personal privacy? Do they have “nothing to hide,” or is their right to stay off the grid of utmost importance? If you can easily answer this question and guarantee everyone in the room with you agrees, then you must be reading this from far in the future, when this list will look positively quaint in comparison to yours.

2013: Cryptolocker ransomware changes the game

CB: Okay, Snowden is a double-edged sword. On the one hand, he helped confirm that those conspiracy theorists were onto something. On the other hand, he helped confirm that those conspiracy theorists were onto something. I also wonder if the significance of his findings made that much of an impact outside the US, considering lots of folks just shrugged and carried on regardless.

If you want actual global impact on a scale you can feel, ransomware is where it’s at. Cryptolocker ransomware, specifically.

Ransomware was all fun and games until Cryptolocker came onto the scene and dashed users’ hopes by being the first widespread malware to encrypt files and hold them hostage until ransom was paid. Ransomware prior to Cryptolocker mostly relied on cheap tricks instead of encryption, but its arrival in 2013 cemented this method’s popularity forever, spawning clones and higher encryption stakes by the bucketload.

2013 again: Target hack

WZ: Okay, I will totally give you Cryptolocker. Game changer, no question. But this next breach is the quintessential lesson in “it only takes one time,” the Occam’s razor of cybersecurity fails. It also happened to be the splashiest, loudest security news of the decade (so far). Why? Because everyone loves Target. Everyone.

In 2013, Target screwed up big time. Its HVAC vendor had been hit with malware via a lowly phishing email, but the technician remained blissfully unaware of the infection, which went ahead and stole Target’s network credentials. Hey, kids! What happens when you give third parties access to your VPN without thoroughly vetting them or their equipment for threats? You get hacked.

Also, note to businesses of all sizes: Free scanners do not proactively block threats. (Yes, we know, the HVAC people were using the free version of Malwarebytes.) They detect and clean malware only when you run a scan. Had the vendor been using our real-time anti-malware technology (or any other antivirus platform with always-on protection), this attack would have been erased from history.

2014: sorry, celebs! The Sony Pictures hack

CB: Everyone may love Target in the US, but on the other side of the pond, we enjoy £1 stores where everything costs, uh, £1.50. No, I don’t understand it either. What I do understand is I’m about to up the stakes to DEFCON 1 (Is that the bad one?) with a hacking tale that truly went viral. Step forward for the second time today, Sony!

The long version of the Sony Pictures hack can be read here. The short version? A hacker group called Guardians of Peace pilfered massive amounts of data from Sony servers, and in the years that have followed, it’s now tricky to remember where conspiracy theories and documented facts cross paths. A shady North Korean conspiracy, FBI and NSA involvement, multiple unreleased movies dumped online, thinly-veiled references to terrorist acts unless The Interview was pulled from theatres, and more all happened in the space of a month.

This cybersecurity fail is the equivalent of a Fast and Furious movie where the smalltime family of car heisters somehow ends up stealing nuclear footballs and taking down Russian submarines in their spare time. Also, hurling insults at someone who starred in a film called Hackers seems like a great way to invoke the Gods of dramatic irony.

2015: not sorry, cheaters

WZ: Yikes, yeah, 2014 was not a great year to be a celebrity. Just ask the victims of The Fappening. But I’m going to pivot and mention one of the decade’s cybersecurity fails that was actually a good thing: The Ashley Madison hack.

Bringing the term “hacktivism” into the public consciousness, these do-gooders breached the database of the website dedicated to helping married people find true love by cheating on their partners. Some 32 million adulterers’ credentials and credit card information were dumped online, after which they were likely dumped by their angry spouses. There’s not much else I can say here except you guys are assholes and deserved this one. The end.

CB: Yeah, I got nothing. Those cheaters were bad and should feel bad.

2016: But her emails?

WZ: Look, everyone and their mother is going to say the DNC hack was the biggest cyber event of 2016. The Russians most certainly pinned the tail on the Democratic donkey, interfered in our elections, and overall made a right mess of things. There’s no doubt Russia’s actions cast a shadow over American democracy. But as far as global, far-reaching impact is concerned, I’ve got my eye on a different blight.

In 2016, a shady hacking group known as the Shadow Brokers started leaking NSA secrets, vulnerabilities, and exploits onto the Internet, embarrassing the agency, but more importantly, putting sophisticated tools in the hands of cybercriminals that would be employed over the remainder of the decade.

Most notably, they disclosed a group of SMB vulnerabilities and their accompanying exploits, which were later used to propagate the WannaCry infection laterally through thousands of endpoints, and which are still in use today to spread deadly Emotet and TrickBot infections in worm-like fashion.

If it weren’t for the cybersecurity fails caused by the Shadow Brokers, who knows? Threat actors might still be messing around with small potato consumer scams and identity theft. But with grown-up utilities in hand, they realized they could do a lot more damage to a lot more devices, and soon turned their greedy gaze to loftier goals.

2017: the year of the outbreak

CB: Well, super sneaky government tool thefts are all well and good, but the impact of ransomware retooling and running wild can’t be denied. In 2017, ransomware authors decided that just going after home users was becoming a little old hat, so they started targeting large organisations in a wave of outbreaks (fueled by the very exploits stolen from the NSA in 2016). Sadly for us, those organisations included many of the services we make use of on a daily basis, whose files and operations were encrypted and held up for Bitcoin ransom.

WannaCry, NotPetya, and BadRabbit were the big three ransomware epidemics of the year, but the malware made headlines time and time again as ransomware authors inched themselves into every available corner. Threat actors may have become a little less inventive during this period, but they certainly weren’t resting on their laurels.

Arguably the heaviest-hitting ransomware story of 2017 was the WannaCry attack on the NHS, as £92m vanished down the plughole. This was a seismic attack, the aftershocks of which are still felt today, spinning off into unexpected places that have taken on a life of their own.

2017: crypto fever

WZ: I could go with Equifax here, but come on, son. Another day, another breach. In 2017, it was safe to say that basically anyone who had ever been online had their information compromised. Which is why I will instead turn to the birth of a brand-new form of cybercrime: cryptomining.

Bitcoin and other cryptocurrency had always been the favored tender of the black market, as it’s anonymous and nearly impossible to trace. However, in 2017, crypto became more mainstream as a sudden, acute increase in value had even the beariest of bears opening cryptowallets and investing in super-niche altcoins. So naturally, cybercriminals being the vultures of the Internet, they found a way to capitalize on all this carrion by jacking the CPU/GPU of other users’ systems to generate coin.

Starting in late 2017, we noticed hundreds of millions of detections of a CPU-mining platform that, while itself a legitimate service, was being abused by cybercriminals to mine on users’ systems without their permission. This kicked off a landslide of cryptomining activity that spawned the creation of multi-platform cryptomining malware, drive-by mining attacks, crypto-bundlers, crypto-themed scams, cryptowallet drainers, crypto crypto cryptors, and crypto.

While cryptomining has since died down from its 2017-2018 heyday, it remains forever part of the threat landscape, and I’m sure we’ll be seeing much more of it as cryptocurrency and blockchain technology take hold in the next decade.

2018: shine’s off social media

CB: 2018 was all about the covert use of data pulling the strings in every direction you can imagine. Data mining and digital assets plus social media make for a cracking combination in the wrong hands, and it turns out Facebook was the place most of this war was fought and won (or lost, if you were on the receiving end).

Cambridge Analytica, a political consulting firm based in the UK, probably knew they’d walked into “oh, whoops” territory when their offices were raided in 2018. They’d been mucking around in multiple elections worldwide, but drew attention to themselves and Facebook after it was discovered that they’d been harvesting the personal information of 50 million Facebook users without their permission. The repercussions from this story continue to be felt today, as lawmakers now scrutinize Big Tech over data privacy policies.

2018: data privacy becomes a thing

WZ: Actually, I have to semi-agree on Cambridge Analytica. But I see your social media problems and I raise you an entire Internet of data privacy issues. In 2018, users got a rude awakening into the inner workings of the tech giants they’d come to love, rely on, and otherwise be addicted to. Wait, you’re selling my information to pharmaceutical companies? You can actually record my conversations through my digital home assistant? Suddenly, users had to be just as wary of legitimate tech companies as they were of cybercriminals.

The awareness of 2018 led to global action, as GDPR was put into effect, launching a million cookie notices and EULA rewrites. Digital data privacy had always been an issue, reaching far back to pre-Y2K years, and it will continue for many decades as we contend with biometrics and genetic data. But 2018 represented a period of public “wokeness” that forever changed the way we build, buy, regulate, and use technology.

2019: the year of the triple threat

CB: We’re too close to 2019 to be able to say conclusively what stuck and what stank, but the triple threat of Emotet, TrickBot, and Ryuk ransomware caused such massive problems across a range of critical infrastructure and business services that any 2019 listicle that doesn’t feature this attack is missing the mark. If your mailbox hasn’t detected the familiar twang of an Emotet malspam landing on the network yet, you’re doing very well indeed.

The triple threat officially saw the light of day in 2018, but it was the attack of 2019. If there was news of a city declaring a state of emergency, a school shutting down for weeks, or a hospital shelling out thousands in ransom payments, you can bet it was on account of these three devils. It’s an assault from every angle, and in an alien invasion movie, this would be the part where the hero escapes through a conveniently placed air vent.

Cybersecurity fail of the decade

All this arguing on which cybersecurity fails were most awe-inspiring, death-defying, or just plain stupid would be pointless if we didn’t wrap it up in a nice year-end bow. So, without further ado, we’ll now take our pick of the top cybersecurity fail of the decade. Drumroll please…

WZ: My vote is for the Shadow Brokers, because it set off a chain of events that allowed cybercriminals to evolve into more sophisticated, industrialized players, radically changing the threat landscape: from a bunch of kids messing around in their basements to organized criminals aimed at taking down organizations, swiping millions of users’ personal data, and making a significant profit in the process.

CB: My pick is the Mat Honan hack. It’s not as big, or as flashy, or as sophisticated as most of the attacks on display. But what happened to him pretty much still happens to people now as their first introduction to the world of “All my data is gone forever.” How they torched his digital existence and salted the earth is beyond brutal—and, most chillingly, it was nothing personal.

Which of these cybersecurity fails would you vote for? Sound off in the comments!


Spelevo exploit kit debuts new social engineering trick

Malwarebytes - Wed, 12/18/2019 - 16:00

2019 has been a busy year for exploit kits, despite the fact that they haven’t been considered a potent threat vector for years, especially on the consumer side. This time, we discovered the Spelevo exploit kit with its virtual pants down, attempting to capitalize on the popularity of adult websites to compromise more devices.

The current Chromium-dominated browser market share favors social engineering attacks and other threats that do not require the use of exploits in order to infect users. However, we continue to see malvertising campaigns pushing drive-by downloads in our telemetry. The malicious adverts are placed on tier 2 adult websites that still drive a lot of traffic.

Recently, we captured an unusual change with the Spelevo exploit kit where, after an attempt to trigger vulnerabilities in Internet Explorer and Flash Player, users were immediately redirected to a decoy adult site.

Figure 1: Exploit kit used in tandem with social engineering

Spelevo EK instructs the browser to load this site, which socially engineers victims into installing a video codec in order to play a movie. This appears to be an effort by the Spelevo EK operator to double his chances of compromising new machines.

Spelevo EK changes its redirection URL

Based on our telemetry, there are a few campaigns run by threat actors converting traffic to adult sites into malware loads. In one campaign, we saw a malvertising attack on a site that draws close to 50 million visitors a month.

Figure 2: Traffic view from EK to social engineering site

We collected two main payloads coming directly from Spelevo EK:

  • Ursnif/Gozi
  • Qbot/Qakbot

One thing Spelevo EK did a little differently from other exploit kits was redirect victims post-exploitation, typically after a 10-second delay:

Figure 3: Google redirect with 10 second delay

However, in this latest capture, we noticed that the script had been edited and that the time was increased to 60 seconds:

Figure 4: Google redirect with 60 second delay

This change is important because it allows enough time for the exploit kit to run all the way through and call the last URL in the EK framework. Here, we noticed something new as well.

Previously, the URL immediately following the payload had the following ending pattern: &00000111&11. Now, the new pattern is 32 characters followed by the letter ‘n’.
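These two observable changes (the longer refresh delay and the new URL tail) can be expressed as simple, illustrative detection heuristics. The sketch below is ours, not Malwarebytes’ actual detection logic; the exact meta refresh markup and the assumption that the 32 characters are lowercase hex are both hypothetical.

```python
import re

# Meta refresh tag, e.g. <meta http-equiv="refresh" content="60;url=https://...">
# (The exact markup Spelevo emits is an assumption here.)
REFRESH = re.compile(
    r'<meta\s+http-equiv=["\']refresh["\']\s+content=["\'](\d+);\s*url=([^"\']+)',
    re.IGNORECASE,
)

# Old post-payload URL tail was the literal "&00000111&11"; the new one is
# 32 characters followed by 'n'. The article doesn't name the alphabet, so
# lowercase hex is assumed below.
OLD_TAIL = re.compile(r"&00000111&11$")
NEW_TAIL = re.compile(r"[0-9a-f]{32}n$")

def refresh_delay(html):
    """Return (delay_seconds, target_url) from a meta refresh tag, or None."""
    m = REFRESH.search(html)
    return (int(m.group(1)), m.group(2)) if m else None

def classify_tail(url):
    """Label a URL as matching the old or new Spelevo tail pattern."""
    if OLD_TAIL.search(url):
        return "old"
    if NEW_TAIL.search(url):
        return "new"
    return None
```

With this sketch, the 10-to-60-second change would simply show up as `refresh_delay(...)` returning 60 instead of 10; real traffic analysis would of course run against captured HTTP responses rather than hardcoded markup.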

Figure 5: Redirection from EK to decoy adult site

Before the refresh tag comes into effect, the browser is redirected to a new location, which happens to be a decoy adult site.

Social engineering as backup

There is nothing special about this fake adult site, but it works really well in the context of the malvertising chain. Victims were already engaged with the content and may not even realize that an exploitation attempt just happened.

Figure 6: Fake adult site tricking users with fake video codec

This time around, the site urges users to download a file called lookatmyplayer_codec.exe. Downloading video codecs to view media used to be fairly common back in the day, but isn’t really the case anymore. Yet, this kind of trick still works quite well and is an alternative method to compromise users.

The fake codec turns out to be Qbot/Qakbot, which is also one of the payloads distributed by Spelevo EK. In other words, the threat actor has two chances to infect victims: either via the exploit kit or fake codec.

This is not the first time that exploit kit operators have included social engineering schemes. In 2017, Magnitude EK was seen pushing a fake Windows Defender notification, while Disdain EK was tricking users with a fake Flash Player update.

Malwarebytes users are protected against both the exploit kit and payloads.

Indicators of compromise (IOCs)





Decoy adult site



New Consumer Online Privacy Rights Act (COPRA) would empower American users

Malwarebytes - Tue, 12/17/2019 - 17:28

Despite the already dizzying number of comprehensive data privacy proposals before the US Senate—nearly 10 have been introduced since mid-2018—yet another bill has entered the conversation: the Consumer Online Privacy Rights Act.

This time, the bill, called COPRA for short, is sponsored by a Democratic Senator from Washington whose name has rarely been cited in the country’s ongoing debate as to how to best protect Americans’ data.

The biggest differentiator about this 2019 latecomer bill? It ticks almost every box on the data privacy wishlist.

Granting Americans the right to access data about them? This bill’s got it. The right to grab that data and move it to another company? Also included. What about the right to opt out of data sharing and selling? Yep. And the requirement that companies get explicit approval for the processing and sharing of sensitive data, including biometrics, precise geolocation, and emails? You bet.

But, perhaps most importantly, the bill would give everyday Americans the right to sue a company that violated their data privacy rights, extending enforcement capabilities directly to the public.

Introduced by Senator Maria Cantwell, the Consumer Online Privacy Rights Act has already been welcomed by data privacy advocates across the country.

“This is the most sophisticated federal proposal to emerge to date and demonstrates that Senate Democrats are committed to setting a high bar for consumer privacy,” said Jules Polonetsky, the CEO of the nonprofit Future of Privacy Forum. “The bill provides a strong starting point that will move bipartisan debate forward, with private rights of action, limits on preemption, and the definition of sensitive data, among other issues, likely to be points of ongoing negotiation.”  

Consumer Online Privacy Rights Act: in a nutshell

The Consumer Online Privacy Rights Act (COPRA) would improve the relationship that Americans currently have with the multitude of companies that collect, store, share, and sell their data across the Internet.

COPRA would accomplish this by extending new rights to consumers—like the right to access data collected about them and the right to delete that data—while also placing new restrictions on companies.

Under COPRA, companies would no longer be able to collect “sensitive covered data” without first getting explicit approval from a user. Nor would companies be able to ignore the privacy and security of their users’ data, as each company subject to COPRA would need to appoint a privacy officer and a data security officer, both of whom would be tasked with performing annual data risk assessments.

COPRA would also create a new bureau within the Federal Trade Commission to aid enforcement. Further, state Attorneys General could file civil claims on behalf of their states’ residents when they believe there has been a violation of the law.

Though some of these ideas have cropped up in federal data privacy bills introduced this year, COPRA differs in two major ways.

First, it would not impact any state data privacy laws that improve the data privacy of that state’s residents.

In 2018 and 2019, dozens of individual state legislatures took it upon themselves to try to solve data privacy, with California passing the California Consumer Privacy Act last year and Maine passing a data privacy bill focused on Internet Service Providers this year, to name just two. Similar efforts have produced laws that will either bolster or study data privacy in Nevada, Vermont, Illinois, Louisiana, and North Dakota.

Under COPRA, these laws—and new, similar ones—would go untouched.

This preservation and respect of state laws goes directly against the wishes of many of the companies that COPRA would regulate. Earlier this year, the CEOs of 50 of the largest global companies informed Congress about what a federal data privacy bill should include. High on the list was the demand that any federal bill negate, or preempt, current and future state data privacy laws.

This corporate demand is not the only one that COPRA contradicts.

COPRA would extend what is called a “private right of action” to consumers, granting them the ability to personally file a civil claim against a company to allege that the company violated their data privacy rights. The group of 50 CEOs also opposes this idea, asking that no private right of action be included in a federal data privacy law.

Until now, everyday US consumers have had limited options for exercising their own data privacy rights, instead having to rely on state Attorneys General to act on their behalf, or having to try to prove the near-unprovable when making claims about alleged data breaches.

This private right of action is, as Purism CEO Todd Weaver told Malwarebytes earlier this year, a key component in any meaningful data privacy bill.

“If you can’t sue or do anything to go after these companies that are committing these atrocities, where does that leave us?” Weaver said. 

Below is a more detailed look at COPRA’s rights and restrictions.

COPRA’s consumer rights

The Consumer Online Privacy Rights Act would create new definitions of the types of data that receive protection in the United States. “Covered data,” the bill describes, is any information that “identifies, or is linked or reasonably linkable to an individual or a consumer device, including derived data.” Not included in this definition, though, is de-identified data, employee data, and public records.

Further, COPRA would create new restrictions on what it calls “sensitive covered data.” The defined list is long, but not exhaustive, including passport numbers, Social Security numbers, information about physical and mental health, financial account usernames and passwords, biometrics, precise geolocation, communications content and metadata (meaning not just the words consumers send to one another, but also when they sent them and to which user or phone number), emails, phone numbers, and any information that reveals race, religion, sexual orientation and behavior, and union membership.

That’s not all. Also included in “sensitive covered data” are calendars and address books, photos and videos—plus any nude pictures—and online activity over time and across different third-party services.

Unfortunately, the list leaves much to be desired, said Adam Schwartz, senior staff attorney at Electronic Frontier Foundation, as it still fails to include “extraordinarily sensitive” information like immigration status, marital status, employment history, and political history.

“So COPRA’s list of sensitive data is under-inclusive,” Schwartz wrote. “In fact, any such list will be under-inclusive, as new technologies make it ever-easier to glean highly personal facts from apparently innocuous bits of data. Thus, all covered information should be free from processing and transfer, absent opt-in consent, and a few other tightly circumscribed exceptions.”

Still, with these definitions of data, COPRA offers new data privacy rights to consumers.

For “covered data,” consumers have the rights to access, delete, and correct inaccuracies, along with the right to data portability and the right to opt out of having their covered data “transferred” to other companies. That last right means that consumers could tell companies that they do not want their covered data disclosed, released, shared, disseminated, sold, or licensed to other companies.

The right to access under COPRA would allow consumers to not only obtain a copy of what covered data a company has on them, but also a list of the third parties that their data has been shared with to that point. Further, companies would have to explain why they shared a user’s covered data with a third party.

This level of information equips consumers with a better understanding of just how far their data travels in today’s data-driven economy.

Similarly, COPRA’s “right to delete” would extend to third parties. If a user requests that a company delete data collected on them, that company would also be obligated to inform the third parties with which it had shared that user’s data about the deletion request.

For “sensitive covered data,” consumers could relax, knowing that companies would not be allowed to collect any of that type of data without a user’s explicit, opt-in approval.

COPRA’s requirements for companies

As explained above, the Consumer Online Privacy Rights Act has two primary levers for accomplishing change—extending new rights to users while placing new restrictions on companies.

COPRA’s scope—the definition of the businesses it applies to—is broad, hewing exactly to the current Federal Trade Commission Act. Any entity subject to that law would also be subject to COPRA, with the exception of what COPRA defines as “small businesses.”

These are, the bill explains, businesses that do not exceed $25 million in revenue; do not process the covered data of an average of 100,000 or more individuals, households, and devices; and do not derive 50 percent or more of their annual revenue from transferring individuals’ data.

What that means is that COPRA would absolutely apply to the most common names in Big Tech—Facebook, Google, Amazon, Apple, Microsoft, Twitter, Oracle, and many more.

Under COPRA, companies would need to, for starters, post an easily-accessible privacy policy, a requirement that already applies to companies doing business in California. The privacy policy would need to include, among other things: the contact information for the company’s privacy and data security officers; the categories of data the company collects and processes, and the reasons why; and whether the company transfers data to third parties—and if so, what categories of data it transfers, for what stated purposes, and the identity of each third party that receives it.

Companies would also be subject to new duties—a “duty of loyalty,” a “duty to secure data,” and a “duty to build privacy protective systems.” Combined, the new duties would prohibit companies from engaging in deceptive or harmful data practices, along with requiring companies to name a privacy officer and a data security officer. The officers, the bill explains, would need to oversee the implementation of a comprehensive data privacy program while also performing annual data risk assessments.

Further, companies would need to commit to what is called “data minimization.” Under this rule, companies could not “process or transfer covered data beyond what is reasonably necessary, proportionate, and limited.”

Unfortunately, COPRA would allow companies to engage in certain data processing practices that consumers may personally view as invasive, so long as the company clearly lays out these practices in its stated privacy policy. This is a small misstep in the bill, according to privacy advocates, as even the most thoughtful, well-written privacy policies gain few, if any, full reads from the average consumer.

Companies should not be given the opportunity to engage in potentially invasive data processing practices simply because they bury those practices in dense language on page 100 of their privacy policies.

Separately, a few of COPRA’s rights offered to consumers actually impact companies first.

Take, for example, the consumers’ “right to data security,” which would require companies to “establish, implement, and maintain reasonable data security practices to protect the confidentiality, integrity, and accessibility of covered data.” The specific requirements of those actions include assessing vulnerabilities, disposing of data when required, training employees, and taking preventive actions to correct and mitigate vulnerabilities, which could include installing administrative, technical, and physical safeguards.

The bill’s requirement that companies post privacy policies is another example, as it falls under the consumers’ “right to transparency.”

Finally of interest, COPRA would create a new requirement for companies that have built algorithmic decision-making processes into their data processing systems. Such companies would need to perform an annual assessment if their tools are used to determine eligibility for housing, education, employment, or credit; to distribute ads in those same areas; or to control access to public accommodations. Annual assessments would need to study whether the algorithmic decision-making systems produce discriminatory results.

A contender for comprehensive change

Data privacy has undergone massive change in the past 10 years alone. For much longer than that, the US has lacked comprehensive data privacy protections for everyone, no matter which state they live in.

It’s time for that to change. With the Consumer Online Privacy Rights Act, the US Senate now has one of the firmest options to consider. COPRA would not only extend new data privacy rights to Americans, it would also give them the tools to defend those rights.

We look forward to the next year in hopes that Congress will finally, actually, enact a meaningful federal data privacy law.

The post New Consumer Online Privacy Rights Act (COPRA) would empower American users appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Mac threat detections on the rise in 2019

Malwarebytes - Mon, 12/16/2019 - 18:40

Conventional wisdom has been that, although not invulnerable to cyberthreats (as some old Apple ads would have you believe), Macs are afflicted with considerably fewer infections than Windows PCs. However, when reviewing our 2019 Mac detection telemetry, we noticed a startling upward trend. Indeed, the times, they are a-changin’.

To get a sense of how Mac malware performed against all other threats in 2019, we looked at the top detections across all platforms: Windows PCs, Macs, and Android. Of the top 25 detections, six of them were Mac threats. Overall, Mac threats accounted for more than 16 percent of total detections.

Perhaps 16 percent doesn’t sound impressive, but when you consider the number of devices on which these threats were detected, the results become extremely interesting. Although the total number of Mac threats is smaller than the total number of PC threats, so is the total number of Macs. Considering that our Mac user base is about 1/12 the size of our Windows user base, that 16 percent figure becomes more significant.

Detections per device

The most interesting statistic that emerged from our data was how many Mac detections we saw per machine in 2019. On Windows, we saw 4.2 detections per device this year. Our Mac users, on the other hand, saw 9.8 detections per device—more than double the number of detections seen by Windows users.
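As a rough sanity check, the per-device figures can be reconciled with the overall detection share and user-base ratio quoted earlier. The numbers below are illustrative approximations drawn from this article, not exact Malwarebytes telemetry:

```python
# Reconcile the ~16% Mac share of detections with the per-device figures,
# using the article's rough 1:12 Mac-to-Windows user base ratio.
# All numbers are approximations from the article, not exact telemetry.

mac_share = 0.16        # Macs' approximate share of total detections
userbase_ratio = 12     # Windows user base is ~12x the Mac user base

# Ratio of per-device detection rates implied by those two figures:
# (mac detections / mac devices) / (pc detections / pc devices)
implied_ratio = (mac_share / (1 - mac_share)) * userbase_ratio

# Ratio from the directly quoted per-device figures:
quoted_ratio = 9.8 / 4.2

print(round(implied_ratio, 2))  # ≈ 2.29
print(round(quoted_ratio, 2))   # ≈ 2.33
```

The two estimates agree to within a few percent, which is why the seemingly modest 16 percent share translates into a per-device rate more than double that of Windows.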

Of course, there are obviously biases in this data. For example, these machines are all devices with Malwarebytes installed, and many Mac users still believe antivirus software is not needed. This means the Macs represented by the data may be machines that already had some kind of suspected infection, which is why Malwarebytes was installed in the first place.

However, the same could be said for PC users, who often believe that free Windows Defender is adequate protection, but then download Malwarebytes for Windows when their computer begins demonstrating signs of infection. Still, the overall threat detection rate for all Macs (and not just those with Malwarebytes installed) is likely not as high as this data sample.

Top five global threats

For the first time ever, Mac malware broke into the top five most-detected threats in the world. In fact, Mac malware represented the second- and fifth-most detected threats.

The second-highest Malwarebytes detection of 2019 is a Mac adware family known as NewTab, clocking in at around 4 percent of our overall detections across all platforms.

NewTab is adware that uses browser extensions to modify the content of web pages. It can be found in the form of Chrome extensions, with some older versions available as outdated Safari extensions. However, due to Apple phasing out support for these older Safari extensions in favor of extensions bundled inside apps, NewTab often poses as apps, such as flight trackers, maps/navigation, email access, or tax forms.

Recently, NewTab has proliferated and is using a variety of seemingly randomly-chosen names. Although some earlier variants tricked users into downloading an app from something like a fake flight or package tracking website, more recently these have been bundled into more widely-distributed adware bundle installers.

Samples of NewTab apps

In fifth place, at 3 percent of total detections, we see a detection named PUP.PCVARK. These are a variety of potentially unwanted programs from a single developer, most of them clones of Advanced Mac Cleaner. (This app was so notorious that its site was eventually blacklisted by Google Safe Browsing, which is not something that typically happens for PUPs.)

PUP (n): abbreviation for potentially unwanted program

PUPs are programs that are generally not installed intentionally by the user, or that may use a variety of scare tactics or other unethical techniques to trick the user into installing or purchasing.

Growing Mac threat

If we delve further into our data, we see that Mac detections primarily consist of adware and PUPs. Traditional, “full” malware does exist for the Mac, of course, but it tends to be more targeted or otherwise limited in scope. For example, the Mokes and Wirenet malware targeted Mac users through a Firefox vulnerability this year, but only users at certain cryptocurrency companies were targeted, so infections were not widespread.

We’ve known for a long time that the “Macs don’t get viruses” old wives’ tale was completely wrong. As time goes on, though, we’re seeing that Macs are increasingly popular targets, and the bad guys are ramping up their efforts to get a piece of the Mac market. If you use a Mac, stay alert, use antivirus software, and don’t allow yourself to be lulled into a false sense of security.

The post Mac threat detections on the rise in 2019 appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (December 9 – 15)

Malwarebytes - Mon, 12/16/2019 - 17:08

Last week on Malwarebytes Labs, we cautioned readers against purchasing potentially privacy-invasive, cyber-insecure smart doorbells, warned about a new credit card skimmer vulnerability embedded within hundreds of fraudulent web sites selling supposedly name-brand shoes, and looked at the newest veteran’s assistance program launched by the nonprofit Women in CyberSecurity (WiCyS).

We also explained how mobile device sensors can be exploited by cybercriminals, provided tips on building an effective security operations center, and put our threat spotlight on Ryuk ransomware, trying to understand the who, what, where, when, and why of the nefarious malware.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (December 9 – 15) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

New Women in CyberSecurity (WiCyS) veterans program aims to bridge skills gap, diversify sector

Malwarebytes - Fri, 12/13/2019 - 17:02

The cybersecurity industry has a problem: a zero percent unemployment rate. Or so we’re told.

With experts predicting millions of job openings in the years to come—coupled with the industry’s projected growth to US$289.9 billion by 2026 and soaring cyberattacks against businesses—now is as good a time as any for organizations to face the problem of the alleged skills shortage and take action.

Because here’s the unspoken reality: Organizations may have problems filling IT roles, but it’s not for lack of available employees. Perhaps it’s simply a problem of overlooking the millions of applicants with transferable skills. That’s where Women in CyberSecurity aims to step in.

Women in CyberSecurity launches veterans program

Women in CyberSecurity (WiCyS), a not-for-profit organization, launched the Veteran’s Assistance Program this November with the goal of connecting female veterans to jobs in the cybersecurity industry and providing a support network for its members. The program aims to:

  • Eliminate barriers that may hinder a veteran job candidate from being employed
  • Grow the cybersecurity workforce by introducing more women into cybersecurity
  • Help female veterans navigate into the cybersecurity industry as they transition and re-adjust to civilian life
  • Offer female vets the opportunity, community support, and resources to launch a career and thrive in the field

Women in CyberSecurity isn’t the first or only organization that has looked to military veterans to help fill vacant positions in cybersecurity. Palo Alto Networks recently launched a free cybersecurity training and certification initiative called Second Watch. The SANS Institute also opened the scholarship-based SANS CyberTalent VetSuccess Immersion Academy. Even Facebook partnered with hundreds of students and professors last year to unveil the Facebook Cybersecurity University for Veterans, from which 33 veterans graduated.

But what sets the WiCyS Veteran’s Assistance Program apart is its intentionality and focus in bringing more female veterans into the fold. Although the organization doesn’t conduct cybersecurity training itself, it offers resources and more: professional development, career training, mentorship, and a dedicated support system that will be there for its members in the long run.

“We want to provide a community of sisterhood that understands what their challenges and needs are and match them to employers who also understand their needs and transferable skills,” says Dr. Amelia Estwick, founding member and vice president of the WiCyS Mid-Atlantic Affiliate.

WiCyS members who are female veterans are given an opportunity to receive a Veterans Fellowship Award, provided they are eligible.

The award, sponsored by the Craig Newmark Philanthropies, is exclusively for female veterans who (1) are interested in making a career in cybersecurity or (2) are already in the field but would like to further advance their careers.

Applications for the award are still open until Sunday, December 15. Eligible candidates will start receiving notifications from December 22 onward.

The WiCyS Veteran Fellowship Award will be provided to selected eligible female veterans who are interested in or already working in cybersecurity. Applications are open until December 15. Visit for more information!

— Women in CyberSecurity (@WiCySorg) November 15, 2019

This hard task of shifting

Although transition programs are set up to help military members rejoin the workforce, finding a job hasn’t been easy for most of them, even with their highly-specialized, STEM-related training and cybersecurity’s dire need for diverse, skilled workers. This frustrates Estwick, who herself is a US Army veteran working in cybersecurity as Director of the National Cybersecurity Institute at Excelsior College.

“I kept hearing these numbers about lack of talent, and lack of diversity, and the skills gap, and I said, ‘Why do they keep saying these jobs aren’t filled when I talk to a lot of military women transitioning who say they can’t find a job?'” she recounts. “They have transferable skills that they’ve gained through the service. There’s no way in the world these women should have trouble finding jobs.”

Many veterans already working in cybersecurity say that the industry is the best fit yet for those who served in the armed forces. The overlap between cybersecurity and military terminology and tactics alone merits this hand-in-glove relationship.

Not only are veterans armed with STEM-related knowledge and technical aptitude, Estwick asserts, they have also dealt with many of the same issues in privacy, compliance, and regulation we in cybersecurity deal with today. Furthermore, she says, they have more developed soft skills, such as teamwork, leadership, attention to detail, and communication—key skills they have carried with them from their time in the service.

An honorary salute

In our eyes, veterans are and will always be defenders and heroes. It shouldn’t surprise us to see them on the front lines of the digital world as well. For WiCyS, putting highly-skilled female veterans in view of organizations that are looking to fill in-demand positions is just the beginning.

“We’re dealing with female veterans, but eventually, we could add military spouses or other groups. There are so many other slices of the pie we want to add. But we need to identify what their needs are now. The more we have joining us, the better informed we are to address those needs,” says Estwick.

While transitioning to civilian life is a challenge to all former military members, female veterans in particular shouldn’t be bogged down by employment barriers—real and perceived—when planning to make a career in cybersecurity and grow in it.

“There’s this nasty fog over cybersecurity. You get a lot of women who feel they’re not technical enough, not good enough for the field. It’s a change of mindset,” Estwick counsels. “Don’t diminish yourself. You have extremely valuable talent, knowledge, skill, and ability that employers should be very proud to have.”

The post New Women in CyberSecurity (WiCyS) veterans program aims to bridge skills gap, diversify sector appeared first on Malwarebytes Labs.

Categories: Techie Feeds

5 tips for building an effective security operations center (SOC)

Malwarebytes - Fri, 12/13/2019 - 16:00

Security is more than just tools and processes. It is also the people that develop and operate security systems. Creating systems in which security professionals can work efficiently and effectively with current technologies is key to keeping your data and networks secure. Many enterprise organizations understand this need and are attempting to meet it with the creation of their own security operations center (SOC).

SOCs can significantly improve the security of an organization, but they are not perfect solutions and can be challenging to implement. Lack of skilled staff and the absence of effective orchestration and automation are the biggest hurdles, according to a recent SANS survey. Despite these hurdles, more organizations are looking to follow in the footsteps of the enterprise and build SOCs. Read on to learn exactly what a security operations center is, and how to create an effective one.

What is a security operations center (SOC)?

A security operations center, or SOC, consists of a team of people who are responsible for monitoring systems, identifying security issues and incidents, and responding to events. They are also typically the ones responsible for evaluating and enforcing security policies. A SOC team is typically responsible for covering the whole organization, not just a single department. While SOCs have mostly been embraced by larger organizations, they are useful for businesses of any size, since all organizations are vulnerable to cyberattack.

SOC team members typically include:

  • SOC Manager—leads team operations and helps determine budget and agenda. They also serve as team representatives, interacting with other managers and executives.
  • Security Analyst—organizes and interprets data from reports and audits. They conduct risk assessments and use threat intelligence to produce actionable insights.
  • Forensic Investigator—analyzes incident data for evidence and behavioral information. They can work with law enforcement post incident.
  • Incident Responder—creates and follows Incident Response Plans (IRPs). They also conduct initial investigations and threat assessments.
  • Compliance Auditor—ensures processes comply with regulations. They can also handle compliance reporting.

SOCs must be customizable to an organization’s needs. To meet these differing needs, several types of SOCs exist, including:

  • Internal—consists of in-house security professionals
  • Co-managed—consists of a combination of internal and third-party professionals
  • Managed—consists of third-party professionals working remotely
  • Command—manages and coordinates smaller SOCs; useful for large enterprises
How to build an effective SOC

Building an effective SOC requires understanding the needs of your organization, as well as its limitations. Once these needs and limitations are understood, you can begin applying the following best practices. 

1. Choose your team carefully

The effectiveness of your SOC is reliant on the team members you choose. They are responsible for keeping your systems secure and determining which resources are needed to do so. When choosing, you need to include members who cover a range of skill sets and expertise.

Team members must be able to:

  • Monitor systems and manage alerts
  • Manage and resolve incidents
  • Analyze incidents and propose action
  • Hunt and detect threats 

To accomplish these tasks, team members must also have a variety of skills, both soft and hard. The most important among these include intrusion detection, reverse engineering, malware handling and identification, and crisis management.

Do not make the mistake of only evaluating technical skills when building your team. Team members are required to work together closely during high-stress situations. For this reason, it is important to select members who can effectively collaborate and communicate.

2. Increase visibility

Visibility is key to being able to successfully protect a system. Your SOC team needs to know where data and systems reside in order to protect them. They need to know the priority of data and systems, as well as who should be allowed access.

Being able to appropriately prioritize your assets enables your SOC to effectively distribute its limited time and resources. Having clear visibility allows your SOC to easily spot attackers and limits places where attackers can hide. To be maximally effective, your SOC must be able to monitor your network and perform vulnerability scans 24/7.

3. Select tools wisely

Having ineffective or insufficient tools can seriously hinder the effectiveness of your SOC. To avoid this, select tools carefully to match your system needs and infrastructure. The more complex your environment is, the more important it is to have centralized tools. Your team should not have to piece together information for analysis or use a different tool to manage each device.

The more discrete tools your SOC employs, the more likely information is to be overlooked or ignored. If security members need to access multiple dashboards or pull logs from multiple sources, information is more difficult to sort through and correlate. 

When selecting tools, make sure to evaluate and research each tool prior to selection. Security products can be incredibly expensive and difficult to configure. It doesn’t make sense to spend time or money on a product or service that doesn’t integrate well with your system.

When deciding which tools to incorporate, you need to consider endpoint protection, firewalls, automated application security, and monitoring solutions. Many SOCs make use of Security Information and Event Management (SIEM) solutions. These tools can provide log management and increase security visibility. SIEM can also be helpful for correlating data between events and automating alerts.
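To make the correlation idea concrete, here is a toy sketch of the kind of cross-source rule a SIEM automates: flag a user whose login failures across several log sources exceed a threshold within a time window. The event data, field names, and thresholds are all invented for illustration; real SIEM products have their own query languages and APIs:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized events pulled from two different log sources.
events = [
    {"source": "vpn",     "user": "alice", "type": "login_failure", "ts": "2019-12-13T10:00:05"},
    {"source": "webmail", "user": "alice", "type": "login_failure", "ts": "2019-12-13T10:01:40"},
    {"source": "vpn",     "user": "alice", "type": "login_failure", "ts": "2019-12-13T10:02:10"},
    {"source": "webmail", "user": "bob",   "type": "login_failure", "ts": "2019-12-13T09:15:00"},
]

WINDOW = timedelta(minutes=5)   # correlation window
THRESHOLD = 3                   # failures across any source before alerting

def correlate(events):
    """Return users whose login failures, combined across all sources,
    reach the threshold within the time window."""
    by_user = defaultdict(list)
    for e in events:
        if e["type"] == "login_failure":
            by_user[e["user"]].append(datetime.fromisoformat(e["ts"]))
    alerts = []
    for user, times in by_user.items():
        times.sort()
        for start in times:
            hits = [t for t in times if start <= t <= start + WINDOW]
            if len(hits) >= THRESHOLD:
                alerts.append(user)
                break
    return alerts

print(correlate(events))  # ['alice'] -- three failures across two sources in five minutes
```

The value of centralizing the two log sources is visible even in this sketch: neither the VPN log nor the webmail log alone crosses the threshold for alice, but the combined view does.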

4. Develop a robust incident response plan (IRP)

An IRP is a plan that outlines a standardized way of detecting and responding to security incidents. It should incorporate system knowledge, like data priority, as well as existing security policies and processes. A well-crafted IRP enables faster detection and resolution of incidents. There are many templates and guides available to help you create an incident response plan. Using these resources can ensure that no aspects are missed in your plan. It can also speed up the creation process.

Once your plan is established, it is not enough to simply wait until an incident occurs. Your SOC should make sure to practice using the plan with incident drills. Doing so can increase their response confidence when a real incident arises. It can also uncover any flaws, inconsistencies, or inefficiencies in the plan. It is the SOC team’s responsibility to make sure that your IRP is kept up to date as systems, staff, and security processes change.

5. Consider adding managed service providers (MSPs)

Many organizations use managed service providers (MSPs) as part of their SOC strategy. Managed services can provide the expertise that is otherwise lacking in your team. These services can also ensure that your systems are continuously monitored and that all events have an immediate response. Unless you have multiple shifts covering your SOC, constant coverage is something you are unlikely to be able to accomplish on your own.

The most common uses of managed SOC services are penetration testing and threat research. These are time-consuming tasks that can require significant expertise and expensive tools. Rather than devoting limited time and budgets to cover these tasks, your SOC can benefit from outsourcing or collaborating with third-party teams.

Securing organizations with SOCs

Creating a security operations center can be daunting. After all, it is meant to be the first and last stop when it comes to system security. Despite this, you can create an effective SOC team that meets the unique needs of your organization. It takes time, effort, and careful assessment, but the reward is a confidently secure network. 

Start by using the best practices outlined here and pay special attention to team selection. The members you choose not only dictate the SOC processes and tools to be implemented, but ultimately, the overall effectiveness of your program.

The post 5 tips for building an effective security operations center (SOC) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Threat spotlight: the curious case of Ryuk ransomware

Malwarebytes - Thu, 12/12/2019 - 22:33

Ryuk. A name once unique to a fictional character in a popular Japanese comic book and cartoon series is now a name that appears in several rosters of the nastiest ransomware to ever grace the wild web.

For an incredibly young strain—only 15 months old—gaining such notoriety is quite a feat. Unless the threat actors behind its campaigns call it quits (remember GandCrab?) or law enforcement collars them for good, we can only expect the threat of Ryuk to continue looming large over organizations.

First discovered in mid-August 2018, Ryuk immediately turned heads after disrupting operations of all Tribune Publishing newspapers over the Christmas holiday that year. What those affected initially thought was a server outage soon proved to be a malware attack. The infection was eventually quarantined; however, Ryuk re-infected and spread to connected systems on the network because the security patches failed to hold when tech teams brought the servers back up.

Big game hunting with Ryuk ransomware

Before the holiday attack on Tribune Publishing, Ryuk had been seen targeting various enterprise organizations worldwide, asking ransom payments ranging from 15 to 50 Bitcoins (BTC). That translated to between US$97,000 and $320,000 at the time of valuation.

This method of exclusively targeting large organizations with critical assets that almost always guarantees a high ROI for criminals is called “big game hunting.” It’s not easy to pull off, as such targeted attacks also involve the customization of campaigns to best suit targets and, in turn, increase the likelihood of their effectiveness. This requires much more work than a simple “spray-and-pray” approach that can capture numerous targets but may not net such lucrative results.

For threat actors engaged in big game hunting, malicious campaigns are launched in phases. For example, they may start with a phishing attack to gather key credentials or drop malware within an organization’s network to do extensive mapping, identifying crucial assets to target. Then they might deploy second and third phases of attacks for extended espionage, extortion, and eventual ransom.

To date, Ryuk ransomware is hailed as the costliest among its peers. According to a report by Coveware, a first-of-its-kind incident response company specializing in ransomware, Ryuk’s asking price is 10 times the average, yet ransoms are also highly negotiable. The varying ways adversaries work out ransom payments suggest that more than one criminal group may have access to and be operating Ryuk ransomware.

The who behind Ryuk

Accurately pinpointing the origin of an attack or malware strain is crucial, as it reveals as much about the threat actors behind attack campaigns as it does the payload itself. The name “Ryuk,” which has obvious Japanese ties, is not a factor to consider when trying to discover who developed this ransomware. After all, it’s common practice for cybercriminals to use handles based on favorite anime and manga characters. These days, a malware strain is more than its name.

Instead, similarities in code base, structure, attack vectors, and languages can point to relations between criminal groups and their malware families. Security researchers from Check Point found a connection between the Ryuk and Hermes ransomware strains early on due to similarities in their code and structure, an association that persists up to this day. Because of this, many have assumed that Ryuk may also have ties with the Lazarus Group, the same North Korean APT group that operated the Hermes ransomware in the past.

Recommended read: Hermes ransomware distributed to South Koreans via recent Flash zero-day

However, code likeness alone is insufficient basis to support the Ryuk/North Korean ties narrative. Hermes is a ransomware kit that is frequently peddled on the underground market, making it available for other cybercriminals to use in their attack campaigns. Furthermore, separate research from cybersecurity experts at CrowdStrike, FireEye, Kryptos Logic, and McAfee has indicated that the gang behind Ryuk may actually be of Russian origin—and not necessarily nation-state sponsored.

As of this writing, the origins of Ryuk ransomware can be attributed (with high confidence, per some of our cybersecurity peers) to two criminal entities: Wizard Spider and CryptoTech.

The former is the well-known Russian cybercriminal group and operator of TrickBot; the latter is a Russian-speaking organization found selling Hermes 2.1 two months before the $58.5 million cyber heist that victimized the Far Eastern International Bank (FEIB) in Taiwan. According to reports, this version of Hermes was used as a decoy or “pseudo-ransomware,” a mere distraction from the real goal of the attack.

Wizard Spider

Recent findings have revealed that Wizard Spider upgraded Ryuk to include a Wake-on-LAN (WoL) utility and an ARP ping scanner in its arsenal. WoL is a network standard that allows computing devices connected to a network—regardless of which operating system they run—to be turned on remotely whenever they’re turned off, in sleep mode, or hibernating.

ARP pinging, on the other hand, is a way of discovering endpoints in a LAN network that are online. According to CrowdStrike, these new additions reveal Wizard Spider’s attempts to reach and infect as many of their target’s endpoints as they can, demonstrating a persistent focus and motivation to increasingly monetize their victims’ encrypted data.
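To make the WoL mechanism concrete: waking a machine requires nothing more than broadcasting a “magic packet”—six bytes of 0xFF followed by the target’s MAC address repeated 16 times—which is why it is such a cheap addition to a lateral-movement toolkit. A minimal sketch (the MAC address is a placeholder; the demo only builds the packet, it does not send it):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP (port 9 is conventional)."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Placeholder MAC for illustration -- build only, nothing is transmitted here.
packet = build_magic_packet("00:11:22:33:44:55")
print(len(packet))  # 102
```

Because the packet is processed by the network interface itself, no operating system cooperation is needed on the sleeping target, which is exactly what makes WoL attractive for maximizing the number of reachable endpoints.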


CryptoTech

Two months ago, Gabriela Nicolao (@rove4ever) and Luciano Martins (@clucianomartins), both researchers at Deloitte Argentina, attributed Ryuk ransomware to CryptoTech, a little-known cybercriminal group that was observed touting Hermes 2.1 in an underground forum back in August 2017. Hermes 2.1, the researchers say, is Ryuk ransomware.

The CryptoTech post about Hermes version 2.1 on the dark web in August 2017 (Courtesy of McAfee)

In a Virus Bulletin conference paper and presentation entitled Shinigami’s revenge: the long tail of the Ryuk ransomware, Nicolao and Martins presented evidence for this claim: in June 2018, a couple of months before Ryuk made its first public appearance, an underground forum poster expressed doubt that CryptoTech was the author of Hermes 2.1, the ransomware toolkit it had been peddling almost a year earlier. CryptoTech’s response, which Nicolao and Martins captured and annotated in the screenshot below, was telling.

CryptoTech: Yes, we developed Hermes from scratch.

The Deloitte researchers also noted that after Ryuk emerged, CryptoTech went quiet.

CrowdStrike has estimated that from the time Ryuk was first deployed until January of this year, its operators netted a total of 705.80 BTC, equivalent to US$5 million as of press time.

Ryuk ransomware infection vectors

There was a time when Ryuk ransomware arrived on clean systems to wreak havoc. But new strains observed in the wild now belong to a multi-attack campaign that involves Emotet and TrickBot. As such, Ryuk variants arrive on systems pre-infected with other malware—a “triple threat” attack methodology.

How the Emotet, TrickBot, and Ryuk triple threat attack works (Courtesy of Cybereason)

The first stage of the attack starts with a weaponized Microsoft Office document (one containing malicious macro code) attached to a phishing email. Once the user opens it, the malicious macro runs cmd.exe to execute a PowerShell command, which attempts to download Emotet.

Once Emotet executes, it retrieves and executes another malicious payload—usually TrickBot—and collects information on affected systems. It initiates the download and execution of TrickBot by reaching out to and downloading from a pre-configured remote malicious host.

Once infected with TrickBot, the threat actors then check if the system is part of a sector they are targeting. If so, they download an additional payload and use the admin credentials stolen using TrickBot to perform lateral movement to reach the assets they wish to infect.

The threat actors then check for and establish a connection with the target’s live servers via a remote desktop protocol (RDP). From there, they drop Ryuk.

Systems infected with Ryuk ransomware display the following symptoms:

Presence of ransomware notes. Ryuk drops the ransom note, RyukReadMe.html or RyukReadMe.txt, in every folder where it has encrypted files.

The HTML file, as you can see from the screenshot above, contains two private email addresses that affected parties can use to contact the threat actors, either to find out how much they must pay to regain access to their encrypted files or to start the negotiation process.

The TXT ransom note, on the other hand, contains (1) explicit instructions for affected parties to read and comply with, (2) two private email addresses affected parties can contact, and (3) a Bitcoin wallet address. Although the email addresses vary, they are all hosted at Protonmail or Tutanota. It was also noted that, a day after the unsealing of an indictment against two ransomware operators, Ryuk’s operators removed the Bitcoin address from their ransom notes, stating that it would be provided once victims made contact via email.

There are usually two versions of the text ransom note: a polite version, which past research claims is comparable to BitPaymer’s due to certain similar phrasings; and a not-so-polite version.

Ryuk ransom notes. Left: polite version; right: not-so-polite version
BitPaymer ransom note: polite version (Courtesy of Coveware)
BitPaymer ransom note: not-so-polite version (Courtesy of Symantec)

Encrypted files with the RYK string attached to extension names. Ryuk uses a combination of symmetric (via the use of AES) and asymmetric (via the use of RSA) encryption to encode files. A private key, which only the threat actor can supply, is needed to properly decrypt files.

Encrypted files have the .ryk extension appended to their file names. For example, encrypted sample.pdf and sample.mp4 files are renamed sample.pdf.ryk and sample.mp4.ryk, respectively.

This scheme is effective, assuming each Ryuk strain is tailor-made for its target organization.

While Ryuk encrypts files on affected systems, it avoids files with the extensions .exe, .dll, and .hrmlog (a file type associated with Hermes). Ryuk also avoids encrypting files in the following folders:

  • AhnLab
  • Chrome
  • Microsoft
  • Mozilla
  • Recycle.bin
  • Windows
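The two most visible artifacts described above, the .ryk extension and the RyukReadMe notes, are easy for responders to sweep for. Below is a minimal, hypothetical triage sketch in Python (not a tool Malwarebytes ships); the skip list mirrors the folders Ryuk itself leaves alone, so a sweep covers the same ground the malware does.

```python
import os

# Folders Ryuk avoids encrypting, per the list above
SKIP_DIRS = {"AhnLab", "Chrome", "Microsoft", "Mozilla", "Recycle.bin", "Windows"}
RANSOM_NOTES = {"RyukReadMe.html", "RyukReadMe.txt"}

def find_ryuk_artifacts(root: str) -> dict:
    """Walk a directory tree and collect encrypted files and ransom notes."""
    hits = {"encrypted": [], "notes": []}
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune the folders the malware skips, so the sweep mirrors its reach
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.endswith(".ryk"):
                hits["encrypted"].append(path)
            elif name in RANSOM_NOTES:
                hits["notes"].append(path)
    return hits
```

A sweep like this only confirms that an infection has already run its course; it is a scoping aid for incident response, not a prevention measure.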
Protect your system from Ryuk

Malwarebytes continues to track Ryuk ransomware campaigns, protecting our business users with real-time anti-malware and anti-ransomware technology, as well as signature-less detection, which stops the attack earlier on in the chain. In addition, we protect against triple threat attacks aimed at delivering Ryuk as a final payload by blocking downloads of Emotet or TrickBot.

We recommend IT administrators take the following actions to secure and mitigate against Ryuk ransomware attacks:

  • Educate every employee in the organization, including executives, on how to correctly handle suspicious emails.
  • Limit the use of privileged accounts to a select few in the organization.
  • Avoid leaving RDP sessions open; always terminate them properly.
  • Implement the use of a password manager and single sign-on services for company-related accounts. Do away with other insecure password management practices.
  • Deploy an authentication process that works for the company.
  • Disable unnecessary share folders, so that in the event of a Ryuk ransomware attack, the malware is prevented from moving laterally in the network.
  • Make sure that all software installed on endpoints and servers is up to date and all vulnerabilities are patched. Pay particular attention to patching CVE-2017-0144, the SMB remote code-execution vulnerability exploited by EternalBlue. This will prevent TrickBot and other malware exploiting this weakness from spreading.
  • Apply attachment filtering to email messages.
  • Disable macros across the environment.

For a list of technologies and operations that have been found to be effective against Ryuk ransomware attacks, you can go here.

Indicators of Compromise (IOCs)

Take note that professional cybercriminals sell Ryuk to other criminals on the black market as a toolkit for threat actors to build their own strain of the ransomware. As such, one shouldn’t be surprised by the number of Ryuk variants that are wreaking havoc in the wild. Below is a list of file hashes that we have seen so far:

  • cb0c1248d3899358a375888bb4e8f3fe
  • d4a7c85f23438de8ebb5f8d6e04e55fc
  • 3895a370b0c69c7e23ebb5ca1598525d
  • 567407d941d99abeff20a1b836570d30
  • c0d6a263181a04e9039df3372afb8016
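Matching files against hashes like these is straightforward with Python's standard library; the hypothetical sketch below reads files in chunks so large samples don't exhaust memory. As noted above, though, hash matching only catches known builds of a kit-based family like Ryuk, so it complements, rather than replaces, behavioral detection.

```python
import hashlib

# MD5 hashes from the IOC list above
RYUK_MD5_IOCS = {
    "cb0c1248d3899358a375888bb4e8f3fe",
    "d4a7c85f23438de8ebb5f8d6e04e55fc",
    "3895a370b0c69c7e23ebb5ca1598525d",
    "567407d941d99abeff20a1b836570d30",
    "c0d6a263181a04e9039df3372afb8016",
}

def matches_ioc(path: str, iocs: set = RYUK_MD5_IOCS) -> bool:
    """Hash a file in 64 KB chunks and compare its MD5 digest against known IOCs."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in iocs
```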

As always—stay safe, everyone!

The post Threat spotlight: the curious case of Ryuk ransomware appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The little-known ways mobile device sensors can be exploited by cybercriminals

Malwarebytes - Wed, 12/11/2019 - 17:51

The bevy of mobile device sensors in modern smartphones and tablets makes them more akin to pocket-sized laboratories and media studios than mere communication devices. Cameras, microphones, accelerometers, and gyroscopes give incredible flexibility to app developers and utility to mobile device users. But this variety of inputs also gives clever hackers new methods of bypassing conventional mobile security, or even collecting sensitive information beyond the device itself.

Anyone who is serious about security and privacy, both for themselves and for end users, should consider how these sensors create unique vulnerabilities and can be exploited by cybercriminals.

Hackers of every hat color have been exploiting mobile device sensors for years. In 2012, researchers developed proof-of-concept malware called PlaceRaider, which used Android sensors to build a 3D map of a user’s physical environment. In 2017, researchers used a smart algorithm to unlock a variety of Android smartphones within three attempts with near-complete success, even when the phones had fairly robust security defenses.

But as updates have been released with patches for the most serious vulnerabilities, hackers in 2019 have responded by finding even more creative ways to use sensors to snag vulnerable data.

“Listening” to passwords

Researchers were able to learn computer passwords by tapping into a mobile device’s microphone. The Cambridge University and Linköping University researchers created an artificial intelligence (AI) algorithm that analyzed typing sounds. In tests involving 45 people, the attack recovered 7 of 27 passwords typed on smartphones; on tablets it was even more effective, recovering 19 of 27 passwords within 10 attempts.

“We showed that the attack can successfully recover PIN codes, individual letters, and whole words,” the researchers wrote. Consider how easily most mobile users grant permission for an app to access their device’s microphone, without considering the possibility that the sound of their tapping on the screen could be used to decipher passwords or other phrases.

While this type of attack has never happened in the wild, it’s a reminder for users to be extra cautious when allowing applications access to their mobile device’s mic—especially if there’s no clear need for the app’s functionality.

Eavesdropping without a microphone

Other analysts have discovered that hackers don’t need access to a device’s microphone in order to tap into audio. Researchers working at the University of Alabama at Birmingham and Rutgers University eavesdropped on audio played through an Android device’s speakerphone with just the accelerometer, the sensor used to detect the orientation of the device. They found that sufficiently loud audio can impact the accelerometer, leaking sensitive information about speech patterns.

The researchers dubbed this capability “spearphone eavesdropping,” stating that threat actors could determine the gender, identity, or even some of the words spoken by the device owner using speech recognition or reconstruction methods. Because accelerometers are always on and don’t require permissions to operate, malicious apps could record accelerometer data and play it back through speech recognition software.

While an interesting attack vector that would be difficult to protect against—restricting access or usage of accelerometer features would severely limit the usability of smart devices—this vulnerability would require that cybercriminals develop a malicious app and persuade users to download it. Once on a user’s device, it would make much more sense to drop other forms of malware or request access to a microphone to pull easy-to-read/listen-to data.

Since modern-day users tend to pay little attention to permissions notices or EULAs, the advantage of permission-less access to the accelerometer doesn’t yet provide enough return on investment for criminals. However, we once again see how access to mobile device sensors for one functionality can be abused for other purposes.

Fingerprinting devices with sensors

In May, UK researchers announced they had developed a fingerprinting technique that can track mobile devices across the Internet by using easily obtained, factory-set sensor calibration details. The attack, called SensorID, uses calibration details from the accelerometer, gyroscope, and magnetometer sensors to track a user’s web-browsing habits. This calibration data can also be used to track users as they switch between browsers and third-party apps, hypothetically allowing someone to get a full view of what users are doing on their devices.

Apple patched the vulnerability in iOS 12.2, while Google has yet to patch the issue in Android.

Avoiding detection with the accelerometer 

Earlier this year, Trend Micro uncovered two malicious apps on Google Play that drop wide-reaching banking malware. The apps appeared to be basic tools called Currency Converter and BatterySaverMobi. These apps cleverly used motion sensors to avoid being spotted as malware. 

A device that generates no motion sensor information is likely an emulator or sandbox environment used by researchers to detect malware. However, a device that does generate motion sensor data tells threat actors that it’s a true, user-owned device. So the malicious code only runs when the device is in motion, helping it sneak past researchers who might try to detect the malware in virtual environments.

While the apps were taken down from Google Play, this evasive technique could easily be incorporated into other malicious apps on third-party platforms.

The mobile security challenges of the future

Mobile device sensors are especially vulnerable to abuse because no special permissions or escalations are required to access these sensors. 

Most end users are capable of using strong passwords and protecting their device with anti-malware software. However, they probably don’t think twice about how their device’s gyroscope is being used. 

The good news is that mobile OS developers are working to add security protections to sensors. Android Pie tightened security by limiting sensor and user input data. Apps running in the background on a device running Android Pie can’t access the microphone or camera. Additionally, sensors that use the continuous reporting mode, such as accelerometers and gyroscopes, don’t receive events.

That means the mobile security challenges of the future won’t be solved with traditional cryptographic techniques alone. As long as hackers are able to access sensors that detect and measure physical space, they’ll continue to exploit that easy-to-access data to get at the sensitive information they want.

As mobile devices expand their toolbox of sensors, that will create new vulnerabilities—and yet-to-be discovered challenges for security professionals.

The post The little-known ways mobile device sensors can be exploited by cybercriminals appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Hundreds of counterfeit online shoe stores injected with credit card skimmer

Malwarebytes - Tue, 12/10/2019 - 17:30

There’s a well-worn saying in security: “If it’s too good to be true, then it probably isn’t.” This can easily be applied to the myriad of online stores that sell counterfeit goods—and now attract secondary fraud in the form of a credit card skimmer.

Allured by great deals on brand names, many people end up buying products on dubious websites only to find out that what they paid for isn’t what they’re getting.

We recently identified a credit card skimmer injected into hundreds of fraudulent sites selling brand name shoes. Unfortunate shoppers may not only be disappointed with the faux merchandise, but they will also relinquish their personal and financial data to Magecart fraudsters.

Counterfeit shoes by the truckload

Think of the web as a never-ending whack-a-mole war between brands, security teams, and fraudsters—as legitimate companies work with security to take down one counterfeit site, another soon pops up.

One way fraudulent sites receive traffic is via forum spam. Crooks troll sporting and fitness forums and leave messages to entice users to visit the fake store:

Here’s that same counterfeit site selling Adidas, Nike, and other big brand name sneakers:

trainersnmd[.]com is hosted in Russia at 91.218.113[.]213. Looking at the subnet, we can see many more domains used in the same counterfeit business.

Some of those domains were taken over and replaced with a seizure notice. For example, in May 2019, Adidas filed a complaint for injunctive relief and damages against hundreds of fake Adidas stores.

Mass credit card skimmer injection

The skimming code was appended to a JavaScript file called translate.js. (A full copy of the deobfuscated skimmer can be found here.)

Stolen data, including billing addresses and credit card numbers, is exfiltrated to a server in China at 103.139.113[.]34.
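One way defenders and site owners hunt for Magecart-style injections is to audit which external hosts a page's scripts are loaded from. The hypothetical sketch below uses only Python's standard-library HTML parser and a one-entry blocklist containing the exfiltration server noted above (re-fanged for the example); a real audit would use a curated threat-intelligence feed instead.

```python
from html.parser import HTMLParser

# Known-bad host from this campaign (defanged in the article as 103.139.113[.]34)
BLOCKLIST = {"103.139.113.34"}

class ScriptAuditor(HTMLParser):
    """Collect <script src=...> URLs that point at blocklisted hosts."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src") or ""
        # Flag scripts served from a known-bad host
        if any(bad in src for bad in BLOCKLIST):
            self.flagged.append(src)

def audit(html: str) -> list:
    """Return the blocklisted script URLs found in an HTML document."""
    parser = ScriptAuditor()
    parser.feed(html)
    return parser.flagged
```

Note that a substring check like this is deliberately simple; skimmers that build their exfiltration URL at runtime inside obfuscated JavaScript (as this one does in translate.js) require dynamic analysis to catch.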

What’s interesting is that this is actually a massive compromise across several IP subnets:

A cursory look at several domains using Sucuri’s SiteCheck revealed they are using the same outdated software:

It’s likely a malicious scanner simply crawled those IP ranges and used the same vulnerability to compromise each and every one of those counterfeit sites.

Online shopping and its risks

Shopping online these days is akin to walking into a minefield, yet many people aren’t aware of the dangers lurking behind every corner.

Based on our crawlers, we see new e-commerce sites fall victim to web skimmers every day. Looking at our telemetry, we can also correlate the number of web blocks to shopping patterns, such as Black Friday and Cyber Monday events.

We saw an increase in credit card skimming activity for Black Friday and Cyber Monday, but not as much as anticipated.

Many online stores were running deals for some time prior, even since late Oct.#Magecart #skimming #BlackFriday #CyberMonday

— MB Threat Intel (@MBThreatIntel) December 3, 2019

As we saw in this post, counterfeit sites pose a double threat: shoppers not only end up with illicit goods, but may also be robbed of their data by a different group of criminals.

While we cannot completely eliminate the threat of digital skimmers, here are some tips on how to reduce the risks associated with online shopping:

  • Make sure that your computer is malware-free and running the latest patches. Leverage a security product that offers web protection. Malwarebytes’ flagship anti-malware product, as well as its newly introduced (and free) Browser Guard extension for Chrome and Firefox, can thwart Magecart-related skimmers by blocking malicious scripts and websites from loading and from exfiltrating data.
  • If you are shopping on a site for the first time, check that it looks maintained. While this does not replace a thorough security scan, seeing notes such as “Copyright 2015” may indicate that the website is not really being looked after.
  • Minimize how often you enter your credit card data by relying on other payment methods instead. For example, large, reliable online retailers such as Amazon already have your payment details stored in your account. Other safe methods include Apple Pay or prepaid Visa or Mastercard cards.
  • Check your bank/credit card statements regularly to identify potentially fraudulent charges.
  • Help prevent further attacks by reporting any fraudulent activity (especially if you were a victim) to law enforcement authorities.
Indicators of Compromise (IOCs)

Counterfeit sites injected with skimmer






The post Hundreds of counterfeit online shoe stores injected with credit card skimmer appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Please don’t buy this: smart doorbells

Malwarebytes - Mon, 12/09/2019 - 17:15

Though Black Friday and Cyber Monday are over, the two shopping holidays were just precursors to the larger Christmas season—a time of year when online packages pile high on doorsteps and front porches around the world.

According to some companies, it’s only logical to want to protect these packages from theft, and wouldn’t it just so happen that these same companies have the perfect device to do that—smart doorbells.

Equipped with cameras and constantly connected to the Internet, smart doorbells provide users with 24-hour video feeds of the view from their front doors, capturing everything that happens when a user is away at work or sleeping in bed.

Some devices, like the Eufy Video Doorbell, can allegedly differentiate between a person dropping off a package and, say, a very bold, very unchill goat marching up to the front door (it really happened). Others, like Google’s Nest Hello, proclaim to be able to “recognize packages and familiar faces.” Many more, including Arlo’s Video Doorbell and Netatmo’s Smart Video Doorbell, can deliver notifications to users whenever motion or sound are detected nearby.

The selling point for smart doorbells is simple: total vigilance in the palms of your hands. But if you look closer, it turns out a privatized neighborhood surveillance network is a bad idea.

To start, some of the more popular smart doorbell products have suffered severe cybersecurity vulnerabilities, while others lacked basic functionality upon launch. Worse, the data privacy practices at one major smart doorbell maker resulted in wanton employee access to users’ neighborhood videos. Finally, partnerships between hundreds of police departments and one smart doorbell maker have created a world in which police can make broad, multi-home requests for user videos without needing to show evidence of a crime.

The path to allegedly improved physical security shouldn’t involve faulty cybersecurity or invasions of privacy.

Here are some of the concerns that cybersecurity researchers, lawmakers, and online privacy advocates have found with smart doorbells.

Congress fires off several questions on privacy

On November 20, relying on public reports from earlier in the year, five US Senators sent a letter to Amazon CEO Jeff Bezos, demanding answers about a smart doorbell company that Bezos’ own online retail giant swallowed up for $839 million—Ring.

According to an investigation by The Intercept cited by the senators, beginning in 2016, Ring “provided its Ukraine-based research and development team virtually unfettered access to a folder on Amazon’s S3 cloud storage service that contained every video created by every Ring camera around the world.”

The Intercept’s source also said that “at the time the Ukrainian access was provided, the video files were left unencrypted, the source said, because of Ring leadership’s ‘sense that encryption would make the company less valuable,’ owing to the expense of implementing encryption and lost revenue opportunities due to restricted access.”

Not only that, but, according to The Intercept, Ring also “unnecessarily” provided company executives and engineers with access to “round-the-clock live feeds” of some customers’ cameras. For Ring employees who had this type of access, all they needed to actually view videos, The Intercept reported, was a customer’s email address.

The senators, in their letter, were incensed.

“Americans who make the choice to install Ring products in and outside their homes do so under the assumption that they are—as your website proclaims—‘making the neighborhood safer,’” the senators wrote. “As such, the American people have a right to know who else is looking at the data they provide to Ring, and if that data is secure from hackers.”

The lawmakers’ questions came hot on the heels of Senator Ed Markey’s own September effort to untangle Ring’s data privacy practices for children. How, for instance, does the company ensure that children’s likenesses won’t be recorded and stored indefinitely by Ring devices, the senator asked.

According to The Washington Post, when Amazon responded to Sen. Markey’s questions, the answers potentially came up short:

“When asked by Markey how the company ensured that its cameras would not record children, [Amazon Vice President of Public Policy Brian Huseman] wrote that no such oversight system existed: Its customers ‘own and control their video recordings,’ and ‘similar to any security camera, Ring has no way to know or verify that a child has come within range of a device.’”

But Sen. Markey’s original request did not just focus on data privacy protections for children. The Senator also wanted clear answers on an internal effort that Amazon had provided scant information on until this year—its partnerships with hundreds of police departments across the country.

Police partnerships

In August, The Washington Post reported that Ring had forged video-sharing relationships with more than 400 police forces in the US. Today, that number has grown to at least 677, an increase of nearly 70 percent in just four months.

The video-sharing partnerships are simple.

By partnering with Ring, local police forces gain the privilege of requesting up to 12 hours of video spanning a 45-day period from all Ring devices that are included within half a square mile of a suspected crime scene. Police officers request video directly from Ring owners, and do not need to show evidence of a crime or obtain a warrant before asking for this data.

Once the video is in their hands, police can, according to Ring, keep it for however long they wish and share it with whomever they choose. The requested videos can sometimes include video that takes place inside a customer’s home, not just outside their front door.

At first blush, this might appear like a one-sided relationship, with police officers gaining access to countless hours of local surveillance for little in return. But Ring has another incentive, far away from its much-trumpeted mission “to reduce crime in neighborhoods.” Ring’s motivations are financial.

According to Gizmodo, for police departments that partner up with Ring to gain access to customer video, Ring gains near-unprecedented control in how those police officers talk about the company’s products. The company, Gizmodo reported, “pre-writes almost all of the messages shared by police across social media, and attempts to legally obligate police to give the company final say on all statements about its products, even those shared with the press.”

Less than one week after Gizmodo’s report, Motherboard obtained documents that included standardized responses for police officers to use on social media when answering questions about Ring. The responses, written by Ring, at times directly promote the company’s products.

Further, in the California city of El Monte, police officers offered Ring smart doorbells as an incentive for individuals to share information about any crimes they may have witnessed.

The partnerships have inflamed multiple privacy rights advocates.

“Law enforcement is supposed to answer to elected officials and the public, not to public relations operatives from a profit-obsessed multinational corporation that has no ties to the community they claim they’re protecting,” said Evan Greer, deputy director of Fight for the Future, when talking to Vice.

Matthew Guariglia, policy analyst with Electronic Frontier Foundation, echoed Greer’s points:

“This arrangement makes salespeople out of what should be impartial and trusted protectors of our civic society.”

Cybersecurity concerns

When smart doorbells aren’t potentially invading privacy, they might also be lacking the necessary cybersecurity defenses to work as promised.

Last month, a group of cybersecurity researchers from Bitdefender announced that they’d discovered a vulnerability in Ring devices that could have let threat actors swipe a Ring user’s WiFi username and password.

The vulnerability, which Ring fixed after being privately notified over the summer, lay in the setup process between a Ring doorbell and the owner’s Wi-Fi network. To set up the device, the Ring app must send the user’s Wi-Fi credentials to the doorbell. Bitdefender researchers found that Ring had been sending this information over an unencrypted connection.

Unfortunately, this vulnerability was not the first of its kind. In 2016, a company that tests for security vulnerabilities found a flaw in Ring devices that could have allowed threat actors to steal WiFi passwords.

Further, this year, another smart doorbell maker suffered so many basic functionality issues that it stopped selling its own device just 17 days after its public launch. The smart doorbell, the August View, went back on sale six months later.

Please don’t buy

We understand the appeal of these devices. For many users, a smart doorbell is the key piece of technology that, they believe, can help prevent theft in their community, or equip their children with a safe way to check on suspicious home visitors. These devices are, for many, a way to achieve greater peace of mind.

But the cybersecurity flaws, invasions of privacy, and attempts to make public servants into sales representatives go too far. The very devices purchased for security and safety belie their purpose.

Therefore, this holiday season, we kindly suggest that you please stay away from smart doorbells. Deadbolts will never leak your private info.

The post Please don’t buy this: smart doorbells appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (December 2 – December 8)

Malwarebytes - Mon, 12/09/2019 - 16:47

Last week on Malwarebytes Labs, we took a look at a new version of the IcedID Trojan, described web skimmers up to no good, and took a deep dive into containerization. We also explored a report bringing bad news for organizations and insider threats, and threw a spotlight on a video game phish attack.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (December 2 – December 8) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Fake Elder Scrolls Online developers go phishing on PlayStation

Malwarebytes - Fri, 12/06/2019 - 20:29

A player of the popular gaming title Elder Scrolls Online recently took to Reddit to warn users of a phish via PlayStation messaging. This particular phishing attempt is notable for ramping up the pressure on recipients, a classic social engineering technique taken to the extreme.

A terms of service violation?

In MMORPG land, the scammers take a theoretically plausible deadline, crunch it into something incredibly short and ludicrous, and go fishing for the catch of the day. Behold the pressure-laden missive from one fake video game developer to a player:

Click to enlarge

The text of the phishing message reads as follows:

We have noticed some unusual activity involving this account. To be sure you are the rightful owner, we require you to respond to this alert with the following account information so that you may be verified,

– Email address

– Password

_ Date of birth on the account

In response to a violation of these Terms of Service, ZeniMax may issue you a warning, suspend or restrict certain features of the account. We may also immediately terminate any and all accounts that you have established. Temporarily or permanently ban the account, device, and/or machine from accessing, receiving, playing or using all or certain services.

Under the current circumstances, you have 15 minutes from opening this alert to respond with the required information. Failure to do so will result in an immediate account ban, permanently losing access to our servers on all platforms, along with all characters  associated with the account in question. Please be sure to double check your information and spelling before sending.

Yes, you read that correctly—a grand total of 15 whole minutes to panic email scammers back with your login details. But what exactly happened to warrant such an immediate need for verification? The vagueness of the fake message may actually work in the scammer’s favour here because MMORPG titles are often rife with cheating/botting/scamming, so developers are typically light on information when genuine infractions occur.

FOMO: oh no

FOMO, fear of missing out, is the lingering fear that not only have they never had it so good, but the “they” in question almost certainly isn’t you.

Marketers and sales teams exploit this ruthlessly, with sudden sales and the promise of things you can’t do without. Breaking hotel deals on websites can’t help but tell you how many people have the same deal open RIGHT NOW.

Video games, especially online titles and MMORPGs, take a similar approach, offering in-game purchases but rotating items slowly, leading to a form of digital scarcity that encourages transactions because gamers don’t know if the item will be seen again.

Inventory space, character slots, and many more crucial elements are at a premium, and people invest serious money to make the most out of their experience. With this in mind, people tend to be particular about keeping their account secure.

As a result, scammers are hugely effective at turning FOMO on its head, giving people a nasty dose of “fear of something about to happen or else.” Had a spot of bother with ransomware? No sweat, pay us in Bitcoin and you’ll get your documents back—as long as you do it within three days. Fake sextortion email claiming they’ve recorded you watching pornography? Yeah, that’ll be $1,000 in 48 hours or we’ll release the footage and tell all your friends and family.

“It wasn’t me, what did I do?”

You’ll often see people banned from titles complaining on forums that all access has been revoked, with no explanation why besides a “You are banned, sorry” type message. Quite often they won’t even be able to follow up with support because the ban also locks them out of being able to raise a ticket.

Scammers know they can skip some of the fake-explanation shovel work, because nobody ever receives a detailed explanation anyway. Developers keep bans vague to obscure the inner workings of their fraud detection systems: if they spilled the beans, malicious individuals would adjust their behaviour accordingly. That’s a tricky situation for developers to tightrope walk across, but it is possible in the form of additional security measures. Does Elder Scrolls Online meet the challenge?

Sadly, the game doesn’t allow players to lock down accounts with a third-party authenticator. There’s no mobile app and no support for hardware security keys. What they do have is a few password suggestions and some information about their one-time password system.

It’s certainly good that the password system exists, and one would hope it would spring into life in this case, but players would probably appreciate a little more control over their security choices, as well as a few safety nets when things go wrong.

By comparison, the hugely popular Black Desert Online offers Google authenticator two-factor authentication (2FA). Blizzard has you covered with their own authenticator. Guild Wars offers both an authenticator app and SMS lockdowns.
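Authenticator apps like these implement the standard TOTP algorithm (RFC 6238), which derives a short-lived six-digit code from a shared secret and the current time, so even a phished password isn’t enough on its own. As a minimal illustrative sketch (not any game vendor’s actual implementation), using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(base32_secret: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time step."""
    secret = base64.b32decode(base32_secret, casefold=True)
    return hotp(secret, int(time.time()) // period, digits)
```

Because the code changes every 30 seconds and is computed on the user’s device, a scammer who harvests it via email has only moments to use it, which is exactly why phishers prefer targets without 2FA.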

Some simple rules to follow

Regardless of which game you play, remember:

  • Don’t reuse passwords
  • Make the password as strong as the system allows
  • Tie your account to a locked-down email address, ideally also secured with 2FA
  • Never, ever send login details in reply to an email or text message asking for them. First authenticate the message: hover over the sender address and any links to check whether they’re legitimate, search for known scams or phishes associated with the company in question, and read over the instructions carefully.
  • If you’re still in doubt whether an email is legitimate or not, err on the side of caution and go directly to your account’s website/login page. If there is a need to verify or change credentials, you can change them there.

Phishing is one of the oldest cyberattack methods in the book, yet it remains a favorite of scammers because, quite simply, it works. Don’t be fooled by FOMO, high-pressure deadlines, or too-good-to-be-true deals.

The post Fake Elder Scrolls Online developers go phishing on PlayStation appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Report: Organizations remain vulnerable to increasing insider threats

Malwarebytes - Thu, 12/05/2019 - 16:00

The latest data breach at Capital One is a noteworthy incident not because it affected over 100 million customer records, 140,000 Social Security numbers (SSNs), and 80,000 linked bank accounts. Nor was it special because the hack was the result of a vulnerable firewall misconfiguration.

Many still talk about this breach because a leak of this magnitude, which we’ve historically seen conducted by nation-state actors, was made possible by a single skilled insider: Paige A. Thompson. Thompson set a benchmark for single insider threat attacks against the banking industry—and we can expect that benchmark to be cleared.

On a more chilling note, criminal enterprises have already opened a market for corporate employees willing to trade proprietary secrets for cash as a form of “side job.” A number of these underground organizations, unsurprisingly, hail from countries outside the United States, such as Russia and China. Unfortunately for US organizations, these criminal enterprises pay really well.

Recently, Cybersecurity Insiders—in partnership with Gurucul, a behavior, identity, fraud, and cloud security analytics company—released results of its research on insider threats, revealing the latest trends, organizational challenges, and methodologies on how IT professionals prepare for and deal with this danger. Here are some of their key findings:

  • More than half of organizations surveyed (58 percent) said that they are not effective at monitoring, detecting, and responding to insider threats.
  • 63 percent of organizations think that privileged IT users pose the biggest security risk. This is followed by regular employees (51 percent), contractors/service providers/temporary workers (50 percent), and other privileged users, such as executives (50 percent).
  • 68 percent of organizations feel that they are moderately to extremely vulnerable to insider threats.
  • 52 percent of organizations confirm that it is more difficult for them to detect and prevent insider threats than to detect and prevent external cyberattacks.
  • 68 percent of organizations have observed that insider threats have become more frequent in the past 12 months.

The report also states reasons why organizations are increasingly having difficulty detecting and preventing insider threats, which include the increased use of applications and/or tools that leak data, an increased amount of data that leaves the business environment/perimeter, and the misuse of credential or access privileges.

The possible reasons for difficulty in detecting and preventing insider threats (Courtesy of Cybersecurity Insiders)

The CERT Insider Threat Center, part of the CERT Division at Carnegie Mellon’s Software Engineering Institute (SEI) that specializes in insider threats, has recently put forth a blog series that ran from October 2018 to August 2019 on the patterns and trends of insider threats. These posts contained breakdowns and analyses of what insider threats look like across certain industry sectors, statistics, and motivations behind insider incidents—and they’re quite different.

Below are a few high-level takeaways from these posts:

  • The CERT Insider Threat Center has identified the top three crimes insiders commit across industries: fraud, intellectual property theft, and IT systems sabotage.
  • Fraud is the most common insider threat incident recorded in the federal government (60.8 percent), finance and insurance (87.8 percent), state and local governments (77 percent), healthcare (76 percent), and the entertainment (61.5 percent) industries.
  • All sectors consistently experienced insider incidents perpetrated by trusted business partners, typically ranging from 15 to 25 percent across all insider incident types and sectors. This should be an eye-opening statistic, especially for SMBs, as research suggests that they partner more with other businesses instead of hiring employees.
Scope of the insider threat problem (Courtesy of the Carnegie Mellon University Software Engineering Institute)

Insider threats in the spotlight—finally!

The National Counterintelligence and Security Center (NCSC) and the National Insider Threat Task Force (NITTF), together with the Federal Bureau of Investigation, the Office of the Under Secretary of Defense (Intelligence), the Department of Homeland Security, and the Defense Counterintelligence and Security Agency, declared September National Insider Threat Awareness Month, a campaign that launched this year.

The goal of the annual campaign is to educate employees about insider threats and to maximize the reporting of abnormal employee behavior before things escalate to an insider incident.

“All organizations are vulnerable to insider threats from employees who may use their authorized access to facilities, personnel or information to harm their organizations—intentionally or unintentionally,” says NCSC Director William Evanina in a press release [PDF]. “The harm can range from negligence, such as failing to secure data or clicking on a spear-phishing link, to malicious activities like theft, sabotage, espionage, unauthorized disclosure of classified information or even violence.”

We have tackled insider threats at length on several occasions on the Malwarebytes Labs blog. Now is always the right time for organizations to give this cybersecurity threat some serious thought and plan on how they can combat it. After all, if businesses are only concerned about attacks from the outside, at some point they’ll be hit with attacks from the inside. The good news is organizations won’t have to wait for next September to start dealing with this problem today.

The CERT Insider Threat Center offers a list of common-sense recommendations for mitigating insider threats that every cybersecurity, managerial, legal, and human resource personnel should have on hand. The Center also showcases a trove of publications if organizations would like to go deeper.

We’d also like to add our own blog on the various types of insiders your organization may encounter and certain steps you can take to nip insider risks in the bud. We also paid closer attention to workplace violence, a type of insider threat that is often forgotten.

Stay safe! And remember: When you see something, say something.

The post Report: Organizations remain vulnerable to increasing insider threats appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Explained: What is containerization?

Malwarebytes - Wed, 12/04/2019 - 17:00

Containerization. Another one of those tech buzzwords folks love to say but often have no idea what it means. A better way to organize children’s toys? The act of bringing tupperware out to dinner to safely transport home leftovers? Another name for Russian dolls?

Containerization is, of course, none of those things. But its definition might be best captured in a quick example rather than a description:

Eliza wrote a program on her Windows PC to streamline workflow between her department, a second department within the company, and a third outside agency. She carefully configured the software to eliminate unnecessary steps and enable low-friction sharing of documents, files, and other assets. When she proudly demoed her program on her manager’s desktop Mac, however, it crashed within seconds—despite working perfectly on her own machine.

Containerization was invented to tackle that problem.

What is containerization?

In traditional software development, programmers code an application in one computing environment that may run with bugs or errors when deployed in another, as was the case with Eliza above. To solve for this, developers bundle their application together with all its related configuration files, libraries, and dependencies required to run in containers hosted in the cloud. This method is called containerization.
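In Docker terms, for example, that bundle is declared once in an image definition and then runs the same way wherever the image is deployed. A purely illustrative sketch (the file names, base image, and entry point here are hypothetical, not from any real project):

```dockerfile
# Illustrative only: package a hypothetical Python app with its dependencies.
# The base image pins the OS userland and the language runtime version.
FROM python:3.8-slim
WORKDIR /app
# Freeze third-party libraries into the image itself.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The same entry point runs on any host that can run the image.
CMD ["python", "workflow.py"]
```

Had Eliza shipped her program as an image like this, her manager’s Mac would have run the identical environment she developed against.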

The goal of containerization is to allow applications to run efficiently and bug-free across different computing environments, whether that’s a desktop, a virtual machine, or a Windows or Linux operating system. The demand for applications to run consistently among different systems and infrastructures has moved development of this technology along at a rapid pace. The use of different platforms within business organizations and the move to the cloud are undoubtedly huge contributors to this demand.

Containerization is almost always conducted in a cloud environment, which contributes to its scalability. While some of the most popular cloud services are known for data storage—Google Drive, iCloud, Box—other public cloud computing companies, such as Amazon Web Services, Oracle, and Microsoft Azure, allow for containerization. In addition, there are private cloud solutions, in which companies host data on an enterprise intranet or data center.

The difference between containerization and virtualization

Containerization is closely related to virtualization, and it often helps to compare and contrast the two in order to get a better understanding of how containerization can help organizations build and run applications.

Containers, unlike virtual machines (VMs), do not require the overhead of an entire operating system (OS). That means containerization is less demanding in the hardware department and needs fewer computing resources than what you’d need to run the same applications on virtual machines.

Organizations could even opt to share common libraries and other files among containers to further reduce the overhead. Sharing of these files happens at the application layer, whereas VMs operate at the hardware layer. As a result, you can run more application containers on top of a shared common OS.

Image courtesy of ElectronicDesign.

VMs are managed by a hypervisor (aka virtual machine monitor) and run on virtualized hardware, while containerized systems take one of two approaches: (1) the underlying host provides operating system services and isolates the applications using virtual-memory hardware, or (2) the container manager provides an abstract OS for the containers. Either way, this eliminates a layer and, in doing so, saves resources and provides a quicker startup time when necessary.

Why use containerization?

There are a few reasons why organizations decide to use containerization:

  • Portability: You can run the application on any platform and in any infrastructure. Switch to a different cloud supplier? No problem.
  • Isolation: Mishaps and faults in one container do not carry over to other containers. This means maintenance, development, and troubleshooting can be done without downtime of the other containers.
  • Security: The strict isolation from other applications and the host system also results in better security.
  • Management: You don’t have to think about the effects on other applications when you update, add further developments, or even rollback.
  • Scalability: Instances of containers can be copied and deployed in the cloud to match the growing needs of the organization.
  • Cost effectiveness: Last but not least, compared to virtualized solutions, containerization is much more efficient, reducing costs for server instances, OS licenses, and hardware.
Security risks for containers

Since containerization started out as a means for efficient development and cost savings and quickly ballooned in adoption and implementation, security was unfortunately a low priority in its design—as it often is in tech innovation.

Yet containers have a large attack surface, as they tend to include complex applications whose components communicate with each other over the network. On top of the standard vulnerabilities introduced by various application components, misconfigurations create additional security gaps, such as inadequate authorization. These vulnerabilities are not limited to the top layer of the application.

Add to these vulnerabilities the limitations of some security vendors, whose enterprise programs may not be able to protect containers running in the cloud environment. Due to the isolated nature of the containers, some security solutions may not be able to scan inside active containers or monitor their behavior as they would when running on a virtual machine.

Containers’ security postures are further weakened by a likely lack of awareness by their users about these limitations, which might encourage less stringent oversight. However, there are already prime examples of threat actors taking advantage of containerization developers’ security indifference.

On November 26, ZDNet reported that a hacking group was mass-scanning the Internet looking for Docker platforms with open API endpoints to deploy a classic XMRig cryptominer. What’s worse is that they also disabled security software running on those instances. Containerization users must take care not to leave admin ports and API endpoints exposed online, otherwise cybercriminals can easily wreak havoc. If they were able to install cryptominers, what’s to stop them from dropping ransomware?

Security recommendations

In order to shore up containers so that applications can run efficiently and bug-free in diverse environments while remaining secure, there are a few simple pieces of advice developers and operators should keep in mind.

Probably the most important: When copying the runtime system, developers, managers, or operators will need to check whether the latest patches and updates have been applied for all components. Otherwise, programmers could copy outdated, insecure, or even infected libraries to the next container. One common piece of advice is to store a model container in a secure place that can be updated, patched, and scanned before it is copied to work environments.

Second: When migrating a container to a different environment, the operator will have to take into account the possible vulnerabilities in both the container and new environment, as well as the influence of the container’s behavior on the new environment. Despite its portability, the container might require additional demands for safety measures or configuration of the container management system.

The rest of containerization’s security efforts can be summed up in a few short bullets:

  • Check for exposed endpoints and close the ports, if there are any.
  • Limit direct interaction between containers.
  • Use providers that can assist with security know-how, if it’s not available in house.
  • Use container isolation to your advantage, but also be aware of the consequences. Configure containers to be read-only.
  • If available, use your container orchestration platform to enhance and keep tabs on security.
  • Consider security solutions that can scan and protect containers in the cloud working environment if your current provider is unable.
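The first bullet can be partially automated. A rough sketch (host and port are placeholders; by convention, the unauthenticated Docker Engine API listens on TCP 2375) that checks whether an endpoint answers anonymous HTTP requests:

```python
import socket

def docker_api_exposed(host: str, port: int = 2375, timeout: float = 3.0) -> bool:
    """Return True if an unauthenticated HTTP service answers on the given port.

    The plain-text Docker Engine API conventionally listens on TCP 2375; a
    200 response to an anonymous GET suggests the daemon is exposed.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"GET /version HTTP/1.0\r\nHost: %b\r\n\r\n"
                         % host.encode())
            reply = sock.recv(1024)
    except OSError:
        # Connection refused or timed out: nothing listening in plain text.
        return False
    return reply.startswith(b"HTTP/") and b"200" in reply.split(b"\r\n", 1)[0]
```

Running a check like this against your own infrastructure (never someone else’s) is a quick way to catch the exact misconfiguration the cryptomining campaign above was mass-scanning for.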

These additional security measures might feel counterintuitive, as developers originally set out to have standard containers that behave the same way in every environment and for every user. But they are minor, simple steps that can go a long way in protecting an organization’s data and applications.

Therefore, use containerization as it suits best, but always keep security in mind.

The post Explained: What is containerization? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

There’s an app for that: web skimmers found on PaaS Heroku

Malwarebytes - Wed, 12/04/2019 - 16:00

Criminals love to abuse legitimate services—especially platform-as-a-service (PaaS) cloud providers—as they are a popular and reliable hosting commodity used to support both business and consumer ventures.

Case in point, in April 2019 we documented a web skimmer served on code repository GitHub. Later on in June, we observed a vast campaign where skimming code was injected into Amazon S3 buckets.

This time, we take a look at a rash of skimmers found on Heroku, a container-based, cloud PaaS owned by Salesforce. Threat actors are leveraging the service not only to host their skimmer infrastructure, but also to collect stolen credit card data.

All instances of abuse found have already been reported to Heroku and taken down. We would like to thank the Salesforce Abuse Operations team for their swift response to our notification.

Abusing cloud apps for skimming

Developers can leverage Heroku to build apps in a variety of languages and deploy them seamlessly at scale.

Heroku has a freemium model, and new users can experiment with the platform’s free web hosting services with certain limitations. Crooked members of the Magecart cabal have been registering free accounts with Heroku to host their skimming business.

Their web skimming app consists of three components:

  • The core skimmer that will be injected into compromised merchant sites, responsible for detecting the checkout URL and loading the next component.
  • A rogue iframe that will overlay the standard payment form meant to harvest the victim’s credit card data.
  • The exfiltration mechanism for the stolen data that is sent back in encoded format.
iframe trick

Compromised shopping sites are injected with a single line of code that loads the remote piece of JavaScript. Its goal is to monitor the current page and load a second element (a malicious credit card iframe) when the current browser URL contains the Base64 encoded string Y2hlY2tvdXQ= (checkout).
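The Base64 gate is trivial to reproduce. The skimmer itself is JavaScript; this Python sketch (ours, for illustration) just shows the encoding trick that keeps the string “checkout” out of the script body:

```python
import base64

# The encoded marker quoted above; decoding it yields "checkout".
MARKER = "Y2hlY2tvdXQ="

def is_checkout_page(url: str) -> bool:
    """Mimic the skimmer's gate: decode the marker, look for it in the URL."""
    needle = base64.b64decode(MARKER).decode()
    return needle in url
```

Encoding the keyword this way is a cheap evasion: a naive scan of the injected script for the word “checkout” finds nothing.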

The iframe is drawn above the standard payment form and looks identical to it, as the cybercriminals reuse the same cascading style sheet (CSS) as the merchant site’s legitimate form.

Finally, the stolen data is exfiltrated, after which victims will receive an error message instructing them to reload the page. This may be because the form needs to be repopulated properly, without the iframe this time.

Several Heroku-hosted skimmers found

This is not the only instance of a credit card skimmer found on Heroku. We identified several others using the same naming convention for their script, all seemingly becoming active within the past week.

Another one on @heroku

hxxps://stark-gorge-44782.herokuapp[.]com/integration.js. Fake form in an iframe. Data goes to hxxps://stark-gorge-44782.herokuapp[.]com/config.php?id=

— Denis (@unmaskparasites) December 2, 2019

In one case, the threat actors may have forgotten to use obfuscation. The code shows vanilla skimming, looking for specific fields to collect and exfiltrate using the window.btoa(JSON.stringify(result)) method.
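That window.btoa(JSON.stringify(result)) call simply Base64-encodes a JSON blob before exfiltration. A Python equivalent (ours, handy for decoding captured exfil traffic; the field names below are hypothetical):

```python
import base64
import json

def encode_exfil(record: dict) -> str:
    """Equivalent of JavaScript's window.btoa(JSON.stringify(record)).

    JSON.stringify emits no whitespace, hence the compact separators.
    """
    compact = json.dumps(record, separators=(",", ":"))
    return base64.b64encode(compact.encode()).decode()

def decode_exfil(blob: str) -> dict:
    """Reverse the encoding to inspect what a skimmer sent."""
    return json.loads(base64.b64decode(blob))
```

The encoding adds no real secrecy; it only makes stolen card numbers less obvious in network captures and server logs.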

We will likely continue to observe web skimmers abusing more cloud services as they are a cheap (even free) commodity they can discard when finished using it.

From a detection standpoint, skimmers hosted on cloud providers may cause some issues with false positives. For example, one cannot blacklist a domain used by thousands of other legitimate users. However, in this case we can easily do fully qualified domain name (FQDN) detections and block just that malicious user.

Indicators of Compromise (IOCs)

Skimmer hostnames on Heroku


The post There’s an app for that: web skimmers found on PaaS Heroku appeared first on Malwarebytes Labs.

Categories: Techie Feeds

New version of IcedID Trojan uses steganographic payloads

Malwarebytes - Tue, 12/03/2019 - 18:06

This blog post was authored by @hasherezade, with contributions from @siri_urz and Jérôme Segura.

Security firm Proofpoint recently published a report about a series of malspam campaigns they attribute to a threat actor called TA2101. Originally targeting German and Italian users with Cobalt Strike and Maze ransomware, the later wave of malicious emails were aimed at the US and pushing the IcedID Trojan.

During our analysis of this spam campaign, we noticed changes in how the payload was implemented, in particular some rewritten code and new obfuscation. For example, the IcedID Trojan is now being delivered via steganography, with the data encrypted and encoded within the content of a valid PNG image. According to our research, those changes were introduced in September 2019 (in August 2019, the old loader was still in use).

The main IcedID module is stored without the typical PE header and is run by a dedicated loader that uses a custom headers structure. Our security analyst @hasherezade previously described this technique in a talk at the SAS conference (Funky Malware Formats).

In this blog post, we take a closer look at these new payloads and describe their technical details.


Our spam honeypot collected a large number of malicious emails containing the “USPS Delivery Unsuccessful Attempt Notification” subject line.

Each of these emails contains a Microsoft Word document as attachment allegedly coming from the United States Postal Service. The content of the document is designed to lure the victim into enabling macros by insinuating that the content had been encoded.

Having a look at the embedded macros, we can see the following elements:

There is a fake error message displayed to the victim, but more importantly, the IcedID Trojan authors have hidden the malicious instructions within a UserForm as labels.

The labels containing numerical ASCII values

The macro grabs the text from the labels, converts it, and uses it during execution:

url1 = Dcr(GH1.Label1.Caption)
path1 = Dcr(GH1.Label2.Caption)

For example:

104 116 116 112 58 47 47 49 48 52 46 49 54 56 46 49 57 56 46 50 51 48 47 119 111 114 100 117 112 100 46 116 109 112

converts to:

hxxp://104[.]168[.]198[.]230/wordupd.tmp

while the second label converts to: C:\Windows\Temp\ered.tmp

The file wordupd.tmp is an executable downloaded with the help of the URLDownloadToFileA function, saved to the given path and run. Moving on, we will take a closer look at the functionality and implementation of the downloaded sample.
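The label decoding is easy to replicate. A Python sketch of the conversion (the macro's own helper is the VBA Dcr function quoted above; this reimplementation is ours):

```python
def decode_labels(caption: str) -> str:
    """Convert a space-separated list of decimal ASCII codes back to text,
    mimicking what the macro does with the numeric UserForm label captions."""
    return "".join(chr(int(code)) for code in caption.split())

# The first label's caption, as quoted in the maldoc analysis above;
# it decodes to the (malicious) download URL for wordupd.tmp.
url1 = decode_labels(
    "104 116 116 112 58 47 47 49 48 52 46 49 54 56 46 49 57 56 "
    "46 50 51 48 47 119 111 114 100 117 112 100 46 116 109 112"
)
```

Storing the strings as ASCII code lists in form labels is a common macro obfuscation: static scanners see only numbers, while the macro rebuilds the URL and drop path at runtime.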

Behavioral analysis

As before, IcedID has been observed injecting into svchost and running under its cover. Depending on the configuration, it may or may not download other executables, including TrickBot.

Dropped files

The malware drops various files on the disk. For example, in %APPDATA%, it saves the steganographically obfuscated payload (photo.png) and an update of the downloader:

It also creates a new folder with a random name, where it saves a downloaded configuration in encrypted form:

Inside the %TEMP% folder, it drops some non-malicious helper elements: sqlite32.dll (used for reading the SQLite databases kept by web browsers) and a certificate that will be used for intercepting traffic:

Looking at the certificate, we can see that it was signed by VeriSign:


The application achieves persistence with the help of a scheduled task:

The task has two triggers: at the user login and at the scheduled hour.

Overview of the traffic

Most of the traffic is SSL encrypted. We can also see the use of websockets and addresses in a format such as “data2.php?<key>”, “data3.php?<key>”.

Attacking browsers

The IcedID Trojan is known as a banking Trojan, and indeed, one of its important features is the ability to steal data related to banking transactions. For this purpose, it injects its implants into browsers, hooks the API, and performs a Man-In-The-Browser attack.

Inside the memory of the infected svchost process we can see the strings with the configuration for webinjects. Webinjects are modular (typically HTML and JavaScript code injected into a web page for the purpose of stealing data).

Webinjects configuration in the memory of infected svchost

The core bot that runs inside the memory of svchost observes processes running on the system, and injects more implants into browsers. For example, looking at Mozilla Firefox:

The IcedID implant in the browser’s memory

By scanning the process with PE-sieve, we can detect that some of the DLLs inside the browser have been hooked and their execution was redirected to the malicious module.

In Firefox, the following hooks have been installed:

  • nss3.dll : SSL_AuthCertificateHook->2c2202[2c1000+1202]
  • ws2_32.dll : connect->2c2728[2c1000+1728]

A different set was observed in Internet Explorer:

  • mswsock : hook_0[7852]->525d0[implant_code+15d0]
  • ws2_32.dll : connect->152728[implant_code+1728]

The IcedID module running inside the browser’s memory is responsible for applying the webinjects, installing malicious JavaScript into attacked pages.

Fragment of the injected script

The content of the inlined webinject script is available here: inject.js.

It also communicates with the main bot that is inside the svchost process. The main bot coordinates the work of all the injected components, and sends the stolen data to the Command and Control server (CnC).

Due to the fact that the communication is protected by HTTPS, the malware must also install its own certificate. For example, this is the valid certificate for the Bank of America website:

And in contrast, the certificate used by the browser infected by IcedID:

Overview of the changes

As we mentioned, the core IcedID bot, as well as the dedicated loader, went through some refactoring. In this comparative analysis, we used the following old sample: b8113a604e6c190bbd8b687fd2ba7386d4d98234f5138a71bcf15f0a3c812e91

The detailed analysis of this payload can be found here: [1][2][3].

The old loader vs. new

The loader of the previous version of the IcedID Trojan was described in detail here, and here. It was a packed PE file that used to load and inject a headerless PE.

The main module was injected into svchost:

The implants in the svchost’s memory

The implanted PE was divided into two sections, and the first memory page (representing the header) was empty. This type of payload is stealthier than the more common full PE injection. However, it was possible to reconstruct the header and analyze the sample like a normal PE. (An example of the reconstructed payload is available here: 395d2d250b296fe3c7c5b681e5bb05548402a7eb914f9f7fcdccb741ad8ddfea).

The redirection to the implant was implemented by hooking the RtlExitUserProcess function within svchost’s NTDLL.

When svchost tried to terminate, it instead triggered a jump into the injected PE’s entry point.

The hooked RtlExitUserProcess redirects to payload’s EP

The loader was also filling the pointer to the data page within the payload. We can see this pointer being loaded at the beginning of the payload’s execution:

In the new implementation, there is one more intermediate loader element implemented as shellcode. The diagram below shows the new loading chain:

The shellcode has functionality similar to what was previously implemented by the PE-based loader. First, it injects itself into svchost.

Then it decompresses and injects the payload, which as before is a headerless PE (analogous to the one described here).

Comparing the core

The implementation of the core bot is modified. Yet, inside the code we can find some strings known from the previous sample, as well as a similar set of imported API functions. We can also see some matching strings and fragments of implemented logic.

Fragment of the code from the old implementation

An analogous fragment from the new sample:

Fragment of the code from the new implementation

Comparing both reconstructed samples with the help of BinDiff shows that there are quite a few differences and rewritten parts. Yet, there are parts of code that are the same in both, which proves that the codebase remained the same.

Preview of the similar functions Preview of different/rewritten functions

Let’s follow the execution flow of all the elements from the new IcedID package.

The downloader

In the current delivery model, the first element of IcedID is a downloader. It is a PE file, packed by a crypter. The packing layer changes from sample to sample, so we will omit its description. After unpacking it, we get the plain version: fbacdb66748e6ccb971a0a9611b065ac.

Internally, this executable is simple and not further obfuscated. We can see that it first queries the CnC, trying to fetch the second stage by requesting a photo.png. It passes a generated ID in the URL. Example:


Fragment of the function responsible for generating the image URL

The downloader fetches the PNG with the encoded payload. Then it loads the file, decodes it, and redirects the execution there. Below we can see the responsible function:

Once the PNG is downloaded, it is saved to disk so it can be loaded again at system restart, with the downloader then acting as a runner for this obfuscated format. In this way, the core executable is revealed only in memory and never stored on disk as an EXE file.

The “photo.png” looks like a valid graphic file:

Preview of the “photo.png”

In this fragment of code, we can see that the data from the PNG (the section starting from the “IDAT” tag) is first decoded to raw bytes, and then those bytes are passed to a further decoding function.
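The article does not reproduce the decoder itself, but the carrier side is standard PNG. Below is a minimal Python sketch of pulling the raw IDAT payload out of a PNG file; the demo PNG and the payload bytes are fabricated purely for illustration:

```python
import struct
import zlib

def png_chunks(blob):
    """Iterate (type, data) pairs over the chunks of a PNG byte blob."""
    assert blob[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(blob):
        (length,) = struct.unpack(">I", blob[pos:pos + 4])
        ctype = blob[pos + 4:pos + 8]
        data = blob[pos + 8:pos + 8 + length]
        yield ctype, data
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (crc)

def extract_idat(blob):
    """Concatenate the payloads of all IDAT chunks."""
    return b"".join(d for t, d in png_chunks(blob) if t == b"IDAT")

def make_chunk(ctype, data):
    """Build a well-formed PNG chunk (helper for the demo below)."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

# Demo: a minimal PNG carrying a fake payload inside its IDAT chunk
payload = b"\xde\xad\xbe\xef"
png = (b"\x89PNG\r\n\x1a\n"
       + make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + make_chunk(b"IDAT", payload)
       + make_chunk(b"IEND", b""))

print(extract_idat(png).hex())  # deadbeef
```

In the real sample, the bytes recovered this way are not valid image data but the encrypted next stage.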

The algorithm used for decoding the bytes:

The PNG is decrypted and injected into the downloader. In this case, the decoded content turns out to be a shellcode module rather than a PE.

The downloader redirecting the execution into the shellcode’s entry point

The loader passes one argument to the shellcode: the base address at which it was loaded.

The loader (shellcode)

As mentioned before, this stage is implemented as position-independent code (shellcode). The dumped sample is available here: 624afab07528375d8146653857fbf90d.

This shellcode-based loader replaced the previously described (sources: [1][2]) loader element that was implemented as a PE file. First, it runs within the downloader:

As we can see from the downloader’s code, the shellcode’s entry point must first be fetched from a simple header at the beginning of the decoded module. This header stores additional information essential for loading the next element:

As this module is no longer a PE file, its analysis is more difficult. All the APIs used by the shellcode are resolved dynamically:

The strings are composed on the stack:

To make the deobfuscation easier, we can follow the obfuscated flow with the help of a PIN tracer. The log from tracing this stage (on a 32-bit system) shows APIs indicating code injection, along with their offsets:

09c;shellcode's Entry Point

Indeed, the shellcode injects its own copy, passing its entry point to the APC Queue. This time, some additional parameters are added as a thread context.

Setting parameters of the injected thread

Once the shellcode is executed from inside svchost, an alternative execution path is taken: the shellcode becomes a loader for the core bot. The core element is stored in a compressed form within the shellcode’s body and is first decompressed.

From previous experiments, we know that the payload follows the typical structure of a PE file, yet it has no headers. Often, malware authors erase headers in memory once the payload is loaded. Yet, that is not the case here. In order to make the payload stealthier, the authors didn’t store the original headers of this PE at all. Instead, they created their own minimalist header that is used by the internal loader.

First, the shellcode finds the next module by parsing its own header:

The shellcode also loads the imports of the payload:

Below, we can see the fragment of code responsible for following the custom headers definition, and applying protection on pages. After the next element is loaded, execution is redirected to its entry point.

The entry point of the next module where the function expects the pointer to the data to be supplied:

The supplied data is appended at the end of the shellcode, and contains: the path of the initial executable, the path of the downloaded payload (photo.png), and other data.

Note that the described analysis was performed on a 32-bit system. On a 64-bit system, the shellcode takes an alternative execution path, and a 64-bit version of the payload is loaded with the help of the Heaven’s Gate technique. Yet, the features of both payload versions are identical.

The Heaven’s Gate within the shellcode: switch to 64-bit mode

Reconstructing the PE

In order to make analysis easier, it is always beneficial to reconstruct the valid PE header. There are two approaches to this problem:

  1. Manually finding and filling in all the PE artifacts, such as sections, imports, and relocations (this becomes a problem if all of those elements are customized by the authors, as in the case of the Ocean Lotus sample)
  2. Analyzing in detail the loader and reconstructing the PE from the custom header

Since we have access to the loader’s code, we can go for the second, more reliable approach: observe how the loader processes the data and reconstruct the meaning of the fields.
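Once the meaning of the fields is recovered, mapping them in a script is straightforward. The sketch below is purely illustrative: the field names, their order, and the header size are assumptions for the example, not the real IcedID layout:

```python
import struct

# Hypothetical field layout: the real IcedID header must be recovered
# from the loader's code; these names and offsets are illustrative only.
FIELDS = ("entry_point_rva", "import_dir_va", "reloc_dir_va",
          "section_count", "image_size")
FMT = "<5I"  # five little-endian DWORDs

def parse_custom_header(blob):
    """Map the leading DWORDs of a headerless module onto named fields."""
    values = struct.unpack_from(FMT, blob, 0)
    return dict(zip(FIELDS, values))

# Demo on a fabricated header followed by fake module bytes
blob = struct.pack(FMT, 0x1000, 0x5000, 0x6000, 4, 0x8000) + b"\x90" * 16
hdr = parse_custom_header(blob)
print(hex(hdr["entry_point_rva"]))  # 0x1000
```

With such a mapping in hand, rebuilding a standard PE header is a matter of copying the recovered values into the corresponding `IMAGE_NT_HEADERS` fields.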

A fragment of the loader’s code where the sections are processed:

The custom header reconstructed based on the analysis:

Fortunately, in this case the malware authors customized only the PE header. The Data Directory elements (imports and relocations) are kept in a standard form, so this part does not need to be converted.

The converter from this format to PE is available here:

Interestingly, the old version of IcedID used a similar custom format, but with one modification. In the past, there was one more DWORD-sized field before the Import Directory VA. So, the latest header is shorter by one DWORD than the previous one.

The module in the old format: bbd6b94deabb9ac4775befc3dc6b516656615c9295e71b39610cb83c4b005354

The core bot (headerless PE)

6aeb27d50512dbad7e529ffedb0ac153 – a reconstructed PE

Looking inside the strings of this module, we can guess that this element is responsible for all the core malicious operations performed by this malware. It communicates with the CnC server, reads the sqlite databases in order to steal cookies, installs its own certificate for Man-In-The-Browser attacks, and eventually downloads other modules.

We can see that this is the element that was responsible for generating the observed requests to the CnC:

During the run, the malware is under constant supervision from the CnC. The communication with the server is encrypted.

String obfuscation

The majority of the strings used by the malware are obfuscated and decoded before use. The algorithm used for decoding is simple:

In order to decode the strings statically, we can reimplement the algorithm and supply to it encoded buffers. Another easier solution is a decoder that loads the original malware and uses its function, as well as the encoded buffers given by offset. Example available here.
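The actual routine is not reproduced here. As an illustration of what reimplementing such a “simple” scheme looks like, below is a sketch of a hypothetical rolling-XOR decoder; both the algorithm and the key are stand-ins, not IcedID’s real ones:

```python
def rolling_xor_decode(buf, key):
    """Decode buf by XOR-ing each byte with a rolling single-byte key.
    NOTE: this is NOT the actual IcedID routine, just a stand-in with
    the same 'simple, easy to reimplement' character described above."""
    out = bytearray()
    k = key
    for b in buf:
        out.append(b ^ k)
        k = (k + 1) & 0xFF  # the key rolls forward one step per byte
    return bytes(out)

def rolling_xor_encode(buf, key):
    """Inverse operation (XOR with the same rolling key is symmetric);
    used here only to build a test buffer."""
    return rolling_xor_decode(buf, key)

encoded = rolling_xor_encode(b"hidden_config_string", 0x5A)
print(rolling_xor_decode(encoded, 0x5A))  # b'hidden_config_string'
```

In practice, once the real algorithm is lifted from the disassembly, the same reimplement-and-replay pattern applies.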

Decoding the strings is important for further analysis, especially because, in this case, we can find debug strings left by the developers that describe the actions performed by the malware in particular fragments of code.

A list of some of the decoded strings is available here.

Available actions

The overview of the main function of the bot is given below:

The bot starts by opening a socket. Then, it beacons to the CnC and initializes threads for some specific actions: MiTM proxy, browser hooking engine, and a backconnect module (backdoor).

It also calls to a function that initializes handlers, responsible for managing a variety of available actions. The full list:

By analyzing the handlers more closely, we notice that, similar to the first element, the main bot retrieves various elements as steganographically protected modules. The function responsible for decoding the PNG files is analogous to the one found in the initial downloader:

Those PNGs are used to carry various updates for the malware: for example, an updated list of URLs, as well as other configuration files.

Execution flow controlled by the CnC

The malware’s backconnect feature allows the attacker to deploy various commands on the victim machine. The CnC can also instruct the bot to decode other malicious modules stored inside it, which will then be deployed in a new process. For example:

If a particular command from the CnC is received, the bot decompresses another buffer stored inside the sample and injects it into a new instance of svchost.

The way in which this injection is implemented reminds us of the older version of the loader. First, the buffer is decompressed with the help of RtlDecompressBuffer:

Then, memory is allocated at the preferred address 0x3000.

Some functions from NTDLL and other parameters will be copied to the structure, stored at the beginning of the shellcode.

We can see there are some functions that will be used by the shellcode to load another embedded PE.

As in the old loader, the redirection to the new entry point is implemented via a hook set on the RtlExitUserProcess function:

After the buffer gets decompressed, we can see another piece of shellcode:

This shellcode is an analogous loader of a headerless PE module. Inside, we can see the custom version of the PE header that will be used by the loader:

The custom header, containing minimal info from the PE header

Dumped shellcode: 469ef3aedd47dc820d9d64a253652d7436abe6a5afb64c3722afb1ac83c3a3e1

This element is an additional backdoor that deploys a hidden VNC on demand. It is also referred to by the authors as the “HDESK bot” (Help Desk bot), because it gives the attacker direct access to the victim machine, as if it were a help-desk service. Converted to PE: 2959091ac9e2a544407a2ecc60ba941b

The “HDESK bot” deploys a hidden VNC to control the victim machine

Below, we will analyze the selected features implemented by the core bot. Note that many of the features are deployed on demand—depending on the command given by the CnC. In the observed case, the bot was also used as a downloader of the secondary malware, TrickBot.

Installing its own certificate

The malware installs its own certificate. First, it drops the generated file into the %TEMP% folder. Then, the file is loaded and added to the Windows certificate store.

Fragment of Certificate generation function:

Calling the function to add the certificate to store:

Stealing passwords from IE

We can see that this bot goes after various saved credentials. Among the different methods used, we identified the stealing of data from the Credential Store. The method used is similar to the one described here.

We can see that it uses the mentioned GUID “abe2869f-9b47-4cd9-a358-c22904dba7f7” that was used to salt the credentials. After reading the credentials from the store, the bot undoes the salting operation in order to get the plaintext.

Stealing saved email credentials

The bot tries to use every opportunity to extract passwords from the victim machine, also going after saved email credentials.

Stealing cookies

As we observed during the behavioral analysis, the malware drops sqlite3.dll in the temp folder. This module is then loaded and used to query the browsers’ databases of saved cookies.

Fragment of code responsible for loading sqlite module

The malware searches for the files containing the cookies of particular browsers:

We can see the content of the queries after decoding strings:

SELECT host, path, isSecure, expiry, name, value FROM moz_cookies

It targets Firefox, as well as Chrome and Chromium-based browsers:

The list of targeted Chromium browsers

Fragment of the code performing queries:

The list of queries to Chrome’s database:

SELECT name, value FROM autofill

SELECT guid, company_name, street_address, city, state, zipcode, country_code FROM autofill_profiles

SELECT guid, number FROM autofill_profile_phones

SELECT guid, first_name, middle_name, last_name, full_name FROM autofill_profile_names

SELECT card_number_encrypted, length(card_number_encrypted), name_on_card, expiration_month || "/" ||expiration_year FROM credit_cards

SELECT origin_url,username_value,length(password_value),password_value FROM logins WHERE username_value <> ''

SELECT host_key, path, is_secure, (case expires_utc when 0 then 0 else (expires_utc / 1000000) - 11644473600 end), name, length(encrypted_value), encrypted_value FROM cookies

The list of queries to Firefox’s database:

SELECT host, path, isSecure, expiry, name, value FROM moz_cookies

SELECT fieldname, value FROM moz_formhistory
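The queries above can be reproduced against a stand-in database to see exactly what the bot harvests. The sketch below builds a minimal mock of Chrome’s cookies table (the schema subset and values are fabricated) and runs the quoted query, including its conversion from the Windows FILETIME epoch (1601) to Unix time:

```python
import sqlite3

# Build an in-memory stand-in for Chrome's Cookies database
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE cookies (
    host_key TEXT, path TEXT, is_secure INTEGER,
    expires_utc INTEGER, name TEXT, encrypted_value BLOB)""")
# Chrome stores timestamps as microseconds since 1601-01-01
con.execute("INSERT INTO cookies VALUES (?, ?, ?, ?, ?, ?)",
            ("example.com", "/", 1, 13285000000000000, "sid", b"\x01\x02"))

# The query quoted above: note the epoch shift of 11644473600 seconds
# between the Windows FILETIME base (1601) and Unix time (1970)
rows = con.execute(
    "SELECT host_key, path, is_secure, (case expires_utc when 0 then 0 "
    "else (expires_utc / 1000000) - 11644473600 end), name, "
    "length(encrypted_value), encrypted_value FROM cookies").fetchall()

host, path, secure, unix_expiry, name, enc_len, enc = rows[0]
print(host, unix_expiry)  # the expiry is now a Unix timestamp
```

Note that `encrypted_value` comes out still DPAPI-encrypted; decrypting it is a separate step performed on the victim machine.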

All the found files are packed into a TAR archive and sent to the CnC.

Similarly, it creates a “passff.tar” archive with stolen Firefox profiles:

Hooking browsers

As mentioned earlier, the malware attacks and hooks browsers. Since analogous functionality is achieved by different functions in different browsers, the set of installed hooks may be unique to each.

First, the malware searches for targets among the running processes. It uses the following algorithm:

It is similar to the one from the previous version (described here), yet we can see a few changes: the checksums are modified, and some additional checks have been added. Still, the list of attacked browsers is the same, including the most popular ones: Firefox, MS Edge, Internet Explorer, and Chrome.

The browsers are first infected with the dedicated IcedID module. Just like all the modules in this edition of IcedID, the browser implant is a headerless PE file. Its reconstructed version is available here: 9e0c27746c11866c61dec17f1edfd2693245cd257dc0de2478c956b594bb2eb3.

After being injected, this module finds the appropriate DLLs in the memory of the process and sets redirections to its own code:

Parsing the instructions and installing the hooks:

Then, the selected API functions are intercepted and redirected to the plugin. Usually the hooks are installed at the beginning of a function, but there are exceptions to this rule. For example, in the case of Internet Explorer, a function within mswsock.dll has been intercepted in the middle of its body:

Looking at the elements in memory involved in intercepting the calls: the browser implant (headerless PE), and the additional memory page:

Example of the hook in Firefox:

Step 1: the function SSL_AuthCertificateHook has a jump redirecting to the implanted module:

Step 2: The implanted module calls the code from the additional page with appropriate parameters:

Step 3: The code at the additional page is a patched fragment of the original function. After executing the modified code, it goes back to the original DLL.

The functionality of this hook didn’t change from the previous version.


The bot gets its configuration from the CnC in the form of the .DAT files mentioned before. First, a file is decrypted with the RC4 algorithm. The decrypted output must start with the “zeus” keyword, and is further encoded by a custom algorithm. Scripts dedicated to each site are identified by a script ID.
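RC4 is a standard, symmetric cipher, so the first decryption layer and the “zeus” sanity check can be sketched directly. The key and the config bytes below are fabricated, and the subsequent custom encoding layer is not reproduced:

```python
def rc4(key, data):
    """Plain RC4 (KSA + PRGA). RC4 is its own inverse, so the same
    function both encrypts and decrypts."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for b in data:                            # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# The key is fabricated for the demo; each real .DAT file ships with
# its own RC4 key.
key = b"demo-key"
blob = rc4(key, b"zeus\x00...webinject config...")

decoded = rc4(key, blob)
assert decoded.startswith(b"zeus"), "not a valid config file"
print(decoded[:4])  # b'zeus'
```

The magic-keyword check is a cheap way for the bot to verify that decryption succeeded before parsing the rest of the file.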

After the files are loaded and decoded, we can see the content:

There are multiple types of webinjects available to perform by the bot:

Depending on the configuration, the bot may replace some parts of the website’s code, or add some new, malicious scripts.

Executing remote commands

In case the commands implemented by the bot are not enough for the operator’s needs, the bot also offers a feature for executing arbitrary commands via the command line.

The output of the executed commands is sent back to the malware via a named pipe, and then relayed to the CnC.
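The relay pattern can be sketched with ordinary pipes. The real bot uses a Windows named pipe and its own CnC protocol; this Python stand-in only illustrates capturing a command’s output for forwarding:

```python
import subprocess

def run_and_capture(cmd):
    """Run a shell command, capturing its output through a pipe,
    as a rough analogy of how the bot collects command results
    before shipping them back to the CnC."""
    proc = subprocess.run(cmd, shell=True, capture_output=True)
    return proc.stdout

output = run_and_capture("echo hello")
print(output)
```

In the malware, the captured bytes are then written to the named pipe that the core bot reads from.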

Mature banker and stealer

As we can see from the above analysis, IcedID is not only a banking Trojan, but a general-purpose stealer able to extract a variety of credentials. It can also work as a downloader for other modules, including covert ones that look like harmless PNG files.

This bot is mature, written by experienced developers. It deploys various typical techniques, including Zeus-style webinjects, hooks for various browsers, hidden VNC, and backconnect. Its authors also used several known obfuscation techniques. In addition, the use of customized PE headers is an interesting bonus, slowing down static analysis.

In recent updates, the malware authors equipped the bot with steganography. Steganography is not a novelty in the threat landscape, but it is a feature that makes this malware a bit stealthier.

Indicators of Compromise

Sandbox runs:

  • Execution:
    • Command-Line Interface
    • Execution through Module Load
    • Scheduled Task
    • Scripting
    • Windows Management Instrumentation
  • Persistence:
    • Registry Run Keys / Startup Folder
    • Scheduled Task
  • Privilege Escalation
    • Scheduled Task
  • Defense Evasion
    • Scripting
  • Credential Access
    • Credentials in Files
    • Credential Dumping
  • Discovery
    • Network Share Discovery
    • Query Registry
    • Remote System Discovery
    • System Information Discovery
    • System Network Configuration Discovery
  • Lateral Movement
    • Remote File Copy



About the old variants of IcedID:

The post New version of IcedID Trojan uses steganographic payloads appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 25 – December 1)

Malwarebytes - Mon, 12/02/2019 - 16:23

Last week on Malwarebytes Labs, we discussed why the notion of “data as property” may potentially hurt more than help, homed in on sextortion scammers getting more creative, and explored the possible security risks Americans might face if the US changed to universal healthcare coverage.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (November 25 – December 1) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Dweb: Building Cooperation and Trust into the Web with IPFS

Mozilla Hacks - Wed, 08/29/2018 - 14:43

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

Some projects start small, aiming for incremental improvements. Others start with a grand vision, leapfrogging today’s problems by architecting an idealized world. The InterPlanetary File System (IPFS) is definitely the latter: attempting to replace HTTP entirely, with a network layer that has scale, trust, and anti-DDOS measures all built into the protocol. It’s our pleasure to have an introduction to IPFS today from Kyle Drake, the founder of Neocities, and Marcin Rataj, the creator of IPFS Companion, both on the IPFS team at Protocol Labs. – Dietrich Ayala

IPFS – The InterPlanetary File System

We’re a team of people all over the world working on IPFS, an implementation of the distributed web that seeks to replace HTTP with a new protocol that is powered by individuals on the internet. The goal of IPFS is to “re-decentralize” the web by replacing the location-oriented HTTP with a content-oriented protocol that does not require trust of third parties. This allows for websites and web apps to be “served” by any computer on the internet with IPFS support, without requiring servers to be run by the original content creator. IPFS and the distributed web unmoor information from physical location and singular distribution, ultimately creating a more affordable, equal, available, faster, and less censorable web.

IPFS aims for a “distributed” or “logically decentralized” design. IPFS consists of a network of nodes, which help each other find data using a content hash via a Distributed Hash Table (DHT). The result is that all nodes help find and serve web sites, and even if the original provider of the site goes down, you can still load it as long as one other computer in the network has a copy of it. The web becomes empowered by individuals, rather than depending on the large organizations that can afford to build large content delivery networks and serve a lot of traffic.
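The core idea of content addressing can be sketched in a few lines. This is a simplification: real IPFS CIDs wrap the digest in multihash/multibase encodings, and the “store” is a network of peers rather than a local dict:

```python
import hashlib

# Simplified content addressing: the address of the data IS a hash
# of the data, so any node can serve it and any node can verify it.
store = {}

def put(data):
    cid = hashlib.sha256(data).hexdigest()
    store[cid] = data
    return cid

def get(cid):
    data = store[cid]
    # The receiver verifies it got the right bytes by re-hashing:
    assert hashlib.sha256(data).hexdigest() == cid
    return data

cid = put(b"Hello from IPFS!")
print(cid == put(b"Hello from IPFS!"))  # True: same content, same address
print(get(cid))
```

Because the address is derived from the content, it does not matter which peer answers the request; the hash check makes the response trustworthy without trusting the server.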

The IPFS stack is an abstraction built on top of IPLD and libp2p:

Hello World

We have a reference implementation in Go (go-ipfs) and a constantly improving one in JavaScript (js-ipfs). There is also a long list of API clients for other languages.

Thanks to the JS implementation, using IPFS in web development is extremely easy. The following code snippet…

  • Starts an IPFS node
  • Adds some data to IPFS
  • Obtains the Content IDentifier (CID) for it
  • Reads that data back from IPFS using the CID

<script src=""></script>

Open Console (Ctrl+Shift+K)

<script>
const ipfs = new Ipfs()
const data = 'Hello from IPFS, <YOUR NAME HERE>!'
// Once the ipfs node is ready
ipfs.once('ready', async () => {
  console.log('IPFS node is ready! Current version: ' + (await ipfs.version()).version)
  // convert your data to a Buffer and add it to IPFS
  console.log('Data to be published: ' + data)
  const files = await ipfs.files.add(ipfs.types.Buffer.from(data))
  // 'hash', known as CID, is a string uniquely addressing the data
  // and can be used to get it again. 'files' is an array because
  // 'add' supports multiple additions, but we only added one entry
  const cid = files[0].hash
  console.log('Published under CID: ' + cid)
  // read data back from IPFS: CID is the only identifier you need!
  const dataFromIpfs = await ipfs.files.cat(cid)
  console.log('Read back from IPFS: ' + String(dataFromIpfs))
  // Compatibility layer: HTTP gateway
  console.log('Bonus: open at one of public HTTP gateways: ' + cid)
})
</script>

That’s it!

Before diving deeper, let’s answer key questions:

Who else can access it?

Everyone with the CID can access it. Sensitive files should be encrypted before publishing.

How long will this content exist? Under what circumstances will it go away? How does one remove it?

The permanence of content-addressed data in IPFS is intrinsically bound to the active participation of peers interested in providing it to others. It is impossible to remove data from other peers, but if no peer is keeping it alive, it will be “forgotten” by the swarm.

The public HTTP gateway will keep the data available for a few hours; if you want to ensure long-term availability, make sure to pin important data at nodes you control. Try IPFS Cluster: a stand-alone application and a CLI client to allocate, replicate, and track pins across a cluster of IPFS daemons.

Developer Quick Start

You can experiment with js-ipfs to make simple browser apps. If you want to run an IPFS server you can install go-ipfs, or run a cluster, as we mentioned above.

There is a growing list of examples, and make sure to see the bi-directional file exchange demo built with js-ipfs.

You can add IPFS to the browser by installing the IPFS Companion extension for Firefox.

Learn More

Learn about IPFS concepts by visiting our documentation website at

Readers can participate by improving documentation, visiting, developing distributed web apps and sites with IPFS, and exploring and contributing to our git repos and various things built by the community.

A great place to ask questions is our friendly community forum:
We also have an IRC channel, #ipfs on Freenode (or on Matrix). Join us!

The post Dweb: Building Cooperation and Trust into the Web with IPFS appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds


Subscribe to Furiously Eclectic People aggregator - Techie Feeds