Techie Feeds

20 percent of organizations experienced breach due to remote worker, Labs report reveals

Malwarebytes - Thu, 08/20/2020 - 10:00

It is no surprise that moving to a fully remote work environment due to COVID-19 would cause a number of changes in organizations’ approaches to cybersecurity. What has been surprising, however, are some of the unanticipated shifts in employee habits and how they have impacted the security posture of businesses large and small.

Our latest Malwarebytes Labs report, Enduring from Home: COVID-19’s Impact on Business Security, reveals some unexpected data about security concerns with today’s remote workforce.

Our report combines Malwarebytes product telemetry with survey results from 200 IT and cybersecurity decision makers from small businesses to large enterprises, unearthing new security concerns that surfaced after the pandemic forced US businesses to send their workers home.

The data showed that since organizations moved to a work from home (WFH) model, the potential for cyberattacks and breaches has increased. While this isn’t entirely unexpected, the magnitude of the increase is surprising. Since the start of the pandemic, 20 percent of respondents said they faced a security breach as a result of a remote worker. This in turn has increased costs, with 24 percent of respondents saying they paid unexpected expenses to address a cybersecurity breach or malware attack following shelter-in-place orders.

We noticed a stark increase in the use of personal devices for work: 28 percent of respondents admitted they’re using personal devices for work-related activities more than their work-issued devices. Beyond that, we found that 61 percent of respondents’ organizations did not urge employees to use antivirus solutions on their personal devices, further compounding the increase in attack surface with a lack of adequate protection.

We found a startling contrast between the IT leaders’ confidence in their security during the transition to work from home (WFH) environments, and their actual security postures, demonstrating a continued problem of security hubris. Roughly three quarters (73.2 percent) of our survey respondents gave their organizations a score of 7 or above on preparedness for the transition to WFH, yet 45 percent of respondents’ organizations did not perform security and online privacy analyses of necessary software tools for WFH collaboration.

Additional report takeaways
  • 18 percent of respondents admitted that, for their employees, cybersecurity was not a priority, while 5 percent said their employees were a security risk and oblivious to security best practices.
  • At the same time, 44 percent of respondents’ organizations did not provide cybersecurity training that focused on potential threats of working from home (like ensuring home networks had strong passwords, or devices were not left within reach of non-authorized users).
  • While 61 percent of respondents’ organizations provided work-issued devices to employees as needed, 65 percent did not deploy a new antivirus (AV) solution for those same devices.

To learn more about the increasing risks uncovered in today’s remote workforce population, read our full report:

Enduring from Home: COVID-19’s Impact on Business Security

The post 20 percent of organizations experienced breach due to remote worker, Labs report reveals appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The impact of COVID-19 on healthcare cybersecurity

Malwarebytes - Tue, 08/18/2020 - 19:30

As if stress levels in the healthcare industry weren’t high enough due to the COVID-19 pandemic, risks to its already fragile cybersecurity infrastructure are at an all-time high. From increased cyberattacks to exacerbated vulnerabilities to costly human errors, if healthcare cybersecurity wasn’t circling the drain before, COVID-19 sent it into a tailspin.

No time to shop for a better solution

As a consequence of being too occupied with fighting off the virus, some healthcare organizations have found themselves unable to shop for different security solutions better suited for their current situation.

For example, the Public Health England (PHE) agency, which is responsible for managing the COVID-19 outbreak in England, decided to prolong its existing contract with its main IT provider without allowing competitors to put in an offer. It did this to ensure its main task, monitoring the spread of the disease, could go forward without having to worry about service interruptions or other concerns.

Extending a contract without looking at competitors is not only a recipe for getting a bad deal, but it also means organizations are unable to improve on the flaws they may have found in existing systems and software.

Attacks targeting healthcare organizations

Even though there were some early promises of removing healthcare providers as targets after COVID-19 struck, cybercriminals just couldn’t be bothered to do the right thing for once. In fact, we have seen some malware attacks specifically target healthcare organizations since the start of the pandemic.

Hospitals and other healthcare organizations have shifted their focus and resources to their primary role. While this is completely understandable, it has placed them in a vulnerable situation. Throughout the COVID-19 pandemic, an increasing amount of health data is being controlled and stored by the government and healthcare organizations. Reportedly this has driven a rise in targeted, sophisticated cyberattacks designed to take advantage of an increasingly connected environment.

In healthcare, it’s also led to a rise in nation-state attacks, in an effort to steal valuable COVID-19 data and disrupt care operations. In fact, the sector has become both a target of advanced attacks and a vehicle for social engineering. Malicious actors taking advantage of the pandemic have already launched a series of phishing campaigns using COVID-19 as a lure to drop malware or ransomware.

COVID-19 has not only placed healthcare organizations in direct danger of cyberattacks, but some have become victims of collateral damage. There are, for example, COVID-19-themed business email compromise (BEC) attacks that might be aiming for exceptionally rich targets. However, some will settle for less if it is an easy target—like one that might be preoccupied with fighting a global pandemic.

Ransomware attacks

As mentioned before, hospitals and other healthcare organizations run the risk of falling victim to “spray and pray” attack methods used by some cybercriminals. Ransomware is only one of the possible consequences, but arguably the most disruptive when it comes to healthcare operations—especially those in charge of caring for seriously ill patients.

INTERPOL has issued a warning to organizations at the forefront of the global response to the COVID-19 outbreak about ransomware attacks designed to lock them out of their critical systems in an attempt to extort payments. INTERPOL’s Cybercrime Threat Response team detected a significant increase in the number of attempted ransomware attacks against key organizations and infrastructure engaged in the virus response.

Special COVID-19 facilities

During the pandemic, many countries constructed or refurbished special buildings to house COVID-19 patients. These were created to quickly increase capacity while keeping the COVID patients separate from others. But these ad-hoc COVID-19 medical centers now have a unique set of vulnerabilities: They are remote, they sit outside of a defense-in-depth architecture, and the very nature of their existence means security will be a lower priority. Not only are these facilities prone to understaffed IT departments, but the biggest possible chunk of their budget is deployed to help the patients.

Another point of interest is the transfer of patient data from within the regular hospital setting to these temporary locations. It is clear that the staff working in COVID facilities will need the information about their patients, but how safely is that information being stored and transferred? Is it as protected in the new environment as the old one?

Data theft and protection

A few months ago, when the pandemic proved to be hard to beat, many agencies reported on targeted efforts by cybercriminals to lift coronavirus research, patient data, and more from the healthcare, pharmaceutical, and research industries. Among these agencies were the National Security Agency, the FBI, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, and the UK National Cyber Security Centre.

In the spring, many countries started discussing the use of contact tracing and/or tracking apps in an effort to help keep the pandemic under control: apps that would warn users if they had been in the proximity of an infected user. Understandably, many privacy concerns were raised by advocates and journalists.

There is so much data being gathered and shared with the intention of fighting COVID-19, but there’s also the need to protect individuals’ personal information. So, several US senators introduced the COVID-19 Consumer Data Protection Act. The legislation would provide all Americans with more transparency, choice, and control over the collection and use of their personal health, device, geolocation, and proximity data. The bill would also hold businesses accountable to consumers if they use personal data to fight the COVID-19 pandemic.

The impact

Even though such a protection act might be welcome and needed, the consequences for an already stressed healthcare cybersecurity industry might be too overwhelming. One could argue that data protection legislation should not be passed on a case-by-case basis, but should be in place to protect citizens at all times, not just when extra measures are needed to fight a pandemic.

In the meantime, we at Malwarebytes will do our part to support those in the healthcare industry by keeping malware off their machines—that’s one less virus to worry about.

Stay safe everyone!

The post The impact of COVID-19 on healthcare cybersecurity appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Lock and Code S1Ep13: Monitoring the safety of parental monitoring apps with Emory Roane

Malwarebytes - Mon, 08/17/2020 - 15:30

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Emory Roane, policy counsel at Privacy Rights Clearinghouse, about parental monitoring apps.

These tools offer parents the ability to see where their children go, read what their kids read, and prevent them from, for instance, visiting websites deemed inappropriate. And, for the likely majority of parents using these tools, their motives are sympathetic—being online can be a legitimately confusing and dangerous experience.

But where parental monitoring apps begin to cause concern is just how powerful they are.

Tune in to hear about the capabilities of parental monitoring apps, how parents can choose to safely use these with their children, and more, on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

We cover our own research, plus other cybersecurity news:
  • Intel experienced a leak due to “intel123”—the weak password that secured its server. (Source: Computer Business Review)
  • Fresh Zoom vulnerabilities for its Linux client were demonstrated at DEFCON 2020. (Source: The Hacker News)
  • Researchers saw an increase in scam attacks against users of Netflix, YouTube, HBO, and Twitch. (Source: The Independent)
  • TikTok was found collecting MAC addresses from mobile devices, a tactic that may have violated Google’s policies. (Source: The Wall Street Journal)
  • Several ads for apps labelled “stalkerware” can still be found in Google Play’s search results after the search giant’s advertising ban took effect. (Source: TechCrunch)

Stay safe, everyone!

The post Lock and Code S1Ep13: Monitoring the safety of parental monitoring apps with Emory Roane appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Explosive technology and 3D printers: a history of deadly devices

Malwarebytes - Fri, 08/14/2020 - 16:45

Hackers: They’ll turn your computer into a BOMB!

“Hackers turning computers into bombs” is a now legendary headline, taken from the Weekly World News. It has rather set the bar for “people will murder you with computers” anxiety. Even those familiar with the headline may not have dug into the story too much on account of how silly it sounds, but it’s absolutely well worth checking out if only for the bit about “assassins” and “dangerous sociopaths.”

Has blasting apart your computer “like a large hand grenade” ever been so much fun? Would it only be a little bit terrifying if the hand grenade was incredibly small? How many decently-sized grenades does it take to make your computer explode like a bomb, anyway? Is the bomb also incredibly large? What kind of power are we talking about here? Because it would frankly be anti-climactic if it turned out to be a hoax.

Maybe the real grenades were the bombs we made along the way

However you stack it up, the antics of highly combustible cyber assassins are often overplayed for dramatic effect. At this point I’d like to ask, “Who remembers the terrible Y2K trend of exploding computers?” but as you’re aware, that didn’t actually end up happening. Lots of hard-working people spent an incredible amount of time ensuring the Millennium bug didn’t cause Millennium-bug-related problems, but I don’t think they were thinking of dramatic explosions when they were doing so.

Still, there’s always been a way to affect hardware in (significantly less spectacular) ways, though most cases seem to stem from user error or tinkering with hardware hacks. Even the article above mentions someone claiming to make a machine start smoking by writing programs which toggled the cassette relay switch rapidly. I, myself, once woke up to a PC on fire (and I do mean on fire) after something broke overnight; I was met with the smell of metal burning, plastic melting, and an immediate concern about not having the house burn down around me.

Evil cyber assassins making you explode, though?

Not that common, sorry. However, there has been the occasional bad event down the years where hardware was specifically targeted in frankly terrifying ways. 

Breaking hardware for fun, profit, and confusion

Before we set down some of the most common ways hardware can be impacted in ways it probably shouldn’t be, we’ll set the bar early and highlight what’s likely the biggest, baddest example of hardware tampering. Stuxnet, a worm targeting SCADA systems, caused large amounts of damage at a nuclear facility in Iran between 2009 and 2010. It did this by speeding up, slowing down, and then speeding up centrifuges until they were unable to cope and broke. Up to 1,000 centrifuges were taken down as a result of the attack [PDF]. This is an assault which clearly took an incredible amount of planning and prep-work to pull off, and you’ll see the phrase “nation-state attack” an awful lot if you go digging into it.

This is, without a doubt, the benchmark for dragging digital attacks into real-world impact. A close runner-up would be 2017’s WannaCry attack, which impacted the NHS [PDF]. The key difference is that the plant in Natanz was deliberately targeted, and the infection had specific instructions for a specific task related to the centrifuges. The NHS was “simply” caught in the fallout, and not targeted specifically.

Attacks on people at home, or smaller organisations, tend to be on a much smaller scale. The idea isn’t really to break the hardware beyond repair to make some sort of statement; the device is only useful to attackers if they can keep on using it. Even so, the impact can range from “mild inconvenience” to “potentially life threatening”.

What’s mining is mine

You could end up with a higher electricity bill than normal should you find some Bitcoin miners hiding under the hood. This might keep you warm on a chilly winter evening, but it’s not kind to your wallet or your individual PC parts. The problem with your standard Bitcoin miner placed on a system without permission is the resources it gobbles up for computations. Miners love those big, juicy graphics cards for maximum money-making.

Having said that, your child’s middle-range gaming laptop is also a perfectly acceptable target for them. If the cooling fans are going haywire even when no game is running, it might be time to start running some scans. Overheating can be mitigated on desktops unless they’re clogged up with a lot of dust or faulty parts, but the margin for error is a lot smaller with a laptop. All that heat built up in one significantly smaller space over time isn’t great, and while toasting the machine wouldn’t be part of the Bitcoin miner’s gameplan, it’s one side-effect to be wary of.

On the flipside, modern systems are actually pretty good at combating heat…especially if you’re even a little bit into gaming. The reason you don’t see stories in the news about evil hackers melting computers is that (a) it is, again, ultimately pointless, and (b) it would be pretty difficult to pull off. Hardware comes with all sorts of failsafes: temperature sensors, shutdown routines, power spike protection, and much more. It would take significant amounts of time and effort, in the vague hope you end up with something a little more impressive than “The PC shut down to prevent damage and it’s fine”.

BIOS/firmware hacks

Could you bludgeon your way into the very innards of the PC, and force it to do all sorts of terrible things? Perhaps. As with a lot of these scenarios, it typically relies on multiple steps and one specific target in mind. Malicious firmware is a possibility, but you’ll need to weigh the risk against the likelihood of this happening. Once more, we see the inherent drawback in “make thing go boom”, or even just bricking the machine forever so it’s unusable. Having someone go to these lengths to attack your PC in this way is probably outside your threat model.

Not impossible, but unlikely. Somebody doing the above wants your data, not your laptop on fire. As a result, you absolutely should be cautious when in a place that isn’t your home.

IoT compromise

The world is filled with cheap, poorly secured bits of technology filling up our homes. Security is an afterthought, and even where it isn’t, bad things can still happen. At the absolute opposite end from pretend hackers turning your computer into a bomb are the people whose sole intention is to compromise devices and live on them for as long as possible. Information and control are the two key currencies for domestic abusers implanting hacks into hardware.

Awfully bad, but awfully rare

These are all awful things, in various steep degrees of severity. They tamper with systems and processes in ways which can directly impact the physical operation of a device, or do it in a manner which leaves the object intact but causes trouble in the real world in other, less incendiary ways.

In terms of making things blow up, bomb style, we’re still at a bit of a loss.

All the same: there is a genuine threat aspect to all this, as we’re about to find out.

Strap yourself in and command the DeLorean as we jump from a Register article back in July 2000 to another one along similar lines at the tail end of July 2020. Despite the warnings, twenty years on we’re finally at the point where something a bit like your PC could go kaboom.

The entity known as time comes crashing through the window, reminding me of my melted and very much ablaze PC from about fourteen years ago. Your devices can and do catch fire for a variety of reasons, and they don’t have to be related to hacking.

In the case of a 3D printer enthusiast who found their device billowing smoke, the likely culprit was a loose heating element and a bit of bad luck to set everything ablaze. Note that the post-incident assessment includes a rework of the physical space around the device. Everything from storage to safety equipment is now considered, to combat (as they put it) the placement of a heavy-duty bit of kit inside a “burn-my-house-down box”.

This is similar to ensuring the space around a VR gaming setup is also secured, in terms of mats on the floor so you know when you’ve wandered out of the safety zone, wires suspended by the ceiling, no sharp / dangerous items nearby and so on. Sadly, people don’t consider the ways in which physical danger presents itself via digital devices.

Keep all of this in mind, when checking out what comes next.

What comes next, is a 3D printer modified with the intention of seeing if it’s possible to “weaponize this 3D printer into a firebomb”.

Weaponizing a 3D printer into a firebomb?

Researchers at Coalfire toiled hard at creating a hand-crafted method to alter the way a specific 3D printer operates and have it hit increasingly dangerous temperatures. In their final bout of testing, the smoke and fumes were so bad in an outdoor location that they couldn’t be closer than 6ft from the smouldering device.

This is, of course, an incredibly bad situation to be in. However, there are some big caveats attached to this one in terms of threat. All 3D printers are a potential fire risk, because they naturally enough involve activities requiring high temperatures. If you leave them alone…and you really shouldn’t…you could end up with the aforementioned burn-my-house-down box. There are also emissions to consider.

In my former life as an artist, I did a lot of work around the old-style MDF which contained all manner of bad things if you cut into it and precautions had to be taken. Similarly, you have to pay attention to what may be wafting out of your printer.

I’ve dabbled in 3D printing a little bit, and the technology encourages me to treat it with a healthy bit of respect on account of how deadly it could be by default, with no tampering required, simply via me getting something wrong.

A good approach generally, I find.

Of worst case scenarios

We don’t know for sure how bad the smoking printer situation would’ve become, given the switch-off when the fumes became too much. A total meltdown seems likely, but a “bomb” as such probably isn’t on the cards.

This also isn’t something you can casually throw together at the drop of a hat. It’s not one short blog post showing how easy it is; it’s three posts of trying an awful lot of things out: rooting the printer and cracking passwords, digging into board architecture, installing NSA tools to explore functions. Much more.

Then there was creating extra space to house the new code, adjusting the max temperature variable, then figuring out how to bypass the error protection that closes down any overenthusiastic temperature increases.

Much, much more.

Note also that to force the device to overheat beyond the safe “cut out” point, they had to replace the power supply with something bigger to achieve the required cookout levels.

It’s an awful lot of work to set your printer on fire. Of course, one worry might be a situation where the modified code is somehow pushed to device owners after some sort of compromise. You could also perhaps send people the rogue code directly as a supposed “update” and let them do the hard work.

One way around this is signed code from the vendor, though there’s usually resistance to that from some folks in maker circles because of issues related to planned obsolescence. Additionally, some prefer to download updates directly from the manufacturer’s website and aren’t keen on auto-updates for their printing tools.

Even so: regardless of the issues surrounding who wants which type of update, or if signed code would fix things or bring unforeseen hassles for the end-user, somebody compromising you remotely with this in some way would still need you to have switched out for a bigger power supply for the big boom.

That’s not very likely.

Hackers: they’ll turn your printer into a BOMB!

For the time being, your printer is (probably) not going to fulfil the prophesied digital cyber-grenade of lore. You’ve got more than enough to be getting on with where basic precautions are concerned, never mind worrying about someone blowing up your house with a toaster or USB headphones.

Treat your 3D printer with respect, follow the safety guidelines for your device, and never, ever leave it running and unattended. You don’t want to end up on the front page of the Weekly World News in 2040.

The post Explosive technology and 3D printers: a history of deadly devices appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Chrome extensions that lie about their permissions

Malwarebytes - Thu, 08/13/2020 - 17:52

“But I checked the permissions before I installed this pop-up-blocker—it said nothing about changing my searches,” my dad retorts after I scold him for installing yet another search-hijacking Chrome extension. Granted, they are not hard to remove, but having to do it over and over is a nuisance. This is especially true because it can be hard to find out which of the Chrome extensions is the culprit if the browser starts acting up.

What happened?

Recently, we came across a family of search hijackers that are deceptive about the permissions they are going to use in their install prompt. One such extension, called PopStop, claims it can only read your browsing history. Seems harmless enough, right?

The install prompt in the webstore is supposed to give you accurate information about the permissions the extension you are about to install requires. It has already become a habit for browser extensions to ask up front only for the permissions needed to function properly, then ask for additional permissions later on, after installation. Why? Users are more likely to trust an extension with limited warnings or with permissions that are explained to them.

But what is the use of these informative prompts if they only give you half the story? In this case, the PopStop extension doesn’t just read your browsing history, as the pop-up explains, but it also hijacks your search results.

Some of these extensions are more straightforward about what they do once they have been installed and are listed under the installed extensions.

But others are consistent in their lies even after they have been installed, which makes it even harder to find out which one is responsible for the search hijack.

How is this possible?

Google decided at some point to bar extensions that obfuscate their code. By doing so, it’s easier for reviewers to read a plug-in’s programming and conduct appropriate analysis.

The first step in determining what an extension is up to is looking at the manifest.json file.

Registering a script in the manifest tells the extension which file to reference, and, optionally, how that file should behave.

What this manifest tells us is that the only active script is “background.js” and the declared permissions are “tabs” and “storage”. More about those permissions later on.
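As a rough illustration, a minimal manifest declaring a background script and those two permissions would look something like the following (the name and version are made up for this sketch; the real extension’s values differ):

```json
{
  "manifest_version": 2,
  "name": "ExamplePopupBlocker",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"],
    "persistent": true
  },
  "permissions": [
    "tabs",
    "storage"
  ]
}
```

With only "tabs" and "storage" declared, the install prompt shows nothing scarier than “Read your browsing history”, which is exactly the loophole these hijackers rely on.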

The relevant parts in background.js are these pieces, because they show us where our searches are going:

const BASE_DOMAIN = '', pid = 9126, ver = 401;

chrome.tabs.create({url: `https://${BASE_DOMAIN}/chrome3.php?q=${searchQuery}`});

setTimeout(() => {
  chrome.tabs.remove(currentTabId);
}, 10);

This script uses two chrome.tabs methods: one to create a new tab based on your search query, and one to close the current tab. The closed tab would have displayed the search results from your default search provider.

Looking at the chrome.tabs API, we read:

“You can use most chrome.tabs methods and events without declaring any permissions in the extension’s manifest file. However, if you require access to the url, pendingUrl, title, or favIconUrl properties of tabs.Tab, you must declare the “tabs” permission in the manifest.”

And indeed, in the manifest of this extension we found:

"permissions": [ "tabs", "storage" ],

The “storage” permission does not invoke a message in the warning screen users see when they install an extension. The “tabs” permission is the reason for the “Read your browsing history” message. Although the chrome.tabs API might be used for different reasons, it can also be used to see the URL that is associated with every newly-opened tab.

The extensions we found managed to avoid having to display the message, “Read and change all your data on the websites you visit” that would be associated with the “tabCapture” method. They did this by closing the current tab after capturing your search term and opening a new tab to perform the search for that term on their own site.
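As a sketch of that capture-and-redirect flow, the logic boils down to building a lookalike search URL on the hijacker’s own site and swapping tabs. The domain below is a placeholder (the real extension had its landing domain blanked out in the code we quoted):

```javascript
// Placeholder for the hijacker's landing domain; the real extension
// fills in its own value here.
const BASE_DOMAIN = 'hijacker.example';

// Rebuild the user's search as a query against the hijacker's own site,
// mirroring the template literal seen in background.js above.
function buildHijackUrl(searchQuery) {
  return `https://${BASE_DOMAIN}/chrome3.php?q=${searchQuery}`;
}

// In the extension itself, this URL is opened in a fresh tab and the
// original results tab is closed a moment later, roughly:
//   chrome.tabs.create({ url: buildHijackUrl(searchQuery) });
//   setTimeout(() => chrome.tabs.remove(currentTabId), 10);

console.log(buildHijackUrl('malwarebytes'));
// → https://hijacker.example/chrome3.php?q=malwarebytes
```

Because the extension never reads or modifies the page content itself, only tab URLs, it sidesteps the scarier “Read and change all your data” warning entirely.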

The “normal” permission warnings for a search hijacker would look more similar to this:

The end effect is the same, but an experienced user would be less likely to install this last extension, as they would either balk at the permission request or recognize the plug-in as a search hijacker by looking at these messages.

Are these extensions really lying?

Some might call it a lie. Some may say no, they simply didn’t offer the whole truth. However, the point of those permissions pop-ups is to give users the choice of whether to install a program by being upfront about what that program asks of its users.

In the case of these Chrome extensions, then, let’s just say that they’re not disclosing the full extent of the consequences of installing their extensions.

It might be desirable for Google to add a warning message for extensions that use the chrome.tabs.create method. This would inform users that the extension is able to open new tabs, which is one way of showing advertisements, so they would be made aware of this possibility. And chrome.tabs.create also happens to be the method this extension uses to replace the search results we were after with its own.

An additional advantage for these extensions is the fact that they don’t get mentioned in the settings menu as a “regular” search hijacker would.

A search hijacker that replaces your default search engine would be listed under Settings > Search engine

Not being listed as the search engine replacement, again, makes it harder for a user to figure out which extension might be responsible for the unexpected search results.

For the moment, these hijackers can be recognized by the new header they add to their search results, which looks like this:

This will probably change once their current domains are flagged as landing pages for hijackers, and new extensions will be created using other landing pages.

Further details

These extensions intercept search results from these domains:

  • all Google domains

They also intercept all queries that contain the string “spalp2020”. This is probably because that string is a common factor in the installer URLs that belong to this family of hijackers.

Search hijackers

We have written before about the interests of shady developers in the billion-dollar search industry and reported on the different tactics these developers resort to in order to get users to install their extensions or use their search sites[1],[2],[3].

While this family doesn’t use the most deceptive marketing practices out there, it still hides its bad behavior in plain sight. Many users have learned to read the install prompt messages carefully to determine whether an extension is safe. It’s disappointing that developers can evade giving honest information and that these extensions make their way into the webstore over and again.


extension identifiers:



search domains: <= note the extra “o”

Malwarebytes detects these extensions under the detection name PUP.Optional.SearchEngineHijack.Generic.

Stay safe, everyone!

The post Chrome extensions that lie about their permissions appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Dutch ISP Ziggo demonstrates how not to inform your customers about a security flaw

Malwarebytes - Wed, 08/12/2020 - 15:00

“Can you have a look at this email I got, please?” my brother asked. “It looks convincing enough, but I don’t trust it,” he added and forwarded me the email he received from Ziggo, his Internet Service Provider (ISP). Shortly after, he informed me that despite its suspicious aura, he found confirmation that the email was, in fact, legitimate.

In the suspect email, the Dutch ISP informed customers that an expert had found a weakness in the “Wifibooster Ziggo C7,” a device they sell to strengthen WiFi signals. Ziggo told users how to recognize this equipment, and urged them to change the default password and settings.

So what’s the problem? Alerting customers about a security flaw is best practice, is it not? Absolutely. But when your email alert about a security vulnerability looks like a phish itself, it’s time to reevaluate your email marketing strategy.

In this blog, I’ll break down what exactly happened with Ziggo, the flaws in their email communication, and how organizations should approach informing their employees and customers about potential security issues—without looking like a phishing scam.

What exactly happened?

Dutch ISP Ziggo sent out an email to their customers warning about a security weakness in a specific device that they sell to their customers. I translated the relevant parts of the mail from Dutch below:

“Dear Mister Arntz,

To keep our network safe, experts are looking for weak spots. Unfortunately, such a weakness was found in the Wifibooster Ziggo C7. You can recognize the device by the ‘C7’ mark at the bottom. This email is about this device and this type only.

Do you indeed use the Wifibooster Ziggo C7? In that case change the default settings in your personal settings to keep your device safe. Below we will explain how.

How to change your password

To make the chance of abuse as small as possible, it’s necessary to change your password. Go to link to Ziggo site, follow the instructions there and use a strong password.

Want to know more or need help?

Follow the link to the Ziggo forum where you can find more information about this subject and ask for help from the community members.”

This vague, unhelpful, and frankly dangerous advice was followed by a footer that contained nine more links, including (ironically enough) an anti-phishing warning.

What made the email look spammy?

We have spent years training people to recognize spam emails, and it is gratifying when our efforts pay off. The things my brother flagged as spammy were the weird-looking links in the email and the fact that he did not own the device the email was about.

I would add that the email mentioned a security weakness but did not specify which one. Also, urging the reader to change a password to avoid danger would be a dead giveaway in a phishing email.

So, we’ve got:

  • The subject did not apply to all recipients: not every addressee owned the device in question. When asked, Ziggo stated they wanted to make sure that users who bought the device second-hand would be aware of the issue too.
  • A multitude of links that looked phishy, probably because they were personalized.
  • Urging recipients to visit a site and take precautions against an unclear threat.

The device

The Wifibooster Ziggo C7 is in fact a TP-Link Archer C7 that Ziggo sells to their customers with their own firmware installed. Therefore, it is hard to find any information about what the vulnerability might be. The Archer C7 is listed as affected by the WPA2 Security (KRACKs) vulnerability for certain firmware versions. But given the Ziggo device comes with custom firmware, it is hard to determine whether the Wifibooster Ziggo C7 is vulnerable as well.

Based on the fact that users are urged to change their WiFi password and network name (SSID), and looking at the instructions we found on the site, we are inclined to conclude that the device shipped with default credentials, which could help attackers exploit a remote access vulnerability.

The possible danger

Ziggo warned the users that not following their instructions could lead to unauthorized access to their network.

We asked our resident hardware guru JP Taggart about this scenario, and he was very wary of ISPs that apply more than cosmetic branding changes to the manufacturer’s firmware. Once you drift away from the standard firmware, you become responsible for maintaining and patching it, because the manufacturer will no longer be able, or willing, to do so. We looked at some existing vulnerabilities for the Archer C7, but they are old, and even if they applied, they could not be fixed by changing the password and SSID.

ISPs make a habit of branding the firmware of the equipment they sell to their customers. Logic dictates that the security flaw must be in this branded firmware, since we could not find any other recent warning about this particular type of device. That would bear out JP Taggart’s point about the dangers of branded firmware.

What Ziggo could have done better

The most objectionable part of Ziggo’s approach is the phishy-looking format of the email itself. The more companies do this, the harder it becomes for us to tell real phishes apart from legitimate emails. To be honest, some of the more sophisticated phishers have produced emails that looked less phishy than this one.

They could also have been a lot more open about the security flaw that was found. Of course, we don’t expect them to post a full hacker’s guide on how to exploit the device and spy on your neighbor, but a little concrete information on what was found and how it could be exploited would have made sense.

As for the instructions on changing the settings, it would have been preferable to list the basic steps in the email itself and include a link for those who need more detailed guidance. All the relevant and necessary information should have been in the email, not behind a link. Links are fine, but not for crucial information.

During the installation of such a device, the ISP should at least force the user to change the default password, and probably advise changing the SSID as well. A default SSID tells an aspiring attacker which ISP you are using, letting them make informed guesses about which equipment you are using.

The danger of sending out phishy emails

Recipients may have deleted the email at first sight and never changed their password, leaving them vulnerable to the flaw.

Or, as our own William Tsing wrote in an older post called When corporate communications look like a phish:

“Essentially, well-meaning communications with these design flaws train an overloaded employee to exhibit bad behaviors—despite anti-phishing training—and discourage seeking help.”

This is also true for home users, who may not receive as many emails as an office employee does (an oft-cited 120 per day), but those who do receive a lot of mail have trained themselves to recognize the emails that are important and ignore the rest. That would be a shame if the included information is as important as the ISP wants us to believe.

Stay safe, everyone!

The post Dutch ISP Ziggo demonstrates how not to inform your customers about a security flaw appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The skinny on the Instacart breach

Malwarebytes - Tue, 08/11/2020 - 16:32

The COVID-19 outbreak has affected many facets of our lives—from how we visit our families, socialize with friends, meet with colleagues, to how we should be conducting ourselves outside of our homes. Ideally, a few meters apart from everyone else and with a mask on.

These measures—on top of imposed lockdowns—have kept most people indoors, pushing them to do almost everything they would normally do in person online instead. This includes grocery shopping.

It is no wonder, then, to see a sudden spike in downloads of food and grocery delivery apps. Nor is it a wonder that it didn’t take long for those with ill intent to find a way to score big off the brands behind these apps. Or did they really?

Instacart, one of the top three grocery delivery and pick-up services in the world, was recently believed to have been hacked after more than 270,000 customer accounts were seen being peddled on the Dark Web. These accounts reportedly contained information such as names, addresses, credit card data, and transaction history.

BuzzFeed News, which initially reported the incident, interviewed some affected parties who, upon being shown data taken from the breach, confirmed it was indeed their data being sold. A cybersecurity expert who also looked at some of the data lent further weight to the breach’s validity.

Days after the report, however, Instacart denied that a security breach happened. “Our teams have been working around the clock to quickly determine the validity of reports related to site security and so far our investigation had shown that the Instacart platform was not compromised or breached,” the company wrote in a Medium post.

Instead, the company asserted the belief that the reason client accounts may have been broken into was because their clients had been reusing login credentials.

As you may already know, password reuse is a huge cybersecurity problem, where the onus rests on users who continue to use the same username-password combinations on a lot or all their online accounts. This results in a chain of compromises for one individual. If an Instacart customer uses the same credentials to access their Twitter feed, Facebook page, favorite online magazine or news sites, online banking, or cloud storage accounts, for example, a compromise on any one of those sites would result in compromise for all the others.

While the reuse of credentials is indeed a known cybersecurity problem, solving it should not be up to users alone. One cannot help but wonder if all 278,531 accounts affected by the breach were because people had been reusing username-password combinations.

Whether you’re on the side of “Yes, they’ve been breached!” or “No, they’re securing my data well,” one thing is certain: Instacart shoppers and Internet users alike should play their part in keeping their online accounts as impenetrable as possible. While making sure you don’t reuse username and password combinations between accounts is one way to guard against multiple breaches, it’s certainly not foolproof protection.

If remembering passwords is challenging, you can always enlist the help of a trusty password manager that will serve as your memory and keep your credentials (and other important bite-size information) encrypted and away from prying eyes. For added security, use two-factor/multi-factor authentication.
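Time-based two-factor codes can even be generated offline. As an illustrative sketch only (in practice, use a vetted authenticator app or library), here is a minimal implementation of RFC 6238 TOTP using nothing but the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    # Minimal RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second
    # intervals since the Unix epoch, then dynamic truncation (RFC 4226).
    counter = int(time.time() if for_time is None else for_time) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at Unix time 59
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

The sketch reproduces the published RFC 6238 test vector, which is a good sanity check for any TOTP implementation.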

On the other hand, security is not just the customers’ problem. Companies like Instacart should play their part, too, and own their piece of the pie. They can start by securing their websites against credential stuffing, credit card skimmers, and other threats that target customer accounts. They can also require multi-factor authentication for new clients and prompt existing ones to enable this feature on their accounts.

Of course, this should not be the end of securing user data for companies. Privacy compliance, PCI compliance, and encrypting data at rest and in transit are key to keeping customer credentials secure. Otherwise, organizations may find themselves skewered on Reddit.

Stay safe, shopper!

The post The skinny on the Instacart breach appeared first on Malwarebytes Labs.

Categories: Techie Feeds

SBA phishing scams: from malware to advanced social engineering

Malwarebytes - Mon, 08/10/2020 - 16:30

A number of threat actors continue to take advantage of the ongoing coronavirus pandemic through phishing scams and other campaigns distributing malware.

In this blog, we look at three different phishing waves targeting applicants for COVID-19 relief loans. The phishing emails impersonate the US Small Business Administration (SBA), and are aimed at delivering malware, stealing user credentials, or committing financial fraud.

In each of these campaigns, criminals are spoofing the sender’s email so that it looks like the official SBA’s. This technique is very common and unfortunately often misunderstood, resulting in many successful scams.

GuLoader malware

In April, we saw the first wave of SBA attacks using COVID-19 as a lure to distribute malware. The emails contained attachments with names such as ‘SBA_Disaster_Application_Confirmation_Documents_COVID_Relief.img’.

Figure 1: Spam email containing malicious attachment

The malware was the popular GuLoader, a stealthy downloader used by criminals to load the payload of their choice and bypass antivirus detection.

Traditional phishing attempt

The second wave we saw involved a more traditional phishing approach where the goal was to collect credentials from victims in order to scam them later on.

Figure 2: Phishing email luring users to a site to enter their credentials

A URL that has nothing to do with the sender is a big giveaway that an email may be fraudulent. But things get a little more complicated when attackers use seemingly legitimate attachments.

Advanced phishing attempt

This is what we saw in a pretty clever and daring scheme that tricks people into completing a full form containing highly personal information, including bank account details. These could be used to directly drain accounts, or in an additional layer of social engineering that tricks users into paying advance fees that don’t exist in the real SBA program.

Figure 3: Phishing email containing a loan application form

This latest campaign started in early August and is convincing enough to fool even seasoned security experts. Here’s a closer look at some red flags we encountered as we analyzed it.

Most people aren’t aware of email spoofing and believe that if the sender’s email matches that of a legitimate organization, it must be real. Unfortunately, that is not the case, and there are additional checks that need to be performed to confirm the authenticity of a sender.

There are various technologies for confirming the true sender address, but here we will focus on the email headers, a sort of blueprint that is available to anyone. Depending on the email client, there are different ways to view these headers. In Outlook, you can click File and then Properties to display them:

Figure 4: Email headers showing suspicious sender

One of the items to look at is the “Received” field. In this case, it shows a hostname (park-mx.above[.]com) that looks suspicious. In fact, we can see it has already been mentioned in another scam campaign.

If we go back to this email, we see that it contains an attachment, a loan application with the 3245-0406 reference number. A look at the PDF metadata can sometimes reveal interesting information.

Figure 5: Suspicious load application form and its metadata

Here we note the file was created on July 31 with Skia, a graphics library for Chrome. This tells us that the fraudsters created that form shortly before sending the spam emails.

For comparison, if we look at the application downloaded from the official SBA website, we see some different metadata:

Figure 6: Official loan application form and its metadata

This legitimate application form was created using Acrobat PDFMaker for Word on March 27, which coincides with the pandemic timeline.
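Checking PDF metadata like this can be scripted. The helper below is a hypothetical, deliberately simplistic sketch that scrapes /Key (value) pairs out of raw PDF bytes; it ignores escaping, compression, and encryption, which real tools such as exiftool or pypdf handle properly:

```python
import re

def pdf_info(raw: bytes) -> dict:
    # Pull simple "/Key (value)" pairs from an uncompressed PDF Info dictionary.
    return {k.decode(): v.decode(errors="replace")
            for k, v in re.findall(rb"/(\w+)\s*\(([^)]*)\)", raw)}

# Hypothetical bytes shaped like the metadata values discussed above
sample = b"1 0 obj << /Producer (Skia/PDF m84) /CreationDate (D:20200731) >> endobj"
print(pdf_info(sample))  # {'Producer': 'Skia/PDF m84', 'CreationDate': 'D:20200731'}
```

A Producer string from a browser-side library in a supposedly official government form is exactly the kind of mismatch this surfaces.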

The loan application would typically be printed out and then mailed to a physical address at one of the government offices. If we go back to the original email, it asks to send the completed form as a reply via email instead:

Figure 7: Reply email would send loan application form to criminals

This is where things get interesting. Even though the sender’s address appears to belong to the SBA, when you hit the reply button, a different email address shows up: disastercustomerservice@gov-sba[.]us. Unlike the SBA’s official government website, gov-sba[.]us has no connection to the US government.

Figure 8: Domain registered by scammers shortly before the attack

That domain name (gov-sba[.]us) was registered just days before the email campaign began and clearly does not belong to the US government.
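The Reply-To mismatch described above is easy to check programmatically. The following sketch uses Python’s standard email module, with made-up header values modeled on this campaign (the addresses and hostnames are illustrative, not actual samples):

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical raw message modeled on the campaign described in this post
RAW = """\
From: Disaster Customer Service <noreply@sba.gov>
Reply-To: disastercustomerservice@gov-sba.us
Received: from park-mx.above.com (park-mx.above.com)
Subject: SBA Loan Application

Please complete the attached form and reply to this email.
"""

def reply_goes_elsewhere(raw: str) -> bool:
    # Flag emails whose Reply-To domain differs from the visible From domain:
    # replies would silently go to the scammers, not the spoofed sender.
    msg = message_from_string(raw)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
    return bool(reply_domain) and reply_domain != from_domain

print(reply_goes_elsewhere(RAW))  # True: replies go to gov-sba.us, not sba.gov
```

Mail gateways apply far more sophisticated checks (SPF, DKIM, DMARC), but even this simple comparison would catch the trick used here.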

However, we should note that this campaign is quite elaborate and that it would be easy to fall for it. Sadly, the last thing you would want when applying for a loan is to be out of even more money.

If you reply to this email with the completed form, containing private information that includes your bank account details, that is exactly what would happen.

Tips on how to protect yourself

There is no question that people should be extremely cautious whenever they are asked to fill out information online—especially in an email. Fraudsters are lurking at every corner and ready to pounce on the next opportunity.

Both the Department of Justice and the Small Business Administration have been warning of scams pertaining to SBA loans. Their respective sites provide various tips on how to steer clear of various malicious schemes.

Perhaps the biggest takeaway, especially when it comes to phishing emails, is that the sender’s address can easily be spoofed and is in no way a solid guarantee of legitimacy, even if it looks exactly the same.

Because we can’t expect everyone to check email headers and metadata, we can at least suggest double-checking the legitimacy of any communication with a friend or by phoning the government organization. For the latter, we always recommend never dialing a number found in an email or left on a voicemail, as it could be fake. Google the organization to find its correct contact number.

Malwarebytes also protects against phishing attacks and malware by blocking offending infrastructure used by scammers.

The post SBA phishing scams: from malware to advanced social engineering appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (August 3 – 9)

Malwarebytes - Mon, 08/10/2020 - 15:30

Last week on Malwarebytes Labs, on our Lock and Code podcast, we talked about identity and access management technology. We also wrote about business email compromises to score big, discussed how the Data Accountability and Transparency Act of 2020 looks beyond consent, and we analyzed how the Inter skimming kit is used in homoglyph attacks.

Other cybersecurity news
  • A new and unpatchable exploit was allegedly found on Apple’s Secure Enclave chip. (Source: 9to5Mac)
  • The Australian government will include the capability for the Australian Signals Directorate to help law enforcement agencies identify and disrupt serious criminal activity—including in Australia. (Source: The Guardian)
  • The US Department of State is offering a $10 million reward for any information leading to the identification of any person who meddles in US elections. (Source: ZDNet)
  • Facebook Inc.’s Instagram photo-sharing app is launching its clone of TikTok in more than 50 countries. (Source: Bloomberg)
  • Intelligence agencies in the US have released information about a new variant of the Taidoor malware used by China’s state-sponsored hackers to target governments, corporations, and think tanks. (Source: The Hacker News)
  • A Zoombombing attack disrupted the bail hearing of one of the alleged Twitter hackers. (Source: Naked Security)
  • American small- and medium-sized businesses (SMBs) were actively targeted by LockBit ransomware operators, according to an Interpol report. (Source: Bleeping Computer)
  • The Clean Network program is a comprehensive approach to guarding US citizens’ privacy and US companies’ most sensitive information from aggressive intrusions by malign actors. (Source: US Department of State)
  • A researcher found a way to deliver malware to macOS systems using a Microsoft Office document containing macro code. (Source: SecurityWeek)
  • The Chrome Web Store was slammed again for allowing 295 ad-injecting, spammy extensions that were downloaded 80 million times. (Source: TheRegister)

Stay safe!

The post A week in security (August 3 – 9) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Inter skimming kit used in homoglyph attacks

Malwarebytes - Thu, 08/06/2020 - 17:00

As we continue to track web threats and credit card skimming in particular, we often rediscover techniques we’ve encountered elsewhere before.

In this post, we share a recent find that involves what is known as a homoglyph attack. This technique has been exploited for some time already, especially in phishing scams with IDN homograph attacks.

The idea is simple: use characters that look the same in order to dupe users. Sometimes the characters come from a different language set; sometimes it’s as simple as a capital ‘I’ standing in for a lowercase ‘l’.
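As a rough illustration, a look-alike domain can be flagged by comparing it character by character against a trusted name using a small table of confusable pairs. This is a toy sketch; real detection relies on the full Unicode confusables data and also handles multi-character swaps:

```python
# Minimal table of commonly confused character pairs (far from complete)
CONFUSABLES = {"g": "q", "q": "g", "i": "l1", "l": "i1", "o": "0", "0": "o"}

def looks_like(candidate: str, trusted: str) -> bool:
    # Flag a candidate that differs from the trusted name only at positions
    # where the differing character is a known look-alike.
    if candidate == trusted or len(candidate) != len(trusted):
        # (multi-character swaps such as 'rn' for 'm' need edit-distance logic)
        return False
    diffs = [(c, t) for c, t in zip(candidate, trusted) if c != t]
    return all(c in CONFUSABLES.get(t, "") for c, t in diffs)

print(looks_like("cigarpaqe", "cigarpage"))  # True: 'q' passes for 'g'
print(looks_like("zoplm", "zopim"))          # True: 'l' passes for 'i'
```

Even a crude check like this would flag both of the imposter domains seen in this campaign.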

A threat actor is using this technique on several domain names to load the popular Inter skimming kit inside of a favicon file. It may not be their first rodeo either as some ties point to an existing Magecart group.


We collect information about web threats in various ways, from live crawling of websites to hunting with tools such as VirusTotal.

While writing rules for hunting is a continuous and time-consuming process, identifying relevant threats within large data sets is also a difficult exercise.

One of our YARA rules triggered a detection for the Inter skimming kit on a file uploaded to VirusTotal. Considering that Inter is a popular framework, we actually get dozens and dozens of alerts each day.

Figure 1: VirusTotal hunting with YARA

This one looked different, though, because the detected file was not typical HTML or JavaScript, but an .ico file instead.

One downside of finding files via VT hunting, especially when it comes to web threats, is that we don’t quite know where they come from. Thankfully, this one gave a little bit of a clue when we inspected the file and saw a “gate” (data exfiltration server):

Figure 2: Checking the content of a match for any clues

Homoglyph attack

At first glance, we read that domain as ‘cigarpage’ when in fact it is ‘cigarpaqe’. A quick lookup confirmed that cigarpage[.]com is indeed the correct website and cigarpaqe[.]com is the imposter.

The legitimate site was hacked and injected with an innocuous piece of code referencing an icon file:

Figure 3: Malicious code injection to load external resource

It plays an important role in loading a copycat favicon from the fake site, using the same URI path in order to keep it as authentic as possible. This is actually not the first time that we see skimming attacks abusing the favicon file.

Figure 4: Side by side of the legitimate and decoy sites

The reason why the attackers are loading this favicon from a different location becomes obvious as we examine it more closely. While the legitimate file is small and typical, the one loaded from the homoglyph domain contains a large piece of JavaScript.
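That difference suggests a simple heuristic. The sketch below is an illustrative toy check, not how Malwarebytes detects these; it flags favicons that fail basic .ico sanity checks or contain script-like byte sequences:

```python
def favicon_suspicious(data: bytes, size_limit: int = 64 * 1024) -> bool:
    # Real .ico files begin with a 4-byte ICONDIR signature: reserved=0, type=1.
    well_formed = data[:4] == b"\x00\x00\x01\x00"
    # Script-looking byte sequences have no business inside an icon file.
    script_markers = any(m in data
                         for m in (b"function", b"eval(", b"document.", b"<script"))
    return (not well_formed) or script_markers or len(data) > size_limit

print(favicon_suspicious(b"\x00\x00\x01\x00" + b"\x00" * 16))            # False
print(favicon_suspicious(b"\x00\x00\x01\x00/*skimmer*/function q(){}"))  # True
```

A real scanner would parse the icon directory entries and image payloads properly, but the principle is the same: an icon should contain image data, nothing executable.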

Figure 5: Embedded data inside the favicon

Skimmer

This JavaScript is the one that originally triggered a detection for our Inter skimming kit YARA rule. The screenshot below shows the form fields on a payment page that are being monitored and their corresponding data.

Figure 6: Skimming script

The gate used for exfiltration is on the same domain that was used to host the malicious favicon file.

Figure 7: Data exfiltration request

Homoglyph attacks with a historic tie to Magecart Group 8

The threat actor targeted not only that one website, but several more belonging to the same victim.

Looking at the malicious infrastructure, we can see several domains were registered recently with the same homoglyph technique.

Figure 8: Connections between homoglyphs and known infrastructure

Here are the original domain names on the left, and their homoglyph versions on the right:

A fourth domain stands out from the rest: zoplm[.]com. This is also a homoglyph, but that domain has a history. It was previously associated with Magecart Group 8 (RiskIQ)/CoffeMokko (Group-IB) and was recently registered again after several months of inactivity.

Figure 9: RiskIQ heatmap for the domain

The skimming code sometimes referred to as CoffeMokko is quite different from the one involved here. However, according to Group-IB, this threat actor may have reused skimming code from others, in particular Group 1 (RiskIQ) in a skimmer also known as Grelos and seen in several attacks.

In addition, Group 8 was documented in high-profile breaches, including one that is relevant here: the MyPillow compromise. That attack involved injecting a malicious third-party JavaScript library hosted on a homoglyph of the MyPillow domain.

While homoglyph attacks are not restricted to one threat actor, especially when it comes to spoofing legitimate web properties, it is still interesting to note in correlation with infrastructure reuse.

Combining techniques

Threat actors love to take advantage of any technique that will provide them with a layer of evasion, no matter how small that is.

Code re-use poses a problem for defenders as it blurs the lines between the different attacks we see and makes any kind of attribution harder.

One thing we know from experience is that previously used infrastructure has a tendency to come back up again, whether from the same threat actor or different ones. It may sound counterproductive to leverage already known (and likely blacklisted) domains or IPs, but it has its advantages, too, in particular when a number of compromised (and never cleaned up) sites still load third-party scripts from them.

We contacted the victim site, but noticed that the malicious code had already been removed. Malwarebytes users are protected against this homoglyph attack.

Figure 10: Malwarebytes Browser Guard protecting shoppers

Indicators of Compromise

Homoglyph domains/IP

cigarpaqe[.]com
fleldsupply[.]com
winqsupply[.]com
zoplm[.]com
51.83.209[.]11

The post Inter skimming kit used in homoglyph attacks appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Data Accountability and Transparency Act of 2020 looks beyond consent

Malwarebytes - Wed, 08/05/2020 - 16:35

In the United States, data privacy is hard work—particularly for the American people. But one US Senator believes it shouldn’t have to be.

In June, Democratic Senator Sherrod Brown of Ohio released a discussion draft of a new data privacy bill to improve Americans’ data privacy rights and their relationship with the countless companies that collect, store, and share their personal data. While the proposed federal bill includes data rights for the public and data restrictions for organizations that align with many previous data privacy bills, its primary thrust is somewhat novel: Consent is unmanageable at today’s scale.

Instead of having to click “Yes” to innumerable, unknown data collection practices, Sen. Brown said, Americans should be able to trust that their online privacy remains intact, no clicking necessary.

As the Senator wrote in his opinion piece published in Wired: “Privacy isn’t a right you can click away.”

The Data Accountability and Transparency Act

In mid-June, Sen. Brown introduced the discussion draft of the Data Accountability and Transparency Act (which does not appear to have an official acronym, and which bears a perhaps confusing similarity in title to the 2014 law, the Digital Accountability and Transparency Act).

Broadly, the bill attempts to wrangle better data privacy protections in three ways. First, it grants now-commonly proposed data privacy rights to Americans, including the rights of data access, portability, transparency, deletion, and accuracy and correction. Second, it places new restrictions on how companies and organizations can collect, store, share, and sell Americans’ personal data. The bill’s restrictions are tighter than many other bills, and they include strict rules on how long a company can keep a person’s data. Finally, the bill would create a new data privacy agency that would enforce the rules of the bill and manage consumer complaints.

Buried deeper into the bill though are two proposals that are less common. The bill proposes an outright ban on facial recognition technology, and it extends what is called a “private right of action” to the American public, meaning that, if a company were to violate the data privacy rights of an everyday consumer, that consumer could, on their own, bring legal action against the company.

Frustratingly, that is not how it works today. Instead, Americans must often rely on government agencies or their own state Attorney General to get any legal recourse in the case of, for example, a harmful data breach.

If Americans don’t like the end results of the government’s enforcement attempts? Tough luck. Many Americans faced this unfortunate truth last year, when the US Federal Trade Commission reached a settlement agreement with Equifax, following the credit reporting agency’s enormous data breach which affected 147 million Americans.

Announced with some premature fanfare online, the FTC secured a way for Americans affected by the data breach to apply for up to $125 each. The problem? If every affected American actually opted for a cash repayment, the real money they’d see would be 21 cents. Cents.

That’s what happens for one of the largest data breaches in recent history. But what about for smaller data breaches that don’t get national or statewide attention? That’s where a private right of action might come into play.

As we wrote last year, some privacy experts see a private right of action as the cornerstone to an effective, meaningful data privacy bill. In speaking then with Malwarebytes Labs, Purism founder and chief executive Todd Weaver said:

“If you can’t sue or do anything to go after these companies that are committing these atrocities, where does that leave us?” 

For many Americans, it could leave them with a couple of dimes in their pocket.

Casting away consent management in the Data Accountability and Transparency Act

Today, the bargain that most Americans agree to when using various online platforms is tilted against their favor. First, they are told that, to use a certain platform, they must create an account, and in creating that account, they must agree to having their data used in ways that only a lawyer can understand, described to them in a paragraph buried deep in a thousand-page-long end-user license agreement. If a consumer disagrees with the way their data will be used, they are often told they cannot access the platform itself. Better luck next time.

But under the Data Accountability and Transparency Act, there would be no opportunity for a consumer’s data to be used in ways they do not anticipate, because the bill would prohibit many uses of personal data that are not necessary for the basic operation of a company. And the bill’s broad applicability affects many companies today.

Sen. Brown’s bill targets what it calls “data aggregators,” a term that includes any individual, government entity, company, corporation, or organization that collects personal data in a non-insignificant way. Individual people who collect, use, and share personal data for personal reasons, however, are exempt from the bill’s provisions.

The bill’s wide net thus includes all of today’s most popular tech companies, from Facebook to Google to Airbnb to Lyft to Pinterest. It also includes the countless data brokers who help power today’s data economy, packaging Americans’ personal data and online behavior and selling it to the highest bidders.

The restrictions on these companies are concise and firm.

According to the bill, data aggregators “shall not collect, use, or share, or cause to be collected, used, or shared any personal data,” except for “strictly necessary” purposes. Those purposes are laid out in the bill, and they include “providing a good, service, or specific feature requested by an individual in an intentional interaction,” engaging in journalism, conducting scientific research, employing workers and paying them, and complying with laws and with legal inquiries. In some cases, the bill allows for delivering advertisements, too.

The purpose of these restrictions, Sen. Brown explained, is to prevent the aftershock of worrying data practices that impact Americans every day. Because invariably, Sen. Brown said, when an American consumer agrees to have their data used in one obvious way, their data actually gets used in an unseen multitude of other ways.

Under the Data Accountability and Transparency Act, that wouldn’t happen, Sen. Brown said.

“For example, signing up for a credit card online won’t give the bank the right to use your data for anything else—not marketing, and certainly not to use that data to sign you up for five more accounts you didn’t ask for (we’re looking at you, Wells Fargo),” Sen. Brown said in Wired. “It’s not only the specific companies you sign away your data to that profit off it—they sell it to other companies you’ve never heard of, without your knowledge.”

Thus, Sen. Brown’s bill proposes a different data ecosystem: Perhaps data, at its outset, should be restricted.

Are data restrictions enough?

Doing away with consent in tomorrow’s data privacy regime is not a unique idea—the Center for Democracy and Technology released its own draft data privacy bill in 2018 that extended a set of digital civil rights that cannot be signed away.

But what if consent were not something to be replaced, but rather something to be built on?

That’s the theory proposed by the Electronic Frontier Foundation, said Adam Schwartz, a senior staff attorney for the digital rights nonprofit.

Schwartz said that Sen. Brown’s bill follows a “kind of philosophical view that we see in some corners of the privacy discourse, which is that consent is just too hard—that consumers are being overwhelmed by screens that say ‘Do you consent?’”

Therefore, Schwartz said, for a bill like the Data Accountability and Transparency Act, “in lieu of consent, you see data minimization,” a term used to describe the set of practices that require companies to collect only what they need, store only what is necessary, and share as little as possible when giving the consumer what they asked for.
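The data minimization idea described above can be sketched in a few lines of code. In this illustration (the purposes and field names are hypothetical, not taken from the bill or from EFF), a service keeps only the fields strictly necessary for the purpose the consumer actually requested:

```python
# Hypothetical sketch of purpose-based data minimization:
# only fields strictly necessary for a declared purpose are retained.

ALLOWED_FIELDS = {
    "ship_order": {"name", "shipping_address"},
    "process_payment": {"name", "card_token"},
}

def minimize(purpose, submitted):
    """Drop every field not strictly necessary for the declared purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in submitted.items() if k in allowed}

profile = {
    "name": "Jane Doe",
    "shipping_address": "1 Main St",
    "card_token": "tok_123",
    "browsing_history": ["..."],  # never on any allow-list, always dropped
}

print(minimize("ship_order", profile))
# keeps only name and shipping_address
```

An unrecognized purpose yields an empty record, which is the point: under a minimization regime, data that serves no declared, necessary purpose is simply never kept.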

But instead of subscribing only to data minimization, Schwartz said, EFF takes what he called a “belt-and-suspenders” approach that includes consent. In other words, the more support systems for consumers, the better.

“We concede there are problems with consent—confusing click-throughs, yes—but think that if you do consent plus two other things, it can become meaningful.”

To make a consent model more meaningful, Schwartz said, consumers should receive two other protections. First, any screens or agreements that ask for a user’s consent should not use any “dark patterns,” a term for user-experience design techniques that push a consumer toward a decision that does not benefit them. For example, a company could ask for a user’s consent to use their data in myriad, imperceptible ways, and present the choice as a bright, bold green button to accept and pale gray, small text to decline.

The practice is popular—and despised—enough to warrant a sort of watchdog Twitter account.

Second, Schwartz said, a consent model should require a ban on “pay for privacy” schemes, in which organizations and companies could retaliate against a consumer who opts into protecting their own privacy. That could mean consumers pay a literal price to exercise their privacy rights, or it could mean withholding a discount or feature that is offered to those who waive their privacy rights.

Sen. Brown’s bill does prohibit “pay for privacy” schemes—a move that we are glad to see, as we have reported on the potential dangers of these frameworks in the past.

What’s next?

Because Congress is attempting—and failing—to properly address the likely immediate homelessness crisis that will kick off this month due to the cratering American economy colliding with the evaporation of eviction protections across the country, an issue like data privacy is probably not top of mind.

That said, the introduction of more data privacy bills over the past two years has pushed the legislative discussion into a more substantial realm. Just a little more than two years ago, data privacy bills took more piecemeal approaches, focusing on the “clarity” of end-user license agreements, for example.

Today, the conversation has advanced to the point that a bill like the Data Accountability and Transparency Act does not seek “clarity”; it seeks to do away with the entire consent infrastructure built around us.

It’s not a bad start.

The post Data Accountability and Transparency Act of 2020 looks beyond consent appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Business email compromise: gunning for goal

Malwarebytes - Tue, 08/04/2020 - 15:00

The evergreen peril of business email compromise (BEC) finds itself in the news once more. This time, major English Premier League football teams almost fell victim to fraudsters, to the tune of £1 million.

First half: fraudsters on the offensive

Fraudsters compromised a Managing Director’s email account after the director logged into a phishing portal reached via a bogus email. Fake accounts set up during the transfer window to buy and sell players provided the required opening, and the fraudsters inserted themselves into the conversations with ease. Both clubs were conversing with fakes while the scammers changed the banking details for payment. In the end, no money reached them, as the bank recognised the fraudulent bank account.

As with so many BEC attacks, the weak point was unsecured email with no additional measures in place. Some 2FA would have helped immeasurably here, along with additional precautions. We’ve talked about this previously, where organisations may have to accept some slowdown in their activities behind the scenes for the extra protection afforded. Does the CEO need to confirm wires over the phone with someone in another timezone? Will it slow things down a little?

That is, for some, the cost of (scammers) doing business. The trick is coming up with solutions that work best for you, in a way that doesn’t meet with objections from the board or from the people using these processes daily.

The sporting sector is under attack digitally on all fronts at the moment. You can read about some of the other attacks, and a few more BEC-related shenanigans, in the NCSC report.

Second half: BEC keeps the pressure up

BEC scams have gained a lot of visibility these past few weeks.

Big financial losses sit alongside the embarrassment of going public about a compromise. We almost certainly don’t know the true extent of the damage; ransomware and similar blackmail threats cause the same problems when trying to estimate impact.

BEC isn’t just some sort of amateur hour, either. The pros are absolutely doing what they can in this realm to further enhance their profits.

Extra time: the long arm of the law

Organisations and people often realise too late that sending wires means the cash is gone forever. The attack relies on stealth and making away with the money without anybody noticing until it’s too late.

On the other hand, busts do happen. It turns out that being massively visible, with some 2.4 million Instagram followers, might not be the best way to remain Guy Incognito. After a little under 1 million dollars was swiped from a victim in the US, the FBI found evidence of communications between a popular social media star and the alleged co-conspirators of the fraud. The FBI filed a criminal complaint in June which alleges that all of the social media star’s wealth was gained illegally.

Interestingly, there’s mention of yet another attempt on an English Premier League football club. This time, however, the money up for grabs was significantly larger: £100 million, versus £1 million.


Penalties: one final multi-pronged attack

It’s not just standard BEC we need to be concerned about. There are a lot of divergent routes into your business originating from roughly the same starting position. Vendor email compromise has been gaining prominence since its more well-known sibling came to light, so add that to the growing list of things to defend against. The successful attack on a major European cinema chain for $21 million is starting to seem like small potatoes at this point, though most definitely not for anyone caught in the fallout.

Some scammers roll with malware. For others, it’s a case of burning a horribly expensive exploit. The hope is that it’ll make several times the amount paid for it initially. The rest lurking in the shadows? Big money from malvertising, or gaming social media with a splash of viral spread and a lot of stolen clicks.

Meanwhile, over there, we have a group of people piecing together the inner workings of your organisation from information freely available online. At this very moment, they’re considering sending some innocent missive, just to see if the mail address is live and if the person responsible for it replies.

You won’t hear from them again…but you almost certainly will see a mail from something claiming to be your system administrator urging you to reset your login details.

Where both you and your organisation’s cash reserves end up after that is entirely down to whatever planning was done beforehand.

How ready will you be when the business email compromisers come calling?

The post Business email compromise: gunning for goal appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Lock and Code S1Ep12: Pinpointing identity and access management’s future with Chuck Brooks

Malwarebytes - Mon, 08/03/2020 - 15:30

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Chuck Brooks, cybersecurity evangelist and adjunct professor for Georgetown University’s Applied Intelligence Program and graduate Cybersecurity Programs, about identity and access management technology.

This set of technologies and policies controls who accesses what resources inside a system—from company files being locked away for only some employees, to even your online banking account being accessible only to you.

But with more individuals using more accounts to access more resources than ever before, threats have similarly emerged.

Tune in to hear about the uses of identity and access management technology, how the tech will be influenced by other technologies in the future, and more, on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

We cover our own research, as well as other cybersecurity news.

Stay safe, everyone!

The post Lock and Code S1Ep12: Pinpointing identity and access management’s future with Chuck Brooks appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Avoid these PayPal phishing emails

Malwarebytes - Fri, 07/31/2020 - 15:00

For the last few weeks, there’s been a solid stream of fake PayPal emails in circulation, twisting FOMO (fear of missing out) into DO THIS OR BAD THINGS WILL HAPPEN. It’s one of the most common tools in the scammer’s arsenal, and a little pressure applied in the right way often brings results for them.

Claim people are going to lose something, or incur charges, or miss out on a valuable service, and they’ll come running. Below is an outline of who these emails claim to be from, what they look like, and the kind of panic-clicking that they’re pushing. These are just a few examples; there are many, many others.

Common factors

Most of the mails we’ve seen claim to be sent from


Or variations thereof, although the actual email address used is frequently just a mishmash of random letters / words / numbers. They also mostly claim that your account is limited or restricted in some way, or that there’s been some unusual activity on your account and you must now prove you were the one making (non-existent) transactions.

It’s very similar to this batch of missives from 2015, where scammers were after credit card / payment details. Here are some of the mails, to give you an idea of what to look out for. They are typically awash with typos, and we’ve not corrected any of their mistakes.

Scam mails

Re: [Important] – Your account was temporary limited

We would like to inform you of certain modifications to our user contracts which concern you.

No action is required on your part. However, if you would like to know more, we invite you to consult our Policy Updates page where you will find the details of these modifications, in which cases they apply and how to refuse them, if applicable.

After a recent review of your account activity. we’ve determined you are in violation of PayPal’s Acceptable Use Policy. Your account has been limited until we hear from you. While your account is limited, some options in your account won’t be available.

Re: [Renewal of the Order Receipt] Sign Up for Bank Statement Updates use Google Chrome from Marshall Islands

Dear Customer Service

Your paypal account has been limited because we’ve noticed significanyt changes in your account activity. As your payment ptocessor, we need to understand these changes better. This account limitation will affectr your ability to:

Send or receive money

Withdraw money from your account

Add or remove a card & bank account

Dispute a transaction

Close your account

What to do next?

Please logi in to your paypal account and proviude the requested information thought {SIC} the resolution center

Re: Submitted : Statement update login with Google Chrome From Taiwan, Province of China

Your PayPal account has been limited

Dear Customer,

Our service is improving the security system for all PayPal account. The reason, many accounts have been hacked by someone to order an item using a credit / debit / bank card in account associated.

For the convenience and security of PayPal, we have limited all accounts registered.

PayPal is the safer, faster way to pay. To recovery your account, you can click the link button below and proceed with identity verification to prove that it is your account.

Re: Reminder: [Daily Report] [Update News] [System known] Update-informatie zie factuur van – Statement Update New Login

Your paypal account is temporarily limited

Hello client,

We noticed that you’ve been using your Paypal account in a questionable manner. To understand this better, we just need more information from you.

To ensure that your account remains secure, we need you to take action on your account. We’ve also temporarily limited certain features in your account

Currently, You won’t be able to:

• Send Payments

• Withdraw Funds

What should you do?

Log in to your Paypal account follow the steps and perform the required tasks.

RE: Reminder: [Daily Report] [Statement Agreement] We have sent notifications. Automatic updates 

Your account has been limited.

Hello, Customer

We’ve limited your account

After a recent review of your account activity, we’ve determined you are in violation of PayPal’s Acceptable Use Policy. Please log in to confirm your identity and review all your recent activity

You can find the complete PayPal Acceptable Use Policy by clicking Legal at the bottom of any PayPal page.

Help and advice for avoiding scams

PayPal has expanded its security resources in recent years. They now have a portal for multiple forms of suspicious activity, a section for reporting phish scams, and protection for buyers and sellers.

You can also check out part 1 of our 3-part Phishing 101 guide.

These emails won’t be drying up anytime soon, so please be on your guard and, as always, visit the PayPal website directly from your browser should you receive any messages claiming you’ve been limited or locked out. If it’s genuine, then customer service will be able to assist. If it isn’t, help both PayPal and everyone else by reporting the phish. It’s a win-win scenario.
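Beyond reporting, one small check can be automated. The sketch below is a minimal illustration with an assumed domain allow-list, not a substitute for real phishing detection, since display names and headers can be forged; it simply flags messages whose From address does not belong to a genuine PayPal domain:

```python
from email.utils import parseaddr

LEGIT_DOMAINS = {"paypal.com", "paypal.co.uk"}  # illustrative allow-list

def looks_spoofed(from_header):
    """Flag a From: header whose domain is not a PayPal domain.

    Subdomains of an allowed domain pass; lookalikes such as
    paypal-alerts.com or paypal.com.evil.net do not.
    """
    _, addr = parseaddr(from_header)
    domain = addr.rpartition("@")[2].lower()
    return not any(
        domain == d or domain.endswith("." + d) for d in LEGIT_DOMAINS
    )

print(looks_spoofed("PayPal <service@paypal.com>"))        # False
print(looks_spoofed("PayPal <secure@paypal-alerts.com>"))  # True
```

Note the display name is ignored entirely: scammers routinely set it to “PayPal” while the actual address is a mishmash of random characters, which is exactly the pattern described above.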

The post Avoid these PayPal phishing emails appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Malspam campaign caught using GuLoader after service relaunch

Malwarebytes - Thu, 07/30/2020 - 16:55

They say any publicity is good publicity. But perhaps this isn’t true for CloudEye, an Italian firm that claims to provide “the next generation of Windows executables’ protection”.

First described by Proofpoint security researchers in March 2020, GuLoader is a downloader used by threat actors to distribute malware on a large scale. In June, CloudEye was exposed by CheckPoint as the entity behind GuLoader.

Following the spotlight from several security firms and news outlets, GuLoader activity dropped in late June. But around the second week of July, we started seeing the downloader in malspam campaigns again.

Protection and evasion attract criminal element

While the concept of downloaders is certainly not new, GuLoader itself found its origins in DarkEye Protector, a crypter sold in various forums circa 2011, which later evolved into CloudEye.

Designed as a product to prevent reverse engineering and protect against other forms of code theft, CloudEye is a Visual Basic 6 downloader that leverages cloud services to store and retrieve the final piece of software (in the form of heavily obfuscated shellcode) a customer wants to install.

GuLoader/CloudEye has proved to be very effective at bypassing sandboxes and security products including network-based detection.

Figure 1: GuLoader detecting that it is being executed in a sandbox

This is exactly the kind of feature criminals want in order to distribute their malware. Unsurprisingly, that is what happened: at one point, GuLoader became the most popular malicious attachment in our spam honeypot.

Figure 2: Most popular attachments by tags in Malwarebytes email telemetry

Back in business

On July 11, CloudEye announced it was resuming its business after about a month of interruption during which time sales stopped and accounts used by malicious actors were banned.

Figure 3: CloudEye website announcing return of service

What prompted us to visit the company’s website and see this announcement was seeing GuLoader in the wild back again. We noted malspam activity using the classic DHL delivery lure pushing GuLoader again:

Figure 4: Malspam using DHL theme to push GuLoader

GuLoader and stealers

The attachment is an ISO file, which Windows 10 can open by mounting it as a drive. It contains the GuLoader executable, written in Visual Basic. Using a decompiler, you can reveal one of its forms, which is very typical of GuLoader:

Figure 5: Decompiled view of GuLoader showing VB form

When executed, GuLoader attempts to connect to a remote server to download its payload. By the time we checked this sample, that website no longer responded. However, a PCAP file available on VirusTotal allowed us to ‘trick’ the malware into loading the payload as normal.

Figure 6: Dumping shellcode from memory to disk

We used PE-Sieve to reconstruct the encrypted payload as a standalone PE file, which allowed us to dump the shellcode from memory into a file on disk.

Figure 7: Comparing shellcode with file on disk

It turned out to be the FormBook stealer, which is consistent with the type of payloads we see associated with GuLoader.
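Since the chain above begins with an ISO attachment, mail filtering can flag the lure before anyone mounts the image. This is a minimal sketch using Python’s standard email module; the extension list and the toy DHL-style message are illustrative assumptions, not Malwarebytes detection logic:

```python
from email.parser import Parser

RISKY_EXTENSIONS = {".iso", ".img", ".exe", ".vbs"}  # illustrative list

def risky_attachments(raw_message):
    """Return filenames of attachments with commonly abused extensions."""
    msg = Parser().parsestr(raw_message)
    flagged = []
    for part in msg.walk():
        filename = part.get_filename()
        if filename and any(
            filename.lower().endswith(ext) for ext in RISKY_EXTENSIONS
        ):
            flagged.append(filename)
    return flagged

# A toy multipart message with an ISO attachment, mimicking the DHL lure.
raw = (
    "From: DHL <delivery@example.com>\n"
    "Subject: Shipment notice\n"
    "MIME-Version: 1.0\n"
    'Content-Type: multipart/mixed; boundary="XX"\n'
    "\n"
    "--XX\n"
    "Content-Type: text/plain\n"
    "\n"
    "Your parcel is waiting.\n"
    "--XX\n"
    'Content-Type: application/octet-stream; name="invoice.iso"\n'
    'Content-Disposition: attachment; filename="invoice.iso"\n'
    "\n"
    "binarydatahere\n"
    "--XX--\n"
)

print(risky_attachments(raw))  # ['invoice.iso']
```

ISO is attractive to attackers precisely because many gateways historically scanned inside ZIP archives but waved disk images through, while Windows 10 happily mounts them with a double-click.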

Popular tool already cracked?

We believe one particular threat group is engaged in malspam campaigns both with and without GuLoader, in the latter case using RAR attachments to spread other stealers.

Once a tool has proved to be popular and effective for criminal purposes (whether it was built for legitimate reasons or not), it will continue to fuel malware campaigns.

It’s quite possible that, of the many GuLoader builders in circulation, some have been cracked and are now being used by threat actors of their own accord.

We track the GuLoader malspam campaigns and continue to protect our customers against this threat.

Figure 8: Malwarebytes Nebula’s detection of GuLoader

Thanks to S!Ri for the heads up on the return of GuLoader.

Indicators of Compromise



Shellcode loaded by GuLoader


Decoded shellcode into binary


The post Malspam campaign caught using GuLoader after service relaunch appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Cloud workload security: Should you worry about it?

Malwarebytes - Wed, 07/29/2020 - 17:30

Due to the increasing use of the cloud, organizations find themselves dealing with hybrid environments and nebulous workloads to secure. Containerization and cloud-stored data have provided the industry with a new challenge. And while you can try to make the provider of cloud data storage responsible for the security of the data, you will have a hard time trying to convince the provider that they are responsible for your cloud workload security.

What are you talking about?

Let us explain some of the less common terms for those that are unfamiliar with them.

The goal of containerization is to allow applications to run in an efficient and bug-free way across different computing environments, whether that is a desktop or a virtual machine, running Windows or Linux. The demand for applications to run consistently among different systems and infrastructures has moved development of this technology along at a rapid pace. The use of different platforms within business organizations and the move to the cloud are undoubtedly huge contributors to this demand. Containerization is almost always conducted in a cloud environment, which contributes to its scalability.

While there are many providers of cloud data storage, providers that offer containerization services for the moment are almost exclusively the big players, like Amazon Web Services, Oracle, and Microsoft Azure.

Static data, or even data that changes constantly, is easier to protect than active processes. And a cloud workload can range from simple web applications to complex organization-specific workflow management systems.

Cloud workload security

From a security standpoint, the isolation between containers is a good thing. If one container is compromised, it is almost impossible for any malware to cross over to another container, as the host operating system maintains separate namespaces for each container. But as you can imagine, this separation also makes it harder to devise a security solution for the whole complex of containers in use.

Traditionally, security software was designed to keep your IT environment protected from the outside world. Nowadays, cutting the environment off from the outside world would mean cloud resources becoming unavailable and remote workers being disconnected from the company network. Because security was one of the major concerns holding organizations back from moving their data and workloads to the cloud, a lot of attention has been given to cloud workload security.

The first step in expanding your security perimeter to include the cloud workload is to make the cloud environment secure by design, which means that attention has been given to security implications during every step of the design.

Your IT department and cloud resources

One common mistake is that organizations, or teams within an organization, start using cloud resources without involving the in-house IT/security department. While this may seem trivial, and they may not even think of the new “app” as a cloud resource, it does have an impact on the security perimeter, and the responsible team should be aware of the change.

Organization of cloud security

The way cloud security is organized depends very much on where the responsibility for the security of the cloud resources lies. Models vary from a completely in-house approach to a fully external one, where the cloud security provider takes full responsibility for all the resources and provides the necessary security layers.

Application layer

Web applications are secured in the application layer. This layer generally consists of a few elements designed to protect the applications from outside threats. The main element can be a customized firewall combined with end-to-end encryption. This will shield the applications from threats and protect the data stream from being intercepted and read.

Hypervisor layer

Another important layer for cloud workload security is the hypervisor layer. The security setup in this layer is designed to keep the cloud server’s virtualization environment safe. In this environment you will find the guest operating systems and virtual networks. This layer’s security also covers the containers running in virtual machines. The main component of security in this layer is application hardening: in-house apps need to be coded with security in mind, and third-party software needs to be updated and patched in a timely manner.

Security orchestration

In such a layered and complex environment, another important element is security orchestration. Orchestration in this context implies:

  • Solutions working together without interrupting each other.
  • Streamlining workflow processes so that each component does what it does best.
  • Unification so that data is exported in a user-friendly and organized manner.

Ideally, security orchestration is possible even when security software comes from different vendors. However, solutions often need to be modified to get the most out of what each has to offer, without one interfering with the effectiveness of another.

In general, it’s easier to effectively orchestrate specialized applications from different vendors than it is to orchestrate overlapping applications from different vendors. The overlap between rivalling applications tends to be the field where the accidents happen. Either because features are disabled so they do not cause interference, or because one application is expected to catch something and the other doesn’t need to watch that area.

Rise in importance

As cloud applications continue to grow in absolute numbers and in relative size for your organization, it is imperative to look at the structure and organization of your security perimeter and at the way you want to secure it. Some points of attention as your organization grows in this direction:

  • Keep the security and IT teams aware of all cloud applications in use.
  • Scout the possibilities of security applications from different vendors and how you can best manage and orchestrate them.
  • Inform yourself about the different types of cloud-based applications you are using and whether they need a specific security approach.
  • Do not rely on your cloud provider to have security automatically arranged for you. If you do decide to rely on the cloud services provider for security as well, make sure you and your IT staff are aware of the boundaries and limitations of their coverage.

Stay safe everyone!

The post Cloud workload security: Should you worry about it? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

TikTok is being discouraged and the app may be banned

Malwarebytes - Tue, 07/28/2020 - 16:55

In recent news, retail giant Amazon sent a memo to employees telling them to delete the popular social media app TikTok from their phones. The memo stated that the app posed a security risk, without going into details. Later, the memo was withdrawn without explanation, other than that it had been sent in error. Are we curious yet, my dear Watson?

What is TikTok

For those of us that can’t tell one social media app from another: TikTok is one of the most popular ones, designed to let users upload short videos for others to like and share. Functionally, it has grown from a basic lip-sync app to host a wide variety of short video clips. It is predominantly popular among a younger audience; most of its users are between 13 and 24 years old. In the first quarter of 2019, TikTok was the most downloaded app in the App Store, with over 33 million installs. TikTok is owned by a Chinese tech company called ByteDance.

Nation states’ attention

This wasn’t the first time TikTok faced removal from a number of devices. India has already banned TikTok, and the US and Australia are also considering blocking the app. In fact, in December, the US Army banned TikTok from its phones, and in March, US senators proposed a bill that would block TikTok from all government devices.

Is TikTok safe?

For starters, TikTok being a Chinese product does not help. A number of Chinese apps and software packages have been under investigation and were found to be “calling home”. Now this does not automatically mean they are spying on you, but when you start your investigation with a negative expectation, you are inclined to see it as such. And gathering information about a client without their consent is wrong.

The fact that TikTok is different in China itself, where it goes under the name Douyin, is another factor. But this could be explained away as well, as China has a reputation for spying on its own population; maybe the foreign version is less intrusive than the domestic one. And some governments have their own reasons not to trust anything of Chinese origin, or another agenda for boycotting products originating from China.

Adding to the suspicion, a Reddit user by the handle of bangorlol posted comments about the data the app was found to send home when he reverse-engineered it. The same user has started a thread on Reddit where he hopes to cooperate with other reverse engineers on newer versions of the app. One behavior confirmed by another source is that the app copies information from the clipboard, which certainly goes above and beyond what other social media apps do.

TikTok’s defense

TikTok’s main defense consists of the fact that most of their senior staff are outside of China. On their blog they also specified where their data are stored and that the data are not subject to Chinese law.

“TikTok is led by an American CEO, with hundreds of employees and key leaders across safety, security, product, and public policy here in the US. We have never provided user data to the Chinese government, nor would we do so if asked.”

Options to ban TikTok completely

Besides organizations like Wells Fargo and some branches of the US military asking their employees to refrain from using the app on devices that also contain organizational data, we have also seen countries advocating a total ban of the app. But this is not an easy goal to achieve, and it could also prove to be ineffective.

For a total ban of an app, you would have to get it removed from the official app stores. This is harder to achieve for some countries than for others. India banned TikTok along with 58 other Chinese apps. The US government would have to find a legally sound reason to request that Apple and Google pull TikTok from their app stores, and it would probably meet with a lot of resistance.

Besides, if people want to install a popular app like TikTok, there are many other sources. Downloads are not limited to the official app stores, so a determined user will be able to find the app elsewhere. And a ban does not stop the millions of active users from continuing to use the app.

Another option is to give TikTok the same treatment as was handed to Huawei: put the company on the Commerce Department’s entity list, which would deny it access to US technology. Given the circumstances, that doesn’t accomplish much more than denying TikTok access to the app stores, with the same consequences as discussed above.

Social media and privacy

We have warned many times against posting privacy-sensitive information on social media and have offered guidance on helping you and your children use social media safely. We even posted a guide for those who want to remove themselves from the major social media platforms.

But when the social media app itself is determined to mine your data, it becomes a whole different story. We have seen no conclusive proof that this is true for TikTok, but some of the allegations are very serious and seem to be supported by facts and authoritative research.

Other analysts dismissed the researchers’ findings as jumping to conclusions. One thing is for sure: a full analysis without the help of the developers will take a lot of effort and time, and even then, the results may still be disputable. At this point we cannot be sure whether the TikTok app is spying on its users in a way that goes deeper than we might expect from an ordinary social media app.

All we can do at this point is to inform our users about the ongoing discussion and maybe explain some of the points that are being brought up. We also feel the need to repeat our warnings about the difficult relationship between social media and privacy. Obviously if any concrete facts should surface we will keep you posted.

Stay safe everyone!

The post TikTok is being discouraged and the app may be banned appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (July 20 – 26)

Malwarebytes - Mon, 07/27/2020 - 15:30

Last week on Malwarebytes Labs, our Lock and Code podcast delved into Bluetooth and beacon technology. We also dug into APT groups targeting India and Hong Kong, covered a law enforcement bust, and tried to figure out when, exactly, a Deepfake is a Deepfake.

Other cybersecurity news

Stay safe!

The post A week in security (July 20 – 26) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Deepfakes or not: new GAN image stirs up questions about digital fakery

Malwarebytes - Thu, 07/23/2020 - 15:00

Subversive deepfakes that enter the party unannounced, do their thing, then slink off into the night without anybody noticing are where it’s at. Easily debunked clips of Donald Trump yelling THE NUKES ARE UP or something similarly ludicrous are not a major concern. We’ve already dug into why that’s the case.

What we’ve also explored are the people-centric ways you can train your eye to spot stand-out flaws and errors in deepfake imagery—essentially, GANs (generative adversarial networks) gone wrong. There will usually be something a little off in the details, and it’s up to us to discover it.

Progress is being made in the realm of digital checking for fraud, too, with some nifty techniques available to see what’s real and what isn’t. As it happens, a story is in the news which combines subversion, the human eye, and even a splash of automated examination for good measure.

A deepfake letter to the editor

A young chap, “Oliver Taylor,” supposedly studying at the University of Birmingham, found himself with editorials published in major news outlets such as The Times of Israel and The Jerusalem Post. His writing “career” apparently kicked into life in late 2019, with additional articles appearing in various places throughout 2020.

After a stream of these pieces, everything exploded in April when a new article from “Taylor” landed making some fairly heavy accusations against a pair of UK-based academics.

After the inevitable fallout, it turned out that Oliver Taylor was not studying at the University of Birmingham. In fact, he was apparently not real at all and almost all online traces of the author vanished into the ether. His mobile number was unreachable, and nothing came back from his listed email address.

Even more curiously, his photograph bore all the hallmarks of a deepfake (or, controversially, not a “deepfake” at all; more on the growing clash over descriptive names later). Regardless of what you intend to class this man’s fictitious visage as, in plain terms, it is an AI-generated image designed to look as real as possible.

Had someone created a virtual construct and bided their time with a raft of otherwise unremarkable blog posts simply to get a foothold on major platforms before dropping what seems to be a grudge post?

Fake it to make it

Make no mistake, fake entities pushing influential opinions is most definitely a thing. Right-leaning news orgs have recently stumbled into just such an issue. Not so long ago, an astonishing 700 pages with 55 million followers were taken down by Facebook in a colossal AI-driven disinformation blowout dubbed “Fake Face Swarm.” This large slice of Borg-style activity made full use of deepfakes and other tactics to consistently push political messaging with a strong anti-China lean.

Which leads us back to our lone student, with his collection of under-the-radar articles, culminating in a direct attack on confused academics. The end point—the 700 pages worth of political shenanigans and a blizzard of fake people—could easily be set in motion by one plucky fake human with a dream and a mission to cause aggravation for others.

How did people determine he wasn’t real?

Tech steps up to the plate

A few suspicions, and the right people with the right technology in hand, is how they did it. There’s a lot you can do to weed out bogus images, and there’s a great section over on Reuters that walks you through the various stages of detection. No longer do users have to manually pick out the flaws; technology will (for example) isolate the head from the background, making frequently distorted areas easier to spot. Or we can make use of heatmaps generated by algorithms to highlight the areas most suspected of digital interference.

Even better, there are tools readily available which will give you the under-the-hood summary of what’s happening with one image.

Digging in the dirt

If you edit a lot of photographs on your PC, you’re likely familiar with EXIF metadata. This is a mashing together of lots of bits of information at the moment the photo is taken: camera/phone type, lens, GPS, colour details—the sky’s the limit. On the flipside, some of it, like location data, can potentially be a privacy threat, so it’s good to know how to remove it if need be.
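For the curious, EXIF data lives in a JPEG’s APP1 segment, so its presence can be detected simply by walking the file’s marker structure. Here’s a minimal sketch in Python; the `has_exif` helper and the hand-built sample bytes are our own illustration, not taken from any particular tool:

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Walk the JPEG marker segments and report whether an
    APP1 'Exif' segment is present."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):      # must begin with SOI marker
        return False
    offset = 2
    while offset + 4 <= len(jpeg_bytes):
        if jpeg_bytes[offset] != 0xFF:              # not a marker: stop walking
            break
        marker = jpeg_bytes[offset + 1]
        if marker == 0xDA:                          # start of scan: no more metadata
            break
        # The segment length covers the two length bytes plus the payload.
        (length,) = struct.unpack(">H", jpeg_bytes[offset + 2:offset + 4])
        payload = jpeg_bytes[offset + 4:offset + 2 + length]
        if marker == 0xE1 and payload.startswith(b"Exif\x00\x00"):
            return True
        offset += 2 + length
    return False

# A tiny hand-built JPEG fragment with an APP1 Exif segment...
with_exif = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00"
# ...and one without (SOI followed by a quantization table segment).
without_exif = b"\xff\xd8\xff\xdb\x00\x04\x00\x00"

print(has_exif(with_exif))     # True
print(has_exif(without_exif))  # False
```

Stripping the metadata amounts to rewriting the file with that APP1 segment omitted, which is essentially what “remove EXIF” tools do.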

As with most things, it really depends what you want from it. AI-generated images are often no different.

There are many ways to stitch together your GAN imagery. This leaves traces, unless you try to obfuscate it or otherwise strip some information out. There are ways to dig into the underbelly of a GAN image, and bring back useful results.

Image swiping: often an afterthought

Back in November 2019, I thought it would be amusing if the creators of “Katie Jones” had just lazily swiped an image from a face generation website, as opposed to agonising over the fake image details.

For our fictitious university student, it seems that the people behind it may well have done just that [1], [2]. The creator of the site the image was likely pulled from has said they’re looking to make their images no longer downloadable, and/or place people’s heads in front of a 100 percent identifiably fake background such as “space.” They also state that “true bad actors will reach for more sophisticated solutions,” but as we’ve now seen in two high-profile cases, bad actors with big platforms and influential reach are indeed just grabbing whatever fake image they desire.

This is probably because ultimately the image is just an afterthought; the cherry on an otherwise bulging propaganda cake.

Just roll with it

As we’ve seen, the image wasn’t tailor-made for this campaign. It almost certainly wasn’t at the forefront of the plan for whoever came up with it, and they weren’t mapping out their scheme for world domination starting with fake profile pics. It’s just there, and they needed one, and (it seems) they did indeed just grab one from a freely-available face generation website. It could just as easily have been a stolen stock model image, but that is of course somewhat easier to trace. 

And that, my friends, is how we end up with yet another subtle use of synthetic technology whose presence may ultimately have not even mattered that much.

Are these even deepfakes?

An interesting question, and one that seems to pop up whenever a GAN-generated face is attached to dubious antics or an outright scam. Some would argue a static, totally synthetic image isn’t a deepfake because it’s a totally different kind of output.

To break this down:

  1. The more familiar type of deepfake, where you end up with a video of [movie star] saying something baffling or doing something salacious, is produced by feeding a tool multiple images of that person. This nudges the AI into making the [movie star] say the baffling thing, or perform actions in a clip they otherwise wouldn’t exist in. The incredibly commonplace porn deepfakes would be the best example of this.
  2. The image used for “Oliver Taylor” is a headshot sourced from a GAN which is fed lots of images of real people, in order to mash everything together in a way that spits out a passable image of a 100 percent fake human. He is absolutely the sum of his parts, but in a way which no longer resembles them.

So, when people say, “That’s not a deepfake,” they’re wanting to keep a firm split between “fake image or clip based on one person, generated from that same person” versus “fake image or clip based on multiple people, to create one totally new person.”

The other common negative mark set against calling synthetic GAN imagery deepfakes is that the digital manipulations are not what make it effective. How can it be a deepfake if it wasn’t very good?

Call the witnesses to the stand

All valid points, but the counterpoints are also convincing.

If we’re going to dismiss these images’ right to deepfake status because the digital manipulation wasn’t effective, then we’re going to end up with very few bona fide deepfakes. The manipulation didn’t make this one effective because it wasn’t very good. By the same token, we’d never know when manipulation did make one effective, because we’d miss it entirely as it flew under the radar.

Even the best movie-based variants tend to contain some level of not-quite-rightness, and I have yet to place a bunch before me where I couldn’t spot at least nine out of 10 GAN fakes mixed in with real photos.

As interesting and as cool as the technology is, the output is still largely a bit of a mess. From experience, the combo of a trained eye and some of the detection tools out there make short work of the faker’s ambitions. The idea is to do just enough to push whatever fictional persona/intent attached to the image is over the line and make it all plausible—be it blogs, news articles, opinion pieces, bogus job posting, whatever. The digital fakery works best as an extra chugging away in the background. You don’t really want to draw attention to it as part of a larger operation.

Is this umbrella term a help or a hindrance?

As for keeping the tag “deepfake” away from fake GAN people, while I appreciate the difference in image output, I’m not 100 percent sure that this is necessarily helpful. The word deepfake is a portmanteau of “deep learning” and “fake.” Whether you end up with Nicolas Cage walking around in The Matrix, or you have a pretend face sourced from an image generation website, they’re both still fakes borne of some form of deep learning.

The eventual output is the same: a fake thing doing a fake thing, even if the path taken to get there is different. Some would argue this is a potentially needless and unnecessary split/removal of a catch-all definition which manages to helpfully and accurately apply to both above—and no doubt other—scenarios.

It would be interesting to know if there’s a consensus in the AI deep learning/GAN creation/analyst space on this. From my own experience talking to people in this area, the bag of opinions is as mixed as the quality from GAN outputs. Perhaps that’ll change in the future.

The future of fakery detection

I asked Munira Mustaffa, Security Analyst, if automated detection techniques would eventually surpass the naked eye forever:

I’ve been mulling over this question, and I’m not sure what else I could add. Yes, I think an automated deepfake checking can probably make better assessment than the human eye eventually. However, even if you have the perfect AI to detect them, human review will always be needed. I think context also matters in terms of your question. If we’re detecting deepfakes, what are we detecting against?

I think it’s also important to recognise that there is no settled definition for what is a deepfake. Some would argue that the term only applies to audio/videos, while photo manipulations are “cheapfakes”. Language is critical. Semantics aside, at most, people are playing around with deepfakes/cheapfakes to produce silly things via FaceApp. But the issue here is really not so much about deepfakes/cheapfakes, but it is the intent behind the use. Past uses have indicated how deepfakes have been employed to sway perception, like that Nancy Pelosi ‘dumbfake’ video.

At the end of the day, it doesn’t matter how sophisticated the detection software is if people are not going to be vigilant with vetting who they allow into their network or who is influencing their point of view. I think people are too focused on the concept that deepfakes’ applications are mainly for revenge porn and swaying voters. We have yet to see large scale ops employing them. However, as the recent Oliver Taylor case demonstrated to us, deepfake/cheapfake applications go beyond that.

There is a real potential danger that a good deepfake/cheapfake that is properly backstopped can be transformed into a believable and persuasive individual. This, of course, raises further worrying questions: what can we do to mitigate this without stifling voices that are already struggling to find a platform?

We’re deepfakes on the moon

We’re at a point where it could be argued deepfake videos are more interesting conceptually than in execution. MIT’s Centre for Advanced Virtuality has put together a rendition of the speech Richard Nixon was supposed to give if the moon landing ended in tragedy. It is absolutely a chilling thing to watch; however, the actual clip itself is not the best technically.

The head does not play well with the light sources around it, the neckline of the shirt is all wrong against the jaw, and the voice has multiple digital oddities throughout. It also doesn’t help that they use his resignation speech for the body, as one has to wonder about the optics of shuffling papers as you announce astronauts have died horribly.

No, the interesting thing for me is deciding to show the deceptive nature of deepfakes by using a man who was born in 1913 and died 26 years ago. Does anyone under the age of 40 remember his look, the sound of his voice outside of parody and movies well enough to make a comparison? Or is the disassociation from a large chunk of collective memory the point? Does that make it more effective, or less?

I’m not sure, but it definitely adds weight to the idea that for now, deepfakes—whether video or static image—are more effective as small aspects of bigger disinformation campaigns than attention drawing pieces of digital trickery.

See you again in three months?

It’s inevitable we’ll have another tale before us soon enough, explaining how another ghostly entity has primed a fake ID long enough to drop their payload, or sow some discord at the highest levels. Remember that the fake imagery is merely one small stepping stone to an overall objective and not the end goal in and of itself. It’s a brave new world of disruption, and perhaps by the time you’re pulling up another chair, I might even be able to give you a definitive naming convention.

The post Deepfakes or not: new GAN image stirs up questions about digital fakery appeared first on Malwarebytes Labs.

Categories: Techie Feeds

EncroChat system eavesdropped on by law enforcement

Malwarebytes - Wed, 07/22/2020 - 15:00

Due to the level of sophistication of the attack, and the malware code, we can no longer guarantee the security of your device.

This text caused a lot of aggravation, worries, and sleepless nights. No one wants to hear the security of their device has been compromised by a malware attack. The good news is that the actual victims of this malware attack were almost exclusively criminals. The bad news is that the message was sent out by a provider called EncroChat, which had previously billed itself as private as an in-person conversation in a soundproof room.

EncroChat provides customers with secure messaging and cryptophones. Their cryptophones use OTR for messaging. Short for Off-the-Record, OTR is a cryptographic protocol that provides both authentication and end-to-end encryption for instant messaging. The protocol ensures that session keys will not be compromised even if the private key of the server is compromised: even when a server is seized, past conversations cannot be decrypted or traced back to the participants.
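That property, known as forward secrecy, comes from negotiating fresh, ephemeral Diffie-Hellman keys per session rather than relying on one long-term key. A toy Python sketch of the idea (the tiny parameters are illustrative only; real OTR uses large groups and an authenticated key exchange):

```python
import secrets

# Toy Diffie-Hellman parameters (illustrative only; far too small for real use).
P = 0xFFFFFFFB  # a prime modulus
G = 5           # generator

def new_session():
    """Each session generates a fresh ephemeral key pair,
    discarded once the session ends."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

# Session 1: both parties derive the same shared key from each other's publics.
a1_priv, a1_pub = new_session()
b1_priv, b1_pub = new_session()
session1_key = pow(b1_pub, a1_priv, P)          # Alice's view
assert session1_key == pow(a1_pub, b1_priv, P)  # matches Bob's view

# Session 2 uses brand-new ephemeral keys, so its key is unrelated to session 1's.
a2_priv, a2_pub = new_session()
b2_priv, b2_pub = new_session()
session2_key = pow(b2_pub, a2_priv, P)
```

Because the session-1 exponents are thrown away after use, seizing a server (or even a long-term identity key) later reveals nothing that decrypts session 1.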

What happened to EncroChat?

EncroChat, a company based in the Netherlands, advertises their services as safer than safe, stating that no messages are saved on their servers, which are located “offshore.” But at some point, Dutch law enforcement figured out the EncroChat servers were located in France and got to work, hoping to catch criminals in the act.

Decryption specialists who had been involved in the Ennetcom (Canada) and PGP Safe (Costa Rica) cases were consulted and managed to access the EncroChat systems—their method of access is still unknown to the public. When asked how they managed to follow conversations on EncroChat, the Netherlands’ Team High Tech Crime chose not to answer. They may hope to use the method again in the future against another service.

Based on the information disclosed by EncroChat, it is likely that law enforcement agencies managed to install software on the servers that provided the phones with updates or delivered malware to the phones in another form. Either way, infecting devices allowed them to see the unencrypted messages. In essence, with enough infected devices, law enforcement was able to follow conversations in real time.

The warning that EncroChat sent out said:

They repurposed our domains to launch an attack to compromise carbon units. With control of our domain they managed to launch a malware campaign against the carbon to weaken its security.

Another clue supporting this takeaway was the fact that some users complained that the wipe function no longer worked, an indication that the malware was active at the device level.

What happened to EncroChat users?

Hundreds of arrests have already been made in the UK, the Netherlands, France, the Middle East, and a few other countries. On top of that, law enforcement has millions of chat messages that can lead to more arrests or serve as evidence in upcoming lawsuits. International drug traffickers have been hit especially hard by the service going bust.

But law enforcement’s move to access encrypted conversations sets a dangerous precedent. Likely, the police had to act immediately on information that was potentially life threatening. However, without knowledge of how or why they breached the EncroChat system, their actions made encrypted chat users and operators suspicious about a possible leak. A criminal in the UK was confronted with an EncroChat message dating back to the end of 2019, so law enforcement agencies must have been monitoring the service for many months before users found out the system was compromised.

Why were so many criminals using EncroChat?

The EncroChat system was well organized and had gained a lot of trusting users over the years. Criminals felt secure enough to chat freely about everything: names of customers, drug deliveries, and even assassinations. And their trust was understandable, given what EncroChat had to offer:

  • Phones were dual boot, so users could instead start a standard Android operating system, making their phones look like a normal, old-fashioned model.
  • The phones had a “wipe all” button that would delete all the stored conversations in case of an arrest or other emergency.
  • No messages were stored on servers so they could not be seized and decrypted later.
  • Unlike PGP conversations, OTR conversations cannot be fully reconstructed even if you have both parties’ encryption keys.

EncroChat users paid hefty fees for this service—thousands of dollars per year, per device. The exorbitant fees may explain why the majority of the EncroChat clientele could be found on the wrong side of the law. Other parties that might have a vested interest in keeping their chat messages secret include government parties, journalists, security professionals, or lawyers. However, there are cheaper, if somewhat less sophisticated, alternatives for legitimate secret-keeping that law enforcement does not target.

After law enforcement agencies had taken down or compromised other providers, many European criminals flocked to EncroChat. An estimate by the French police indicated that 90 percent of the EncroChat users were engaged in criminal activity. However, of the 60,000 EncroChat end users, only 800 were arrested.

Encryption and law enforcement

Dutch law enforcement’s ability to breach EncroChat supports our point that the police don’t need built-in backdoors to catch criminals. Governments have asked for both means of observing data in transit, as well as retrieving data at rest on devices of interest. Looking at this case, we doubt that criminals would have chatted so freely about their activities had they known there was a backdoor—or even the capability of a backdoor—somewhere in the system.

But providing law enforcement with free access into platforms of their choosing is a slippery slope. For one, hacking into a secure platform puts all users’ information in jeopardy. Despite the intel on criminal activity in EncroChat, there are still legitimate users whose private messages are now compromised. In addition, where should law enforcement draw the line? How many other encryption platforms will they compromise before users have nowhere to turn? And at what point will law enforcement make an assumption of guilt just because someone is using encrypted chat?

Time and again law enforcement agencies have demonstrated that even if they can’t keep up with every new security development, at some point they catch up and find a way around it. And when they do, the harvest is huge. In this case, police departments will have years of investigating ahead of them if they plan to follow up on the millions of messages they intercepted. They may also find that because of their means of access, many data points may be inadmissible in court.

Thankfully, breaking encryption is not easy, especially when the encryption routine itself is without flaw. Such flaws are a rare find in algorithms with track records like PGP and OTR, so breaking the encryption depends on finding a flaw in the implementation, or on intercepting messages before encryption on the sender’s end or after decryption on the receiver’s end.
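A classic example of an implementation flaw that breaks otherwise sound encryption is keystream reuse in a stream cipher: encrypt two messages with the same keystream, and an eavesdropper can XOR the ciphertexts to recover the XOR of the plaintexts, without ever attacking the cipher itself. A minimal Python sketch (random bytes stand in for any stream cipher’s keystream; the messages are invented for illustration):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

msg1 = b"meet at the old warehouse"
msg2 = b"ship leaves friday at ten"

# Flawed use: the same keystream encrypts both messages.
keystream = secrets.token_bytes(len(msg1))
ct1 = xor_bytes(msg1, keystream)
ct2 = xor_bytes(msg2, keystream)

# An eavesdropper who never learns the keystream still recovers the XOR
# of the two plaintexts, because the keystream cancels out.
leaked = xor_bytes(ct1, ct2)
print(leaked == xor_bytes(msg1, msg2))  # True
```

From that XOR of plaintexts, known phrases and natural-language statistics let an analyst peel the two messages apart, which is why correct implementations never reuse a keystream or nonce.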

Our hope is that law enforcement exhaust all other avenues of reconnaissance and investigation before moving to put the privacy of an entire platform of users in jeopardy. For now, legitimate users of end-to-end encryption programs needn’t worry about their company secrets or other confidential whisperings getting out. But for the potentially thousands of criminal EncroChat users that haven’t been arrested yet—time to worry.

The post EncroChat system eavesdropped on by law enforcement appeared first on Malwarebytes Labs.

Categories: Techie Feeds
