Techie Feeds

New Flash Player zero-day used against Russian facility

Malwarebytes - Wed, 12/05/2018 - 22:44

For the past couple of years, Office documents have largely replaced exploit kits as the primary malware delivery vector, giving threat actors the choice between social engineering lures and exploits or a combination of both.

While today’s malicious spam (malspam) relies heavily on macros and popular vulnerabilities (e.g., CVE-2017-11882), attackers can also resort to zero-days when trying to compromise a target of interest.

In separate blog posts, Gigamon and 360 Core Security reveal how a new zero-day (CVE-2018-15982) in Flash Player (affecting the then-current version and earlier) was recently used in targeted attacks. Even though this was a brand new vulnerability, Malwarebytes users were already protected against it thanks to our Anti-Exploit technology.

The Flash object is embedded into an Office document disguised as a questionnaire from a Moscow-based clinic.

A dot reveals an embedded (and hidden) ActiveX object

Since Flash usage in web browsers has been declining over the past few years, the preferred scenario is one where a Flash ActiveX control is embedded in an Office file. This is something we saw earlier this year with CVE-2018-4878 against South Korea.

360 Core Security identified the zero-day as a Use After Free vulnerability in a Flash package called com.adobe.tvsdk.mediacore.metadata.

ActionScript view of the malicious SWF exploit. Thanks David Ledbetter for sharing the dumped file.

Victims open the booby-trapped document from a WinRAR archive that also contains a bogus jpeg file (shellcode) that will be used as part of the exploitation process that eventually loads a backdoor.

Exploitation flow showing the processes involved in the attack

As Qihoo 360 security researchers noted, the timing of this zero-day attack is close to a recent real-world incident between Russia and Ukraine. Cyberattacks between the two countries have been going on for years and have affected major infrastructure, such as the power grid.

Malwarebytes users were already protected against this zero-day without the need to update any signatures. We detect the malware payload as Trojan.CrisisHT.APT.

Zero-day attack flow stopped by Malwarebytes

Adobe has patched this vulnerability (security bulletin APSB18-42) and it is highly recommended to apply this patch if you are still using Flash Player. Following the typical exploit-patch cycle, zero-days often become mainstream once other attackers get their hands on the code. For this reason, we can expect to see this exploit integrated into document exploit kits as well as web exploit kits in the near future.

The post New Flash Player zero-day used against Russian facility appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Breaches, breaches everywhere, it must be the season

Malwarebytes - Wed, 12/05/2018 - 19:57

After last week’s shocker from Marriott, this week started off with disclosures about breaches at Quora, Dunkin’ Donuts, and 1-800-Flowers.


Quora

Quora is an online community that focuses on asking and answering questions. It was founded in 2009 by two former Facebook employees.

The stolen data may affect up to 100 million users of the platform and includes usernames, email addresses, and encrypted passwords. In some cases, imported data from other social networks and private messages on the platform may have been taken as well.

To counter future abuse of the login credentials, we advise Quora users to change their password and make sure the combination of credentials they used on Quora isn’t used elsewhere. Even though Quora encrypted and salted the passwords, it is not prudent to assume nobody will be able to crack them. For those in the habit of re-using passwords across different sites, please read: Why you don’t need 27 different passwords.
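Quora hasn’t said which algorithm it used, only that stored passwords were hashed with a salt. As a rough sketch of why salting matters (the function names and iteration count here are illustrative, not Quora’s), salted hashing with Python’s standard library looks like this:

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    """Salted, deliberately slow hash (PBKDF2-HMAC-SHA256).

    A unique random salt per user means two identical passwords produce
    different digests, defeating precomputed "rainbow table" lookups.
    """
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Note that salting defeats precomputed lookup tables, but it does not stop an attacker from grinding through weak passwords one account at a time, which is exactly why changing your password is still the right move.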

For those who no longer want to be registered at Quora, we also advise you to check under Settings and Disconnect any and all Connected Accounts.

Quora’s official statement can be checked for further details and updates.

Dunkin’ Donuts

A threat actor successfully managed to gain access to Dunkin’ Donuts Perks accounts. The Perks account is a run-of-the-mill loyalty rewards program. Dunkin’ Donuts claims there was no breach of its own systems, but that re-used passwords were to blame:

we’ve been informed that third parties obtained usernames and passwords through other companies’ security breaches and used this information to log into some Dunkin’ DD Perks accounts.

As a countermeasure, they forced password resets for all the customers the company believes were affected. If you are one of these customers, the threat actors could have learned your first and last name, email address, 16-digit DD Perks account number, and DD Perks QR code.

I repeat myself: For those that are in the habit of re-using passwords across different sites, please read: Why you don’t need 27 different passwords.


1-800-Flowers

The Canadian online outpost of the floral and gourmet foods gift retailer reported an incident in which a threat actor may have gained access to customer data from 75,000 Canadian orders, including names and credit card information, over a four-year period. Even though the breach did not impact any customers on its U.S. website, the company has filed a notice with the attorney general’s office in California.

The stolen payment information seems to include credit card numbers and all the related information: names, expiration dates, and security codes. That’s really all any seasoned criminal needs to plunder your account.

If you’re worried you may be a victim of this breach, here’s what you can do to prevent further damage:

  • Review your banking and credit card accounts for suspicious activity.
  • Consider a credit freeze if you’re concerned your financial information was compromised.
  • Watch out for breach-related scams; cybercriminals know these are massive, newsworthy breaches, so they will pounce at the chance to ensnare users through social engineering.

Or download our Data Breach Checklist here.

Is it the season?

Some of the recent breaches happened quite some time ago or have been ongoing for years, so why are they all telling us now?

Possible reasons:

  • New legislation requires companies to report breaches
  • Breaches happen all the time, but these happen to be some very serious or big ones, so the media talks about them
  • When a big breach is aired, you will always see a few smaller ones trying to hide in its shadow

If you’re a business looking for tips to prevent getting hit by a breach:
  • Invest in an endpoint protection product and data loss prevention program to make sure alerts on similar attacks get to your security staff as quickly as possible.
  • Take a hard look at your asset management program:
    • Do you have 100 percent accounting of all of your external facing assets?
    • Do you have uniform user profiles across your business for all use cases?
  • When it comes to lateral movement after an initial breach, you can’t catch what you can’t see. The first step to a better security posture is to know what you have to work with.

In a world where it seems breaches cannot be contained, consumers and businesses once again have to contend with the aftermath. Our advice to organizations: Don’t become a cautionary tale. Save your customers hassle and save your business’ reputation by taking proactive steps to secure your company today.

The post Breaches, breaches everywhere, it must be the season appeared first on Malwarebytes Labs.

New ‘Under the Radar’ report examines modern threats and future technologies

Malwarebytes - Wed, 12/05/2018 - 13:01

As if you haven’t heard it enough from us, the threat landscape is changing. It’s always changing, and usually not for the better.

The new malware we see being developed and deployed in the wild has features and techniques that allow it to go beyond what it was originally able to do, either for the purpose of additional infection or evasion of detection.

To that end, we decided to take a look at a few of these threats and pick apart what about them makes them difficult to detect, remaining just out of sight and able to silently spread across an organization.

Download: Under the Radar: The Future of Undetected Malware

We then examine what technologies are unprepared for these threats, which modern tech is actually effective against these new threats, and finally, where the evolution of these threats might eventually lead.

The threats we discuss:

  • Emotet
  • TrickBot
  • Sorebrect
  • SamSam
  • PowerShell, as an attack vector

While discussing these threats, we also look at where they are most commonly found in the US, APAC, and EMEA regions.

Emotet 2018 detections in the United States

In doing so, we discovered interesting trends that raise new questions, some of which are clear and others that need more digging. Regardless, it is evident that these threats are not old hat, but rather making bigger and bigger splashes as the year goes on, in interesting and sometimes unexpected ways.

Sorebrect ransomware detections in APAC region

Though the spread and capabilities of future threats are unknown, we have to prepare people to protect their data and experiences online. Unfortunately, many older security solutions will not be able to combat future threats, let alone what is out there now.

Not all is bad news in security, though, as we have a lot going for us in terms of technological developments and innovations in modern features. For example:

  • Behavioral detection
  • Blocking at delivery
  • Self-defense modes

These features are effective at combating today’s threats and will soon be needed to build the basis for future developments, such as:

  • Artificial Intelligence being used to develop, distribute, or control malware
  • The continued development of fileless and “invisible” malware
  • Businesses becoming worm food for future malware

Download: Under the Radar: The Future of Undetected Malware

The post New ‘Under the Radar’ report examines modern threats and future technologies appeared first on Malwarebytes Labs.

Humble Bundle alerts customers to subscription reveal bug

Malwarebytes - Tue, 12/04/2018 - 17:20

You’ll want to check your mailbox if you have a Humble Bundle account, as they’re notifying some customers of a bug used to gather subscriber information.

The mail reads as follows:


Last week, we discovered someone using a bug in our code to access limited non-personal information about Humble Bundle accounts. The bug did not expose email addresses, but the person exploited it by testing a list of email addresses to see if they matched a Humble Bundle account. Your email address was one of the matches.

Now, this is the part of a breach/bug mail where you tend to say “Oh no, not again” and take a deep breath. Then you see how much of your personal information winged its way to the attacker.

Oh no, not again

For once, your name, address, and even your login details are apparently in safe hands. Either this bug didn’t expose as much as the attacker was hoping for, or they were just in it for the niche content collection.

The email continues:

Sensitive information such as your name, billing address, password, and payment information was NOT exposed. The only information they could have accessed is your Humble Monthly subscription status. More specifically, they might know if your subscription is active, inactive, or paused; when your plan expires; and if you’ve received any referral bonuses.

I should explain at this point. You can buy standalone PC games from the Humble store, or whatever book, game, or other collection happens to be on offer this week. Alternatively, you can sign up for the monthly subscription. With this, you pay up front, and every month you’re given a random selection of video game titles. They may be good, bad, or indifferent. You might already own a few, in which case you may be able to gift them to others. If you have no interest in the upfront preview titles, you can temporarily pause your subscription for a month.

This is the data that the bug exploiter has obtained, which is definitely an odd and specific thing to try and grab.

Security advice from Humble Bundle

Let’s go back to the email at this point:

Even though the information revealed is very limited, we take customer trust very seriously and wanted to promptly disclose this to you. We want to make sure you are able to protect yourself should someone use the information gathered to pose as Humble Bundle.

As a reminder, here are some tips to keep your account private and safe:

  • Don’t share your password, personal details, or payment information with anyone. We will NEVER ask for information like that.
  • Be careful of emails with links to unfamiliar sites. If you receive a suspicious email related to Humble Bundle, please contact us via our support website so that we can investigate further and warn others.
  • Enable two-factor authentication (2FA) so that even if someone gets your password, they won’t be able to access your account. You can enable 2FA by following these instructions.

We sincerely apologize for this mistake. We will work even harder to ensure your privacy and safety in the future.
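Humble Bundle doesn’t say which flavour of 2FA it offers, but the most common scheme is TOTP (RFC 6238): the site and your authenticator app share a secret, and both derive a short-lived code from the current time. A minimal sketch (the secret used in the test is the RFC’s published test value, not a real one):

```python
import base64, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a secret that never leaves your device, a stolen password alone isn’t enough to log in.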

Good advice, but what’s the threat?

One could guess that the big risk here, then, is the potential for spear phishing. Attackers could exploit this data by mailing subscribers that their subscription is about to time out, or by claiming problems with stored card details. Throw in a splash of colourful detail regarding your subscription “currently being paused,” and it’s all going to look convincing.

Phishing is a major danger online, and we should do everything we can to thwart it. While the information exposed here isn’t as bad as it tends to be, it can still cause major headaches. Be on the lookout for dubious Humble mails, especially if they mention subscriptions. It’ll help to keep your bundle of joy from becoming a bundle of misery.

The post Humble Bundle alerts customers to subscription reveal bug appeared first on Malwarebytes Labs.

A week in security (November 26 – December 2)

Malwarebytes - Mon, 12/03/2018 - 17:06

Last week on Malwarebytes Labs, we took a look at our cybersecurity predictions for 2019, explained why Malwarebytes participated in AV testing, showed how we took part in a joint takedown of massive ad fraud botnets, warned that ESTA registration websites still lurk in paid ads on Google, discussed what 25 years of webcams have brought us, and reported on the Marriott breach that impacted 500 million customers.

Other cybersecurity news:
  • LinkedIn violated data protection by using 18 million email addresses of non-members to buy targeted ads on Facebook. (Source: TechCrunch)
  • Researchers created fake “master” fingerprints to unlock smartphones. (Source: Motherboard)
  • Uber slapped with £385K ICO fine for major breach. (Source: InfoSecurity Magazine)
  • Rogue developer infects widely-used NodeJS module to steal Bitcoins. (Source: The Hacker News)
  • When the FBI (and not the fraudsters) make a fake FedEx website. (Source: Graham Cluley)
  • Microsoft warns about two apps that installed root certificates then leaked the private keys. (Source: ZDNet)
  • Social media scraping app Predictim banned by Facebook and Twitter. (Source: NakedSecurity)
  • Tech support scam: Call centers shut down by Indian police in collaboration with Microsoft. (Source: TechSpot)
  • Germany detects new cyberattack targeting politicians, military, and embassies. (Source: DW)
  • It’s time to change your password again as Dell reveals attempted hack. (Source: Digital Trends)

Stay safe, everyone!

The post A week in security (November 26 – December 2) appeared first on Malwarebytes Labs.

Marriott breach impacts 500 million customers: here’s what to do about it

Malwarebytes - Fri, 11/30/2018 - 19:17

Today Marriott disclosed a large-scale data breach impacting up to 500 million customers who have stayed at a Starwood-branded hotel within the last four years. While details of the breach are still sparse, Marriott stated that there was unauthorized access to a database tied to customer reservations stretching from 2014 to September 10, 2018.

For a majority of impacted customers (approximately 327 million), the breached data includes some combination of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences. For some of those guests, credit card numbers and expiration dates were also exposed; however, these were encrypted using the Advanced Encryption Standard (AES-128).

You can read more on impact to customers in Marriott’s statement here.

The root cause of the breach is currently unknown, but Marriott indicated that the intruders encrypted the information before exfiltrating it. Brian Krebs reported that Starwood disclosed a breach of its own in 2015, shortly after its acquisition by Marriott. At the time, Starwood said that its breach timeline extended back one year, to roughly November 2014. Incomplete remediation of breaches is extremely common, and when compounded by the asset management challenges introduced by mergers and acquisitions, seeing lateral movement and exfiltration after an initial hack is not unreasonable.

Starwood properties impacted are as follows:

  • Westin
  • Sheraton
  • The Luxury Collection
  • Four Points by Sheraton
  • W Hotels
  • St. Regis
  • Le Méridien
  • Aloft
  • Element
  • Tribute Portfolio
  • Design Hotels

What should you do about it? If you’re a customer:
  • Change your password for your Starwood Preferred Guest Rewards Program immediately. Random passwords generated by a password manager of your choice should be most helpful.
  • Review your banking and credit card accounts for suspicious activity.
  • Consider a credit freeze if you’re concerned your financial information was compromised.
  • Watch out for breach-related scams; cybercriminals know this is a massive, newsworthy breach so they will pounce at the chance to ensnare users through social engineering. Review emails supposedly from Marriott with an eagle eye.

If you’re a business looking for tips to prevent getting hit by a breach:
  • Invest in an endpoint protection product and data loss prevention program to make sure alerts on similar attacks get to your security staff as quickly as possible.
  • Take a hard look at your asset management program:
    • Do you have 100 percent accounting of all of your external facing assets?
    • Do you have uniform user profiles across your business for all use cases?
  • When it comes to lateral movement after an initial breach, you can’t catch what you can’t see. The first step to a better security posture is to know what you have to work with.

In a world where it seems breaches cannot be contained, consumers and businesses once again have to contend with the aftermath. Our advice to organizations: Don’t become a cautionary tale. Save your customers hassle and save your business’ reputation by taking proactive steps to secure your company today.

The post Marriott breach impacts 500 million customers: here’s what to do about it appeared first on Malwarebytes Labs.

The 25th anniversary of the webcam: What did it bring us?

Malwarebytes - Fri, 11/30/2018 - 16:00

How did the webcam progress from a simple convenience to a worldwide security concern in 25 years?

November 2018 can be marked as the 25th anniversary of the webcam. This is a bit of an arbitrary choice, but if we consider the camera installed at the University of Cambridge to keep an eye on the coffee level in the shared coffeemaker to be the first one, then it’s been 25 years already. Those 25 years are measured from the moment the images became viewable over the Internet. (The images had been visible on the university’s intranet for a few years before that.)

Definition of a webcam

According to Wikipedia:

A webcam is a video camera that feeds or streams its image in real time to or through a computer to a computer network.

We deviate slightly from this definition by only considering cameras that are visible on the Internet.

The first official webcam

The first camera was actually installed in the late 1980s so that employees could avoid walking all the way to the coffeemaker only to find the pot empty, but it was made visible to the Internet in November 1993. Before that, it could only be seen on the local network. For historical reasons, it is worth mentioning that this camera was in the “Trojan Room” of the Computer Science Department. The scientists used a digital camera with a video capture board and MSRPC2, a remote procedure call mechanism, to upload one frame per second.
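The original software isn’t reproduced here, so the following is only a loose sketch of that one-frame-per-second loop; grab_frame and publish are hypothetical stand-ins for the capture board and the MSRPC2 upload:

```python
import time

def serve_frames(grab_frame, publish, period=1.0, max_frames=None):
    """Grab a frame roughly once per `period` seconds and publish it.

    `max_frames` exists only so the sketch can stop; the real coffee-pot
    camera simply ran forever.
    """
    sent = 0
    while max_frames is None or sent < max_frames:
        started = time.time()
        publish(grab_frame())  # push the latest still image
        sent += 1
        # sleep off whatever is left of the per-frame time budget
        time.sleep(max(0.0, period - (time.time() - started)))
```

At one frame per second, this is closer to a slideshow than to video, which was exactly enough to see whether the pot was empty.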

The first commercial webcam

The first commercially produced webcam was the QuickCam by Connectix, which was marketed in 1994. It could only be used with an Apple Macintosh and recorded a whopping 15 frames per second. Nowadays, it’s hard to find a laptop that does not have a webcam installed. It has even reached the point where you can buy webcam covers to hide away from prying eyes.

Or use a Band-Aid

Popular usage

The webcam quickly became popular once Internet speeds rose to the point where chatting face-to-face over long distances became possible. But there are many other legitimate and popular ways to use a webcam:

  • Child or pet monitoring: Keep an eye on your loved ones when you are elsewhere.
  • Video conferences: Join a meeting that you can’t physically attend.
  • Earth cam: Watch the scenery around the world from behind your laptop.
  • Security camera or baby monitor: Be alerted when something happens at home or in the baby’s room.
  • Porn: Sell your explicit images or video feed to earn some extra cash. (Not that we recommend it…)
  • Surveillance: Keep an eye on suspects. (This can also be combined with facial recognition.)
  • Vlogging: Share information about your life or interests online via video.

Possible future uses

Some webcam developments are underway, but not quite ready to hit the stores yet:

  • Face login: similar to using your fingerprint to log on to a device. Show your face to the webcam, and if it recognizes you, it will let you in. As with fingerprint readers, I’d like the device to ask for my secret password now and then, just in case a thief looks a bit like me. Windows already has Hello Face Authentication, but it requires near-infrared imaging.
  • VR-like webcams: by adding an extra dimension to your webcam, 3D could make your online chats even more realistic. 3D webcams are already available, but the technology to use them in person-to-person chat isn’t there yet.

Internet of Things concerns

The Internet of Things (IoT) has been a subject of our cybersecurity concerns before, and we don’t expect those concerns to go away anytime soon. Webcams are among the top IoT problems because of their sheer numbers and their often weak security setup, such as easy-to-guess and hard-to-change default passwords.
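To see how weak those defaults are, consider that the leaked Mirai source code simply tried a short dictionary of factory credentials. A hedged sketch of auditing your own device the same way (check_login is a hypothetical stand-in for a real telnet or HTTP login probe; the pairs shown are a few that appeared in Mirai’s list, which held roughly 60 entries):

```python
# A few of the factory-default pairs the leaked Mirai source tried:
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("admin", "password"),
]

def audit_device(check_login, host):
    """Return every default credential pair that still works on `host`.

    `check_login(host, user, password) -> bool` is a stand-in for whatever
    probe your device actually answers to (telnet, HTTP, etc.).
    """
    return [(u, p) for u, p in DEFAULT_CREDS if check_login(host, u, p)]
```

A non-empty result means the device is still wearing its factory credentials, and is exactly the kind of target botnet scanners sweep the Internet for.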

If you want to be freaked out a little, here are some of the websites that let you take a peek through the eye of unprotected webcams:


Botnets

A botnet is a collection of centrally controlled devices and systems that accept commands from a remote administrator. IoT devices, including webcams, are the stuff that today’s most powerful botnets are made of. The Mirai botnet, for example, has been responsible for some of the most effective DDoS attacks. Working for a central command has also made it possible for IoT botnets to be used in cryptomining.

Facial recognition

Facial recognition works by measuring distances between features on a face and comparing the resulting “faceprint” to a database. To get a dependable recognition rate, the tech must measure around 80 nodal points on the human face to create a faceprint and find a match.
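As a toy illustration only (real systems compare far richer faceprints than the three made-up numbers below, and the threshold is arbitrary), the matching step boils down to a nearest-neighbour search over those distance vectors:

```python
import math

def match_faceprint(probe, database, threshold=0.6):
    """Return the best-matching identity for a faceprint, or None.

    Each "faceprint" here is just a tuple of normalised distances between
    facial landmarks (hypothetical values; production systems measure
    around 80 nodal points). A match only counts if the nearest entry is
    within `threshold`.
    """
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        dist = math.dist(probe, stored)  # Euclidean distance between vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

The threshold is what separates "this is probably the same person" from "no match on file," and tuning it is where false positives and false negatives get traded off.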

Big Brother

The combination of publicly available security and surveillance cameras has brought Orwell’s vision of blanket surveillance to life. China is already using its massive network of closed-circuit television (CCTV) cameras and facial recognition technology to track its citizens. And if naming and shaming jaywalkers is the only activity they admit to, you can rest assured that it is far from the only thing that they are keeping track of.


Camfecting

Camfecting is a term used for hacking into a webcam’s data stream. Threat actors would be able to view or store the live feed from a webcam for their own purposes. An important thing to keep in mind is that if they have hacked your webcam, they are just as easily capable of turning off any warning light that would show you whether it’s active or not. The fear of camfecting is one reason for webcam covers (or post-it notes, Band-Aids, and other sticky stuff to cover the webcam’s eye). Stolen video images can lead to sextortion and other extortion practices.

Historic overview

Looking back, in 25 years we went from watching the supply level in a coffeepot online to state surveillance capabilities where we can be found and identified in a matter of minutes. And we can’t be sure who is watching us, or what the devices we use to look at others are doing in the background. Are they sending the same images to the manufacturer? Or to some hacker? Should we be worried about those sextortion emails? Probably not, but that still leaves us with lots of other things to worry about.

Different types

Webcams come in many different types, shapes, and sizes. While they perform many useful and convenient tasks, we need to be aware of the dangers and concerns that come with using them. The ones we should be worried about most are those connected directly to the Internet. The ones connected to, or even built into, our computers and laptops are covered by active security solutions. IoT devices, however, especially those fitted with no credentials or default credentials, are a major concern in the fields of privacy and cybersecurity.

Use webcams to connect with friends and family, for meetings, and to keep an eye on your inventory, but don’t allow them to be the weak link in your home or business network.

The post The 25th anniversary of the webcam: What did it bring us? appeared first on Malwarebytes Labs.

ESTA registration websites still lurk in paid ads on Google

Malwarebytes - Wed, 11/28/2018 - 16:00

Google has taken direct action against adverts promoting ESTA registration services, often offered by third parties at highly inflated prices. Ads displayed on the Google network shouldn’t display fees higher than what a public source or government charges for products or services. This tightening of the ad leash has taken a remarkable eight years to complete—and we argue it’s not done yet.

What ESTA services are these sites advertising?

The US Visa Waiver Program allows citizens of 38 countries to travel to the United States visa-free for up to 90 days. This requires an application for eligibility via ESTA (the Electronic System for Travel Authorization). The process is simple: filling in the application online takes only around 10 minutes. However, many sites have sprung up offering to fill it in on your behalf.

That sounds great!

Sure, everyone hates paperwork, but many people are needlessly paying for a service that does essentially nothing. The idea is, you fill in the ESTA questions and submit them to Homeland Security. You then get an authorisation or a rejection. These sites want you to pay them for filling in essentially the same form you’d fill in on the USGOV website so they can, in turn, “submit” it on the USGOV submission page. They’ll also often charge a lot more than the standard US$14 submission fee.

That’s…not so great

The flaw here is that if you can submit this information to the third party ESTA registration website, there’s no reason why you couldn’t have just done it yourself on the official USGOV website and saved the additional fee. Once you consider the inflated fees and the fact you might be submitting sensitive personal information and/or payment details to random websites, it quickly becomes an issue.

Why pay $80 instead of $14? It doesn’t really make sense, and this is partly why Google is now cracking down on these sorts of advertisements.
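The arithmetic behind that comparison is simple; a quick sketch using the fees quoted above (the helper name is ours):

```python
OFFICIAL_FEE_USD = 14  # the standard ESTA submission fee cited above

def markup_percent(total_charged_usd):
    """How much a third-party site charges over the official fee, in percent."""
    return (total_charged_usd - OFFICIAL_FEE_USD) / OFFICIAL_FEE_USD * 100

# An $80 third-party charge is an extra $66 over the $14 fee, a markup of
# more than 470 percent, for a form you still fill in yourself.
```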

What does Google say about this?

From their Advertising Policies page, Google prohibits the sale of free items. The following is not allowed:

Charging for products or services where the primary offering is available from a government or public source for free or at a lower price

Examples (non-exhaustive list): Services for passport or driving license applications; health insurance applications; documents from official registries, such as birth certificates, marriage certificates, or company registrations; exam results; tax calculators.

Note: You can bundle something free with another product or service that you provide. For example, a TV provider can bundle publicly available content with paid content, or a travel agency can bundle a visa application with a holiday package. But the free product or service can’t be advertised as the primary offering.

Google search results

We thought we’d see what, exactly, is still out there in Google search land. For this, we decided to try common ESTA-related search terms. I went with “ESTA” (naturally), “ESTA questions,” and “ESTA answers.” Here’s what I found:

Search term: ESTA

Worldwide popularity of the search term “ESTA” over time

A search for the word “ESTA” brings back no adverts in the search results whatsoever. That’s good!

Search term: ESTA questions

Worldwide popularity of the search term “ESTA questions” over time

A search for “ESTA questions” returned one result, which is still quite good. However, Google said common search terms would no longer fetch ads. Our search above seems pretty basic and still snagged a hit.


The website featured in the advert doesn’t mention cost on the front page, but does in its Terms of Use. Their basic fee is US$14 for the USGOV application, plus US$85 for their listed services. This is arguably the kind of site Google is trying to remove.

Search Term: ESTA answers

Worldwide popularity of the search term “ESTA answers” over time

“ESTA answers” returned four adverts.


First result: The same site listed for “ESTA questions” also made top spot under this search term.

Second result: Costs a grand total of US$89, which includes the US$14 Government fee. However, they are upfront about the fact that the service charge won’t apply should you apply directly on the Homeland Security portal. Many sites don’t mention this or hide it away in some terms and conditions.

Third result: Uh, an advert for dust extraction systems. At least there’s definitely no overpriced ESTA fee this time around.

Fourth result: The site lists their fees as US$79, which includes the US$14 Government charge.

We’ve reported all sites to Google whose adverts potentially conflict with Google’s ad policies.

How does Yahoo! stack up?

We looked at Yahoo! to see what we could find in terms of ESTA ads. As far as their Policies for Ads go, the closest thing I could find was “Low quality offers and landing page techniques” from the Oath Ad Policies page:

  • Services that are offered for free by the government and offered by third parties without adding any additional value to the user, such as green card lotteries
  • Display and Native ads promoting body branding, piercings or tattoos

This doesn’t really apply here though, as ESTA carries the $14 application fee. On the other hand, there could well be something else I’ve missed in the numerous terms and conditions for advertisers. With that in mind, let’s see what we found.

Searching for “ESTA” brought back no fewer than four ads under the search bar, and seven down the side, with actual search results quite a bit further down the page.


Click to enlarge

In terms of the sites themselves, we had a mixed response with regards to upfront pricing information.

First result: The same site in both “ESTA questions” and “ESTA answers” Google searches returns again, with their now familiar combined fee of $14 and $85.

Second result: No information visible for fees that we could find.

Third result: This site offers a fee of 59 Euros.

Fourth result: We couldn’t find details of pricing, and the FAQ drop-downs didn’t work, so if the information was in there, we couldn’t see it.

Here’s the results for the adverts down the right-hand side:

First result: US$89 for services offered.

Second result: No price or FAQs visible, just a form submission process. There was a webchat, however, and we were able to obtain a price that way instead: 89 Euro/US$100 for a US ESTA submission.


Click to enlarge

Third result: No price visible that we could find.

Fourth result: US$79 plus US$14 Government fee

Fifth result: Nothing visible that we could find.

Sixth result: 84 Euros (this includes a “2-year concierge service”)

Seventh result: £37.82, US$14 Government fee, plus £1 “overseas transition/calling card fee”

Looking for travel assistance online?

There are many pitfalls lurking online the moment you go looking for visas, ESTAs, or anything else. It seems baffling to me that people would pay someone else to submit a form to a third party when they have to fill out the form themselves first. Are the extra services promoted by these sites really worth it? Some claim to retain your data “for up to two years” in case you need to reapply. The ESTA is valid for two years, by which point they’d no longer be retaining your information, so I don’t see how this helps.

“Aha”, they’ll say. “We don’t retain the data for two years in case you need to apply for the ESTA again. We retain it in case you’re denied authorisation so you can have another go!”

Well, great, except not really. If you’re denied an ESTA at application time, that’s the end of that:

If a traveler is denied ESTA authorization and his or her circumstances have not changed, a new application will also be denied. A traveler who is not eligible for ESTA is not eligible for travel under the Visa Waiver Program and should apply for a nonimmigrant visa at a U.S. Embassy or Consulate. Reapplying with false information in order to qualify for a travel authorization will make the traveler permanently ineligible for travel to the United States under the Visa Waiver Program

Time for a little DIY

On a similar note, these sites do offer to check that all of your information is correct before submitting. The information you need to supply for an ESTA is basic stuff, though: name, address, passport number, and answers to a series of yes/no questions. It’s not complicated, and you could easily have a friend or relative look it over before submitting it online yourself. “Concierge” services sound good, but there’s so much information online, you shouldn’t have trouble finding a hotel or a taxi service or anything else for that matter.

If you insist on making use of an ESTA application website, keep in mind the above commentary. You should also be wary of sites that aren’t upfront with their pricing. Pay particular attention as to whether they retain a copy of your data and for how long. If they promote the benefit of retaining it for less than two years in case you want to “reapply,” that’s not a great sign. If they refer to the ESTA as a “visa,” also not good. (It isn’t a visa; it’s access to participation in the Visa Waiver Program.)

Keep your passport and your online wits close to hand, and you won’t have any problems. Safe travels!

The post ESTA registration websites still lurk in paid ads on Google appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Malwarebytes helps take down massive ad fraud botnets

Malwarebytes - Wed, 11/28/2018 - 14:00

On November 27, the US Department of Justice announced the indictment of eight individuals involved in a major ad fraud case that cost digital advertisers millions of dollars. The operation, dubbed 3ve, was the combination of the Boaxxe and Kovter botnets, which the FBI—in collaboration with researchers in the private sector, including one of our own at Malwarebytes—was able to dismantle.

The US CERT advisory indicates that 3ve was controlling over 1.7 million unique IP addresses between both Boaxxe and Kovter at any given time. Threat actors rely on different tactics to generate fake traffic and clicks, but one of the most common is to infect legitimate computers and have them silently mimic a typical user’s behavior. By doing so, fraudsters can generate millions of dollars in revenue while eroding trust in the online advertising business.

This criminal enterprise was quite sophisticated, employing many evasion techniques that made it difficult not only to detect the presence of ad fraud, but also to clean up affected systems. Kovter in particular is a unique piece of malware that goes to great lengths to avoid detection and even to trick analysts. Its fileless persistence mechanism has also made it more challenging to disable.

Malwarebytes, along with several other companies, including Google, Proofpoint, and ad fraud detection company White Ops, was involved in the global investigation into these ad fraud botnets. We worked with our colleagues at White Ops, sharing our intelligence and samples of the Kovter malware. We were happy to be able to leverage our telemetry, which proved to be valuable for others to act upon.

Even though cybercriminal enterprises can get pretty sophisticated, this successful operation proves that concerted efforts between both the public and private sectors can defeat them and bring perpetrators to justice.

The full report on 3ve, co-authored by Google and White Ops, with technical contributions from Proofpoint and others, can be downloaded here.

The post Malwarebytes helps take down massive ad fraud botnets appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Why Malwarebytes decided to participate in AV testing

Malwarebytes - Tue, 11/27/2018 - 22:44

Starting this month, Malwarebytes began participating in a third-party antivirus software comparison test for Windows. This is uncharted territory for us, as we have refrained from participating in these types of tests since our inception. Although recent testing results show Malwarebytes protecting against more than 97 percent of web vector threats and detecting and removing 99.5 percent of malware during a scan on any machine, we still maintain reservations about the entire testing process.

Why participate now?

In the past, we’ve avoided AV comparison tests because we felt their methods did not allow us to demonstrate how our product works in a real environment. By testing only a small portion of our product’s technologies, AV comparison tests are often unable to replicate Malwarebytes’ overall effectiveness. However, we understand the importance of independent reviews for those considering a Malwarebytes purchase, so we decided to participate.

Malwarebytes is not a traditional antivirus, and detecting files based on signatures—which is what the testing companies review—is only one of the methods we use to protect our customers from threats. We probably never will be the best performer in this category; it simply isn’t our focus. We mostly rely on other methods, such as hardening, application behavior, and vector blocking defenses that disrupt malware earlier in the attack chain.

What did the test miss?

Some of our best technologies block malware before it has the chance to execute. Our application behavior and web protection modules, for example, stop threats earlier in the attack—at the point of delivery instead of the point of execution. However, the URLs tested only represent the final stage of an attack (i.e. the URL pointing to the final payload EXE).

In addition, testers often do not replicate the original infection vector used by malware campaigns, such as malspam, exploits, or redirects. Instead, they download the malware directly, bypassing typical delivery methods. By doing this, they’re controlling the environment, but also missing out on the trigger for many of our detections.

What exactly is checked in these monthly tests?
  • Detections (specifications)
    • Detection of URLs pointing directly to malware EXEs (i.e. “web and email threats” test)
    • On-demand scan of a directory full of malware EXEs (i.e. “widespread and prevalent malware” test)
  • Performance impact, such as browsing slowdown, application load slowdown, slowdown of file copy operations, etc.
  • Usability test, with focus on false positives

More information about the test procedures can be found on the testing organization’s website.

Unsolicited tests

A number of times in the past, Malwarebytes has been included in tests that we were not aware of or in which we didn’t choose to participate. Some even compared our free, limited scanner against fully functional AVs. No surprises there: while the other vendors may have scored higher in their detections, our free scanner still outperformed them in remediation and removal.

Change the tests

If the tests miss out on our best protection modules, you would expect us to try and change the testing methods altogether, right? We did look into this, and it’s not entirely off the table. We feel sure that using live malware or duplicating real-life attacks would show our excellence, but these conditions are hard to replicate for a controlled and equal testing environment.

What we would like to see is a test for zero-day effectiveness, not a test based on relatively old samples and infection vectors. But again, we understand that this is hard to achieve for a testing organization that needs to keep some control over the environment in order to create a level playing field.

When and where can we expect to see your test results?

As of November 27, 2018, the published results include our flagship consumer product, Malwarebytes for Windows versions 3.5 and 3.6. The testing organization publishes its results publicly every two months; the November 2018 results are the summary of tests performed during September and October. Our participation is only in the “Windows Antivirus” test for home users.

We still do not believe in the “pay-to-play” model, and especially the “pay-to-see-what-you-missed” model that some organizations use. (AV companies, for an additional fee, can see the samples they did not catch in the test and develop fixes in the product for future tests/use.) Nonetheless, we want to give our customers some idea of what we are capable of, even when the playing field is skewed.

We would just like you to keep in mind that, when reviewing our scores, these tests only show part of the whole picture. Many of our best protection modules have been left out of the test entirely—which basically misses what Malwarebytes is truly capable of.

So what would you rather have: a product that does well on AV tests, or a product that detects, blocks, and cleans up threats in the real world?

The post Why Malwarebytes decided to participate in AV testing appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Malwarebytes’ 2019 security predictions

Malwarebytes - Tue, 11/27/2018 - 16:00

Every year, we at Malwarebytes Labs like to stare into our crystal ball and foretell the future of malware.

Okay, maybe we don’t have a crystal ball, but we do have years and years of experience in observing trends and sensing shifts in patterns. When it comes to security, though, we can only know so much. For example, we guarantee there’ll be some kind of development that we had zero indication would occur. We also can pretty much assure you that data breaches will keep happening—just as the sun rises and sets.

And while all hope is for a malware-free 2019, the reality will likely look a little more like this:

New, high-profile breaches will push the security industry to finally solve the username/password problem. The ineffective username/password conundrum has plagued consumers and businesses for years. There are many solutions out there—asymmetric cryptography, biometrics, blockchain, hardware solutions, etc.—but so far, the cybersecurity industry has not been able to settle on a standard to fix the problem. In 2019, we will see a more concerted effort to replace passwords altogether.

IoT botnets will come to a device near you. In the second half of 2018, we saw several thousand MikroTik routers hacked to serve up coin miners. This is only the beginning of what we will likely see in the new year, with more and more hardware devices being compromised to serve up everything from cryptominers to Trojans. Large-scale compromises of routers and IoT devices are going to take place, and they are a lot harder to patch than computers. Patching alone does not fix the problem if the device is already infected.

Digital skimming will increase in frequency and sophistication. Cybercriminals are going after websites that process payments and compromising the checkout page directly. Whether you are purchasing roller skates or concert tickets, when you enter your information on the checkout page, if the shopping cart software is faulty, information is sent in clear text, allowing attackers to intercept in real time. Security companies saw evidence of this with the British Airways and Ticketmaster hacks.

Microsoft Edge will be a prime target for new zero-day attacks and exploit kits. Transitioning out of IE, Microsoft Edge is gaining more market share. We expect to see more mainstream Edge exploits as we segue to this next generation browser. Firefox and Chrome have done a lot to shore up their own technology, making Edge the next big target.

EternalBlue or a copycat will become the de facto method for spreading malware in 2019. Because they allow malware to self-propagate, EternalBlue and other SMB exploits present a particular challenge for organizations, and cybercriminals will exploit them to distribute new malware.

Cryptomining on desktops, at least on the consumer side, will just about die. Again, as we saw in October (2018) with MikroTik routers being hacked to serve up miners, cybercriminals just aren’t getting value out of targeting individual consumers with cryptominers. Instead, attacks distributing cryptominers will focus on platforms that can generate more revenue (servers, IoT) and will fade from other platforms (browser-based mining).

Attacks designed to avoid detection, like soundloggers, will slip into the wild. Keyloggers that record sounds are sometimes called soundloggers, and they are able to listen to the cadence and volume of tapping to determine which keys are struck on a keyboard. Already in existence, this type of attack was developed by nation-state actors to target adversaries. Attacks using this and other new attack methodologies designed to avoid detection are likely to slip out into the wild against businesses and the general public.

Artificial Intelligence will be used in the creation of malicious executables. While the idea of having malicious AI running on a victim’s system is pure science fiction, at least for the next 10 years, malware that is modified by, created by, and communicating with an AI is a dangerous reality. An AI that communicates with compromised computers and monitors which malware is detected, and how, can quickly deploy countermeasures. AI controllers will enable malware built to modify its own code to avoid being detected on the system, regardless of the security tool deployed. Imagine a malware infection that acts almost like “The Borg” from Star Trek, adjusting and acclimating its attack and defense methods on the fly based on what it is up against.

Bring your own security grows as trust declines. More and more consumers are bringing their own security to the workplace as a first or second layer of defense to protect their personal information. Malwarebytes recently conducted global research and found that nearly 200,000 companies had a consumer version of Malwarebytes installed. Education was the industry most prone to adopting BYOS, followed by software/technology and business services. 

The post Malwarebytes’ 2019 security predictions appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 19 – 25)

Malwarebytes - Mon, 11/26/2018 - 18:21

Last week on Malwarebytes Labs, we took a look at a devastating business email compromise attack, web skimming antics, and the fresh perils of Deepfakes. We also checked out some Chrome bug issues, and took the deepest of deep dives into DNA testing.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (November 19 – 25) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Spoofed addresses and anonymous sending: new Gmail bugs make for easy pickings

Malwarebytes - Wed, 11/21/2018 - 17:53

Tim Cotten, a software developer from Washington, DC, was responding to a request for help from a female colleague last week, who believed that her Gmail account had been hacked, when he discovered something phishy. The evidence presented was several emails in her Sent folder, purportedly sent by her to herself.

Cotten was stunned when, upon initial diagnosis, he found that those sent emails didn’t come from her account but from another, which Gmail—being the organized email service that it is—only filed away in her Sent folder. Why would it do that if the email wasn’t from her? It seems that while Google’s filtering and organizing technology worked perfectly, something went wrong when Gmail tried to process the emails’ From fields.

This trick is a treat for phishers

Cotten noted in a blog post that the From header of the emails in his coworker’s Sent folder contained (1) the recipient’s email address and (2) additional text, usually a name, possibly for increased believability. The presence of the recipient’s address caused Gmail to move the email to the Sent folder while disregarding the email address of the actual sender.

Weird “From” header. Screenshot by Tim Cotten, emphasis (in purple) ours.

Why would a cybercriminal craft an email that never ends up in a victim’s inbox? This tactic is particularly useful for a phishing campaign that banks on the recipient’s confusion.

“Imagine, for instance, the scenario where a custom email could be crafted that mimics previous emails the sender has legitimately sent out containing various links. A person might, when wanting to remember what the links were, go back into their sent folder to find an example: disaster!” wrote Cotten.

Cotten provided a demo for Bleeping Computer wherein he showed a potentially malicious sender spoofing the From field by displaying a different name to the recipient. This may yield a high turnover of victims if used in a business email compromise (BEC)/CEO fraud campaign, they noted.

After raising an alert about this bug, Cotten unknowingly opened the floodgates for other security researchers to come forward with their discovered Gmail bugs. Eli Grey, for example, shared the discovery of a bug in 2017 that allowed for email spoofing, which has been fixed in the web version of Gmail but remains a flaw in the Android version. One forum commenter claimed that the iOS Mail app also suffers from the same glitch.

Another one stirs the dust

Days after publicly revealing the Gmail bug, Cotten discovered another flaw wherein malicious actors can potentially hide sender details in the From header by forcing Gmail to display a completely blank field.

Who’s the sender? Screenshot by Tim Cotten, emphasis (in purple) ours.

He pulled this off by replacing a portion of his test case with a long and arbitrary code string, as you can see below:

The string. Screenshot from Tim Cotten, emphasis (in purple) ours.

Average Gmail users may struggle to reveal the true sender because clicking the Reply button and the “Show original” option still yields a blank field.

The sender with no name. Screenshot by Tim Cotten, emphasis (in purple) ours.

There’s nothing there! Screenshot by Tim Cotten, emphasis (in purple) ours.

Missing sender details could potentially increase the possibility of users opening a malicious email to click an embedded link or open an attachment, especially if it contains a subject that is both actionable and urgent.

When met with silence

The Gmail vulnerabilities mentioned in this post are all related to user experience (UX), and as of this writing, Google has yet to address them. (Cotten has proposed a possible solution for the tech juggernaut.) Unfortunately, Gmail users can only wait for the fixes.

Spotting phishing attempts or spoofed emails can be tricky, especially when cybercriminals are able to penetrate trusted sources, but a little vigilance can go a long, long way.

The post Spoofed addresses and anonymous sending: new Gmail bugs make for easy pickings appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Are Deepfakes coming to a scam near you?

Malwarebytes - Wed, 11/21/2018 - 16:00

Your boss contacts you over Skype. You see her face and hear her voice, asking you to transfer a considerable amount of money to a firm you’ve never ever heard of. Would you ask for written confirmation of her orders? Or would you simply follow through on her instructions?

I would certainly be taken aback by such a request, but then again, this is not anywhere near a normal transaction for me and my boss. But, given the success rate of CEO fraud (which was a lot less convincing), threat actors would need only find the right person to contact to be able to successfully fool employees into sending the money.

Imagine the success rate of CEO fraud where the scam artists would be able to actually replicate your boss’ face and voice in such a Skype call. Using Deepfake techniques, they may reach that level in a not too distant future.

What is Deepfake?

The word “Deepfake” was created by mashing “deep learning” and “fake” together. It is a method of creating human images based on artificial intelligence (AI). Simply put, creators feed a computer data consisting of many facial expressions of a person and find someone who can imitate that person’s voice. The AI algorithm is then able to match the mouth and face to synchronize with the spoken words. All this results in a near-perfect “lip sync” with the matching face and voice.

Compared against the old Photoshop techniques to create fake evidence, this would qualify as “videoshop 3.0.”

Where did it come from?

The first commotion about this technique arose when a Reddit user by the handle DeepFakes posted explicit videos of celebrities that looked realistic. He generated these videos by replacing the original pornographic actors’ faces with those of the celebrities. By using deep learning, these “face swaps” were nearly impossible to detect.

DeepFakes posted the code he used to create these videos on GitHub and soon enough, a lot of people were learning how to create their own videos, finding new use cases as they went along. Forums about Deepfakes were immensely popular, which was immediately capitalized upon by coinminers. And at some point, a user-friendly version of Deepfake technology was bundled with a cryptominer.

The technology

Deepfake effects are achieved by using a deep learning technology called an autoencoder. Input is compressed, or encoded, into a small representation, which can then be decoded to reproduce the original input so that it matches previous images in the same context (here, video). Creators need enough relevant data to achieve this, though. To create a Deepfake image, the producer reproduces face B while using face A as input. So, while the owner of face A is talking on the caller side of the Skype call, the receiver sees face B making the movements. The receiver will observe the call as if B were the one doing the talking.

The more pictures of the targeted person we can feed the algorithm, the more realistic the facial expressions of the imitation can become.
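The core architecture can be sketched in a few lines of numpy: one shared encoder for both faces, and a separate decoder per person. This is a toy illustration with random, untrained weights and 8x8 “faces,” not a working face swapper; real Deepfake tools train these networks on thousands of images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale patches for persons A and B.
faces_a = rng.random((100, 64))
faces_b = rng.random((100, 64))

def init_layer(n_in, n_out):
    return rng.standard_normal((n_in, n_out)) * 0.01

# One shared encoder learns a compact representation of *any* face;
# each person gets their own decoder that reconstructs *their* face
# from that representation. Swapping decoders is what produces the fake.
W_enc = init_layer(64, 16)    # shared encoder: face -> 16-dim code
W_dec_a = init_layer(16, 64)  # decoder for person A
W_dec_b = init_layer(16, 64)  # decoder for person B

def encode(x):
    return np.tanh(x @ W_enc)

def decode(code, W_dec):
    return code @ W_dec

# To fake: feed person A's face through the shared encoder,
# then through person B's decoder -> A's expression on B's face.
code = encode(faces_a[:1])
fake_b = decode(code, W_dec_b)
print(fake_b.shape)  # (1, 64)
```

In practice both decoders are trained to reconstruct their own person from the shared code, which is why the swap transfers expression and pose.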

Given that an AI already exists which can be trained to mimic a voice after listening to it for about a minute, it doesn’t look as if it will take long before the voice impersonator can be replaced with another routine that repeats the caller’s sentences in a reasonable imitation of the voice that the receiver associates with the face on the screen.

Abuse cases

As mentioned earlier, the technology was first used to replace actors in pornographic movies with celebrities. We have also seen some examples of how this technology could be used to create “deep fake news.”

So, how long will it take scammers to get the hang of this to create elaborate hoaxes, fake promotional material, and conduct realistic fraud?

Hoaxes and other fake news are damaging enough as they are in the current state of affairs. By nature, people are inclined to believe what they see. If they can see it “on video” with their own eyes, why would they doubt it?

You may find the story about the “War of the Worlds” broadcast and the ensuing panic funny, but I’m pretty sure the more than a million people that were struck with panic would not agree with you. And that was just a radio broadcast. Imagine something similar with “live footage” and using the faces and voices of your favorite news anchors (or, better said, convincing imitations thereof). Imagine if threat actors could spoof a terrorist attack or mass shooting. There are many more nefarious possibilities.


The Defense Advanced Research Project Agency (DARPA) is aware of the dangers that Deepfakes can pose.

“While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns.

This manipulation of visual media is enabled by the wide-scale availability of sophisticated image and video editing applications, as well as automated manipulation algorithms that permit editing in ways that are very difficult to detect either visually or with current image analysis and visual media forensics tools. The forensic tools used today lack robustness and scalability, and address only some aspects of media authentication; an end-to-end platform to perform a complete and automated forensic analysis does not exist.”

DARPA has launched the MediFor program to stimulate researchers to develop technology that can detect manipulations and even provide information about how the manipulations were done.

One of the signs that researchers now look for when trying to uncover a doctored video is how often the person in the video blinks his eyes. Where a normal person would blink every few seconds, a Deepfake imitation might not do it at all, or not often enough to be convincing. One of the reasons for this effect is that pictures of people with their eyes closed don’t get published that much, so they would have to use actual video footage as input to get the blinking frequency right.

As technology advances, we will undoubtedly see improvements on both the imitating and the defensive sides. What already seems to be evident is that it will take more than the trained eye to recognize Deepfake videos—we’ll need machine learning algorithms to adapt.

Anti-video fraud

With the exceptional speed of developments in the Deepfakes field, it seems likely that you will see a hoax or scam using this method in the near future. Maybe we will even start using specialized anti-video fraud software at some point, in the same way as we have become accustomed to the use of anti-spam and anti-malware protection.

Stay safe and be vigilant!

The post Are Deepfakes coming to a scam near you? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Web skimmers compete in Umbro Brasil hack

Malwarebytes - Tue, 11/20/2018 - 16:51

Umbro, the popular sportswear brand, has had its Umbro Brasil website hacked and injected with not one but two web skimmers tied to Magecart groups.

Magecart has become a household name in recent months due to high profile attacks on various merchant websites. Criminals can seamlessly steal payment and contact information from visitors purchasing products or services online.

Multiple threat actors are competing at different scales to get their share of the pie. As a result, there are many different web skimming scripts and groups that focus on particular types of merchants or geographical areas.

Case in point: in this Umbro Brasil compromise, one of the two skimming scripts checks for the presence of other skimming code and, if it finds any, slightly alters the credit card number entered by the victim. Effectively, the first skimmer receives wrong credit card numbers as a direct act of sabotage.

Two skimmers go head to head

The Umbro Brasil website ([.]br) runs the Magento e-commerce platform. The first skimmer is loaded via a fake Bootstrap library domain, bootstrap-js[.]com, recently discussed by Brian Krebs. Looking at its code, we see that it fits the profile of threat actors predominantly active in South America, according to a recent report from RiskIQ.

1st skimmer with code exposed in plain sight (conditional with referer check)

This skimmer is not obfuscated and exfiltrates the data in a standard JSON output. However, another skimmer is also present on the same site, loaded from g-statistic[.]com. This time, it is heavily obfuscated as seen in the picture below:

2nd skimmer, showing large obfuscation blurb

No fairplay between Magecart groups

Another interesting aspect is how the second skimmer alters the credit card number destined for the first skimmer. Before the form data is sent, it grabs the credit card number and replaces its last digit with a random number.

The following code snippet shows how certain domain names trigger this mechanism. Here we recognize bootstrap-js[.]com, which is the first skimmer. Then, a random integer ranging from 0 to 9 is generated for later use. Finally, the credit card number is stripped of its last digit, and the previously generated random number is appended in its place.

Code to conditionally swap the last digit of the credit card (decoding courtesy of Willem de Groot)
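The sabotage logic reduces to a one-line transformation. Here is a hypothetical reimplementation in Python (the actual skimmer is obfuscated JavaScript; the function name and sample card number are ours):

```python
import random

def sabotage_last_digit(card_number: str) -> str:
    """Mimic the second skimmer's trick: keep every digit of the
    card except the last, which is replaced with a random 0-9."""
    return card_number[:-1] + str(random.randint(0, 9))

# Example with a well-known test card number (not a real card).
tampered = sabotage_last_digit("4111111111111111")
```

One in ten times the random digit will match the original, so the sabotage is not even guaranteed, but on average the competing skimmer collects numbers that are subtly wrong.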

By tampering with the data, the second skimmer can send an invalid but almost correct credit card number to the competing skimmer. Because only a small part of it was changed, it will most likely pass validation tests and go on sale on black markets. Buyers will eventually realize their purchased credit cards are not working and will not trust that seller again.

The second skimmer, now being the only one to hold the valid credit card number, uses a special function to encode the data it exfiltrates. Looking at the POST request, we can only see what looks like gibberish sent to its exfiltration domain (onlineclouds[.]cloud):

Encoded data sent back to exfiltration server

This situation, where multiple infections reside on the same host, is not unusual. Indeed, unless the underlying vulnerability in the web server is fixed, the site remains prone to repeated compromises by different perpetrators. Sometimes they coexist peacefully; sometimes they compete directly for the same resources.

Coolest sport in town

While web skimming has been going on for years, it has now become a very common (re-)occurrence. Security researcher Willem de Groot has aggregated data on 40,000 affected websites since he began counting in 2015. His study also shows that reinfection among e-commerce sites (a 20% reinfection rate) is a problem that needs addressing.

Website owners that handle payment processing need to do due diligence in securing their platform by keeping their software and plugins up-to-date, as well as paying special attention to third-party scripts.

Consumers also need to be aware of this threat when shopping online, even if the merchant is a well known and reputable brand. On top of closely monitoring their bank statements, they should consider ways in which they can limit the damage from malicious withdrawals.

We have informed Umbro of this compromise, and even though the skimmers are still online, Malwarebytes users are covered by our web protection module.


Thanks to Willem de Groot for his assistance in this research.



1st skimmer: bootstrap-js[.]com
2nd skimmer: g-statistic[.]com


1st skimmer's exfil domain: bootstrap-js[.]com
2nd skimmer's exfil domain: onlineclouds[.]cloud

The post Web skimmers compete in Umbro Brasil hack appeared first on Malwarebytes Labs.

Categories: Techie Feeds

What DNA testing kit companies are really doing with your data

Malwarebytes - Tue, 11/20/2018 - 15:00

Sarah* hovered over the mailbox, envelope in hand. She knew as soon as she mailed off her DNA sample, there’d be no turning back. She ran through the information she looked up on 23andMe’s website one more time: the privacy policy, the research parameters, the option to learn about potential health risks, the warning that the findings could have a dramatic impact on her life.

She paused, instinctively retracting her arm from the mailbox opening. Would she live to regret this choice? What could she learn about her family, herself that she may not want to know? How safe did she really feel giving her genetic information away to be studied, shared with others, or even experimented with?

Thinking back to her sign-up experience, Sarah suddenly worried about the massive amount of personally identifiable information she already handed over to the company. With a background in IT, she knew what a juicy target hers and other customers’ data would be for a potential hacker. Realistically, how safe was her data from a potential breach? She tried to recall the specifics of the EULA, but the wall of legalese text melted before her memory.

Pivoting on her heel, Sarah began to turn away from the mailbox when she remembered just why she wanted to sign up for genetic testing in the first place. She was compelled to learn about her own health history after finding out she had a rare genetic disorder, Ehlers-Danlos syndrome, and wanted to present her DNA for the purpose of further research. In addition, she was on a mission to find her mother’s father. She had a vague idea of who he was, but no clue how to track him down, and believed DNA testing could lead her in the right direction.

Sarah closed her eyes and pictured her mother’s face when she told her she found her dad. With renewed conviction, she dropped the envelope in the mailbox. It was done.

*Not her real name. Subject asked that her name be changed to protect her anonymity.

An informed decision

What if Sarah were you? Would you be inclined to test your DNA to find out about your heritage, your potential health risks, or discover long lost family members? Would you want to submit a sample of genetic material for the purpose of testing and research? Would you care to have a trove of personal data stored in a large database alongside millions of other customers? And would you worry about what could be done with that data and genetic sample, both legally and illegally?

Perhaps your curiosity is powerful enough to sign up without thinking through the consequences. But this would be a dire mistake. Sarah spent a long time weighing the pros and cons of her situation, and ultimately made an informed decision about what to do with her data. But even she was missing parts of the puzzle before taking the plunge. DNA testing is so commonplace now that we’re blindly participating without truly understanding the implications.

And there are many. From privacy concerns to law enforcement controversies to life insurance accessibility to employment discrimination, red flags abound. And yet, this fledgling industry shows no signs of stopping. As of 2017, an estimated 12 million people have had their DNA analyzed through at-home genealogy tests. Want to venture a guess at how many of those read through the 21-page privacy policy to understand exactly how their data is being used, shared, and protected?

Nowadays, security and privacy cannot be assumed. Between hacks of major social media companies and underhanded sharing of data with third parties, there are ways that companies are both negligent of the dangers of storing data without following best security practices and complicit in the dissemination of data to those willing to pay—whether that’s in the name of research or not.

So I decided to dig into exactly what these at-home DNA testing kit companies are doing to protect their customers’ most precious data, since you can’t get much more personally identifiable than a DNA sample. How seriously are these organizations taking the security of their data? What is being done to secure these massive databases of DNA and other PII? How transparent are these companies with their customers about what’s being done with their data?

There’s a lot to unpack with commercial DNA testing—often pages and pages of documents to sift through regarding privacy, security, and design. It can be mind-numbingly difficult to process, which is why so many customers just breeze through agreements and click “Okay” without really thinking about what they’re purchasing.

But this isn’t some app on your phone or software on your computer. It’s data that could be potentially life-changing. Data that, if misinterpreted, could send people into an emotional tailspin, or worse, a false sense of security. And it’s data that, in the wrong hands, could be used for devastating purposes.

In an effort to better educate users about the pros and cons of participating in at-home DNA testing, I’m going to peel back the layers so customers can see for themselves, as clearly as possible, the areas of concern, as well as the benefits of using this technology. That way, users can make informed choices about their DNA and related data, information that we believe should not be taken or given away lightly.

That way, when it’s your turn to stand in front of the mailbox, you won’t be second-guessing your decision.

Area of concern: life insurance

Only a few years ago in the United States, health insurance companies could deny applicants coverage based on pre-existing conditions. While this is thankfully no longer the case, life insurance companies can be more selective about who they cover and how much they charge.

According to the American Council of Life Insurers (ACLI), a life insurance company may ask an applicant for any relevant information about their health—and that includes the results of a genetic test, if one was taken. Any indication of health risk could factor into the price tag of coverage here in the United States.

Of course, there’s nothing that forces an individual to disclose that information when applying for life insurance. But the industry relies on honest communication from its customers in order to effectively price policies.

“The basis of sound underwriting has always been the sharing of information between the applicant and the insurer—and that remains today,” said Dr. Robert Gleeson, consultant for the ACLI. “It only makes sense for companies to know what the applicant knows. There must be a level playing field.”

The ACLI believes that the introduction of genetic testing can actually help life insurers better determine risk classification, enabling them to offer overall lower premiums for consumers. However, the fact remains: If a patient receives a diagnosis or if genetic testing reveals a high risk for a particular disease, their insurance premiums go up.

In Australia, any genetic results deemed a health risk can result in not only increased premiums but denial of coverage altogether. And if you thought Australians could get away with a little white lie of omission when applying for life insurance, they are bound by law to disclose any known genetic test results, including those from at-home DNA testing kits.

Area of concern: employment

Going back as far as 1964 to Title VII of the Civil Rights Act, employers cannot discriminate based on race, color, religion, sex, or nationality. Workers with disabilities or other health conditions are protected by the Americans with Disabilities Act, the Rehabilitation Act, and the Family and Medical Leave Act (FMLA).

But these regulations only apply to employees or candidates with a demonstrated health condition or disability. What if genetic tests reveal the potential for disability or health concern? For that, we have GINA.

The Genetic Information Nondiscrimination Act (GINA) prohibits the use of genetic information in making employment decisions.

“Genetic information is protected under GINA, and cannot be considered unless it relates to a legitimate safety-sensitive job function,” said John Jernigan, People and Culture Operations Director at Malwarebytes.

So that’s what the law says. What happens in reality might be a different story. Unfortunately, it’s popular practice for individuals to share their genetic results online, especially on social media. In fact, 23andMe has even sponsored celebrities unveiling and sharing their results. Surely no one will see videos of stars like Mayim Bialik sharing their 23andMe results live and follow suit.

The hiring process is incredibly subjective. It would be almost impossible to point the finger at any employer and say, “You didn’t hire me because of the screenshot I shared on Facebook of my 23andMe results!” It could be entirely possible that the candidate was discriminated against, but in court, any he said/she said arguments will benefit the employer and not the employee.

Our advice: steer clear of sharing the results, especially any screenshots, on social media. You never know how someone could use that information against you.

Area of concern: personally identifiable information (PII)

Consumer DNA tests are clearly best known for collecting and analyzing DNA. However, just as important—arguably more so to their bottom line—is the personally identifiable information they collect from their customers at various points in their relationship. Organizations are absorbing as much as they can about their customers in the name of research, yes, but also in the name of profit.

What exactly do these companies ask for? Besides the actual DNA sample, they collect and store content from the moment of registration, including your name, credit card, address, email, username and password, and payment methods. But that’s just the tip of the iceberg.

Along with the genetic and registration data, 23andMe also curates self-reported content through a hulking, 45-minute long survey delivered to its customers. This includes asking about disease conditions, medical and family history, personal traits, and ethnicity. 23andMe also tracks your web behavior via cookies, and stores your IP address, browser preference, and which pages you click on. Finally, any data you produce or share on its website, such as text, music, audio, video, images, and messages to other members, belongs to 23andMe. Getting uncomfortable yet? These are hugely attractive targets for cybercriminals.

Survey questions gather loads of sensitive PII.

Oh, but there’s more. Companies such as Ancestry or Helix have ways to keep their customers consistently involved with their data on their sites. They’ll send customers a message saying, “You disclosed to us you had allergies. We’re doing this study on allergies—can you answer these questions?” And thus even more information is gathered.

Taking a closer look at the companies’ EULAs, you’ll discover that PII can also be gathered from social media, including any likes, tweets, pins, or follow links, as well as any profile information from Facebook if you use it to log into their web portals.

But the information-gathering doesn’t stop there. Ancestry and others will also search public and historical records, such as newspaper mentions, birth, death, and marriage records related to you. In addition, Ancestry cites a frustratingly vague “information collected from third parties” bullet point in their privacy policy. Make of that what you will.

Speaking of third parties, many of them will get a good glimpse of who you are thanks to policies that allow commercial DNA testing companies to market new product offers from business partners, including producing targeted ads personalized to users based on their interests. And finally, according to the privacy policy shared among many of these sites, DNA testing companies can and do sell your aggregate information to third parties “in order to perform business development, initiate research, send you marketing emails, and improve our services.”

That’s a lot of marketing emails.

One such partner who benefits from the sharing of aggregate information is Big Pharma: at-home DNA testing kits profit by selling user data to pharmaceutical companies for development of new drugs. For some, this might constitute crossing the line; for others, it represents being able to help researchers and those suffering from disease with their data.

“You have to trust all their affiliates, all their employees, all the people that could purchase the company,” said Sarah, our IT girl who elected to participate in 23andMe’s research. “It’s better to take the mindset that there’s potential that any time this could be seen and accessed by anyone. You should always be willing to accept that risk.”

Sadly, there’s already more than enough reason to assume any of this information could be stolen—because it has.

In June 2018, MyHeritage announced that the data of over 92 million users had been leaked from the company’s website in October the previous year. Emails and hashed passwords were stolen—thankfully, the DNA and other data of customers was safe. Prior to that, the emails and passwords of 300,000 Ancestry.com users were stolen back in 2015.

But as these databases grow and more information is gathered on individuals, the mark only becomes juicier for threat actors. “They want to create as broad a profile of the target as possible, not just of the individual but of their associates,” said security expert and founder of Have I Been Pwned Troy Hunt, who tipped off Ancestry about their breach. “If I know who someone’s mother, father, sister, and descendants might be, imagine how convincing a phishing email I could create. Imagine how I could fool your bank.”

Cybercriminals can weaponize data not only to resell to third parties but for blackmail and extortion purposes. Through breaching this data, criminals could dangle coveted genetic, health, and ancestral discoveries in front of their victims. You’ve got a sibling—send money here and we’ll show you who. You’re pre-dispositioned to a disease, but we won’t tell you which one until you send Bitcoin here. Years later, the Ashley Madison breach is still being exploited in this way.

Doing it right: data stored safely and separately

With so much sensitive data being collected by DNA testing companies, especially content related to health, one would hope these organizations pay special attention to securing it. In this area, I was pleasantly surprised to learn that several of the top consumer DNA tests banded together to create a robust security policy that aims to protect user data according to best practices.

And what are those practices? For starters, DNA testing kit companies store user PII and genetic data in physically separate computing environments, and encrypt the data at rest and in transit. PII is assigned a randomized customer identification number for identification and customer support services, and genetic information is identified only by a barcode system.

Security is baked into the design of the systems that gather, store, and disseminate data, including explicit security reviews in the software development lifecycle, quality assurance testing, and operational deployment. Security controls are also audited on a regular basis.

Access to the data is restricted to authorized personnel, based on job function and role, in order to reduce the likelihood of malicious insiders compromising or leaking the data. In addition, robust authentication controls, such as multi-factor authentication and single sign-on, prohibit data flowing in and out like the tides.

For additional safety measures, consumer DNA testing companies conduct penetration testing and offer a bug bounty program to shore up vulnerabilities in their web application. Even more care has been taken with security training and awareness programs for employees, and incident management and response plans were developed with guidance from the National Institute of Standards and Technology (NIST).

In the words of the great John Hammond: They spared no expense.

When Hunt made the call to Ancestry about the breach, he recalls that they responded quickly and professionally, unlike other organizations he’s contacted about data leaks and breaches.

“There’s always a range of ways organizations tend to deal with this. In some cases, they really don’t want to know. They put up the shutters and stick their head in the sand. In some cases, they deny it, even if the data is right there in front of them.”

Thankfully, that does not seem to be the case for the major DNA testing businesses.

Area of concern: law enforcement

At-home DNA testing kit companies are a little vague about when and under which conditions they would hand over your information to law enforcement, using terms such as “under certain circumstances” and “we have to comply with valid requests” without defining the circumstances or indicating what would be considered “valid.” However, some of them do publish transparency reports that detail government requests for data and how they have responded.

Yet, news broke earlier this year that consumer genealogy DNA data was used to find the Golden State Killer, and it gave consumers collective pause. While putting a serial killer behind bars is a worthy cause, the killer was found because relatives of his had uploaded their DNA to the free genealogy site GEDmatch, and it was a close enough match to DNA found at the original 1970s crime scenes that investigators were able to pin him down.

This opens up a can of worms about the impact of commercially-generated genetic data being available to law enforcement or other government bodies. How else could this data be used or even abused by police, investigators, or legislatures? The success of the Golden State Killer arrest could lead to re-opening other high-profile cold cases, or eventually turning to the consumer DNA databases every time there’s DNA evidence found at the scene of a crime.

Because so many individuals have now signed up for commercial DNA tests, odds are 60 percent and rising that, if you live in the US and are of European descent, you can be identified by information that your relatives have made public. In fact, law enforcement soon may not need a family member to have submitted DNA in order to find matches. According to a study published in Science, that figure will soon rise to 100 percent as consumer DNA databases reach critical mass.

What’s the big deal if DNA is used to capture criminals, though? Putting on my tinfoil hat for a second, I imagine a Minority-Report-esque scenario of stopping future crimes or misinterpreting DNA and imprisoning the wrong person. While those scenarios are a little far-fetched, I didn’t have to look too hard for real-life instances of abuse.

In July 2018, Vice reported that Canada’s border agency was using data from consumer DNA testing sites to establish the nationalities of migrants and deport those it found suspect. In an era of high tensions around race, nationality, and immigration, it’s not hard to see how genetic data could be used against an individual or family in any number of civil or human rights violations.

Area of concern: accuracy of testing results

While this doesn’t technically fall under the guise of cybersecurity, the accuracy of test results is of concern because these companies are doling out incredibly sensitive information that has the potential to levy dramatic change on peoples’ lives. A March 2018 study in Nature found that 40 percent of results from at-home DNA testing kits were false positives, meaning someone was deemed “at risk” for a category that later turned out to be benign. That statistic is validated by the fact that test results from different consumer testing companies can vary dramatically.

The relative inaccuracy of the test results is compounded by the fact that there’s a lot of room to misinterpret them. Whether it’s learning you’re high risk for Alzheimer’s or discovering that your father is not really your father, health and ancestry data can be consumed without context, and with no doctor or genetic counselor on hand to soften the blow.

In fact, consumer DNA testing companies are rather reticent to send their users to genetic counselors—it’s essentially antithetical to their mission, which is to make genetic data more accessible to their customers.

Brianne Kirkpatrick, a genetic counselor and ancestry expert with the National Society for Genetic Counselors (NSGC), said that 23andMe once had a fairly prominent link on their website for finding genetic counselors to help users understand their results. That link is now either buried or gone. In addition, she mentioned that one of her clients had to call 23andMe three times before they finally agreed to recommend Kirkpatrick’s counseling services.

“The biggest drawback is people believing that they understand the results when maybe they don’t,” she said. “For example, people don’t understand that the BRCA1 and BRCA2 testing these companies provide is really only helpful if you’re an Ashkenazi Jew. In the fine print, it says they look at three variants out of thousands, and these three are only for this population. But people rush to make a conclusion because at a high level it looks like they should be either relieved or worried. It’s complex information, which is why genetic counselors exist in the first place.”

But what’s the symbology?

The data becomes even messier when you move beyond users of European descent. People of color, especially those of Asian or African descent, have had a particularly hard go of it because they are underrepresented in many companies’ data sets. Often, black, Hispanic, or Asian users receive reports that list parts of their heritage as “low confidence” because their DNA doesn’t sufficiently match the company’s points of reference.

Not only do DNA testing companies offer their customers information that is sometimes incomplete, inaccurate, and easy to misunderstand; they also provide raw data output that can be downloaded and then sent to third-party websites for even more evaluation. But those sites have not historically been as well-protected as the major consumer DNA testing companies. Once again, the security and privacy of genetic data go fluttering away into the ether when users upload it, unencrypted and unprotected, to third-party platforms.

Doing it right: privacy policy

As an emerging industry, there’s little in the way of regulation or public policy when it comes to consumer genetic testing. Laboratory testing is bound by Medicare and Medicaid clauses, and commercial companies are regulated by the FDA, but DNA testing companies are a little of both, with the added complexity of operating online. The General Data Protection Regulation (GDPR) launched in May 2018 requires companies to publicly disclose whether they’ve experienced a cyberattack, and imposes heavy fines for those who are not in compliance. But GDPR only applies to companies doing business in Europe.

As far as legal precedent is concerned, the 1990 California Supreme Court case Moore vs. Regents of the University of California found that individuals no longer have claim over their genetic data once they relinquish it for medical testing or other forms of study. So if Ancestry sells your DNA to a pharmaceutical company that then uses your cells to find the cure for cancer, you won’t see a dime of compensation. Bummer.

Despite the many opportunities for data to be stolen, abused, misunderstood, and sold to the highest bidder, the law simply hasn’t caught up to our technology. So the teams developing security and privacy policies for DNA testing companies are doing pioneering work, embracing security best practices and transparency at every turn. This is the right thing to do.

Almost two years ago, founders at Helix started working with privacy experts in order to understand all the key pieces they would need to safeguard—and they recognized that there was a need to form a formal coalition to enhance collaboration across the industry.

Working through the Future of Privacy Forum, an independent think tank focused on creating public policy that leaders in the industry could follow, they teamed up with representatives from 23andMe, Ancestry, and others to create a set of standards that primarily hammered on the importance of transparency and clear communication with consumers.

“It is something that we are very passionate about,” said Misha Rashkin, Senior Genetic Counselor at Helix, and an active member of developing the shared privacy policy. “We’ve spent our careers explaining genetics to people, so there’s a years-long held belief that transparent, appropriate education—meaning developing policy at an approachable reading level—has got to be a cornerstone of people interacting with their DNA.”

While the privacy coalition strove for easy-to-understand language, the fact remains that their privacy policy is a 21-page document that most people are going to ignore. Rashkin and other team members were aware of this, so they built more touch points for customers to drill into the data and provide consent, including in-product notifications, emails, blog posts, and infographics delivered to customers as they continued to interact with their data on the platform.

Maps, diagrams, charts, and other visuals help users better understand their data.

After Rashkin and company finalized and published their privacy policy, they turned it into a checklist that partners could use to determine baseline security and privacy standards, and what companies need to do to be compliant. But the work won’t stop there.

“This is just the beginning,” said Elissa Levin, Senior Director of Clinical Affairs and Policy at Helix and a founding member of the privacy policy coalition. “As the industry evolves, we are planning on continuing to work on these standards and progress them. And then we’re actually going out to educate policy makers and regulators and the public in general. We want to help them determine what these policies are and differentiate who are the good players and who are the not-so-good players.”

Biggest area of concern: the unknown

We just don’t know what we don’t know when it comes to technology. When Mark Zuckerberg invented Facebook, he merely wanted an easy way to look at pretty college girls. I don’t think it entered his wildest dreams that his company’s platform could be used to directly interfere with a presidential election, or lead to the genocide of citizens in Myanmar. But because of a lack of foresight and an inability to move quickly to right the ship, we’re now all mired in the mud.

Right now, cybercriminals aren’t searching for DNA on the black market, but that doesn’t mean they won’t. Cybercrime often follows the path of least resistance—what takes the least amount of effort for the biggest payoff? That’s why social engineering attacks still vastly outnumber traditional malware infection vectors.

Because of that, cybercriminals likely believe it’s not worth jumping through hoops to try and break serious encryption for a product (genetic data) that’s not in demand—yet. But as biometrics and fingerprinting and other biological modes of authentication become more popular, I imagine it’s only a matter of time before the wagons start circling.

And yet—does it even matter? Even with all of the red flags exposed, millions of customers have taken the leap of faith because their curiosity overpowers their fear, or the immediate gratification is more satisfying than the nebulous, vague “what ifs” that we in the security community haven’t solved for. With so much data publicly available, do people even care about privacy anymore?

“There are changing sentiments about personal data among generations,” said Hunt. “There’s this entire generation who has grown up sharing their whole world online. This is their new social norm. We’re normalizing the collection of this information. I think if we were to say it’s a bad thing, we’d be projecting our more privacy-conscious viewpoints on them.”

Others believe that, regardless of personal feelings on privacy, this technology isn’t going away, so we—security experts, consumers, policy makers, and genetic testers alike—need to address its complex security and privacy issues head on.

“Privacy is such a personal matter. And while there may be trends, that doesn’t necessarily speak to an entire generation. There are people who are more open and there are people who are more concerned,” said Levin.  “Whether someone is concerned or not, we are going to set these standards and abide by these practices because we think it’s important to protect people, even if they don’t think it’s critical. Fundamentally, it does come down to being transparent and helping people be aware of the risk to at least mitigate surprises.”

Indeed, whether privacy is personally important to you or not, understanding which data is being collected from where and how companies benefit from using your data makes you a more well-informed consumer.

Don’t just check that box. Look deeper, ask questions, and do some self-reflection about what’s important to you. Because right now, if someone steals your data, you might have to change a few passwords or cancel a couple credit cards. You might even be embroiled in identity theft hell. But we have no idea what the consequences will be if someone steals your genetic code.

Laws change and society changes. What’s legal and sanctioned now may not be in the future. But that data is going to be around a long time. And you cannot change your DNA.

The post What DNA testing kit companies are really doing with your data appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 12 – 18)

Malwarebytes - Mon, 11/19/2018 - 17:08

Last week on Malwarebytes Labs, we found out that TrickBot became a top business threat, so we took a deeper look at what’s new with it.

With Christmas just around the corner, the Secret Sister scam returned.

We also touched on the security and privacy (or lack thereof) in smart jewelry, air traffic control compromise, and what security concerns to take note of when automating your business.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (November 12 – 18) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Business email compromise scam costs Pathé $21.5 million

Malwarebytes - Mon, 11/19/2018 - 16:00

Recently released court documents show that European-based cinema chain Pathé lost a small fortune to a business email compromise (BEC) scam in March 2018. How much? An astonishing US$21.5 million (roughly 19 million euros). The attack, which ran for about a month, cost the company 10 percent of its total earnings.

What is business email compromise?

Business email compromise is a type of phishing attack, sprinkled with a dash of targeted social engineering. A scammer pretends to be an organisation’s CEO, then starts bombarding the CFO with urgent requests for a money transfer. The requests are generally for wire transfers (hard to trace), and are often routed through Hong Kong (lots of wire transfers, even harder to trace).

Scammers will sometimes buy domain names to make the fake emails look even more convincing. These attacks rely on the social importance of the CEO: nobody wants to question the boss. If an organisation has no safeguards in place against these attacks, a scammer will likely be very rich indeed. It only takes one successful scam to generate a huge haul, at which point the scammer simply vanishes into the ether.
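Those look-alike domains are often only one or two characters away from the real thing, which makes them easy to miss by eye but easy to catch in code. The sketch below (a minimal illustration with hypothetical domain names, not a production mail filter) uses Levenshtein edit distance to flag near-miss sender domains:

```javascript
// Flag sender domains that are suspiciously close to, but not equal to,
// a trusted domain -- a common trick in BEC phishing mail.
function levenshtein(a, b) {
  // dp[i][j] = edit distance between a[0..i) and b[0..j)
  const dp = Array.from({ length: a.length + 1 }, () => new Array(b.length + 1).fill(0));
  for (let i = 0; i <= a.length; i++) dp[i][0] = i;
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function looksLikeSpoof(senderDomain, trustedDomain, maxDistance = 2) {
  return senderDomain !== trustedDomain &&
         levenshtein(senderDomain, trustedDomain) <= maxDistance;
}

console.log(looksLikeSpoof('path3.com', 'pathe.com')); // near-miss: true
console.log(looksLikeSpoof('pathe.com', 'pathe.com')); // exact match: false
```

A real mail gateway would combine a check like this with SPF/DKIM/DMARC validation, but even this crude distance test catches the classic one-character swap.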

What happened here?

This particular BEC scam is of interest because it highlights a slightly different approach to the attack. Scammers abandoned pitting the fake CEO against the real CFO in favour of faking French head office missives to the Dutch management.

It all begins with the following mail:

“We are currently carrying out a financial transaction for the acquisition of foreign corporation based in Dubai. The transaction must remain strictly confidential. No one else has to be made aware of it in order to give us an advantage over our competitors.”

Even though the CFO and CEO thought it strange, they pressed on regardless and sent over €800,000. More requests followed, including some while the CFO was on vacation—both executives were fired after the head office noticed. Although they weren’t involved in the fraud, Pathé said they could—and should—have noticed the “red flags.” They didn’t, and there was no safety net in place, so the business email compromise attempt was devastatingly successful.


The shame game

Many instances of BEC fraud go unreported because nobody wants to voluntarily admit they fell victim. As a result, the first you tend to hear about it is in court proceedings. It’s hard to guess how much is really lost to BEC fraud, but the FBI has previously floated a figure of $2.1 billion. The actual figure could easily be higher.

How can businesses combat this?
  1. Check the social media accounts and other online portals of your executives, and have those connected to finance make their profiles as private—and secure—as possible. You can certainly reduce a CFO’s online footprint, even if you can’t remove it completely.
  2. Authentication is key. The CFO and CEO, or whoever is responsible for wire authorisation, should have a special process in place for approvals. It shouldn’t be email based, as that’s how people end up in BEC scam trouble in the first place. If you have a unique, secure method of communication, then use it. If you can lock down approvals with additional security like two-factor authentication, then do so. Some organisations make use of bespoke, offline authenticator apps on personal devices. The solution is out there!
  3. If you have many offices, and different branches move money around independently, the same rules apply: find a consistent method of authentication that can be used across multiple locations. This would have almost certainly saved Pathé from losing $21.5 million.
  4. When there’s no other way to lock things down, it’s time to break out the telephone and rely on verbal authentication. While this may cause a small amount of business drag (If you’re on the other side of the world, is your CFO fielding calls at 2:00am?), it’s better than losing everything.
A threat worth tackling

Business email compromise continues to grow in popularity among scammers, and it’s up to all of us to combat it. If your organisation doesn’t take BEC seriously, you could easily be on the receiving end of an eye-watering phone call from your bank manager. Keeping your finances in the black is a priority, and BECs are one of the most insidious threats around, whether you distribute movies, IT services, or anything else for that matter. Don’t let malicious individuals decide when to call things a wrap.

The post Business email compromise scam costs Pathé $21.5 million appeared first on Malwarebytes Labs.

Categories: Techie Feeds

6 security concerns to consider when automating your business

Malwarebytes - Fri, 11/16/2018 - 16:00

Automation is an increasingly enticing option for businesses, especially when those in operations are in a perpetual cycle of “too much to do and not enough time to do it.”

When considering an automation strategy, business representatives must be aware of any security risks involved. Here are six concerns network admins and other IT staff should keep in mind.

1. Using automation for cybersecurity in counterproductive ways

The cybersecurity teams at many organizations are overextended, accustomed to taking on so many responsibilities that their overall productivity goes down. Automating some cybersecurity tasks could provide much-needed relief for those team members, as long as those employees use automation strategically.

For example, if cybersecurity team members automate standard operating procedures, they’ll have more time to triage issues and investigate potential vulnerabilities. But, the focus must be on using automation in a way that makes sense for cybersecurity—as well as the other parts of the business. Human intelligence is still needed alongside automation in order to better identify threats, analyze patterns, and quickly make use of available resources. If you build up defenses but leave them unattended, eventually the enemies are going to break through.

2. Giving too many people access to automatic payment services

Forgetting to pay a bill on time is embarrassing and can negatively affect a company’s access to lines of credit. Fortunately, companies can use numerous automatic bill-paying services to deduct the necessary amounts each month, often on a specified day.

Taking that approach prevents business representatives from regularly having to pull credit cards out of their wallets and manually type the numbers into forms. However, it’s a best practice to restrict the number of people who can set up those payments and verify that they happen.

Otherwise, if there are problems with a payment, it’ll become too difficult to investigate what went wrong. In addition, there’s a possibility of insider threats, such as a disgruntled employee or someone looking to get revenge after termination. Malicious insiders could access a payment service and change payment schedules, delete payment methods, withdraw large amounts, or otherwise wreak havoc.

3. Thinking that automation is infallible

One of the especially handy things about automation is that it can reduce the number of errors people make. Statistics indicate that almost 71 percent of workers report being disengaged at the office. Repetitive tasks are often to blame, and automation could reduce the boredom people feel (and the mistakes they make) by freeing them up for more challenging projects.

Regardless of the ways they use automation, IT admins mustn’t fall into the habit of believing that automated tools are foolproof, and it’s not necessary to check for mistakes. For example, if a company uses automation to deal with financial-related content, such as invoices, it should not adopt a relaxed approach to keeping that information secure just because a tool is now handling the task.

In all responsibilities that involve keeping data secure, humans still play a vital role in ensuring things are working as they should. After all, people are the ones who set up the processes that automation carries out, and those people could have made mistakes, too.

4. Failing to account for GDPR

The General Data Protection Regulation (GDPR) went into effect in May 2018, and it determines how businesses must treat the data of customers in the European Union. Being in violation could result in substantial fines for businesses, yet some companies aren’t even aware they’re doing something wrong.

Keeping information in a customer relationship management (CRM) database can help maintain GDPR compliance by giving businesses accurate and up-to-date records of their customers, making it easier to ensure they treat that information appropriately. As the GDPR gives customers numerous rights, including the right to have data erased or the right to have the data stored but not processed, any automation tools selected by an organization need to be agile enough to accommodate those requests.

Automation—whether achieved through a CRM tool or otherwise—can actually help companies better align with GDPR regulations. In fact, it’s essential that companies not overlook GDPR when they choose ways to automate processes.

5. Not using best practices with password managers

Password managers are incredibly convenient and secure because they store, encrypt, and automatically fill in the proper passwords for any number of respective accounts—as long as users know the correct master password. Some of them even automate filling in billing details by storing payment information in secure online wallets.

However, there are wrong ways to use password managers for business or personal purposes. For example, if a person chooses a master password that she’s already used on multiple other sites or shares that password with others, she’s defeated the purpose of the password manager. Choosing a password manager with multi-factor authentication is our recommendation for the most secure way to log into your accounts.

It’s undoubtedly convenient to visit a site and have it automatically fill in your password for you with one click. But, password managers only work as intended when employees use them correctly.

6. Ignoring notifications to update automation software

Many automation tools display pop-up messages when new software updates are available. Sometimes the updates only encompass new features, but it’s common for them to address bugs that could compromise security. When the goal is to dive into work and get as much done as possible, taking a few minutes to update automation software isn’t always an appealing option.

But, if outdated software ends up leading to an attack and compromising customer records, people will wish they hadn’t procrastinated. It’s best for businesses to get on a schedule, such as checking automation software for updates on a particular day each month (Patch Tuesday, for example).

Fortunately, many software titles allow people to choose the desired time for the update to happen, or in essence, automate the maintenance of automation software. Then, users can set the software to update outside of business hours or during other likely periods of downtime.

Automation is advantageous—if security remains a priority

Although automation can be a tremendous help to businesses, it can also pose risks if misused, neglected, or too heavily relied upon. Staying aware of the security-related issues raised in this article helps organizations of all sizes and in all industries use automated tools safely and effectively.

The post 6 security concerns to consider when automating your business appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Dweb: Building Cooperation and Trust into the Web with IPFS

Mozilla Hacks - Wed, 08/29/2018 - 14:43

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: these projects are open source, open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

Some projects start small, aiming for incremental improvements. Others start with a grand vision, leapfrogging today’s problems by architecting an idealized world. The InterPlanetary File System (IPFS) is definitely the latter – attempting to replace HTTP entirely, with a network layer that has scale, trust, and anti-DDoS measures all built into the protocol. It’s our pleasure to have an introduction to IPFS today from Kyle Drake, the founder of Neocities, and Marcin Rataj, the creator of IPFS Companion, both on the IPFS team at Protocol Labs. – Dietrich Ayala

IPFS – The InterPlanetary File System

We’re a team of people all over the world working on IPFS, an implementation of the distributed web that seeks to replace HTTP with a new protocol that is powered by individuals on the internet. The goal of IPFS is to “re-decentralize” the web by replacing the location-oriented HTTP with a content-oriented protocol that does not require trust of third parties. This allows for websites and web apps to be “served” by any computer on the internet with IPFS support, without requiring servers to be run by the original content creator. IPFS and the distributed web unmoor information from physical location and singular distribution, ultimately creating a more affordable, equal, available, faster, and less censorable web.

IPFS aims for a “distributed” or “logically decentralized” design. IPFS consists of a network of nodes, which help each other find data using a content hash via a Distributed Hash Table (DHT). The result is that all nodes help find and serve web sites, and even if the original provider of the site goes down, you can still load it as long as one other computer in the network has a copy of it. The web becomes empowered by individuals, rather than depending on the large organizations that can afford to build large content delivery networks and serve a lot of traffic.

The IPFS stack is an abstraction built on top of IPLD and libp2p.

Hello World

We have a reference implementation in Go (go-ipfs) and a constantly improving one in JavaScript (js-ipfs). There is also a long list of API clients for other languages.

Thanks to the JS implementation, using IPFS in web development is extremely easy. The following code snippet…

  • Starts an IPFS node
  • Adds some data to IPFS
  • Obtains the Content IDentifier (CID) for it
  • Reads that data back from IPFS using the CID

<script src=""></script>

Open Console (Ctrl+Shift+K)

<script>
const ipfs = new Ipfs()
const data = 'Hello from IPFS, <YOUR NAME HERE>!'

// Once the ipfs node is ready
ipfs.once('ready', async () => {
  console.log('IPFS node is ready! Current version: ' + (await ipfs.version()).version)

  // convert your data to a Buffer and add it to IPFS
  console.log('Data to be published: ' + data)
  const files = await ipfs.files.add(ipfs.types.Buffer.from(data))

  // 'hash', known as CID, is a string uniquely addressing the data
  // and can be used to get it again. 'files' is an array because
  // 'add' supports multiple additions, but we only added one entry
  const cid = files[0].hash
  console.log('Published under CID: ' + cid)

  // read data back from IPFS: CID is the only identifier you need!
  const dataFromIpfs = await
  console.log('Read back from IPFS: ' + String(dataFromIpfs))

  // Compatibility layer: HTTP gateway
  console.log('Bonus: open at one of public HTTP gateways: ' + cid)
})
</script>

That’s it!

Before diving deeper, let’s answer key questions:

Who else can access it?

Everyone with the CID can access it. Sensitive files should be encrypted before publishing.

How long will this content exist? Under what circumstances will it go away? How does one remove it?

The permanence of content-addressed data in IPFS is intrinsically bound to the active participation of peers interested in providing it to others. It is impossible to remove data from other peers but if no peer is keeping it alive, it will be “forgotten” by the swarm.

The public HTTP gateway will keep the data available for a few hours. If you want to ensure long-term availability, make sure to pin important data at nodes you control. Try IPFS Cluster: a stand-alone application and a CLI client to allocate, replicate, and track pins across a cluster of IPFS daemons.

Developer Quick Start

You can experiment with js-ipfs to make simple browser apps. If you want to run an IPFS server you can install go-ipfs, or run a cluster, as we mentioned above.

There is a growing list of examples, and make sure to see the bi-directional file exchange demo built with js-ipfs.

You can add IPFS to the browser by installing the IPFS Companion extension for Firefox.

Learn More

Learn about IPFS concepts by visiting our documentation website at

Readers can participate by improving documentation, visiting, developing distributed web apps and sites with IPFS, and exploring and contributing to our git repos and various things built by the community.

A great place to ask questions is our friendly community forum:
We also have an IRC channel, #ipfs on Freenode (or on Matrix). Join us!

The post Dweb: Building Cooperation and Trust into the Web with IPFS appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds
