Techie Feeds

Harnessing the power of identity management (IDaaS) in the cloud

Malwarebytes - Tue, 02/18/2020 - 17:25

Sometimes, consumers have it easy.

Take, for example, when they accidentally lock themselves out of their personal email. Their solution? Reset the password. With one click, they’re able to replace their old, complicated password with a new, more memorable one.

Self-service password reset is awesome like this. For users on a business network, it’s not so simple. That is, unless they’re using identity-as-a-service (IDaaS).

What is IDaaS?

IDaaS—pronounced “ay-das”—stands for identity-as-a-service. Essentially, it is identity and access management (IAM)—pronounced “I-am”—deployed from the cloud.

Organizations use IAM technology to make sure their employees, customers, contractors, and partners are who they say they are. Once identity is confirmed via certain methods of authentication, the IDaaS system provides access rights to resources and systems based on permissions granted. And because it’s deployed through the cloud, users can request access securely from wherever they are, on whatever device they’re using.
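
The authenticate-then-authorize flow described above can be sketched in a few lines. This is an illustrative role-based permission check, not any vendor’s API; all role and permission names here are hypothetical.

```python
# Minimal sketch of an IAM authorization step: once a user's identity is
# confirmed, access to a resource is granted only if one of the user's
# roles carries the required permission. Names are hypothetical.

ROLE_PERMISSIONS = {
    "employee":   {"read:intranet"},
    "contractor": {"read:intranet"},
    "admin":      {"read:intranet", "write:hr-records", "manage:users"},
}

def is_authorized(user_roles, required_permission):
    """Return True if any of the user's roles grants the permission."""
    return any(
        required_permission in ROLE_PERMISSIONS.get(role, set())
        for role in user_roles
    )

print(is_authorized(["employee"], "read:intranet"))  # True
print(is_authorized(["employee"], "manage:users"))   # False
```

Real IAM systems layer groups, policies, and audit logging on top of this basic decision, but the grant-or-deny check at the core looks much the same.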

Giving its own users self-service access to portals is just one of the ways an IDaaS system can provide support for businesses. In fact, the need to better engage with customers while securing their data and conforming to established standards has become the main driving force behind the move to IDaaS.

IDaaS vs. traditional IAM

While traditional, on-premise identity management systems offer levels of self-serve access for employees at the office, their benefits are limited in comparison to cloud-based options. This is because IAMs are:

  • Expensive to create and maintain. Costs are higher if the organization supports global users, due to the complexity of the infrastructure, and IAMs can become unsustainable as the business grows: both cost and infrastructure complexity increase, making them harder to support.
  • Inefficiently managed, security-wise. IAMs that must be placed on legacy systems, for example, put organizations at risk because patching these systems is a challenge, leaving the door open for vulnerabilities at access points.
  • Time-consuming. Upgrading IAM hardware takes time, and sometimes the upgrade never happens if it means long downtimes and lost productivity. IT teams also face significant, patience-testing chores, from password resets to user provisioning.
  • Not future-proofed. Although some traditional IAMs can provide limited cloud support, they’re essentially designed to handle on-premise resources. Since IAMs inherently lack support for modern-day tech (mobile devices, IoT) and business disruptors (Big Data, digital transformation), they don’t address what current users need and want.
Benefits of IDaaS

Businesses can benefit from IDaaS in many ways. For the sake of brevity, keep in mind these three main drivers for adopting IDaaS: new capabilities, speed of implementation, and innovation. Not only do these make businesses more attractive to potential customers, they also help retain current ones.

New capabilities, such as single sign-on (SSO), give business customers the ease and convenience of accessing multiple resources with a single login. Logging in once creates a token, which the IDaaS system then presents to other applications on behalf of the customer, so they don’t need to keep logging in.

SSO also removes the burden of remembering multiple login credentials, which usually drives users to create memorable but easily breakable passwords. Needless to say, SSO—and the protocols that underpin it, such as Security Assertion Markup Language (SAML), OAuth (pronounced “oh-auth”), and OpenID Connect (OIDC)—can greatly enhance an organization’s security.
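
The log-in-once token flow above can be illustrated with a toy signed token: the identity provider signs the token at login, and each application verifies the signature instead of prompting for another login. This is a deliberately simplified sketch—real SSO deployments use SAML assertions or OIDC ID tokens, and key handling here is naive by design.

```python
# Toy SSO token: sign once at login, verify everywhere. Illustrative only;
# not a real SAML/OIDC implementation.
import base64, hashlib, hmac, json, time

SHARED_KEY = b"demo-key"  # real systems use per-app keys or public-key signatures

def issue_token(subject, ttl=3600):
    """Identity provider: sign a claims payload once, at login."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": subject, "exp": int(time.time()) + ttl}).encode()
    ).decode()
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token):
    """Each application: check signature and expiry instead of re-authenticating."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return None  # expired
    return claims["sub"]

token = issue_token("alice@example.com")
print(verify_token(token))  # prints alice@example.com
```

The point the sketch makes: the user authenticates once, and everything after that is signature verification, which is why a compromised signing key or authentication server affects every connected application at once.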

Since IDaaS is cloud-based, implementing it in your organization is much quicker. For one thing, hardware provisioning is already handled by the IDaaS provider. What usually takes a couple of years to realize can take only a few months—sometimes even a few weeks.

Organizations that are still unsure whether they want to fully embrace IDaaS but are curious to try it out can use the solution on a subset of their applications. Should they change their minds, they can pull back just as easily as they pushed on.

And finally, IDaaS removes the barriers that inhibit organizations from moving forward on innovation. Understaffed IT teams, the mounting costs of IT infrastructure that only gets more complicated over time, and insufficient support for modern technologies are just a few of the problems that hold businesses back from innovating in their workforce processes, product offerings, and marketing and sales techniques.

Business leaders need to get themselves “unstuck” from these problems by outsourcing their needs to a trusted provider. Not only will doing so be lighter on their pockets, but they can also customize IDaaS’s inherent capabilities to fit their business needs and improve their customer engagement. It’s a win-win for all.

Note, however, that a pure IDaaS implementation may not be for every organization. Some organizations are simply not ready for it. In fact, the majority of enterprises today use hybrid environments—a combination of on-premise and cloud-based applications. This is because some organizations believe that there are some resources best kept on-premise. And when it comes to IDaaS adoption, utilizing the best of both worlds is increasingly becoming the norm.

My organization is small. Is IDaaS still necessary?

Absolutely. Small- and medium-sized businesses experience many of the same IAM issues enterprise organizations face. Every employee maintains a set of credentials they use to access several business applications to do their jobs. An SSO feature in IDaaS will significantly cut back on the number of login instances they have to face when switching from one app to another.

It’s a good question to ask whether your business needs IDaaS. But perhaps the better—or bigger—question is whether your business complies with established security and privacy standards. Thankfully, IDaaS will help with that issue as well. The caveat is that organizations, regardless of size, must evaluate potential IDaaS providers based on their maturity and their capability to offer a solid solution. No two IDaaS offerings are the same.

Mike Wessler and Sean Brown, authors of the e-book “Cloud Identity for Dummies”, propose some questions to consider when deciding:

  • Are they a new company on a shoe-string budget catering to lower-end clients with cost as the primary driver?
  • Are they relatively new in either the cloud or IAM field where they gained those capabilities via recent acquisitions and are simply rebranding someone else’s products and services?
  • Do they have legitimate experience and expertise in cloud and IAM services where offering IDaaS is a logical progression?
What are the possible security problems?

Despite the good that IDaaS can bring to your organization, it is no cure-all. In fact, security researchers have already raised concerns about some of its key capabilities. Take SSO, our earlier example: it has been argued that SSO becomes a “single point of failure” should the authentication server fail, or a “single breach point” waiting to be compromised.

The cybersecurity sector has a dizzyingly long list of cases in which organizations were breached due to compromised credentials. The compromise of Australia’s Early Warning Network a year ago, for example, was caused by the misuse of stolen credentials. And there are many ways credentials can be leaked or stolen. Organizations can thwart this by requiring multi-factor authentication (MFA).

The bottom line is this: IDaaS or no, businesses still have to adopt and practice safe computing habits to minimize their attack surface.

Stay safe!

The post Harnessing the power of identity management (IDaaS) in the cloud appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (February 10 – 16)

Malwarebytes - Tue, 02/18/2020 - 16:40

Last week on Malwarebytes Labs, we explained how to battle online coronavirus scams with facts, discussed the persistent re-infection techniques of Android/Trojan.xHelper and how to remove it, provided cyber tips for safe online dating, and showed how Hollywood teaches us misleading cybersecurity lessons.

We also released the 2020 State of Malware Report describing the threat landscape of the year in detail, including top threats for Mac, Windows, Android, and the web, as well as the state of data privacy in commerce and legislation.

Other cybersecurity news
  • Medical transportation vendor GridWorks experienced a burglary in which a laptop containing the personally identifiable information (PII) of 654,362 members was stolen. (Source: Security Boulevard)
  • Four members of China’s military were charged with hacking into Equifax and stealing trade secrets and the personal data of about 145 million Americans in 2017. (Source: The New York Times)
  • Critical vulnerabilities addressed in the Accusoft ImageGear library could be exploited by remote attackers to execute code on a victim machine. (Source: Security Week)
  • Dell has copped to a flaw in the pre-installed program SupportAssist that allows local hackers to load malicious files with admin privileges. (Source: TheRegister)
  • The owner of the Helix Bitcoin Mixer was charged with laundering over $310 million in Bitcoin cryptocurrency while operating the dark web mixer between 2014 and 2017. (Source: BleepingComputer)
  • Emotet has found a new attack vector: using already infected devices to identify new potential victims that are connected to nearby Wi-Fi networks. (Source: The Hacker News)
  • A digitally signed Gigabyte driver has been discovered to be in use by Ransom.RobbinHood to fully encrypt the files on a computer. (Source: Guru 3D)
  • Chief Information Security Officers (CISOs, or CSOs) across the industry are reporting high levels of stress resulting in an average tenure of only 26 months. (Source: ZDNet)
  • The Czech data protection authority announced an investigation into antivirus company Avast for harvesting the browsing history of over 100 million users. (Source:
  • Hackers are demanding nude photos to unlock files in a new ransomware scheme targeting women. (Source: FastCompany)

Stay safe, everyone!

The post A week in security (February 10 – 16) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Misleading cybersecurity lessons from pop culture: how Hollywood teaches to hack

Malwarebytes - Fri, 02/14/2020 - 17:32

In pop culture, cybercrimes are often portrayed as mysterious and unrealistic. Hackers are enigmatic and have extraordinary tech abilities. They can discover top secrets in a short time and type at breakneck speed to hack into a database.

In real life, though, hacking is not that straightforward. Hackers may have technical capabilities and high intelligence, but they are otherwise normal human beings. It takes a lot of time and research to come up with foolproof strategies to break into an organization’s secret files.

In the last few decades, hacking and cybersecurity have become important topics of discussion, and pop culture has capitalized on this wave of interest. Many movies and TV shows now find ways to weave cybercrime into their storylines. At times, the depiction is realistic and informative; most of the time, it’s plain misleading and ludicrous.

In this article, we take a look at some pop culture hacking scenes from TV and movies and the cybersecurity lessons, if any, we can learn from them.

Hackers are not always basement-dwelling nerds

In Hollywood movies, male hackers are predominantly depicted as either reclusive conspiracy theorists or super-smart ex-intelligence officers. Picture Dennis Nedry from Jurassic Park or Martin Bishop in Sneakers. Their female counterparts—few and far between—tend toward harsh, ass-kicking, boyish types, like Kate Libby in Hackers or Trinity in The Matrix.

The reality is, while we may be able to create criminal profiles for threat actors or even define skill sets and personality types that are attracted to hacking, there is no single stereotype that holds.

Hackers could be bubbly, social, feminine, sporty, narcissistic—the life of the party. They could also be quiet, introverted, artistic, compassionate, or deeply sensitive. Simply put, pop culture has a habit of stereotyping what it doesn’t understand, and hacking is still a widely misunderstood pastime/profession.

But there is one truth that unites all hacker types: Hacking requires strategic, conceptual thinking, so intelligence is required, as is practice. The best actual hackers spend years honing their craft, testing and testing code, working with mentors and peers, sometimes going to school or, yes, the military, for skills training.

However, cybercrime isn’t dominated by super-skilled hackers. Most criminals have softer code-writing skills, purchasing malware-as-a-service kits on the dark web or using social engineering techniques to scam users out of money. Meanwhile, there are hackers who use their skills for good, called white hats, often working as security researchers or in IT for businesses, schools, healthcare organizations, or the government.

Pop culture would benefit from seeing these more diverse representations of hackers, cybercriminals, and security professionals on TV and in the movies.

Hacking takes research and patience

Movies and TV shows are meant to be exciting and dramatic. As with most careers that aren’t well understood by those outside the industry—think theoretical physicist or brain surgeon—these professional portrayals are made out to be much more action-packed in pop culture than they are in the real world.

Real hackers and cybersecurity experts have to rely on patience and persistence gained through training and experience to strike gold—much more so than a magical solution that can resolve a plot point in five minutes or less.

3…2…1…”I’m in!”

Research is one of the most important parts of hacking, engineering, or reverse-engineering, along with making mistakes. Real-world cybersecurity experts understand that failures are just as important as successes. Why?

Part of cybersecurity involves testing currently active systems to find flaws and improve what needs improving. That can take months or years of hard work, not just a few minutes of elaborate schemes and computer wizardry. And even when criminals building sophisticated software discover their cover is blown, they go back to the drawing board to work out a better plan for infiltrating the host computer.

You can’t save a system by smashing buttons

When NCIS’ Abby is hacked, a million pop-ups fill her screen—Hollywood’s favorite “You’ve been hacked!” move. Thankfully, her friend heroically steps in, furiously typing on the keyboard until the problem is solved. Of course, that’s not quite how the scene would play out in real life.

When a computer is hacked, you cannot save it by pressing buttons aimlessly. You must, at minimum, disconnect or shut down the computer and restart it, or boot from a USB drive or rescue CD. You should also run a scan with an anti-malware program that can clean up infected devices. If you’re part of a business network, the process is more complicated: Alerting your company’s IT team is the best course of action if you suspect an infection. Button mashing will only make your fingers sore.

Hacking is not always flashy

Hollywood loves to make eye candy out of a hacking scene, often displaying colorful, polished graphical user interfaces (GUIs) or immersive 3D virtual reality experiences—neither of which has much to do with actual hacking. The infamous hacking scene in Swordfish, for example, shows Stanley completing some sort of digital Rubik’s Cube to “assemble crypto algorithm.” Whatever that means.

And there’s also this classic from Jurassic Park, where Lex (played by Ariana Richards) gains control of the automatic doors by “hacking” into the Unix security system in a matter of seconds.

Setting aside that saying, “It’s a Unix system, I know this” is like saying, “It’s a Windows system, I know this,” knowing Unix (or Windows) wouldn’t automatically bestow on someone the power to override security protocols—especially on custom GUIs reminiscent of a Minecraft beta.

Pop culture loves to spoon-feed its audiences cheesy 3D visuals of viruses and authentication attempts. But these flashy visual interfaces, especially in 3D, are not accurate at all. What do your file systems look like on your home or work computer? How many of them are in 3D? How many times do you see a giant “ACCESS DENIED” painted across your whole screen when you enter an incorrect password or when your operating system can’t find a file?

A more accurate depiction would show command-line code displayed on a console or terminal, simply because that would be the most efficient way for hackers to obtain data quickly.

However, as much as pop culture has misrepresented hacking to the general public, it has also taught us various real-life lessons about cybersecurity. Here are a few examples:

Do not download and install untrusted applications

In Ex Machina, we learned that the CEO of Blue Book, Nathan Bateman, fast-tracked the emotional growth of Ava by taking data from smartphone cameras across the world. This scenario is currently playing out in real life, as there are applications that can be downloaded from third-party platforms and even from Google Play and Apple App Store that can spy on users and steal their personal information.

This teaches us to be careful when downloading applications online. Verify each app’s capabilities and permission requests before installing it on your devices. If a music app asks for access to your GPS location, for example, ask yourself why such information would be necessary for the app to function. If it seems like an unnecessary amount of access, it’s better to skip the download.

Small distractions could be a diversion

Sometimes cybersecurity lessons can be learned from movie scenes that don’t involve computers at all. For example, in Star Wars: The Last Jedi, Poe creates a diversion, distracting the general and the First Order armada before bombing the Dreadnought. In fact, military strategy is often well intertwined with that of cyberwarfare.

Small distractions were used to great effect in the 2015 distributed denial of service (DDoS) attacks on ProtonMail, for example. A small ransom note was dropped as a precursor to a 15-minute test DDoS attack, which diverted ProtonMail’s IT team to customer service assistance. The threat actors then followed up with the true mission, jamming ProtonMail’s servers with a 50 Gigabit-per-second wave of junk data that took down the datacenter housing its servers while simultaneously attacking several ISPs upstream, causing serious damage that took the company offline for days.

The lesson you can take away from this is that a small disruption of services could just be the blip on the radar meant to pull attention away from the storm. Make sure you stay on alert, especially if you notice this at work, where cybercriminals are focusing more of their efforts for larger returns on their investments.

Always use two-step verification

Always use two-factor authentication (2FA) to protect your online accounts—that cannot be overemphasized. In Mr. Robot, 2FA was used to guard access to the company’s data and keep hackers out. Many IoT devices, password managers, and other applications have recognized the power of 2FA, or multi-factor authentication, in shielding user and proprietary data from hackers who exploit bad password habits.
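
The rotating six-digit codes that authenticator apps produce follow the TOTP scheme standardized in RFC 6238, which can be sketched with the Python standard library alone. The secret below is the RFC’s published test key, not a real credential.

```python
# Sketch of TOTP (RFC 6238): an HMAC of the current 30-second interval,
# dynamically truncated to a short numeric code.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at t=59s yields the 8-digit code 94287082
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
```

Because the code is derived from a shared secret plus the current time, both the server and the app compute the same value independently—nothing secret crosses the network at login beyond the short-lived code itself.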

Hollywood tends to misrepresent what hacking and cybersecurity are to the general public. But it has also taught us valuable lessons about how to protect ourselves, our devices, and our information on the Internet. We hope that, as cybersecurity awareness increases, the misrepresentations are reduced to the barest minimum. That way, TV and movies can do for cybersecurity what they do best: educate, inform, and entertain the public about its importance to our daily lives.

The post Misleading cybersecurity lessons from pop culture: how Hollywood teaches to hack appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Cyber tips for safe online dating: How to avoid privacy gaffes, exploits, and scams

Malwarebytes - Thu, 02/13/2020 - 16:36

Research and reporting on this article were conducted by Labs writers Chris Boyd and David Ruiz.

Dating apps have been mainstream for a long time now, with nearly every possible dating scene covered—casual, long-term, gay, poly, of the Jewish faith, interested only in farmers—whatever you’re looking for. Sadly, wherever you find people trying to go about their business, you’ll also find others quite happy to intrude and cause problems.

Multiple pieces of research regularly highlight potential privacy flaws or security issues with dating apps galore. All this before we even get to the human aspect of the problem—no wonder online dating is exhausting.

Breaking into online dating circles

Dating apps are an unfortunately juicy target for cybercriminals, who will use any vulnerability—from software to psychological—to achieve their goal. It’s important to remember: Dating apps store more than just basic personally identifiable information (PII). They include sensitive data and images people might not be comfortable sharing elsewhere, which gives cybercriminals added leverage for blackmail, sextortion, and other forms of online abuse.

To start, the dating apps and sites themselves may not be safe from prying hackers looking to slurp user details. There’s the infamous 2015 compromise of cheating site Ashley Madison, or last year’s badly timed announcement from dating app Coffee Meets Bagel, which informed users about a data compromise on Valentine’s Day.

How about location-based dating apps, like Tinder? In 2019, the location-based app Jack’d allowed users to upload private photos and videos but didn’t secure them on the backend, leaving users’ private images exposed to the public Internet. Now combine that with the ability to pinpoint a user’s exact location or track them on social media, and the end result is rather frightening.

Finally, online dating can wreak havoc in the workplace, too. If your organization supports a bring your own device (BYOD) policy, security vulnerabilities in dating apps could cause additional risk to your own reputation, as well as the company’s networks and infrastructure. (Though to be fair, you could argue “additional risk” is part and parcel of any BYOD policy.) A 2017 study by Kaspersky found that mobile dating apps were susceptible to man-in-the-middle attacks, putting any data or communications with the enterprise conducted via mobile device in danger.

Hints and tips for safe online dating

There are too many dating apps and websites out there to be able to give granular advice on privacy settings and security precautions for each and every one. However, a lot of security advice in this area is about common sense precaution, just as you would while dating in the real world. Many of these tips have been around forever; some require a little cybersecurity education, and a few rely on newer forms of technology to ensure things go smoothly.

Time to go hunting

Deploy some Google-Fu: One of the very first things you should do is a search related to your prospective date. There may well be multiple alarm bell–ringing search results for a troublesome dating site member all under the same username, for example. Or you could stumble upon multiple profiles begging for money on different sites, all using the same profile pic as your supposed date.

Checking photos and profile pics is a good idea in general. Use Google image search, TinEye, and other similar services to see if they’ve been swiped from Shutterstock or elsewhere. It’s possible lazy scammers may start using deepfake images, which will be even harder to spot—unless you read our blog and see some of the ways you can identify a fake.

Stay in on your night out

Don’t go outside the theoretical safety boundary of the app you’re using. This is one of the most common scam signs for any form of online shenanigans. Mysterious free video game platform gifts sent in your general direction? Surprise! You must receive the gift via dubious email link instead of the gaming platform you happen to be using. Making a purchase from a website you just discovered? Suddenly, you need to make a wire transfer instead of paying online—and so on.

Many dating apps restrict how much profile information you can reveal—that’s a good thing. However, that layer of privacy protection won’t work as well as it should if you’re convinced by a scammer to pass along lots of PII through other means. If the person on the other end of the communique is particularly insistent on this, that’s a definite red flag—for malware and for dating.

Hooking up with social media

A well-worn point, but it bears repeating: Sharing dating profiles with social media platforms may well open your data up to further scrutiny, thievery, and general tomfoolery. Your dating profile may be nicely locked down, but that approach again loses value if tied to public profiles containing a plethora of information on you, your friends, and your family. This just isn’t a risk worth taking.

Sharing is not always caring

Keeping your own dating data disconnected from social media platforms is just one step in protecting your sensitive information. Another step is awareness. When using dating apps, you should spend some time looking at their privacy policies and settings, as well as looking up news stories on them online, so that you know where your data is going, who is sending it around, and why.

For example, last month, the Norwegian Consumer Council revealed how the Android apps for Grindr, Tinder, and OkCupid sent sensitive personal information—including sexual preferences and GPS locations—to advertising companies, potentially breaching user trust.

The nonprofit’s report shone light on the digital advertising industry’s efforts to collect user information and channel it through a complex machine to find out who users are, where they live, what they like, who they support in elections, and even who they love. By analyzing 10 popular apps, the report’s researchers found at least 135 third parties that received user information.

Users’ GPS coordinates were shared with third parties by the dating apps Grindr and OkCupid. GPS “position” data was shared with third parties by the dating app Tinder, which also shared users’ expressed interest in gender. OkCupid also sent user information about “sexuality, drug use, political views, and much more,” the report said.

As to who received the information? The answers are less familiar. While Google and Facebook showed up in the report—both receiving Advertiser IDs—the majority of user data recipients were lesser-known companies, including AppLovin, AdColony, BuckSense, MoPub, and Braze.

There’s no cure-all to this type of data sharing, but you should know that privacy advocates in California are on it, having already asked the state’s Attorney General to investigate whether the data-sharing practices violate the California Consumer Privacy Act, which just came into effect at the start of this year.

General OPSEC tips

Operational security, or OPSEC for short, is pretty important as far as online dating is concerned. Some of the basic cybersecurity hygiene steps that we encourage our users to perform in their day-to-day business can help thwart unwanted digital access or steer you clear of physically dangerous situations. Here are a few examples:

Passwords, passwords, passwords

We all know password reuse is bad—across dating sites, apps, or any accounts—but depending on personal circumstances, it may also be bad to recycle usernames. If you don’t want people you’d rather avoid in the future tracking you down on social media, remember to use random names unrelated to your more general online activities.

While we’re on the subject, there are several other best practices for password security that we recommend, such as creating long passphrases that are unrelated to your name, birthday, or pets. If you can’t remember 85,000 different passwords, consider storing them in a password manager and using a single master password to control them all. If that seems like putting too much power in the hands of one password, we recommend using two- or multi-factor authentication.
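
The long-passphrase advice above can be sketched with Python’s `secrets` module, which picks words with a cryptographically secure random number generator. The tiny word list here is illustrative only; real generators draw from lists of thousands of words, such as the EFF diceware lists.

```python
# Sketch of a random-passphrase generator: several unrelated words joined
# together. secrets.choice() is suitable for security-sensitive randomness,
# unlike random.choice(). The word list is a small illustrative sample.
import secrets

WORDS = ["orbit", "velvet", "cactus", "lantern", "puzzle", "meadow",
         "glacier", "trumpet", "saffron", "quartz", "nimbus", "harbor"]

def passphrase(n_words=5, sep="-"):
    """Join n randomly chosen words into a passphrase."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "meadow-quartz-orbit-lantern-nimbus"
```

With a realistic list of a few thousand words, five random words already yield far more entropy than a typical eight-character password, while staying memorable.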

The point is: Don’t reuse passwords on dating sites. There may be a plethora of intimate messages sent on these platforms, more so than on most other services you use. It makes sense to lock things down as much as possible.

Stranger danger

Meeting a date in person for the first time? Tell other people where you’re going on your date beforehand. It’s a basic, but invaluable safety step—especially if you have no way of vetting your date outside of the dating app constraints. Let your insider know the name/profile name/and anything else relevant to your date that might help them track you later, if necessary.

Also, try to obscure your literal latitude and longitude or home address from a virtual stranger before you get to know and trust them. Dating apps have taken those spammy “hot singles in your area” ads to their logical end point. Hot singles in your area really would be beneficial where dating is concerned, so why shouldn’t apps allow you to search on factors related to distance? However, on the flip side, this does rather tip your hand where revealing your general location is concerned.

So while your date will have some sort of idea as to where you’re based, you’ll want to have your first meeting(s) somewhere other than “the bar at the end of my street.” A little travel goes a long way to blocking some crucial details. Oh, and consider using public transport or your own vehicle to get to and from the date.

Ring, ring

If possible, don’t hand over your main phone number—especially when such a thing may be tied to SMS 2FA, which can lead to social engineering attacks on your mobile provider. If your mobile is your only phone, consider using a disposable phone specifically for dating that isn’t tied to anything important.

If that’s out of the question, you could try one of the many popular online services which provide their own number/voicemail.

Play it safe

After reading all of this, you may think that between potential security vulnerabilities, privacy exposures, and awful scammers, it’s not worth the hassle to bother with online dating. That’s not our intention.

As long as you follow some of the advice listed above and keep in mind that dating apps can be compromised just like any other software, you should have a safe online dating experience. Just remember that anything you communicate online has the potential to drift offline—after all, that’s the whole goal of online dating in the first place.

Good luck, and stay safe out there!

The post Cyber tips for safe online dating: How to avoid privacy gaffes, exploits, and scams appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Android Trojan xHelper uses persistent re-infection tactics: here’s how to remove

Malwarebytes - Wed, 02/12/2020 - 18:15

We first stumbled upon the nasty Android Trojan xHelper, a stealthy malware dropper, in May 2019. By mid-summer 2019, xHelper was topping our detection charts—so we wrote an article about it. After the blog, we thought the case was closed on xHelper. Then a tech savvy user reached out to us in early January 2020 on the Malwarebytes support forum:

“I have a phone that is infected with the xhelper virus. This tenacious pain just keeps coming back.”

“I’m fairly technically inclined so I’m comfortable with common prompt or anything else I may need to do to make this thing go away so the phone is actually usable!”

forum user misspaperwait, Amelia

Indeed, she was infected with xHelper. Furthermore, Malwarebytes for Android had already successfully removed two variants of xHelper and a Trojan agent from her mobile device. The problem was, it kept coming back within an hour of removal. xHelper was re-infecting over and over again.

Photo provided by Amelia

If it wasn’t for the expertise and persistence of forum patron Amelia, we couldn’t have figured this out. She has graciously allowed us to share her journey.

All the fails

Before we share the culprit behind this xHelper re-infection, I’d like to highlight the tactics we used to investigate the situation, including the many dead ends we hit prior to figuring out the end game. By showing the roadblocks we encountered, we demonstrate the thought process and complexity behind removing malware so that others may use it as a guide. 

Clean slate

First off, Amelia was clever enough to do a factory reset before reaching out to us. Unfortunately, it didn’t resolve the issue, though it did give us a clean slate to work with. No apps other than those that came with the phone were installed besides Malwarebytes for Android; thus, we could rule out an infection by prior installs (or so we thought).

We also ruled out any of the malware having device admin rights, which would have prevented our ability to uninstall malicious apps. In addition, we cleared all history and cache on Amelia’s browsers, in case of a browser-based threat, such as a drive-by download, causing the re-infection.

The usual suspect: pre-installed malware

Since we had a clean mobile device and it was still getting re-infected, our first assumption was that pre-installed malware was the issue. This assumption was fueled by the fact that the mobile device was from a lesser-known manufacturer, which is often the case with pre-installed malware. So Amelia tested this theory by running Android Debug Bridge (adb) commands on her mobile device.

With the adb command line installed and the mobile device plugged into a PC, we used the workaround of uninstalling system apps for the current user. This method renders system apps useless even though they still technically reside on the device.
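For reference, the uninstall-for-current-user workaround can be sketched with a couple of adb commands. The package name below is a placeholder for illustration, not one of the apps from this case:

```shell
# Uninstall a system app for the current user (user 0) only.
# The app's files stay on the system partition, but it no longer runs
# for this user. PKG is a hypothetical package name.
PKG="com.example.system.updater"

if command -v adb >/dev/null 2>&1; then
  # -k keeps the app's data and cache, so the change is reversible
  adb shell pm uninstall -k --user 0 "$PKG"
  # To restore the app later:
  # adb shell cmd package install-existing "$PKG"
else
  echo "adb not found: install Android platform-tools and enable USB debugging"
fi
```

Because the APK remains on the device, this is safer than true removal: a mistake can be undone with `install-existing`.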

Starting with the most suspicious and working down to the least, we systematically uninstalled suspect system apps, including the mobile device’s system updater and an audio app with hits on VirusTotal, a potential indicator of maliciousness. Amelia was even able to grab various apps we didn’t have in our Mobile Intelligence System to rule everything out. After all this, xHelper’s persistence would not end.

Photo provided by Amelia of xHelper running on mobile device

Triggered: Google PLAY

We then noticed something strange: The source of installation for the malware stated it was coming from Google PLAY. This was unusual because none of the malicious apps downloading onto Amelia’s phone were on Google PLAY. Since we were running out of ideas, we disabled Google PLAY. As a result, the re-infections stopped!

We have seen important pre-installed system apps infected with malware in the past. But Google PLAY itself!? After further analysis, we determined that, no, Google PLAY was not infected with malware. However, something within Google PLAY was triggering the re-infection—perhaps something that was sitting in storage. Furthermore, that something could also be using Google PLAY as a smokescreen, falsifying it as the source of malware installation when in reality, it was coming from someplace else.

In the hopes that our theory held true, we asked Amelia to look for suspicious files and/or directories on her mobile device using a searchable file explorer, namely, anything that started with com.mufc., the malicious package names of xHelper. And then…eureka!

Photos provided by Amelia

The culprit

Hidden within a directory named com.mufc.umbtts was yet another Android application package (APK). The APK in question was a Trojan dropper we promptly named Android/Trojan.Dropper.xHelper.VRW. It is responsible for dropping one variant of xHelper, which subsequently drops more malware within seconds.

Here’s the confusing part: Nowhere on the device does it appear that Trojan.Dropper.xHelper.VRW is installed. It is our belief that it installed, ran, and uninstalled again within seconds to evade detection—all by something triggered from Google PLAY.  The “how” behind this is still unknown.

It’s important to realize that unlike apps, directories and files remain on the Android mobile device even after a factory reset. Therefore, until the directories and files are removed, the device will keep getting infected.

How to remove xHelper re-infections

If you are experiencing re-infections of xHelper, here’s how to remove it:

  • We strongly recommend installing Malwarebytes for Android (free).
  • Install a file manager from Google PLAY that has the capability to search files and directories.
    • Amelia used File Manager by ASTRO.
  • Disable Google PLAY temporarily to stop re-infection.
    • Go to Settings > Apps > Google Play Store
    • Press Disable button
  • Run a scan in Malwarebytes for Android to remove xHelper and other malware.
    • Manually uninstalling can be difficult, but the names to look for in Apps info are fireway, xhelper, and Settings (only if two settings apps are displayed).
  • Open the file manager and search for anything in storage starting with com.mufc.
  • If found, make a note of the last modified date.
    • Pro tip: Sort by date in file manager
    • In File Manager by ASTRO, you can sort by date under View Settings
  • Delete anything starting with com.mufc. and anything with the same date (except core directories like Download).
  • Re-enable Google PLAY
    • Go to Settings > Apps > Google Play Store
    • Press Enable button
  • If the infection still persists, reach out to us via Malwarebytes Support.
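If the phone is reachable over adb, the file-manager search in the steps above can also be done from a command line. This is only a rough sketch, using the com.mufc. prefix named in this article; review matches carefully before deleting anything:

```shell
# Search shared storage for leftover xHelper directories (com.mufc.*).
# Review the results before deleting; core directories such as Download
# must be left alone.
PREFIX="com.mufc."

if command -v adb >/dev/null 2>&1; then
  adb shell "find /sdcard -maxdepth 2 -name '${PREFIX}*'"
  # After confirming a hit, remove it explicitly, for example:
  # adb shell rm -r /sdcard/com.mufc.umbtts
else
  echo "adb not found: connect the device with USB debugging enabled"
fi
```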
Mobile malware hits a new level

This is by far the nastiest infection I have encountered as a mobile malware researcher. Usually a factory reset, the last option, resolves even the worst infection. I cannot recall a time that an infection persisted after a factory reset unless the device came with pre-installed malware. This fact inadvertently sent me down the wrong path. Luckily, Amelia, who was as persistent as xHelper itself in finding an answer, guided us to our conclusion.

This, however, marks a new era in mobile malware. The ability to re-infect using a hidden directory containing an APK that can evade detection is both scary and frustrating. We will continue analyzing this malware behind the scenes. In the meantime, we hope this at least ends the chapter of this particular variant of xHelper. 

Stay safe out there!

The post Android Trojan xHelper uses persistent re-infection tactics: here’s how to remove appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Malwarebytes Labs releases 2020 State of Malware Report

Malwarebytes - Tue, 02/11/2020 - 08:01

Today is Safer Internet Day—and what better way to celebrate/pay homage than to immerse yourself in research on the latest in malware, exploits, PUPs, web threats, and data privacy? It so happens we’ve got just the right content to kick-start the party because today we released the results of our annual study on the state of malware—the 2020 State of Malware Report—and as usual, it’s a doozy.

From an increase in enterprise-focused threats to the diversification of sophisticated hacking and stealth techniques, the 2019 threat landscape was shaped by a cybercrime industry that aimed to show it’s all grown up and coming after organizations with increasing vengeance.

The 2020 State of Malware Report features data sets collected from product telemetry, honey pots, intelligence, and other research conducted by Malwarebytes threat analysts and reporters to investigate the top threats delivered by cybercriminals to both consumers and businesses in 2019.

Our analysis includes a look at threats to Mac and Windows PCs, Android and iOS, as well as browser-based attacks. In addition, we examined consumer and business detections on threats to specific regions and industries across the globe. Finally, we took a look at the state of data privacy in 2019, including state and federal legislation, as well as the privacy failures of some big tech companies in juxtaposition against the forward-thinking policies of others.

Here’s a sample of what we found:

  • Mac threats increased exponentially in comparison to those against Windows PCs. While overall volume of Mac threats increased year-over-year by more than 400 percent, that number is somewhat impacted by a larger Malwarebytes for Mac userbase in 2019. However, when calculated in threats per endpoint, Macs still outpaced Windows by nearly 2:1.
  • The volume of global threats against business endpoints has increased by 13 percent year-over-year, with aggressive adware, Trojans, and HackTools leading the pack.
  • Organizations were once again hammered with Emotet and TrickBot, two Trojan-turned-botnets that surfaced in the top five threats for nearly every region of the globe, and in the top detections for the services, retail, and education industries. TrickBot detections in particular increased more than 50 percent over the previous year.
  • Net new ransomware activity is at an all-time high against businesses, with families such as Ryuk and Sodinokibi increasing by as much as 543 and 820 percent, respectively.

To learn more about the top threats of the year for Mac, Windows, Android, and the web, as well as the state of data privacy in commerce and legislation, check out the full 2020 State of Malware Report here.

The post Malwarebytes Labs releases 2020 State of Malware Report appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Battling online coronavirus scams with facts

Malwarebytes - Mon, 02/10/2020 - 16:56

Panic and confusion about the recent coronavirus outbreak spurred threat actors to launch several malware campaigns across the world, relying on a tried-and-true method to infect people’s machines: fear.

Cybercriminals targeted users in Japan with an Emotet campaign that included malicious Word documents that allegedly contained information about coronavirus prevention. Malware embedded into PDFs, MP4s, and Docx files circulated online, bearing titles that alluded to protection tips. Phishing emails that allegedly came from the US Centers for Disease Control and Prevention (CDC) were spotted, too. Malwarebytes also found a novel scam purporting to direct users to a donation page to help support government and medical research.

All of these threats rely on the same dangerous intersection of misinformation and panic—a classic and grotesque cybercrime tactic. A great defense to these is, quite simply, the truth.

At Malwarebytes, we understand that safeguarding you from cyberthreats goes beyond technological protection. It also means giving you the information you need to make smart, safe decisions. Because of this, we’re presenting verified resources and data about coronavirus that will hopefully steer users away from online threats. If you see a sketchy-looking email mentioning the virus (like the one we found below), don’t open it. Instead, come here. If you want to immediately see what these online scams look like, scroll below.

What is coronavirus?

According to the World Health Organization, the current coronavirus that has infected thousands of people across the world is a single variant of a broader family of viruses, also called “coronavirus.” This particular strain of coronavirus was first identified in the city of Wuhan in central China’s Hubei province. It has the title “2019-nCoV.” Though 2019-nCoV is from the same family of coronaviruses as SARS—which spread to 26 countries between 2002 and 2003—it is not the same virus.

As of February 7, coronavirus has spread to at least 25 countries, including Australia, Vietnam, the United States, the Philippines, Nepal, Sweden, the United Kingdom, India, and more. Mexico has no reported cases—the only country in North America to avoid the virus, it appears. Countries in South America, including Brazil, Colombia, Venezuela, and Chile, have not reported any confirmed cases of the virus, either. While the majority of infections are reported in China, with 31,211 confirmed cases, the highest count of any other country is Singapore, with 30 cases.

Full, daily reports on the virus’ spread can be found at the World Health Organization’s resource page here: Novel Coronavirus (2019-nCoV) situation reports. The situation reports also provide information about every country with confirmed coronavirus cases, and this Al Jazeera article compiles that information up to February 6.

According to a February 6 report in The Wall Street Journal that cites scientists and medical academics in China, the recent coronavirus likely started in bats.

According to the US Center for Disease Control, coronavirus symptoms include fever, cough, and shortness of breath.

How can I protect myself from coronavirus?

Because coronavirus spreads from human-to-human contact, the best protection methods involve good hygiene. According to the WHO, individuals should:

  • Wash your hands frequently with soap and water or use an alcohol-based hand rub if your hands are not visibly dirty.     
  • Maintain social distancing—keep at least 1 meter (3 feet) between yourself and other people, particularly those who are coughing or sneezing or who have a fever.
  • Avoid touching eyes, nose, and mouth.
  • If you have fever, cough, and difficulty breathing, seek medical care early. Tell your health care provider if you have travelled in an area in China where 2019-nCoV has been reported, or if you have been in close contact with someone who has travelled from China and has respiratory symptoms.
  • If you have mild respiratory symptoms and no travel history to or within China, carefully practice basic respiratory and hand hygiene and stay home until you are recovered, if possible. 

The WHO also actively dispelled some current myths about coronavirus. For instance, individuals cannot catch the virus from dogs and cats that are their pets, and vaccines against pneumonia do not protect against coronavirus.

For more information on coronavirus myths, please visit the WHO Myth Busters page here, along with the WHO Q&A page.

What else should I know about coronavirus?

Coronavirus is a serious threat, but it is not the world-ending plague that many fear. As of February 7, the virus has resulted in 637 total deaths. A February 6 notice by the Chinese media service CGTN reported more recoveries, at 1,542.

Individuals should not fear receiving packages from China, the WHO said, as the virus cannot survive long durations on physical objects like packages and letters. Similarly, individuals should not dip into unmeasured fear of all things Chinese. These fears have turned New York’s Chinatown district into a “ghost town,” said one local business owner, and have fueled multiple xenophobic and racist assumptions across the world.

The WHO says it is okay to receive packages delivered from China.

Coronavirus has also received a strong global response. Air travel has been severely limited, Olympic qualifying games were relocated, workers built a hospital in about 10 days, fast food restaurants temporarily closed their locations, and China closed off entire populations—which has come with its own tragic tales of quarantine camps, isolation, and fear.

The spread of the virus is scary, yes, but people are working day and night to prevent greater exposure.

What should I know about coronavirus scams?

Coronavirus online scams are largely similar to one another. By preying on misinformation and fear, cybercriminals hope to trick unwitting individuals into opening files and documents that promise information about the virus.

However, Malwarebytes recently found an email scam that preys on people’s desire to help during a moment like this.

The scam email—titled “URGENT: Coronavirus, Can we count on your support today?”—purportedly comes from the nondescript “Department of Health.” Inside, the email asks users to donate to coronavirus prevention causes.

“We need your support , Would you consider donating 100 HKD to help us achieve our mission?” the email says near its end, before offering a disguised link that opens an application, not a website. The link itself begins with neither HTTPS nor HTTP, but “HXXP.”

A screenshot of an emailed coronavirus scam that preys on users’ good will.

Routine scams that allegedly include information about prevention and protection also come through emails, like this phishing scam spotted by Sophos.

A screenshot of the emailed coronavirus scam that Sophos discovered.

The malicious email informs its recipient to open an attached document that includes information about “safety measures regarding the spreading of coronavirus,” which then directs users to a page that asks for their email address and password.

These scams are becoming a dime a dozen, and we don’t expect them to dwindle any time soon. In fact, threat actors in China were spotted sending malware around through email and through the Chinese social media platform WeChat. Though the exact types of malware were not reported, the Computer Virus Emergency Response Center said the malware itself could be used to steal data or remotely control victims’ devices.

Coronavirus information and data resources

If you’re afraid about the spread of coronavirus, we understand. But please, do not click any links in any sketchy emails, and do not donate to any causes you have not already vetted outside of your email client.

If you want to know up-to-the-date information about the virus, again, please visit the following resources:

Stay safe, everyone.

The post Battling online coronavirus scams with facts appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (February 3 – 9)

Malwarebytes - Mon, 02/10/2020 - 16:46

Last week on Malwarebytes Labs, we looked at Washington state’s latest efforts in providing better data privacy rights for their residents, and we dove into some of the many questions regarding fintech: What is it? How secure is it? And what are some of the problems in the space?

We also detailed a new adware family that our researchers had been tracking since late last year and pushed out a piece on performance art’s impact on Google Maps and other crowdsourced apps.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (February 3 – 9) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Google Maps: online interventions with offline ramifications

Malwarebytes - Fri, 02/07/2020 - 19:24

The places where online life directly intersects with life lived offline will be forever fascinating, as illustrated perfectly by a recent performance piece involving Google Maps, a cart, and an awful lot of mobile phones.

Simon Weckert, an artist based in Berlin, Germany, showed how a little ingenuity could work magic on the ubiquitous Google Maps system. Turns out Google hadn’t accounted for what happens when 99 phones go for a relaxing walk down the streets of Berlin. The system was fooled into believing the world’s most aggressive traffic jam was taking place. Let’s see how it happened.

How does Google Maps help with traffic?

Back in the day, Google Maps dived into the world of traffic sensors to get a feel for how commutes and directions were impacted by traffic patterns.

In 2009, they made use of crowdsourcing, and phones with GPS enabled sent anonymous data allowing Google to figure out how vehicles were moving and where any traffic jams happened to be. The more people taking part, the greater the accuracy and benefits for all.

Things kept on moving, and in 2020 it’s a combination of sensors, user data, and satellite technology that keeps things keeping on. You’d think a trolley of phones would be no match for this elaborate weave of crowdsourced mobile pocket power and additional data sources.

You would think.

Maps versus trolley

Imagine our surprise when one large jumble of phones trundled its way toward disruption heaven, proudly announcing 99 cars were not going anywhere anytime soon. Whatever failsafes Maps had in place, it simply couldn’t figure out shenanigans were afoot.

Streets formerly flagged as green (all clear) would suddenly show as red (traffic jam ahoy), with the knock-on effect of rerouting cars to other roads which may well have been free of cars but would now feel the impact of people trying to avoid the trolley hotspot. I think my favourite part of this story was when the trolley rumbled right past the Maps office. Chaos, then, but artistically done. Not the first time though…

Mapping out an artistic tribute

Art being used to make a point about technology, Google, and even Maps itself is not uncommon. Last year, an artist made use of their Google account to upload weird and wonderful pieces of 360 degree digital art using Google Business View. Sure, you could use it to give potential customers an in-depth look at your eatery before venturing inside—or you could generate chaotic mashups and loosen up the clinical aesthetic of vanilla Google Maps instead. The choice, as they say, is yours (unless someone says “no” and removes it all).

What could cause user generated content to be removed from Maps? Funny you should ask.

When does art become vandalism?

Maps may make use of crowdsourcing to great effect, but unchecked crowdsourcing is a reliable recipe for eventual chaos. A few years ago, enthusiastic cartographers had the ability to make edits to Maps using the Map Maker tool. If you had a nice tip or a cool landmark you felt warranted closer inspection, you could add it manually to the map. This was one way to help out in regions where mapping hadn’t taken place, because even Google couldn’t be everywhere at once.

Other users would check and verify edits before they went live. If you gained enough kudos from the rest and your edits were consistent and legitimate, you eventually bypassed the need for others to make sure you weren’t doing anything problematic.

Step forward, someone doing something problematic. 

Slowly but surely, people started to play pranks on the system and post a variety of spam and other nonsense. It’s possible Map Maker may have carried on if the dubious edits had been small, unnoticed, and otherwise unlikely to end up front page news.

On the other hand, this could happen and Map Maker could be thrown from the highest of cliff edges, never to return. Some features of Map Maker have made their way into regular Maps, but sadly this was lights out for the genuinely useful tool. If I could draw a 300-foot “RIP Map Maker” onto the side of a digitised mountain in the Himalayas, I would, but this written tribute of ours will have to suffice.

Getting down to business

Maps locations for businesses have also been exposed to shenanigans over the years, and not all of it confined to Maps exclusively. Whether it’s restaurant owners going out of business because of wrongly-listed opening hours, or Google+ (remember that?) listings directing hotel chain visitors to third-party websites generating commission, the conflict is nonstop and the repercussions can be enormous for those hit hard. If you’re not online much or familiar with the technology involved, then you have almost no chance of setting things straight.

Trouble in other realms

It isn’t just Google Maps beset by these antics. Other major platforms run into similar issues, and if the platform doesn’t provide the mapping tech directly, then the pranksters/malicious actors will simply go after the third-party suppliers instead. Mapbox found themselves facing a terrible edit, which worked its way into Snapchat, the Weather Channel, and others.

For as long as the ability to make use of the wisdom of the crowds (or, in many cases, the lack of wisdom) exists, these disruptions will continue to happen. A surprising amount of services we take for granted can’t really function well without an element of trust granted to the user base, so this isn’t exactly the easiest to police.

Sure, some of the shenanigans are lighthearted and may occasionally be quite funny. Some of these methods for gaming the system could also be profoundly troublesome and cause maximum discomfort with a little bit of effort.

On this occasion, at least, we can be content that the end result is “cool art project makes us think about online/offline interaction” and not “someone’s drawn a rude picture on the side of the Empire State Building.”

We make no guarantees about next time.

The post Google Maps: online interventions with offline ramifications appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Adposhel adware takes over browser push notifications administration

Malwarebytes - Thu, 02/06/2020 - 18:10

Since late last year, our researchers have been monitoring new methods being deployed by cybercriminals to potentially abuse browser push notifications. Now, an adware family detected by Malwarebytes as Adware.Adposhel is doing just that, taking control of push notifications in Chrome at the administrator level.

What does Adposhel adware do?

The adware uses Chrome policies to ensure that notification prompts will be shown to users and adds some of its own domains to the list of sites that are allowed to push browser notifications. So far, nothing new. The recent twist, however, is that Adposhel enforces these settings as an administrator, meaning a regular Chrome user will not be able to change them in the notifications menu.

It seems the adware family has now decided to fully deploy this tactic, as we are seeing complaints about it emerging on forums, such as Reddit.

Victims have complained about being unable to remove domains from the list of those allowed to show push notifications, and being unable to change the setting that controls whether websites can ask to show notifications.

Disabling that setting would stop a user from seeing prompts like these:

If a user were to click Allow on that prompt, this domain would be added to their allowed list of URLs, with the understanding that it could be removed manually in the notifications menu.

Adposhel uses the NotificationsAllowedForUrls policy to block users from removing their entries from the Allow list.
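On a Windows machine, you can check whether such a policy is in force by querying the registry from an elevated prompt (or by browsing chrome://policy). A minimal sketch, assuming Chrome’s machine-wide policies live under the usual HKLM path:

```shell
# List any URLs forced into Chrome's notification allow list by policy.
# An empty or missing key means no such policy is set.
KEY='HKLM\SOFTWARE\Policies\Google\Chrome\NotificationsAllowedForUrls'

if command -v reg.exe >/dev/null 2>&1; then
  reg.exe query "$KEY"
else
  echo "reg.exe not available: run this from a Windows command prompt"
fi
```

If the query returns numbered values containing the adware’s domains, the “enforced by administrator” icon in Chrome’s notification settings is explained.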

Where you would normally see the three-dot (ellipsis) icon for the settings menu, entries enforced by Adposhel’s policy instead show an icon telling you the setting is enforced by an administrator.

If you hover over the icon, the accompanying text confirms it.

How do I undo the changes made by Adposhel adware?

This does not mean that you can change that setting just because you are the administrator of the system you are working on, by the way. But if you are the system administrator, you can fix the notification changes made by the Adposhel installer by applying a simple registry fix:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"DefaultNotificationsSetting"=dword:00000001

[-HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\NotificationsAllowedForUrls]

This is safe to do unless there were legitimate URLs in the list of URLs that were allowed to show notifications by policy, which I doubt. But we always advise to create a backup of the registry before making any changes.

Backing up the registry with ERUNT

Modifying the registry may create unforeseen results, so we always recommend creating a backup prior to doing that.
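If you are comfortable with the command line, the specific key can also be backed up with reg.exe before applying the fix. A quick sketch; the output filename is just an example:

```shell
# Export the Chrome policy key to a .reg file before changing it.
# Re-importing that file restores the key if anything goes wrong.
KEY='HKLM\SOFTWARE\Policies\Google\Chrome'

if command -v reg.exe >/dev/null 2>&1; then
  reg.exe export "$KEY" chrome-policy-backup.reg /y
  # To restore later: reg.exe import chrome-policy-backup.reg
else
  echo "reg.exe not available: run this from a Windows command prompt"
fi
```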

Please download ERUNT and save the file to the desktop.

  • Install ERUNT by following the prompts, but say No to the portion that asks you to add ERUNT to the startup folder.
  • Right-click on the icon and select Run as Administrator to start the tool.
  • Leave the default location (C:\WINDOWS\ERDNT) as a place for your backup.
  • Make sure that System registry and Current user registry are ticked.
  • The third option Other open users registries is optional.
  • Press OK to backup and then press YES to create the folder.

This tool won’t generate a report. You may uninstall it after you’re done cleaning.

Protection and detection

Malwarebytes detects the installers as Adware.Adposhel.

The URLs enforced by this Adposhel-induced Chrome policy are detected as Adware.ForcedNotifications.ChrPRST.



Stay safe, everyone!

The post Adposhel adware takes over browser push notifications administration appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Fintech security: the challenges and fails of a new era

Malwarebytes - Wed, 02/05/2020 - 19:24

“I have no idea how this app from my bank works, and I don’t trust what I don’t understand.” Josh is not an old curmudgeon or luddite. He’s 42 with a decent understanding of technology. Nevertheless, the changes in fintech have come too fast for him. It’s not that he doesn’t trust his bank. He doesn’t trust himself to use and manage the banking app securely.

The world we live in has gone through some noticeable changes in the last decade. This is certainly true for the banking industry, which has embraced fintech to the point that the term is nearly interchangeable with finance. Fintech—computer programs and other technology used to support banking and other financial services—is the fastest-growing sector in venture capital. It may encompass anything from cryptocurrency to mobile payment apps.

The groundwork was laid for the rise of fintech through a series of major incidents over the last 10 years. These include:

  • The banking crisis and subsequent Great Recession of 2007–2009. If you had told someone 15 years ago that a number of big-name banks would not survive the decade, they would have laughed at you. Yet, the list is long.
  • New currencies introduced into the playing field, especially crypto. Bitcoin started in 2009, and hundreds of other cryptocurrencies have since followed suit.
  • Negative interest rates. Cash deposits incur a charge for storage at a bank rather than earning interest. Some banks have to pay to park their surplus funds at national banks because of negative interest rates, and some even pass this negative interest on to their customers.
  • New players have entered the field that are different from the establishment. Some are related to the development of cryptocurrencies, but others simply look at financial business in a new and unique way.
  • Customers increasingly expect their payments to reach the destination account the same day. This also helps the bank itself, as it reduces the surplus it needs to hold at negative interest.
What is fintech?

The hardware and software used in the financial world are generally referred to as fintech, though the expression is also used to describe the startups in that world. In this article, we use it to describe the technology, as many established financial institutions feel they need to adopt the same new technology that the startups offer their customers. Because of this, we find these new features in the banking and financial applications of accomplished firms as well as in those of the newcomers.

Fintech security

While it may come as little surprise that fintech startups are struggling with security, sometimes the established names surprise us with how easily they fall prey to data breaches, malware attacks, or compromised apps.

One of the reasons some fintech startups are so successful lies in their ability to offer alternatives to conventional financial solutions through cryptocurrencies, online loans, and peer-to-peer (P2P) payments. With that success comes a variety of challenges, and one in particular piques our interest: cybersecurity. To name one aspect, the huge growth in the number and size of online platforms makes this industry especially vulnerable to security breaches.

Some of the problems

New features sometimes look as if they were introduced in a rush, without considering how secure they are or how clever crooks could abuse them. For example, a mobile banking app that allowed users to add an extra phone to control their account by simply scanning a QR code ended up cleaning out several bank accounts: imposters tricked people into scanning a code that added the imposter's phone, leaving the imposter in full control of the account.

Payment requests that lead to fake websites are a quickly rising threat as banks roll out this feature. As always with newer technology, fraudsters benefit from victims' unawareness of exactly how things work. Someone pretending to buy from you on an online marketplace can send you a payment request for the amount you are expecting. All you have to do is click "Accept" and enter your PIN. Then you find out that you paid them instead of the other way around.

Fake bank websites in general have been a problem for many years, and will probably remain one for some time to come. Most of the time, these fake sites are designed to harvest login and payment credentials from visiting victims. They are very hard to distinguish from the real bank websites because the threat actors simply copy the content and layout of the original sites. And urging customers to look for the green padlock is hardly useful advice anymore.

Payment providers and online shops are plagued by web skimmers. As we have frequently reported, several Magecart groups are very active on this front, intercepting payments and stealing payment card information through compromised e-commerce sites.

And then there is virtual money. Since most money nowadays is virtual to some degree, let's talk about cryptocurrencies in particular. While the introduction of cryptocurrencies was intended to open up a whole new world of payment options, it also opened a virtual cesspit of ways to be defrauded. The absence of a central authority gave way to types of fraud and robbery that were unheard of in the old-school banking world: huge heists from marketplaces, exchange operators running off with the funds entrusted to them, stolen hot wallet credentials, and, let's not forget, drive-by mining. We covered many of these crimes in our blog about Bankrobbers 2.0.

Financial firms of all kinds have suffered data breaches of all sorts and sizes, from huge ones like Equifax and Capital One to ones equally painful for those involved, like the breach at P&N Bank, where sensitive account information was spilled.

Ransomware operators are particularly fond of financials, which can usually afford to pay large sums and are invested in getting operations back up and running in a hurry. Travelex took the high road and refused to pay the ransom demanded after being hit with Ransom.Sodinokibi.

Privacy concerns

With governments asking for full disclosure of savings both offshore and internal, and on the other hand enforcing privacy laws, financial institutions are expected to balance these demands while keeping their customers on board.

With GDPR in Europe leading the way, financials should be ready or get ready to comply with GDPR or similar laws that apply to them and their customer base.


The financial industry is considered vital infrastructure, and for good reason. When we lose trust in our financial institutions, it turns our society upside down. When paper money is no longer worth the number printed on it, or you cannot withdraw money from your account, the foundations of our economy are rattled.

Fintech needs to adopt a more security-focused approach to developing new features, especially in mobile apps. It also wouldn't hurt to provide customers with thorough instructions on how to safely use a new app or its new features.

As a financial startup, you want to grow fast. But fast growth comes with its own problems. Making sure your security measures can scale along with your growth is a must, unless you want to find yourself restricted in that growth or watch your security start cracking at the seams.

However frustrating it may turn out to be, financials need to think about better identity management and control. Is it enough for someone who is logged into an account to have full control of it? Or do we need to add another factor for special actions like raising the maximum transfer amount, allowing withdrawals abroad, or even transactions that are larger than normal?

Fintech startups can’t expect to get away with security mistakes that other startups might. Being in the financial sector brings with it different responsibilities and expectations.

As I’ve written before: It is key that our financial institutions protect our dollars and our data so that we can keep investing our money and our trust in them.

Stay safe, everyone!

The post Fintech security: the challenges and fails of a new era appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Washington Privacy Act welcomed by corporate and nonprofit actors

Malwarebytes - Tue, 02/04/2020 - 16:35

The steady parade of US data privacy legislation continued last month in Washington with the introduction of an improved bill that would grant state residents the rights to access, control, delete, and port their data, as well as opting out of data sales.

The bill, called the Washington Privacy Act, also improves upon its earlier 2019 version, providing stronger safeguards on the use of facial recognition technology. According to some analysts, when compared to its coastal neighbor’s data privacy law—the California Consumer Privacy Act, which went into effect this year—the Washington Privacy Act excels.

Future of Privacy Forum CEO Jules Polonetsky called the bill “the most comprehensive state privacy legislation proposed to date.”

“It includes provisions on data minimization, purpose limitations, privacy risk assessments, anti-discrimination requirements, and limits on automated profiling that other state laws do not,” Polonetsky said.

Introduced on January 20 by state Senator Reuven Carlyle, the Washington Privacy Act would create new responsibilities for companies that handle consumer data, including the implementation of data protection processes and the development and posting of privacy policies.

Already, the bill has gained warm reception from corporate and nonprofit actors. Washington-based tech giant Microsoft said it was encouraged, and Consumer Reports welcomed the thrust of the bill, while urging for even more improvements.

“This new draft is definitely a step in the right direction toward protecting Washington residents’ personal data,” said Consumer Reports Director of Consumer Privacy and Technology Policy Justin Brookman. “We do hope to see further improvements to get rid of inadvertent loopholes that remain in the text.”

What the Washington Privacy Act would do

Like the many US data privacy bills introduced in the past 18 months, the Washington Privacy Act approaches the problem of lacking data privacy with two prongs—better rights for consumers, tighter restrictions for companies.

On the consumer side, the Washington Privacy Act would grant several new rights to Washington residents, including the rights to access, correct, delete, and port their data. Further, consumers would receive the right to “opt out” of having their personal data used in multiple, potentially invasive ways. Consumers could say no to having their data sold and to having their data used for “targeted advertising”—the somewhat inescapable practice that results in advertisements for a pair of shoes, a fetching sweater, or a 4K TV following users around from device to device.

Consumers could exercise their rights with simple requests to the companies that handle their data. According to the bill, these requests would require a response within 45 days. If a company cannot meet that deadline, it can file for an extension, but it is required to notify the consumer about the extension and about why it could not meet the deadline.

Further, unfulfilled requests are not a dead end for consumers—companies must also offer an appeals process to consumers whose requests they deny or do not fulfill. Requests must also be responded to free of charge, up to two times a year per consumer.
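The deadline rule described above can be sketched in a few lines. The function and variable names below are my own, not taken from the bill's text, and the extension length is left as a parameter since the bill's exact allowance is not quoted here.

```javascript
// Sketch of the response-deadline rule: a consumer request must be
// answered within 45 days, plus any extension the company notifies
// the consumer about. Names are illustrative, not from the bill.
const RESPONSE_DAYS = 45;

function responseDeadline(requestDate, extensionDays = 0) {
  const deadline = new Date(requestDate);
  // Use UTC date math so the result is timezone- and DST-independent.
  deadline.setUTCDate(deadline.getUTCDate() + RESPONSE_DAYS + extensionDays);
  return deadline;
}

const request = new Date('2020-02-01T00:00:00Z');
console.log(responseDeadline(request).toISOString().slice(0, 10));
```

A compliance system would also record the notification sent to the consumer when an extension is filed, since the bill ties the extension to that notice.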

Perhaps one of the most welcome provisions in the bill is its anti-discrimination rules. Companies cannot, the bill says, treat consumers differently because of their choices to exert their data privacy rights. On the surface, that makes dangerous ideas like “pay-for-privacy” schemes much harder to enact.

Concerning new business regulations, the Washington Privacy Act separates the types of companies it applies to into two categories: “controllers” and “processors.” The two terms, borrowed from the European Union’s General Data Protection Regulation (GDPR), have simple meanings. “Controllers” are the types of entities that actually make the decisions about how consumer data is collected, shared, or used. So, a small business with just one employee who decides to sell data to third parties? That’s a controller. A big company that decides to collect data to send targeted ads? That’s a controller, too.

Processors, on the other hand, are akin to contractors and subcontractors that perform services for controllers. So, a payment processor that simply processes e-commerce transactions and nothing more? That’s a processor.

The Washington Privacy Act’s new rules focus predominantly on “controllers”—the Facebooks, Amazons, Twitters, Googles, Airbnbs, and Oracles of the world.

Controllers would have to post privacy policies that are “reasonably accessible, clear, and meaningful,” and would include the following information:

  • The categories of personal data processed by the controller
  • The purposes for which the categories of personal data are processed
  • How and where consumers may exercise their rights
  • The categories of third parties, if any, with whom the controller shares personal data

If controllers sell personal data to third parties, or process it for targeted advertising, the bill requires those controllers to clearly disclose that activity, along with instructions about how consumers can opt out of those activities.
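As a rough sketch, the disclosures a controller would need to publish could be modeled as a machine-readable policy object. The field names and values below are my own invention for illustration; the bill prescribes the content of the disclosures, not any particular format.

```javascript
// Hypothetical machine-readable summary of the privacy-policy
// disclosures the bill requires of controllers. Structure and
// names are illustrative, not taken from the bill.
const privacyPolicy = {
  categoriesProcessed: ['contact details', 'purchase history'],
  purposes: ['order fulfillment', 'targeted advertising'],
  rightsContact: 'https://example.com/privacy/requests', // hypothetical URL
  thirdPartyCategories: ['analytics providers', 'ad networks'],
  sellsPersonalData: false,
  targetedAdvertising: true,
};

// If data is sold or used for targeted advertising, the controller
// must clearly disclose that activity plus opt-out instructions.
const mustDiscloseOptOut =
  privacyPolicy.sellsPersonalData || privacyPolicy.targetedAdvertising;
console.log('Opt-out disclosure required:', mustDiscloseOptOut);
```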

Separately, controllers would need to perform “data protection assessments,” in which the company looks at, documents, and considers the risks of any personal data processing that involves targeted advertising, sale, and “profiling.”

The regulation of “profiling” is new to data privacy bills. It’s admirable.

According to the bill, “profiling” is any form of automated processing of personal data to “evaluate, analyze, or predict personal aspects concerning an identified or identifiable person’s economic situation, health, personal preference, interests, reliability, behavior, location, or movements.”

In today’s increasingly invasive online advertising economy, profiling is omnipresent. Companies collect data and create “profiles” of consumers that, yes, may not include an exact name, but still include what are considered vital predictors about that consumer’s lifestyle and behavior. 

These new regulations make the Washington Privacy Act stand out amongst its contemporaries, said Stacey Gray, senior counsel with Future of Privacy Forum.

“The big picture of the bill is that [it] includes the same individual rights as the California Consumer Privacy Act—of access, sale, et cetera—and then more,” Gray said. “The right to correct your data, to opt out of targeted advertising, and out of profiling—that is further on the individual rights side.”

Gray added that the bill’s business obligations also go further than those in the CCPA, naming the data risk assessments previously discussed.

The Washington Privacy Act includes several more business obligations, all of which add up to meaningful data protections for consumers. For instance, companies would need to commit to data minimization principles, only collecting consumers’ personal data that is necessary for expressed purposes. Companies would also need to obtain affirmative, opt-in consent from consumers before processing any “sensitive data,” which is any data that could reveal race, ethnicity, religion, mental or physical health conditions or diagnoses, sexual orientations, or citizenship and immigration statuses.
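A hedged sketch of the opt-in rule for sensitive data follows. The category names mirror the bill's definition as quoted above, but the function, field names, and guard logic are illustrative assumptions, not anything the bill specifies.

```javascript
// Sensitive-data categories that would require affirmative opt-in
// consent before processing, per the bill's definition quoted above.
// Field names and the guard itself are illustrative.
const SENSITIVE_FIELDS = new Set([
  'race', 'ethnicity', 'religion', 'healthCondition',
  'sexualOrientation', 'citizenshipStatus', 'immigrationStatus',
]);

// Return the fields in a record that may not be processed
// without the consumer's affirmative consent.
function fieldsRequiringConsent(record) {
  return Object.keys(record).filter((field) => SENSITIVE_FIELDS.has(field));
}

const record = { name: 'A. Consumer', religion: 'redacted', zip: '98101' };
console.log(fieldsRequiringConsent(record)); // flags only the sensitive field
```

The same check doubles as a data-minimization aid: any flagged field that has no expressed purpose should simply not be collected.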

But perhaps most intriguing in the Washington Privacy Act is its regulation of facial recognition technology.

Facial recognition provisions

In 2019, Washington state lawmakers crafted a bill aimed at improving the data privacy protections of consumers. They called it… the Washington Privacy Act. That original bill, which has now been superseded by the 2020 version, included provisions on the commercial use of facial recognition.

On its face, the new rules looked good: Companies that used facial recognition tech for commercial purposes would have to obtain consent from consumers “prior to deploying facial recognition services.”

Unfortunately, the original bill’s very next sentence made that consent almost meaningless.

According to that bill, consumer “consent” could be obtained not by actually asking the consumer about whether they agreed to having their facial data recorded, but instead, by posting a sign on a company’s premises.

As the bill stated:

“The placement of conspicuous notice in physical premises or online that clearly conveys that facial recognition services are being used constitute a consumer’s consent to the use of such facial recognition services when that consumer enters those premises or proceeds to use the online services that have such notice, provided that there is a means by which the consumer may exercise choice as to facial recognition services.”

The passage is as long-winded as the exception it allows is broad.

This loophole upset several privacy rights advocates who, in February 2019, sent a letter to key Washington lawmakers.

“[W]hile the bill purportedly requires consumer consent to the use of facial recognition technology, it actually allows companies to substitute notification for seeking consent—leaving consumers without a real opportunity to exercise choice or control,” the letter said. It was signed by Consumer Reports, Common Sense, Electronic Frontier Foundation, and Privacy Rights Clearinghouse.

The 2020 bill closes this loophole, instead requiring affirmative, opt-in consent for commercial facial recognition use, along with mandatory notifications—such as signs—in spaces that use facial recognition technology. The new bill also requires processors to open up their data-processing tools to outside investigation and testing, in an effort to root out what the bill calls “unfair performance differences across distinct subpopulations,” such as minorities, disabled individuals, and the elderly.

Moving the Washington Privacy Act forward

Despite the 2019 Washington Privacy Act gaining swift approval in the Senate two months after its January introduction, the bill ultimately failed to reach the House. Multiple factors led to the bill’s failure, including the bill’s definitions for certain terms, its approach to enforcement, and its treatment of facial recognition.

Some of those same obstacles could come up for the 2020 bill, Gray said.

“If this bill does not pass this year, that’s where we might see a source of conflict—is either with the facial recognition provisions, or with enforcement,” Gray said. For enforcement to take hold, Gray said the Attorney General’s office—tasked with regulation—will need increased funding and staffing. Further, there will likely be opposition to the bill’s lack of “private right of action,” which means that consumers will not be able to individually file lawsuits against companies that they allege violated the law. This issue has been a sticking point for data privacy legislation for years.

Still, Gray said, the bill shows improvement from its 2019 version, which could help push it forward.

“All things aside,” Gray said, “we’re more optimistic than last year about it passing.”

The post Washington Privacy Act welcomed by corporate and nonprofit actors appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (January 27 – February 2)

Malwarebytes - Mon, 02/03/2020 - 19:00

Last week on Malwarebytes Labs, we looked at the strengths and weaknesses of the Zero Trust model, gave you the low-down on spear phishing, and took a delve into the world of securing the managed service provider (MSP).

Other cybersecurity news

Stay safe, everyone!

The post A week in security (January 27 – February 2) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Securing the MSP: their own worst enemy

Malwarebytes - Thu, 01/30/2020 - 16:00

We’ve previously discussed threats to managed service providers (MSPs), covering their status as a valuable secondary target for an assortment of APT groups as well as financially motivated threat groups. The problem with covering new and novel attack vectors, however, is that behind each new vector is typically a system left unpatched, asset management left undone, a security officer not hired (typically justified with factually dubious claims of a “skills shortage”), or a board that sees investment in infrastructure—and yes, security is infrastructure—as a cost center rather than a long-term investment in sustainable profits.

In short, malware can be significantly less dangerous to a business than that business’ own operational workflow.

Points of entry

Data on breach root causes is hard to come by, typically because security vendors tend to benefit by not providing industry vertical specific risk analysis. But the data that is available occasionally hints at corporate data breaches starting with some common unforced errors.

The 2019 Verizon DBIR claims that only 28 percent of observed data breaches involve the use of malware for the initial intrusion. While malware plays a significant role in the subsequent exploitation, the numbers suggest the majority of public breaches are not driven by zero-day exploits or outlandishly complex intrusion paths. So if you’re trying to secure an MSP, what are the most common entry points for attackers?

Under the broad heading of “hacking,” the most prominent observed tactics for point-of-entry include phishing, use of stolen credentials, and other social engineering techniques. Subsequent actions to further access include common use of backdoors or compromised web applications. Let’s break these down a little further.

Phishing is a reliable way of gaining a foothold to compromise a system. How would an employee clicking on a phish constitute an unforced error? Frequently, enterprises of all sorts incentivize their workers to click on absolutely everything, while simultaneously limiting their actual reading of messages. The consequences for poorly-designed corporate communications can be huge, as was seen when an MSP lost control of admin credentials via phishing attack that was subsequently used to launch ransomware.

Stolen credentials are a tremendously common attack vector that has been seen in several high profile MSP data breaches. “Stolen” is a bit of a misnomer though, and they would be better considered as “mishandled.”

Setting aside credentials gained via social engineering or phishing, companies can frequently lose track of credentials by keeping old or unnecessary accounts active, failing to monitor public exposure of accounts, failing to force resets after secondary breaches that may impact employees, failing to enforce modern password policies—basically failing to pay attention.

Should any single account with exposed credentials be over-privileged, a significant breach is almost guaranteed. And the consequences for MSPs with sloppy credential handling can be quite severe (1, 2).

Last in the lineup for unnecessary security failures is patch management. Like any other company trying to manage fixed infrastructure costs, MSPs rely heavily on third-party software and services. So when a business-critical support app is discovered to have multiple severe vulnerabilities, it introduces a wide-open channel for further exploitation. On occasion, the vulnerabilities used are brand new. Typically, they are not, and companies that fail to patch or mitigate vulnerable software get predictably exploited.

Mishandled mitigation

These attack entry points have a couple of factors in common. First, they are not tremendously technically sophisticated. Even in the limited APT examples, the actors relied on compromised credentials and phishing first before deploying the big guns for lateral propagation. Second, mitigating these common entry points involves actions that impacted MSPs should have been taking anyway.

Credential management that includes limited external monitoring, timely access control, and periodic privilege review doesn’t simply protect against catastrophic breaches—it protects against a host of attacks at all points of the technical sophistication spectrum.

Anti-phishing system design cues not only defend against employees leaking critical data, they also make for more efficient corporate communications, keep employees safe, and ideally reduce their overall email load.

Appropriate logging with timely human review cuts down time to breach discovery, but also assists in detailed risk analysis that can make for lean and effective security budgets into the future. The relationship between all of these security behaviors and observed MSP data breaches suggests that more attention to industry best practices could have gone a long way toward eliminating or sharply diminishing breach risk.

Finally, a patch management schedule that tracks third-party software and services and fixes vulnerabilities in a timely manner is a great way to close some of the largest entry points into an MSP. Subordinating patches to noncritical business needs, not having a test network for deploying patches, or simply not patching at all is a large signpost to attackers signifying an easy target.
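The tracking half of such a schedule can be sketched simply: compare installed versions against the versions in which vulnerabilities were fixed. The inventory data, product names, and version scheme below are illustrative; a real schedule would pull from vendor advisories and an asset database.

```javascript
// Compare installed third-party software against known-fixed versions.
// Inventory data is illustrative; assumes simple dotted numeric versions.
function parseVer(v) {
  return v.split('.').map(Number);
}

function isVulnerable(installed, fixedIn) {
  const a = parseVer(installed);
  const b = parseVer(fixedIn);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0;
    const y = b[i] || 0;
    if (x !== y) return x < y; // older than the release containing the fix
  }
  return false; // already at or past the fixed version
}

const inventory = [
  { name: 'support-app', installed: '2.3.1', fixedIn: '2.4.0' }, // hypothetical
  { name: 'rmm-agent', installed: '5.1.0', fixedIn: '5.1.0' },   // hypothetical
];
const toPatch = inventory.filter((s) => isVulnerable(s.installed, s.fixedIn));
console.log(toPatch.map((s) => s.name));
```

Running a report like this on a schedule, and treating its output as a work queue rather than a suggestion, is most of what "patch management" means in practice.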

MSP security: not a luxury

An MSP might be tempted to consider security as an expensive indulgence—something to be considered as a nice-to-have after uptime and availability of resources. Done well, it is neither expensive, nor a luxury.

Adherence to security norms that have been well defined for years can go a long way toward preventing big breaches, and can do so without expensive vendor contracts, pricy consultants, or best-in-class equipment. A managed service provider who chooses to ignore or delay those norms does so at its peril.

The post Securing the MSP: their own worst enemy appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Spear phishing 101: what you need to know

Malwarebytes - Wed, 01/29/2020 - 18:50

Phishing, a cyberattack method as old as viruses and Nigerian princes, continues to be one of the most popular means of initiating a breach against individuals and organizations, even in 2020. The tactic is so effective that it has spawned a multitude of sub-methods, including smishing (phishing via SMS), pharming, and the technique du jour for this blog: spear phishing.

But first, a quick parable.

A friend of mine received a blitz of emails over the course of a few days, all geared toward their Netflix account.


The clues indicating something wasn’t quite right were numerous:

  • There were half a dozen emails instead of just one.
  • All of them required payment information, but each mail gave a different reason as to why.
  • There were spelling mistakes galore.
  • The emails were not personalised in any way.

Even without spotting the utterly bogus, non-HTTPS URL linked from the email body, this friend would never have fallen for it. Granted, they have a decent knowledge of security basics. However, consider if the attacker had done this:

  • Grabbed some personal details from a data dump
  • Hunted online for accounts belonging to this person, perhaps on social media
  • Checked to see if they had an account with Netflix
  • Crafted an imitation Netflix email address
  • Addressed the potential victim directly by name
  • Included some or all of their home address
  • Made use of spell check
  • Set up a free HTTPS website
  • Used the most current version of Netflix’s logo

See the difference? While the first set of emails wouldn’t pass muster with a marginally knowledgeable user, the second would be much more difficult to screen as fake.

And that is what’s known in the business as spear phishing.

What is spear phishing?

Spear phishing’s sole purpose is to get inside the recipient’s head and make them think the messages they’re responding to are 100 percent legitimate—achieved due to personal touches designed to make them think what they’re dealing with is the real deal.

While you could argue alarm bells should ring when being asked for credit card details, in all honesty, once the scammer has thrown a few personal details into the mix like name and address, it may well be too late.

Imagine if the scammer monitored social media feeds to see which shows their target liked, then said something like, “Please ensure your details are correct to continue enjoying The Witcher.” Now add a picture of Henry Cavill looking cool.

Game. Over.

As you might expect, this kind of attack is rather difficult to combat. It doesn’t help that utterly random nonsense such as the poorly-made Netflix phishing attempt regularly inflicts huge losses on organisations across the globe, despite being pretty terrible.

How many times have we seen healthcare facilities and even local municipal governments fall foul to ransomware via pretend spreadsheet attachments in fake HR tax emails? Make no mistake, this is a very real and immediate problem for those caught out.

With generic phishing already causing huge headaches for businesses and consumers alike, cybercriminals using data dumps expertly combined with professional social engineering techniques have an even higher likelihood of success. And that’s before you consider other forms of spear phishing, such as conversation hijacking (more on this later), or attacks that use the spear phish as a launching pad for infecting networks with malware and other digital nasties.

Shall we take a look at some numbers?

Watch those verticals

A few years ago, the average cost of spear phish prevention over 12 months was $319,327 versus the significantly higher cost of any successful attack, which weighed in at $1.6 million. In 2019, the stats leaning heavily towards spear phishing speak for themselves, and huge payouts for scammers are the order of the day.

Payouts of $40 million, $50 million, and even $70 million and beyond are common, and that’s before you get to the cost of the cleanup and class action lawsuits. Throw in a little reputation damage and a PR firestorm, and you have all the ingredients for a successful breach. For the victims, not so much.

With spear phishing, the slightest piece of information can bring about an organisation’s downfall as it slices through all its otherwise fully functional security defences.

Evolution of the spear phish

Spear phishing isn’t confined to the realm of email. Highly-targeted attacks also branch out into other areas, especially ones full of self-volunteered information. Hijacking customer support conversations on Twitter is a great example of this: scammers set up imitation support accounts, then barge into the conversation, leading the victim to phishing central. It’s a slick move.

It’s debatable how targeted these scams really are, considering the scammers make their attack up on the fly instead of wading in with pre-gained knowledge. The difference here is that the recon is aimed at the person helping the potential victim, as opposed to the victim themselves. Making note of when the customer support account is active, looking at initial tweets so they can pretend to be the same person who helped before, and adopting some of their speech mannerisms and corporate speak all help to create a convincing illusion.

At that point, all we’re really dealing with is a perfectly-crafted imitation email but in human form, and with the ability to interact with the victim. Has spear phishing ever seen such a potent way to go on the offensive? When people are happy to weaponise customer support to use them against you, it’s really something to sit down and consider.

Fighting the rising tide of spear phishing

Anybody can be a target, but executives, especially at the CEO level, are where the big scores are for criminals (a form of targeting sometimes called whaling). By necessity, most organisations’ executives are set up to be publicly visible, and scammers take advantage of this. As has been mentioned, this is one of the toughest forms of attack to defend against.

If the social engineering component is designed to open the network to malware abuse, then we also need to consider the overall security infrastructure. Security software, updates, firewalls, and more all become important tools in the war against spear phishing—especially given what can come after the initial foot in the door attack.

Tools such as spam filtering and detection are great for random, casual attacks, but given the direct nature of spear phishing, it may well be a bridge too far for automation to flag as suspicious. Dedicated, ongoing training is important at all levels of the business, alongside not getting into the habit of blaming employees and third parties when things go wrong (and they will, eventually). You don’t want people less likely to report incidents out of fear of getting into trouble—it’s not productive and won’t help anybody.

Tools to aid in reporting spear phishing attacks, either dedicated apps or something web-based inside the network, are always useful. It’s also good to ensure departments have at least some idea how important business processes work in other departments. Securing the organization is a little easier when unrelated department A is an additional layer of defence for unrelated department B. Pay attention to HR, accounting, and top line exec interaction.

If your organisation hasn’t considered what to lock down yet, there’s never been a better time. Europol’s EC3 report on spear phishing was released late last year and contains a wealth of information on the subject for those wanting to dive deeper.

Ponder all forms of phishing, see which one(s) may be the biggest danger to your organisation and your employees, and start figuring out how best to approach the issue. You won’t regret it—but the scammers certainly will.

The post Spear phishing 101: what you need to know appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Dweb: Building Cooperation and Trust into the Web with IPFS

Mozilla Hacks - Wed, 08/29/2018 - 14:43

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

Some projects start small, aiming for incremental improvements. Others start with a grand vision, leapfrogging today’s problems by architecting an idealized world. The InterPlanetary File System (IPFS) is definitely the latter – attempting to replace HTTP entirely, with a network layer that has scale, trust, and anti-DDOS measures all built into the protocol. It’s our pleasure to have an introduction to IPFS today from Kyle Drake, the founder of Neocities and Marcin Rataj, the creator of IPFS Companion, both on the IPFS team at Protocol Labs -Dietrich Ayala

IPFS – The InterPlanetary File System

We’re a team of people all over the world working on IPFS, an implementation of the distributed web that seeks to replace HTTP with a new protocol that is powered by individuals on the internet. The goal of IPFS is to “re-decentralize” the web by replacing the location-oriented HTTP with a content-oriented protocol that does not require trust of third parties. This allows for websites and web apps to be “served” by any computer on the internet with IPFS support, without requiring servers to be run by the original content creator. IPFS and the distributed web unmoor information from physical location and singular distribution, ultimately creating a more affordable, equal, available, faster, and less censorable web.

IPFS aims for a “distributed” or “logically decentralized” design. IPFS consists of a network of nodes, which help each other find data using a content hash via a Distributed Hash Table (DHT). The result is that all nodes help find and serve web sites, and even if the original provider of the site goes down, you can still load it as long as one other computer in the network has a copy of it. The web becomes empowered by individuals, rather than depending on the large organizations that can afford to build large content delivery networks and serve a lot of traffic.

The IPFS stack is an abstraction built on top of IPLD and libp2p:

Hello World

We have a reference implementation in Go (go-ipfs) and a constantly improving one in JavaScript (js-ipfs). There is also a long list of API clients for other languages.

Thanks to the JS implementation, using IPFS in web development is extremely easy. The following code snippet…

  • Starts an IPFS node
  • Adds some data to IPFS
  • Obtains the Content IDentifier (CID) for it
  • Reads that data back from IPFS using the CID

<script src=""></script>

Open Console (Ctrl+Shift+K)

<script>
const ipfs = new Ipfs()
const data = 'Hello from IPFS, <YOUR NAME HERE>!'
// Once the ipfs node is ready
ipfs.once('ready', async () => {
  console.log('IPFS node is ready! Current version: ' + (await ipfs.version()).version)
  // convert your data to a Buffer and add it to IPFS
  console.log('Data to be published: ' + data)
  const files = await ipfs.files.add(ipfs.types.Buffer.from(data))
  // 'hash', known as CID, is a string uniquely addressing the data
  // and can be used to get it again. 'files' is an array because
  // 'add' supports multiple additions, but we only added one entry
  const cid = files[0].hash
  console.log('Published under CID: ' + cid)
  // read data back from IPFS: CID is the only identifier you need!
  const dataFromIpfs = await ipfs.files.cat(cid)
  console.log('Read back from IPFS: ' + String(dataFromIpfs))
  // Compatibility layer: HTTP gateway
  console.log('Bonus: open at one of public HTTP gateways: ' + cid)
})
</script>

That’s it!

Before diving deeper, let’s answer key questions:

Who else can access it?

Everyone with the CID can access it. Sensitive files should be encrypted before publishing.

How long will this content exist? Under what circumstances will it go away? How does one remove it?

The permanence of content-addressed data in IPFS is intrinsically bound to the active participation of peers interested in providing it to others. It is impossible to remove data from other peers, but if no peer is keeping it alive, it will be “forgotten” by the swarm.

The public HTTP gateway will keep the data available for a few hours; if you want to ensure long-term availability, pin important data at nodes you control. Try IPFS Cluster: a stand-alone application and a CLI client to allocate, replicate, and track pins across a cluster of IPFS daemons.

Developer Quick Start

You can experiment with js-ipfs to make simple browser apps. If you want to run an IPFS server you can install go-ipfs, or run a cluster, as we mentioned above.

There is a growing list of examples, and make sure to see the bi-directional file exchange demo built with js-ipfs.

You can add IPFS to the browser by installing the IPFS Companion extension for Firefox.

Learn More

Learn about IPFS concepts by visiting our documentation website at

Readers can participate by improving documentation, visiting, developing distributed web apps and sites with IPFS, and exploring and contributing to our git repos and various things built by the community.

A great place to ask questions is our friendly community forum:
We also have an IRC channel, #ipfs on Freenode (or on Matrix). Join us!

The post Dweb: Building Cooperation and Trust into the Web with IPFS appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

Dweb: Building a Resilient Web with WebTorrent

Mozilla Hacks - Wed, 08/15/2018 - 14:49

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

The web is healthy when the financial cost of self-expression isn’t a barrier. In this installment of the Dweb series we’ll learn about WebTorrent – an implementation of the BitTorrent protocol that runs in web browsers. This approach to serving files means that websites can scale with as many users as are simultaneously viewing the website – removing the cost of running centralized servers at data centers. The post is written by Feross Aboukhadijeh, the creator of WebTorrent, co-founder of PeerCDN and a prolific NPM module author… 225 modules at last count! –Dietrich Ayala

What is WebTorrent?

WebTorrent is the first torrent client that works in the browser. It’s written completely in JavaScript – the language of the web – and uses WebRTC for true peer-to-peer transport. No browser plugin, extension, or installation is required.

Using open web standards, WebTorrent connects website users together to form a distributed, decentralized browser-to-browser network for efficient file transfer. The more people use a WebTorrent-powered website, the faster and more resilient it becomes.


The WebTorrent protocol works just like BitTorrent protocol, except it uses WebRTC instead of TCP or uTP as the transport protocol.

In order to support WebRTC’s connection model, we made a few changes to the tracker protocol. Therefore, a browser-based WebTorrent client or “web peer” can only connect to other clients that support WebTorrent/WebRTC.

Once peers are connected, the wire protocol used to communicate is exactly the same as in normal BitTorrent. This should make it easy for existing popular torrent clients like Transmission, and uTorrent to add support for WebTorrent. Vuze already has support for WebTorrent!

Getting Started

It only takes a few lines of code to download a torrent in the browser!

To start using WebTorrent, simply include the webtorrent.min.js script on your page. You can download the script from the WebTorrent website or link to the CDN copy.

<script src="webtorrent.min.js"></script>

This provides a WebTorrent function on the window object. There is also an npm package available.

var client = new WebTorrent()

// Sintel, a free, Creative Commons movie
var torrentId = 'magnet:...' // Real torrent ids are much longer.

var torrent = client.add(torrentId)

torrent.on('ready', () => {
  // Torrents can contain many files. Let's use the .mp4 file
  var file = torrent.files.find(file =>'.mp4'))
  // Display the file by adding it to the DOM.
  // Supports video, audio, image files, and more!
  file.appendTo('body')
})

That’s it! Now you’ll see the torrent streaming into a <video> tag in the webpage!

Learn more

You can learn more at, or by asking a question in #webtorrent on Freenode IRC or on Gitter. We’re looking for more people who can answer questions and help people with issues on the GitHub issue tracker. If you’re a friendly, helpful person and want an excuse to dig deeper into the torrent protocol or WebRTC, then this is your chance!



The post Dweb: Building a Resilient Web with WebTorrent appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

Dweb: Social Feeds with Secure Scuttlebutt

Mozilla Hacks - Wed, 08/08/2018 - 16:01

In the series introduction, we highlighted the importance of putting people in control of their social interactions online, instead of allowing for-profit companies to be the arbiters of hate speech or harassment. Our first installment in the Dweb series introduces Secure Scuttlebutt, which envisions a world where users are in full control of their communities online.

In the weeks ahead we will cover a variety of projects that represent explorations of the decentralized/distributed space. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

This post is written by André Staltz, who has written extensively on the fate of the web in the face of mass digital migration to corporate social networks, and is a core contributor to the Scuttlebutt project. –Dietrich Ayala

Getting started with Scuttlebutt

Scuttlebutt is a free and open source social network with unique offline-first and peer-to-peer properties. As a JavaScript open source programmer, I discovered Scuttlebutt two years ago as a promising foundation for a new “social web” that provides an alternative to proprietary platforms. The social metaphor of mainstream platforms is now a more popular way of creating and consuming content than the Web is. Instead of attempting to adapt existing Web technologies for the mobile social era, Scuttlebutt allows us to build a new ecosystem from scratch.

A local database, shared with friends

The central idea of the Secure Scuttlebutt (SSB) protocol is simple: your social account is just a cryptographic keypair (your identity) plus a log of messages (your feed) stored in a local database. So far, this has no relation to the Internet; it is just a local database where your posts are stored in an append-only sequence, which lets you write status updates as you would in a personal diary. SSB becomes a social network when those local feeds are shared among computers through the internet or through local networks. The protocol supports peer-to-peer replication of feeds, so that you can have local (and full) copies of your friends’ feeds, and update them whenever you are online. One implementation of SSB, Scuttlebot, uses Node.js and allows UI applications to interact with the local database and the network stack.

Using Scuttlebot

While SSB is being implemented in multiple languages (Go, Rust, C), its main implementation at the moment is the npm package scuttlebot and Electron desktop apps that use Scuttlebot. To build your own UI application from scratch, you can set up Scuttlebot plus a localhost HTTP server to render the UI in your browser.

Run the following npm command to add Scuttlebot to your Node.js project:

npm install --save scuttlebot

You can use Scuttlebot locally using the command line interface, to post messages, view messages, connect with friends. First, start the server:

$(npm bin)/sbot server

In another terminal you can use the server to publish a message in your local feed:

$(npm bin)/sbot publish --type post --text "Hello world"

You can also consume invite codes to connect with friends and replicate their feeds. Invite codes are generated by pub servers owned by friends in the community, which act as mirrors of the community’s feeds. Using an invite code means the server will allow you to connect to it and will mirror your data too.

$(npm bin)/sbot invite.accept $INSERT_INVITE_CODE_HERE

To create a simple web app to render your local feed, you can start the scuttlebot server in a Node.js script (with dependencies ssb-config and pull-stream), and serve the feed through an HTTP server:

// server.js
const fs = require('fs');
const http = require('http');
const pull = require('pull-stream');
const sbot = require('scuttlebot/index').call(null, require('ssb-config'));

http
  .createServer((request, response) => {
    if (request.url.endsWith('/feed')) {
      pull(
        sbot.createFeedStream({live: false, limit: 100}),
        pull.collect((err, messages) => {
          response.end(JSON.stringify(messages));
        }),
      );
    } else {
      response.end(fs.readFileSync('./index.html'));
    }
  })
  .listen(9000);

Start the server with node server.js, and upon opening localhost:9000 in your browser, it should serve the index.html:

<html>
  <body>
    <script>
      fetch('/feed')
        .then(res => res.json())
        .then(messages => {
          document.body.innerHTML = `
            <h1>Feed</h1>
            <ul>${messages
              .filter(msg => msg.value.content.type === 'post')
              .map(msg =>
                `<li>${msg.value.author} said: ${msg.value.content.text}</li>`
              )
            }</ul>
          `;
        });
    </script>
  </body>
</html>

Learn more

SSB applications can accomplish more than social messaging. Secure Scuttlebutt is being used for Git collaboration, chess games, and managing online gatherings.

You build your own applications on top of SSB by creating or using plug-ins for specialized APIs or different ways of querying the database. See secret-stack for details on how to build custom plugins. See flumedb for details on how to create custom indexes in the database. Also there are many useful repositories in our GitHub org.

To learn about the protocol that all of the implementations use, see the protocol guide, which explains the cryptographic primitives used, and data formats agreed on.

Finally, don’t miss the frontpage, which explains the design decisions and principles we value. We highlight the important role that humans have in internet communities, which should not be delegated to computers.

The post Dweb: Social Feeds with Secure Scuttlebutt appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

Introducing the Dweb

Mozilla Hacks - Tue, 07/31/2018 - 14:00

The web is the most successful programming platform in history, resulting in the largest open and accessible collection of human knowledge ever created. So yeah, it’s pretty great. But there are a set of common problems that the web is not able to address.

Have you ever…

  • Had a website or app you love get updated to a new version, and you wished to go back to the old version?
  • Tried to share a file between your phone and laptop or tv or other device while not connected to the internet? And without using a cloud service?
  • Gone to a website or service that you depend on, only to find it’s been shut down? Whether it got bought and enveloped by some internet giant, or has gone out of business, or whatever, it was critical for you and now it’s gone.

Additionally, the web is facing critical internet health issues, seemingly intractable due to the centralization of power in the hands of a few large companies who have economic interests in not solving these problems:

  • Hate speech, harassment and other attacks on social networks
  • Repeated attacks on Net Neutrality by governments and corporations
  • Mass human communications compromised and manipulated for profit or political gain
  • Censorship and whole internet shutdowns by governments

These are some of the problems and use-cases addressed by a new wave of projects, products and platforms building on or with web technologies but with a twist: They’re using decentralized or distributed network architectures instead of the centralized networks we use now, in order to let the users control their online experience without intermediaries, whether government or corporate. This new structural approach gives rise to the idea of a ‘decentralized web’, often conveniently shortened to ‘dweb’.

You can read a number of perspectives on centralization, and why it’s an important issue for us to tackle, in Mozilla’s Internet Health Report, released earlier this year.

What’s the “D” in Dweb?!

The “d” in “dweb” usually stands for either decentralized or distributed.
What is the difference between distributed vs decentralized architectures? Here’s a visual illustration:

(Image credit:, your best source for technical clip art with animals)

In centralized systems, one entity has control over the participation of all other entities. In decentralized systems, power over participation is divided between more than one entity. In distributed systems, no one entity has control over the participation of any other entity.

Examples of centralization on the web today are the domain name system (DNS), servers run by a single company, and social networks designed for controlled communication.

A few examples of decentralized or distributed projects that became household names are Napster, BitTorrent and Bitcoin.

Some of these new dweb projects are decentralizing identity and social networking. Some are building distributed services in or on top of the existing centralized web, and others are distributed application protocols or platforms that run the web stack (HTML, JavaScript and CSS) on something other than HTTP. Also, there are blockchain-based platforms that run anything as long as it can be compiled into WebAssembly.

Here We Go

Mozilla’s mission is to put users in control of their experiences online. While some of these projects and technologies turn the familiar on its head (no servers! no DNS! no HTTP(S)!), it’s important for us to explore their potential for empowerment.

This is the first post in a series. We’ll introduce projects that cover social communication, online identity, file sharing, new economic models, as well as high-level application platforms. All of this work is either decentralized or distributed, minimizing or entirely removing centralized control.

You’ll meet the people behind these projects, and learn about their values and goals, the technical architectures used, and see basic code examples of using the project or platform.

So leave your assumptions at the door, and get ready to learn what a web more fully in users’ control could look like.

Note: This post is the introduction. The following posts in the series are listed below.

The post Introducing the Dweb appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

