Techie Feeds

What should a US federal data privacy law ideally include?

Malwarebytes - Wed, 07/10/2019 - 15:00

In the constant David-and-Goliath struggle between digital privacy advocates and corporate privacy invaders, the question of how to legally protect Americans with a comprehensive, federal data privacy law provides conflicting answers. Advocates want protections, which Big Tech interprets as restrictions.

As of today, there is no one digital privacy law to rule them all. A few state laws protect consumer privacy here in the US, but overarching federal legislation, comparable to the General Data Protection Regulation (GDPR) in Europe, has yet to materialize.

US-based corporations must comply with GDPR if they have a global presence, but that’s only for their European customers—and many have found convenient workarounds. Who will protect the American user? Smaller tech? Privacy-forward tech? What about we-don’t-have-a-lobbying-war-chest tech? How do they feel about a federal privacy law?

For months, Malwarebytes Labs has reported on data privacy laws in the United States and abroad. But the question of federal legislation that applies to the entire country has gone unanswered, as multiple Senate proposals have yet to move forward.

Further, despite Big Tech’s recently avowed commitment to regulation, those same companies are reportedly funding efforts to dismantle newly enacted stateside data privacy protections.

But earlier this year, a group of tech companies stood opposed. They wanted to strengthen one of those same privacy protections. This tech group included some of the most recognizable company names in user privacy: DuckDuckGo, Ghostery, ProtonMail, Lavabit, Brave, Vivaldi, Purism, and Disconnect.

We asked those companies to broaden their sights beyond state legislation. What did they want, if anything, from a federal data privacy law for the United States?

What’s the goal?

For many of these privacy-forward companies, a federal data privacy law would be far from restrictive. Instead, they consider it necessary.

Todd Weaver is the founder and chief executive of Purism. He supports a federal data privacy law, so long as it isn’t stripped of meaningful user protections and doesn’t create barriers to success for startups and mid-sized companies. Federal legislation could be, Weaver said, the one way to finally defend the public from an ongoing digital privacy crisis.

“We’re talking about the exploitation of people in the digital world, and this is a giant problem,” Weaver said. He continued:

“The problem can be boiled down to things that nobody should ever know. Those are where people are, what people do, and who talks to whom.”

In the US, though, those pieces of information are far from protected. Where we are, what we do, and who we talk to fuel a massive corporate surveillance machine driven by social media behemoths, aggressive online tracking, and unseen data brokers, all motivated by continuously climbing advertising revenue. No current law forbids much of this.

So how do we fix it? Here are a few ideas from privacy advocates.

Like the CCPA…but better

Last year, California’s then-governor Jerry Brown signed the California Consumer Privacy Act (CCPA). Effective January 1, 2020, the CCPA grants Californians the right to know what data is collected on them and whether that data is sold, the right to opt out of those sales, and the right to access that data.

In April, privacy search engine DuckDuckGo, joined by 23 other technology companies, sent a letter to the California Assembly’s Privacy Committee asking that the law be bolstered. The requested improvements, DuckDuckGo wrote, would include the right to opt out of having information shared—not just sold—and the right to sue companies that violated any privacy provision of the CCPA.

Helen Horstmann-Allen, chief operating officer at email provider Fastmail (which signed onto DuckDuckGo’s letter), said she would appreciate seeing legislation similar to the CCPA go national.

“We were pleased to see California take the lead with their privacy laws to reflect how companies do business today. Expanding the scope of privacy legislation recognizes that companies don’t need to sell data to violate consumer privacy,” Horstmann-Allen said. “We’d love to see this type of legislation move on the national level as well. Privacy rights shouldn’t end at the state line.”

Jeremy Tillman, director of product at the ad-blocking browser extension Ghostery, made similar comments in a 2018 opinion piece for The Hill:

“If there is serious traction for federal consumer privacy legislation, which there absolutely should be, the California Consumer Protection law can serve as a solid template to model future laws after.”

A consumer’s right to sue for privacy violations

California’s privacy law received a major setback this year when a proposed amendment did not pass one of the state’s Senate committees. The amendment, SB 561, would have given Californians the right to sue a company that violated any privacy rights described in the CCPA.

Currently, CCPA only gives Californians the right to sue a company for the harm of a data breach. Though a novel inclusion when compared to the dearth of privacy protections across the nation, some argue that broader opportunities to go to court are needed.  

“If you can’t sue or do anything to go after these companies that are committing these atrocities, where does that leave us?” Weaver said. “We’ve already seen that with the CCPA in California.”

At least 40 bills have been introduced in California with the near-uniform purpose of amending the CCPA into a weaker version of itself. AB 846, for example, would have limited the CCPA’s discrimination prohibition. AB 873 would have pared down the definition of individuals’ personal information.

More attempts to weaken the CCPA remain, Weaver said.

“One of those bills is just about defanging the entire regulation,” Weaver said. “If you do that, if you defang, [the law] is just paper.”

Transparent data collection practices

Ghostery’s Tillman echoed the above sentiments that any federal data privacy legislation should “hold big tech accountable for their deceptive data collection practices,” but he added:

“[It] should require that any data collection occur as part of a transparent, easy-to-understand transaction where the cost to consumers is clear, enabling them to be knowing and voluntary participants in an ad-supported and data-driven economy.”

Design for interoperability with GDPR

Johnny Ryan, chief policy officer for the privacy-focused web browser Brave, testified earlier this year before the US Senate Judiciary Committee about a potential federal data privacy law. Such a law, Ryan said, should hew closely to the standards of a popular, across-the-pond framework: the European Union’s General Data Protection Regulation (GDPR).

“We view the GDPR as essential,” Ryan said in an email to Malwarebytes Labs. “It can establish the conditions to allow young, innovative companies like ours to flourish.”

Ryan told the committee that two elements within the GDPR can help both protect Americans’ data and give opportunities for small companies to meaningfully compete with Silicon Valley’s biggest, most entrenched businesses. Those two provisions are the “purpose limitation” principle—which protects people’s data from being used in ways they could not anticipate—and the ability to easily opt out of a company’s data collection.

“These two GDPR tools, the ‘purpose limitation principle’, plus the ease of withdrawal of consent, enable freedom,” Ryan told the committee. “Freedom for the market of users to softly ‘break up’—and ‘un-break up’—big tech companies by deciding what personal data can be used for.”

Further, Ryan said to Malwarebytes Labs, a US federal data privacy law inspired by GDPR—particularly in defining concepts like personal data, opt-in consent, and profiling—will provide technology companies with a streamlined path toward compliance, since many have already worked toward complying with GDPR.

“The standard of protection in a federal privacy law, and the definition of key concepts and tools in it, should therefore be compatible and interoperable with the emerging GDPR de facto standard that is being adopted globally,” Ryan said.

Do not undermine states’ individual data privacy laws

Ever since Americans learned about a European consultancy’s effort to sway the 2016 US Presidential election by harvesting the Facebook data of tens of millions of non-consenting users, individual US states have clamped down hard on the misuse of their residents’ data.

California passed the CCPA. Vermont passed a law regulating data brokers. Maine passed a law placing restrictions on how Internet service providers share Mainers’ personal information.

But those state laws could be in trouble if a federal data privacy law calls for their nullification. Such a provision exists both in Senator Marco Rubio’s data privacy bill and in the draft privacy legislation written by the Center for Democracy and Technology.

This superseding provision—called “pre-emption”—is unacceptable to Brave.

“The federal law should be of equal or higher standard to state laws, and should not undermine state laws,” Ryan said.

A “Digital Bill of Rights”

When explaining what he would like to see in a federal privacy bill, Weaver repeatedly returned to the idea of a “Digital Bill of Rights.” It is an idea his company has already acted on, having written out and implemented several of the principles.

Included in the company’s Digital Bill of Rights are:

  • The right to change providers
    • Users can take all their data and move it to another service
  • The right to protect personal data
    • Users “own and control” the master keys to encrypt their data
  • The right to verify
    • Users can analyze the source code of software operating locally on their machines
  • The right to not be tracked
    • Users know about and have access to all the collections and uses of their data
    • Users can “obtain, correct, or permanently delete personal data”
    • User data that is collected for a purpose is deleted after that purpose is fulfilled
  • The right to access
    • Users will not be “discriminated against nor exploited based on personal data”

A digital bill of rights is a rare find for any technology company, but Weaver explained that Purism is not guided by the same rules as Big Tech. Because Purism has incorporated as a “social purpose company,” it is not obliged to maximize shareholder value. Instead, it is obliged to fulfill the principles written in its articles of incorporation.

Those “Purist Principles,” Weaver explained, guide the company every day.

“It allows everyone, including me, our employees, to advance our causes before caring about profits or maximizing shareholder value,” Weaver said.

One last, important aspect about the rights described in the Purist Principles is that none of them can be removed by a company’s terms of service.

“If this was established at the federal level,” Weaver said, “this is saying ‘These are your rights, and nobody can remove these rights inside a Terms of Service [agreement] that nobody reads.’”

The post What should a US federal data privacy law ideally include? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Enterprise incident response: getting ahead of the wave

Malwarebytes - Wed, 07/10/2019 - 14:19

Enterprise defenders have a tough job. In contrast to small businesses, large enterprises can have thousands of endpoints, legacy hardware from mergers and acquisitions, and business-critical legacy apps that prevent timely patching. Add to that a deluge of indicators and metadata from the perimeter that may represent the early stages of a devastating attack—or may be nothing at all.

So how do network defenders get out from behind the 8-ball? How do leaders bring an effective strategy to bear in mobilizing incident response (IR) resources? To deal with knotty problems like this, security researchers have developed a number of IR models to help bring a maximally sane, efficient strategy to network defense efforts.

The cyber kill chain

In 2011, Lockheed Martin developed the cyber kill chain. Borrowed from the US military, the kill chain essentially breaks most cyberattacks down to their constituent elements, and theorizes that forcing a hard stop to any of the seven phases will prevent the entire attack. So if an attack is caught at the installation phase and remediated, the attacker can no longer proceed to act on objectives. But if endpoint protection can stop an attack at the delivery phase, so much the better.

The general idea that makes the kill chain such an appealing way of looking at an attack is that you can’t block everything. Malspam will get through perimeter defenses. Reconnaissance will sometimes happen whether you like it or not. Exploitation will definitely happen with that one employee who is committed to clicking on everything.

So rather than throwing up a Maginot line of ever-increasing defenses at ever-escalating costs, the kill chain suggests that defenders have seven opportunities to shut down an attack, and can fight on a battlefield of their choosing. While it would be best to identify an attack at the reconnaissance phase, killing it at the delivery phase can keep the network just as safe, without burning out your SOC by expecting them to catch everything.
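The idea above can be sketched in a few lines of code. This is an illustrative model only: the phase names follow Lockheed Martin's published chain, but the functions are hypothetical, not part of any real defensive tooling.

```python
# Lockheed Martin's seven kill chain phases, in attack order.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objectives",
]

def earliest_disruption(detections):
    """Given the set of phases where defenses fired, return the
    earliest one. Stopping the attack there halts the whole chain."""
    hits = [phase for phase in KILL_CHAIN if phase in detections]
    return hits[0] if hits else None

def phases_prevented(detections):
    """List every later phase the attacker never reaches."""
    stop = earliest_disruption(detections)
    if stop is None:
        return []  # nothing caught: the full chain may run
    return KILL_CHAIN[KILL_CHAIN.index(stop) + 1:]
```

If endpoint protection fires at delivery, everything from exploitation through actions on objectives is prevented, even though reconnaissance was missed entirely.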

The ATT&CK model

A somewhat more granular model, MITRE’s ATT&CK is a matrix that maps a lengthy list of attacker capabilities to a 12-step attack chain. Often seen as a complement to the kill chain, ATT&CK can be useful for matching tactics, techniques, and procedures (TTPs) already observed to attack chain phases to determine defense priorities. Among the model’s use cases, threat data sharing is one of the most valuable: mapping out a full matrix of observed TTPs is a quick way to share a snapshot of the threat landscape across multiple defensive groups or different organizations.
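As a toy illustration of that threat-data-sharing use case, a matrix snapshot can be as simple as a dictionary keyed by tactic. The tactic names follow ATT&CK Enterprise, but the observed technique entries and the summarizing function are hypothetical, not drawn from a real incident.

```python
# A toy snapshot bucketing observed TTPs by ATT&CK tactic.
observed = {
    "initial-access":   ["spearphishing attachment"],
    "execution":        ["powershell"],
    "persistence":      [],
    "lateral-movement": ["pass the hash"],
    "exfiltration":     [],
}

def active_tactics(matrix):
    """Return the tactics with observed activity: a compact picture
    of how far along the attack chain an adversary has progressed,
    suitable for sharing across defensive groups or organizations."""
    return {tactic for tactic, ttps in matrix.items() if ttps}
```

Here, activity in lateral-movement but not exfiltration suggests an intrusion in progress that has not yet reached its objective, which is useful context when prioritizing response.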

Critiques of IR models

Most critiques of the kill chain and its more recent variants boil down to “what about X?” This is a little misguided: attacker capabilities change over time, and a comprehensive matrix of TTPs would be exhausting to look at, and probably inaccurate in some way. What these models are really meant to do is bring threat intelligence and strategy into the SOC to eliminate blind reactivity. Using any strategic model at all can bring better results than blind monitoring.

Intelligence: the bigger point

The takeaway for the SOC leader or CISO looking to implement an IR model is not to pick the one, singularly correct model. Rather, implementing strategic defense in any form can boost the SOC’s responsiveness, efficiency, and accuracy. Having a well-mapped matrix tying observed indicators to specific attack phases can help prioritize responses, as well as judge the severity of a successful attack caught midstream.

Most importantly, having an incident response model forces SOC staff to respond to an incident in a strategic manner, addressing threats furthest along an attack chain first, and using threat staging to derive intelligence on potential ongoing attacks. As with conventional warfare, beating back attacks and winning the war depends on having a plan.

Stay vigilant, and stay safe.

The post Enterprise incident response: getting ahead of the wave appeared first on Malwarebytes Labs.

How to securely send your personal information

Malwarebytes - Mon, 07/08/2019 - 16:00

This story originally ran on The Parallax and was updated on July 3, 2019.

A few months ago, my parents asked a great security question: How could they securely send their passport numbers to a travel agent? They knew email wasn’t safe on its own.

Standard email indeed isn’t safe for sending high-value personal information such as credit card or passport numbers, according to security experts such as Robert Hansen, CEO of intelligence and analysis firm OutsideIntel, now part of Bit Discovery.

“Email sometimes has good cryptography but often does not,” Hansen says. When sending between Gmail accounts or within a company, he adds, secure transport “probably isn’t an issue.” But people should ask themselves, “Can somebody steal the data when it’s at rest?”

There’s no 100 percent hack-proof way to send your personal information across the Internet. But thanks to the development of end-to-end encryption, which secures data from even the company providing the encryption, there are tools and techniques you can use to make the process safer for you and the identification numbers we use to rule our lives.

Here are three expert tips for securely sending someone your personal information when planning your summer vacation, buying your next house, or just sending documents to your doctor’s office (when they don’t have their own secure messaging system).

Tip 1: Use an app with end-to-end encryption

The use of encryption has been increasing “since the mid-1990s,” notes security expert Bruce Schneier, thanks to a seminal court case allowing companies to work on computer cryptography without having to first seek the government’s permission. 

Some phone apps protect your text messages using end-to-end encryption. We have highlighted several of the best in a guide to apps offering end-to-end encryption. Here are a few we find exceptionally useful for securely sending personal information.

WhatsApp, used by more than 1.5 billion people, is on every major and several minor platforms, including an easy-to-use desktop browser app, and it provides end-to-end encryption by default. If you use WhatsApp (acquired by Facebook in 2014), you use end-to-end encryption. It’s that simple, and its popularity means that you might not have to convince your intended recipient to install it.

WhatsApp’s encryption tech is actually provided by Open Whisper Systems, which makes its own end-to-end encrypted text and voice app, Signal. So which app should you use? Signal arguably has two advantages over WhatsApp, at least from a security perspective. First, Signal doesn’t store any metadata on its chats, while WhatsApp does. Metadata isn’t the content of messages, but it can help identify the type of content being sent. Second, Signal can be set to auto-delete messages, which is effective as long as the recipient hasn’t taken a screenshot or otherwise copied the content of the message.

Signal is also open-source, which means that the code on which it’s built is subject to independent review. WhatsApp’s development is closed; people outside the company can’t poke around in its code. While Signal is only for iPhone and Android, both Signal and WhatsApp can comfortably exist on the same device—they don’t conflict with each other. (Sometimes, however, Signal struggles to let its users go.)

As of July 2019, WhatsApp and Signal are the only two end-to-end encrypted messaging apps for which the advocacy nonprofit Electronic Frontier Foundation offers installation instructions in its Surveillance Self-Defense Tool Guide. The organization elsewhere in its guide recommends the end-to-end encrypted messaging app Wire. Wire works on Android, iOS, and desktops. One of Wire’s benefits is that it doesn’t require you to share your phone number to use the service, instead relying on usernames. That can help minimize the ability of others to track you. But it also stores conversation threads in plaintext when you use it across multiple devices.

End-to-end encrypted Wickr also allows users to delete messages they’ve sent after they’ve been viewed. Once you’ve deleted a message you’ve sent, you don’t have to worry about the recipient’s device storing it. However, because Wickr runs only on iOS and Android, and it has no password recovery method, you might have a hard time convincing your recipient to use it. (Editor’s note: Since this story was originally published, Wickr is still available to all users but is focused on businesses, not consumers.)

Tip 2: If you must use email…

If you must use email—perhaps you’re sending the Panama Papers—strongly consider learning about Pretty Good Privacy (PGP). The challenge with PGP is that not only do you have to use it correctly, with different instructions for Windows, Mac, and Linux, but so does your recipient. You can also consider sending a password-protected ZIP file, as long as the password isn’t in the same email you send.

Electronic Frontier Foundation technologist Jeremy Gillula advises against creating a simple code for sending important numbers, such as changing all 1s to 2s. “If you’re using simple cipher, might as well call up the recipient and tell them over the phone,” he says.
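Gillula’s point is easy to demonstrate: a digit-swapping “code” is its own inverse and hides nothing. A hypothetical sketch (the passport-style number is made up):

```python
# The kind of "simple code" Gillula warns against: swap every 1 for
# a 2 and vice versa. There is no key, so anyone who guesses the
# scheme reverses it with the same one-line lookup used to create it.
SWAP = str.maketrans("12", "21")

def toy_encode(value: str) -> str:
    return value.translate(SWAP)

scrambled = toy_encode("C01X12345")   # "C02X21345"
recovered = toy_encode(scrambled)     # applying it again undoes it
```

Encoding and “cracking” are literally the same operation here, which is why such schemes add no real protection over plaintext.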

Some email networks are encrypted within their own systems. If you know that your recipient is using Gmail, and you’re using Gmail, the content of the messages will be protected from snooping while being sent, Gillula says. “It can thwart a passive eavesdropper, but you’re still susceptible to active attacks.”

Tip 3: Ask questions

If you’re not sure about your recipient’s computer security, ask him or her about it. Hansen tells a story about trying to get a mortgage, and the mortgage company wanted “unbelievable amounts of information. I took one look at their website and found a number of different flaws in it.” 

He ended up finding a larger, more computer-savvy mortgage company. Good starter questions include:

  • Are the data you transmit and the databases that store it encrypted on disk? 
  • Is access to your information systems handled on a per-user basis, or does everybody use the same username and password?

If the data isn’t encrypted on disk and at rest, and if there’s only one username and password for accessing customer data, keep looking for a different service provider, Hansen says. From there, the questions you ask depend on whether you’re working with a travel agent, a health care provider, or a mortgage firm.

The post How to securely send your personal information appeared first on Malwarebytes Labs.

A week in security (July 1 – 7)

Malwarebytes - Mon, 07/08/2019 - 15:08

Last week on Malwarebytes Labs, we explained what to do when you find stalkerware, how cooperating apps and automatic permissions are setting you up for failure, and why you should steer clear of Bitcoin Cash generators.

Other cybersecurity news:
  • A former Chief Information Officer (CIO) of Equifax has been issued a prison sentence for insider trading on the firm’s disastrous data breach before the incident became public knowledge. (Source: ZDNet)
  • A new Ryuk ransomware campaign is spreading globally, according to a warning issued by the UK’s National Cyber Security Centre (NCSC). (Source: DarkReading)
  • Orvibo smart home devices leaked billions of user records including logs that contained everything from usernames, email addresses, and passwords, to precise locations. (Source: VPNMentor)
  • Chinese authorities have decided to spy on foreigners crossing the border by installing spyware on Android phones. (Source: iPhoneHacks)
  • Germany’s cybersecurity agency is working on a set of minimum rules that modern web browsers must comply with in order to be considered secure. (Source: ZDNet)
  • An ongoing attack in the OpenPGP community makes users’ certificates unusable and can essentially break the OpenPGP implementation of anyone who tries to import one of the certificates. (Source: Duo Security)
  • Researchers have discovered the first known malware strain that uses the DNS over HTTPS protocol, dubbed Godlua. (Source: TechSpot)
  • IronPython, darkly: how researchers uncovered an attack on government entities in Europe. (Source: PT Security)
  • Attunity, a company that is currently working with at least half of all Fortune 100 companies, including Netflix, leaked both its clients’ and its own data. (Source: BleepingComputer)
  • The US Cyber Command has issued an alert that hackers have been actively going after CVE-2017-11774. The flaw is a sandbox escape bug in Outlook. (Source: The Register)

Stay safe, everyone!

The post A week in security (July 1 – 7) appeared first on Malwarebytes Labs.

Steer clear of Bitcoin Cash generators

Malwarebytes - Wed, 07/03/2019 - 18:19

Here’s an interesting evolution of a well-worn scam, taking one profit-generating fakeout and turning it into something else entirely.

For years, gamers have been stuck navigating the treacherous waters of fake video game giveaways. With so many genuine gaming giveaways around, you’re never quite sure if a site offering free Xbox points, Steam credits, or downloadable content is going to do what it claims.

Typically, the site will ask you to pick your reward, then “verify you’re a human” or just help a fictitious process along by clicking an ad, filling in a survey, or downloading a file and hoping it isn’t malware.

The gamer never gets their rewards. They may well end up with a few unexpected visitors on their desktops, though.

What’s the change here?

One enterprising individual has clearly had enough of the video game wilderness and decided to try and make money in a less explored realm.

Step up, Bitcoin—or to be more accurate, Bitcoin Cash. Bitcoin Cash is a form of cryptocurrency that went its own way in 2017, and then split again in what I can only call the great Bitcoin Cash war of 2018, when two rival groups imagined vastly different directions for the fledgling currency.

The intention, with or without the split, was a digital coin that functioned more as a currency than a digital investment. It is this fertile ground that sets the scene for the site we’re about to look at: Bitcoin-cash-generator(dot)com.

Getting things started

The website claims to “inject exploits into Bitcoin Cash pools and blockchain.” They attempt to put pressure on visitors right from the start, claiming to limit use of the tool to 30 minutes per IP address, up to a maximum profit of 2.5 BCH. That’s around £815/US$1,024, so it’s a tidy bit of profit for jumping through some hoops. For reference, the minimum amount a visitor can ask for is 0.1 BCH, roughly £32/US$41.

Whatever slice of the pie a visitor picks, they’re going to get a little bit of money back…Or are they?

What hoops do we have to jump through?

Unlike many similar gaming-themed scam sites, surprisingly little. With no social aspect, there’s no real reason to plaster share buttons all over the place or ask to send to friends. This is all about the site visitor only. They simply have to “Enter your Bitcoin cash address bellow [sic]” and move a slider to select their desired amount. (And really, who will pick anything less than the maximum?) Then, they hit the start button.

Pop-ups abound of other IP addresses receiving amounts. “People” in the chatroom confirm it works great. Any hesitation a user might have had is likely gone at this point.

After confirming the desired amount, we’re off to the “this website is doing nothing at all” races.

Constructing the lie

Those familiar with the fake game points/ free gift card websites will know the drill. A collection of random boxes pops up, claiming to be hacking the Gibson. The more vaguely technical sounding it all is, the better—anything that sells the vision of actual, honest-to-goodness exploits doing strange exploity things in the background.

“Injecting transfer requests into the blockchain.” I hate when that happens.

“Connecting to blockchain maintenance channel”

Well of course, it always helps when you connect to the old blockchain maintenance channel.

This one is a particular favourite of mine, as it’s every TV show’s attempt to show you some hacking on a screen in one hilarious image:

It also comes in handy for digging out multiple similar websites apparently using aspects of the same “We’re definitely hacking a blockchain, honest” code.

Multiple claims are made during the supposed hacking process that various attempts have failed to grab the cash, but they continue to persevere with it. Whereas many survey scams are almost instantaneous, these things really stretch out the illusion and make visitors wait a good few minutes while the titanic (fictional) battle rages in the background.

Eventually: success!

Sadly, success comes with a price. At this point, ye olde survey scam would ask you to fill in some offers. The free video game points site would ask you to install a dubious game or spam links across social media.


This one? They need you to make a small donation, because of course they do. The site reads as follows:

The BitcoinCash network requires a small fee to be paid for each transaction that goes to the miners, else a transaction might never be confirmed. To ensure your transaction confirms consistently and reliably, pay the miners fee of 0.00316 BCH for this transaction at: [wallet address]

The request for 0.00316 BCH (roughly £1/US$1.30) is made regardless of whether you ask for the minimum/maximum amount of free cash. It doesn’t scale upwards.

Does this work?

The only thing that does work in all this is website visitors sending small amounts of cash to the people behind the website(s). As mentioned earlier, we’ve seen a few other sites doing much the same thing, such as freebtc(dot)uw(dot)hu and smartcoingenerator(dot)com:

Money trails

One interesting aspect of this type of scam branching out into digital coinland is increased visibility into the site owners’ antics. You can only go so far with survey scams or random social media profiles sending out spam links. Here, however, much of what constitutes a digital transaction is out there in the ether as a matter of public record.

There are entire sub-industries devoted to analysis of Bitcoin transactions and how people make their digital cash flow down the money tubes. Generally, most folks’ experience of watching the Bitcoin wheels go ’round is focused on plain old Bitcoin. Bitcoin Cash is a little different, but you can still take a look behind the scenes.

The various sites we’ve seen offer up different addresses for their “small transactions,” and not all of them are focused on Bitcoin Cash. As for the one used on Bitcoin Cash Generator, the operators do appear to have made a little money so far. It seems doubtful anyone is going to retire from it, though.

Another scam bites the dust

These Bitcoin Cash Generator sites are yet another sub-genre of survey scams that need to be filed under the “Something for nothing” label. If getting your hands on digital currency was this easy, everybody would be doing it. Instead, it’s a unique selling point for a handful of websites lurking in the corners of the net.

The post Steer clear of Bitcoin Cash generators appeared first on Malwarebytes Labs.

Cooperating apps and automatic permissions are setting you up for failure

Malwarebytes - Tue, 07/02/2019 - 16:53

“Hey you. Someone from HR has invited you to a meeting on Thursday. Would you like me to add the appointment to the calendar?”

Receiving an email notification when someone has invited you to a meeting is a feature that many professionals would not like to miss. Being able to log in to certain sites with your Facebook profile might be less indispensable, but it is nevertheless a heavily used functionality. What do these two functions have in common? They both require an integration between different apps, and this opens up some security and privacy risks.

Some practical problems

Recently, we were reminded that Google Calendar notifications in Gmail gave scammers the option to spam users with phishing links to sites that are out to steal user credentials. Basically, scammers were able to craft calendar invitations so that they included a malicious link. Since this is a relatively unknown method, most people wouldn’t think twice before clicking.

Logging into sites with social media profiles more than doubles the privacy risks you run by using either app separately. We say this because the data collected by one app can easily be combined with that of the other—so cybercriminals can come away with double the payday.

You may have seen these login options for Twitter, Google, and Facebook. And Facebook combines these risks with yet another problem. Many people who canceled their Facebook accounts (or thought they did) have found that returning to a site where they used to log in with their Facebook account revives said Facebook profile and opens it up for the world to see again.

Seems easier to just choose Facebook or Google, right?

And we haven’t even touched upon the apps that grab the permission to post on these social media sites on your behalf.

Underlying problems

Before we can start to look for effective countermeasures, we need to understand the real foundation behind these security risks. The most common and well-known problems include:

  • Apps that refuse to work unless they are granted permissions or integrations they shouldn’t need.
  • Apps that grant other apps access to their data and settings.
  • Apps that are downloaded and installed on impulse. We tend to forget about them after we’ve stopped using them, but the data sharing goes on.
  • Jailbreaking, rooting, and sideloading apps. Apps from outside Google Play or the App Store haven’t been through the same security vetting. However, popular games like Fortnite were not available in Google Play, basically forcing fans to compromise their safety to install the game.
  • Lack of awareness of the implications of granting permissions. Even when the permissions are clearly communicated (the app will be able to post to your Twitter account, for example), users have the inclination to think it will be all right to allow “trusted apps” full permissions.

Even though not every app in the Play Store is 100 percent trustworthy, you can be assured that at least some security checks have been performed. Google does require developers to limit their device permission requests to what’s really necessary for the app. And they do block many apps from the Play Store because they may be harmful, but there are always those that manage to slither through.

These are just the measures taken against apps that are potentially harmful. We shouldn’t forget those that invade or risk your privacy. What’s important to remember here is that when you are installing apps from other unknown sources, they most likely didn’t have to pass any scrutiny at all—and are a likely security or privacy risk.

A regular check of your list of apps may result in some good device-cleaning, which not only reduces your attack surface, but also might improve your device’s performance and speed. While you’re at it, check the permissions on some of the apps that you decide to keep. They may not need all of them to do what you want or expect the app to do for you.

When an app asks for permissions, carefully read what it is asking for and let that sink in before you allow it. I know that these requests always seem to come at an inconvenient moment. You are in a hurry and you want that notification out of your way so you can carry on and use the app.

But consider why a gaming app is asking for access to GPS location. Or how come that financial app wants access to all of your contacts. Is the app really worth turning over that private information? Also note that these requests are not limited to the install process. They may come after an update or when you are trying a new feature.
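On Android, the permissions an app can request are declared up front in its manifest, which makes a basic audit scriptable. The sketch below is illustrative only: the manifest snippet is invented, and the “risky” set would have to be tailored to what the app is actually supposed to do.

```python
import xml.etree.ElementTree as ET

# Android manifest attributes live in this XML namespace.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Hypothetical "risky for this kind of app" set; a real audit is context-aware.
RISKY = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
}

def risky_permissions(manifest_xml):
    """List requested permissions from a manifest that appear in the risky set."""
    root = ET.fromstring(manifest_xml)
    requested = [
        elem.get(ANDROID_NS + "name")
        for elem in root.iter("uses-permission")
    ]
    return sorted(p for p in requested if p in RISKY)

manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
</manifest>"""
print(risky_permissions(manifest))  # ['android.permission.ACCESS_FINE_LOCATION']
```

A gaming app whose manifest turns up location or contacts permissions here is exactly the kind of mismatch worth pausing over before tapping “Allow.”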

Partial solutions

Right now, without more user awareness of the security risks of integration, and without the applications, software programs, or social media platforms narrowing down their permissions requests to only what’s necessary to make the program work, there are only partial solutions for those looking for convenient installation or login processes. However, these solutions do improve your overall security posture without sacrificing too many benefits.

When it comes to integrations, there are a few tips we are happy to share.


If you decide to unpair your apps and websites from Facebook, follow the directions below:

  • Under the Facebook menu, go to Settings.
  • Under Security, select Apps and websites, then click on the “Logged in with Facebook” section.
  • Remove all the entries for apps you will no longer be using. You can also see what information each app was able to retrieve from your Facebook profile. Quite an eye-opener.

Google has an informative page in their Help Center about giving third-party apps access to your Google account. It reads:

“Depending on how you use Google products, some of the information in your account may be extra sensitive. When you give access to third-parties, they may be able to read, edit, delete, or share this private information.”

The integration between Gmail and Google calendar can be rendered less automated (and thus less of a security risk) by turning off the automatic calendar invitations feature. Here are the directions:

  • Go to the Event Settings menu in Google Calendar and disable the “Automatically add invitations” option.
  • Enable the “Only show invitations to which I’ve responded” option instead.
  • Also make sure that “Show declined events” in the View Options section is left unchecked.

Twitter has a page similar to Google’s, called About third-party applications and log in sessions, which warns:

“You should be cautious before giving third-party applications access to use your account.”

The page also provides information on how to remove access for sites and apps. Have a look and check for any unexpected guests.

Cooperating apps

I realize that cooperating apps are designed to make our lives easier. After all, it’s frustrating if the left hand doesn’t know what the right hand is doing. And when everything works seamlessly together, our online life has a natural flow. I’m just asking you to give it some thought before you blindly allow integrations and permissions.

It looks as though users have shifted mindsets from “I have nothing to hide” to “They already know everything anyway.” But in both cases, it is true that you don’t have to hand your personal data to “them” on a silver platter, no matter who they are. Your personal information is too valuable to just give away. After all, that’s why cybercriminals (and legitimate organizations) are after it to begin with.

Stay safe out there!

The post Cooperating apps and automatic permissions are setting you up for failure appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (June 24 – 30)

Malwarebytes - Mon, 07/01/2019 - 17:02

Last week on Malwarebytes Labs, we peeled back the mystery on an elusive malware campaign that relied on blank JavaScript injections, detailed for readers our latest telemetry on the tricky GreenFlash Sundown exploit kit, and looked at one of the top campaigns directing traffic toward scareware pages hosted on Microsoft’s Azure Cloud Services.

We also doubled down on our commitment—and significantly increased efforts—to detect stalkerware on victims’ devices.

Other cybersecurity news:

Stay safe, everyone!

The post A week in security (June 24 – 30) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Helping survivors of domestic abuse: What to do when you find stalkerware

Malwarebytes - Mon, 07/01/2019 - 16:51

We’re going to talk about something different today. We’re going to talk about domestic abuse.

Earlier this year, cybersecurity company Kaspersky Lab announced that the latest upgrade to its Android app would inform users about whether their devices were running stealthy, behind-the-scenes monitoring apps sometimes referred to as stalkerware.

This type of software can track unsuspecting victims’ locations, record phone calls, peer into text messages and emails, pry into locally-stored photos and videos, and rifle through web browsing activity, all while hidden from view.

Though often, and shamelessly, advertised as a tool for parents to track the activity of their children, these apps are commonly used against survivors of domestic abuse.

It comes as no surprise. Stalkerware coils around a victim’s digital life, giving abusive partners what they crave: control.

Electronic Frontier Foundation Cybersecurity Director Eva Galperin, who pushed Kaspersky Lab into improving its product, told Motherboard at the time of the company’s announcement:

“I would really like to see other [antivirus] companies follow suit, so that I can recommend them instead of just one company that has shown that they are committed to doing this… I’d like to see this be the industry standard so it doesn’t matter which product you’re downloading.”

Malwarebytes lives up to this commitment, as we have for years.

But starting today, we’re going to do more than improve our stalkerware detection capabilities. We’re going to help survivors understand this danger and know what to do if they’re being digitally tracked.

Finding proof of stalkerware

Stalkerware presents a unique detection problem for its victims—it often hides itself from public view, and any attempt to find it could be recorded by the stalkerware itself.

Further, the US government has done little to help. Despite a previous FBI investigation that led to the court-ordered shutdown of the stalkerware app StealthGenie, countless other stalkerware apps still operate today.

CitizenLab, a research institution at the University of Toronto that focuses on technology and human rights, recently produced a study on the harms of stalkerware. Researchers studied eight apps based on their monitoring capabilities and relative popularity—analyzed through Google Trends, web searches, and “best of” lists. The study focused on the following apps which are used in the US, Canada, and Australia: FlexiSpy, Highster Mobile, Hoverwatch, Mobistealth, mSpy, TeenSafe, TheTruthSpy, and Cerberus.

Malwarebytes Labs has previously written about the technological signs of stalkerware—quickly-depleting battery life, increased data usage, and longer response times than usual—but we wanted to explore what stalkerware looks like from a behavioral aspect. We spoke to multiple domestic abuse networks and advocacy groups, and one troubling fact arose repeatedly:

Symptoms of stalkerware are not proof of stalkerware.

Erica Olsen, director of the Safety Net project for the National Network to End Domestic Violence, said her organization consistently hears stories from domestic abuse survivors who are struggling to explain how their partners know about their phone calls, text message conversations, emails, and even visited locations.

“Survivors could come to law enforcement and say ‘My ex knows about the text messages I sent, and I don’t know how they know that,’” Olsen said. But, she said, the signs don’t always guarantee the use of stalkerware.

“Could the [recipient] have just told [the ex]?” Olsen said.

In determining the presence of stalkerware, Olsen said survivors should assess several factors:

  • Does their abusive partner have physical access to their device—a common situation for couples who live together?
  • Does their abusive partner know the passcode to unlock the device—another situation that depends on whether the abusive partner even allows their victim that level of agency and freedom?
  • Can their abusive partner view call logs on their device, learning who was called, how often, and for how long?
  • Does their abusive partner know the content of phone calls?
  • For domestic abuse survivors who have physically escaped their abuser, do their abusers still know about recently-taken photographs, locations visited, and any information that is typically locked behind an account or device passcode?

Further, Olsen said that domestic abuse survivors should study how the private information is being used by an abuser.

“Abusers will end up hinting at all the things they know that they shouldn’t know,” Olsen said. “That is the most frequent thing we hear from survivors, advocates, and law enforcement—the number one thing is identifying that an abuser knows way too much.”

Olsen continued: “They know text messages, emails, they have access to accounts logged into via [the survivor’s] phone. That’s when we immediately have to start talking to survivors about what they think is safe.”

While every safety plan is unique, and every domestic abuse situation nuanced, Olsen offered one top-level piece of advice that applies to all survivors: Trust yourself. You know the feeling of being watched and controlled—whether through physical, emotional, mental, or digital means. You should trust those feelings and never discount your own concerns. 

The following ideas do not present a catch-all “solution” to finding stalkerware on a device. Instead, they present information that will hopefully guide survivors toward safety.

Evaluate your own level of safety

Determining what is safe for you is crucial. What you discover in this process can impact what other steps you take after learning about or suspecting the presence of stalkerware on your device.

Ask yourself several questions about what steps you can reliably take.

  • Do you have people you can ask for support?
  • Can you communicate with those people from a safe, non-monitored device?
  • Can you change your social media account passwords?
  • Can you change your own device passcode?
  • Are you allowed to have a device passcode?
  • Can you install antivirus and anti-malware programs on your own device?
  • What would be the consequences of your abusive partner discovering that you are trying to get rid of stalkerware?
  • Do you want to bring in law enforcement?

If all this seems overwhelming, remember that the National Domestic Violence Hotline is there to help.

Your every move might be recorded

When determining your own level of safety, it’s important to remember that everything you do on your compromised device could be recorded and watched by an abusive partner. That means your web browsing activity, your text messages, your emails, and all of your written correspondence could be far from private.

Know what apps are on your phone and what permissions they’re allowed

Olsen advised that domestic abuse survivors know what apps are on their devices at any given moment. While this guideline does not reliably catch hidden stalkerware apps, it does give you an opportunity to understand what other apps might have been installed on your device in an attempt to surveil you.

Remember, abusive partners do not need stalkerware to victimize and control their partners. Instead, Olsen said, abusers can rely on technology misuse.

“The vast majority of our work is in looking at misuses of general technologies that have 100 different good uses, that are never intended to be misused,” Olsen said. “The ownership [of abuse] is always on the abuser for their behavior. If you remove technology, you’re still going to have an abusive person.”

Shaena Spoor, program assistant with W.O.M.A.N. Inc., offered a couple of examples of technology misuses that she has heard about.

“We had some concerns with Snap Maps,” Spoor said about the Snapchat feature rolled out in 2017 that let users find their friends’ locations. Every user who agreed to share their location had it updated with every use of the app.

“For some people, they didn’t realize that locations had been [turned] on,” Spoor said. “If you don’t use the app very often, you’re just sitting on a map, super findable.”

Spoor said she also heard of domestic abuse survivors whose locations were tracked through the use of the location-tracking product Tile. Though sold to legitimately track luggage, wallets, and purses, domestic abusers can also sneak the small plastic device into your jacket or work bag. When the abuser loads up the Tile app, they get a real-time fix on that device, and thus on your location.

“People use Tile, for example, and hide them in survivor’s stuff,” Spoor said. “[Survivors] are showing up at domestic violence shelters and finding it hidden in a bag.”

Create new online account logins and passwords from a safe device

This one comes straight from the National Network to End Domestic Violence’s Technology Safety project. You should think about making new account logins and passwords.

As one of the Technology Safety project’s many resources said:

“If you suspect that anyone abusive can access your email or Instant Messaging (IM), consider creating additional email/IM accounts on a safer computer. Do not create or check new email/IM accounts from a computer that might be monitored.”

The Tech Safety resource also advises you to open new accounts with no identifying information, like real names or nicknames. This step should be considered for all important online accounts, including your banking and social media accounts.

Always remember to do this from a safe computer that is not being monitored.

Factory reset or toss your device

Multiple organizations recommended that any stalkerware victim take immediate steps to toss, or wipe clean, their current device. There are a few options:

  • Toss your device and buy a new one
  • Factory reset your device
  • Keep your compromised device, but purchase a new phone that you use for confidential conversations

Olsen advised that every situation has its own unique challenges, and she urged domestic abuse survivors to consider the potential outcomes of whatever option they choose. She said her organization works closely with domestic abuse survivors to come up with the best plan for them.

“We think about the abuser, who no longer has remote access to [the survivor]—they will try to get physical access, and that is a real concern which absolutely could happen,” Olsen said. “If the survivor thinks that [might happen], we try alternatives—buying a pay-as-you-go phone, use it to have critical conversations, private ones, but still keep the regular phone for silly things and to keep the [abuser] at bay.”

Chris Cox, founder of Operation Safe Escape, which works directly with domestic abuse networks, shelters, and law enforcement to provide operational and cybersecurity support, echoed similar advice.

“What we always advise, consistently, if an abuser ever had access to the device, leave it behind. Never touch it. Get a burner,” Cox said, using the term “burner” to refer to a prepaid phone, purchased with cash. “You have to assume the device and the accounts are compromised.”

Further, Cox cautioned against survivors trying to wipe stalkerware from a device, as it could introduce a “new vulnerability” in which an abuser learns—through the stalkerware itself—that their victim is trying to thwart the abuser.

Instead, Cox said, “whenever possible, the device is left behind.”

Approach law enforcement

Working with the police is a step taken by survivors who want to take legal action, whether that means eventually obtaining a restraining order or bringing charges against their abuser.

Because of this step’s nuance, you should proceed with caution.

Olsen said that, of the successful attempts she has learned of survivors working with local police, the survivors already have a firm safety plan in place, and they have built a relationship with domestic abuse shelters and advocates. She said that, together with their support network, survivors have managed to get confessions out of their abusers.

But, Olsen stressed, trying to get an abuser to admit to their abusive and potentially criminal behavior is not a step to be taken alone.

“I do not suggest doing this in isolation, but if they’re working with advocates, I have heard of some survivors strategically communicating with abusers,” Olsen said. “It is amazing how many times abusers admit to [using stalkerware].”

Also, survivors should be wary of how police can be used against them, said Cox.

“Abusers, as a whole, are adept at using the law as a weapon,” Cox said. “If a phone belongs to a victim, and it happens to be in the abuser’s name, if the victim leaves and the abuser reports it stolen, [law enforcement] are used as a weapon to track the victim down.”

Call the National Domestic Violence Hotline

If you find stalkerware on your device, or you have strong suspicions about an abusive partner knowing too much about your personal life—with details from text messages and knowledge of private photos—call the hotline from a safe device.

The number for the National Domestic Violence Hotline is 1−800−799−7233.

The hotline’s trained experts can help you find the safest path forward, all while maintaining your confidentiality.

Seek help from various online resources

If you want to find more information online, from a safe device, read through any of these resources about dealing with domestic abuse, stalkerware, and the misuse of technology:

Malwarebytes has also written a few articles on types of technology, malicious or not, that are often abused to their victims’ detriment. Awareness of what’s out there and how it can be used against you can help you stay safe:

And if you are able to install an anti-malware program on your mobile device, running a scan with Malwarebytes for Android can help you detect and remove stalkerware apps—as well as keep a log of which apps were installed on your phone, which is valuable information if you choose to work with law enforcement.

We’re here for you. We care. And we’ll always do what we can to help users have a safe online—and offline—experience with technology.

Stay tuned for our next article in our stalkerware series, which will explore which monitoring apps are safe for parents to use, and which should be avoided. Stay safe.

The post Helping survivors of domestic abuse: What to do when you find stalkerware appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Fake jquery campaign leads to malvertising and ad fraud schemes

Malwarebytes - Thu, 06/27/2019 - 16:14

Recently we became aware of new domains used by an old malware campaign known as ‘fake jquery’, previously documented by web security firm Sucuri. Thousands of compromised websites are injected with a reference to an external JavaScript called jquery.js.
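Site owners can hunt for this kind of injection in their own pages. The sketch below is a rough illustration, not a complete detector: the legitimate-host list is far from exhaustive, and `evil-cdn.example` is a made-up domain.

```python
import re
from urllib.parse import urlparse

# Known-good jQuery hosts (illustrative, not exhaustive).
LEGIT_HOSTS = {"code.jquery.com", "ajax.googleapis.com", "cdnjs.cloudflare.com"}

SCRIPT_SRC_RE = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def fake_jquery_refs(html):
    """Flag script URLs that look like jquery.js but load from unknown hosts."""
    flagged = []
    for src in SCRIPT_SRC_RE.findall(html):
        parsed = urlparse(src)
        if "jquery" in parsed.path.lower() and parsed.netloc not in LEGIT_HOSTS:
            flagged.append(src)
    return flagged

page = '<script src="https://evil-cdn.example/jquery.js"></script>'
print(fake_jquery_refs(page))  # ['https://evil-cdn.example/jquery.js']
```

Running something like this across a site’s rendered pages (rather than just its source files) also catches references injected at the CMS or database level.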

However, there is something quite elusive about this campaign with regard to its payload. Indeed, to many researchers the supposedly malicious JavaScript is always blank.

In this blog we share how we were able to identify the purpose of the fake jquery malware infection by looking for artifacts and employing a variety of User-Agent strings and geolocations.

Unsurprisingly, we found a web of malicious redirects via malvertising campaigns with a strong focus on mobile users who are tricked into installing rogue apps. The end goal is to monetize via fullscreen adverts that pop up on your phone at regular intervals.

Looking for a clue

Our search begins by looking up some of the domains mentioned on Twitter by @Placebo52510486. There are thousands of sites listed by PublicWWW that have been injected with malicious jquery lookalikes.

While we do not know the exact infection vector, many of these websites are running an outdated Content Management System (CMS).

Like other researchers before us, when we replayed the traffic, the supposedly malicious JavaScript was once again empty.

However, with some persistence and luck, we were able to find an archive of this script when it was not empty.

We can see that it contains a redirect to: financeleader[.]co. A cursory check on this domain confirms the host pairs corresponding to those fake jquery domains. It’s worth noting that browsing to the root domain without the special identifier will redirect elsewhere.

Desktop web traffic

There is some geo-targeting involved for the redirections, and clearly desktop users do not appear to be the primary focus here. From a US IP address, you are presented with a bogus site where all items point to the same link, which redirects you to instantcheckmate[.]com.

Associated web traffic:

From a non-US IP, you are redirected to a page that aggressively advertises VPNs:

Associated web traffic:

Mobile web traffic

Once we switch to a mobile User-Agent, Android in particular, we can see a lot more activity and a variety of redirects. For example, in one case we were served a bogus adult site that requires users to download an app in order to play the videos:

Associated web traffic:

This app is malicious (detected as Android/Trojan.HiddenAds.xt by Malwarebytes) and will generate full screen ads at regular intervals.

Traffic monetization and ad fraud

While we encountered some desktop traffic, we believe the primary goal of the fake jquery campaign is to monetize mobile users. This would explain the level of filtering involved to weed out non-qualified traffic.

We weren’t able to get an idea of the scale at play, especially considering that the domain initiating the redirects really only became active in late May. However, given the number of websites that have been compromised, this campaign is quite likely funneling a significant amount of traffic leading to ad fraud.

Malwarebytes users are protected against this campaign both on desktop and mobile.

Indicators of Compromise

Fake jquery domains:


Malicious APKs:
0e67fd9fc535e0f9cf955444d81b0e84882aa73a317d7c8b79af48d91b79ef19
a210c9960edc5362b23e0a73b92b4ce4597911b00e91e7d3ca82632485c5e68d

The post Fake jquery campaign leads to malvertising and ad fraud schemes appeared first on Malwarebytes Labs.

Categories: Techie Feeds

GreenFlash Sundown exploit kit expands via large malvertising campaign

Malwarebytes - Wed, 06/26/2019 - 18:30

Exploit kit activity has been relatively quiet for some time, with the occasional malvertising campaign reminding us that drive-by downloads are still a threat.

However, during the past few days we noticed a spike in our telemetry for what appeared to be a new exploit kit. Upon closer inspection we realized it was actually the very elusive GreenFlash Sundown EK.

The threat actors behind it have a unique modus operandi that consists of compromising ad servers that are run by website owners. In essence, they are able to poison the ads served by the affected publisher via this unique kind of malvertising.

In this blog, we review their latest campaign, responsible for pushing ransomware, Pony, and a coin miner. A number of publishers have been compromised, and this marks the first time we have seen GreenFlash Sundown EK expand widely outside of Asia.

Stealthy compromise

At first, we believed the attack originated from one ad network, but we were able to pinpoint where it came from by reviewing traffic captures. One of the affected publishers is onlinevideoconverter[.]com, a popular site to convert videos from YouTube and other platforms into files. According to SimilarWeb, it drives 200 million visitors per month:

Stats over the past few months show high traffic volume

People navigating to the page to convert YouTube videos into the MP4 format will be sent to the exploit kit, but only after some very careful fingerprinting. The full redirection sequence is shown below:

Web traffic leading to the exploit kit

The redirection mechanism is cleverly hidden within a fake GIF image that actually contains a well obfuscated piece of JavaScript:

Smart way to conceal JavaScript within an image
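A polyglot like this works because tools that trust the file’s magic bytes never look further. A crude detection heuristic, with made-up marker strings and sample bytes, might look like:

```python
# Valid GIF files begin with one of these signatures.
GIF_MAGICS = (b"GIF87a", b"GIF89a")

# Tokens unlikely to occur in real pixel data (illustrative, not exhaustive).
JS_MARKERS = (b"document.", b"window.", b"eval(", b"function(")

def gif_hides_script(data):
    """True if data claims to be a GIF but contains JavaScript-looking tokens."""
    if not data.startswith(GIF_MAGICS):
        return False
    return any(marker in data for marker in JS_MARKERS)

# Fabricated sample: GIF header followed by a script body instead of image data.
polyglot = b"GIF89a" + b"\x00" * 16 + b"window.location='//fastimage.example';"
print(gif_hides_script(polyglot))  # True
```

Real image data can occasionally contain byte runs that collide with such markers, so a production scanner would also verify that the file decodes as a valid image.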

After some painful debugging, we can see that it links to fastimage[.]site:

Debugging the JavaScript reveals the next hop in the chain

The next few sessions contain more interesting code, including a file loaded from fastimage[.]site/uptime.js, which is actually a Flash object.

Another fancy method of performing a covert redirect

This performs the redirection to adsfast[.]site, which we recognize as being part of the GreenFlash Sundown exploit kit. It uses a Flash exploit to deliver its encoded payload via PowerShell:

Leveraging PowerShell is interesting because it allows the attackers to run some pre-checks before deciding whether to drop the payload. For example, in this case it will check that the environment is not a virtual machine. If the environment is acceptable, it delivers a very visible payload, SEON ransomware:
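We haven’t reproduced the actual PowerShell here; the sketch below just illustrates, in Python, the kind of string-matching heuristic such VM pre-checks typically rely on. The hint list and hardware strings are invented for the example.

```python
# Substrings commonly found in hypervisor-reported hardware identifiers.
HYPERVISOR_HINTS = ("vmware", "virtualbox", "qemu", "kvm", "xen", "hyper-v")

def looks_like_vm(manufacturer, model):
    """Rough VM heuristic of the kind malware runs before dropping a payload."""
    haystack = (manufacturer + " " + model).lower()
    return any(hint in haystack for hint in HYPERVISOR_HINTS)

print(looks_like_vm("innotek GmbH", "VirtualBox"))  # True
print(looks_like_vm("Dell Inc.", "XPS 13"))         # False
```

This is also why sandbox-based analysis can miss such payloads: if the analysis environment reports hypervisor strings, the dropper simply declines to run.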

SEON’s ransomware note

The ransomware uses a batch script to perform some of its duties, such as deleting shadow copies:

Batch helper to delete backups

GreenFlash Sundown EK will also drop Pony and a coin miner while victims struggle to decide the best course of action in order to recover their files.

Wider campaign

Our previous encounters with GreenFlash Sundown EK, for example during our winter 2019 exploit kits review, were always limited to South Korea. However, based on our telemetry this campaign is active in North America and Europe, which is an interesting departure for this threat group.

Telemetry stats showing where we found GreenFlash Sundown most active

Malwarebytes users were already protected against these drive-by attacks and we have informed the publisher about the compromise so that they can take action.

Indicators of Compromise

GreenFlash Sundown infrastructure:

SEON ransomware:


Coin miner:

[Update: 2019-06-28] Joseph Chen from Trend Micro has blogged about the return of this campaign, known as ShadowGate.

The post GreenFlash Sundown exploit kit expands via large malvertising campaign appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Recipe for success: tech support scammers zero in via paid search

Malwarebytes - Tue, 06/25/2019 - 15:00

Tech support scammers are known for engaging in a game of whack-a-mole with defenders. Case in point, last month there were reports that crooks had invaded Microsoft Azure Cloud Services to host fake warning pages, also known as browser lockers. In this blog, we take a look at one of the top campaigns that is responsible for driving traffic to those Azure-hosted scareware pages.

We discovered that the scammers have been buying ads displayed on major Internet portals to target an older demographic. Indeed, they were using paid search results to drive traffic towards decoy blogs that would redirect victims to a browlock page.

This scheme has actually been going on for months and has intensified recently, all the while keeping the same modus operandi. Although not overly sophisticated, the threat actors behind it have been able to abuse major ad platforms and hosting providers for several months.

Leveraging paid search results

Tech support scams are typically distributed via malvertising campaigns. Cheap adult traffic is usually first on the list for many groups of scammers. Not only is it cost effective, but it also plays into the psychology of users believing they got infected after visiting a dodgy website.

Other times, we see scammers actively targeting brands by trying to impersonate them. The idea is to reel in victims looking for support with a particular product or service. However, in this particular campaign, the crooks are targeting folks looking up food recipes.

There are two types of results from a search engine results page (SERP):

  • Organic search results that match the user’s search query based on relevance. The top listed sites are usually those that have the best Search Engine Optimization (SEO).
  • Paid search results, which are basically ads relevant to the user’s query. They require a certain budget where not all keywords are equal in cost.

Because paid search results are typically displayed at the top (often blending in with organic search results), they tend to generate more clicks.

We searched for various recipes on several different web portals (CenturyLink, Yahoo! Search, and Xfinity) and were able to easily find the ads bought by the scammers.

We do not have exact metrics on how many people clicked on those ads but we can infer that this campaign drew a significant amount of traffic based on two indicators: the first being our own telemetry and the second from a URL shortener used by one of the websites:

While those ads look typical and actually match our keyword search quite well, they actually redirect to websites created with malicious intent.

Decoy websites

To support their scheme, the scammers have created a number of food-related blogs. The content appears to be genuine, and there are even some comments on many of the articles.

However, upon closer inspection, we can see that those sites have basically taken content from various web developer sites offering paid or free HTML templates. “<!– Mirrored from…” is an artifact left by the HTTrack website copier tool. Incidentally, this kind of mirroring is something we often witness when it comes to browser locker pages that have been copied from other sites.

During our testing, visiting those sites directly did not create any malicious redirection, and they seemed to be absolutely benign. With only circumstantial evidence and without the so-called smoking gun, a case could not be made just yet.

Full infection chain

After some trial and error that included swapping various User-Agent strings and avoiding commercial VPNs, we were eventually able to replay a full infection chain, from the original advert to the browser locker page.

The blog’s URL is actually called three consecutive times, and the last one performs a POST request with the eventual conditional redirect to the browlock. In the screenshot below, you can see the difference between proper cloaking (no malicious behavior) and the redirect to a browlock page:

Browlock page

The fake warning page is fairly standard. It checks the browser type and operating system in order to display the appropriate template to Windows and macOS victims.

The scammers often register entire ranges of hostnames on Azure by iterating through numbers attached to random strings. While many of those pages are taken down quickly, new ones are constantly popping back up in order to keep the campaign running. Here are some URI patterns we observed:


We believe the crooks may also be rotating the decoy site that performs the redirect in addition to the existing user filtering in order to evade detection from security scanners.
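As a rough illustration of the hostname scheme described above (the campaign's actual patterns are not reproduced here), names built from a random string plus an iterated number on Azure's shared domain can be matched with a simple expression. The pattern and sample hostnames below are hypothetical:

```javascript
// Hypothetical pattern for illustration only: a short lowercase string
// followed by one or more digits on Azure's shared *.azurewebsites.net
// domain. The campaign's real naming scheme may differ.
const browlockHost = /^[a-z]{4,12}[0-9]{1,4}\.azurewebsites\.net$/;

const candidates = [
  'fakealert12.azurewebsites.net', // fits the hypothetical scheme
  'portal.azure.com',              // unrelated host, should not match
];

for (const host of candidates) {
  console.log(host + ' => ' + browlockHost.test(host));
}
```

A pattern like this is only a starting point for triage; defenders would still need to confirm the browlock content before blocking, since legitimate sites also live under azurewebsites.net.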

Finding the perpetrators

We do not condone interacting with scammers directly, but part of this investigation was about finding who was behind this campaign in order to take action and spare more victims.

Continuing the deception, the rogue technicians lied to us about the state of our computer and made up imaginary threats. The goal was to sell expensive support packages that add little value.

The company selling those services is A2Z Cleaner Pro (AKA Coretel Communications) and was previously identified by one victim in August 2018 in a blog comment on the FTC’s website.

Their website is hosted at, where we found two other interesting artifacts. The first is a company named CoreTel, which the scammers also use as a kind of business entity. It appears to be a rip-off of another domain that pre-existed it by several years and is also hosted on the same IP address:

And then, there are two new recipe sites that were both registered in June and, as with previous ones, they also use content copied from other places:

Mitigation and take down

Malwarebytes’ browser extension was already blocking the various browlock pages heuristically.

We immediately reported the fraudulent ads to Google and Microsoft (Bing), as well as the decoy blogs to GoDaddy. The majority of their domains have been taken down already and their ad campaigns banned.

This tech support scam campaign cleverly targeted an older segment of the population by using paid search results for food recipes via online portals used by many Internet Service Providers.

There is no doubt scammers will continue to abuse ad platforms and hosting providers to carry out their business. However, industry cooperation for takedowns can set them back and save thousands of victims from being defrauded.

Indicators of compromise

Decoy blogs







Regex to match browlock URIs on Azure


The post Recipe for success: tech support scammers zero in via paid search appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (June 17 – 23)

Malwarebytes - Mon, 06/24/2019 - 16:29

Last week on the Malwarebytes Labs blog, we took a look at the growing pains of smart cities, took a deep dive into AI, jammed along to Radiohead, and looked at the lessons learned from Chernobyl in relation to critical infrastructure. We also explored a new Steam phish attack, and pulled apart a Mac cryptominer.

Other cybersecurity news
  • Florida City falls to ransomware: Riviera Beach City Council agrees to pay $600,000 to regain use of hijacked computers. (Source: Forbes)
  • Smart TV virus warning goes AWOL: A peculiar promotional message warning about the  dangers posed to smart TVs goes missing. But why? (Source: The Register)
  • Used Nest cams allow continued cam access: This has been fixed, but read on for a look at what happens in the realm of IoT when old devices connect in ways you’d rather they didn’t. (Source: Wirecutter)
  • Fake profiles on LinkedIn go spying: An interesting tale of scammers making use of AI-generated profile pictures to make their bogus accounts look a little more believable. (source: Naked Security)
  • Bella Thorne takes fight to extortionists: The actress decided to share stolen photographs of herself to teach a hacker a lesson. (source: Hollywood Reporter)
  • This phish is a fan of encryption: A new scam claims an encrypted message is waiting, but you need to log in to view it. (Source: Bleeping Computer)
  • Mobile app concerns: High risk vulnerabilities abound in both iOS and Android apps. (Source: Help Net Security)
  • Twitter takes on state sponsored accounts: The social media platform took down around 5,000 accounts being used to push propaganda. (Source: Infosecurity Magazine)
  • Malware comes gunning for Google 2FA: A new attack tries its best to bypass additional security restrictions. (Source: We Live Security)
  • A security hole in one: Mobile malware attempts to swipe numerous pieces of personal information. (Source: SC Magazine)

Stay safe, everyone!

The post A week in security (June 17 – 23) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Mobile stalkerware: a long history of detection

Malwarebytes - Mon, 06/24/2019 - 15:00

Recently, we have received an alarming question from many Malwarebytes users: “Do you detect stalkerware?” The answer is an overwhelming, “Absolutely, and for good reason!” Moreover, we have been doing so for a long time, and are expanding our efforts in the months to come.

Going back more than five years, Malwarebytes researchers have detected applications and software that monitor other people’s online behavior and physical whereabouts. Our firm belief then is what we hold to be true now: People who are being watched have a right to know. And, taking that a step further, people should be able to consciously choose which applications and software are on their machines.

It’s your device, your choice. But when it comes to stalkerware, we know it’s not as simple as that—especially for victims of domestic abuse. So that’s why we launched a concerted effort to build a more comprehensive list of stalkerware and block it via Malwarebytes for Android, as well as Malwarebytes for Mac and Windows. (Malwarebytes for iOS no longer has scanning capabilities because of Apple constraints.)

Over the last month, we analyzed more than 2,500 samples of programs that had been flagged in research algorithms as potential monitoring/tracking apps, spyware, or stalkerware. Our database of known stalkerware has now increased to include 100 applications that no one else detects, including seven that are, as of press time, still on Google Play.

In addition, we’ve partnered with local shelters, nonprofit groups, and law enforcement, as well as other security professionals, to share intel and build awareness. Our aim is to protect domestic abuse victims on and off their devices. Stay tuned for more blogs with advice on what to do if you find stalkerware on your phone, and how parents and other individuals can determine if a monitoring app is safe to use.

What is stalkerware?

The term stalkerware can be applied to any application used to stalk or spy on someone else. Stalkerware is often marketed as a legitimate mobile tracking program for keeping tabs on loved ones, especially children. Some of these programs are used above board by families keeping a close eye on their kids’ devices or by users looking to find lost phones or laptops. However, these programs are often misused, to the detriment of their victims, who can be found wherever they go, even when trying to get away from abusive partners or other dangerous individuals.

What can stalkerware do?

To understand what stalkerware can do, let’s first look at the longtime mobile threat category monitor, a subset of potentially unwanted programs (PUPs). Because some of these stalkerware applications can be used legitimately, they are currently flagged as programs users might not want on their phones. However, once presented with what stalkerware can do (or once learning that a program has been installed on their device without consent), many users will likely want to delete these apps.

To see how scary a monitoring app can be, for example, I invite you to read Mobile Menace Monday: beware of monitoring apps. To highlight, here is a list of information a monitoring app/stalkerware can gather— all of which can be sent to a remote user.

  • GPS location
  • Pictures taken with front/rear camera (unbeknownst to user)
  • SMS messages
  • Call history
  • Browser history
  • Recorded audio via device mic
  • Email accounts stored on device
  • Phone numbers in contact list
  • IP address of device

A monitoring app can pinpoint a device’s exact location.

Even scarier, some of these apps are easily available on Google Play. More on that later.

A step further

Outside of Google Play, there lives a malevolent class of malware known as spyware. It has all the features of monitoring apps, along with even more information-gathering capabilities, giving stalkers real-time data on their victims’ every move. In addition, spyware can be installed and remain undetected, stealthily hiding its presence deep within mobile or desktop devices.

However, stalkerware can achieve much the same results as spyware, and it’s more readily available on the market. These applications represent real-life threats to domestic abuse victims, who can readily be tracked down (along with their children), even when hidden in shelters.

In expanding our efforts to block stalkerware, we are working side-by-side with shelters, non-profit organizations, other AV vendors, and law enforcement agencies to collect as many samples of stalkerware as we can, and train victims on what to do if they suspect they are being tracked. This is a matter of personal security for victims, and we take their safety seriously.

Hard stance on monitoring apps

There is a small set of monitoring apps actively available on Google Play.  These apps advertise themselves as helping hands for finding lost or stolen mobile devices, or for keeping track of younger children in the family. 

Admittedly, there is an argument that these apps can indeed be helpful in both of those cases. Nevertheless, the potential to have the same appalling outcome as spyware exists. For this reason, we aggressively detect monitoring apps, even if they are in Google Play.

If users have knowingly and willingly downloaded monitoring apps to their own devices, they needn’t delete them when we detect them. Directions on how to keep a program that you know and trust that we’ve flagged are here for Windows users. For Android users:

  1. Run a scan.
  2. On the results screen, below each checkbox is a drop-down arrow. Tap the arrow.
  3. From the list of options, select “Ignore Always.” Future scans will no longer detect the app as suspicious.
Call to action

Historically, apps that fall under the stalkerware umbrella have been extremely difficult to track down. That’s why we are calling on our patrons to help! Please reach out if you or someone you know suspects an app can be used to stalk its victims—and especially let us know if Malwarebytes for Android does not currently detect that app. You can do so via our Malwarebytes Support Forum or by submitting a ticket with Malwarebytes support.

In addition, look out for our next article on stalkerware that aims to provide victims with guidance on how to tell if their device has stalkerware installed, and what to do if that’s the case.

Dedicated to protecting you

It is a haunting reality that technology can be used for abusive purposes, especially those with horrifying physical outcomes. With most malware, some far-off threat actor is making a profit off of strangers by selling their data, zapping their CPU, or scamming them into handing over a few hundred dollars. It’s dirty business, but no one is physically harmed.

With stalkerware, there is a real-life threat with dire consequences.

There is no more important task for a cybersecurity company than to protect its users from harm—and stalkerware opens the door to the worst form of it. This is a pursuit that all of us at Malwarebytes take on with the utmost gravity. We hope you will join us in the fight.

Stay safe out there!

The post Mobile stalkerware: a long history of detection appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Dweb: Building Cooperation and Trust into the Web with IPFS

Mozilla Hacks - Wed, 08/29/2018 - 14:43

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

Some projects start small, aiming for incremental improvements. Others start with a grand vision, leapfrogging today’s problems by architecting an idealized world. The InterPlanetary File System (IPFS) is definitely the latter – attempting to replace HTTP entirely, with a network layer that has scale, trust, and anti-DDoS measures all built into the protocol. It’s our pleasure to have an introduction to IPFS today from Kyle Drake, the founder of Neocities, and Marcin Rataj, the creator of IPFS Companion, both on the IPFS team at Protocol Labs. –Dietrich Ayala

IPFS – The InterPlanetary File System

We’re a team of people all over the world working on IPFS, an implementation of the distributed web that seeks to replace HTTP with a new protocol that is powered by individuals on the internet. The goal of IPFS is to “re-decentralize” the web by replacing the location-oriented HTTP with a content-oriented protocol that does not require trust of third parties. This allows for websites and web apps to be “served” by any computer on the internet with IPFS support, without requiring servers to be run by the original content creator. IPFS and the distributed web unmoor information from physical location and singular distribution, ultimately creating a more affordable, equal, available, faster, and less censorable web.

IPFS aims for a “distributed” or “logically decentralized” design. IPFS consists of a network of nodes, which help each other find data using a content hash via a Distributed Hash Table (DHT). The result is that all nodes help find and serve web sites, and even if the original provider of the site goes down, you can still load it as long as one other computer in the network has a copy of it. The web becomes empowered by individuals, rather than depending on the large organizations that can afford to build large content delivery networks and serve a lot of traffic.

The IPFS stack is an abstraction built on top of IPLD and libp2p:

Hello World

We have a reference implementation in Go (go-ipfs) and a constantly improving one in JavaScript (js-ipfs). There is also a long list of API clients for other languages.

Thanks to the JS implementation, using IPFS in web development is extremely easy. The following code snippet…

  • Starts an IPFS node
  • Adds some data to IPFS
  • Obtains the Content IDentifier (CID) for it
  • Reads that data back from IPFS using the CID

<script src="https://unpkg.com/ipfs/dist/index.min.js"></script>

Open Console (Ctrl+Shift+K)

<script>
const ipfs = new Ipfs()
const data = 'Hello from IPFS, <YOUR NAME HERE>!'

// Once the ipfs node is ready
ipfs.once('ready', async () => {
  console.log('IPFS node is ready! Current version: ' + (await ipfs.version()).version)

  // convert your data to a Buffer and add it to IPFS
  console.log('Data to be published: ' + data)
  const files = await ipfs.files.add(ipfs.types.Buffer.from(data))

  // 'hash', known as CID, is a string uniquely addressing the data
  // and can be used to get it again. 'files' is an array because
  // 'add' supports multiple additions, but we only added one entry
  const cid = files[0].hash
  console.log('Published under CID: ' + cid)

  // read data back from IPFS: CID is the only identifier you need!
  const dataFromIpfs = await ipfs.files.cat(cid)
  console.log('Read back from IPFS: ' + String(dataFromIpfs))

  // Compatibility layer: HTTP gateway
  console.log('Bonus: open at one of public HTTP gateways: https://ipfs.io/ipfs/' + cid)
})
</script>

That’s it!

Before diving deeper, let’s answer key questions:

Who else can access it?

Everyone with the CID can access it. Sensitive files should be encrypted before publishing.

How long will this content exist? Under what circumstances will it go away? How does one remove it?

The permanence of content-addressed data in IPFS is intrinsically bound to the active participation of peers interested in providing it to others. It is impossible to remove data from other peers but if no peer is keeping it alive, it will be “forgotten” by the swarm.

The public HTTP gateway will keep the data available for a few hours — if you want to ensure long term availability make sure to pin important data at nodes you control. Try IPFS Cluster: a stand-alone application and a CLI client to allocate, replicate and track pins across a cluster of IPFS daemons.

Developer Quick Start

You can experiment with js-ipfs to make simple browser apps. If you want to run an IPFS server you can install go-ipfs, or run a cluster, as we mentioned above.

There is a growing list of examples, and make sure to see the bi-directional file exchange demo built with js-ipfs.

You can add IPFS to the browser by installing the IPFS Companion extension for Firefox.

Learn More

Learn about IPFS concepts by visiting our documentation website at

Readers can participate by improving documentation, visiting, developing distributed web apps and sites with IPFS, and exploring and contributing to our git repos and various things built by the community.

A great place to ask questions is our friendly community forum:
We also have an IRC channel, #ipfs on Freenode (or on Matrix). Join us!

The post Dweb: Building Cooperation and Trust into the Web with IPFS appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

Dweb: Building a Resilient Web with WebTorrent

Mozilla Hacks - Wed, 08/15/2018 - 14:49

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

The web is healthy when the financial cost of self-expression isn’t a barrier. In this installment of the Dweb series we’ll learn about WebTorrent – an implementation of the BitTorrent protocol that runs in web browsers. This approach to serving files means that websites can scale with as many users as are simultaneously viewing the website – removing the cost of running centralized servers at data centers. The post is written by Feross Aboukhadijeh, the creator of WebTorrent, co-founder of PeerCDN and a prolific NPM module author… 225 modules at last count! –Dietrich Ayala

What is WebTorrent?

WebTorrent is the first torrent client that works in the browser. It’s written completely in JavaScript – the language of the web – and uses WebRTC for true peer-to-peer transport. No browser plugin, extension, or installation is required.

Using open web standards, WebTorrent connects website users together to form a distributed, decentralized browser-to-browser network for efficient file transfer. The more people use a WebTorrent-powered website, the faster and more resilient it becomes.


The WebTorrent protocol works just like BitTorrent protocol, except it uses WebRTC instead of TCP or uTP as the transport protocol.

In order to support WebRTC’s connection model, we made a few changes to the tracker protocol. Therefore, a browser-based WebTorrent client or “web peer” can only connect to other clients that support WebTorrent/WebRTC.

Once peers are connected, the wire protocol used to communicate is exactly the same as in normal BitTorrent. This should make it easy for existing popular torrent clients like Transmission and uTorrent to add support for WebTorrent. Vuze already has support for WebTorrent!

Getting Started

It only takes a few lines of code to download a torrent in the browser!

To start using WebTorrent, simply include the webtorrent.min.js script on your page. You can download the script from the WebTorrent website or link to the CDN copy.

<script src="webtorrent.min.js"></script>

This provides a WebTorrent function on the window object. There is also an npm package available.

var client = new WebTorrent()

// Sintel, a free, Creative Commons movie
var torrentId = 'magnet:...' // Real torrent ids are much longer.

var torrent = client.add(torrentId)

torrent.on('ready', () => {
  // Torrents can contain many files. Let's use the .mp4 file
  var file = torrent.files.find(file => file.name.endsWith('.mp4'))

  // Display the file by adding it to the DOM.
  // Supports video, audio, image files, and more!
  file.appendTo('body')
})

That’s it! Now you’ll see the torrent streaming into a <video> tag in the webpage!

Learn more

You can learn more at, or by asking a question in #webtorrent on Freenode IRC or on Gitter. We’re looking for more people who can answer questions and help people with issues on the GitHub issue tracker. If you’re a friendly, helpful person and want an excuse to dig deeper into the torrent protocol or WebRTC, then this is your chance!



The post Dweb: Building a Resilient Web with WebTorrent appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

Dweb: Social Feeds with Secure Scuttlebutt

Mozilla Hacks - Wed, 08/08/2018 - 16:01

In the series introduction, we highlighted the importance of putting people in control of their social interactions online, instead of allowing for-profit companies to be the arbiters of hate speech or harassment. Our first installment in the Dweb series introduces Secure Scuttlebutt, which envisions a world where users are in full control of their communities online.

In the weeks ahead we will cover a variety of projects that represent explorations of the decentralized/distributed space. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

This post is written by André Staltz, who has written extensively on the fate of the web in the face of mass digital migration to corporate social networks, and is a core contributor to the Scuttlebutt project. –Dietrich Ayala

Getting started with Scuttlebutt

Scuttlebutt is a free and open source social network with unique offline-first and peer-to-peer properties. As a JavaScript open source programmer, I discovered Scuttlebutt two years ago as a promising foundation for a new “social web” that provides an alternative to proprietary platforms. The social metaphor of mainstream platforms is now a more popular way of creating and consuming content than the Web is. Instead of attempting to adapt existing Web technologies for the mobile social era, Scuttlebutt allows us to start from scratch the construction of a new ecosystem.

A local database, shared with friends

The central idea of the Secure Scuttlebutt (SSB) protocol is simple: your social account is just a cryptographic keypair (your identity) plus a log of messages (your feed) stored in a local database. So far, this has no relation to the Internet, it is just a local database where your posts are stored in an append-only sequence, and allows you to write status updates like you would with a personal diary. SSB becomes a social network when those local feeds are shared among computers through the internet or through local networks. The protocol supports peer-to-peer replication of feeds, so that you can have local (and full) copies of your friends’ feeds, and update them whenever you are online. One implementation of SSB, Scuttlebot, uses Node.js and allows UI applications to interact with the local database and the network stack.

Using Scuttlebot

While SSB is being implemented in multiple languages (Go, Rust, C), its main implementation at the moment is the npm package scuttlebot and Electron desktop apps that use Scuttlebot. To build your own UI application from scratch, you can set up Scuttlebot plus a localhost HTTP server to render the UI in your browser.

Run the following npm command to add Scuttlebot to your Node.js project:

npm install --save scuttlebot

You can use Scuttlebot locally via the command-line interface to post messages, view messages, and connect with friends. First, start the server:

$(npm bin)/sbot server

In another terminal you can use the server to publish a message in your local feed:

$(npm bin)/sbot publish --type post --text "Hello world"

You can also consume invite codes to connect with friends and replicate their feeds. Invite codes are generated by pub servers owned by friends in the community, which act as mirrors of feeds in the community. Using an invite code means the server will allow you to connect to it and will mirror your data too.

$(npm bin)/sbot invite.accept $INSERT_INVITE_CODE_HERE

To create a simple web app to render your local feed, you can start the scuttlebot server in a Node.js script (with dependencies ssb-config and pull-stream), and serve the feed through an HTTP server:

// server.js
const fs = require('fs');
const http = require('http');
const pull = require('pull-stream');
const sbot = require('scuttlebot/index').call(null, require('ssb-config'));

http
  .createServer((request, response) => {
    if (request.url.endsWith('/feed')) {
      pull(
        sbot.createFeedStream({live: false, limit: 100}),
        pull.collect((err, messages) => {
          response.end(JSON.stringify(messages));
        }),
      );
    } else {
      response.end(fs.readFileSync('./index.html'));
    }
  })
  .listen(9000);

Start the server with node server.js, and upon opening localhost:9000 in your browser, it should serve the index.html:

<html>
<body>
<script>
fetch('/feed')
  .then(res => res.json())
  .then(messages => {
    document.body.innerHTML = `
      <h1>Feed</h1>
      <ul>${messages
        .filter(msg => msg.value.content.type === 'post')
        .map(msg =>
          `<li>${msg.value.author} said: ${msg.value.content.text}</li>`
        )
      }</ul>
    `;
  });
</script>
</body>
</html>

Learn more

SSB applications can accomplish more than social messaging. Secure Scuttlebutt is being used for Git collaboration, chess games, and managing online gatherings.

You build your own applications on top of SSB by creating or using plug-ins for specialized APIs or different ways of querying the database. See secret-stack for details on how to build custom plugins. See flumedb for details on how to create custom indexes in the database. Also there are many useful repositories in our GitHub org.

To learn about the protocol that all of the implementations use, see the protocol guide, which explains the cryptographic primitives used, and data formats agreed on.

Finally, don’t miss the frontpage, which explains the design decisions and principles we value. We highlight the important role that humans have in internet communities, which should not be delegated to computers.

The post Dweb: Social Feeds with Secure Scuttlebutt appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

Introducing the Dweb

Mozilla Hacks - Tue, 07/31/2018 - 14:00
Introducing the Dweb

The web is the most successful programming platform in history, resulting in the largest open and accessible collection of human knowledge ever created. So yeah, it’s pretty great. But there are a set of common problems that the web is not able to address.

Have you ever…

  • Had a website or app you love get updated to a new version, and you wished to go back to the old version?
  • Tried to share a file between your phone and laptop, TV, or other device while not connected to the internet? And without using a cloud service?
  • Gone to a website or service that you depend on, only to find it’s been shut down? Whether it got bought and enveloped by some internet giant, or has gone out of business, or whatever, it was critical for you and now it’s gone.

Additionally, the web is facing critical internet health issues, seemingly intractable due to the centralization of power in the hands of a few large companies who have economic interests in not solving these problems:

  • Hate speech, harassment and other attacks on social networks
  • Repeated attacks on Net Neutrality by governments and corporations
  • Mass human communications compromised and manipulated for profit or political gain
  • Censorship and whole internet shutdowns by governments

These are some of the problems and use-cases addressed by a new wave of projects, products and platforms building on or with web technologies but with a twist: They’re using decentralized or distributed network architectures instead of the centralized networks we use now, in order to let the users control their online experience without intermediaries, whether government or corporate. This new structural approach gives rise to the idea of a ‘decentralized web’, often conveniently shortened to ‘dweb’.

You can read a number of perspectives on centralization, and why it’s an important issue for us to tackle, in Mozilla’s Internet Health Report, released earlier this year.

What’s the “D” in Dweb?!

The “d” in “dweb” usually stands for either decentralized or distributed.
What is the difference between distributed vs decentralized architectures? Here’s a visual illustration:

(Image credit:, your best source for technical clip art with animals)

In centralized systems, one entity has control over the participation of all other entities. In decentralized systems, power over participation is divided between more than one entity. In distributed systems, no one entity has control over the participation of any other entity.

Examples of centralization on the web today are the domain name system (DNS), servers run by a single company, and social networks designed for controlled communication.

A few examples of decentralized or distributed projects that became household names are Napster, BitTorrent and Bitcoin.

Some of these new dweb projects are decentralizing identity and social networking. Some are building distributed services in or on top of the existing centralized web, and others are distributed application protocols or platforms that run the web stack (HTML, JavaScript and CSS) on something other than HTTP. Also, there are blockchain-based platforms that run anything as long as it can be compiled into WebAssembly.

Here We Go

Mozilla’s mission is to put users in control of their experiences online. While some of these projects and technologies turn the familiar on its head (no servers! no DNS! no HTTP(S)!), it’s important for us to explore their potential for empowerment.

This is the first post in a series. We’ll introduce projects that cover social communication, online identity, file sharing, new economic models, as well as high-level application platforms. All of this work is either decentralized or distributed, minimizing or entirely removing centralized control.

You’ll meet the people behind these projects, and learn about their values and goals, the technical architectures used, and see basic code examples of using the project or platform.

So leave your assumptions at the door, and get ready to learn what a web more fully in users’ control could look like.

Note: This post is the introduction. The following posts in the series are listed below.

The post Introducing the Dweb appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds


Subscribe to Furiously Eclectic People aggregator - Techie Feeds