Techie Feeds

Compromising vital infrastructure: communication

Malwarebytes - Fri, 02/08/2019 - 19:09

Have you ever been witness to a Wi-Fi failure in a household with school-aged children? If so, I don’t have to convince you that communication qualifies as vital infrastructure. For the doubters: when you see people risking their lives in traffic just to check their phone, you’ll understand why most adults consider instant communication to be vital as well.

Forms of communication

Humanity has come a long way in communication techniques. From drawings on the cave wall to wartime messages sent via courier to the Pony Express and now, the Internet. Modern communication tools enable us to reach most places across the world in a matter of seconds.

Which lines of communication are more or less vital to our everyday lives?

  • The Internet
  • Telephone lines
  • Mobile telephone networks
  • TV and radio broadcasting

Granted, if one of these communication forms fails, part of its traffic can be taken over by another, but they all have specific pros and cons that make a prolonged outage hard to cope with. For example, most smartphones are capable of using both mobile networks and the Internet, but the latter is limited to when they have Wi-Fi access. When cell phone towers went down during 9/11, users could still send messages via Internet messaging services—at that time, AIM, but today WhatsApp, Facebook Messenger, or other platforms.

Growing importance

In the list I posted earlier, you may have felt that I left out letters and postcards, or snail mail as we often call it. This is because a growing number of companies keep us informed through email, their websites, text messages, and other forms of communication that are far faster than postal services. Most companies will still send letters and paper bills if you ask for them, but it’s no longer the default. Our mail delivery services are increasingly starting to resemble package delivery services: they see a growing number of deliveries that require the physical transfer of an object rather than information alone.

Instead, the majority of modern communication is digital.

Securing digital communication

Digital information that needs to be kept from prying eyes and eavesdroppers is usually encrypted. To establish secure communication, one may use encrypted mail, crypto-phones, and secure protocols on the Internet. Most of these encryption methods are strong enough to withstand brute force attempts—at least for long enough to outlive the usefulness of intercepting the message. Future computer systems like quantum computers, however, may require us to upgrade the encryption strength that we use for these methods.
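As a concrete illustration—our own minimal sketch in Go, not tied to any particular product—here is the kind of symmetric encryption that underpins most secure channels: AES-256 in GCM mode from the standard library. In a real messaging system, the key would be derived or exchanged through a proper protocol rather than generated on the spot.

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

func main() {
	// 256-bit key; in real messaging apps this would come from a key exchange,
	// not be generated locally like this.
	key := make([]byte, 32)
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		panic(err)
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		panic(err)
	}
	ciphertext := gcm.Seal(nil, nonce, []byte("meet at the usual place"), nil)
	fmt.Printf("ciphertext: %x\n", ciphertext)
	// Only someone holding the same key (and nonce) can reverse the operation.
	plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println("decrypted:", string(plaintext))
}

Brute-forcing a 256-bit key is far beyond current computing power, which is why attackers usually go after the endpoints or the keys rather than the ciphertext itself.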

Breaking the Internet

Because of the way the Internet has grown and become more versatile, its backbone is robust enough to withstand DDoS attacks of a large magnitude. Yet there have been instances where an entire country, such as North Korea, was taken offline, or where an attack on a major DNS provider caused serious disruption, leaving many of the sites we normally visit unreachable.

These attacks were targeted at systems that were important for specific parts of the Internet. Nevertheless, they demonstrated that there are weaknesses in the infrastructure that can be exploited to paralyze parts of the Internet, and therefore, parts of our vital communication.

Misinformation and fake news

Another growing problem with predominantly online communication is the spread of fake news and deliberate misinformation. The most common motives for spreading misinformation are political and financial gain, as well as attention. The problem has reached a size and impact that caused government bodies like the EU to announce countermeasures. During that process, and due to other influences social media has over its users, many organizations felt the need to hire hordes of moderators tasked with keeping the information spread on their platforms as clean and honest as possible. This still fell short in some instances, such as the dramatic events in Myanmar, where Facebook was used as a tool for ethnic cleansing. And these are not the only problems social media companies are trying to deal with.

Malware and communication

Communication is also a vital part of some types of malware, such as backdoors, Trojans, and especially spyware. After all, what use is it to spy on someone if you are unable to get your hands on the gathered information? Traditional malware communication relies on Command and Control (C&C) servers. But since those servers can be taken down or blocked, malware authors have been looking at rotation systems like domain generation algorithms (DGAs), as well as other creative ideas, like using social media and other public platforms.
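To illustrate the rotation idea, here is a deliberately simplified, hypothetical DGA sketch in Go—not the algorithm of any real malware family—showing how a sample and its operator can independently derive the same set of rendezvous domains from the current date and a shared seed:

package main

import (
	"crypto/sha256"
	"fmt"
	"time"
)

// domainsForDate derives a short list of pseudo-random domains from the date
// and a hard-coded seed. Both the malware and its operator can run the same
// function and meet at whichever of today's domains gets registered.
func domainsForDate(t time.Time, seed string, count int) []string {
	domains := make([]string, 0, count)
	for i := 0; i < count; i++ {
		sum := sha256.Sum256([]byte(fmt.Sprintf("%s-%s-%d", seed, t.Format("2006-01-02"), i)))
		name := make([]byte, 0, 10)
		for _, b := range sum[:10] {
			name = append(name, 'a'+b%26) // map each byte to a lowercase letter
		}
		domains = append(domains, string(name)+".com")
	}
	return domains
}

func main() {
	for _, d := range domainsForDate(time.Now().UTC(), "example-seed", 3) {
		fmt.Println(d)
	}
}

Defenders counter this by reverse engineering the algorithm and pre-registering or sinkholing the predicted domains, which is why real DGAs churn through far larger candidate lists.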

While you may use social media to stay in contact with family and friends, there are many forms of malware that use those same media for different purposes. Botnets are known to use Twitter as an outlet for spam, fraud, and fake news. But they also use it to send commands to Remote Access Trojans (RATs) that wait for code hidden in memes posted by a particular account.

In addition, malware exploits messenger platforms to communicate instructions. There’s the Goodsender malware, whose operators used the Telegram messenger platform to send it HTTPS-protected instructions. Another well-known phenomenon is the Facebook Messenger malware that spreads in a worm-like fashion by sending out links to friends in an attempt to trick them into installing it.

Social media countermeasures

While social media is struggling with its public reputation these days, they at least seem ready to take baby steps forward in tightening up security—whether that’s from political pressure or self-awareness. At an event in Brussels, Nick Clegg, Facebook’s head of global public relations, stated:

We are at the start of a discussion which is no longer about whether social media should be regulated, but how it should be regulated. We recognize the value of regulation, and we are committed to working with policymakers to get it right.

Working out the “how” could turn into a long-winded discussion, however. Maybe the rumors about a space laser communications system represent a step in the right direction. In theory, such a system could be used to improve security.

Better communication results in better security

Having all the facts helps us to improve security. Making sure that this information reaches the people that need it is a matter of effective communication strategy. And in some cases, it may be just as important that the information is not communicated so that it doesn’t fall into the wrong hands.

The National Intelligence Strategy released in January 2019 by the Office of the Director of National Intelligence states:

Nearly all information, communication networks, and systems will be at risk for years to come.

Therefore, an important part of communication strategy must be to recognize the risk and integrate the proper tools—such as end-to-end encryption, or intelligence on platforms known to be used by cybercriminals. The National Intelligence Strategy goes on to say that they’ll be “harnessing the full talent and tools of the IC [Intelligence Community] by bringing the right information, to the right people, at the right time.”

Cyberattacks on communication infrastructure

A pretty bizarre method of abusing communication happened when a family was scared into believing there was an ongoing nuclear attack, as some prankster accessed their Nest camera to issue realistic warnings about missiles heading to the US from North Korea.

More worrying is the trend of ransomware authors (especially groups using SamSam) targeting cities and small government bodies with the aim of shutting down infrastructure, including communications. Taking down a city website, as was the case in Atlanta, cripples an important medium for disseminating information to citizens, not to mention that the costs of getting everything back online were covered with taxpayer money that could have been better spent on other services.

Information is crucial

Important decisions may be postponed when the person or body that is supposed to make them is unable to gather the necessary information. Communications are also a vital part of some malware infections. Perhaps organizations can borrow some of the ingenious methods malware authors have thought up when looking for ways to make vital lines of communication more robust. Redundancy is a good thing when it allows us to use multiple methods and networks to transmit the same information. On the other hand, it also enlarges the attack surface when it comes to sharing confidential information.

This does have an upside for the free flow of information. Because of all the communication options out there, some regimes are having an increasingly difficult time shielding their populations from information they would rather keep hidden. That hasn’t stopped some, like China with its Great Firewall, from trying, though.

Communication is everywhere

In the Western world, communication is available to nearly everyone who wants it, and this readiness—and the danger that lurks with it—may shape how our generation is viewed far into the future. This may be the era when communication both flourished to its true potential and reached its limits. After all, pitfalls are inherent when technology develops faster than regulation can keep up.

Maybe the developments we are seeing now are just another step toward the eventual better regulation of communication, though I’m convinced it will not be the last step regulators need to take. In fact, 5G is already waiting around the corner to add another level of speed and bandwidth to an already connected society. Let’s see how this new technology impacts an already complex tapestry of communication triumphs and failures.

The post Compromising vital infrastructure: communication appeared first on Malwarebytes Labs.


Merging Facebook Messenger, WhatsApp, and Instagram: a technical, reputational hurdle

Malwarebytes - Thu, 02/07/2019 - 16:53

Secure messaging is supposed to be just that—secure. That means no backdoors, strong encryption, private messages staying private, and, for some users, the ability to securely communicate without giving up tons of personal data.

So, when news broke that scandal-ridden, online privacy pariah Facebook would expand secure messaging across its Messenger, WhatsApp, and Instagram apps, a broad community of cryptographers, lawmakers, and users asked: Wait, what?

Not only is the technology difficult to implement, the company implementing it has a poor track record with both user privacy and online security.

On January 25, the New York Times reported that Facebook CEO Mark Zuckerberg had begun plans to integrate the company’s three messaging platforms into one service, allowing users to potentially communicate with one another across its separate mobile apps. According to the New York Times, Zuckerberg “ordered that the apps all incorporate end-to-end encryption.”

The initial response was harsh.

Abroad, Ireland’s Data Protection Commission, which regulates Facebook in the European Union, immediately asked for an “urgent briefing” from the company, warning that previous data-sharing proposals raised “significant data protection concerns.”

In the United States, Democratic Senator Ed Markey of Massachusetts said in a statement: “We cannot allow platform integration to become privacy disintegration.”

Cybersecurity technologists wavered between cautious optimism and just plain caution.

Some professionals focused on the clear benefits of enabling end-to-end encryption across Facebook’s messaging platforms, emphasizing that any end-to-end encryption is better than none.

Former Facebook software engineer Alec Muffet, who led the team that added end-to-end encryption to Facebook Messenger, said on Twitter that the integration plan “clearly maximises the privacy afforded to the greatest [number] of people and is a good idea.”

Others questioned Facebook’s motives and reputation, scrutinizing the company’s established business model of hoovering up mass quantities of user data to deliver targeted ads.

Johns Hopkins University Associate Professor and cryptographer Matthew Green said on Twitter that “this move could potentially be good or bad for security/privacy. But given recent history and financial motivations of Facebook, I wouldn’t bet my lunch money on ‘good.’”

On January 30, Zuckerberg confirmed the integration plan during a quarterly earnings call. The company hopes to complete the project either this year or in early 2020.

It’s going to be an uphill battle.

Three applications, one bad reputation

Merging three separate messaging apps is easier said than done.

In a phone interview, Green said Facebook’s immediate technological hurdle will be integrating “three different systems—one that doesn’t have any end-to-end encryption, one where it’s default, and one with an optional feature.”

Currently, the messaging services across WhatsApp, Facebook Messenger, and Instagram have varying degrees of end-to-end encryption. WhatsApp provides default end-to-end encryption, whereas Facebook Messenger provides optional end-to-end encryption if users turn on “Secret Conversations.” Instagram provides no end-to-end encryption in its messaging service.

Further, Facebook Messenger, WhatsApp, and Instagram all have separate features—like Facebook Messenger’s ability to support more than one device and WhatsApp’s support for group conversations—along with separate desktop or web clients.

Green said to imagine someone using Facebook Messenger’s web client—which doesn’t currently support end-to-end encryption—starting a conversation with a WhatsApp user, where encryption is set by default. These lapses in default encryption, Green said, could create vulnerabilities. The challenge is in pulling together all those systems with all those variables.

“First, Facebook will have to likely make one platform, then move all those different systems into one somewhat compatible system, which, as far as I can tell, would include centralizing key servers, using the same protocol, and a bunch of technical development that has to happen,” Green said. “It’s not impossible. Just hard.”

But there’s more to Facebook’s success than the technical know-how of its engineers. There’s also its reputation, which, as of late, portrays the company as a modern-day data baron, faceplanting into privacy failure after privacy failure.

After the 2016 US presidential election, Facebook refused to call the surreptitious collection of 50 million users’ personal information a “breach.” When brought before Congress to testify about his company’s role in a potential international disinformation campaign, Zuckerberg deflected difficult questions and repeatedly claimed the company does not “sell” user data to advertisers. But less than one year later, a British parliamentary committee released documents that showed how Facebook gave some companies, including Airbnb and Netflix, access to its platform in exchange for favors—no selling required.

Five months ago, Facebook’s Onavo app was booted from the Apple App Store for gathering app data, and early this year, Facebook reportedly paid users as young as 13 years old to install the “Facebook Research” app on their own devices—an app distributed under a program intended strictly for internal company use. Facebook pulled the app, but Apple had extra repercussions in mind: It revoked Facebook’s enterprise certificate, which the company relied on to run its internal developer apps.

These repeated privacy failures are enough for some users to avoid Facebook’s end-to-end encryption experiment entirely.

“If you don’t trust Facebook, the place to worry is not about them screwing up the encryption,” Green said. “They want to know who’s talking to who and when. Encryption doesn’t protect that at all.”

If not Facebook, then who?

Reputationally, there are at least two companies that users look to for both strong end-to-end encryption and strong support of user privacy and security—Apple and Signal, which respectively run the iMessage and Signal Messenger apps.

In 2013, Open Whisper Systems developed the Signal Protocol. This encryption protocol provides end-to-end encryption for voice calls, video calls, and instant messaging, and is implemented by WhatsApp, Facebook Messenger, Google’s Allo, and Microsoft’s Skype to varying degrees. Journalists, privacy advocates, cryptographers, and cybersecurity researchers routinely praise Signal Messenger, the Signal Protocol, and Open Whisper Systems.

“Use anything by Open Whisper Systems,” said former NSA defense contractor and government whistleblower Edward Snowden.

“[Signal is] my first choice for an encrypted conversation,” said cybersecurity researcher and digital privacy advocate Bruce Schneier.

Separately, Apple has proved its commitment to user privacy and security through statements made by company executives, updates pushed to fix vulnerabilities, and legal action taken in US courts.

In 2016, Apple fought back against a government request that the company design an operating system capable of allowing the FBI to crack an individual iPhone. Such an exploit, Apple argued, would be too dangerous to create. Last year, when an American startup began selling iPhone-cracking devices—called GrayKey—Apple fixed the vulnerability through an iOS update.

Repeatedly, Apple CEO Tim Cook has supported user security and privacy, saying in 2015: “We believe that people have a fundamental right to privacy. The American people demand it, the constitution demands it, morality demands it.”

But even with these sterling reputations, the truth is, cybersecurity is hard to get right.

Last year, cybersecurity researchers found a critical vulnerability in Signal’s desktop app that allowed threat actors to obtain users’ plaintext messages. Signal’s developers fixed the vulnerability within a reported five hours.

Last week, Apple’s FaceTime app, which encrypts video calls between users, suffered a privacy bug that allowed threat actors to briefly spy on victims. Apple fixed the bug after news of the vulnerability spread.

In fact, several secure messaging apps, including Telegram, Viber, Confide, Allo, and WhatsApp have all reportedly experienced security vulnerabilities, while several others, including Wire, have previously drawn ire because of data storage practices.

But vulnerabilities should not scare people from using end-to-end encryption altogether. On the contrary, they should spur people into finding the right end-to-end encrypted messaging app for themselves.

No one-size-fits-all, and that’s okay

There is no such thing as a perfect, one-size-fits-all secure messaging app, said Electronic Frontier Foundation Associate Director of Research Gennie Gebhart, because there’s no such thing as a perfect, one-size-fits-all definition of secure.

“In practice, for some people, secure means the government cannot intercept their messages,” Gebhart said. “For others, secure means a partner in their physical space can’t grab their device and read their messages. Those are two completely different tasks for one app to accomplish.”

In choosing the right secure messaging app for themselves, Gebhart said people should ask what they need and what they want. Are they worried about governments or service providers intercepting their messages? Are they worried about people in their physical environment gaining access to their messages? Are they worried about giving up their phone number and losing some anonymity?

In addition, it’s worth asking: What are the risks of an accident, like, say, mistakenly sending an unencrypted message that should have been encrypted? And, of course, what app are friends and family using?

As for the constant news of vulnerabilities in secure messaging apps, Gebhart advised not to overreact. The good news is, if you’re reading about a vulnerability in a secure messaging tool, then the people building that tool know about the vulnerability, too. (Indeed, developers fixed the majority of the security vulnerabilities listed above.) The best advice in that situation, Gebhart said, is to update your software.

“That’s number one,” Gebhart said, explaining that, though this line of defense is “tedious and maybe boring,” sometimes boring advice just works. “Brush your teeth, lock your door, update your software.”

Cybersecurity is many things. It’s difficult, it’s complex, and it’s a team sport. That team includes you, the user. Before you use a messenger service, or go online at all, remember to follow the boring advice. You’ll better secure yourself and your privacy.

The post Merging Facebook Messenger, WhatsApp, and Instagram: a technical, reputational hurdle appeared first on Malwarebytes Labs.


Google Chrome announces plans to improve URL display, website identity

Malwarebytes - Wed, 02/06/2019 - 18:16

“Unreadable gobbledygook” is one way to describe URLs as we know them today, and Google has been attempting to redo their look for years. In its latest move to improve how Chrome displays the URL in its omnibox (the address bar)—a change the company hopes other browsers will follow—Google’s Chrome team has made public two projects that push in this direction.

First, they launched Trickuri (pronounced “trickery”) in time for a talk they were scheduled to present at the 2019 Enigma Conference. Second, they’re working on warnings for potentially phishy URLs for Chrome users.

Watch out! Some trickery and phishing ahead

Trickuri is an open-source tool that lets developers test whether their applications display URLs accurately and consistently in different scenarios. The new Chrome warnings, on the other hand, are still in internal testing. Emily Stark, Google Chrome’s Usability Security Lead, concedes that the challenge lies in creating heuristic rules that appropriately flag malicious URLs while avoiding false positives.

“Our heuristics for detecting misleading URLs involve comparing characters that look similar to each other and domains that vary from each other just by a small number of characters,” Stark said in an interview with WIRED. “Our goal is to develop a set of heuristics that pushes attackers away from extremely misleading URLs, and a key challenge is to avoid flagging legitimate domains as suspicious. This is why we’re launching this warning slowly, as an experiment.”

These efforts are part of the team’s current focus, which is the detection and flagging of seemingly dubious URLs.
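As an illustration of the kind of heuristic Stark describes—flagging domains that differ from well-known ones by only a few characters—here is a small Go sketch built on plain Levenshtein edit distance. This is our own example, not Google’s code; a production heuristic would also have to handle confusable (homoglyph) characters, domain popularity lists, and much more.

package main

import "fmt"

// editDistance returns the Levenshtein distance between two strings: the
// minimum number of single-character edits needed to turn one into the other.
func editDistance(a, b string) int {
	ra, rb := []rune(a), []rune(b)
	prev := make([]int, len(rb)+1)
	curr := make([]int, len(rb)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(ra); i++ {
		curr[0] = i
		for j := 1; j <= len(rb); j++ {
			cost := 1
			if ra[i-1] == rb[j-1] {
				cost = 0
			}
			curr[j] = min(min(curr[j-1]+1, prev[j]+1), prev[j-1]+cost)
		}
		prev, curr = curr, prev
	}
	return prev[len(rb)]
}

func min(x, y int) int {
	if x < y {
		return x
	}
	return y
}

func main() {
	// A distance of 1 from a well-known domain is a strong hint of a lookalike.
	fmt.Println(editDistance("examp1e.com", "example.com")) // 1
	fmt.Println(editDistance("paypa1.com", "paypal.com"))   // 1
	fmt.Println(editDistance("malwarebytes.com", "example.com")) // clearly unrelated
}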

Google Chrome’s bigger goal

The URL is used to identify entities online. It is the first place users look to assess whether or not they are in a safe place. But not everyone knows the components that make up a URL, much less what each of them means. Google’s push for website owners to use HTTPS has rippled across browser developers and consequently changed user preferences to favor such sites. In effect, by pushing HTTPS, Google changed the game to give users a generally safer online experience.

However, Google wants to go beyond this and is set on raising user awareness of the relevant parts of the URL so that users can make quick security decisions. As a result, the team is refining Chrome to highlight these parts while keeping the irrelevant gibberish out of users’ view.

In a separate interview with WIRED, Adrienne Porter Felt, Google Chrome’s Engineering Manager, has this to say about how users perceive the URL: “People have a really hard time understanding URLs. They’re hard to read, it’s hard to know which part of them is supposed to be trusted, and in general I don’t think URLs are working as a good way to convey site identity. So we want to move toward a place where web identity is understandable by everyone—they know who they’re talking to when they’re using a website and they can reason about whether they can trust them. But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it, as we’re figuring out the right way to convey identity.”

While these may all sound good, no one—not even Google—knows what the final, new URL will look like at this point.

A brief timeline of Google’s efforts in changing the URL

Below is a brief timeline of attempts Google has made to change how Chrome displays the URL in the omnibox:

“…it just raises too many questions.”

With Google’s new effort, how will it affect redirection schemes? SEO? Shortened URLs?

Will this, in time, affect the behavior of new Internet users entering URLs in the address bar? For example, what if they don’t know that certain URL elements are (by default) elided but should now be typed in (such as entering ‘www’) to go to their desired destination? Will they understand the meaning of .com or .org if these elements are erased from view?

How can web developers, business owners, and consumers prepare themselves for these URL changes?

Right now, there’s more uncertainty than there are answers, as Google admits there is still a lot of work to be done. And based on the tone of several spokespersons in interviews, the company also expects some pushback and a degree of controversy that may arise from their efforts. Change is never easy.

Let’s keep an eye on this URLephant in the room, shall we? And let’s also keep giving feedback and raising questions. After all, this is Google’s way of keeping Chrome users away from URL-based threats. If changes are not implemented with thoughtful precision, then threat actors can easily find a way around them, or at least bank on the confusion resulting from a poor rollout of new processes.

While the future of URLs is still murky, one thing’s for certain: the bad guys know how to exploit weaknesses. So we hope, for Google and all its users’ sake, changes in URL display only serve to strengthen everyone’s security posture online.


The post Google Chrome announces plans to improve URL display, website identity appeared first on Malwarebytes Labs.


New critical vulnerability discovered in open-source office suites

Malwarebytes - Wed, 02/06/2019 - 17:16

A great number of attack techniques these days rely on Microsoft Office documents to distribute malware. In recent years, there has been serious development in document exploit kit builders, not to mention the myriad of tricks that red teamers have come up with to bypass security solutions.

In contrast to drive-by downloads that require no user interaction, document-based attacks usually incorporate some kind of social engineering component. From being lured into opening up an attachment to enabling the infamous macros, attackers are using all sorts of themes and spear phishing techniques to infect their victims.

While Microsoft Office gets all of the attention, other productivity software suites have been exploited before. We recall the Hangul Office Suite, which is popular in South Korea and has been abused by threat groups in targeted attacks.

Today we look at a vulnerability affecting LibreOffice, the free and open-source office suite, and OpenOffice (now Apache OpenOffice), both available for Windows, Mac, and Linux. The bug (CVE-2018-16858) was discovered by Alex Inführ, who responsibly disclosed it and then published the results with an accompanying proof of concept on his blog.

Proof of concept code exploiting the vulnerability and launching the calculator

An attacker could take advantage of this bug to execute remote code, which could lead to compromising the system. The flaw abuses a mouseover event, which means the user would have to be tricked into placing their mouse over a link within the document. This triggers the execution of a Python file (installed with LibreOffice) and allows arbitrary parameters to be passed to it.

We tested several proofs of concept shared by John Lambert. The process flow typically goes like this: soffice.exe -> soffice.bin -> cmd.exe -> calc.exe

The vulnerability has been patched in LibreOffice but not in Apache OpenOffice—yet. Malwarebytes users were already protected against it without the need for a detection update.

Time will tell if this vulnerability ends up being used in the wild. It’s worth noting that not everyone uses Microsoft Office, and threat actors could consider it for targeting specific victims they know may be using open-source productivity software.

The post New critical vulnerability discovered in open-source office suites appeared first on Malwarebytes Labs.


How to browse the Internet safely at work

Malwarebytes - Tue, 02/05/2019 - 16:00

This Safer Internet Day, we teamed up with ethical hacking and web application security company Detectify to provide security tips for both workplace Internet users and web developers. This article is aimed at employees of all levels. If you’re a programmer looking to create secure websites, visit Detectify’s blog to read their guide to HTTP security headers for web developers.

More and more businesses are becoming security- and privacy-conscious—as they should be. Where in years past IT departments’ pleas for a bigger cybersecurity budget fell on deaf ears, this year things have started looking up. Indeed, there is nothing quite like a lengthening string of security breaches to grab people’s—and executives’—attention.

Purely reacting to events is a terrible approach, and organizations that handle and store sensitive client information have learned this the hard way. It not only puts businesses in constant firefighting mode, but is also a sign that their current cybersecurity posture may be inadequate and in need of proper assessment and improvement.

Part of improving an organization’s cybersecurity posture has to do with increasing its employees’ awareness. Since employees are the first line of defense, it’s only logical to educate them about cybersecurity best practices, as well as the latest threats and trends. In addition, by providing users with a set of standards to adhere to, and maintaining those standards, organizations can create an intentional culture of security.

Developing these training regimens requires a lot of time, effort, and perhaps a metaphorical arm and a leg. Do not be discouraged. Companies can start improving their security posture now by sharing with employees a helpful and handy guide on how to safely browse the Internet at work, whether on a desktop, laptop, or mobile phone.

Safe Internet browsing at work: a guideline

Take note that some of what’s listed below may already be in your company’s Employee Internet Security Policy, but in case you don’t have such a policy in place (yet), the list below is a good starting point.

Make sure that your browser(s) installed on your work machine are up-to-date. The IT department may be responsible for updating employee operating systems (OSes) on remote and in-house devices, as well as other business-critical software. It may not be their job, however, to update software you’ve installed yourself, such as your preferred browser. The number one rule when browsing the Internet is to make sure that your browser is up-to-date. Threats such as malicious websites, malvertising, and exploit kits can find their way through vulnerabilities that out-of-date browsers leave behind.

While you’re at it, updating other software on your work devices keeps browser-based threats from finding other ways onto your system. If IT doesn’t already cover this, update your file-compressor, anti-malware program, productivity apps, and even media players. It’s a tedious and often time-consuming task, but—shall we say—updating is part of owning software. You can use a software updater program to make the ordeal more manageable. Just don’t forget to update your updater, too.

If you have software programs you no longer use or need, uninstall them. Let’s be practical: There’s really no reason to keep software if you’ve stopped using it or if it’s just part of the bloatware that came with your computer. It’s also likely that, since you’re not using that software, it’s badly outdated, making it an easy avenue for the bad guys to exploit. So do yourself a favor and get rid of it. That’s one less program to update.

Know thy browser and make the most of its features. Modern browsers like Brave, Vivaldi, and Microsoft Edge launched with quite a different approach than their predecessors. Besides their appealing customization options, they also boast of being secure (or private) by default. Meanwhile, browsers that have been around for a long time continue to improve on these aspects, as well as their versatility and performance.

Regardless of which browser you use, make it a point to review its settings (if you haven’t already) and configure them with security and privacy in mind. The US-CERT has more detailed information on how to secure browsers, which you can read through here.

Refrain from visiting sites that your colleagues or boss would frown upon if they looked over your shoulder. Most employees know that visiting and navigating to sites that are not safe for work (NSFW) is a no-no, but they still do it. Trouble is, not only does this welcome malware and other threats that target visitors of such sites, but it could also result in being—rightfully or not—accused of sexual harassment. Browsing sites of a pornographic nature could make coworkers incredibly uncomfortable, and if this behavior is generally tolerated by the brass, the company could become the subject of a hostile work environment claim. So if hackers don’t scare you, maybe a lawsuit will.

Use a password manager. It may sound like this advice is out of place, but we include it for a reason. Password managers don’t just store a multitude of passwords and keep them safe. They also refuse to pre-fill credential fields on seemingly legitimate but ultimately malicious sites, making them an unlikely protector against phishing attempts. So the next time you receive an email from your “bank” telling you there’s a breach and you have to update your password, and your password manager refuses to pre-fill that information, scrutinize the URL in the address bar carefully. You might be on a site you don’t want to be on.

Read: Why you don’t need 27 different passwords

Consider installing apps that act as another layer of protection. There is a trove of fantastic browser apps out there that a privacy- and security-conscious employee can greatly benefit from. Ad blockers, for instance, can strip out ads on sites that have been used by malicious actors before in malvertising campaigns. Tracker blockers allow one to block trackers on sites that monitor their behavior and gather information about them without their consent. Script blockers disable or prevent the execution of browser scripts, which criminals can misuse. Other apps, such as HTTPS Everywhere, force one’s browser to direct users to available HTTPS versions of websites.

Consider sandboxing. A sandbox is software that emulates an environment where one can browse the Internet and run programs independently from the actual endpoint. It’s typically used for testing and analyzing files to check if they’re safe to deploy and run.

We’re not saying that employees should know how to analyze files (although kudos if you can). Only that employees who normally open attachments from their personal emails, stumble into sites that may be deemed sketchy at best, or want to check out programs from third-party vendors do so in a safe setup that is isolated from their office network. Here is a list of free sandbox software you can read more about if you’re interested in trying one out.

Assume you are a target. Not many employees would like to admit this. In fact, it may not have crossed their minds until now. A lot of small businesses, for example, would like to think that they cannot be targets of cyberattacks because criminals wouldn’t go after “the little guy.” But various surveys, intelligence, and research tell a different story.

Employees need to change their thinking. Each time we go online at work, whether for valid reasons or not, we are putting our companies at risk. So we must take the initiative to browse safely, adopt cybersecurity best practices, and embrace training sessions with open minds. Realize that a lot is at stake in the office environment, and a single mouse click on a bad link could bring down an entire business. Do you want to be the person responsible?

We’re all in this together

When it comes to preventing online threats from infiltrating your organization’s network and keeping sensitive company and client data secure, these are no longer just IT concerns. Cybersecurity and privacy are, and should be, every employee’s concern—from the rank-and-file up to the managerial and executive level.

Indeed, no one should be exempt from continuous cybersecurity training, nor should high-ranking officials go on thinking that company policies don’t apply to them. If every employee can adhere to the simple guidelines above, we believe that organizations of all sizes will already be in a better security posture than before. This is just the first step, however. Organizations still need to assess their cybersecurity and privacy needs so they can effectively invest in tools and services that help secure their unique work environment. Whatever changes they choose to implement that require employee participation, IT and high-ranking officials must ensure that everyone is in it together.

Stay safe!


The post How to browse the Internet safely at work appeared first on Malwarebytes Labs.


Movie stream ebooks gun for John Wick 3 on Kindle store

Malwarebytes - Mon, 02/04/2019 - 17:30

We discovered a novel spam campaign over the weekend targeting fans of John Wick on the Amazon Kindle store. The scam involves paying for what appears to be the upcoming third movie, which turns out to be a bogus ebook that hyperlinks potential victims to a collection of third-party websites.

How does this begin?

With a dog, a grieving assassin, and a pencil.

Actually, it begins with me hunting for John Wick graphic novels on the Kindle store. What I found isn’t exactly hidden from view—as you can see from the screenshots, the bogus results kick in right under the second genuine entry:


What are we looking at here?

Roughly 40 individual items uploaded from around January 25 to February 2, each one from a different “author.” At first glance, you might think you’re looking at movies, thanks to the play button icon on each image preview. The fact that each entry is called something along the lines of “John Wick 3: free movie HD” probably helps, too.


All of the items are on sale for a variety of prices including £0.99 each, £9.93, £12.19, and up to an astonishing £15.25 (roughly $20 USD). A few of them are listed as free, and all of them have a preview available.


At this point, someone seeing this may think they’re actually buying a copy of John Wick 3. This is where it gets interesting.

This isn’t John Wick 3, is it?

Correct, it absolutely is not John Wick 3. What we have here is an incredibly basic ebook with a “play movie” image bolted onto the preview. Opening up the preview gives us a slice of “coming soon” style text for the movie, due out in May.

The text reads as follows, and appears to be the same content used in each ebook:

John Wick: Chapter 3 – Parabellum 

When we last observed John Wick, he wasn’t in the best shape as he’d quite recently had a worldwide contract hit put out on him toward the finish of John Wick: Chapter 2.  

So most would agree that the third motion picture in the hit activity establishment, driven by Keanu Reeves, won’t be a steady walk around the recreation center. Indeed, even the full title, John Wick: 

Chapter 3 – Parabellum, insights at the massacre in store as Reeves clarified recently.  

“[It means] get ready for war. It’s a piece of that popular sentence, ‘Si vis pacem, para bellum’ which interprets as, ‘On the off chance that you need harmony, get ready for war’,” he laid out. All things considered, Wick said he’d “execute them all” toward the finish of Chapter 2.

Looking at the “Click here” text isn’t useful on a mobile device, because in practice I couldn’t get it to recognise my clicks. I also couldn’t figure out what the clickable link was from looking at it on the mobile, either. With that in mind, it was time to port over to a desktop and fire up an appropriate reader.

A quick port to a desktop reader later, and we now have a fully clickable link:


Where does the link go?

It takes would-be Wick watchers to:

Livemovie(dot)xyz/play(dot)php?movie=458156

Which is a portal that claims to offer up multiple movies:


The movie we’re interested in here is John Wick 3:


No matter what you do at this point, the only option here is “be forwarded to another site” via the register button: 


Our tour of the movie world upside-down now takes us to:

Flowerfun(dot)net/en/html/sf/registration/eone.html


This style of site may be familiar to regular readers. Such sites typically claim to offer all sorts of media content with free sign-ups, but there’s usually a rolling charge or fee somewhere in the mix. The site says the following:

You agree that, on registration for a Membership, you authorise us to place a pre-authorisation hold (between USD $1.00 to 2.00) on your Payment Card to validate your billing address and other Payment Card information.

Depending on your region, you may find yourself sent to similar sites like:

signup(dot)lymemedia(dot)net


However, there is no further information in the T&C or Privacy Policy for either site that states exactly what sort of payment is (or isn’t) expected after signing up. One thing is for certain: Someone wasting up to £15 on a bogus ebook then bouncing from site to site isn’t going to end up with a legitimate version of John Wick 3.

Don’t set him off

It’s tricky to flag dubious content on the Kindle store, as you have to report each title individually and give reasons. We contacted Amazon customer support and have been informed these ebooks have been escalated to the appropriate teams.

Amazon has had problems with fake ebooks before, though those were in the business of swiping authors’ content and making as much money as possible before being shut down. What we have here are worthless ebooks with no content, save for clickthrough links to streaming portals. At the time of writing, the ebooks we discovered are still available for purchase [UPDATE, 5th Feb: we’ve not heard back from Amazon, although all of the dubious ebooks now appear to have been removed].

If you’re on the hunt for John Wick, the lesson is clear: don’t bring an ebook to a gunfight.

The post Movie stream ebooks gun for John Wick 3 on Kindle store appeared first on Malwarebytes Labs.


A week in security (January 28 – February 3)

Malwarebytes - Mon, 02/04/2019 - 17:00

Last week, we ran another installment in our interview series with a malware hunter, explained a FaceTime vulnerability, and took a deep dive into a new stealer. We also shed some light on the Houzz data breach, and what exactly happened between Apple and Facebook.

Other cybersecurity news
  • Kwik Fit hit by malware: Car service specialist runs into trouble when systems go offline. (Source: BBC)
  • Mozilla publishes tracking policy: Mozilla fleshes out their vision of what is and isn’t acceptable in tracking land. (Source: Mozilla)
  • Distracting smart speakers: How you can effectively drown out your smart speaker with a bit of distraction. (Source: The Register)
  • Privacy attack aimed at 3/4/5G users: Theoretical fake mobile towers are back in business, with an investment in monitoring device owner activities. (Source: Help Net Security)
  • How my Instagram was hacked: A good warning about the perils of password reuse. (Source: Naked Security)
  • Social media identity thieves: Scammers will stop at nothing to pull some heartstrings and make a little money in the bargain. (Source: ABC news)
  • Another smart home hacked: A family recounts their horror at seeing portions of their home cut open for someone’s amusement. (Source: Komando)
  • Facebook mashup: Plans to combine Whatsapp, Instagram, and Facebook Messenger are revealed with security questions raised. (Source: New York Times)
  • Phishing attacks continue to rise: Worrying stats via security experts polled who agree in large numbers that phishing is at the same level or higher than it was previously. (Source: Mashable)
  • Researchers discover malware-friendly hosting service: After a spike in infections, researchers track things back to a host that looked like a “hornet’s nest of malware.” (Source: TechCrunch)

Stay safe, everyone!

The post A week in security (January 28 – February 3) appeared first on Malwarebytes Labs.


Houzz data breach: Why informing your customers is the right call

Malwarebytes - Fri, 02/01/2019 - 18:00

Houzz is an online platform dedicated to home renovation and design. Today (February 1, 2019), they notified their customers about a data breach that reportedly happened in December 2018.

Data breaches have unfortunately become a common event. In fact, we dubbed 2018 the year of the data breach tsunami. And Houzz is not a giant corporation with millions of customers. So why are we writing about this, you may ask? Mainly because we feel there are some giant corporations out there who could learn from this event as an example of how to handle a data breach properly.

Turnaround

Discovering a breach and informing your customers about it less than two months after it happened is a lot better than what we have seen from other companies recently. Houzz did not wait until the investigation into how the breach happened was finished. As soon as they knew what was stolen, they decided to inform those concerned. Of course, it is imperative that accurate information reaches your customers as soon as possible, which is probably why the investigation is being conducted by a leading forensics firm. Law enforcement has been notified as well.

Informing customers

Houzz informed their customers directly by email, as well as on their website, about the breach. They said:

Houzz recently learned that a file containing some of our user data was obtained by an unauthorized third party.

The mail starts with this disclosure, then goes on to explain what happened and what information was stolen. It also contains a link to their website, where you can find more information.

The information given is concise and precise—not just a general remark that no financial information was stolen (which, thankfully, it wasn’t). Houzz included a list of the information that was stolen.

The following types of information could have been impacted by this incident:

  • Certain publicly visible information from a user’s Houzz profile only if the user made this information publicly available (e.g., first name, last name, city, state, country, profile description)
  • Certain internal identifiers and fields that have no discernible meaning to anyone outside of Houzz (e.g. country of site used, whether a user has a profile image)
  • Certain internal account information (e.g., user ID, prior Houzz usernames, one-way encrypted passwords salted uniquely per user, IP address, and city and ZIP code inferred from IP address) and certain publicly available account information (e.g., current Houzz username and if a user logs into Houzz through Facebook, the user’s public Facebook ID)

Importantly, this incident does not involve Social Security numbers or payment card, bank account, or other financial information.

On the website, customers can find detailed information on how to change their password. And, like we have done in the past, they advise their customers to use a unique password for each service, which does not need to be as big a hassle as you might expect.

Improvements

Houzz announced security improvements without going into detail. While customers might find this vague, it makes sense to withhold the specifics, as the investigation is ongoing, and they wouldn’t want to make threat actors any wiser. Seeing that they were already using one-way encrypted passwords salted uniquely per user was certainly encouraging.
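For readers unfamiliar with the phrasing, “one-way encrypted passwords salted uniquely per user” describes standard salted password hashing. As a purely illustrative sketch—we have no insight into Houzz’s actual implementation—here is how that property looks in Go with the bcrypt package, which generates and embeds a fresh random salt for every hash:

package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	// bcrypt generates a fresh random salt on every call and stores it inside
	// the resulting hash string, so each user's password is salted uniquely.
	hash, err := bcrypt.GenerateFromPassword([]byte("correct horse battery staple"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(hash)) // e.g. $2a$10$... – one-way, with the per-user salt included

	// Verification re-hashes the supplied password with the stored salt.
	err = bcrypt.CompareHashAndPassword(hash, []byte("correct horse battery staple"))
	fmt.Println("password matches:", err == nil)
}

Because the hash is one-way and salted per user, stolen password data like this cannot simply be reversed or attacked with precomputed tables, which buys users time to change their passwords.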

Dealing with data breaches

Data breaches happen all the time. They happen to the best of companies. It’s the way those organizations deal with them that can save face. Here is what other businesses can take away from this example:

  • Inform customers as soon as it makes sense and be precise about the stolen information.
  • Approach your customers directly. Don’t let them read about it in the papers or social media.
  • Engage law enforcement and a firm specialized in forensic investigations.
  • Learn from what went wrong and improve on that.

Stay safe, everyone!

The post Houzz data breach: Why informing your customers is the right call appeared first on Malwarebytes Labs.


Apple pulls Facebook enterprise certificate

Malwarebytes - Thu, 01/31/2019 - 16:44

It’s been an astonishing few days for Facebook. They’ve seen an app removed and their enterprise certificate revoked, with big consequences.

What happened?

Apple issues enterprise certificates to organizations so that they can create internal apps. Those apps don’t end up on the App Store, because the terms of service don’t allow it. Anything storefront-bound must go through Apple’s mandatory app checks before being loaded up for sale.

What went wrong?

Facebook put together a “Facebook Research” market research app using the internal process. However, they then went on to distribute it externally to non-Facebook employees. And by “non-Facebook employees,” we mean “people between the ages of 13 and 35.” In return for access to large swathes of user data, the participants received monthly $20 gift cards.

The program was managed via various beta testing services, and within hours of the news breaking, Facebook stated they’d pulled the app.

Problem solved?

Not exactly. Apple has, in fact, revoked Facebook’s certificate, essentially breaking all of their internal apps and causing major disruptions for their 33,000 or so employees in the process. As per the Apple statement:

We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers…a clear breach of their agreement.

Whoops

Yes, whoops. Now the race is on to get things back up and running over at Facebook HQ. Things may be a little tense behind the scenes due to, uh, something similar involving a VPN-themed app collecting data it shouldn’t have. That one didn’t use the developer certificate, but it took some 33 million downloads before Apple noticed and decided to pull the plug.

Could things get any worse for Facebook?

Cue Senator Ed Markey, with a statement on this particular subject:

It is inherently manipulative to offer teens money in exchange for their personal information when younger users don’t have a clear understanding of how much data they’re handing over and how sensitive it is,” said Senator Markey. “I strongly urge Facebook to immediately cease its recruitment of teens for its Research Program and explicitly prohibit minors from participating. Congress also needs to pass legislation that updates children’s online privacy rules for the 21st century. I will be reintroducing my ‘Do Not Track Kids Act’ to update the Children’s Online Privacy Protection Act by instituting key privacy safeguards for teens.

But my concerns also extend to adult users. I am alarmed by reports that Facebook is not providing participants with complete information about the extent of the information that the company can access through this program. Consumers deserve simple and clear explanations of what data is being collected and how it is being used.

Well, that definitely sounds like a slide towards “worse” instead of “better.”

A one-two punch?

Facebook has already drawn heavy criticism this past week for the wonderfully named “friendly fraud” practice of kids making dubious purchases and the chargebacks that followed. It happens, sure, but perhaps not quite like this. From the linked Register article:

Facebook, according to the full lawsuit, was encouraging game devs to build Facebook-hosted games that allowed children to input parents’ credit card details, save those details, and then bill over and over without further authorisation.

While large amounts of money were being spent, some refunds proved to be problematic. Employees questioned why most apps with child-related issues were “defaulting to the highest-cost setting in the purchase flows.” You’d better believe there may be further issues worth addressing.

What next?

The Facebook Research app will continue to run on Android, which is unaffected by the certificate antics. There’s also a similar app from Google in Apple land, which has since been pulled because it, too, operated under Apple’s developer enterprise program. No word yet as to whether or not Apple will revoke Google’s certificate as well. It could be a bumpy few days for some organizations as we wait to see what Apple does next. Facebook, too, could certainly do with a lot less bad publicity as it struggles to regain positive momentum. Whether that happens or not remains to be seen.

The post Apple pulls Facebook enterprise certificate appeared first on Malwarebytes Labs.


Analyzing a new stealer written in Golang

Malwarebytes - Wed, 01/30/2019 - 17:00

Golang (Go) is a relatively new programming language, and it is not common to find malware written in it. However, new variants written in Go are slowly emerging, presenting a challenge to malware analysts. Applications written in this language are bulky and look much different under a debugger from those that are compiled in other languages, such as C/C++.

Recently, a new variant of the Zebrocy malware was observed that was written in Go (detailed analysis available here).

We captured another type of malware written in Go in our lab. This time, it was a pretty simple stealer, detected by Malwarebytes as Trojan.CryptoStealer.Go. This post will provide details on its functionality, but also show methods and tools that can be applied to analyze other malware written in Go.

Analyzed sample

This stealer is detected by Malwarebytes as Trojan.CryptoStealer.Go:

Behavioral analysis

Under the hood, Golang calls the Windows APIs, and we can trace those calls using typical tools, for example, PIN tracers. We see that the malware searches for files under the following paths:

"C:\Users\tester\AppData\Local\Uran\User Data\" "C:\Users\tester\AppData\Local\Amigo\User\User Data\" "C:\Users\tester\AppData\Local\Torch\User Data\" "C:\Users\tester\AppData\Local\Chromium\User Data\" "C:\Users\tester\AppData\Local\Nichrome\User Data\" "C:\Users\tester\AppData\Local\Google\Chrome\User Data\" "C:\Users\tester\AppData\Local\360Browser\Browser\User Data\" "C:\Users\tester\AppData\Local\Maxthon3\User Data\" "C:\Users\tester\AppData\Local\Comodo\User Data\" "C:\Users\tester\AppData\Local\CocCoc\Browser\User Data\" "C:\Users\tester\AppData\Local\Vivaldi\User Data\" "C:\Users\tester\AppData\Roaming\Opera Software\" "C:\Users\tester\AppData\Local\Kometa\User Data\" "C:\Users\tester\AppData\Local\Comodo\Dragon\User Data\" "C:\Users\tester\AppData\Local\Sputnik\Sputnik\User Data\" "C:\Users\tester\AppData\Local\Google (x86)\Chrome\User Data\" "C:\Users\tester\AppData\Local\Orbitum\User Data\" "C:\Users\tester\AppData\Local\Yandex\YandexBrowser\User Data\" "C:\Users\tester\AppData\Local\K-Melon\User Data\"

Those paths point to data stored by browsers. One interesting fact is that one of the paths points to the Yandex browser, which is popular mainly in Russia.

The next searched path is for the desktop:

"C:\Users\tester\Desktop\*"

All files found there are copied to a folder created in %APPDATA%:

The folder “Desktop” contains all the TXT files copied from the Desktop and its sub-folders. Example from our test machine:

After the search is completed, the files are zipped:

We can see this packet being sent to the C&C (cu23880.tmweb.ru/landing.php):

Inside

Golang-compiled binaries are usually big, so it’s no surprise that the sample has been packed with UPX to minimize its size. We can unpack it easily with the standard UPX tool. As a result, we get a plain Go binary. The export table reveals the compilation path and some other interesting functions:

Looking at those exports, we can get an idea of the static libraries used inside.

Many of those functions (trampoline-related) can be found in the module sqlite-3: https://github.com/mattn/go-sqlite3/blob/master/callback.go.

Function crosscall2 comes from the Go runtime, and it is related to calling Go from C/C++ applications (https://golang.org/src/cmd/cgo/out.go).

Tools

For the analysis, I used IDA Pro along with the IDAGolangHelper scripts written by George Zaytsev. First, the Go executable has to be loaded into IDA. Then, we can run the script from the menu (File -> Script file). We then see the following menu, giving access to particular features:

First, we need to determine the Golang version (the script offers some helpful heuristics). In this case, it will be Go 1.2. Then, we can rename functions and add standard Go types. After completing those operations, the code looks much more readable. Below, you can see the view of the functions before and after using the scripts.

Before (only the exported functions are named):

After (most of the functions have their names automatically resolved and added):

Many of those functions come from statically linked libraries. So, we need to focus primarily on the functions annotated as main_*, which are specific to this particular executable.

Code overview

In the function “main_init”, we can see the modules that will be used in the application:

It is statically linked with the following modules:

Analyzing this function can help us predict the malware’s functionality; i.e., looking at the above libraries, we can see that it will be communicating over the network, reading SQLite3 databases, and throwing exceptions. Other initializers suggest the use of regular expressions, the ZIP format, and reading environment variables.

This function is also responsible for initializing and mapping strings. We can see that some of them are first base64 decoded:

In the string initializers, we see references to cryptocurrency wallets.

Ethereum:

Monero:
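When pulling these strings out of the binary, the base64-encoded ones can be turned back into readable text with a couple of lines of Node.js. Here is a small sketch; for illustration, it simply round-trips one of the wallet paths listed later in this post rather than a blob taken from the sample:

// Round-trip one of the wallet paths the stealer looks for,
// to show what the encoded form looks like inside the binary
const plain = '\\AppData\\Roaming\\Ethereum\\keystore'
const encoded = Buffer.from(plain).toString('base64')
console.log(encoded)                                    // base64 form, as embedded
console.log(Buffer.from(encoded, 'base64').toString())  // decoded back for analysis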

The main function of a Golang binary is annotated “main_main”.

Here, we can see that the application is creating a new directory (using the function os.Mkdir). This is the directory where the found files will be copied.

After that, several Goroutines are started using runtime.newproc. (Goroutines can be used similarly to threads, but they are managed differently; more details can be found here.) Those routines are responsible for searching for the files. Meanwhile, the SQLite module is used to parse the databases in order to steal data.

Then, the malware zips it all into one package, and finally, the package is uploaded to the C&C.

What was stolen?

To see exactly which data the attacker is interested in, we can look more closely at the functions that perform the SQL queries, and at the related strings.

Strings in Golang are stored in bulk, in concatenated form:

Later, a single chunk from this bulk is retrieved on demand. Therefore, seeing from which place in the code each string was referenced is not so easy.

Below is a fragment of the code where an “sqlite3” database is opened (a string of length 7 was retrieved):

Another example: this query was retrieved from the full chunk of strings by a given offset and length:
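To make the scheme concrete, here is a toy JavaScript illustration of how such (offset, length) references resolve against a single concatenated blob. The blob and offsets below are invented for the example; they are not taken from the sample:

// One concatenated blob holding several strings back to back
const blob = 'sqlite3selectopen'

// A "string reference" is just an offset and a length into the blob
const ref = (offset, length) => blob.slice(offset, offset + length)

console.log(ref(0, 7)) // "sqlite3" - a string of length 7, as in the fragment above
console.log(ref(7, 6)) // "select"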

Let’s take a look at which data those queries were trying to fetch. Fetching the strings referenced by the calls, we can retrieve and list all of them:

select name_on_card, expiration_month, expiration_year, card_number_encrypted, billing_address_id FROM credit_cards
select * FROM autofill_profiles
select email FROM autofill_profile_emails
select number FROM autofill_profile_phone
select first_name, middle_name, last_name, full_name FROM autofill_profile_names

We can see that the browser databases are queried in search of data related to online transactions: credit card numbers and expiration dates, as well as personal data such as names and email addresses.
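To see what such a query returns on a test machine, an analyst can run it against a copy of the browser’s autofill database. Below is a minimal Node.js sketch, assuming the third-party better-sqlite3 package is installed and a copy of Chrome’s “Web Data” file sits in the working directory (the query itself is taken from the list above):

const Database = require('better-sqlite3') // npm install better-sqlite3

// Always work on a copy of the database, never on the live profile
const db = new Database('Web Data', { readonly: true })

const rows = db.prepare(
  'select name_on_card, expiration_month, expiration_year FROM credit_cards'
).all()

// card_number_encrypted is left out here: the browser stores it encrypted,
// so it would need to be decrypted separately
console.log(rows.length + ' saved card(s) found')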

The paths to all the files being searched are stored as base64 strings. Many of them are related to cryptocurrency wallets, but we can also find references to the Telegram messenger.

Software\\Classes\\tdesktop.tg\\shell\\open\\command
\\AppData\\Local\\Yandex\\YandexBrowser\\User Data\\
\\AppData\\Roaming\\Electrum\\wallets\\default_wallet
\\AppData\\Local\\Torch\\User Data\\
\\AppData\\Local\\Uran\\User Data\\
\\AppData\\Roaming\\Opera Software\\
\\AppData\\Local\\Comodo\\User Data\\
\\AppData\\Local\\Chromium\\User Data\\
\\AppData\\Local\\Chromodo\\User Data\\
\\AppData\\Local\\Kometa\\User Data\\
\\AppData\\Local\\K-Melon\\User Data\\
\\AppData\\Local\\Orbitum\\User Data\\
\\AppData\\Local\\Maxthon3\\User Data\\
\\AppData\\Local\\Nichrome\\User Data\\
\\AppData\\Local\\Vivaldi\\User Data\\
\\AppData\\Roaming\\BBQCoin\\wallet.dat
\\AppData\\Roaming\\Bitcoin\\wallet.dat
\\AppData\\Roaming\\Ethereum\\keystore
\\AppData\\Roaming\\Exodus\\seed.seco
\\AppData\\Roaming\\Franko\\wallet.dat
\\AppData\\Roaming\\IOCoin\\wallet.dat
\\AppData\\Roaming\\Ixcoin\\wallet.dat
\\AppData\\Roaming\\Mincoin\\wallet.dat
\\AppData\\Roaming\\YACoin\\wallet.dat
\\AppData\\Roaming\\Zcash\\wallet.dat
\\AppData\\Roaming\\devcoin\\wallet.dat

Big but unsophisticated malware

Some of the concepts used in this malware remind us of other stealers, such as Evrial, PredatorTheThief, and Vidar. It has similar targets and also sends the stolen data as a ZIP file to the C&C. However, there is no proof that the author of this stealer is somehow linked with those cases.

When we take a look at the implementation as well as the functionality of this malware, it’s rather simple. Its big size comes from the many statically linked modules. Possibly, this malware is in the early stages of development; its author may have just started learning Go and is experimenting. We will be keeping an eye on its development.

At first, analyzing a Golang-compiled application might feel overwhelming, because of its huge codebase and unfamiliar structure. But with the help of proper tools, security researchers can easily navigate this labyrinth, as all the functions are labeled. Since Golang is a relatively new programming language, we can expect that the tools to analyze it will mature with time.

Is malware written in Go an emerging trend in threat development? It’s a little too soon to tell. But we do know that awareness of malware written in new languages is important for our community.

The post Analyzing a new stealer written in Golang appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Apple’s FaceTime privacy bug allowed possible spying

Malwarebytes - Tue, 01/29/2019 - 19:00

Social media caught fire yesterday as the news of a new Apple bug spread. It seemed that there was a flaw in FaceTime that allowed you to place a call to someone and listen in on their microphone if they didn’t pick up. Worse, as the news spread, it turned out that there was also a way to capture video from the camera on the target device, and that this issue was affecting not just iPhones and iPads, but Macs as well.

The result was a chorus of voices all saying the same thing: turn off FaceTime. The good news, though, if you’re just tuning in now, is that this is completely unnecessary, as Apple has disabled the service that allowed this bug to work.

How did the bug work?

The bug relied entirely on a feature of iOS 12.1 and macOS 10.14.1 called Group FaceTime. If you are using an older version of iOS or macOS, you have nothing to fear.

The bug involved doing something a bit unusual with Group FaceTime. First, you would have to place a FaceTime call to your intended victim. Next, while the call is still ringing, you would need to bring up the Add Person screen and add yourself to the call. Doing this would invoke Group FaceTime, and the microphone of the intended target would be activated, even if they didn’t answer.

Capturing video from the target phone’s camera required one of two known techniques. One would be to hope that the recipient pressed the power button on the phone to “decline” the call, in which case the camera would turn on as well. (Of course, if they pressed it twice, as some have become accustomed to doing on iPhones in these days of scam calls, that would cut the video off again. But you’d still see a flash of video.)

Alternately, you could apparently join the call from another device, which would also turn on the recipient’s camera. (Although I was able to test and verify everything else, I didn’t know about this trick until after Apple disabled Group FaceTime, so I can’t verify this one from personal experience.)

What were the dangers?

To make this work, you would need to rely on the target not answering, which could potentially be orchestrated if the target’s activities were known and it was likely that he or she would both be disinclined to answer at the time of the call, and be doing or saying something of interest. (I think we can all think of at least one such activity!)

Fortunately, this did pretty much rule out generalized surveillance, though nonetheless, there were some valiant efforts (most likely pranks) in the brief time the bug was known.

This also didn’t open up an open-ended wiretap. FaceTime rings for a while, but not forever. At most, you might get about a minute or so of spying. It’s also not the stealthiest of attacks, since you’d literally be announcing yourself in the process.

All this means that the risks were fairly low for anything beyond a prank. I personally did not feel it necessary to turn off FaceTime on my devices. Once I was aware, I could have simply covered the camera and ended the call—or had a little fun with the caller by playing Rick Astley into the phone’s mic!

How was this resolved?

Apple temporarily solved the problem by disabling Group FaceTime on their servers. This means that you can no longer add people to a FaceTime call, so the bug currently cannot be triggered. Apple will undoubtedly release iOS and macOS updates with a fix for this bug.

It’s unknown how soon Apple will re-enable Group FaceTime after that update is released, so if you’re on iOS 12.1 or macOS 10.14.1, it will be of great importance to install the next update in a timely fashion! You don’t want to be caught with your pants down (possibly literally) on a vulnerable system after the Group FaceTime switch is turned back on.

How did this happen?

Apple has had an unusually large number of high-profile and embarrassing bugs of late, which has led many people to ask what has happened to Apple’s quality assurance process. This bug is no exception.

Worse, it appears that at least one person knew about the bug almost two weeks before the news broke, and had been trying to alert Apple.

It’s unknown at this point exactly which points of contact for Apple this person was using, so it’s entirely possible that the right people at Apple didn’t learn about it until they saw it on the news. Since Apple didn’t disable Group FaceTime until after the news broke, I would hope that this is the case. It would be far more concerning if the right people at Apple knew about the bug, but didn’t make the call to disable Group FaceTime.

What’s the takeaway?

Bottom line, at this point, there’s absolutely no reason to panic or to turn off FaceTime. If you turned off FaceTime, and you want to turn it back on, it’s safe to do so, as long as you don’t delay installing the next update. There’s no indication that FaceTime can be abused without having Group FaceTime available.

There will be some who cite this as a reason to delay installing system updates. They will say that you should wait and let others work out the bugs. However, this is questionable advice. If you stay on an old version of iOS or macOS, you are using a system that has known security issues. That’s a far riskier proposition than updating to a newer version of the system where there aren’t (yet) any known security issues. From a security perspective, you should always install updates in a timely fashion.

The post Apple’s FaceTime privacy bug allowed possible spying appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Interview with a malware hunter: Jérôme Segura

Malwarebytes - Tue, 01/29/2019 - 16:00

In our series “Interview with a malware hunter,” our feature role today goes to Jérôme Segura, Malwarebytes’ Head of Threat Intelligence and world-renowned exploit kits researcher. The goal of this series is to introduce our readers to our malware intelligence crew by involving them in these Q&A sessions. So, let’s get started.

Where are you from, and where do you live now?

I was born and raised in France. After graduating from university, I moved over to North America, where I currently reside.

You are most famous for your exploit kit research. How did you get involved in that field?

I think I first got into exploit kits around 2007. I was working for a small company, and my job was to find new malware samples. I recall learning about drive-by downloads and reading an important book: Virtual Honeypots: From Botnet Tracking to Intrusion Detection by Niels Provos and Thorsten Holz.

After reading this book, I wrote a very basic prototype for a honeypot that would capture payloads from drive-by attacks.

This is also around the same time that I discovered the Fiddler web debugger tool that I have used on almost a daily basis ever since.

Are there any other fields that have your special interest?

Over the years, I’ve been curious about different fields that have come up, mostly by chance. For example, when I first started working remotely, I once received a phone call from tech support scammers. While I could have forgotten about it, it made an impression on me, so much so that it led to writing more than 30 blog posts on the topic and working with the FTC to shut down a multi-million-dollar operation in the US.

Did you major in computer sciences? Or did you switch to cybersecurity later?

I graduated with a Master’s in Information Systems, which at the time was not specific to computer science (by the way, I got my first computer at 18 years of age), but also included law, economics, and even things like accounting. Cybersecurity came up much later.

How long have you been a security researcher?

I’ve done malware research for about 12 years.

How did you end up working for Malwarebytes?

After working for the same company for a number of years, I found myself needing a new opportunity. Even though social media sites were not as big then, it was via a Twitter message from long-time malwarenaut Mieke [Malwarebytes Director of Research] that I got here.

What’s the most interesting/impactful discovery you’ve made as a researcher?

That’s tough to say. There is work that I’ve done that was really interesting and that I devoted a lot of time to, but perhaps didn’t have as much of an impact or didn’t get published.

What’s the biggest cybersecurity “fail” you’ve witnessed?

There are a lot of fails happening every day, but I think what struck me most was to see poor security practices in person. For example, seeing computers at the hospital left unlocked, running outdated software. The same ones where doctors store your personal and health records.

At the same time, I understand that lack of awareness or small budgets are some of the reasons why this is happening, and individual people aren’t always to blame.

Can you give us an impression of what a typical workday looks like for you?

The interesting thing about our job is that there is an unexpected element to it which reflects heavily on the day’s schedule. You could be reviewing logs or responding to emails when something comes up and needs your immediate attention.

Otherwise, a lot of the job consists of checking on various indicators to get a sense of what’s going on and then digging deeper when something seems new.

What kind of skills does a person need to be a malware intelligence researcher?

There are many different skill sets that can apply to be a malware intelligence researcher. Our field is vast, and few people can claim to possess all the diverse skills there are. Personally, I would say that attention to detail and persistence are really valuable qualities to have. Many other skills can be taught later on.

What advice do you have for people who want to break into the field?

There are a few young people who have come to me in the past asking for advice on how to get into this field. I always tell them to stay curious, keep learning, and publish their work and discoveries. One of the best things you can do is get exposure by showing your craft to outside folks. If you keep at it, eventually it will pay off.

The post Interview with a malware hunter: Jérôme Segura appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (January 21 – 27)

Malwarebytes - Mon, 01/28/2019 - 18:00

Last week on the Malwarebytes Labs blog, we took a look at Modlishka, the latest hurdle in two-factor authentication (2FA), the potential for abuse of push notifications, a malware-phishing combo by the name of CryTekk ransomware, and why we detect PUPs, but enforce the power of users’ choice.

We also pushed out the 2019 State of Malware report, which you can readily download here.

Other cybersecurity news
  • Fortnite, the hugely popular video game, uses in-game currency, and this, The Independent has found, is fueling money laundering schemes. (Source: PYMNTS.com)
  • Thanks to the new European General Data Protection Regulation (GDPR) privacy law, a French regulator fined Google to the tune of €50 million ($56.8 million) for not obtaining sufficient user consent for data collection and targeted advertising. (Source: The Wall Street Journal)
  • A clever mobile malware affecting Android devices is able to elude emulators (tools used by security researchers to study potentially malicious apps) by running only when it detects that the device it’s installed on is moving. (Source: Ars Technica)
  • A recently released list of the top out-of-date (aka vulnerable) applications installed on computer systems includes a number of Adobe products, Skype, Firefox, and VLC. If you have any of these installed, now is a good time to update them. (Source: Help Net Security)
  • Automatic license plate recognition (ALPR) cameras, known as automatic number plate recognition (ANPR) in the UK, track license plates. Some of them are connected to the Internet, leaking sensitive data and vulnerable to attacks. (Source: TechCrunch)
  • Authentication weaknesses at GoDaddy, the world’s largest domain name registrar, make it possible for disruptive spam, malware, and phishing campaigns to take advantage of dormant websites owned by trusted brands. (Source: KrebsOnSecurity)
  • Japanese car manufacturer Mitsubishi has created its own cybersecurity technology for cars, inspired by defenses designed for systems in critical infrastructure. (Source: Security Week)
  • Researchers from the Cyprus University of Technology, the University of Alabama at Birmingham, Telefonica Research, and Boston University authored a paper and created a deep learning classifier that protects children from disturbing videos on YouTube by detecting such content. (Source: Bleeping Computer)
  • A new voicemail phishing campaign that uses recorded messages attached to emails is fooling recipients into verifying their passwords twice to confirm the legitimacy of their credentials. (Source: Bleeping Computer)
  • A convincing new attack abusing Google Cloud Platform’s (GCP) App Engine has come to light; it mostly targets organizations in the financial sector. The Cobalt Strike group is behind this campaign. (Source: Dark Reading)

Stay safe, everyone!

The post A week in security (January 21 – 27) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

What does ‘consent to tracking’ really mean?

Malwarebytes - Mon, 01/28/2019 - 16:00

Thanks to Jerome Boursier for contributions.

Post GDPR, many social media platforms will ask end users to consent to some form of tracking as a condition of using the service. It’s easy to make assumptions about what that means, especially when the actual terms of service or data policy for the service in question are tough to find, full of legal jargon, or just long and boring. Part of the shock of the Facebook stories was in discovering just how expansive their consent to tracking really was. Let’s take a look at what can happen after you hit OK on a new site’s Terms of Service.

What we think they’re doing

Most commonly, users think that social media sites limit their tracking to actual interactions with the site while logged in. This includes likes, follows, favorites, and general use of the site as intended. Those interactions are then analyzed to determine a user’s rough interests and to serve them corresponding ads.

We asked some non-technical Malwarebytes staffers what they thought popular companies collected on them and got the following responses:

“Hmm I would assume just my name, birthday, trends in the hashtags I use, and locations I’m at. Nothing else.”

“As far as IG goes, I’m guessing they collect data on the hashtags I follow and what I look at because all the ads are home improvement ads.”

While these are common use cases for tracking, innovations in user surveillance have allowed companies to take much more invasive actions.

What they’re actually doing

The Cambridge Analytica reports were quite shocking, but in theory their data practices were actually a violation of the agreement they had with Facebook. Somewhat more concerning are actions that Facebook and other social media companies take overtly with third parties, or as part of their explicit terms of service.

In June 2018, a New York Times report revealed that partnerships between Facebook and mobile device manufacturers allowed data collection on your Facebook friends, irrespective of whether those friends had allowed data sharing with third parties. This data collection varied by device manufacturer, and most of it was relatively benign. Blackberry, however, seemed to go beyond what most of us expect to be collected when we log in:

Facebook has been known for years to have somewhat creepy partnerships like this. But what about other platforms? Instagram has an interesting paragraph in its terms and conditions:

Do communications include direct messages? How long is this information stored, where, and under what conditions? It could be perfectly secure and anonymized, but it’s difficult to tell because Instagram is a little vague on these points. Companies consistently tell us what they collect, but they don’t always tell us why, or disclose retention conditions, which makes it difficult for a user to make a proper risk assessment before allowing tracking.

Outside of the Facebook family of products, Pinterest does some data sharing that you might not expect:

Kudos to Pinterest for providing clear opt-out instructions.

A reasonable user might not expect that, when consenting to tracking connected with a Pinterest account, they would also be agreeing to offsite tracking. Pinterest does stand out, however, by presenting well-organized and clear information followed by simple opt-out instructions after each section.

What they might be doing

Most platforms that engage in user tracking do so in ways that raise concern, but are not overtly alarming. Abuses we’ve heard about tend to center on the tracking company sharing information with third parties. So what might happen if the wrong third party gains access to this data?

In 2016, a Pro Publica investigation was able to use Facebook ad targeting to create a housing ad that excluded minorities from seeing it. (This probably violates the US Fair Housing Act.) Using user data to discriminate in plausibly deniable ways predates the Internet, but the unprecedented volume of data collected makes schemes by bad actors much more efficient and easy to launch.

A more speculative harm is the use of tracking tags on sensitive websites. In France, a government website providing accurate information on reproductive health services was using a Facebook tracker. A “trusted partner” receiving user metadata, as well as which sections of the site that user clicks on, has the potential to be profoundly invasive. From a risk mitigation perspective, a user with a Facebook account might not have anticipated this sort of tracking when they initially consented to Facebook’s terms of service.

A common counter to complaints regarding user tracking is, “Well, you agreed to their terms, so you should have expected this.” This is arguably applicable to basic metadata collection and targeted ads, but is it reasonable to expect a Facebook user to understand that their off-platform browsing is subject to surveillance as well? User tracking has progressed so far in sophistication that an average user most likely does not have the background necessary to imagine every possible use case for data collection prior to accepting a user agreement.

What you can do about it

If any of the above examples make you uncomfortable, check out how to secure some common social media platforms using internal settings. If you want to implement additional technical solutions, browser extensions like Ghostery and the EFF’s Privacy Badger can prevent trackers from sucking up data you would prefer not to hand over.

Messenger services are a bit harder to transition away from, but not impossible. Signal is a well-regarded messenger app with end-to-end encryption, and a history of respecting user privacy. Alternatively, Wire can provide a more business-oriented alternative, with screen sharing, file sharing, and access role management.

Most important is to stay suspicious when accessing a new platform. No one can mishandle data that you never agree to hand over to begin with. Stay vigilant, stay safe, and enjoy your social media platforms knowing exactly how your data is being used.

The post What does ‘consent to tracking’ really mean? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Sly criminals package ransomware with malicious ransom note

Malwarebytes - Fri, 01/25/2019 - 18:00

Ransomware continues to show signs of evolution. From a simple screen locker to a highly sophisticated data locker, ransomware has now become a mainstream name, even if, historically, it has been around far longer than we care to look back.

The criminals behind ransomware campaigns have been observed refining their approaches, from the “spray and pray” tactic to something akin to wide-beam laser precision, and they are also fine-tuning their targets. They can single out organizations, companies, and industries; they can also hold cities and towns for ransom.

Ransomware has also stepped up in sophistication. Criminals have begun introducing certain forms of hybridization into their attacks: either the ransomware file itself is given capabilities outside of its type (e.g., VirRansom and Zcrypt variants that can infect files), or the entire campaign involves more than one threat vector.

The latest in-the-wild ransomware strain discovered by a group of security researchers known as MalwareHunterTeam (MHT, for short) fits the latter.

Ransomware + phishing: a match made in heaven?

Nothing much is known about this ransomware, which some are already dubbing CryTekk, apart from the way it applies a wily social engineering tactic in its ransom note, potentially to ensure that nearly 100 percent of affected parties act on the infection and pay the ransom. The lure? An additional payment option for affected users who want to retrieve their files but don’t have a cryptocurrency wallet.

The ransom note. (Courtesy of MalwareHunterTeam)

Transcription:

YOUR FILES HAVE BEEN ENCRYPTED!

Dear victim:

Files have been encrypted! And Your computer has been limited!

To unlock your PC you must pay with one of the payment methods provided, we regularly check your activity of your screen and to see if you have paid. Paypal automatically sends us a notification once you’ve paid, But if it doesn’t unlock your PC upon payment contact us (CryTekk@protonmail.com)

 Reference Number: CT-{redacted}

When you pay via BTC, send us an email following your REF Number if your PC doesn’t unencrypt. Once you pay, Your PC will de decrypted. However if you don’t within 14 days we will continue to infect your PC and extract all your data and use it.

Google ‘how to buy/pay with bitcoin’ if you don’t know how. To pay by bitcoin: send $40 to your unique bitcoin address.

34ieoNtVEUpcWeVbuxUWXoyANEBBy22TUb

Clicking the yellow “Buy now” button in the small PayPal option box opens a browser tab to direct users to a phishing page asking for card details:

The first PayPal phishing page asking for card deets. (Courtesy of MalwareHunterTeam)

After supplying the requested information and clicking the “Agree and Confirm” button, users are then directed to another phishing page asking for personal information, which they need to fill in to “confirm” their identities:

The second PayPal phishing page asking for personally identifiable information (PII). (Courtesy of MalwareHunterTeam)

After filling in all the information, clicking the “Agree and Confirm” button points users to a fake confirmation that the user’s account access is fully restored, which is odd because, as far as the user knows, they were paying the ransom, not addressing a problem with their PayPal account. Now, if the user hadn’t already realized that they had been duped twice, at this point they might.

The fake “confirmation” page. (Courtesy of MalwareHunterTeam)

Finally, clicking the “My PayPal” button directs users to the legitimate PayPal login page.

Fool me once, shame on me. Fool me twice…

While ransomware is not as rampant today compared to two years ago, it remains a top threat to consumers and businesses alike. It wouldn’t surprise us at all if the real intent of the criminals behind this campaign is to bank on people’s fear of ransomware to go after their money and credentials.

Files encrypted by this ransomware can be decrypted, as confirmed by MHT’s own Michael Gillespie in a tweet. In fact, within two hours of the initial MHT tweet, Gillespie had already offered to decrypt files for possible victims. This confirms what Bleeping Computer stated about the ransomware code being “nothing special.” It also suggests that the criminals put greater effort into the phishing side of the campaign than into the ransomware itself.

Since most, if not all, ransomware attacks ask for cryptocurrency payment, this attack differentiates itself by offering victims an alternative payment method first, before presenting the Bitcoin option. This leads us to speculate that, although they didn’t say it outright, PayPal is the criminals’ preferred payment method. Also, $40 in Bitcoin in exchange for decrypting files? That’s cheap compared to the amount criminals will be getting from victims once they access their accounts using the swiped credentials.

Regardless of whether we see this as a sophisticated ransomware campaign or a “really dope” attempt at phishing, one thing is clear: They are after your money and credentials, so it pays to know when you’re being phished.

It can be frightening to find oneself face-to-face with a ransomware infection, but let us remain calm and keep our heads together. Remember that criminals want us to feel vulnerable, so be and do the opposite. Scrutinize URLs carefully before you enter your credentials or PII. If you feel that something is amiss, follow your gut and don’t proceed any further. If you think you’re stuck and don’t know what to do next, don’t be afraid to ask for help from someone online or in-person who is savvy enough to guide you.

Stay safe out there!

The post Sly criminals package ransomware with malicious ransom note appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A user’s right to choose: Why Malwarebytes detects Potentially Unwanted Programs (PUPs)

Malwarebytes - Fri, 01/25/2019 - 16:00

Potentially Unwanted Programs (PUPs): the name says it all.

While the programs themselves might have legitimate uses, their vendors often use inappropriate methods to drive downloads or hide within a program bundle. At Malwarebytes, we feel we have an obligation to help protect our customers from PUPs by identifying and detecting them and giving the user the right to choose whether they continue using their services.

It’s worth noting that PUP vendors are unhappy when we detect them. Several, including Enigma, have sued us over the detections. Litigation hasn’t deterred us from continuing to flag software that meets our PUP criteria. Fortunately, a federal court in California agreed that customers should have the ability to decide which software runs on their computers and dismissed Enigma’s initial claims. A copy of the Court’s order dismissing Enigma’s case (Case 5:17-cv-02915-EJD Document 105) may be found online at our press center.

These disputes do not impact the application of our criteria for PUP detections. We continue to identify two Enigma applications, SpyHunter 4 and RegHunter, as PUPs. But another release, SpyHunter 5, changed the application’s behavior so that it no longer fits our PUP criteria. We applaud Enigma for the modifications and hope it’s a permanent change.

We will continue to evaluate software against our guidelines to give our customers the tools to make an informed choice about the software running on their computers. We prefer to give each individual the right to manage their devices. We enable consumers who want PUPs to control that choice while protecting the vast majority of our customers by keeping those programs on our PUP list. We think this is the best possible path for our company and our customers.

Stay safe everyone!

The post A user’s right to choose: Why Malwarebytes detects Potentially Unwanted Programs (PUPs) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

2019 State of Malware report: Trojans and cryptominers dominate threat landscape

Malwarebytes - Wed, 01/23/2019 - 08:01

Each quarter, the Malwarebytes Labs team gathers to share intel, statistics, and analysis of the tactics and techniques made popular by cybercriminals over the previous three months. At the end of the year, we synthesize this data into one all-encompassing report—the State of Malware report—that aims to follow the most important threats, distribution methods, and other trends that shaped the threat landscape.

Our 2019 State of Malware report is here, and it’s a doozy.

In our research, which covers January to November 2018 and compares it against the previous period in 2017, we found that two major malware categories dominated the scene, with cryptominers positively drenching users at the back end of 2017 and into the first half of 2018, and information-stealers in the form of Trojans taking over for the second half of the year.

But that’s not all we discovered.

The 2019 State of Malware report follows the top 10 global threats for consumers and businesses, as well as top threats by region and by corporate industry verticals. In addition, we followed noteworthy distribution techniques for the year, as well as popular scams. Some of our findings include:

  • In 2018, we saw a shift in ransomware attack techniques from malvertising and exploits that deliver ransomware as a payload to targeted, manual attacks. The shotgun approach was replaced with brute force, as witnessed in the most successful SamSam campaigns of the year.
  • Malware authors pivoted in the second half of 2018 to target organizations over consumers, recognizing that the bigger payoff was in making victims out of businesses instead of individuals. Overall business detections of malware rose significantly over the last year (79 percent, to be exact), primarily due to the increase in backdoors, miners, spyware, and information stealers.

  • The fallout from the ShadowBrokers’ leak of NSA exploits in 2017 continued, as cybercriminals used SMB vulnerabilities EternalBlue and EternalRomance to spread dangerous and sophisticated Trojans, such as Emotet and TrickBot. In fact, information stealers were the top consumer and business threat in 2018, as well as the top regional threat for North America, Latin America, and Europe, the Middle East, and Africa (EMEA).

Finally, our Labs team stared into its crystal ball and predicted top trends for 2019. Of particular note are the following:

  • Attacks designed to avoid detection, like soundloggers, will slip into the wild.

  • Artificial Intelligence will be used in the creation of malicious executables.

  • Movements such as Bring Your Own Security (BYOS) to work will grow as trust declines.

  • IoT botnets will come to a device near you.

To learn more about top threats and trends in 2018 and our predictions for 2019, download our report from the link below.

2019 State of Malware Report

The post 2019 State of Malware report: Trojans and cryptominers dominate threat landscape appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Browser push notifications: a feature asking to be abused

Malwarebytes - Tue, 01/22/2019 - 18:03

“I’m seeing a lot of ads popping up in the corner of my screen, and the Malwarebytes scan does not show there is anything wrong. It says my computer is clean. So what’s happening?”

Our support team runs into questions like this regularly, but the volume seems to be increasing lately. In most of these cases, it helps to look at the “Notification permissions” of the browser displaying this annoying behavior. A good cleansing in that department might be just what you need to get rid of those “pop-ups.”

The problem is that the messages users are seeing are not pop-ups at all, but in fact “push notifications,” often referred to as simply “notifications.” We understand that naming them differently doesn’t make them any less annoying. But it does change our classification of such messages.

Some notifications are not simple advertisements, but rather misleading messages about the safety of your computer.

What are these notifications?

From the Mozilla Developer pages:

The Notifications API lets a web page or app send notifications that are displayed outside the page at the system level; this lets web apps send information to a user even if the application is idle or in the background. This article looks at the basics of using this API in your own apps.

What we can learn from this is that the notifications can originate from a website or from an app. We are going to focus on the case where a website is causing the problem. Any app showing you commercial messages outside of a browser window would get detected as adware by Malwarebytes, so these would not escape a scan.

However, website notifications can be displayed outside the browser window. Wait, what’s the difference between notifications and pop-ups again? A pop-up is a new browser window or tab, whereas notifications are more like tooltips. They are messages that are independent from any open websites.
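For the curious: the amount of code a site needs to show one of these messages is tiny. Here is a minimal sketch (the title and body text are our own, purely for illustration):

// Ask the visitor for permission - this triggers the Allow/Block prompt
Notification.requestPermission().then((permission) => {
  if (permission === 'granted') {
    // From now on, this origin may show system-level notifications.
    // Sites that want to reach you while the page is closed pair this
    // with the Push API and a service worker.
    new Notification('Example notification', {
      body: 'This message appears outside the page, at the system level.',
    })
  }
})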

Notifications show the domain from which they originate, so that could clue you in on the answer to another important question, which is:

How did I get them?

To receive browser notifications, a user must have first allowed them. In Firefox, the dialog to allow them looks like this:

While that seems pretty straightforward, there are trickier sites that use a bit of social engineering to get you to allow their notifications.

The website visitors are led to believe that they have to click “Allow“ to see the video. In fact, if they click the “Allow” button, they will be redirected to another website, sometimes asking yet again to allow notifications, but meanwhile their clicking has allowed this site to show them notifications. And, mind you, the site does not have to be open in the browser for the notifications to pop up. As you can see, the fact that you are allowing notifications is a bit less clear in the Chrome prompt than it is in Firefox.

How do I disable them?

There are some options for disabling notifications. You can disable them altogether or you can disable notifications for specific domains, by removing them from your “Allow” list. You can even add them to your “Blocked” list.
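If you are not sure whether the site you currently have open is already on your “Allow” list, you can also check its permission state straight from the browser’s developer console. A one-liner (it only reports the state for the site that is open):

console.log(Notification.permission) // 'default', 'granted', or 'denied'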

For every browser, the notifications look slightly different and the methods to disable them are slightly different as well. To make them easier to find, I have split them up by browser.

Chrome

To completely turn off notifications, even from an extension:

  • Click the three dots button in the upper right-hand corner of the Chrome menu to enter the Settings menu.
  • Scroll down in the Settings menu and click on Advanced.
  • Under Privacy and Security, select Content settings.
  • In this menu, select Notifications.
  • By default, the slider is set to Ask before sending (recommended), but feel free to move it to Block if you wish to block notifications completely.

For more granular control, you can use this menu to manipulate the individual items. Note that the items with a jigsaw puzzle piece are enforced by an extension, so you would have to figure out which extension first and then remove it. But for the ones with the three dots behind them, you can click on the dots to open this context menu:

Selecting Block will move the item to the block list. Selecting Remove will delete the item from the list. It will ask permission to show notifications again if you visit their site (unless you have set the slider to Block).

Shortcut: another way to get into the Notifications menu shown earlier is to click on the gear icon in the notifications themselves.

This will take you directly to the itemized list.

Firefox

To completely turn off notifications in Firefox:

  • Click the three horizontal bars in the upper right-hand corner of the menu bar and select Options in the settings menu.
  • On the left-hand side, select Privacy & Security.
  • Scroll down to the Permissions section and click on the Settings button behind Notifications.

  • In the resulting menu, put a checkmark in the Block new requests asking to allow notifications box at the bottom.

In the same menu, you can apply a more granular control by setting listed items to Block or Allow by using the drop-down menu behind each item.

Opera

Where push notifications are concerned, you can see how closely related Opera and Chrome are.

  • Open the menu by clicking the O in the upper left-hand corner.
  • Click on Settings (on Windows)/Preferences (on Mac).
  • Click on Advanced and select Privacy & security.
  • Under Content settings (desktop)/Site settings (Android), select Notifications.

On Android, you can remove all the items at once or one by one. On desktops, it works exactly the same as it does in Chrome. The same is true for accessing the menu from the notifications themselves. Click the gear icon in the notification, and you will be taken to the Notifications menu.

Edge

To disable web notifications in Windows:

  • Click the Start button in Windows (Windows icon).
  • Select Settings (gear icon).
  • Select System.
  • Select Notifications & actions.
  • Scroll down and select Microsoft Edge in the list of senders.
  • Here, you set the switch for Notifications to Off or change the notification properties.

You can also manage the notifications on a site-by-site basis in Edge:

  • Click the three dots button in the top-right corner and select Settings.
  • Scroll down and click on View advanced settings.
  • Under Notifications, click on Manage.
  • Here, you can switch notifications off for a specific website.

Safari

Launch Safari and go to Safari > Preferences, or press Command-Comma. Click on the Notifications tab. From there, you can manually disable/enable notifications from select sites, remove all notifications, or access your system-wide Notification Preferences.

Are these notifications useful at all?

While we could conceive of some cases where push notifications might be found useful, we would certainly not hold it against you if you decided to disable them altogether.

Web push notifications are not just there to disturb Windows users. Android, Chromebook, macOS, and even Linux users may see them if they use one of the participating browsers: Chrome, Firefox, Opera, Edge, and Safari. In some cases, the browser does not even have to be open for push notifications to be displayed.

Be careful out there and think twice before you click “Allow.”

The post Browser push notifications: a feature asking to be abused appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Dweb: Building Cooperation and Trust into the Web with IPFS

Mozilla Hacks - Wed, 08/29/2018 - 14:43

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

Some projects start small, aiming for incremental improvements. Others start with a grand vision, leapfrogging today’s problems by architecting an idealized world. The InterPlanetary File System (IPFS) is definitely the latter – attempting to replace HTTP entirely, with a network layer that has scale, trust, and anti-DDoS measures all built into the protocol. It’s our pleasure to have an introduction to IPFS today from Kyle Drake, the founder of Neocities, and Marcin Rataj, the creator of IPFS Companion, both on the IPFS team at Protocol Labs. – Dietrich Ayala

IPFS – The InterPlanetary File System

We’re a team of people all over the world working on IPFS, an implementation of the distributed web that seeks to replace HTTP with a new protocol that is powered by individuals on the internet. The goal of IPFS is to “re-decentralize” the web by replacing the location-oriented HTTP with a content-oriented protocol that does not require trust of third parties. This allows for websites and web apps to be “served” by any computer on the internet with IPFS support, without requiring servers to be run by the original content creator. IPFS and the distributed web unmoor information from physical location and singular distribution, ultimately creating a more affordable, equal, available, faster, and less censorable web.

IPFS aims for a “distributed” or “logically decentralized” design. IPFS consists of a network of nodes, which help each other find data using a content hash via a Distributed Hash Table (DHT). The result is that all nodes help find and serve web sites, and even if the original provider of the site goes down, you can still load it as long as one other computer in the network has a copy of it. The web becomes empowered by individuals, rather than depending on the large organizations that can afford to build large content delivery networks and serve a lot of traffic.

The IPFS stack is an abstraction built on top of IPLD and libp2p:

Hello World

We have a reference implementation in Go (go-ipfs) and a constantly improving one in Javascript (js-ipfs). There is also a long list of API clients for other languages.

Thanks to the JS implementation, using IPFS in web development is extremely easy. The following code snippet…

  • Starts an IPFS node
  • Adds some data to IPFS
  • Obtains the Content IDentifier (CID) for it
  • Reads that data back from IPFS using the CID

<script src="https://unpkg.com/ipfs/dist/index.min.js"></script> Open Console (Ctrl+Shift+K) <script> const ipfs = new Ipfs() const data = 'Hello from IPFS, <YOUR NAME HERE>!' // Once the ipfs node is ready ipfs.once('ready', async () => { console.log('IPFS node is ready! Current version: ' + (await ipfs.id()).agentVersion) // convert your data to a Buffer and add it to IPFS console.log('Data to be published: ' + data) const files = await ipfs.files.add(ipfs.types.Buffer.from(data)) // 'hash', known as CID, is a string uniquely addressing the data // and can be used to get it again. 'files' is an array because // 'add' supports multiple additions, but we only added one entry const cid = files[0].hash console.log('Published under CID: ' + cid) // read data back from IPFS: CID is the only identifier you need! const dataFromIpfs = await ipfs.files.cat(cid) console.log('Read back from IPFS: ' + String(dataFromIpfs)) // Compatibility layer: HTTP gateway console.log('Bonus: open at one of public HTTP gateways: https://ipfs.io/ipfs/' + cid) }) </script>

That’s it!

Before diving deeper, let’s answer key questions:

Who else can access it?

Everyone with the CID can access it. Sensitive files should be encrypted before publishing.

How long will this content exist? Under what circumstances will it go away? How does one remove it?

The permanence of content-addressed data in IPFS is intrinsically bound to the active participation of peers interested in providing it to others. It is impossible to remove data from other peers but if no peer is keeping it alive, it will be “forgotten” by the swarm.

The public HTTP gateway will keep the data available for a few hours; if you want to ensure long-term availability, make sure to pin important data at nodes you control. Try IPFS Cluster: a stand-alone application and a CLI client to allocate, replicate, and track pins across a cluster of IPFS daemons.
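If you are running a js-ipfs node like the one in the snippet above, pinning from code is a short call placed inside the same async ‘ready’ callback. A sketch (the pin API has shifted a little between js-ipfs releases, so check the docs for your version):

// Inside the 'ready' callback from the earlier snippet:
// keep the published data alive on this node
await ipfs.pin.add(cid)
console.log('Pinned locally: ' + cid)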

Developer Quick Start

You can experiment with js-ipfs to make simple browser apps. If you want to run an IPFS server you can install go-ipfs, or run a cluster, as we mentioned above.

There is a growing list of examples, and make sure to see the bi-directional file exchange demo built with js-ipfs.

You can add IPFS to the browser by installing the IPFS Companion extension for Firefox.

Learn More

Learn about IPFS concepts by visiting our documentation website at https://docs.ipfs.io.

Readers can participate by improving documentation, visiting https://ipfs.io, developing distributed web apps and sites with IPFS, and exploring and contributing to our git repos and various things built by the community.

A great place to ask questions is our friendly community forum: https://discuss.ipfs.io.
We also have an IRC channel, #ipfs on Freenode (or #freenode_#ipfs:matrix.org on Matrix). Join us!

The post Dweb: Building Cooperation and Trust into the Web with IPFS appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

Dweb: Building a Resilient Web with WebTorrent

Mozilla Hacks - Wed, 08/15/2018 - 14:49

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

The web is healthy when the financial cost of self-expression isn’t a barrier. In this installment of the Dweb series we’ll learn about WebTorrent – an implementation of the BitTorrent protocol that runs in web browsers. This approach to serving files means that websites can scale with as many users as are simultaneously viewing the website – removing the cost of running centralized servers at data centers. The post is written by Feross Aboukhadijeh, the creator of WebTorrent, co-founder of PeerCDN and a prolific NPM module author… 225 modules at last count! –Dietrich Ayala

What is WebTorrent?

WebTorrent is the first torrent client that works in the browser. It’s written completely in JavaScript – the language of the web – and uses WebRTC for true peer-to-peer transport. No browser plugin, extension, or installation is required.

Using open web standards, WebTorrent connects website users together to form a distributed, decentralized browser-to-browser network for efficient file transfer. The more people use a WebTorrent-powered website, the faster and more resilient it becomes.

Architecture

The WebTorrent protocol works just like the BitTorrent protocol, except it uses WebRTC instead of TCP or uTP as the transport protocol.

In order to support WebRTC’s connection model, we made a few changes to the tracker protocol. Therefore, a browser-based WebTorrent client or “web peer” can only connect to other clients that support WebTorrent/WebRTC.

Once peers are connected, the wire protocol used to communicate is exactly the same as in normal BitTorrent. This should make it easy for existing popular torrent clients like Transmission and uTorrent to add support for WebTorrent. Vuze already has support for WebTorrent!

Getting Started

It only takes a few lines of code to download a torrent in the browser!

To start using WebTorrent, simply include the webtorrent.min.js script on your page. You can download the script from the WebTorrent website or link to the CDN copy.

<script src="webtorrent.min.js"></script>

This provides a WebTorrent function on the window object. There is also an npm package available.

var client = new WebTorrent()

// Sintel, a free, Creative Commons movie
var torrentId = 'magnet:...' // Real torrent ids are much longer.

var torrent = client.add(torrentId)

torrent.on('ready', () => {
  // Torrents can contain many files. Let's use the .mp4 file
  var file = torrent.files.find(file => file.name.endsWith('.mp4'))

  // Display the file by adding it to the DOM.
  // Supports video, audio, image files, and more!
  file.appendTo('body')
})

That’s it! Now you’ll see the torrent streaming into a <video> tag in the webpage!
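Seeding your own files from the page is just as short. Here is a sketch that reuses the client created above and assumes an <input type="file"> element exists somewhere on the page:

var input = document.querySelector('input[type=file]')

input.addEventListener('change', function () {
  // Seed whatever the visitor picked and hand out the magnet URI
  client.seed(input.files, function (torrent) {
    console.log('Seeding! Share this magnet URI: ' + torrent.magnetURI)
  })
})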

Learn more

You can learn more at webtorrent.io, or by asking a question in #webtorrent on Freenode IRC or on Gitter. We’re looking for more people who can answer questions and help people with issues on the GitHub issue tracker. If you’re a friendly, helpful person and want an excuse to dig deeper into the torrent protocol or WebRTC, then this is your chance!

The post Dweb: Building a Resilient Web with WebTorrent appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds
