Techie Feeds

Mac App Store apps are stealing user data

Malwarebytes - Fri, 09/07/2018 - 17:08

There is a concerning trend lately in the Mac App Store. Several security researchers have independently found different apps that are collecting sensitive user data and uploading it to servers controlled by the developer. (This is referred to as exfiltrating the data.) Some of this data is actually being sent to Chinese servers, which may not be subject to the same stringent requirements for the storage and protection of personally identifiable information as organizations based in the US or EU.

Adware Doctor

Patrick Wardle has recently posted an article detailing the misbehavior of an app named Adware Doctor, which is exfiltrating the following data:

  • Safari history
  • Chrome history
  • Firefox history
  • A list of all running processes
  • A list of software that you have downloaded and from where

Most of this is data that App Store apps should not be accessing, much less exfiltrating. In the case of the list of running processes, the app had to work around blockages that Apple has in place to prevent such apps from accessing that data. The developers found a loophole that allowed them to access that data despite Apple’s restrictions.

The developer of this app is one that we at Malwarebytes have had our eye on since 2015. At that time, we discovered an app on the App Store named Adware Medic—a direct rip-off of my own highly-successful app of the same name, which became Malwarebytes for Mac. We immediately began detecting this, and contacted Apple about removing the app. It was eventually removed, but was replaced soon after by an identical app named Adware Doctor.

We’ve continued to fight against this app, as well as others made by the same developer, and it has been taken down several times now. But in a continued failure of Apple’s review process, it is always replaced by a new version before long.

Open Any Files: RAR Support

This app came onto our radar late last year. We’ve seen a number of different scam applications like this, which hijack the system’s functionality for handling documents that the user does not have an appropriate app to open, as a means for advertising other products…most often scams. The typical behavior is that, when the user opens an unfamiliar file, this app (and others like it) opens and promotes some antivirus software for scanning the file or the computer, often telling the user that they might be unable to open the file because they are infected.

Interestingly, this software was designed to promote what appeared to be a mainstream antivirus product. This seemed like an abuse of an affiliate program for that product.

It turned out that this app’s behavior was very similar to the current behavior of Adware Doctor. It was uploading a file to the following URL:

This file contained the following data:

  • Complete Safari browsing and search history
  • Complete Chrome browsing and search history
  • Complete Firefox browsing and search history
  • Complete App Store browsing history

We reported this app to Apple in December 2017. It is still present on the App Store.

As we were investigating, we found it very odd that Open Any Files was promoting Dr. Antivirus on the App Store. This led us to investigate Dr. Antivirus, as well as a number of other apps.

(Recently, Open Any Files stopped exfiltrating this data, but we have retained the evidence from our observations.)

Dr. Antivirus

On investigating, we learned that this app, like most Mac App Store apps, is limited in what it can detect to begin with, due to restrictions imposed by the App Store. However, even within the user folder, most antivirus apps in the App Store don’t have a good detection rate, and this one was no exception.

Worse, however, was that we observed the same pattern of data exfiltration as seen in Open Any Files! We saw the same data being collected and uploaded to the same URL used by Open Any Files.

This file, though, contained a bonus. In addition to the browsing history, it also held a file named app.plist, which contained detailed information about every application found on the system. (See a short excerpt from the file below, showing only the information listed for Dr. Antivirus.)

It could be argued that it is useful for antivirus software to collect certain limited browsing history leading up to the detection and blocking of malware or a malicious webpage. But it is very hard to justify exfiltrating the entire browsing history of all installed browsers, regardless of whether the user has encountered malware. In addition, nothing in the app informed the user about this data collection, and there was no way to opt out of it.

Dr. Cleaner

Unfortunately, other apps by the same developer are also collecting this data. We observed the same data being collected by Dr. Cleaner, minus the list of installed applications. There is really no good reason for a “cleaning” app to be collecting this kind of user data, even if the users were informed, which was not the case.

Interestingly, we found that the drcleaner[dot]com website was being used to promote these apps. WHOIS records identified an individual living in China, and having an email address, as being the registered owner of the domain.

What does all this mean?

It’s blindingly obvious at this point that the Mac App Store is not the safe haven of reputable software that Apple wants it to be. I’ve been saying this for several years now, as we’ve been detecting junk software in the App Store for almost as long as I’ve been at Malwarebytes. This is not new information, but these issues reveal a depth to the problem that most people are unaware of.

We’ve reported software like this to Apple for years, via a variety of channels, and there is rarely any immediate effect. In some cases, we’ve seen offending apps removed quickly, although sometimes those same apps have come back quickly (as was the case with Adware Doctor). In other cases, it has taken as long as six months for a reported app to be removed.

In many cases, apps that we have reported are still in the store. Case in point…all of the above.

I strongly encourage you to treat the App Store just like you would any other download location: as potentially dangerous. Be cautious of what you download. A free app from the App Store may seem perfectly innocent and harmless, but if you have to give that app access to any of your data as part of its expected functionality, you can’t know how it will use that data. Worse, even if you don’t give it access, it may find a loophole and get access to sensitive data anyway.

If you download one of these apps and are now regretting it, you can report the app to Apple:

Special thanks

Thanks go to folks who have spent their spare time finding and poking at these applications over the last year: PeterNopSled (from the Malwarebytes forums), @privacyis1st, and Patrick Wardle.

The post Mac App Store apps are stealing user data appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Fortnite’s Google Play rebuff sparks security concerns for Android users

Malwarebytes - Thu, 09/06/2018 - 15:00

There’s been no small outbreak of chaos in mobile land recently, all because of an astonishingly popular game called Fortnite.

Here’s the thing: people refer to Android as an “open platform,” saying that, in theory, you can do what you want with it. In practice, you buy an Android phone and then you’re locked into apps from the Google Play store. You can switch things off to allow external installs, but it’s generally not advisable, as it leaves the gate open to potentially dubious installs.

You can delve into discussions about whether Android is open source or not, but the conversation is a little more complicated and nuanced than simply answering “yes” or “no.”

With all of the above discord thrown into a melting pot and swirled around, Fortnite steps in and rattles a few more cages.

What happened?

The developers, Epic, decided that they’d rather offer the game on mobile outside of Google Play, which drastically increases the amount of revenue not nibbled at by Google. There are multiple potential issues with this:

  • Having children enable the “allow installs from unknown sources” option on an Android device is a recipe for disaster. It not only means many of them will inevitably end up downloading a rogue app by mistake; it also means that those phones are now less secure than the fully locked-down Android devices out there.
  • As pointed out on Twitter, even children with legitimate installs of Fortnite onboard will eventually fall foul of something nasty, because the phone is splashing around in the metaphorical malware mud.
  • Everything comes down to how well promoted the official download link is, and how efficiently the game developers tell people to only grab the game from that one specific link.
  • Epic needs to ensure they don’t fall victim to sophisticated SEO scams pointing links away from their site and toward bad downloads, and also that their site security is top notch. If the page is compromised, a rogue download link might be waiting in the wings.

That’s how the initial landscape looked shortly after Epic’s announcement, and many predicted things would quickly go horribly wrong.

Did things go horribly wrong?

They most certainly did. In the end, it wasn’t even a rogue app causing mayhem, but an issue with Fortnite’s installer that allowed rogue apps already on the device to hijack the installer and have it install their own junkware. The so-called “Man in the Disk” attack looks for apps that don’t lock down external storage as well as they should, and quickly gets to work exploiting what happens under the hood.

The uproar over the installer kerfuffle was rounded off with a bit of a fierce debate on Twitter, because that’s what happens with everything in life now.

What happens next?

Whether they like it or not, Epic are now the standard bearer for “app developer going off range into the (incredibly wealthy and insecure) wilderness.” I don’t believe an Android app has attracted quite this much attention before, and that’s without throwing the no Google Play install angle into the mix.

What they’re also stuck with is the realization that for as long as they continue to remain outside of the Google Play ecosystem, stories will come back to haunt them regarding malware installs masquerading as the real thing, social engineering tricks convincing children to download dodgy Fortnite add-ons from Russian servers, and potential SEO poisoning leading would-be gamers astray.

Google Play certainly isn’t perfect, and plenty of rogue apps have been found lurking there through the years. I think most security professionals would argue it’s still an awful lot riskier to switch off the unknown source install ban than it is to visit Play and grab an app, though.

Let’s also not single out Epic on this one; it’s not just game developers taking tentative steps into the world of unknown installs—even mobile phone providers do it. About four or five years ago, I replaced my phone and took out a package deal with a well-known UK retailer. Part of the deal was “six free games for your Android.” Sounds great, right? Except I quickly realized that to get the games, you had to enable unknown source installs and download the six .APK files directly from the phone provider’s website.

At no point did anyone say anything about how turning off a security feature of the phone I’d just been sold was a bad idea. Nothing in the literature provided mentioned anything beyond, “Wow, turning this off is a really good idea, free games! Wow!” This is also at a time when I was regularly writing about fake Angry Birds/Flappy Bird downloads hosted on Russian websites.

Once installed (via dragging and dropping from desktop to mobile through the magic of USB cables), those fake bird-themed games would typically try and perform premium rate SMS shenanigans. This only worked because some people were running around with unknown source installs permitted, and they’d still have to try and social engineer the ones that weren’t into turning it on.

Unknown installs: so hot right now

Now we’re at a point where unknown source installs are not only mainstream but currently attached to the wheels of an absolute gaming juggernaut. There are serious security issues that Epic needs to consider, and it’s going to be fascinating looking back in six to 12 months and deciding if promoting unknown source installs in this way caused a maelstrom of security headaches from all sides, or a large pile of “absolutely nothing much happened.”

If it’s the latter, you can bet more developers will want to take advantage of this method. Then the threat landscape will become significantly more complicated in mobile land.

The post Fortnite’s Google Play rebuff sparks security concerns for Android users appeared first on Malwarebytes Labs.

Categories: Techie Feeds

When spyware goes mainstream

Malwarebytes - Wed, 09/05/2018 - 15:00

Several terms are used, often interchangeably, to identify a file-based threat that has been around since 1996: spyware. More than two decades later, consumer or commercial spyware has gone mainstream, and the surprising number of software products designed, openly marketed, and used for spying on people is proof of that.

Forget the government, nation-states, private agencies, and law enforcement. Normal, ordinary citizens can now wield powerful surveillance software and use it against any target they wish—all thanks to “legitimate” companies like mSpy, Retina-X, FlexiSpy, Family Orbit, TheTruthSpy, and others. While the spyware they market can be placed in the hands of employers who want to keep tabs on employees in the workplace, or in the hands of parents who want to look after their kids, it can also be placed in the hands of stalkers, abusive partners, or someone who just wants to get a leg up in the divorce proceedings.

Spyware: spotting the signs

Spyware is usually stealthy by nature—but that doesn’t mean its activities, or the effects of its presence on a desktop machine, laptop, or mobile device, can’t be noticed. Below is a rundown of common symptoms that may indicate your computing devices have spyware installed:

Desktop or laptop:

  • Computer or device sluggishness
  • Crashing (when it usually doesn’t)
  • Multiple, unexpected pop-ups
  • Changes in certain browser settings
  • Unusual redirections to sites you haven’t seen or visited
  • Difficulty logging in to secure websites
  • New browser toolbars, widgets, or apps
  • The appearance of random error messages
  • Certain browser hotkeys stop working

Mobile phone or tablet:

  • Battery runs out quicker than normal
  • The device feels warm even when not in use and not charging
  • Increased data usage/Internet activity
  • Clicking, static, echo-y, or distant voices can be heard when on a call
  • Takes a while to shut down
  • Unexplained phone charges, phone calls, and messages
  • Autocorrect features stop working correctly
  • Longer response time
  • For iPhones: Presence of the Cydia app (although there are products now that don’t require a jailbroken iPhone)
  • For iPhones: Request for Apple ID credentials

Read: IoT domestic abuse: What can we do to stop it?

Spying is caring?

While many of us wrinkle our noses in disgust at spyware, some well-intentioned individuals see the good in planting and using such software in the devices of their loved ones. As mentioned earlier, parents (for example) want to stay in touch with their kids who are out and about. Sometimes just knowing where they are when Mom or Dad checks up on them—of course, they aren’t going to pick up the phone—can help them go about their day a little easier.

If you are already considering or using commercial spyware to “keep an eye” on your kids, we suggest you ask yourself the following questions:

Will I be/Am I breaking any laws?

You may well be, depending on where you live and how the software is used:

The states of Iowa and Washington criminalize some forms of spyware.

Even spyware developers have the Software Principles Yielding Better Levels of Consumer Knowledge (or the SPY BLOCK Act), the Securely Protect Yourself Against Cyber Trespass (or the SPY ACT), and the Internet Spyware Prevention Act (or The I-SPY Act) to contend with.

Have I already looked for better alternatives?

Almost every piece of “legitimate” spy software on the market wears the slogan “completely undetectable,” or a variant of it. As we always say, if it sounds too good to be true, it probably is. Not only is spyware often detectable (see symptoms above), it also intrudes on privacy. Instead of installing spyware, look for alternative apps that can help you monitor your loved ones’ locations without snooping on their other stuff, like messages and calls. If you’re an iPhone user, take advantage of Find My Friends. For Android users, you can use Trusted Contacts.

Do I know how these companies treat my target’s information?

“Carelessly” is probably the first word that comes to mind. Just look at the number of breaches that have hit spyware companies in the last 18 months. Not only that, hackers who claim to target these companies consistently state that the data they siphoned from spyware targets isn’t encrypted at all.

How would I feel if I were in their shoes?

Monitoring a loved one isn’t inherently wrong in and of itself, but doing so without their consent is, even if it’s well-intentioned. This is why it’s so essential for all individuals involved to ask for and give consent when it comes to installing monitoring apps on devices. This doesn’t just apply to the parent-child dynamic.

Of course, many parents of pre-teens feel that consent is optional, so they exercise their tough love on the young ones for a little while longer, for their own protection and safety. As long as monitoring doesn’t (and it shouldn’t) replace healthy communication between parent or carer and child, this is fine. Parents of teens, on the other hand, may have to reassess their monitoring practices. Perhaps it’s time they sit down with the kids and talk to them about it.

Spying on someone without them knowing sucks. And when they do find out, even if you mean well, the damage caused by the invasion of privacy and breach of trust could be rather hard to undo.

Whether you think it’s beneficial or not to use spyware doesn’t change the fact that it’s still classified as malware, and malware—regardless of the law—isn’t something that should typically be found installed on computing devices of average users.

Stay safe, everyone!

The post When spyware goes mainstream appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (August 27 – September 2)

Malwarebytes - Mon, 09/03/2018 - 15:00

Last week, we looked at dubious antics in mobile land, examined a peculiar case of spam on the official Cardi B website, and took a deep dive into fileless malware. We also explored the inner workings of Hidden Bee, and gave an explainer of regex.

Other cybersecurity news:

Stay safe, everyone!

The post A week in security (August 27 – September 2) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Explained: regular expression (regex)

Malwarebytes - Fri, 08/31/2018 - 15:00

Regular expression, or “regex” for short, is a term from formal language theory, where such expressions describe regular languages. In computing, regexes are used to search for patterns in files and databases, and their functionality is incorporated into many modern programming languages. Regex search patterns make wildcards look like clumsy clowns because they offer a whole slew of additional options.

Regex overview

The simplest and most common method of searching is to look for a specific string or character in a text file, for example, by using F3 on a website. This is basically what you use when you apply the “Search” or “Search and Replace” functions in Notepad.

Like we said, regex can do a lot more. But to achieve this, a few special characters have to be defined. It is good to know these so-called metacharacters, because syntax errors are the most common cause of failed searches.

The most used special characters are:

Square brackets []

Square brackets are used to specify a character set; the expression matches exactly one character from the set, unless a quantifier specifies otherwise.

Example: Malwareb[yi]tes will be a match for Malwarebytes and Malwarebites, but not for Malwarebyites.

The minus sign -

Inside square brackets, the minus sign or hyphen is used to specify a range of characters.

Example: [0-9] will be a match for any single digit between 0 and 9.

Curly brackets {}

Curly brackets are used to quantify the number of characters.

Example: [0-9]{3} matches any three-digit sequence, from 000 to 999.

Parentheses ()

Parentheses are used to group characters. Matches contain the characters in their exact order.

Example: (are) gives a match for malware, but not for aerial, because the order of the characters differs from the specification.

Pipe |

The vertical bar, also called the pipe, stands for the logical “or” operator.

Example: Most|more will be a match for both of the specified words.

Period .

The dot or period acts as a wildcard. It matches any single character, except line break characters.

Example: Malwareb.tes will be a match for Malwarebytes, Malwarebites, Malwarebotes, and many others, but still not for Malwarebyites.

Backslash \

The backslash is used to escape special characters and to give special meaning to some characters that follow it.

Examples: \d matches a single digit (0–9).

\w matches a single word character (a letter, digit, or underscore).

Asterisk *

The asterisk is a repeater. It matches when the character preceding it matches 0 or more times.

Example: cho*se will match for chose and choose, but also for chse (zero match).

Asterisk and period .*

The asterisk is used in combination with the period to match for any character 0 or more times.

Example: Malware.* will match for Malware, Malwarebytes, and any misspelled version that starts with Malware.

Plus sign +

The plus sign matches when the character preceding + matches 1 or more times.

Example: cho+se will match for chose and choose, but not for chse.

There are quite a few more meta characters, but it is outside the scope of this post to explain them all in detail. For those interested, there are many basic and advanced regex tutorials available. One of them will certainly fit your specific wishes.
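
The examples above can be checked directly with Python’s re module (the same syntax works in most modern regex engines); a quick sketch recapping each metacharacter:

```python
import re

# Square brackets: exactly one character from the set
assert re.fullmatch(r"Malwareb[yi]tes", "Malwarebytes")
assert re.fullmatch(r"Malwareb[yi]tes", "Malwarebites")
assert not re.fullmatch(r"Malwareb[yi]tes", "Malwarebyites")

# Hyphen inside brackets: a range; curly brackets: a repeat count
assert re.fullmatch(r"[0-9]{3}", "042")

# Parentheses group characters in their exact order
assert re.search(r"(are)", "malware")
assert not re.search(r"(are)", "aerial")

# Pipe: logical "or"
assert re.fullmatch(r"Most|more", "more")

# Period: any single character except a line break
assert re.fullmatch(r"Malwareb.tes", "Malwarebotes")

# Backslash escapes: \d is one digit, \w one word character
assert re.fullmatch(r"\d\w", "7x")

# Asterisk: zero or more; plus sign: one or more
assert re.fullmatch(r"cho*se", "chse")       # zero 'o's still match
assert re.fullmatch(r"cho+se", "choose")
assert not re.fullmatch(r"cho+se", "chse")   # '+' needs at least one

print("all pattern examples behave as described")
```

Note that fullmatch anchors the pattern to the whole string, which is the behavior the word-level examples above assume; search, by contrast, looks for the pattern anywhere in the text.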

Responsible use

Sophisticated regexes look intimidating and confusing at first sight, but once you have constructed a few yourself, you will start recognizing what others have tried to accomplish—especially if you take them apart one piece at a time. But we do advise caution when using your own regexes on public-facing servers or apps. An inexperienced publisher could be digging their own grave by doing so.

For most common tasks, there are many examples to be found on code repositories like GitHub. But you will have to choose carefully and ask yourself:

  • Security-wise, is it safe to use in production?
  • Is it well maintained? Does it get updated regularly, or will that become your future task?

“The more contributors, the better” is the rule of thumb here. More contributors mean not only more eyes checking for vulnerabilities, but also more people writing new code and improving existing code.


As in many other programming languages, regex can be used in JavaScript as well. This capability is nice, but also poses a problem that has been known for several years. The first paper mentioning the possibilities of a regular expression denial of service (ReDoS) stems from 2012.

Basically, an attacker can prepare a specially crafted and/or lengthy piece of text and feed it into an input field of a JavaScript-based web server or app. Since JavaScript runs single-threaded, the targeted server or app is busy running its regex functions on the text. While it is doing that, it is unable to perform any other tasks, so the server or app will appear frozen. Other languages can also take a long time to deal with such texts, but if they are multi-threaded, other requests can be handled at the same time and won’t have to wait until the regex functions are done processing the text.

Since it is not hard to figure out (and in some cases well known) which regexes will be applied, it is relatively easy to craft a text that will keep an unprotected server occupied for up to a few minutes.

For example, many servers use Node.js, a JavaScript runtime that has quite a few documented ReDoS vulnerabilities.

In other cases, attackers can search for so-called “evil regexes.” What makes a regex stand out as evil?

  • The regular expression applies repetition (“+”, “*”) to a complex subexpression.
  • For the repeated subexpression, there exists a match that is also a suffix of another valid match.
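
A textbook example meeting both criteria is (a+)+$: the repetition applies to the complex subexpression a+, and any run of a’s matched by one iteration is also a suffix of a longer valid match. A small Python sketch (the backtracking behavior is the same in JavaScript) shows how quickly the failure case blows up:

```python
import re
import time

# An "evil" regex: repetition applied to a subexpression (a+) whose
# matches overlap, so on a failing input the engine tries every way
# of splitting the run of 'a's before giving up.
evil = re.compile(r"(a+)+$")

for n in (10, 14, 18):
    text = "a" * n + "b"   # the trailing 'b' guarantees overall failure
    start = time.perf_counter()
    assert evil.match(text) is None
    print(f"n={n}: {time.perf_counter() - start:.4f}s")  # grows exponentially with n
```

On a matching input ("a" * n without the trailing "b") the same pattern returns instantly; it is the exhaustive backtracking on failure that an attacker exploits.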

Prevention of ReDoS attacks

To prevent becoming a victim of a ReDoS attack, it is not enough to rely on the built-in security of the regex. Here are some tips:

  • Use atomic grouping in your regex. An atomic group is a group that, when the regex engine exits from it, automatically throws away all backtracking positions remembered by any tokens inside the group.
  • Keep tabs on your regexes. When a regex takes much longer than it should, kill it at once. You can inform the user that it was stopped for this reason, as a security measure.
  • Validate your input, and don’t allow users to use their own regexes. If there is no other way, then pre-format the regexes and only allow certain minimal deviations.
  • Only write your own regexes for production servers and apps if there are no other known reliable sources available.
  • Use one of the verification packages that are available for regexes to have your regex checked for vulnerabilities.
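
As a concrete illustration of these tips (a Python sketch; the same idea applies in JavaScript), the evil (a+)+$ pattern from above can be rewritten into the equivalent but unambiguous a+$, which fails in linear time:

```python
import re
import time

# (a+)+$ and a+$ accept exactly the same strings, but the rewritten
# pattern is unambiguous: on failure it backs off in linear time
# instead of exploring exponentially many splits of the 'a' run.
evil = re.compile(r"(a+)+$")
safe = re.compile(r"a+$")

text = "a" * 20 + "b"

start = time.perf_counter()
assert safe.match(text) is None
print(f"safe pattern: {time.perf_counter() - start:.6f}s")

start = time.perf_counter()
assert evil.match(text) is None
print(f"evil pattern: {time.perf_counter() - start:.6f}s")
```

Where the engine supports it (for example, Python 3.11+ or .NET), an atomic group such as (?>a+)+$ achieves the same effect by forbidding backtracking into the group.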

Popular does not equal safe

Even though Node.js is an immensely popular JavaScript runtime, it is not enough to rely on the security it provides. And even though regexes can be useful tools, using them should come with some precautions. Reportedly, there has been an uptick in web apps and servers that have come under ReDoS attacks lately.


Understanding ReDoS Attack

JavaScript Web Apps and Servers Vulnerable to ReDoS Attacks

How a RegEx can bring your Node.js service down

Stay safe!

The post Explained: regular expression (regex) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Reversing malware in a custom format: Hidden Bee elements

Malwarebytes - Thu, 08/30/2018 - 15:41

Malware can be made of many components. Often, we encounter macros and scripts that work as malicious downloaders. Some functionalities can also be achieved by position-independent code—so-called shellcode. But when it comes to more complex elements or core modules, we almost take it for granted that they will be PE files, the native Windows executable format.

The reason for this is simple: It is much easier to provide complex functionality within a PE file than within shellcode. The PE format has a well-defined structure, allowing for much more flexibility. We have headers that define which imports should be loaded and where, as well as how relocations should be applied. This is the default format generated when we compile applications for Windows, and its structure is then used by the Windows loader to load and execute our application. Even when malware authors write custom loaders, they are mostly for the PE format.

However, sometimes we find exceptions. Last time, when we analyzed payloads related to Hidden Bee (dropped by the Underminer exploit kit), we noticed something unusual. There were two payloads dropped that didn’t follow the PE format. Yet, their structure looked well organized and more complex than we usually encounter dealing with pieces of shellcode. We decided to take a closer look and discovered that the authors of this malware actually created their own executable format, following a consistent structure.


The first payload: b3eb576e02849218867caefaa0412ccd (with .wasm extension, imitating Web Assembly) is a loader, downloading and unpacking a Cabinet file:

The second payload: 11310b509f8bf86daa5577758e9d1eb5, unpacked from the Cabinet:

We can see at first that, in contrast to most shellcode, it does not start with code but with headers. Comparing both modules, we can see that the header has the same structure in both cases.


We took a closer look to decipher the meaning of particular fields in the header.

The first DWORD: 0x10000301 is the same in both. We didn’t find this number corresponding to any of the pieces within the module, so we assume it is a magic number that serves as an identifier of this format.

Next, two WORDs are offsets to elements related to loading the imports. The first one (0x18) points to the list of DLLs. The second block (0x60) looks more mysterious at first. Its meaning can be understood when we load the module in IDA. We can see the cross-references to those fields:

We see that they are used as IAT—they are supposed to be filled with the addresses to the imported functions:

The next value is a DWORD (0x2A62). If we follow it in IDA, we see that it leads to the beginning of a new function:

This function is not referenced by any other functions so we can suspect that it is the program’s Entry Point.

The meaning of the next value (0x509C) is easy to guess because it is the same as the size of the full module.

Then, we have the last two DWORDs of the header. The second DWORD (0x4D78) leads to the structure that is very similar to the PE’s relocations. We can guess that it must be a relocation table of the module, and the previous DWORD specifies its size.

This is how we were able to reconstruct the full header:

typedef struct {
    DWORD magic;
    WORD dll_list;
    WORD iat;
    DWORD ep;
    DWORD mod_size;
    DWORD relocs_size;
    DWORD relocs;
} t_bee_hdr;

Imports

As we know from the header, the list of DLLs starts at offset 0x18. We can see that each DLL name is prepended with a number:

The numbers do not correspond to a DLL name: in two different modules, the same DLL had different numbers assigned. But if we sum all the numbers, we find that the total is the same as the number of DWORDs in the IAT. So, we can make an educated guess that these numbers specify how many functions will be imported from each DLL.

We can describe it as the following structure (where the name’s length is not specified):

typedef struct {
    WORD func_count;
    char name;
} t_dll_name;

Then, the IAT comes as a list of DWORDs:

It is common in malware that when the function’s names are not given as an explicit string, they are imported by checksum. The same is done in this case. Guessing the appropriate function that was used for calculating the checksum can be more difficult. Fortunately, we found it in the loader component:

DWORD checksum(char *func_name)
{
    DWORD result = 0x1505;
    while ( *func_name )
        result = *func_name++ + 33 * result;
    return result;
}

Knowing that, we paired the appropriate checksums with the function names:

Once the address of the function is retrieved, it is stored in the IAT in place of the checksum.
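
For reference, the checksum above is the well-known djb2 string hash seeded with 0x1505. A direct Python port (truncating to 32 bits, as the C DWORD arithmetic does implicitly) lets us brute-force the pairing by hashing candidate export names:

```python
def checksum(func_name: str) -> int:
    """Python port of the loader's hash: djb2 seeded with 0x1505,
    truncated to 32 bits like the C DWORD arithmetic."""
    result = 0x1505
    for ch in func_name:
        result = (ord(ch) + 33 * result) & 0xFFFFFFFF
    return result

# Example export names only; in practice, hash every export of the
# target DLL and look the module's IAT checksums up in this table.
candidates = ["LoadLibraryA", "GetProcAddress", "VirtualAlloc"]
table = {checksum(name): name for name in candidates}
for h, name in table.items():
    print(f"{h:#010x} -> {name}")
```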


Relocations

The relocation table is simple: it consists of a list of DWORDs identifying the offsets of the places in the code to which the base address of the loaded module should be added. Without relocations applied, the module will crash (so it is not position-independent like a typical shellcode).

Comparison to PE format

While the PE format is complex, with a variety of headers, this one contains only essentials. Most of the information that is usually stored in a PE header is completely omitted here.

You can see a PE format visualized by Ange Albertini here.

Compare it with the visualization of the currently analyzed format:

Static analysis

We can load this code into IDA as a blob of raw code; however, we will be missing important information. Because the file doesn't follow the PE structure and its import table is non-standard, we will have a hard time understanding which API calls are being made at which offsets. To solve this problem, I made a tool that resolves the hashes into function names and generates a TAG file marking the offsets where each function's address will be filled in.

Those tags can be loaded into IDA using an IFL plugin:

Having all the API functions tagged, it is much easier to understand which actions are performed by the module. Here, for example, we can see that it will be establishing a connection with the C2 server:

Dynamic analysis

This format is custom, so it is not supported by typical analysis tools. However, having understood it, we can write our own tools, such as a parser for the headers and a loader that runs modules in this format so they can be analyzed dynamically.

In contrast to PE, the module doesn't have any sections, so we need to load it into one continuous memory region with RWX (read-write-execute) access. Walking through the relocation list, we add the base at which the module was loaded to each of the listed addresses. Then, we resolve the imported functions by their hashes and fill their addresses into the IAT thunks. After preparing the stage, the loader just needs to jump to the entry point of the module. We load the prepared loader under a debugger and follow it to the entry point of the loaded module.

Simple but rare

The elements described here are pretty simple—they serve as a first stage of the full malware package, downloading other pieces and injecting them into processes. However, what makes them interesting is the fact that their authors have shown some creativity and decided to invent a custom format that is less complex than a full-fledged PE, but goes a step further than a typical piece of shellcode.

Such a module, in contrast to independent shellcode, is not self-sufficient and cannot be loaded in a trivial way; it must be parsed first. Given that the format is custom, it is not supported by existing tools. This is where programming skills come in handy for a malware analyst.

Fortunately, fully custom formats are rather uncommon in the malware world; usually, authors rely heavily on existing formats, from time to time corrupting or customizing selected parts of PE headers.

The post Reversing malware in a custom format: Hidden Bee elements appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Fileless malware: getting the lowdown on this insidious threat

Malwarebytes - Wed, 08/29/2018 - 16:48

Traditionally, malware attacks as we have always known them involve files written to disk in one form or another that require execution in order to carry out their malicious goals. Fileless malware, on the other hand, is intended to be memory resident only, ideally leaving no trace after its execution. The malicious payload exists dynamically and purely in RAM, which means nothing is ever written directly to the hard drive.

The purpose of all this for the attacker is to make post-infection forensics difficult. In addition, this form of attack makes it nearly impossible for antivirus signatures to trigger a detection. In some specific cases, as with SamSam, the only way to even retrieve a sample to analyze would be to catch the attack happening live. This is one of the biggest challenges when dealing with fileless malware.

Fileless malware: the series

In this series of articles, we will discuss the technical details of all types of fileless malware and their related attacks in depth. We will start with a brief overview of the problems with, and general features of, fileless malware, laying the groundwork for specific in-depth technical analysis of various samples employing fileless methods. Finally, we will finish off with some theoretical fileless attacks and conditions.

Evolution of fileless attacks

Before continuing, it would be beneficial to take a look at an article we wrote a couple of years ago, covering the basics of fileless attacks.

Now, fileless attacks are not necessarily a new thing, as we saw memory-resident malware in the wild over 15 years ago. One example is the Lehigh Virus, which “fills an unused portion of the host file’s code in its stack space, causing no increase in the host’s size. It can infect another COMMAND.COM file if a DOS disk is inserted while the virus is in memory.”

However, as time has gone on, the techniques and tools used to carry out these attacks have become more and more advanced. In fact, this evolution is what has earned fileless malware so much attention from security experts in recent years.

The progression over the years has been interesting:

  • As mentioned with Lehigh, the virus infector actually contained the malicious code and the payload running in memory.
  • Going forward in time, malware such as Poweliks used a “NULL” Run key in the registry (which makes the content invisible) to run JavaScript, and used PowerShell to run an encoded script hidden elsewhere in the registry. Essentially, the payload was stored in the registry, and was retrieved, decoded, and executed only at runtime.
  • Modern-day exploit kits, such as Magnitude EK, can stream the payload and have it executed without dropping it on disk first.
  • Malware like DNSMessenger retrieves the malicious PowerShell script from a C2 server.
  • SamSam writes an encrypted malware payload to disk, which is only decrypted when the attacker manually runs a script that supplies the decryption password.
The problem with fileless malware

When a SOC team or an in-house security engineer monitors a company’s network and receives an alert of suspicious activity during threat hunting, they can only hope that there is traditional, file-based malware involved. Why? It’s much easier for them to be able to track what damage has been done, as well as the scope of the attack.

Having traces of file-based malware on the network gives the SOC team a definitive starting point to review. These files allow engineers to trace the malware’s origins and usually give a clear idea of how the network was breached. Whether it was via an email linking to a malicious download or via a website compromise, having the file history provides a clean timeline that ultimately makes the job that much easier. Even further, having the binary allows SOC teams to analyze the code and see exactly what went on, and which systems and data were targeted.

I like to compare fileless malware to a manual attack that a hacker might carry out after gaining direct access to a remote machine. Fileless malware is, in many ways, identical to the manual hacker approach, but instead of requiring the attacker to poke around the remote victim by hand, fileless malware can be executed automatically. In many cases, the exact same tools used by manual hackers are used by fileless malware.

For example, an attack using PowerShell script execution uses built-in Windows tools to perform malicious activity. Since tools like PowerShell are typically whitelisted (as they are used on a daily basis for non-malicious activity), both manual attackers and fileless malware have a free tool at their disposal to carry out the attack. This makes malicious activity much more difficult to trace for a SOC team. There is no file to trace the history of. The security engineer must now look at other artifacts and logged events to try and form a conclusion. Whether it was a fileless attack or a manual breach, it leaves the security engineer with the same problem.


Now, there are ways to help fight this. I continue to call it a problem because handling and preventing these types of attacks is an ongoing process. The difference between benign PowerShell usage and malicious usage can sometimes be minute. An untrained eye may look at PowerShell’s execution and activity and not realize it is malicious at all.

Alternatively, the exact opposite often occurs: the observer may look at a benign activity log and think it is suspicious. The fact that discerning between these two conditions is tough even for security experts is why this problem is incredibly difficult to solve with modern technology. These types of attacks create a scenario where the solution is not an exact science; the problem is ongoing, and advancements continue to be made.

As I mentioned before, some of these attacks create a scenario of “living off the land” in that they leverage built-in Windows tools. Because of this, one method to prevent this attack would be to identify the threat at its point of delivery, before it gets on the system.

Let me elaborate a bit. In the example of the fileless/in-browser exploitation, it is important to first try and block the attack before it begins. This is why Malwarebytes developed technology like exploit mitigation in its software programs. Monitoring memory and examining the chain of execution is a big first step in being able to generically block these attacks from taking place. Our exploit mitigation tech has historically proven to be effective against this type of attack. However, even if the code is able to infect the system, a good endpoint protection solution can identify the anomalous activity, track down hidden code, and remove it from the system, which will disrupt the malware’s ability to restart after reboot.

As I explained, the fight is ongoing, which is why a big part of making sure these threats cannot do damage is also the responsibility of the SOC team themselves. Patching, enabling logging, and enforcing access control are necessary precautions. Upkeep can sometimes make all the difference as the first and final line of defense, and will also help speed up reaction time in the event of a compromise.

Future series: fileless and semi-fileless types

The focus of the rest of this series will be fileless malware types and attacks from roughly the last five years. The techniques used by this group are the most common, and studying them provides solid preparation and useful knowledge for dealing with any future fileless malware attacks.

In addition, we will look at not only pure fileless malware, but also some semi-fileless attacks. For example, Kovter uses non-ASCII characters to create unreadable registry keys that hold obfuscated JavaScript. In this case, the malicious script technically does exist on the disk as registry data; however, it is not a file on disk in the traditional sense.

SamSam is another type of malware that I consider semi-fileless, in that after obtaining the files involved in the attack, you still do not have enough information to analyze the payload. While there are some files on disk, such as loaders and encrypted payloads, the payload is inaccessible unless you intercept the script that started the whole chain of events.

Tune in for part two of this series, where we will cover the technical details of the various tactics and techniques used in modern-day fileless attacks, drawing reference to samples that use these techniques in the wild.

The post Fileless malware: getting the lowdown on this insidious threat appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Dweb: Building Cooperation and Trust into the Web with IPFS

Mozilla Hacks - Wed, 08/29/2018 - 14:43

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

Some projects start small, aiming for incremental improvements. Others start with a grand vision, leapfrogging today’s problems by architecting an idealized world. The InterPlanetary File System (IPFS) is definitely the latter – attempting to replace HTTP entirely, with a network layer that has scale, trust, and anti-DDoS measures all built into the protocol. It’s our pleasure to have an introduction to IPFS today from Kyle Drake, the founder of Neocities, and Marcin Rataj, the creator of IPFS Companion, both on the IPFS team at Protocol Labs. -Dietrich Ayala

IPFS – The InterPlanetary File System

We’re a team of people all over the world working on IPFS, an implementation of the distributed web that seeks to replace HTTP with a new protocol that is powered by individuals on the internet. The goal of IPFS is to “re-decentralize” the web by replacing the location-oriented HTTP with a content-oriented protocol that does not require trust of third parties. This allows for websites and web apps to be “served” by any computer on the internet with IPFS support, without requiring servers to be run by the original content creator. IPFS and the distributed web unmoor information from physical location and singular distribution, ultimately creating a more affordable, equal, available, faster, and less censorable web.

IPFS aims for a “distributed” or “logically decentralized” design. IPFS consists of a network of nodes, which help each other find data using a content hash via a Distributed Hash Table (DHT). The result is that all nodes help find and serve web sites, and even if the original provider of the site goes down, you can still load it as long as one other computer in the network has a copy of it. The web becomes empowered by individuals, rather than depending on the large organizations that can afford to build large content delivery networks and serve a lot of traffic.

The IPFS stack is an abstraction built on top of IPLD and libp2p:

Hello World

We have a reference implementation in Go (go-ipfs) and a constantly improving one in Javascript (js-ipfs). There is also a long list of API clients for other languages.

Thanks to the JS implementation, using IPFS in web development is extremely easy. The following code snippet…

  • Starts an IPFS node
  • Adds some data to IPFS
  • Obtains the Content IDentifier (CID) for it
  • Reads that data back from IPFS using the CID

<script src=""></script>

Open Console (Ctrl+Shift+K)

<script>
const ipfs = new Ipfs()
const data = 'Hello from IPFS, <YOUR NAME HERE>!'
// Once the ipfs node is ready
ipfs.once('ready', async () => {
  console.log('IPFS node is ready! Current version: ' + (await ipfs.version()).version)
  // convert your data to a Buffer and add it to IPFS
  console.log('Data to be published: ' + data)
  const files = await ipfs.files.add(ipfs.types.Buffer.from(data))
  // 'hash', known as CID, is a string uniquely addressing the data
  // and can be used to get it again. 'files' is an array because
  // 'add' supports multiple additions, but we only added one entry
  const cid = files[0].hash
  console.log('Published under CID: ' + cid)
  // read data back from IPFS: CID is the only identifier you need!
  const dataFromIpfs = await ipfs.files.cat(cid)
  console.log('Read back from IPFS: ' + String(dataFromIpfs))
  // Compatibility layer: HTTP gateway
  console.log('Bonus: open at one of public HTTP gateways: ' + cid)
})
</script>

That’s it!

Before diving deeper, let’s answer key questions:

Who else can access it?

Everyone with the CID can access it. Sensitive files should be encrypted before publishing.

How long will this content exist? Under what circumstances will it go away? How does one remove it?

The permanence of content-addressed data in IPFS is intrinsically bound to the active participation of peers interested in providing it to others. It is impossible to remove data from other peers, but if no peer is keeping it alive, it will be “forgotten” by the swarm.

The public HTTP gateway will keep the data available for a few hours — if you want to ensure long-term availability, make sure to pin important data at nodes you control. Try IPFS Cluster: a stand-alone application and a CLI client to allocate, replicate, and track pins across a cluster of IPFS daemons.

Developer Quick Start

You can experiment with js-ipfs to make simple browser apps. If you want to run an IPFS server you can install go-ipfs, or run a cluster, as we mentioned above.

There is a growing list of examples, and make sure to see the bi-directional file exchange demo built with js-ipfs.

You can add IPFS to the browser by installing the IPFS Companion extension for Firefox.

Learn More

Learn about IPFS concepts by visiting our documentation website at

Readers can participate by improving documentation, visiting, developing distributed web apps and sites with IPFS, and exploring and contributing to our git repos and various things built by the community.

A great place to ask questions is our friendly community forum:
We also have an IRC channel, #ipfs on Freenode (or on Matrix). Join us!

The post Dweb: Building Cooperation and Trust into the Web with IPFS appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

Official Cardi B website plagued by spammers

Malwarebytes - Tue, 08/28/2018 - 15:00

We come bearing tidings of proper website maintenance and general housekeeping for singer Cardi B (or rather, for her web development team). At first glance, it appeared as though her website had been hacked a few days ago. But a look under the hood told a different story.

We were surprised to see the following lurking on the official Cardi B website:


Ignore the privacy policy pop-up. Websites can’t get enough of those these days, thanks to GDPR. No, what we’re talking about is the peculiar blast of messed up spam text all over the page. Had it been compromised? Or was something else to blame?


Things certainly didn’t look great. Even worse for the singer, the front page of her site was touting similar spammy vids:


I could be wrong, but I don’t think her fans are particularly interested in clickthroughs to fake movie streams and a football match involving Stoke City and Wigan Athletic. The spam links also found their way onto the photos page:


Those are definitely photos, but not so much of a singer singing. What happened here?

It seems the site allows people to sign up as registered users, then post comments. Somewhere along the line, this feature has attracted the ire of spammers who figured out a way to not only plaster individual pages with spam links, but also feed said spam onto various main sections of the site as a whole.

We’ve posted at length regarding the correct treatment of user-posted comments, and we’ve also taken a look at how things can go wrong with plugins and third-party tools. When it comes to our own site, we keep a sharp eye on spam, moderate comments, and close comments sections after a certain amount of time. With the amount of junk floating around the web, you can’t afford to be lax where keeping a tidy online presence is concerned.

While the rogue pages in question seem to have been taken down, simply searching for the Cardi B website in Google reveals the damage done to the site’s search results:


Spammy results such as the above can take a long time to filter out of search engines, and it isn’t great to have things like that sitting at the top of the searches alongside legitimate results.


There’s been a cleanup since Cardi B fans started talking about it on social media. Though you can still access the login page for existing user accounts on the site, it looks as though new sign-ups have been disabled so the site admins can bring everything back under control.


While a spam outbreak is never good, especially when it spills onto your home page, it appears the scammers had nothing but spam in mind—so no malware links were forthcoming. What was in evidence, however, was any number of cookie-cutter links to video streaming sites and YouTube clips.


With so many links spammed, and tedious work to be done to check each one individually, there’s no way to guarantee final destinations were entirely free from harm. If you think you might have ended up on something other than a YouTube video or movie sign-up page via any of these links, then it’s a good idea to run some anti-malware scans on your PC and ensure you’re clean.

As for Cardi B, hopefully the site admins will be able to keep a lid on the kind of spam outbreaks they’ve experienced over the last couple of days. Social features for users of your site are great, but those services need to be balanced with tight moderation and a limit on where said features can take you—even if it is Stoke City versus Wigan Athletic.

The post Official Cardi B website plagued by spammers appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (August 20 – 26)

Malwarebytes - Mon, 08/27/2018 - 17:06

Last week on Labs, we took a look at insider threats, doubled back on the privacy of search browser extensions, profiled green card scams, revisited Defcon badgelife, and talked about what happens to a user’s accounts when they die.

Other cybersecurity news
  • There was an archiving error in Twitch HQ. Unfortunately, that left some private user messages (even those with sensitive info in them) exposed to the public for a time. (Source: Sophos’ Naked Security Blog)
  • Researchers from Catholic University found that apps offering ad blocking and privacy can be bypassed. (Source: Sophos’ Naked Security Blog)
  • Researchers associated with Project Insecurity found a flaw in disability services in Canadian telcos. (Source: Kaspersky’s Threatpost)
  • Facebook continued to clean house, removing more pages of campaigns that originated from Iran and Russia to curb “coordinated inauthentic behavior.” (Source: Facebook Newsroom)
  • A computer science professor at Vanderbilt University published a 55-page study on how Google continues to collect data on users, even when the device is idle. (Source: The Washington Post)
  • Philips revealed that their cardiovascular imaging devices have a flaw that could provide a low-level hacker “improper privilege management.” (Source: ZDNet)
  • Videomaker service provider Animoto was breached. (Source: TechCrunch)
  • Ryuk, a new ransomware, trained its crosshairs on large organizations capable of paying high-value ransoms in Bitcoin. (Source: ZDNet)
  • North Korea’s The Lazarus Group pushed out its first Mac malware and successfully infiltrated IT systems of a cryptocurrency exchange platform based in Asia. (Source: Bleeping Computer)
  • Superdrug, the popular health and beauty retailer based in the UK, was breached. (Source: InfoSecurity Magazine)
  • Cobalt Dickens, a campaign that originated in Iran, targeted universities in 14 countries to steal credentials. (Source: SecureWorks)
  • Hackers make millions by selling unpublished press releases. (Source: The Verge)

Stay safe, everyone!

The post A week in security (August 20 – 26) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Green card scams: preying on the desperate

Malwarebytes - Fri, 08/24/2018 - 15:00

Thanks to @nullcookies for providing leads.

Most online scams depend on two things for success: a broken or otherwise onerous process to deal with a legitimate entity, and a desperate target population. With immigration, there are many, many burdensome processes to navigate, and most applicants involved are at least somewhat desperate due to costs and lengthy time expenditures. The result is an environment ripe for green card scams.

Looks real, but came from a scam site

The site in question (which is, in fact, none of the things its name suggests) is a great example of how borrowing the symbolism and language of legitimate authorities, combined with limited authentic communications from those authorities, can create an environment ripe for scamming.

The site is professionally designed, down to a fake logo that approximates the US State Department logo as closely as legally possible. There are multiple urgent calls to action, with red “Apply Today” buttons on most pages, and dire warnings of what can happen to you if your application is entered too late. But scrolling down to the bottom, we see the following:

Which reads:

USA Green Card Office is not affiliated with the U.S. Government or any government agency. You can enter the U.S. Diversity Visa Lottery for Free at in between their open registration dates which typically start in early October 2018. We are not a law firm, we do not provide legal advice, and are not a substitute for an attorney. This site provides a review and submission service that requires a fee.

So not only are they not affiliated with the US government, they’re not attorneys, and therefore probably know nothing about immigration law and cannot provide meaningful help with any green card issues.

Passive DNS on the site doesn’t reveal much, except the additional sites usa-dvprogram[.]info and us-dvprogram[.]info. Stepping backwards to the last IP resolution shows the following:

After finding little of interest in the scam infrastructure, we decided to register as a prospective immigrant and see what services were on offer.

After paying $129 for the privilege of surrendering some personal information, we promptly got a “verification call” from a man with a South Asian accent. We asked repeatedly about the process, when our application would be forwarded to the relevant officials, and how to move forward. The operator responded with a hard sell to “upgrade” our application for multiple chances to win. (This is not how the real lottery works.)

At no time were we provided any information on the real process, nor did the operator disclose at all what his company would do for us. Based on our experience with the call, the provider does not offer any services whatsoever, but will gladly take both money and significant amounts of personal data. As a scam overall, we rate it as a B-.

A question that sometimes arises with these sorts of scams amongst defenders is often, “Who could possibly fall for that?” The answer is typically, “probably you.”  Let’s look at why.

Below is the real green card lottery site at

Unlike the scam site, the real one provides essentially no information on what the lottery is or how to apply. Signifiers of authenticity are limited to a small logo on the top left. There is no guidance on how to get further information.

By contrast, the scam site provides the basics on what the lottery is, some brief application statistics, and has large, prominent branding all over the site. If you, a prospective applicant, were presented with both sites, which one would feel more authentic? Which one would you choose if you had limited financial resources and could only apply once? Which would feel more accommodating if you had limited English skills?

What’s happening with this scam site and the U.S. Department of State site above is quite similar to what we see with legitimate tech support and tech support scammers. An official entity does a poor job communicating with its constituency, and that creates a vacuum that scammers are all too eager to fill. So while there are concrete steps that an end user can take to stay safe from this sort of thing (see here), large companies and government agencies shoulder a share of the blame as well.

Rather than dismissing the individual for falling for the scam, a more viable solution for security personnel is to collaborate across the company to make sure your corporate communications don’t leave room for scammers to exploit. Does your marketing newsletter look like a scam? Do your support staffers authenticate themselves upon request? Can they verify third parties that work with you? These are all solvable problems that can prevent at least a portion of users from being victimized.

The post Green card scams: preying on the desperate appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Can search extensions keep your searches private?

Malwarebytes - Thu, 08/23/2018 - 15:00

One of the most common things most of us do on the Internet is search, whether we are looking up the price of the latest gadget or we need to find the address of that great restaurant recommended by a friend. The dizzying number of Google search queries per second (more than 40,000, on average) tells us there is plenty of money to be made by advertising in search results.

It’s not just big names in the search industry who are aware of this fact. Others want a piece of the pie, too. But what can they hope to accomplish when their budget is nowhere near that of the marquee players, and one of their prospective competitors has managed to turn its brand name into a verb?

The only thing that makes sense in this scenario is to offer something that others don’t. And with recent data breaches, online tracking, targeted advertising, and other privacy-threatening events all leaving us worried about our online privacy, some smart developers have created browser extensions that promise to keep prying eyes away from our searches.

We have noticed quite a few new names in this fledgling industry. In fact, some of them are so similar in their advertising, wording, coding, and use of images, that there is no other explanation besides their developers deciding power lies in numbers—of extensions, brand names, and domain names. And they’re all doing, or rather not doing, the same thing in an attempt to make the cash register ring.

In case you were wondering whether any of these are worth the time it takes to install them, the short answer is no.


To investigate this trend that we’ve been watching since summer 2017, we looked at 25 extensions that advertise that they offer more privacy during searches. One of the first things we noticed was that over half of these extensions were so alike, we classified them as a single family.

Our generic detection name for smaller variants belonging to this family is PUP.Optional.SearchAlgo.Generic. It’s named after the domain this family uses to route its searches. As far as I can tell, they all end up displaying Yahoo Search results, but this isn’t hardcoded into the extension, so the redirect is probably decided on-the-fly by the code on the servers. That would make it easier for them to switch in case they get a better offer than the one from Yahoo Search.


We have looked into a few of the top results found while searching for private search extensions, and found several nefarious or questionable similarities. It may come as no surprise that all of these extensions, not just the one from the “searchalgo” family, have been added to our detections as potentially unwanted programs (PUPs). Here’s a breakdown of what we found:

Protocol: While a few of the extensions actually use the https protocol to conduct their searches, most of them do not. This immediately leaves us wanting more privacy when we hit the search button. Using the https protocol would at least make eavesdropping harder.

Results: The split between extensions that display results on a site of their own and those that simply redirect us to Yahoo Search is about fifty-fifty.

Code: We looked at the code of the extensions to see if developers were paying attention to the privacy of the search or search results. We found no trace of any such code.

Browsers: Most of the extensions we found were only available for Chrome. A few were intended for Firefox. This is probably due to the much bigger market share for Chrome at the moment.

The technical details

Looking at the code of one of the major families, we can see that this is the main search routine:


In case you got your hopes up when you spotted the word “encode,” the encodeURIComponent() function encodes a Uniform Resource Identifier (URI) component by replacing each instance of certain characters by one, two, three, or four escape sequences representing the UTF-8 encoding of the character. This is only used to ensure that certain special characters, like backslashes, don’t get read as code. So, no privacy enhancement there.

As mentioned before, one of the larger families in this category uses its own domain to redirect searches through to the most profitable established search engine.

Judging by the results, the most profitable option for the extension authors must be Yahoo Search. Others fetch results from a popular search engine and add their own header and a “few” advertisements to earn money.

Extra functionality

Some of these search extensions also promise extra functionality. We have seen variants that promise to be specialized in:

  • Music
  • Movies
  • Games
  • Downloads

And usually, when you visit the domains listed as the origin of the extension in the web store, you will find that they advertise these specialized search extensions, but not their privacy-enhancing ones.


We did find that some of these extensions pre-date the rise of the privacy search extensions, but they still use the same code, images, and search domains. For example, one of them has been around since late 2015 and to date still uses the searchalgo search domain.

Is it possible that they just changed the marketing scheme and not the underlying code?

Online privacy

Of course, we appreciate people’s desire for more online privacy. But for those tempted by the promise of enhanced privacy during online searches, we have some better alternatives:

  • First of all, you should have a look at this blog post about interest-based advertising and what you can do about it.
  • Also, we recommend using a less limited tool to block tracking. There are many that block tracking on every site you visit, not just during searches.
  • Or, you can anonymize your Internet traffic by using a VPN.

Stopping advertisements

One of the side-effects of all the “privacy search” extensions we looked at was the extra influx of advertisements. If you want to put a stop to those, whether they are targeted or not, you really should have a look at this post on blocking ads, as well as this one about which ad blockers you might want to use and how to install them.

The long answer

Even though the publisher(s) of these extensions are trying to tell us that there is privacy to be gained during your online searches, we are of the opinion that there are many better ways to achieve that level of privacy than to install these extensions. We didn’t have time to find and examine every extension that promises to keep your searches private, but we have reason to believe that the majority of them are more interested in their own revenue than in your privacy. We would advise you to consider one of the other options with a more wide-reaching impact on your privacy, such as VPNs, anti-tracking tools, or other measures against interest-based advertising.

The post Can search extensions keep your searches private? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Badgelife: A Defcon 26 retrospective

Malwarebytes - Wed, 08/22/2018 - 16:03

One more year gone, one more Defcon completed.

Defcon is the longest-running security conference in existence and one that I have been attending since Defcon 18. It is an opportunity to see and interact in real life with industry peers that would forever remain a digital persona otherwise. It is the place where you hear about the newest attack techniques, the coolest hacks, and the most spectacular security failures. A giant melting pot of hackers, security professionals, various three-letter agency employees, lawyers, students, black hats, grey hats, white hats, IT admins, help desk warriors, journalists, activists, reversers, cypherpunks, scary pentesting voodoo red team experts, and stoic blue team defenders.

Defcon is the conference of conferences. There’s even a LineCon, consisting of the impromptu discussions that take place while waiting to register or waiting to get into a room to see a presentation. And let’s not forget HallCon, where you strike up a conversation with random strangers and never, not once, have them roll their eyes when you start talking about security.

Villages, such as the LockPick village, exist where volunteers demonstrate just how illusory the protection provided by a physical lock really is. Then there are various hardware hacking villages, where routers, Wi-Fi repeaters, or anything containing a small computer is picked apart. Soldering irons abound, and disassembling is encouraged. Warranties are gleefully broken and tamper mechanisms are ignored or defeated in an undetectable manner. There’s the car hacking village, drone hacking, the social engineering events. The list goes on and on in a cornucopia of coolness.

And let’s not forget the swag. Oh the swagiest of swag! Epic t-shirts, cool and weird stickers, army backpacks with a bajillion pockets, personalized hotel cards, challenge coins, and the crown jewel of them all…The coveted unofficial electronic badges.

Defcon has the best badges—in part out of necessity, I theorize. How do you combat counterfeit badges when the vast majority of your attendees know about plastic card printers, have a passing familiarity with photo editing software, and perhaps a flexible moral code?

An example of an early Defcon badge. (Photo acquired on the Internet)

You step up your game. Early examples were embossed, then made of laser-cut plexiglass, and even metal! Very soon, functionality was thrown into the mix. It started slowly, with blinking LEDs, and rapidly progressed. As badges started including crypto challenges, greater and greater functionality was added. The rationale behind this enhancement was to foster collaboration between attendees with different skill sets when attempting to solve the puzzles contained within.

As badge functionality grew, enterprising conference attendees started modifying them. The Defcon 16 badge included a “TV-B-GONE” function, to the great chagrin of Las Vegas restaurant and sports bar owners. A Defcon 17 attendee even added a Breathalyzer to his badge.

Official Defcon badges of yesteryears.

Eventually, the Defcon organizers settled into a cadence. One year was a crypto challenge with an artistic style of badge; the alternating year an electronic one. This was probably a logistical decision, as the electronic badges became more and more intricate, requiring longer and longer development time due to their complexity.

Around this time, Defcon attendees witnessed the birth and rise of unofficial Defcon badges. Built by attendees, these unofficial badges became the most sought-after objects to wear around your neck: a prestigious status symbol, confirming your “leet-ness.” A visual confirmation that you had the guile necessary to acquire them. You knew the right people, or had the skills to create your own.

Unofficial Defcon badges, including: the Whiskey Pirates badge, a MK1 Bender badge, the Ides of Defcon, and a VoidDC24 badge.

Defcon 26 saw a veritable explosion of unofficial badges, as more and more groups of enterprising con attendees started making their own badges with a dizzying array of features. Here is a selection of unofficial badges acquired this year.

A DC801 badge, a Furcon Badge, a Fale badge, a Linecon2018 badge, and an LHC badge.

With the explosion of unofficial badges, a standard was developed known as the “SAO.” This standard allowed for add-on mini badges that were much easier to make and gave less experienced badge makers the opportunity to get their feet wet. These mini badges also allowed for much brisker badge trading, as they tended to be simpler in design and scope.

A custom red-eyed pickle Rick SAO made by @reanimationxp @tr_h and @ssldemon

A selection of the SAO mini badges acquired through trade, beverage exchange, or monetary transactions.

All of these are but a small sampling of what was available. The project I was involved with was the Defcon Drone badge (Hi Bl1n7!), and our team frantically flashed badge operating systems and assembled kits into the late hours of the night. I got to learn about the Arduino IDE as I flashed the base firmware onto the Kickstarter-pledged badge packages. I also took the opportunity to hone my soldering skills and repair electronics. The suite where all these activities took place was most thoroughly equipped with microscopes, soldering stations, classic sci-fi movies in the background, and a bevy of delicious snacks!

Defcon is what you make of it, and this year I elected to make it all about the badge life. You can find out more about badgelife here, courtesy of Hackaday.

The post Badgelife: A Defcon 26 retrospective appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The digital entropy of death: BSides Manchester

Malwarebytes - Tue, 08/21/2018 - 15:58

Last week, I gave a talk at BSides Manchester based on a previous blog series for Malwarebytes Labs called “The digital entropy of death.”

What do you do when a relative or close friend dies, leaving all of their digital accounts lying around for anyone to break into and make use of? Which companies have provisions in place for being able to “claim” said accounts, offering the ability to lock them down, download selected data, or just purge the profiles from their systems? Do you have a system in place for relatives to take control of your online presence should the worst happen? Are they aware of all the micro-transactions and direct debits going out of your account?

The follow up to my blog on what to do with your online accounts when you die covered the endlessly shifting sands of the net itself, exploring the world of link rot.

It’s all too easy to envisage a future where all the actual content has faded from the net, and all that’s left is dormant profile accounts. We even have a peculiar ecosystem that’s sprung up around sites going offline/removing old content to make space for new articles. Even the links still online may switch expected content around and cause headaches for people trying to access them. What lurks at the other end of a URL you thought you could trust?

All of this and more was covered in the presentation, and you can watch the full recording below.

The various sections of the presentation (roughly 45 minutes in length) are as follows:

* Statistics: How many deaths happen per year, and how many online accounts do we have?
* Rezzing: The three main ways of bringing back a dormant account.
* Digital assets and deficits: Exploring the types of wills available, retaining ownership of accounts post-death, and the notion of inbuilt DRM timers based on life expectancy.
* The DIY approach: How people try to keep control of accounts belonging to loved ones.
* Companies dealing with death: How do the big players in the web space deal with this issue?
* Link rot: What happens when the web starts falling to bits? Which businesses are looking to make some money from it, and how?

This is a subject covered online fairly frequently, but perhaps awareness of said articles isn’t as widespread as it could be. Since giving the talk, I’ve had a lot of positive feedback alongside plenty of examples of “this horrible thing happened to our family.” It’s definitely something that can and will spring up unannounced, and it’s up to us to ensure that the incredibly stressful impact of death is lessened as much as possible.

Nobody wants to be stuck messing around with online accounts, much less hunting around for them, when someone has died, but it’s increasingly becoming something we need to think about. Thankfully where a chasm exists between the digital wants and needs of a grieving family and specific laws in their region of the world that don’t (or can’t) account for digital conundrums, more and more companies are stepping up to offer their own solutions to this incredibly difficult problem.

The post The digital entropy of death: BSides Manchester appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (August 13 – August 19)

Malwarebytes - Mon, 08/20/2018 - 17:33

Last week on Malwarebytes Labs, we talked about how Process Doppelgänging meets Process Hollowing in the Osiris dropper, provided hints, tips, and links for a safer school year, gave a recap of Black Hat USA 2018, offered some tips for a secure content management system, highlighted a silly snail-mail scamming attempt, and provided insight in why money, power, and ego drive hackers to cybercrime.

Other news
  • Walmart gains patent to eavesdrop on shoppers and employees in stores. (Source: CNet)
  • FBI warns of “unlimited” ATM cashout (Source: Krebs on Security)
  • Caesars Palace not-so-Praetorian guards intimidate DEF CON goers with searches. (Source: Ars Technica)
  • Researchers discovered a way to hack Echo smart speakers. (Source: Techspot)
  • Researchers have found another serious security flaw in computer chips designed by Intel. (Source: BBC)
  • Victims lose access to thousands of photos as Instagram hack spreads. (Source: ThreatPost)
  • Web cache poisoning just got real: How to fling evil code at victims. (Source: The Register)
  • Canadian telcos patch vulnerability in TRS systems. (Source: BleepingComputer)
  • Philips vulnerability exposes sensitive cardiac patient information. (Source: Threatpost)
  • Fortnite login credentials sold on the dark web for cheap. (Source: SCMagazine)

Stay safe, everyone!

The post A week in security (August 13 – August 19) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

The enemy is us: a look at insider threats

Malwarebytes - Mon, 08/20/2018 - 16:42

They can go undetected for years. They do their questionable deeds in the background. And, at times, one wonders if they’re doing more harm than good.

Although this sounds like we’re describing some sophisticated PUP you haven’t heard of, we’re not.

These are the known attributes of insider threats.

Insider threats are one of a handful of non-digital threats troubling organizations of all sizes to date. And—to bang on the hype—the danger they pose is real.

Where companies once thought that risks to their high-value assets could only come from outside actors, they are slowly realizing that they also face potential dangers from within. The worst part is that no one can tell who the culprits are until the damage is done.

In the Osterman Research white paper entitled White Hat, Black Hat and the Emergence of the Gray Hat: The True Costs of Cybercrime, insider threats were found to account for a quarter of the eight serious cybersecurity risks that significantly affect the private and public sectors. To put it another way, an organization’s current and former employees, third-party vendors, contractors, business associates, office cleaning staff, and other entities who have physical or digital access to company resources, critical systems, and networks are collectively ranked in the same list as ransomware, spear phishing, and nation-state attacks.

The majority of insiders who have caused their employers a headache didn’t necessarily have technical backgrounds. In fact, they didn’t have the desire or the inclination to do something malicious against their company to begin with. In the 2016 Cost of Insider Threats [PDF], a benchmark study conducted by the Ponemon Institute, a significant percentage of insider incidents within companies in the United States was not caused by criminal insiders but by negligent staff members.

This finding remains consistent with the 2018 Cost of Insider Threats [PDF], where coverage also includes organizations in the Asia-Pacific region, Europe, Africa, and the Middle East. The insider’s general lack of attention and misuse of access privileges, coupled with little-to-no cybersecurity awareness and training, are some of the reasons why they’re dangerous.

Understanding insider threats

Many have already described what an insider threat is, but none as inclusive and encompassing as the meaning put forward by the CERT Insider Threat Center, a research arm of Carnegie Mellon University’s Software Engineering Institute (SEI). They have defined an insider threat as:

…the potential for individuals who have or had authorized access to an organization’s assets to use their access, either maliciously or unintentionally, to act in a way that could negatively affect the organization.

From this definition, we can classify insiders into two main categories: the intentional and the unintentional. Within those categories, we’ve described the five known types of insider threats to date. They are as follows:

Intentional insiders

They knowingly do harm to the organization, its assets, resources, properties, and people.

The malicious insider 

This type has several names, including rogue agent and turncoat. Perhaps its main differentiation from the professional insider (described below) is that insiders of this type did not start off with malicious intent. Some disgruntled employees, for example, may decide to retaliate against a company they perceive has wronged them by compromising its network: planting malware, deleting company files, stealing proprietary intellectual property to sell, or even withholding essential accounts and data for ransom.

In certain circumstances, employees go rogue because they want to help their home country. Such is the case of Greg Chung, who was found guilty of supplying China with proprietary military and spacecraft intel during his tenure at Rockwell and Boeing, stealing nearly three decades’ worth of top-secret documents. The number of boxes of files retrieved from his home was not disclosed, but we can assume it to be in the hundreds.

Employees who are coerced or forced to perform malicious acts on behalf of one or more entities also fall under this type.

The professional insider

This type is usually referred to as a spy or mole in an organization. They enter an organization generally as employees or contractors with the intent to steal, compromise, sabotage, and/or damage assets and the integrity of the company. They can either be funded and directed by nation states or private organizations—usually a competitor of the target company.

When the Jacobs Letter was made public, a 37-page allegation penned by former Uber employee Ric Jacobs, it seemed that the civil suit between Google and Uber was no longer your usual intellectual property theft case. In this letter, Jacobs claimed that Uber ex-CEO Travis Kalanick was the mastermind behind the theft, with Anthony Levandowski as the actor. Although this allegation has yet to be substantiated, Levandowski would fit this type if it proves true.

The violent insider

Acts that negatively impact organizations don’t start or end in the abuse, misuse, and theft of non-physical assets. They can also include threats of a violent nature. Peopleware is as essential as the software and hardware an organization uses, if not even more crucial. So, what negatively affects employees in turn affects the organization, too.

Therefore, it’s imperative that organizations also identify, mitigate, and protect their staff from potential physical threats, especially those that are born from within. The CERT Insider Threat Center recognizes workplace violence (WPV) as another type of insider threat, and we categorized it under intentional insiders.

WPV is defined as violence or threat of violence against employees and/or themselves. This can manifest in the form of physical attacks, threatening or intimidating behavior and speech (written, verbal, or electronically transmitted), harassment, or other acts that can potentially put people at risk.

This author hopes that CERT and/or other organizations looking into insider threats expand their definition to include workplace bullying, domestic violence (e.g. when an abusive partner comes after his/her abused partner in the workplace), and other actions that put employee safety at risk or negatively impact their emotional and psychological well-being.

Read: Of weasels, snakes, and queen bees

Insider Threat Researcher Tracy Cassidy of CERT has identified [PDF] the following indicators that an employee may fall under this type:

  • History of violence
  • Legal problems
  • Loss of significant other
  • Conflict with supervisor
  • Potential loss of employment
  • Increased drinking
  • Concerning web searches

In 2015, Vester L. Flanagan II (aka Bryce Williams) shot and killed two of his former colleagues at WDBJ7, a local TV station in Roanoke, Virginia, during a live interview. Flanagan later posted a clip of the shooting on Facebook and on Twitter, claiming that his victims wronged him.

Two years after the Flanagan incident, Randy Stair was posting troubling videos and messages on Twitter about his plot to kill his co-workers at the Weis supermarket in Pennsylvania. No one was entirely sure of his motive, but investigations revealed that he disliked his manager and was showing signs of extreme loneliness days before the incident.

Unintentional insiders

They have no ill intent or malice towards their employer, but their actions, inactions, and behavior sometimes cause harm to the organization, its assets, resources, properties, and people.

The accidental insider

They are also called the oblivious, naïve, or careless insiders. This type is perhaps the most overlooked and underestimated regarding their potential risk and damage to organizations. Yet, multiple studies confirm that accidental insiders account for a majority of the significant breaches that make headlines. Insiders under this type are relatively common.

Incidents, like unknowingly or inadvertently clicking a link in an email message of dubious origin, accidentally posting or leaking information online, improperly disposing sensitive documents, and misplacing company-owned assets (e.g., smartphones, CDs, USBs, laptops), even if they only happen once, may not seem like a big deal. But these actions increase an organization’s exposure to risk that could lead to its compromise.

Here’s an example of an accidental insider’s potential for damage: A publicly-accessible Amazon Web Service (AWS) account was used by hijackers to mine cryptocurrencies. Security researchers from Redlock investigated the matter and found misconfigurations in the AWS server. This gave hijackers access to credentials that could allow anyone to open the S3 buckets where sensitive information was stored. It turned out that the account belonged to someone at Tesla, so the researchers alerted them of the breach.

The negligent insider

Employees under this type are generally familiar with the organization’s security policies and the risks involved if they’re ignored. However, they look for ways to avoid them anyway, especially if they feel such policies limit their ability to do their work.

A data analyst working for the Department of Veterans Affairs downloaded and took home the personal data of 26.5 million US military veterans. Not only was this a violation of the department’s policies, but the analyst was also not authorized to do this. The analyst’s home was then burglarized, and the laptop was stolen. The data included names, social security numbers, and dates of birth.

Steps to controlling insider incidents

While cybersecurity education and awareness are initiatives that every organization must invest in, there are times when these are simply not enough. Such initiatives may decrease the likelihood of accidental insider incidents, but not for negligence-based incidents, professional insiders, or other sophisticated attack campaigns. Organizations must implement controls and use software to minimize insider threat incidents. That said, organizations must also continue to drive education and awareness, as well as provide professional and emotional support for employees to mitigate potential damage from accidental, malicious, or violent insiders.

Get executive support. As more and more organizations realize the risks insider threats pose, it also becomes easier to get executive buy-in on the idea of lessening insider threat incidents happening in the workplace. Gather and use information about incidents that occurred within the organization (especially those the C-suite may not even be aware of) before pitching the idea of creating an insider threat program.

Build a team. If an organization employs thousands, it would be ideal to have a team that exclusively handles the insider threat program. Members must track, oversee, investigate, and document cases or incidents of insider threats. This team must comprise a multidisciplinary membership from security, IT security, HR, legal, communication, and other departments. If possible, the organization should also bring in outside help, such as a workplace violence consultant, a mental health professional, and even someone from law enforcement, to act as external advisors to the team.

Identify risks. Threats of insiders vary from one industry to another. It is vital that organizations identify what threats they are exposed to within their industry before they can come up with a plan for how to detect and mitigate them.

Update existing policies. This is assuming that the organization already has a security or cybersecurity policy established. If not, creating one now is essential. It’s also important for the team to create a plan or process for how they should respond to incidents of insider threats, especially on reports of workplace violence. The team should always remember that there is no one-size-fits-all approach to addressing insider threat incidents of a similar nature.

Implement controls. An organization that has little-to-no controls isn’t secure at all. In fact, they are low-hanging fruit for external and internal actors. Controls keep an organization’s system, network, and assets safe. They also minimize the risk of insider threats. Below are some controls organizations may want to consider adopting. (Again, doing so should be based on their risk assessment):

  • Block harmful activity. This includes the accessing of particular websites or the downloading and installation of certain programs.
  • Whitelist applications. This includes file types of email attachments employees can open.
  • Disable USB drives, CD drives, and web-based email programs.
  • Minimize accessibility of certain data. Organizations should focus on this, too, when it comes to their telework or remote workers.

Read: How to secure your remote workers

  • Provide the least level of access to privileged users.
  • Place flags on old credentials. Former employees may attempt to use the credentials they used when they were still employed.
  • Create an employee termination process.

Install software. Many organizations may not realize that software helps in nipping insider threats in the bud. Below is a list of some programs the organization may want to consider using:

  • User activity monitoring software
  • Predictive/data analytics software (for looking for patterns collected from employee interactions within the organization’s network)
  • Security information and event management (SIEM) software
  • Log management software
  • Intrusion detection (IDS) and prevention (IDP) software
  • Virtual machines (for a safe environment to detonate or open potentially harmful files)
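As a toy illustration of what user activity monitoring and log management software do at their core, here is a minimal sketch that flags authentication events made with the credentials of terminated employees, tying the software list back to the “place flags on old credentials” control above. The event format and usernames are hypothetical; real products ingest far richer data.

```javascript
// Minimal sketch, not a real SIEM: flag logins that use credentials
// belonging to terminated employees. Usernames and the event shape
// are made up for illustration.
const terminated = new Set(["jdoe", "asmith"]);

function flagStaleLogins(authEvents) {
  // Each event is { user, timestamp }; keep only events whose user
  // appears on the termination list.
  return authEvents.filter((event) => terminated.has(event.user));
}

const events = [
  { user: "jdoe", timestamp: "2018-08-20T03:12:00Z" },
  { user: "mgarcia", timestamp: "2018-08-20T09:30:00Z" },
];

// Only jdoe's login is flagged for review.
console.log(flagStaleLogins(events));
```

A production deployment would of course correlate many more signals (source IP, time of day, privilege level), but the principle is the same: codify the policy once, then let the software watch every event.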

It’s important to note that while software, controls, and policies designed to aid organizations in stopping insider risks are in place, insider threat incidents may never be eradicated entirely. Furthermore, predicting insider threats is not easy.

“To be able to predict when an insider maliciously wants to harm an organization, to defraud them, to steal something from them—it’s really hard with the technology alone to identify someone who is doing something with malicious intent,” said Randy Trzeciak, director of Carnegie Mellon University’s CERT program, in an interview with SearchSecurity.Com. “You really do need to combine the behavioral aspects of what might motivate somebody to defraud an organization, or to steal intellectual property, or to sabotage a network or system, which is usually outside of the control of what a traditional IT department is and what they do to prevent or detect malicious activity by insiders.”


The post The enemy is us: a look at insider threats appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Liar, liar, pants on fire! Barclays phish claims cards explode

Malwarebytes - Fri, 08/17/2018 - 16:00

We feel compelled to relay the dire warning from this Barclays snail-mail letter, which we acquired through social media, therefore it must be true.

Warning: Barclays debit cards may catch fire!

The letter reads as follows:

Dear costumer,

Many of our bank costumers have reported that their debit cards have caught fire while they are in wallets and purses, and so as a precushion we are issuing an URGENT safety recall. This is a matter of the uppermost emergency as your card could create a pocket fire at any given moment, burning your legs and stomach terribly. This is because of a fault in the factory process at our debit card factory in Molton Keynes.

Therefore, for your own safety and verification, please complete the bottom of this form, and return it with your debit card to the safety manager at the following address:

Mr Smith
Barclays Debit Card Factory
187 Bangalore Lane

IMPORTANT: The PIN number is for verification porpuses only and will destroyed immediately upon a rival. Your private details will not be compromised at any time.

Indeed, Barclays’ costumers, who outfit their execs in fancy leggings and wigs, have reported an uptick of credit card explosions resulting in terrible burns across the legs and stomach, though mysteriously, private parts have been left in tact. The problem reportedly stems from a fault at the debit card factory located in Molton Keynes, England—or was it Bangalore, India?

Barclays announced a recall via snail mail instead of email in order to more quickly expedite these URGENT safety measures. They also requested that customers return their debit cards via snail mail—where they most certainly will not explode—as a safety precushion, which is not to be mistaken for a post cushion. For verification porpuses, the bank has asked that the associated PIN number be recited as whale mating calls. The notice makes clear that the information will be destroyed immediately upon a rival, which means JPMorgan Chase might be feeling the burn real soon.

We reached out to renowned security philosopher and close personal friend Reverend Rob (a title he earned by way of a few hard-earned dollars and clicks on the mouse) to get better insight as to how this might have happened. He told us:

When EMV security chips are near power sources such as running microwaves, the electromagnetic radiation can be amplified, resulting in an increase of temperature, which leads to a chemical change in the petroleum used in the plastic of the card, which anonymous sources close to those familiar with the matter say could make cards containing these chips prone to explosion.

This extremely dangerous product design admittedly caught many of the Malwarebytes researchers off-guard, as most had always assumed the safety of the combination of petroleum-based plastics and embedded EMV microchips. This malfeasance caused us to miss the mark entirely, resulting in the near spontaneous combustion of our devoted customers costumers.

I, on the other hand, saw this coming long ago. That is why I pushed the Malwarebytes Pocket Lint idea in the last all-hands meeting. The plan was to install a customized ROM on the EMV chip itself, which would force all electromagnetic communications through a sophisticated sandbox running in the virtual memory of the card. This would cast a virtual net around all electromagnetic transmissions and offer a virtual splosion-shield, which can mitigate fire and explosion susceptibility. Had the company only rushed this product to market, then maybe we could have saved those poor souls who were lost in the legendary debit card fire epidemic of 2018.

If any of the shared Malwarebytes and Barclays customers out there should wish to redeem vulnerable cards, but do not feel secure in sending the information to India, then we welcome you to send it to us. Be sure to include the card and especially the PIN number. We’ll verify the integrity of the card by ordering lots of beer, pizza, and plenty of Bitcoin.

The post Liar, liar, pants on fire! Barclays phish claims cards explode appeared first on Malwarebytes Labs.

Categories: Techie Feeds

How to secure your content management system

Malwarebytes - Thu, 08/16/2018 - 15:00

Suppose you want to start your own blog or set up a website where you can easily manage its content, the way it looks, and how often it changes. What you need is a content management system (CMS).

WordPress, Drupal, and Joomla are some of the most popular content management systems, used by professionals and amateurs alike. All three are open-source CMSes, meaning anyone can inspect, modify, and enhance their source code.

Open-source software is a double-edged sword. On one side, it gives people the option to adapt the software to their specific needs and preferences, and everyone can see what it does behind the curtain. On the other side, those with bad intentions can study and probe the publicly available source code until they find a bug, weakness, flaw, or feature they can abuse.

Who uses content management systems?

CMSes aren’t just used by individuals, but companies as well. Many companies, from small outfits to large enterprises, use a CMS in some form. They are ideal for blogs or other publishing outlets that change and update content often, but can also be used for any site that needs to add new information on a regular basis. Basically, they are used for website creation and management.

The advantages of using a well-known, open-source CMS are clear. They include:

  • A well-documented system that’s easy to install and adapt
  • A simple UI/UX design for easy management
  • Suitability for different kinds of content
  • Regular updates
  • Built-in logging (of visitors, users, and link backs)

Also, the availability of many plug-ins and themes with a wide variety of extra functionality, looks, and layouts for the CMS can be considered an advantage—most of the time. As long as they don’t introduce vulnerabilities or even backdoors into the system, they can be a welcome addition.

Update, update quickly, and don’t forget to patch

When using a CMS, and especially a popular one, keep an eye out for updates. Apply them at your earliest convenience, and do so immediately if an update patches a vulnerability that has been published. Website hijackers make sure they are aware of the latest vulnerabilities and go after any site that hasn't yet been patched.
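To give a sense of what "update with urgency" means in practice, here is a small sketch (hypothetical, not tied to any particular CMS's API) that flags an install as overdue by comparing dotted version strings:

```javascript
// Compare two dotted version strings, e.g. "4.9.1" vs "5.0.0".
// Returns true when `installed` is older than `latest`.
function needsUpdate (installed, latest) {
  var a = installed.split('.').map(Number)
  var b = latest.split('.').map(Number)
  for (var i = 0; i < Math.max(a.length, b.length); i++) {
    var x = a[i] || 0 // treat missing components as zero
    var y = b[i] || 0
    if (x !== y) return x < y
  }
  return false
}

console.log(needsUpdate('4.9.1', '5.0.0')) // true: an update is overdue
console.log(needsUpdate('5.0.0', '5.0.0')) // false: already current
```

A real monitoring job would fetch the latest release number from the vendor's update feed and alert (or auto-update) whenever this check fires.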

Also read: A look into Drupalgeddon’s client-side attacks

One problem with customized CMS installations is that you have to be careful during the updating process. Depending on the customizations you made, an update can break functionality you added to the system. You also have to make sure your changes don't leave unpatched the very vulnerability the update was supposed to fix. What helps prevent these problems is making your changes in separate program modules rather than in the main modules of the CMS, because those get updated regularly.


While it is customary to create a backup before you apply updates or other important changes, it is also recommended to create regular backups—weekly or even daily, depending on how often you post to the CMS.

A weekly backup is adequate for many websites.

If you should find out that some intruder made changes to your website, it is good to have a recent backup that you can restore without losing too much work, and without having to comb through every piece of code to check whether anything else has been tampered with. Delete the files of the compromised CMS before restoring the backup to make sure that none of the attacker's files are left behind. Remember that when attackers gained access, they also gained the ability to change your CMS and run their own code on your server.

Security by obscurity

We are not saying that using a less popular CMS is better. Trying to maintain security through keeping a low profile should never be a goal in itself. If businesses happen to like a less popular CMS, they should still make sure that it is secure and maintained in a way that is acceptable for their organization.

If you have the resources and in-house knowledge to build your own system and keep it secure, good for you. If it pays off for your company to have a custom-built CMS, then you will have a specialized partner that can keep things under control and be able to solve any problems you find with it. Even better. But for most companies, it is so much easier to use one of the popular content management systems. If this is true for you, you will also be responsible for keeping it updated and secure.

CMS security in a nutshell

There are a few obvious and easy-to-remember rules to keep in mind if you want to use a CMS without compromising your security. They are as follows:

  • Choose your CMS with both functionality and security in mind.
  • Choose your plug-ins wisely.
  • Update with urgency.
  • Keep track of the changes to your site and their source code.
  • Use secure passwords (or 2FA).
  • Give the user permissions (and their levels of access) a lot of thought.
  • Be wary of SQL injection.
  • If you allow uploads, limit the type of files to non-executables and monitor them closely.

For websites that require even more security, there are specialized vulnerability scanners and application firewalls that you may want to look into. This is especially true if you are a popular target for people that would love to deface or abuse your website.

If the CMS is hosted on your own servers, be aware of the dangers that this setup brings. Remember that you are relying on open-source code. Running it on your own servers should be met with special precautions to keep it separated from other work servers.

Secure your CMS to avoid introducing vulnerabilities into the heart of your home or business network. By doing this, you make the web a safer place!

The post How to secure your content management system appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Black Hat USA 2018: ransomware is still the star

Malwarebytes - Wed, 08/15/2018 - 16:00

The Malwarebytes team was at the annual Black Hat USA event held in Las Vegas at the Mandalay Bay Hotel from August 4–9. Large crowds walked through the expo floor, attended talks, and participated in trainings.

Among the many topics discussed, ransomware came up as one of the main issues that both consumers and businesses face. While it has been slowing down from previous years, ransomware remains a threat, in particular to enterprise users with more targeted and costly attacks. For instance, the SamSam ransomware, which we featured in our Black Hat booth, continues to hit large targets and cause a lot of financial damage.

Helge Husemann presenting about the impact of SamSam ransomware

The number one threat that has taken over, and which was a recurrent theme during Black Hat, was malicious cryptominers. While not as crippling as ransomware, miners play the long game and can affect businesses in different ways, including wearing down machines faster and impacting worker productivity. Miners, along with other threats such as banking Trojans, are propagated via many possible attack vectors, including phishing, backdoors, malspam, malvertising, drive-by mining, and more.

Top attack vectors for the 2018 threat landscape

Both Helge Husemann and Dana Torgersen, our product marketing managers, kept visitors to our booth informed about the latest threats, while our sales engineers performed demos showcasing our products.

Cameron Naghdi doing a demo of our solutions

Finally, we took part in a panel with fellow security researchers, discussing some of the hottest topics of the moment, including our biggest concerns of the threat landscape and which technologies we think are most vulnerable.

To watch the full video, check out our Facebook Live video here.

Other heated topics that kept people animated were the recent changes regarding privacy, for instance GDPR in Europe and its impact on WHOIS data. At the same time, threat intelligence as a discipline showed that it can fall back on other indicators of compromise to connect the dots.

Despite the crowds and excessive heat in Vegas, Black Hat was a successful event where we got to meet many of our existing customers as well as new folks. See you next year!

The post Black Hat USA 2018: ransomware is still the star appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Dweb: Building a Resilient Web with WebTorrent

Mozilla Hacks - Wed, 08/15/2018 - 14:49

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source, and open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

The web is healthy when the financial cost of self-expression isn’t a barrier. In this installment of the Dweb series we’ll learn about WebTorrent – an implementation of the BitTorrent protocol that runs in web browsers. This approach to serving files means that websites can scale with as many users as are simultaneously viewing the website – removing the cost of running centralized servers at data centers. The post is written by Feross Aboukhadijeh, the creator of WebTorrent, co-founder of PeerCDN and a prolific NPM module author… 225 modules at last count! –Dietrich Ayala

What is WebTorrent?

WebTorrent is the first torrent client that works in the browser. It’s written completely in JavaScript – the language of the web – and uses WebRTC for true peer-to-peer transport. No browser plugin, extension, or installation is required.

Using open web standards, WebTorrent connects website users together to form a distributed, decentralized browser-to-browser network for efficient file transfer. The more people use a WebTorrent-powered website, the faster and more resilient it becomes.


The WebTorrent protocol works just like the BitTorrent protocol, except it uses WebRTC instead of TCP or uTP as the transport protocol.

In order to support WebRTC’s connection model, we made a few changes to the tracker protocol. Therefore, a browser-based WebTorrent client or “web peer” can only connect to other clients that support WebTorrent/WebRTC.

Once peers are connected, the wire protocol used to communicate is exactly the same as in normal BitTorrent. This should make it easy for existing popular torrent clients like Transmission and uTorrent to add support for WebTorrent. Vuze already has support for WebTorrent!

Getting Started

It only takes a few lines of code to download a torrent in the browser!

To start using WebTorrent, simply include the webtorrent.min.js script on your page. You can download the script from the WebTorrent website or link to the CDN copy.

<script src="webtorrent.min.js"></script>

This provides a WebTorrent function on the window object. There is also an npm package available.

var client = new WebTorrent()

// Sintel, a free, Creative Commons movie
var torrentId = 'magnet:...' // Real torrent ids are much longer.

var torrent = client.add(torrentId)

torrent.on('ready', () => {
  // Torrents can contain many files. Let's use the .mp4 file
  var file = torrent.files.find(file => file.name.endsWith('.mp4'))

  // Display the file by adding it to the DOM.
  // Supports video, audio, image files, and more!
  file.appendTo('body')
})

That’s it! Now you’ll see the torrent streaming into a <video> tag in the webpage!
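The abbreviated torrentId above stands in for a full magnet URI. As an aside, the info hash inside one can be extracted with plain string handling (this helper is just an illustration, not part of the WebTorrent API):

```javascript
// Extract the BitTorrent info hash from a magnet URI, or return null.
function infoHashFromMagnet (uri) {
  var query = uri.split('?')[1] || ''
  var xt = new URLSearchParams(query).get('xt') || ''
  // xt looks like "urn:btih:<info hash>"
  return xt.indexOf('urn:btih:') === 0 ? xt.slice('urn:btih:'.length) : null
}

// The well-known Sintel torrent used in WebTorrent's examples:
var magnet = 'magnet:?xt=urn:btih:08ada5a7a6183aae1e09d831df6748d566095a10&dn=Sintel'
console.log(infoHashFromMagnet(magnet)) // '08ada5a7a6183aae1e09d831df6748d566095a10'
```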

Learn more

You can learn more on the WebTorrent website, or by asking a question in #webtorrent on Freenode IRC or on Gitter. We’re looking for more people who can answer questions and help people with issues on the GitHub issue tracker. If you’re a friendly, helpful person and want an excuse to dig deeper into the torrent protocol or WebRTC, then this is your chance!



The post Dweb: Building a Resilient Web with WebTorrent appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

