Techie Feeds

A state of constant uncertainty or uncertain constancy? Fast flux explained

Malwarebytes - Tue, 12/12/2017 - 16:00

Last August, WireX made headlines. For one thing, it was dubbed the first-known DDoS botnet to use the Android platform. For another, it used a technique that—for those who have been around the industry for a while—rang a familiar bell: fast flux.

In the context of cybersecurity, fast flux could refer to two things: one, a network similar to a P2P that hosts a botnet’s command and control (C&C) servers and proxy nodes; and two, a method of registering on a domain name system (DNS) that prevents the host server IP address from being identified. For this post, we’re focusing on the latter.

Malware creators were the first actors to use this tactic. Storm, the infamous worm that boggled and exasperated Internet users and security researchers alike in 2007, was one of the first binaries to prove fast flux’s effectiveness in protecting its mothership from detection and exposure. Fast flux made it doubly difficult for the security community and law enforcement agencies to track down criminal activity and shut down operations. Eventually—albeit gradually—Storm’s reign ended, mainly because Atrivo, the ISP that hosted the worm’s master servers, went dark.

From then on, the actors behind fast flux campaigns have been varied: from phishers and bot herders to criminal gangs behind money mule recruitment sites. There are also those that use fast flux to engage in other unlawful schemes, such as hosting exploit sites, extreme or illegal adult content sites, carding sites, bogus online pharmacies, and web traps. Recently, fast flux has been gaining notoriety and usage among cybersquatters, which makes this another threat for businesses with an online presence.

Fast flux—what is it really?

Fast flux is, in a nutshell, an advanced game of hide and seek. Cybercriminals hide by assigning hundreds or thousands of IP addresses—swapped in and out with extreme frequency—to a single fully qualified domain name (FQDN). This is done using a combination of (1) distributing the load received by the server across many geographical points acting as proxies or redirectors and (2) banking on a remarkably short time-to-live (TTL) for each DNS record. The address swapping happens so fast that the whole architecture seems to be in flux.

Here’s a simple illustration: if criminals assign a set of IP addresses that changes every 150 seconds, users who access the domain are actually connecting to a different infected machine every single time.
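The rotation described above is exactly what fast flux detectors look for: many distinct IPs behind one name, combined with very short TTLs. Below is a minimal Python sketch of that heuristic; the function name, snapshot format, and thresholds are illustrative assumptions, not a production detector.

```python
def looks_like_fast_flux(resolutions, distinct_ip_threshold=10):
    """Toy heuristic: flag a domain whose successive DNS answers keep
    introducing new IP addresses, all with very short TTLs.

    `resolutions` is a list of (ttl_seconds, [ip, ...]) snapshots collected
    by polling a resolver over time.
    """
    seen = set()
    short_ttl_count = 0
    for ttl, ips in resolutions:
        seen.update(ips)
        if ttl <= 300:  # TTLs of a few minutes or less
            short_ttl_count += 1
    # Many distinct IPs plus consistently tiny TTLs is the fast flux signature
    return len(seen) >= distinct_ip_threshold and short_ttl_count == len(resolutions)

# A domain rotating through infected hosts every 150 seconds:
snapshots = [(150, ["203.0.113.%d" % i, "198.51.100.%d" % i]) for i in range(6)]
print(looks_like_fast_flux(snapshots))  # True

# A normal site with one stable address and a day-long TTL:
print(looks_like_fast_flux([(86400, ["203.0.113.1"])] * 5))  # False
```

Real detectors refine this with ASN diversity and reverse-DNS checks, but the core signal is the same.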

Fast flux is occasionally used as a standalone term; however, we also see it used as a descriptor of the nature of a network, botnet, or malicious agent. As such, you’ll find the below terms used as well, and for clarity, we have listed their definitions:

  • fast-flux service network (FFSN): The Honeynet Project defines this as “a network of compromised computer systems with public DNS records that are constantly changing, in some cases every few minutes.” There are two known types of this: single-flux and double-flux.
  • fast-flux botnet: Refers to a botnet that uses fast flux techniques. Herders behind such a botnet are known to engage in hosting-as-a-service schemes wherein they rent out their networks to other criminals. Also, some fast-flux botnets have begun supporting SSL communication.
  • fast-flux agent: Depending on the context, this could refer to either (1) the malware responsible for infecting systems to add them to the fast-flux network or (2) the machine that belongs to a fast-flux network.

Fast flux shouldn’t be confused with domain flux, which involves the changing of the domain name, not the IP address. Both fluxing techniques have been used by cybercriminals.

Wait, so assigning different IP addresses to a single domain name is legal?

Although it’s generally the case that one domain name points to one IP address, this association isn’t a strict mapping. And that is a good thing! Otherwise, web admins wouldn’t be able to efficiently distribute incoming network traffic to multiple resources, wherein a single resource corresponds to a unique IP address. This is the basic concept behind load balancing, and popular websites use it all the time. And round-robin DNS—this one-domain-to-many-IP-address association—is just one of several load-balancing algorithms one can implement.
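As a sketch of the legitimate version of this idea, here is round-robin selection in a few lines of Python; the IPs are documentation addresses, and a real DNS server would rotate the order of its answer records rather than run application code like this.

```python
from itertools import cycle

# One domain name, several backend IPs: answers are handed out in rotation
# so incoming traffic spreads across the machines.
backend_ips = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
rotation = cycle(backend_ips)

answers = [next(rotation) for _ in range(5)]
print(answers)
# ['192.0.2.10', '192.0.2.11', '192.0.2.12', '192.0.2.10', '192.0.2.11']
```

Fast flux abuses the same one-to-many mapping, but with thousands of addresses and TTLs measured in seconds rather than hours.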

There’s nothing illicit about this. What criminals are doing is merely taking advantage of or abusing what network technology already has to offer.

Aside from Storm, what other malware has been associated with fast flux?

Threat campaigns that use malware associated with fast flux networks usually involve botnets. In the earlier years, worms were the type that used fast-flux botnets: Storm is a worm binary, and so is Stration, its rival. Nowadays, other malware strains have banked on fast flux’s efficacy. We have Kronos and ZeuS/Zbot, two known banking Trojans; Kelihos, a Trojan spammer and Bitcoin miner; TeslaCrypt, a ransomware family (its payment sites have been found hosted on an FFSN in Eastern Europe); and Asprox, a Trojan password stealer turned advanced persistent threat (APT).

As a side note, fast flux networks are not only used to hide malicious activities. Akamai, a known cloud delivery platform, has revealed in a white paper [PDF] that a fast flux network was used in several web attacks, specifically SQL injection, web scraping, and credential abuse, against their own customers.

Read: Inside the Kronos malware—Part 1, Part 2

Can fast flux be detected/identified? If so, how?

Definitely. Some organizations and independent groups in the security industry have put a lot of effort into investigating, studying, and educating others on what fast flux is, how it works, and how it can be detected. Their published research is well worth visiting, browsing, and reading thoroughly.

Can users protect themselves from fast flux activity?

When it comes to keeping our computing devices safe from physical and online compromise—with the data in them unaltered and secure—extra vigilance and good security hygiene can save folks from a lot of headaches in the future. Installing an anti-malware program with URL-blocking features not only protects devices from malware but also blocks sites that have been deemed malicious, stopping the attack chain. Lastly, regularly update all the security software you use.

Stay safe out there!

The post A state of constant uncertainty or uncertain constancy? Fast flux explained appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (December 04 – December 10)

Malwarebytes - Mon, 12/11/2017 - 19:58

Last week on the blog, we looked at a RIG EK malware campaign, explored how children are being tangled up in money mule antics, took a walk through the world of Blockchain, and gave a rundown of what’s involved when securing web applications. We also laid out the trials and tribulations of the Internet of Things, advised you to be on the lookout for an urgent TeamViewer update, tore down the disguise of new Mac malware HiddenLotus, sighed at the inevitability of a Napoleon-themed piece of ransomware, and unveiled our New Mafia report.

Other news
  • Bitcoin chaos as NiceHash is compromised and thousands of Bitcoins go wandering into the void, potentially to the tune of $62m. (source: Reddit)
  • How easy is it to make a children’s toy start swearing? This easy. (source: The Register)
  • Chrome 63 is now available and comes with multiple security improvements and additions. (source: Chrome updates website)
  • Phishers are slowly turning to HTTPS scam sites—but why? (source: PhishLabs)
  • The Andromeda Botnet is finally dismantled by law enforcement. (source: Help Net Security)
  • If you try to hack your friends out of jail, you may well end up joining them. (source: MLive)
  • Perfect email spoofs? Oh dear. (source: Wired)
  • Think you’ll be getting a ransom out of North Carolina? Think again. (source: Chicago Tribune)

Stay safe, everyone!

The post A week in security (December 04 – December 10) appeared first on Malwarebytes Labs.


How cryptocurrency mining works: Bitcoin vs. Monero

Malwarebytes - Mon, 12/11/2017 - 16:00

Ever wondered why websites that mine in the background don’t mine for the immensely hot Bitcoin, but for Monero instead? We can explain that. Just as there are different types of cryptocurrencies, there are also different types of mining. After providing you with some background information about blockchain [1],[2] and cryptocurrency, we’ll explain how the mining aspect of Bitcoin works, and how other currencies differ.

Proof-of-Work mining

Cryptocurrency miners are in a race to solve a mathematical puzzle, and the first one to solve it (and get it approved by the nodes) gets the reward. This method of mining is called the Proof-of-Work method. But what exactly is this mathematical puzzle? And what does the Proof-of-Work method involve? To explain this, we need to show you which stages are involved in the mining process:

  1. Verify if transactions are valid. Transactions contain the following information: source, amount, destination, and signature.
  2. Bundle the valid transactions in a block.
  3. Get the hash that was assigned to the previous block.
  4. Solve the Proof-of-Work problem (see below for details).

The Proof-of-Work problem is as follows: the miners look for a SHA-256 hash that matches a certain format (the target value). The hash will be based on:

  • The block number they are currently mining.
  • The content of the block, which in Bitcoin is the set of valid transactions that were not in any of the former blocks.
  • The hash of the previous block.
  • The nonce, which is the variable part of the puzzle. The miners try different nonces to find one that results in a hash under the target value.

So, based on the information gathered and provided, the miners race against each other to try and find a nonce that results in a hash that matches the prescribed format. The target value is designed so that the estimated time for someone to mine a block successfully is around 10 minutes (at the moment).

If you look at a blockchain explorer, for example, you will notice that every BlockHash is 64 hexadecimal digits (256 bits) long and starts with 18 zeroes. For example, the BlockHash for Block #497542 equals 00000000000000000088cece59872a04457d0b613fe1d119d9467062e57987f1. At the time of writing, this is the target—the value of the hash has to be so low that the first 18 hex digits are zeroes. So, basically, miners take some fixed input and start trying different nonces (which must be integers), calculating whether the resulting hash is under the target value.
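The whole search can be condensed into a short sketch. This toy miner hashes the block fields plus a nonce and checks the digest against a much easier target (four leading zeroes instead of eighteen); real Bitcoin mining double-hashes a binary block header, but the structure of the search is the same. The transaction string and block number are made-up inputs.

```python
import hashlib

def mine(block_number, transactions, previous_hash, zeros=4, max_nonce=5_000_000):
    """Search for a nonce whose SHA-256 digest starts with `zeros` hex zeroes.
    Four zeroes keeps this toy search fast; Bitcoin's real target currently
    demands about eighteen."""
    prefix = "0" * zeros
    for nonce in range(max_nonce):
        payload = f"{block_number}{transactions}{previous_hash}{nonce}".encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
    return None, None

nonce, digest = mine(
    497543,
    "alice->bob:1.5",
    "00000000000000000088cece59872a04457d0b613fe1d119d9467062e57987f1",
)
print(digest.startswith("0000"))  # True
```

Each extra required zero multiplies the expected work by 16, which is how the network tunes the ten-minute block interval.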

How is Monero different?

Browser mining and other methods of using your system’s resources for other people’s gain are usually done with cryptocurrencies other than Bitcoin, and Monero is the most common one. In essence, Monero mining is not all that different from Bitcoin: it also uses the Proof-of-Work method. Yet Monero is a popular cryptocurrency among those who mine behind the scenes, and we’ll explain why.


The most notable difference between Bitcoin and Monero mining is anonymity. While you will hear people say that Bitcoin is anonymous, you should realize that this is not by design. If you look at a site like BlockExplorer, you can search for every block, transaction, and address. So if you have sent or received Bitcoin to or from an address, you can look at every transaction ever made to and from that address.

Therefore we call Bitcoin “pseudonymous.” You may or may not know the name of an address’s owner, but you can track every payment to and from that address if you want. There are ways to obfuscate your traffic, but they are difficult, costly, and time-consuming.

Monero, however, has always-on privacy features applied to its transactions. When someone sends you Monero, you can’t tell who sent it to you. And when you send Monero to someone else, the recipient won’t know it was you unless you tell them. And because you don’t know their wallet address and you can’t backtrack their transactions, you can’t find out how “rich” they are.

Transactions inside a Bitcoin block are an open book.


Monero mining does not depend on heavily specialized, application-specific integrated circuits (ASICs), but can be done with any CPU or GPU. Without ASICs, it is almost pointless for an ordinary computer to participate in the mining process for Bitcoin. The Monero mining algorithm does not favor ASICs, because it was designed to attract more “little” nodes rather than rely on a few farms and mining pools.

There are more differences that lend themselves to Monero’s popularity among behind-the-scenes miners, like the adaptable block size, which means transactions do not have to wait until they fit into a later block. The mainstream Bitcoin blockchain has a 1 MB block cap, whereas Monero blocks have no fixed size limit. So Bitcoin transactions will sometimes have to wait longer, especially when the transaction fees are low.

The advantages of Monero over Bitcoin for threat actors or website owners are mainly that:

  • It’s untraceable.
  • It can make faster transactions (especially when they are small).
  • It can use “normal” computers effectively for mining.

For those of you looking for more information on the technical aspects of this subject, we recommend:

Bitcoin block hashing algorithm

The Blockchain Informer

Blockchain Info

How Bitcoin mining works

How does Monero privacy work

The post How cryptocurrency mining works: Bitcoin vs. Monero appeared first on Malwarebytes Labs.


Napoleon: a new version of Blind ransomware

Malwarebytes - Fri, 12/08/2017 - 17:00

The ransomware previously known as Blind has been spotted recently with a .napoleon extension and some additional changes. In this post, we’ll analyze the sample for its structure, behavior, and distribution method.

Analyzed samples

31126f48c7e8700a5d60c5222c8fd0c7 – Blind ransomware (the first variant), with .blind extension

9eb7b2140b21ddeddcbf4cdc9671dca1 – Variant with .kill extension

235b4fa8b8525f0a09e0c815dfc617d3 – Variant with .napoleon extension (main focus of this analysis)

//special thanks to @demonslay335 for sharing the older samples

Distribution method

So far, we are not 100 percent sure about the distribution method of this new variant. However, looking at the features of the malware and judging from information from the victims, we suspect that the attackers spread it manually by dropping and deploying it on hacked machines (probably via IIS). This method of distribution is not popular or efficient; however, we’ve encountered similar cases in the past, such as DMALocker or LeChiffre ransomware. Also, a few months ago, hacked IIS servers were used as a vector to plant Monero miners. The common feature of samples dropped in this way is that they are not protected by any cryptor (because it’s not necessary for this distribution method).

Behavioral analysis

After the ransomware is deployed, it encrypts files one-by-one, adding its extension in the format [email].napoleon.

Looking at the content of the encrypted test files, we can see that the same plaintext gave different ciphertexts. This always indicates that a different key or initialization vector was used for each file. (After examining the code, it turned out that the difference was in the initialization vector.)
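The chaining effect behind this observation is easy to demonstrate. In the toy sketch below, each "encrypted" byte is mixed into the next one, so changing only the IV changes the entire output. The XOR "cipher" stands in for AES purely to show the chaining and is not real encryption; Napoleon itself uses AES via Crypto++ on 16-byte blocks.

```python
def toy_cbc_encrypt(plaintext: bytes, key: int, iv: int) -> bytes:
    """Illustrates CBC-style chaining: every byte is XORed with the previous
    ciphertext byte before being "encrypted" with the key."""
    prev = iv
    out = bytearray()
    for byte in plaintext:
        c = (byte ^ prev ^ key) & 0xFF  # chain with previous output, then "encrypt"
        out.append(c)
        prev = c
    return bytes(out)

data = b"SAME PLAINTEXT, SAME KEY"
a = toy_cbc_encrypt(data, key=0x5C, iv=0x01)
b = toy_cbc_encrypt(data, key=0x5C, iv=0x02)
print(a != b)  # True: identical input and key, different IV, different ciphertext
```

This is why per-file IVs are enough to make identical files produce unrelated-looking ciphertexts, even under one key.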

Visualizing the encrypted content helps us guess the algorithm with which the files were encrypted. In this case, we see no visible patterns, so this leads us to suspect an algorithm with some method of chaining cipher blocks. (The most commonly used is AES in CBC mode, or possibly CFB mode.) Below, you can see the visualization made with the help of the file2png script: on the left is a BMP file before encryption, and on the right, after encryption by Napoleon:


At the end of each file, we found a unique 384-character block of alphanumeric text, representing 192 bytes written in hexadecimal. Most probably, this block is the encrypted initialization vector for the particular file:
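A quick way to carve that block out of an encrypted file is to take the last 384 characters and hex-decode them. The helper below is our own reconstruction of the observed layout, not code from the malware:

```python
def extract_trailing_iv_block(encrypted_file: bytes, hex_chars: int = 384) -> bytes:
    """Take the last 384 hexadecimal characters of an encrypted file and
    decode them into the 192 raw bytes they represent."""
    tail = encrypted_file[-hex_chars:].decode("ascii")
    return bytes.fromhex(tail)

# Fake file: some ciphertext followed by a 384-character hex block
sample = b"\x00" * 100 + b"ab" * 192
blob = extract_trailing_iv_block(sample)
print(len(blob))  # 192
```

The decoded 192 bytes match the size of an RSA-2048 ciphertext minus padding overhead, consistent with an RSA-encrypted IV.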

The ransom note is in HTA format and looks like this:

It also contains a hexadecimal block, which is probably the victim’s key, encrypted with the attackers’ public key.

The GUI of Napoleon looks simplified in comparison to the Blind ransomware. However, the building blocks are the same:

It is common among ransomware authors to prepare a Tor-based website that allows automatic payment processing and better organizes communication with the victim. In this case, the attackers decided to use just an email address—probably because they planned for the campaign to be small.

Among the files created by the Napoleon ransomware, we will no longer find the cache file (netcache64.sys) that in previous editions allowed the key to be recovered without paying the ransom.

Below is the cache file dropped by the Blind ransomware (the predecessor of Napoleon):

Inside the code

The malware is written in C++. It is not packed by any cryptor.

The execution starts in the function WinMain:

The flow is pretty simple. First, the ransomware checks the privileges with which it runs. If it has sufficient privileges, it deletes shadow copies. Then, it closes processes related to databases—Oracle and SQL Server—so that they will not block access to the database files it wants to encrypt. Next, it goes through the disks and encrypts found files. At the end, it pops up the dropped ransom note in HTA format.

Comparing the code of Napoleon with the code of Blind, we see that not only the extension of encrypted files has changed; many functions inside have also been refactored.

Below is a fragment of the view from BinDiff: Napoleon vs Blind:

What is attacked?

First, the ransomware enumerates all the logical drives in the system and adds them to a target list. It attacks both fixed and remote drives (type 3 -> DRIVE_FIXED and type 4 -> DRIVE_REMOTE):

This ransomware does not have any list of attacked extensions. It attacks all the files it can reach. It skips only the files that already have the extension indicating they are encrypted by Napoleon:

The email used in the extension is hardcoded in the ransomware’s code.
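The skip test amounts to a suffix check on the filename. In this sketch the email address is a hypothetical placeholder; the real one is hardcoded in the binary:

```python
# Hypothetical placeholder address; the real address is hardcoded in the binary.
NAPOLEON_EXT = ".[attacker@example.com].napoleon"

def should_encrypt(filename: str) -> bool:
    # No extension whitelist: everything reachable is fair game, except
    # files already carrying the ransomware's own extension.
    return not filename.endswith(NAPOLEON_EXT)

print(should_encrypt("report.docx"))                 # True
print(should_encrypt("report.docx" + NAPOLEON_EXT))  # False
```

Attacking every extension (rather than a curated list) is what lets Napoleon damage databases, backups, and system files alike.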

Encryption implementation

Just like the previous version, the cryptographic functions of Napoleon are implemented with the help of the statically-linked library Crypto++ (source).

Referenced strings pointing to Crypto++:

Inside, we found a hardcoded blob—the RSA public key of the attackers:

After conversion to a standardized format, such as PEM, we were able to read its parameters using openssl, confirming that it is a valid 2048-bit RSA key:

Public-Key: (2048 bit)
Modulus:
    00:96:c7:3f:aa:71:b1:e4:2c:2a:f3:22:0b:c2:88:
    8c:87:63:b3:fa:31:97:9b:48:1b:64:2a:14:b9:85:
    0a:2e:30:b2:22:c2:ee:fe:ce:de:db:b9:b7:68:3f:
    12:a6:b3:e1:2b:db:ac:90:ea:3e:0a:07:25:3d:19:
    f2:98:b3:b2:e3:1b:22:e6:0d:ad:d5:97:6f:57:cd:
    77:6c:68:16:49:db:7d:c0:b8:03:e3:81:f5:62:ce:
    22:ae:d9:71:f4:ed:28:f0:29:0b:e3:3c:ea:2d:d8:
    13:fd:00:ff:da:4a:55:b8:70:c3:9f:ef:32:43:4b:
    3f:82:fe:26:31:03:99:fd:b0:1a:2d:7b:f8:b6:65:
    ab:d8:65:f3:c6:f3:e3:06:a9:58:5f:3e:35:0e:4c:
    f0:9e:94:49:66:2e:9c:6c:51:27:62:c1:39:02:cc:
    fb:32:4f:9a:92:f5:f9:99:96:5d:a7:65:5f:1c:fc:
    0a:1e:8b:45:53:06:89:9f:50:11:d6:06:84:a2:f2:
    5f:ab:e4:fb:cf:0d:09:64:d7:7c:99:f9:2a:b7:f5:
    c6:e4:c1:23:24:4e:2b:9f:0b:98:c3:94:93:4f:ca:
    c3:ff:ec:70:9d:df:78:37:56:0d:8b:c4:db:6d:b3:
    73:ac:0a:cb:ac:28:b2:d4:54:61:3e:3c:7e:67:97:
    f5:d9
Exponent: 17 (0x11)
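A quick back-of-the-envelope check of the dump above: openssl prints the modulus with a leading 0x00 sign byte, so 257 printed bytes mean a 256-byte modulus whose most significant byte is 0x96. This short sketch confirms the advertised key size:

```python
leading_byte = 0x96   # most significant byte of the modulus, after the 0x00 sign byte
modulus_len = 256     # bytes: 257 printed bytes minus the sign byte
bits = (modulus_len - 1) * 8 + leading_byte.bit_length()
print(bits)  # 2048

# The public exponent 17 (0x11) is small but odd, so it is a valid RSA
# exponent; 65537 is the far more common choice.
exponent = 0x11
print(exponent)  # 17
```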

This attacker’s public key is later used to encrypt the random key generated for the particular victim. The random key is the one used to encrypt files; after it is used and destroyed, its encrypted version is stored in the victim’s ID displayed in the ransom note. Only the attackers, who hold the private RSA key, are capable of recovering it.

The random AES key (32 bytes) is generated by a function provided by the Crypto++ library:

Under the hood, it uses the secure random generator CryptGenRandom:

All the files are encrypted with the same key; however, the initialization vector is different for each.

Encrypting single file:

Inside the function denoted as encrypt_file, the crypto is initialized with a new initialization vector:

The fragment of code responsible for setting the IV:

Setting initialization vector:

Encrypting file content:

The same buffer after encryption:


Napoleon ransomware will probably not become a widespread threat. The authors prepared it for small campaigns—a lot of data, like the email address, is hardcoded. It does not come with any external configuration, like Cerber’s, that would allow for fast customization.

So far, it seems that the authors fixed Blind’s previous bug of dropping the cache file. That means the ransomware is not decryptable without the original key. All we can recommend is prevention.

This ransomware family is detected by Malwarebytes as Ransom.Blind.


Read about how to decrypt the previous Blind variant here.



The post Napoleon: a new version of Blind ransomware appeared first on Malwarebytes Labs.


Interesting disguise employed by new Mac malware HiddenLotus

Malwarebytes - Fri, 12/08/2017 - 16:00

On November 30, Apple silently added a signature to the macOS XProtect anti-malware system for something called OSX.HiddenLotus.A. It was a mystery what HiddenLotus was until, later that same day, Arnaud Abbati found the sample and shared it with other security researchers on Twitter.

The HiddenLotus “dropper” is an application named Lê Thu Hà (HAEDC).pdf, using an old trick of disguising itself as a document—in this case, an Adobe Acrobat file.

This is the same scheme that inspired the file quarantine feature in Mac OS X. Introduced in Leopard (Mac OS X 10.5), this feature tagged files downloaded from the Internet with a special piece of metadata to indicate that the file had been “quarantined.” Later, when the user tried to open the file, if it was an executable file of any kind, such as an application, the system would display a warning to the user.

The intent behind this feature was to ensure that the user knew that the file they were opening was an application, rather than a document. Even back in 2009, malicious apps were masquerading as documents. File quarantine was meant to combat this problem.

Malware authors have been using this trick ever since, despite file quarantine. Even earlier this year, repeated outbreaks of the Dok malware were distributed in the form of applications disguised as Microsoft Word documents.

So HiddenLotus didn’t seem all that interesting at first, other than as a new variant of the OceanLotus backdoor first seen being used to attack numerous facets of Chinese infrastructure. OceanLotus was last seen earlier this summer, disguised as a Microsoft Word document and targeting victims in Vietnam.

But there was something strange about HiddenLotus. Unlike past malware, this one didn’t have a hidden .app extension to indicate that it was an application. Instead, it actually had a .pdf extension. Yet the Finder somehow identified it as an application anyway.

This was quite puzzling. Further investigation did not turn up a hidden extension. There was also no sign of a trick like the one used by Janicab in 2013.

Janicab used the old fake document technique, being distributed as a file named (apparently) “RecentNews.ppa.pdf.” However, the use of an RLO (right-to-left override) character caused characters following it to be displayed as if they were part of a language meant to be read right-to-left, instead of left-to-right as in English.

In other words, Janicab’s real filename was actually “RecentNews.fdp.app” with the RLO character inserted after the first period, which caused everything following it to be displayed in reverse in the Finder.

However, this deception was not used in HiddenLotus. Instead, it turned out that the ‘d’ in the .pdf extension was not actually a ‘d.’ Instead, it was the Roman numeral ‘D’ in lowercase, representing the number 500.

It was at this point that Abbati’s tweet referring to “its very nice small Roman Unicode” began to make sense. However, it was still unclear exactly what was going on, and how this special character allowed the malware to be treated as an application.

After further consultation with Abbati, it turned out that there’s something rather surprising about macOS: An application does not need to have a .app extension to be treated like an application.

An application on macOS is actually a folder with a special internal structure called a bundle. A folder with the right structure is still only a folder, but if you give it an .app extension, it instantly becomes an application. The Finder treats it as if it were a single file instead of a folder, and a double-click launches the application rather than opening the folder.

When double-clicking a file (or folder), LaunchServices will consider the extension first. If the extension is known, the item will be opened according to that extension. Thus, a file with a .txt extension will, by default, be opened with TextEdit. Some folders may be treated as documents, as in the case of the .aplibrary extension used for an Aperture library “file.” A folder with the .app extension will, assuming it has the right internal structure, be launched as an application.

A file with an unfamiliar extension is handled by asking the user what they want to do. Options are given to choose an application to open the file or to search the Mac App Store.

However, something strange happens when double-clicking a folder with an unknown extension. In this case, LaunchServices falls back on looking at the folder’s bundle structure (if any).

So what does this mean? The HiddenLotus dropper is a folder with the proper internal bundle structure to be an application, and it uses an extension of .pdf, where the ‘d’ is a Roman numeral, not a letter. Although this extension looks exactly the same as the one used for Adobe Acrobat files, it’s completely different, and there are no applications registered to handle that extension. Thus, the system will fall back on the bundle structure, treating the folder as an application, even though it does not have a telltale .app extension.

There is nothing particularly special about this .pdf extension (using a Roman numeral ‘d’) except that it is not already in use. Any other extension that is not in use will work just as well:

Of course, the example shown above wouldn’t fool anyone; it’s merely illustrative of the issue.

This means that there is an enormously large list of possible extensions, especially when Unicode characters are included. It is easily possible to construct extensions from Unicode characters that look exactly like other, normal extensions, yet are not the same. This means the same trick could be used to mimic a Word document (.doc), an Excel file (.xls), a Pages document (.pages), a Numbers document (.numbers), and so on.
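The lookalike trick is easy to reproduce. In this Python sketch we substitute U+217E, the lowercase Roman numeral for 500, for the letter 'd', which matches the character described above; the exact code point HiddenLotus used is our inference from that description.

```python
import unicodedata

real = "Lê Thu Hà (HAEDC).pdf"        # a genuine Acrobat extension
fake = "Lê Thu Hà (HAEDC).p\u217ef"   # 'd' replaced by U+217E

print(real == fake)                # False: different code points
print(fake.endswith(".pdf"))       # False: the system sees an unknown extension
print(unicodedata.name("\u217e"))  # SMALL ROMAN NUMERAL FIVE HUNDRED
```

Because the two names render almost identically, only a byte-level or code-point comparison reveals the substitution.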

This is a neat trick, but it’s still not going to get past file quarantine. The system will alert you that what you’re trying to open is an application. Unless, of course, what you are opening was downloaded via an application that does not use the APIs that properly set the quarantine flag on the file, as is the case for some torrent apps.

Ultimately, it’s very unlikely that this trick is going to have any kind of significant impact on the Mac threat landscape. It’s probable that we will see it used again in the future, but the risk to the average user is not significantly higher than in the case of any other fake document malware.

More than anything else, this trick opens our eyes to an interesting aspect of how macOS identifies and launches applications.

If you think you may have encountered this malware, Malwarebytes for Mac will protect against it, and will scan for and remove it, if present, for free.

The post Interesting disguise employed by new Mac malware HiddenLotus appeared first on Malwarebytes Labs.


How we can stop the New Mafia’s digital footprint from spreading in 2018

Malwarebytes - Thu, 12/07/2017 - 15:00

Cybercriminals are the New Mafia of today’s world. This new generation of hackers is like traditional Mafia organizations, not just in its professional coordination, but in its ability to intimidate and paralyze victims.

To help businesses bring a good security fight to the digital streets, we released a new report today: The New Mafia, Gangs, and Vigilantes: A Guide to Cybercrime for CEOs. This report details the evolution of cybercrime from early beginnings to the present day and the emergence of four distinct groups of cybercriminals: traditional gangs, state-sponsored attackers, ideological hackers, and hackers-for-hire. We worked with a global panel of experts from a variety of disciplines, including PwC, Leeds University, University of Sussex, the Centre for Cyber Victim Counselling in India, and the University of North Carolina to collect the data within the report.

The guide shines a light on the activities of cybercriminals to understand how they work, to examine their weapons of choice—namely ransomware—and to assess what action is needed to protect against them.

Right now, the New Mafia is winning. We found that ransomware attacks have grown by almost 2,000 percent from September 2015 to September 2017. And cyberattacks on businesses have increased 23 percent in 2017. What these attacks show is that we as an ecosystem—vendors, governments, companies—aren’t learning from our mistakes.

Instead of coming together to defeat a common enemy, the focus remains on shaming victims. Whether they be individuals or companies, we’re all quick to point the finger. But that narrative must change. Those affected by cybercrime are often embarrassed and they don’t speak out, which can have dangerous consequences as organizations delay or cover up breach incidents without a plan to prevent them from happening again. We need to educate the C-suite so that CEOs and IT departments both recognize the signs of an attack and can minimize damages, while educating victims instead of shaming them.

This new mentality is important as we face the future of cybercrime. I read articles every week about the billions of devices that will be connected in the future. While the overarching goal is to make our lives easier, it also presents a threat.

The New Mafia is well prepared to exploit the increase in connected devices from cars to pacemakers. We are still making our way through the Wild West of the Internet of Things with early security solutions and a lack of legislation.

For now, we need to keep our digital streets clean with a collaborative model between the public and private sectors, a general awareness of the dangers of cybercrime, and the use of proactive defenses. Shifting from shaming those who have been attacked to engaging with them will remove the crime bosses from our highways of digital opportunity in 2018.

To view the full report, featuring original data and insight from the global panel of experts described above, visit here.

The post How we can stop the New Mafia’s digital footprint from spreading in 2018 appeared first on Malwarebytes Labs.


Use TeamViewer? Fix this dangerous permissions bug with an update

Malwarebytes - Wed, 12/06/2017 - 19:42

TeamViewer, the remote control/web conference program used to share files and desktops, is suffering from a case of “patch it now.” Issued yesterday, the fix addresses an issue where one user can gain control of another’s PC without permission.

Windows, Mac, and Linux are all apparently affected by this bug, which was first revealed over on Reddit. According to TeamViewer, the Windows patch is already out, with Mac and Linux to follow soon. It’s definitely worth updating, as there are shenanigans to be had whether acting as client or server:

  • As the Server: Enables extra menu item options on the right side pop-up menu. Most useful so far to enable the “switch sides” feature, which is normally only active after you have already authenticated control with the client and initiated a change of control/sides.
  • As the Client: Allows for control of the mouse with disregard to the server’s current control settings and permissions.

This is all done via an injectable C++ DLL. The file, injected into TeamViewer.exe, then allows the presenter or the viewer to take full control.

It’s worth noting that even if you have automatic updates enabled, it might take three to seven days for the patch to be applied.

Many tech support scammers make use of programs such as TeamViewer, but with this new technique they wouldn’t have to first trick the victim into handing over control. While in theory a victim should know immediately if a scammer has gained unauthorised control over their system and kill off the session straight away, in practice it doesn’t always pan out like that.

TeamViewer has had other problems in the past, including being used as a way to distribute ransomware, denying being hacked after bank accounts were drained, and even being temporarily blocked by a UK ISP. Controversies aside, you should perhaps consider uninstalling the program until the relevant patch for your operating system is ready to install. This could prove to be a major headache for the unwary until the problem is fully solved.

The post Use TeamViewer? Fix this dangerous permissions bug with an update appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Internet of Things (IoT) security: what is and what should never be

Malwarebytes - Wed, 12/06/2017 - 17:00

The Internet has penetrated seemingly all technological advances today, resulting in Internet for ALL THE THINGS. What was once confined to a desktop and a phone jack is now networked and connected in multiple devices, from home heating and cooling systems like the Nest to AI companions such as Alexa. The devices can pass information through the web to anywhere in the world—server farms, company databases, your own phone. (Exception: that one dead zone in the corner of my living room. If the robots revolt, I’m huddling there.)

This collection of inter-networked devices is what marketing folks refer to as the Internet of Things (IoT). You can’t pass a REI vest-wearing Silicon Valley executive these days without hearing about it. Why? Because the more we send our devices online to do our bidding, the more businesses can monetize them. Why buy a regular fridge when you can spend more on one that tells you when you’re running out of milk?

Unfortunately (and I’m sure you saw this coming), the more devices we connect to the Internet, the more we introduce the potential for cybercrime. Analyst firm Gartner says that by 2020, there will be more than 26 billion connected devices, excluding PCs, tablets, and smartphones. Barring an unforeseen Day After Tomorrow–style global catastrophe, this technology is coming. So let’s talk about the inherent risks, shall we?

What’s happening with IoT cybercrime today?

 Both individuals and companies using IoT are vulnerable to breach. But how vulnerable? Can criminals hack your toaster and get access to your entire network? Can they penetrate virtual meetings and procure a company’s proprietary data? Can they spy on your kids, take control of your Jeep, or brick critical medical devices?

So far, the reality has not been far from the hype. Two years ago, a smart refrigerator was hacked and began sending pornographic spam while making ice cubes. Baby monitors have been used to eavesdrop on and even speak to sleeping (or likely not sleeping) children. In October 2016, thousands of security cameras were hacked to create the largest-ever Distributed Denial of Service (DDoS) attack against Dyn, a provider of critical Domain Name System (DNS) services to companies like Twitter, Netflix, and CNN. And in March 2017, Wikileaks disclosed that the CIA has tools for hacking IoT devices, such as Samsung SmartTVs, to remotely record conversations in hotel or conference rooms. How long before those are commandeered for nefarious purposes?

Privacy is also a concern with IoT devices. How much do you want KitchenAid to know about your grocery-shopping habits? What if KitchenAid partners with Amazon and starts advertising to you about which blueberries are on sale this week? What if it automatically orders them for you?

At present, IoT attacks have been relatively scarce in frequency, likely owing to the fact that there isn’t yet huge market penetration for these devices. If just as many homes had Cortanas as have PCs, we’d be seeing plenty more action. With the rapid rise of IoT device popularity, it’s only a matter of time before cybercriminals focus their energy on taking advantage of the myriad of security and privacy loopholes.

Security and privacy issues on the horizon

According to Forrester’s 2018 predictions, IoT security gaps will only grow wider. Researchers believe IoT will likely integrate with the public cloud, introducing even more potential for attack through the accessing of, processing, stealing, and leaking of personal, networked data. In addition, more money-making IoT attacks are being explored, such as cryptocurrency mining or ransomware attacks on point-of-sale machines, medical equipment, or vehicles. Imagine being held up for ransom when trying to drive home from work. “If you want us to start your car, you’ll have to pay us $300.”

It’ll be like a real-life Monopoly game.

Privacy and data-sharing may become even more difficult to manage. For example, how do you best protect children’s data, which is highly regulated and protected according to the Children’s Online Privacy Protection Rule (COPPA), if you’re a maker of smart toys? There are rules about which personally identifiable information can and cannot be captured and transmitted for a reason—because that information can ultimately be intercepted.

Privacy concerns may also broaden to include how to protect personal data from intelligence gathering by domestic and foreign state actors. According to the Director of National Intelligence, Daniel Coats, in his May 2017 testimony at a Senate Select Committee on Intelligence hearing: “In the future, state and non-state actors will likely use IoT devices to support intelligence operations or domestic security or to access or attack targeted computer networks.”

In a nutshell, this could all go far south—fast.

So why are IoT defenses so weak?

Seeing as IoT technology is a runaway train, never going back, it’s important to take a look at what makes these devices so vulnerable. From a technical, infrastructure standpoint:

  • There’s poor or non-existent security built into the device itself. Unlike mobile phones, tablets, and desktop computers, little-to-no protections have been created for these operating systems. Why? Building security into a device can be costly, slow down development, and sometimes stand in the way of a device functioning at its ideal speed and capacity.
  • The device is directly exposed to the web because of poor network segmentation. It can act as a pivot to the internal network, opening up a backdoor to let criminals in.
  • There’s unneeded functionality left in based on generic, often Linux-derivative hardware and software development processes. Translation: Sometimes developers leave behind code or features developed in beta that are no longer relevant. Tsk, tsk. Even my kid picks up his mess when he’s done playing. (No he doesn’t. But HE SHOULD.)
  • Default credentials are often hard coded. That means you can plug in your device and go, without ever creating a unique username and password. Guess how often cyber scumbags type “1-2-3-4-5” and get the password right? (Even Dark Helmet knew not to put this kind of password on his luggage, nevermind his digital assistant.)

From a philosophical point of view, security has simply not been made an imperative in the development of these devices. The swift march of progress moves us along, and developers are now caught up in the tide. In order to reverse course, they’ll need to walk against the current and begin implementing security features—not just quickly but thoroughly—in order to fight off the incoming wave of attacks.

What are some solutions?

 Everyone agrees this tech is happening. Many feel that’s a good thing. But no one seems to know enough or want enough to slow down and implement proper security measures. Seems like we should be getting somewhere with IoT security. Somehow we’re neither here nor there. (Okay, enough quoting Soul Asylum.)

Here’s what we think needs to be done to tighten up IoT security.

Government intervention

In order for developers to take security more seriously, action from the government might be required. Government officials can:

  • Work with the cybersecurity and intelligence communities to gather a series of protocols that would make IoT devices safer for consumers and businesses.
  • Develop a committee to review intelligence gathered and select and prioritize protocols in order to craft regulations.
  • Get it passed into law. (Easy peasy lemon squeezy.)

Developer action

Developers need to bake security into the product, rather than tacking it on as an afterthought. They should:

  • Have a red team audit the devices prior to commercial release.
  • Force a credential change at the point of setup. (i.e., Devices will not work unless the default credentials are modified.)
  • Require https if there’s web access.
  • Remove unneeded functionality.
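
The “force a credential change” step can be sketched in a few lines. The Python below is a hypothetical first-run setup gate (the function, variable names, and default list are illustrative, not from any real device firmware): the device stays in setup mode until factory defaults are replaced.

```python
# Hypothetical first-boot setup gate: the device refuses to finish
# setup until the factory-default credentials have been replaced.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "1234"), ("root", "root")}

def setup_complete(username: str, password: str) -> bool:
    """Return True only if the supplied credentials are not factory
    defaults and the password meets a minimal length requirement."""
    if (username, password) in DEFAULT_CREDENTIALS:
        return False  # still on defaults: keep the device in setup mode
    if len(password) < 12:
        return False  # too short to withstand a basic brute-force attempt
    return True
```

The key design choice is that the check is a hard gate, not a warning the user can click past.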

Thankfully, steps are already being taken, albeit slowly, in the right direction. In August 2017, Congress introduced the Internet of Things Cybersecurity Improvement Act, which seeks to require that any devices sold to the US government be patchable, not have any known security vulnerabilities, and allow users to change their default passwords. Note: sold to the US government. They’re not quite as concerned about the privacy and security of us civvies.

And perhaps in response to blowback from social and traditional media, including one of our own posts on smart locks, Amazon is now previewing an IoT security service.

So will cybersecurity makers pick up the slack? Vendors such as Verizon, DigiCert, and Karamba Security have started working on solutions purpose-built for securing IoT devices and networks. But there’s a long way to go before standards are established. In all likelihood, a watershed breach incident (or several) will lead to more immediate action.

How to protect your IoT devices

 What can regular consumers and businesses do to protect themselves in the meantime? Here’s a start:

  • Evaluate if the devices you are bringing into your network really need to be smart. (Do you need a web-enabled toaster?) It’s better to treat IoT tech as hostile by default instead of inherently trusting it with all your personal info—or allowing it access onto your network. Speaking of…
  • Segment your network. If you do want IoT devices in your home or business, separate them from networks that contain sensitive information.
  • Change the default credentials. For the love of God, please come up with a difficult password to crack. And then store it in a password manager and forget about it.
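
For that last bullet, generating a hard-to-crack replacement is cheap. Here is a minimal sketch using Python’s standard secrets module (the helper name is mine; only the module is real):

```python
import secrets

def generate_password(n_bytes: int = 16) -> str:
    """Generate a URL-safe random password carrying n_bytes of entropy."""
    return secrets.token_urlsafe(n_bytes)

pw = generate_password()  # e.g. a ~22-character random string
```

Paste the result into your password manager and, as suggested above, forget about it.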

The reason IoT devices haven’t already short-circuited the world is that a lot of devices are built on different platforms and operating systems, and use different programming languages (most of them proprietary). So developing malware attacks for every one of those devices is unrealistic. If businesses want to make IoT a profitable model, security WILL increase out of necessity. It’s just a matter of when. Until then…gird your loins.

The post Internet of Things (IoT) security: what is and what should never be appeared first on Malwarebytes Labs.

Categories: Techie Feeds

How to harden AdwCleaner’s web backend using PHP

Malwarebytes - Wed, 12/06/2017 - 16:00

More and more applications are moving from desktop to the web, where they are particularly exposed to security risks. They are often tied to a database backend, and thus need to be properly secured, even though most of the time they are designed to restrict access to authenticated users only. PHP is used to develop a lot of these web applications, including several dedicated to AdwCleaner management.

There is no magic unique solution to harden a web application, but as always in security, it’s a matter of layers including:

  • Applying the latest security patch and updates
  • Sending the correct HTTP headers
  • Hardening the language stack
  • Hardening the OS
  • Taking network security measures

Since we’re in 2017, we’ll consider that security patches and updates are applied properly, so this article will focus on several must-have HTTP headers, as well as how we harden our web stack at the PHP level in an effective and easy way for the AdwCleaner web management application.

Securing a web application using HTTP headers

There are a lot of standard HTTP headers for various uses (like encoding and caching), and a lot of them aim to enforce smart security behaviors, like mitigating XSS, for HTTP clients (i.e., web browsers). Here are a few useful ones.

A website suffering from XSS, without the proper HTTP headers in place to mitigate it.

Strict-Transport-Security
This instructs the browser to connect to the website using HTTPS directly for a certain period of time using the max-age directive. It can also be applied to subdomains with includeSubDomains directive.

Referrer-Policy
This header aims to give fine-grained control over when the referrer is transmitted. Several directives are available, from no-referrer, which completely disables the Referer header, to strict-origin-when-cross-origin, which means that the full URL is sent with any TLS request made within the same domain. (Only the domain is sent as referrer if the request is made to a different domain or subdomain.) Finally, if the request is made over plain HTTP, the referrer is not sent.

It’s a handy header especially to reduce internal URL leaks to external services.

X-Content-Type-Options
It enforces the MIME type of resources, and states that they shouldn’t be changed. If the MIME type is not the one advertised with the Content-Type header, then the request is dropped in order to mitigate MIME confusion attacks. There’s only one directive: nosniff.

Mozilla Documentation

X-Frame-Options
This header controls whether or not the page can be loaded as an iframe or an object. There are different directives, from DENY to forbid this behaviour, to SAMEORIGIN, which allows it only from the same origin (domain or subdomain), and ALLOW-FROM which allows the operator to specify a whitelist of origins.

RFC 7034

X-Robots-Tag
This controls how the page should be handled by crawling bots (i.e., search engines). Several directives exist: the noindex, nofollow, nosnippet, and noarchive directives will prevent the page from being indexed in search results and instruct the crawler not to follow the page’s links. The crawler will also not store any copy of the page.

Google documentation

X-XSS-Protection
This legacy header instructs the browser to block any detected XSS request when set to 1; mode=block. It’s now superseded by the Content-Security-Policy header, but is still useful on older web browsers. This header would have mitigated the XSS on the website at the beginning of this article.

Content-Security-Policy
This powerful header allows the operator to define rules specifying how, and from where, the webpage’s resources can be loaded. It’s particularly efficient against XSS. For instance, it’s possible to enforce loading resources over HTTPS only using default-src https:; under such a policy, inline scripts are forbidden unless the 'unsafe-inline' keyword is explicitly added.

It’s possible to create more complex rules, for instance:

base-uri 'none'; Forbids use of the <base> URI.
default-src 'self'; Uses the origin as a fallback for any fetch directive that is not specified.
frame-src; Forbids any external content from being loaded using iframes.
connect-src 'self'; Forbids ping, Fetch, XMLHttpRequest, WebSocket, and EventSource from loading external content.
form-action 'self'; Restricts form submissions to the origin.
frame-ancestors 'none'; Like X-Frame-Options: DENY, forbids loading the page using iframes, objects, embeds, or applets.
img-src 'self' data:; Allows <img> tags to use data URIs from the origin only.
media-src 'none'; Forbids loading any <audio> or <video> elements.
object-src 'none'; Forbids loading any <object>, <embed>, and <applet> elements.
script-src 'self' 'unsafe-inline'; JavaScript can be loaded inline and from the origin only.
style-src 'self' 'unsafe-inline'; Stylesheets can be loaded inline and from the origin only.
report-uri /csp-report; Instructs the client to POST any violation of the policy to the specified address, here /csp-report. This directive is being replaced by report-to, which has the same syntax.

Here are the W3C specifications about CSP level 2 and CSP3.
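
To make the directive/value syntax above concrete, here is a small illustrative Python helper (not part of any real framework; the function name is mine) that assembles a policy string from directive/value pairs like the ones listed:

```python
def build_csp(directives: dict) -> str:
    """Serialize a CSP policy: each directive is followed by its
    space-separated values, and directives are joined with '; '."""
    return "; ".join(
        f"{name} {' '.join(values)}" if values else name
        for name, values in directives.items()
    )

policy = build_csp({
    "base-uri": ["'none'"],
    "default-src": ["'self'"],
    "object-src": ["'none'"],
    "report-uri": ["/csp-report"],
})
# policy == "base-uri 'none'; default-src 'self'; object-src 'none'; report-uri /csp-report"
```

Building the policy from structured data rather than a hand-edited string makes it harder to ship a typo that silently disables a directive.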

While deploying all of these headers may seem difficult, the only real head-scratcher is Content-Security-Policy. Although this one must be deployed, it should be done with care, as it may easily break a lot of applications. Google’s CSP Evaluator is a handy tool to analyze any website’s CSP.

Another valuable service is, which tests your web application’s headers and gives advice when some are missing or misconfigured.
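
A rough local approximation of what such header-testing services do: this sketch flags which of the recommended headers are absent from a response. The header names are the real ones discussed above; the function itself is illustrative.

```python
SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Referrer-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "X-XSS-Protection",
    "Content-Security-Policy",
]

def missing_security_headers(response_headers: dict) -> list:
    """Return the recommended security headers absent from a response.
    Comparison is case-insensitive, as HTTP header names are."""
    present = {name.lower() for name in response_headers}
    return [h for h in SECURITY_HEADERS if h.lower() not in present]
```

Feed it the header dict of any response (from curl, your HTTP client of choice, or a test suite) to get a quick gap list.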

Here are a few configuration snippets for three webservers to deploy the above configuration. Please note that you may need to adapt this configuration depending on your specific needs (especially the CSP):

Nginx:

add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
add_header 'Referrer-Policy' 'same-origin';
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";
add_header X-XSS-Protection "1; mode=block";
add_header Content-Security-Policy "base-uri 'none'; default-src 'self'; child-src;connect-src 'self'; form-action 'self'; frame-ancestors 'none'; img-src 'self' data:; media-src 'none'; object-src 'none'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; report-uri /csp-report; report-to /csp-report;";

Caddy:

header / {
    Strict-Transport-Security "max-age=31536000; includeSubdomains; preload"
    Referrer-Policy 'same-origin'
    X-Content-Type-Options nosniff
    X-Frame-Options SAMEORIGIN
    X-Robots-Tag "noindex, nofollow, nosnippet, noarchive"
    X-XSS-Protection "1; mode=block"
    Content-Security-Policy "base-uri 'none'; default-src 'self'; child-src;connect-src 'self'; form-action 'self'; frame-ancestors 'none'; img-src 'self' data:; media-src 'none'; object-src 'none'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; report-uri /csp-report; report-to /csp-report;"
}

Apache:

Header always set Strict-Transport-Security "max-age=31536000; includeSubdomains; preload"
Header always set Referrer-Policy 'same-origin'
Header always set X-Content-Type-Options nosniff
Header always set X-Frame-Options SAMEORIGIN
Header always set X-Robots-Tag "noindex, nofollow, nosnippet, noarchive"
Header always set X-XSS-Protection "1; mode=block"
Header always set Content-Security-Policy "base-uri 'none'; default-src 'self'; child-src;connect-src 'self'; form-action 'self'; frame-ancestors 'none'; img-src 'self' data:; media-src 'none'; object-src 'none'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; report-uri /csp-report; report-to /csp-report;"

While setting the correct security HTTP headers is a good first step to mitigate some attacks, it’s not sufficient.

That’s why AdwCleaner’s backend PHP stack is also hardened, to raise the cost of exploiting vulnerabilities.

Hardening PHP

The problem we’re trying to solve is to restrict the language surface that the application can access to:

  • block access to specific functions.
  • give access only to a restricted set of files and classes.
  • sanitize various functions inputs.
  • restrict execution to read-only PHP files and deny it on writable ones.
  • replace rand() and mt_rand() by random_int().

This may sound simple, but it becomes quickly complex to manage at large scale, especially without tinkering with the application source code.

Since we’re in 2017, we use PHP7, meaning that we can no longer use Suhosin, as it only works with PHP5 and below. We’re not alone in this situation. Thus, some fine people developed Snuffleupagus, a PHP7+ extension that takes a lot of inspiration from Suhosin but with extended capabilities and more industrialized usage.

Snuffleupagus logo – an elephant as majestic as PHP itself

Snuffleupagus mitigates issues in two main ways:

Killing bug classes at once is pretty handy: Instead of writing a rule for every situation, it’s possible to write a generic rule that will mitigate numerous bugs. For instance, mail() RCE, weak PRNG, permissive chmod(), system() injections, or file upload RCE can easily be fixed using only one or two rules to address the whole bug family.

A practical example using file-upload RCE:

$uploaddir = '/var/www/uploads/';
$uploadfile = $uploaddir . basename($_FILES['userfile']['name']);
move_uploaded_file($_FILES['userfile']['tmp_name'], $uploadfile);

Code like this has given rise to countless RCEs (CVE-2001-1032, CVE-2016-9187…). It’s possible to mitigate the whole class using the following directive:


where the script under tests/ returns 0 to allow the upload and any other value to deny it – vld is pretty useful for that:

$ php -d vld.execute=0 -d -d $file

That way any upload containing PHP code will be dropped.
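
Conceptually, the check boils down to “refuse any upload that contains PHP code.” Here is a deliberately simplified Python sketch of that idea (the real mechanism above compiles the candidate file with vld rather than scanning for tags, so treat this only as an illustration):

```python
# Markers that indicate embedded PHP code in an uploaded file.
PHP_MARKERS = (b"<?php", b"<?=")

def upload_allowed(file_bytes: bytes) -> bool:
    """Reject any upload whose raw contents embed a PHP open tag."""
    return not any(marker in file_bytes for marker in PHP_MARKERS)
```

A naive scan like this can be bypassed in ways a bytecode-level check cannot, which is exactly why the article’s approach leans on vld.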

Another feature valuable for our use case is virtual patching. It allows fine-grained settings for functions. For instance, I want to allow a call to system(“id”) but I don’t want to allow any other system calls. The rules would look like:

sp.disable_functions.function("system").param("cmd").value("id").allow();
sp.disable_functions.function("system").param("cmd").drop();

Since the rules are evaluated in order, we first allow a call to system with id as the cmd argument, and we then drop every other call to system.
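
Because evaluation order decides the outcome, the allow/drop pair behaves like a first-match filter. A toy Python model of that evaluation (the rule tuple format is mine, not the Snuffleupagus syntax):

```python
# Each rule: (function name, argument value or None for "any", verdict).
RULES = [
    ("system", "id", "allow"),   # explicit whitelist entry
    ("system", None, "drop"),    # catch-all: drop every other system() call
]

def evaluate(function: str, argument: str) -> str:
    """Return the verdict of the first matching rule, mirroring
    in-order rule evaluation; unmatched calls are unrestricted."""
    for func, value, verdict in RULES:
        if func == function and value in (None, argument):
            return verdict
    return "allow"
```

Reversing the two rules would make the catch-all fire first and drop everything, which is why the whitelist entry must come before it.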

It’s also possible to write rules for a specific filename (filename(name)), hash (hash(sha256)), return value (ret(value)) or type (ret_type(type_name)), and client IP (cidr(ip/mask)). The behaviour when a rule is triggered can also be adapted:

  • drop(): drop the request
  • simulation(): only log the event without blocking it
  • allow(): allow the request
  • dump(): dump the request in a directory

An entry in the PHP logfile is written when an event is triggered, for instance:

2017/10/08 07:30:19 [error] 625#625: *54641 FastCGI sent in stderr: "PHP message: [snuffleupagus][][include][drop] Inclusion of a forbidden file (/a/path/to/a/webroot/../../../)" while reading response header from upstream, client: <redacted>, server:, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock:", host: ""

Since no one likes writing rules by hand, a nice way to start is by using a script that parses the application PHP files, computes the hash of functions containing dangerous functions, and generates rules based on the results—only the files with the corresponding hashes will be allowed to execute these functions.

We generate new customized rules at every update pushed in production, alongside a set of default rules that are always valid (like system calls, uploads validation, and read-only execution). Since the log format is easy enough to parse, we can trigger notifications when a request has been blocked by one of the rules and act accordingly:

Mail notification sent when a snuffleupagus rule has been triggered.


The documentation is available on ReadTheDocs, along with slides from their talks at BerlinSide and BlackAlps.


This article covered only two of the many measures we take to secure AdwCleaner‘s backend. Although some of these vulnerabilities can be mitigated client-side using browser add-ons like NoScript, it’s always better to fix them as soon as possible using the straightforward techniques explained above. More hardening can be done at the OS and network level, and you can refer to our previous article about TLS to learn more about some of these.

The post How to harden AdwCleaner’s web backend using PHP appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Blockchain technology: not just for cryptocurrency

Malwarebytes - Tue, 12/05/2017 - 18:20

Imagine a place where you can safely store all your personal information and only you decide who has access to it. You can choose which parts of that information you want to share, and you can just as easily revoke that access.

If this place ever comes into existence, I am willing to bet it will be built on blockchain technology.

Blockchain technology is still very much in development, but those in the know are convinced it will change many markets and industries. So, after delving into the workings of blockchain and cryptocurrency, it’s time to take a closer look at what blockchain technology can do outside the realm of cryptocurrencies. Most of these possibilities take the form of smart contracts.

What is a smart contract?

The expression “smart contracts” was coined by Nick Szabo long before blockchain technology was refined. He envisioned a technology meant to replace legal contracts, where the terms of a contract could be digitized and automated. An action (payment) could be completed as soon as the condition (delivery) was met.
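
Szabo’s condition-then-action idea can be modeled as a tiny state machine. This hypothetical Python sketch (class and method names are mine) releases payment automatically once the delivery condition is met:

```python
class EscrowContract:
    """Toy model of Szabo's idea: the action (payment) executes
    automatically as soon as the condition (delivery) is met."""

    def __init__(self, amount: int):
        self.amount = amount   # funds held by the contract
        self.delivered = False
        self.paid = False

    def confirm_delivery(self):
        self.delivered = True
        self._settle()

    def _settle(self):
        # No lawyer or clerk in the loop: the condition triggers the action.
        if self.delivered and not self.paid:
            self.paid = True
```

The point of the model is that nobody "approves" the payment; fulfilling the condition is the approval.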

After the introduction of blockchain, the term “smart contract” was used more widely as software that runs computations on the blockchain.

As a quick reminder, the blockchain is defined as a distributed, decentralized, cryptographically-secured ledger, where each new block contains a reference to the previous block, as well as all the confirmed “transactions” since that previous block was approved.

I use the term transactions lightly here, since it would seem to imply that we are still discussing cryptocurrency, which is not the case. We call them transactions because of the protocols that are in place to determine whether a contract is considered fulfilled.

Today, a smart contract can be any kind of software, as long as it’s based on blockchain technology. It can be used not only to complete “transactions,” but to secure data. A smart contract could specify that your physician has access to your medical history, but she can’t see your financial history.
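
The chained-ledger definition above is compact enough to sketch: each block commits to its transactions and to the hash of the previous block, so tampering with any earlier block breaks every later link. A minimal illustration in Python (the block layout is simplified for the example):

```python
import hashlib
import json

def make_block(transactions: list, previous_hash: str) -> dict:
    """Build a block whose hash commits to both its transactions
    and the previous block's hash, forming the chain."""
    payload = {"transactions": transactions, "previous_hash": previous_hash}
    block = dict(payload)
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(["genesis"], previous_hash="0" * 64)
block1 = make_block(["alice->bob"], previous_hash=genesis["hash"])
# Altering genesis changes its hash, so block1's previous_hash no longer matches.
```

Real blockchains add timestamps, consensus, and Merkle trees on top, but the tamper-evidence comes from exactly this hash linkage.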

Some early blockchain technology developers

Although the use of the blockchain technology for other applications is still in the early stages, we are seeing some promising developments. For example, the Ethereum Project advertises itself as a decentralized platform that runs smart contracts: applications that run exactly as programmed, without any possibility of downtime, censorship, fraud, or third-party interference. A list of 850 apps built on the Ethereum platform can be found at

One of the best known Ethereum-based apps is Augur, which uses a blockchain-based betting system to use the knowledge of the masses in order to predict upcoming events.

IBM is involved in some try-outs with major global companies like Maersk based on the Hyperledger Fabric. Hyperledger Fabric is a blockchain framework that provides a foundation for developing applications or solutions with a modular architecture. It was designed by IBM and Digital Asset as a technology to host smart contracts called “chaincode” that comprise the application logic of the system.

The potential future applications of this technology are endless, from implementing a blockchain ledger in order to streamline management operations and approvals to moving elections online (and guaranteeing secure votes, as it would take an insane amount of computer power to hack). Still, as with any new tech, there are both golden opportunities and potential for corruption.

Positive application of smart contracts

Here are some examples of how companies can benefit from using smart contracts. Using blockchain technology, they could:

  • Design a fully-automated supply chain management system. When a certain condition is reached, the appropriate action is taken. Imagine a factory that automatically orders supplies when it is about to run out of them.
  • Manage huge paper trails. Each step in the paper trail can be added as a new block in the chain, and checks can be placed to ensure all conditions have been met that are needed to proceed.
  • Exchange vital business information in real time. Every node can contribute to and access all the information in the blocks.
  • Eliminate the middleman when dealing with others. The parties can interact directly and securely, by relying on the blockchain technology.
  • Eliminate fraud. Irreversibility makes it fraud-resistant. In a proper setup, there is no way to make unauthorized changes in already approved blocks.

Potential pitfalls of smart contracts

Reasons why companies might shy away from using blockchain technology for certain parts of their business include:

  • The content of the contracts is visible to all participants. There are some parts of your business that are not suitable for public knowledge. So there may be a need to encrypt certain data.
  • It’s impossible to correct errors. You would have to reverse the contract once a faulty one has been approved.
  • Long development and implementation is needed to replace existing solutions on a large scale. This may improve when we are more well-versed in applying this technology.
  • If personally identifiable information needs to be stored, this could break local or international regulations. For example, smart contracts would have a hard time complying with privacy laws like the upcoming GDPR.
  • A fully distributed network offers a larger surface for hackers. Remember that all the nodes have access to all the information. So it could pose extra risks if a hacker can access a node or pretend to be one.

The development of the blockchain is expected to cause a revolution similar to the one brought to us by the Internet. It may take some time for smart contracts to conquer the corporate world, but the ball is rolling. If you want to be ready for the future, especially if you work in industries where value transactions take place, it’s a good idea to start learning more about blockchain technology and smart contracts.

The post Blockchain technology: not just for cryptocurrency appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Children and young adults: the next-generation money mules

Malwarebytes - Tue, 12/05/2017 - 16:00

According to Cifas, a nonprofit fraud prevention organization based in the United Kingdom, more than 8,500 cases of bank account misuse have been filed against 18- to 24-year-olds between January and September 2017. Cifas has linked the account abuse to an uptick in young people acting as money mules,  allowing their bank account to be used in order to facilitate the movement of criminal funds. The number of young people engaged in this kind of money laundering scheme doubled in the last four years, painting a troubling picture.

Earlier this year, the Metropolitan Police in the UK claimed that children as young as 13 years old have also been used by gangs or criminals to move stolen money on their behalf. The youngsters were either approached—often with threats of violence—right outside their schools, or they responded to adverts they saw on social media or video sharing sites that offered cash rewards of at least $67 (£50) for money transfer work. Some posts were even advertised as legitimate work under titles like “Financial Manager,” “Money Transfer Agent,” or “Payment Processing Agent.”

Money mules (aka smurfers) usually get a cut from moving money, but it was found that youngsters were often never paid.

This worrying trend has inspired Cifas and Financial Fraud Action (FFA) UK to launch the Don’t Be Fooled campaign this week, aiming to educate would-be smurfers about the consequences of money muling.

This video sheds light on significant crimes that are enabled by money muling.

What children and young adults may not seem to realize is that money muling is a form of money laundering. Those caught could face imprisonment of up to 14 years. Their bank accounts will also be closed, their credit scores will likely reflect a low or poor rating, and they will face a significant challenge should they wish to apply for student loans, mobile phone contracts, and other products or services that require an account.

Parents and caregivers: if you opened an account for your child and she or he is active on social media, we urge you to monitor the account for money coming in from or going out to unknown sources. It may also be time to talk to them about money muling, what it is, and how it affects others, particularly the victims of these crimes. Here are a few online sources to read so you can guide them on protecting themselves from becoming part of a criminal scheme:

Read: Stranger danger and the sociable child

Europol has a dedicated page for money muling awareness and prevention here. Cifas, too, has put out several resources, one of which is a flyer called A mule’s life is a fool’s life [PDF]. It contains a trove of information about who these criminals target when they go out to recruit mules, what tactics they use to trick their prospects, and what signs to watch out for if one is likely being led to such a scheme.

If you already believe that you or someone you know is acting as a money mule, cease transferring money immediately and notify your bank, the service you use in making the transactions, and law enforcement. Remember, one cannot claim ignorance when caught red-handed.

Sadly, money muling isn’t the only financial crime children and young adults could be exposed to. Other “easy money” schemes include selling bank accounts, conducting payments that they know will bounce, and opening credit or retail accounts and phone contracts without the intention of honoring the credit agreements.

“With Christmas only a few weeks away, we want to warn young people, in particular students, to be wary of anyone approaching them in the student union or elsewhere with promises of cash for the use of their bank account,” said Cifas Chief Executive Simon Dukes in a BBC interview. “Criminals may make it sound attractive by offering a cash payment, but the reality is that letting other people use your account in this way is fraud and it is illegal.”

Stay safe!

The post Children and young adults: the next-generation money mules appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Seamless campaign serves RIG EK via Punycode (updated)

Malwarebytes - Mon, 12/04/2017 - 22:48

Update (2017-12-05): We noted some malvertising chains using a new domain name (newadultthem[.]info) also hosted on the same IP address as the Punycode one.

– –

The Seamless campaign is one of the most prolific malvertising chains pushing the RIG exploit kit, almost exclusively delivering the Ramnit Trojan. Identifying Seamless is typically easy due to its use of static strings and IP literal URLs. However, for over a week now we have been seeing another Seamless campaign running in parallel, this one making use of special characters.

Rather than using an IP address, this Seamless chain uses a Cyrillic-based domain name, which is represented in ASCII via Punycode, an encoding of Unicode characters used for hostnames. In this blog post, we’ll do a quick historical review of the Seamless gate and describe this latest iteration in its new format.


We noted redirections via adult sites around March 2017 (as pictured below) going through a new gate targeting Canada. Due to the presence of a string of the same name in its code, Cisco named this new campaign “Seamless.” Seamless dropped the Ramnit banking Trojan from the very beginning and continues to do so.

The URL patterns were typically:

These days, web traffic to Seamless still comes from adult portals serving malvertising, eventually redirecting to the same IP literal URLs containing the string test followed by three digits:

Seamless and Punycode

It wasn’t until recent years that domain registrars began to allow non-ASCII characters in domain names, as defined by the Internationalized Domain Names in Applications (IDNA) framework. This allowed countries to offer domain names in their own alphabets, which include what we’d otherwise call “special characters” but which in fact existed long before the Internet was born.

Punycode is an ASCII representation of Unicode characters used in hostnames; it allows for IDNs while DNS lookups can still be performed using ASCII characters. The threat actors behind Seamless have been using a domain name containing Cyrillic characters (mostly found in Eastern European countries), which we noticed in our honeypot captures via its Punycode representation.

The call to the Seamless gate was initiated by a malvertising redirection:

<html>
  <head>
    <link rel="icon" type="image/gif" href="data:image/gif;base64,[removed]=="/>
    <meta http-equiv="refresh" content="0; URL='http://xn--80af6acaaaj9h .xn--p1acf/test551.php'" />
  </head>
  <body></body>
</html>

It is worth noting that Punycode has been exploited by scammers crafting phishing domain names resembling official brands, as sometimes certain Unicode characters are hard to distinguish from ASCII ones.

It is unclear whether this was a deliberate attempt to bypass intrusion detection systems or if it is simply an odd case similar to previous ones such as the Decimal IP campaign. Time will tell if the Seamless operators maintain it or abandon it in favor of the long-used IP literal URLs.

Indicators of compromise (IOCs)

Note: These IOCs are specific to the Punycode Seamless campaign.


xn--80af6acaaaj9h .xn--p1acf/test441.php
xn--80af6acaaaj9h .xn--p1acf/test551.php

IP address:


1609ab905f2ebbfe23b1111789cf8cade8b303642ecc5002ea63a3be24d2a07e
1a82f19a88827586a4dd959c3ed10c2c23f62a1bb3980157d9ba4cd3c0f85821
555f2d1665910e5d47ba45b0c8ec9eafaeb7c12868c5b76d52fcd0e17da248ec
58fb6436df51c59f8efe67b82e2a8a3af10faf798e9f4f047bac2fab1c1b0541
5943564ab3d38d4a9a0df32352dd5d2b04ccb76294e68a5efcbad5745d397de3
6fc5375161decbb23391e8302693437740765cf3e5e6f17afee9d720c22a1fac
79754c3bf6d5e40d89d2d81ba5a124b9ee13d924994fddeb170a50abd7f2be62
85c3822ab9254c8b52515869ca7430165142b37d05d4a41fe0293177098d44f2
888c88d190776cbf6bb010c2613c31428fb0779ed90f4f2bb8611a754a6bd44a
a5d557f02bbf6454e912f7295f641985ec5535443a58bb163d7beadade854783
afe98c589e78abf76080ecdd1640f8e6e64f1f0244cfd4a1c216a4c0f8a9df24
b6c18ec6499e9671e0d80107e27485dee0ece626220a701570037407423f25c1
bba607dcf8d747daa2cb8d60986240caa932300dcf12da7073238822b3a1f42f
bbea73e7b0d57969e45ef34667a016992f9295932e5b901858e3417593e080a2
d7b0ea1593eee67d43f5cd0a4472ac2cd12920a9ae2e87f29b02fafc98b00321
ec82f488d2c0c5f69f01c7d051081e8404f883cd4b63e57506d461c3e3926b0e
f71411642bb2cdc1bb3da39d44d8f157bda0bdb853632174ec3796b9d3b500f5
f766d034b8579922fceb9a0c50b7ea7799ed6d9ed79acc8fcc08abea129b6f4a

The post Seamless campaign serves RIG EK via Punycode (updated) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 27 – December 03)

Malwarebytes - Mon, 12/04/2017 - 18:30

Last week on Labs, we touched on a huge macOS High Sierra vulnerability, a PayPal phish, and Terror EK’s new tactic. We also took a crack at identity theft protection services and drive-by cryptomining, and rounded up interesting talks from IRISSCON, a security conference in Ireland we attended.

Other news
  • Our friends at Zimperium investigated a fake WhatsApp on Google Play, and found that this app displays an advertisement of a malicious game called Cold Jewel Lines (already removed from the Play Store) that further infects users with a second malware “capable of click fraud, data extraction, and SMS surveillance.” (Source: SC Magazine)
  • A question to parents: Should you buy your child smart toys for Christmas? Security experts say that whatever your decision is, make sure you read up on the potential risks first. (Source: Help Net Security)
  • Facebook users, rejoice! The social media network now has a tool that tells you which posts you have liked that are mere propaganda from Russia. (Source: Facebook Newsroom)
  • Imgur confirmed that they have been breached for the second time, affecting 1.7 million users. Email addresses and passwords were compromised. (Source: Help Net Security)
  • Finally, the “revenge porn” bill is introduced in the Senate. (Source: TechCrunch)
  • Vice’s Motherboard released a guide to avoiding (passive and active) state surveillance, which can be a handy reference to those who want to achieve more privacy online. (Source: The Motherboard)
  • What do hotcakes and ransomware have in common? They’re both selling. (Source: Security Brief)
  • Fake Victoria’s Secret apps are found being advertised on the Dark Web, prompting security experts to posit that criminals may be targeting VS shoppers this Christmas season. (Source: The Telegraph)
  • Afraid of insider threats? According to NTT security, most of them happen by accident. (Source: Help Net Security)
  • Cryptocurrency is more popular than ever at this point. This, of course, has spurred the creation of cryptocurrency apps. Be warned, though: a majority of these popular apps do not protect user information. (Source: Kroddos)

Stay safe everyone!

The post A week in security (November 27 – December 03) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Yet another flaw in Apple’s “iamroot” bug fix

Malwarebytes - Mon, 12/04/2017 - 17:05

Last week, we discussed a particularly serious vulnerability, dubbed “iamroot,” in macOS 10.13 (High Sierra). To sum up, the vulnerability allows an attacker to gain access to the ultra-powerful root user on any Mac running macOS 10.13.0 or 10.13.1. Worse, the vulnerability can be exploited remotely if screen sharing is turned on.

Security Update 2017-001

In response to this, Apple quickly pushed a fix out the door in the form of Security Update 2017-001. Confusingly, Apple has already released a Security Update 2017-001 several times. It released two back in March, for macOS 10.10 and 10.11 (Yosemite and El Capitan). It released another in October for macOS 10.12 (Sierra). Yes, you read that right…Apple’s using the same name for unrelated security fixes for different versions of the system.

Okay, back to the fix. Security Update 2017-001—the one for High Sierra—was released to fix the “iamroot” bug, and it was released for both macOS 10.13.0 and 10.13.1. Because of the severity of the problem, this update was pushed out automatically. In other words, if you don’t install it right away, it will be installed automatically after a fairly short interval. It’s a quick install and doesn’t require a reboot.

Great, the bug is fixed. We can all go home now! Wait, what’s that you say, Bob? Something’s wrong with file sharing?

Apparently, Apple’s first attempt at fixing the bug had a problem. Specifically, it made file sharing stop working. So if you relied on being able to use file sharing to move files from one Mac to another, this update wreaked havoc with your workflow.

Security Update 2017-001 2.0

Apple quickly released a technical note about the file sharing problem, and how to fix it, and then re-issued Security Update 2017-001. Yup, that’s a second Security Update 2017-001, not Security Update 2017-002. Thus, people who had already installed Security Update 2017-001 found themselves wondering why they had to install it again. Fortunately, again, the update was automatic. So if you didn’t do it manually, your confusion wasn’t going to keep you from getting the update.

Great, now everything’s fixed. Time to go home for real. Sorry, Bob. You’re going to have to go back to your desk in IT. You’ve got another problem to deal with now. Take note of the fact that this second update increments the build number of macOS to 17B1003. This will be important later.

Remember how we said that Security Update 2017-001 could be applied to both macOS 10.13.0 and 10.13.1? What happens if you install the update on 10.13.0, then update later to 10.13.1? Turns out, it’s bad.

If you install Security Update 2017-001 on 10.13.0, then update to 10.13.1, the bug will re-emerge. The process of installing the 10.13.1 update undoes the fix that was applied by Security Update 2017-001, making the machine vulnerable to the “iamroot” bug again.

Of course, Security Update 2017-001 exists for 10.13.1 as well, and will be automatically re-applied sometime after the 10.13.1 update. Crisis averted, right? Bob, you’re really going to have to stop jumping out of your chair every couple paragraphs. I know you want to go home, but unfortunately, there’s still a problem.

You’ve upgraded from 10.13.0 to 10.13.1. You’ve installed Security Update 2017-001. And you’ve verified that you’re running macOS 10.13.1 build 17B1003 (which you can do by choosing About This Mac from the Apple menu and clicking the version number in the window that opens). Unfortunately, you are still vulnerable!

How to fix the problem

So, to sum up, what we’ve described above involves the following steps:

  1. Install Security Update 2017-001 on macOS 10.13.0
  2. Update to macOS 10.13.1
  3. Install Security Update 2017-001 again
  4. Verify that you’re running macOS 10.13.1 build 17B1003

After doing this, you might reasonably expect to be safe from “iamroot.” In reality, however, an attacker would still be able to trigger the original vulnerability and gain root access.

It turns out that restarting the computer will fix the problem, with no further changes needed. This raises the question, though: why is that necessary? After all, Security Update 2017-001 doesn’t normally require a restart. Why is one required in this specific case?

Since the update doesn’t require a restart, and since many Mac users can be rather averse to restarting, this means that people upgrading from 10.13.0 to 10.13.1 could easily end up being vulnerable to this bug for weeks or months, until they next decide to restart. Keep in mind that nearly all 10.13.0 users have probably already had Security Update 2017-001 installed automatically at this point, putting them into a pipeline heading straight for this issue.

Over the weekend, Apple released a fix and added a mention of the problem to their notes on Security Update 2017-001. Rather than releasing yet another iteration of Security Update 2017-001, Apple added a fix to the MRT application. MRT, which stands for Malware Removal Tool, is not something Apple talks about, and little is known about exactly how it works. It is known, though, that MRT is designed to remove malware that has already infected the system.

If you’re wondering why Apple would release a fix for this bug in MRT, that’s an excellent question. It doesn’t seem to make much sense and feels a bit like a hack to me. My guess is that MRT was something that could be easily and quietly updated, so that’s what they did.

A series of unfortunate events

Now, to be fair, Apple reacted to the original vulnerability quickly, and any rushed fix has a risk of problems. However, this isn’t an isolated incident, and it has Apple fans worried.

In October, there was a problem with certain authentication dialogs revealing the password in the place of the password hint. That has been followed up with the “iamroot” bug and an embarrassing series of buggy responses, which prompted Apple to issue a rare public apology. Now, the latest response to the bugs in the “iamroot” fixes feels more like a hack meant to go unnoticed than anything else.

On top of all that, this weekend, Apple released iOS 11.2 on a Saturday (which is unusual), apparently to address a problem that caused many iOS devices to start crashing on December 2 after 12:15 am. This suggests that the iOS 11.2 release was probably rushed, and time will tell whether there are any issues with the update as a result.

This has many worried about what’s going on at Apple these days. I’ve been using Apple products since 1984, and I’ll be honest: I’m worried about the future of the Mac. Admittedly, I’m not as worried as I was back in the 90s, when Apple was floundering and looked like it could go out of business. However, I do hope that Apple is able to put some of its vast resources to the task of improving the quality assurance process and reverse the worrying trend of increasing bugginess of macOS releases.

The post Yet another flaw in Apple’s “iamroot” bug fix appeared first on Malwarebytes Labs.

Categories: Techie Feeds

PayPal phish asks to verify transactions—don’t do it

Malwarebytes - Fri, 12/01/2017 - 19:35

There are a number of fake PayPal emails going around right now claiming that a recent transaction can’t be verified. If your response to this is, “What transaction?” read on. If your response to this is, “Oh no, not my recent transaction!” you should still read on. Why? Because scammers have both eyes and at least one virtual hand on your cash, assuming you follow their direction.

Here are two examples of the subject lines these emails arrive with, taken from one of our mailboxes:

Click to enlarge

[New Transaction Statements] we’re letting you know : We couldn’t verify your recent transactions
[New Activity Statements] [Account Hold] Re : Your payments processed cannot completed

Here’s the most recent email in question:

Click to enlarge

We couldn’t verify your recent transaction

Dear Client,

We just wanted to confirm that you’ve changed your password. If you didn’t make this change, please check information in here. It’s important that you let us know because it helps us prevent unauthorised persons from accessing the PayPal network and your account information.
We’ve noticed some changes to your unsual selling activities and will need some more information about your recent sales.

Verify Information Now
Thank you for your understanding and cooperation. If you need further assistance, please click Contact at the bottom of any PayPal page.

Sincerely,
PayPal

Clicking the button takes potential victims to a fake PayPal landing page, which tries very hard to direct them to a “resolution center.” The URL is:


Click to enlarge

ΡayΡaI is constantly working to ensure security by regularly screening the accounts in our system. We recently reviewed your account, To help us provide you with a secure service. We would like to return your account to regular standing as soon as possible. We apologise for the inconvenience.
Why is my account access limited? Your account access has been limited for the following reason(s)

December 1, 2017: We notice some unusualy activity on your PayPaI account. As a security precaution to protect your account until we have more details from you, we’ve place a limitation on your account
( Your case ID for this reason is PP-003-523- 280- 570 )
How can I help resolve the issue on my account? It’s usually easy to resolve issues like this. Most of the time, we just need a little more information about your account transactions
To help us resolve this issue, please log in to your account and go to the ResoIution Center to find out what information
You need to provide. We’ll review the information you provide and email you if we need more details.
Completing all the checklist items will automatically restore your account access.

From here, it’s a quick jump to two pages that ask for the following slices of personal information and payment data:

  1. Name, street address, city, state, zip, country, phone number, mother’s maiden name, and date of birth
  2. Credit card information (name, number, expiration date, security code)

Click to enlarge

Click to enlarge

Sadly, anyone submitting their information to this scam will have more to worry about than a fictional declined payment, and may well wander into the land of multiple actual not-declined-at-all payments instead. With a tactic such as the above, scammers are onto a winner—there’ll always be someone who panics and clicks through on a “payment failed” missive, just in case. It’s an especially sneaky tactic in the run up to December, as many people struggle to remember the who/what/when/where/why of their festive spending.

Whatever your particular spending circumstance, wean yourself away from clicking on any email link where claims of payment or requests for personal information are concerned. Take a few seconds to manually navigate to the website in question, and log in directly instead. If there are any payment hiccups happening behind the scenes, you can sort things out from there. Scammers are banking on the holiday rush combined with the convenience of “click link, do thing” to steal cash out from under your nose.

Make it an (early) New Year’s resolution to make things as difficult for the scammers as possible. You can report PayPal phishing attempts here. And if in doubt, at least delete the email.

The post PayPal phish asks to verify transactions—don’t do it appeared first on Malwarebytes Labs.

Categories: Techie Feeds

An IRISSCON 2018 roundup

Malwarebytes - Thu, 11/30/2017 - 13:00

Last week, some 400-plus attendees listened to a wide variety of infosec topics at the ninth annual IRISSCON, Ireland’s longest-running security event.

I already talked a fair bit about this one a few weeks back, so rather than repeat myself, I’ll let the videos do the talking. First up, the Keynote:

Next, a great and brutally honest talk by Quentyn Taylor about the day-to-day dealings of a CISO. Humorous and filled with truth bombs—What could be better?

Back in July, I gave my Mahkra ni Orroz talk at SteelCon, and I was lucky enough to give a retooled version for IRISSCON, now with revised slides, additional content, extra information, and a few of the wrinkles mentioned in this blog post ironed out.

If you want a good introduction to the world of physical security and social engineering your way into places you shouldn’t be, then this talk by FreakyClown will likely be just what the tailgating doctor ordered:

The tale of how Lee Munson got into security—and stayed there—is definitely worth checking out, especially as it shines a light on how many people with no security qualifications make valuable contributions to the industry.

A splash of human nature to go with your computer security, courtesy of Dr. Jessica Barker:

There are a few pieces of coverage for the event over on The Register [1], [2], [3], and you can catch the rest of the talks on the official IRISSCERT Youtube channel.

The post An IRISSCON 2018 roundup appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Persistent drive-by cryptomining coming to a browser near you

Malwarebytes - Wed, 11/29/2017 - 18:00

Since our last blog on drive-by cryptomining, we have been witnessing more and more cases of abuse involving the infamous Coinhive service, which allows websites to use their visitors’ CPUs to mine the Monero cryptocurrency. Servers continue to get hacked with mining code, and plugins get hijacked, affecting hundreds or even thousands of sites at once.

One of the major drawbacks of web-based cryptomining we mentioned in our paper was its ephemeral nature compared to persistent malware that can run a miner for as long as the computer remains infected. Indeed, when users close their browser, the cryptomining activity will also stop, thereby cutting out the perpetrators’ profit.

However, we have come across a technique that allows dubious website owners, or attackers who have compromised sites, to keep mining for Monero even after the browser window is closed. Our tests were conducted using the latest version of the Google Chrome browser; results may vary with other browsers. What we observed was the following:

  • A user visits a website, which silently loads cryptomining code.
  • CPU activity rises but is not maxed out.
  • The user leaves the site and closes the Chrome window.
  • CPU activity remains higher than normal as cryptomining continues.

The trick is that although the visible browser windows are closed, a hidden one remains open. This is due to a pop-under window sized to fit right under the taskbar, where it hides behind the clock. The hidden window’s coordinates vary with each user’s screen resolution, but follow this rule:

  • Horizontal position = ( current screen x resolution ) – 100
  • Vertical position = ( current screen y resolution ) – 40
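That placement rule can be sketched as a small helper (the function name is ours for illustration; in the actual attack the equivalent arithmetic runs in the page’s JavaScript against `screen.width` and `screen.height` before calling `window.open`):

```javascript
// Compute where the hidden pop-under lands for a given screen size.
// The fixed 100/40 offsets tuck the tiny window into the bottom-right
// corner, behind the taskbar and its clock.
function popUnderPosition(screenWidth, screenHeight) {
  return {
    left: screenWidth - 100,
    top: screenHeight - 40,
  };
}

// On a 1920x1080 display the window opens at (1820, 1040),
// just off the visible desktop area.
console.log(popUnderPosition(1920, 1080)); // { left: 1820, top: 1040 }
```

Because the offsets are constant while the screen size varies, the window ends up in the same hidden spot regardless of the victim’s resolution.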

If your Windows theme allows for taskbar transparency, you can catch a glimpse of the rogue window. Otherwise, you can expose it simply by resizing the taskbar, which will magically pop it back into view:

A look under the hood

This particular event was caught on an adult site that was already using aggressive advertising tricks. Looking at the network traffic, we can see where the rogue browser window came from and what it loaded.

The pop-under window (elthamely[.]com) is launched by the Ad Maven ad network (see previous post about bypassing adblockers), which in turn loads resources from Amazon (cloudfront[.]net). This is not the first cryptominer being hosted on AWS, but this one does things a little bit differently by retrieving a payload from yet another domain (

We notice some functions that come straight from the Coinhive documentation, such as .hasWASMSupport(), which checks whether the browser supports WebAssembly, a newer format that allows code to take full advantage of the hardware’s capability directly from the browser. If it doesn’t, the miner reverts to the slower JavaScript version (asm.js).
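A feature check along those lines can be reproduced in a few lines (a sketch of the general technique, not Coinhive’s exact code):

```javascript
// Detect whether this environment can run WebAssembly. Miners use
// such a check to pick the fast .wasm path over the asm.js fallback.
function hasWASMSupport() {
  return typeof WebAssembly === 'object' &&
         typeof WebAssembly.instantiate === 'function';
}

console.log(hasWASMSupport() ? 'use .wasm miner' : 'fall back to asm.js');
```

Every current major browser (and Node.js) exposes the global `WebAssembly` object, so in practice most victims end up on the faster native path.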

The WebAssembly module (.wasm) is downloaded from hatevery[.]info and contains references to CryptoNight, the proof-of-work algorithm used to mine Monero. As mentioned above, the mining is being throttled to have a moderate impact on users’ machines so that it stays under the radar.


This type of pop-under is designed to bypass adblockers and is a lot harder to identify because of how cleverly it hides itself. Closing the browser using the “X” is no longer sufficient. The more technical users will want to run Task Manager to ensure there are no leftover browser processes running and terminate them. Alternatively, the taskbar will still show the browser’s icon with slight highlighting, indicating that the browser is still running.

More abuse on the horizon

Nearly two months after Coinhive’s inception, browser-based cryptomining remains highly popular, but for all the wrong reasons. Forced mining (no opt-in) is a bad practice, and any tricks like the one detailed in this blog are only going to erode any confidence some might have had in mining as an ad replacement. History shows us that trying to get rid of ads failed before, but only time will tell if this will be any different.

Unscrupulous website owners and miscreants alike will no doubt continue to seek ways to deliver drive-by mining, and users will try to fight back by downloading more adblockers, extensions, and other tools to protect themselves. If malvertising wasn’t bad enough as is, now it has a new weapon that works on all platforms and browsers.

Indicators of compromise

  • yourporn[.]sexy: Adult site
  • elthamely[.]com: Ad Maven popunder
  • d3iz6lralvg77g[.]: Advertiser's launchpad
  • hatevery[.]info: Cryptomining site

Cryptonight WebAssembly module:


The post Persistent drive-by cryptomining coming to a browser near you appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Serious macOS vulnerability exposes the root user

Malwarebytes - Wed, 11/29/2017 - 16:00

Update: 9:29 am PT: Apple has now released a fix for the bug described here. That fix is part of Security Update 2017-001, which is available from the Mac App Store, in the Updates tab, with the label “Install this update as soon as possible.” (Somewhat confusingly, there have already been previous Security Update 2017-001 releases, for unrelated issues, for Sierra, El Capitan and Yosemite.) This update should be installed as soon as possible, and does not require a restart.

On Tuesday afternoon, a tweet about a vulnerability in macOS High Sierra set off a firestorm of commentary throughout the Twitterverse and elsewhere.

It turns out that the issue in question works with any authentication dialog in High Sierra. For example, in any pane in System Preferences, click the padlock icon to unlock it and an authentication dialog will appear. Similarly, if you try to move a file into a folder you don’t have access to, you’ll be asked to authenticate:

Enter “root” as the username, and leave the password field blank. This may work on the first try, but more likely you’ll have to repeat it two or more times before it succeeds.

When the authentication window disappears, whatever action you were attempting will be done, without any password required.

Let’s take a step back for just a moment and consider what this means. On a Unix system, such as macOS, there is one user to rule them all. (One user to find them. One user to bring them all and in the darkness bind them. /end obligatory nerdy Lord of the Rings reference)

That user is the “root” user. The root user is given the power to change anything on the system. There are some exceptions to that on recent versions of macOS, but even so, the root user is the single most powerful user with more control over the system than any other.

Being able to authenticate as the root user without a password is serious, but unfortunately, the problem gets worse. After this bug has been triggered, it turns out you can do anything as root on the first try, without a password.

The root user, which has no password by default, is normally disabled. While the root user is disabled, it should not be possible for anyone to log in as root. This is how macOS has worked since day one, and it has never been an issue before, but this vulnerability causes the root user to become enabled… with no password.

Unfortunately, this means that anyone will be able to log into your Mac using user “root” and no password!

Note that this does not require that the login window be set to always ask for a username and password. If you have it set to display a list of user icons instead, after triggering this vulnerability, there will be an “Other…” icon that will be present on the login screen. Clicking that will allow you to manually enter “root” with no password.

Remote access

This bug does not appear to be exploitable through some of the remote access services that can be enabled in the Sharing pane of System Preferences. Remote Login, which enables access via SSH, does not appear to be exploitable in our testing, nor does File Sharing. Even after triggering the bug and, thus, enabling the root user with no password, we were not able to connect to the vulnerable Mac through these methods.

Unfortunately, it looks like Screen Sharing, which allows you to view and remotely control the screen of your Mac, is vulnerable to this bug. In fact, it can actually be used to trigger this bug, without needing to rely on the root user already having been enabled!

In the screen sharing authentication window on a remote Mac, the same technique can be used. We were able to connect via screen sharing, using “root” as the username and no password, on the second attempt. At that point, the root user was enabled on the remote Mac, and we were able to log in to the root account via screen sharing without any blatant indication that we were doing so appearing on the screen shown to the logged in user on the target Mac. (An icon does appear in the menu bar on the target Mac, but it is not immediately obvious what that icon means. The average user will likely never notice the new icon.)

Unforeseen consequences

Once someone is logged into your Mac as root, they can do whatever they want: access your files, install spyware, you name it. In other words, if you were to leave your Mac unattended for 30 seconds, someone could backdoor it and have a very powerful way in later.

Suppose that you are Suzy, an average office worker in a cubicle farm. You step away from your desk for a moment to grab a cup of coffee. You’ll only be gone for about a minute, and don’t bother locking your screen. While you’re gone, Bob from the next cubicle comes over and “roots” your computer.

Later, you go to lunch. You’re gone for an hour, and Bob knows this because he’s familiar with your routine. He uses the root user to log into your Mac and install spyware—perhaps something to peep through the webcam, hoping to catch you in a compromising position later on when you’ve taken your MacBook Pro home with you.

Of course, all that’s even easier if you have screen sharing turned on, and he can install the spyware remotely, without ever touching your Mac.

Creeped out yet?

Fortunately, if you have your Mac’s hard drive encrypted with FileVault, this will prevent the attacker from having a persistent backdoor. In order to log in, the attacker would have to know the password that will unlock FileVault. Not even the all-powerful root user can access an encrypted FileVault drive without the password.

It’s also worth pointing out that a well-prepared attacker with access to your unlocked Mac could install spyware in less than a minute without relying on this vulnerability and without needing an admin password of any kind (depending on what the spyware does). Some spyware can be installed with normal user privileges.

Further, with a longer interval of unsupervised physical access to any Mac that doesn’t have FileVault turned on, an attacker can install spyware of any kind without needing an admin password.

Avoiding an attack using this vulnerability is actually fairly trivial. Just turn on FileVault, and always lock your Mac’s screen or log out when you’re away from it. While you’re at it, set a firmware password. And, to prevent remote access, turn off all services in System Preferences -> Sharing as a precaution.

Still, this is a very serious vulnerability, which Apple needs to address as quickly as possible. We contacted Apple for comment but, as of this writing, had not heard back.

Undoing the damage

If you, like many, have tried this out on your own Mac, you’ve opened up a potential backdoor. Fortunately, closing that door isn’t particularly hard, if you know the door is there and that it’s open.

First, open the Directory Utility application. It’s buried deep in the system where it’s hard to find, but there’s an easy way to open it. Just use Spotlight. Click the magnifying glass icon at the right side of the menu bar, or press command-space, to invoke Spotlight. Then start typing Directory Utility in the search window. Once the application is found, simply double-click it in the list to open it. (Or, even easier, press return once it’s selected in the search results.)

Once Directory Utility opens, click the lock icon in the bottom left corner of the window to unlock it. Then, pull down the Edit menu.

If you see an item reading Enable Root User, as shown in the screenshot above, you’re good. Whatever you did, the root user wasn’t enabled. Quit Directory Utility, and go about your business.

If, instead, you see an item reading Disable Root User, choose that. The root user will be disabled again, as it should be, and it will no longer be possible to log in as the root user from the login screen. Just be aware that this does nothing to protect against the vulnerability, so the root user could easily be enabled again.

Be sure to take the other measures described above to secure your system against unauthorized physical access. Namely, turn on FileVault, always lock your Mac’s screen or log out when you’re away from it, set a firmware password, and turn off all services in System Preferences -> Sharing.

The post Serious macOS vulnerability exposes the root user appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Please don’t buy this: identity theft protection services

Malwarebytes - Tue, 11/28/2017 - 17:31

With an ever-increasing tempo of third-party breaches spilling consumer data all across the dark web, a natural impulse for a security-savvy user is to do something proactive to protect their sensitive information. After Equifax, there was an explosion of interest in credit monitoring and identity theft protection services. But most of these services offer limited value for the money, and in many cases, are subsidiaries of entities prone to leaking information in the first place. Sometimes doing something isn’t the best option.

What do they do?

Before we get into the problems with identity theft protection services, let’s break down what these services actually offer, and in exchange for what. Identity protection services usually start by collecting your personal information, including the following:

  • your birthdate
  • your social security number
  • your address
  • your email address(es)
  • your phone number(s)

A company like Lifelock would then use “proprietary technology that searches for a wide range of threats to your identity.” (Sidenote: Subsuming an entire discussion of one’s product under “technology that searches” is usually a red flag, albeit a small one.) If any threats are found, they will notify you and provide some handholding to rectify the situation. In addition, they offer an insurance policy that provides reimbursement of any monetary losses. Starting price for these services runs around $109 per year.

IdentityWorks is another service run by one of the major credit bureaus, Experian. IdentityWorks has an introductory product for $9.99 per month that offers credit monitoring, a credit lock (something different from a freeze), identity theft insurance, and a customer service line for fraud resolution.

IdentityForce tends to be ranked higher in comparison to other services. They provide credit monitoring, bank account monitoring (not found in most other products), change of address monitoring, court record monitoring, as well as general personal information protection. Their recovery services are mostly the same, though, including a customer service line for fraud resolution, identity theft insurance, and stolen funds replacement of up to $1 million, depending on where you live. Standard cost is $17.95 per month.

Why shouldn’t I buy it?

Brian Krebs, a security researcher who’s arguably one of the biggest public targets for identity theft and financial crime, wrote a blog post on credit monitoring services, stating that while some of these and other ID protection services are helpful for those who’ve already been snaked by ID thieves, they don’t do much to prevent the crime from happening in the first place.

Searching the darknet for your personal information is something advertised by almost all of these companies. What they don’t disclose is that a darknet site is almost always hosted on a “bulletproof” hosting service that will not respond to takedown requests or legal threats. So while essentially anybody can fire up the Tor Browser and find your Social Security number on a dark web site, almost nobody (including those running ID protection services) can actually do anything about it. All they can do is alert you.

Our big issue with paying for an identity theft protection service—besides the fact that the service doesn’t actually protect against identity theft—is that the insurance you would be forking out for is coverage most users already have under Visa and Mastercard zero liability rules. Another issue is the narrow focus on credit, typically to the exclusion of bank accounts, mortgage loans, and tax fraud. Lastly, account application notifications can’t actually prevent creditors from doing a “hard pull” on your credit, which dings your credit score.

Who else is looking at your data?

Somewhat more concerning is the lack of transparency concerning where these companies draw their data for analysis and alerting. Lifelock, in particular, outsources its credit monitoring services to… Equifax. In September of this year, the LA Times reported on the relationship between Lifelock and Equifax, noting that in some instances, purchasing services would require the end user to give Equifax more information than it would otherwise have.

Does anyone, anywhere, want to give more personal data to Equifax?

How many competing companies also rely on the credit bureaus for monitoring services? While Equifax was the loudest and most recent breach in memory, odds are good that the other credit bureaus operating on an identical business model have identical security practices. As a reminder, Experian offers its own service, IdentityWorks, backed by data services it does not disclose and personal information you did not consent to give.

Beyond the red flags above, there are some more ambiguous questions regarding these services that users should evaluate before purchase. For example: Is it a responsible threat model to protect against third-party data breaches by handing over even more data to a third party? Doesn’t that create arguably the biggest online target in the world?

And looking at the problem from another angle: If the biggest players in the industry rely on agreements with credit bureaus to do at least a portion of their monitoring, why aren’t the bureaus doing this for all of us? Given that Transunion, Equifax, and Experian took it upon themselves to collect our financial data without consent, don’t they have a responsibility to protect it with industry standard best practices? As a reminder, Equifax was not breached by an arcane APT attack. They were breached by negligence.


Identity theft monitoring services sound great on the surface. They’re not that expensive and seem to provide peace of mind against an avalanche of ever-more damaging breaches. But they don’t, at present, protect against the worst impacts of identity theft—the theft itself. Instead, they duplicate free services and, worst of all, let the credit bureaus off the hook for improving their security.

Please don’t buy this. Instead, you can stay relatively safe by learning about credit freezes and other steps to take in order to protect your identity when data is stolen or tax fraud is committed.

The post Please don’t buy this: identity theft protection services appeared first on Malwarebytes Labs.


Terror exploit kit goes HTTPS all the way

Malwarebytes - Mon, 11/27/2017 - 20:00

We’ve been following the Terror exploit kit during the past few months and observed notable changes in both its redirection mechanism and infrastructure, which have made capturing it in the wild a more challenging task.

Unlike the RIG exploit kit, which uses predictable URI patterns and distribution channels, Terror EK is constantly attempting to evade detection by using malvertising chains without any static upper referrers (at least to our knowledge) combined with multi-step filtering in some cases, as well as HTTPS throughout the delivery sequence.

Traffic redirection

We’ve noticed consistent malvertising incidents via the Propeller Ads Media ad network, followed by the advertiser’s campaign, which we were able to recognize through URI patterns and other identifying creative choices. Ultimately, the ad redirected to the exploit kit’s first check-in page, which acts as both a decoy and launchpad.

Over time, the threat actors behind Terror have been trying to hide the call to the exploit kit. In one example, they created overly long URLs and used obfuscation to mask their iframe. Interestingly, in other sequences, we witnessed an additional type of filtering that uses unique subdomains. The user is first taken to a page whose current theme is cheap flights and hotels, containing what looks like an affiliate link to the travel site.

But the main point of focus here is the additional invisible iframe, created with a unique 15-digit subdomain and refreshed for each new visit: ...
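
Filtering of this kind leaves a recognizable trace in proxy or DNS logs: hostnames whose leftmost label is a long, all-digit string that never repeats. As a hypothetical detection sketch (the 15-digit length comes from the campaign described above; the pattern, function name, and length tolerance are our own assumptions, not part of Terror EK itself), a defender might flag such hostnames like this:

```python
import re

# Flag hostnames whose leftmost label is a long all-digit string, as seen in
# Terror EK's per-visit iframe subdomains. The length range is deliberately
# loose, since real campaigns may not use exactly 15 digits.
NUMERIC_SUBDOMAIN = re.compile(r"^\d{12,18}\.[a-z0-9.-]+$", re.IGNORECASE)

def looks_like_fluxed_iframe_host(hostname: str) -> bool:
    """Return True if the hostname matches the randomized-subdomain pattern."""
    return NUMERIC_SUBDOMAIN.match(hostname) is not None
```

A rule this simple will produce false positives on its own and would only be one signal among several, but it illustrates why per-visit randomized subdomains stand out in logs even when the parent domain changes.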

This iframe is what creates the final call to the exploit kit landing page. We believe this setup may be to prevent replays that attempt to step over the normal redirection flow, although it was only used for a short period of time.

HTTPS all the things

In late August 2017, we saw Terror EK make an attempt at HTTPS by using free SSL certificates, although it kept switching back and forth between HTTP and HTTPS. At times, there also seemed to be problems with domains that had the wrong certificate:

However, in recent days we’ve observed a constant use of SSL, not only for the exploit kit itself but also at the upper redirection stage.

This is what the traffic looks like using a customized version of the Fiddler web debugger set up as a man-in-the-middle proxy:

Without using a MITM proxy, network administrators will see the SSL handshake with the corresponding server’s IP address, but not the full URIs or content being sent:

Terror EK is one of the few exploit kits to have used SSL encryption this year; the other well-documented one is Astrum EK, used in large malvertising attacks by the AdGholas group. Also, unlike RIG EK, which appears to have permanently switched to IP-literal URIs after operation ShadowFall, Terror makes full use of domains on new or abused TLDs.
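
The RIG-versus-Terror distinction—bare IP addresses versus throwaway domains—is easy to sort on programmatically. A minimal sketch using only the Python standard library (the function name is our own; this is an illustration of the distinction, not a tool used by either kit):

```python
import ipaddress
from urllib.parse import urlsplit

def is_ip_literal_url(url: str) -> bool:
    """Return True if the URL's host is a bare IP address (RIG-style),
    False if it is a domain name (Terror-style)."""
    host = urlsplit(url).hostname or ""
    try:
        ipaddress.ip_address(host)  # raises ValueError for domain names
        return True
    except ValueError:
        return False
```

For example, `is_ip_literal_url("http://203.0.113.5/index.php")` returns True, while a domain-hosted URL returns False. Sorting captured traffic this way is one quick heuristic for separating the two kits' infrastructure styles in logs.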

As usual, Terror EK is dropping Smoke Loader, which in turn downloads several more payloads, likely to generate a lot of noise on the network:


Despite no significant advances in integrating more powerful vulnerabilities, exploit kit authors are nonetheless still leveraging malvertising as their primary distribution method and attempting to evade detection by the security community, which they monitor closely.

In light of these new challenges, security defenders must also understand the malicious techniques that are used by threat actors in order to adapt their tools and procedures and keep tracking the new campaigns taking place.

Indicators of compromise

Terror EK-related IP addresses and domains:

SSL certificates:

CN=Let's Encrypt Authority X3, O=Let's Encrypt, C=US

Serial numbers:
03C5BC64ED4CB1331212750F0ECBF7D2EB4E
0337D982AFCC25063A91502A482AAB39A559

Thumbprints:
73FDC41268FC8B53D37D66BF63FDF71FDF111803
60ADD6955D23029A571BE7F0079C941631CAD32F
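
Certificate thumbprints like the ones in the IOC list above are conventionally the SHA-1 hash of the DER-encoded certificate, which is how most tools (Fiddler included) display them. As a hedged sketch of how a defender might match an observed certificate against this list (the function names are our own, and the input bytes in practice would come from something like `ssl.SSLSocket.getpeercert(binary_form=True)`):

```python
import hashlib

# SHA-1 thumbprints from the IOC list above.
KNOWN_BAD_THUMBPRINTS = {
    "73FDC41268FC8B53D37D66BF63FDF71FDF111803",
    "60ADD6955D23029A571BE7F0079C941631CAD32F",
}

def thumbprint(der_bytes: bytes) -> str:
    """SHA-1 thumbprint of a DER-encoded certificate, upper-case hex."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

def is_known_bad(der_bytes: bytes) -> bool:
    """Check a certificate's thumbprint against the IOC list."""
    return thumbprint(der_bytes) in KNOWN_BAD_THUMBPRINTS
```

Because the certificates are free Let's Encrypt issuances, the issuer CN alone is useless as an indicator; matching on the per-certificate thumbprint or serial is what makes the IOC actionable.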


Smoke Loader


Other drops:

3579870858e68d317bb907b6362d956a80f3973c823021d452a077fd90719cdf
99d6c4830605ed61e444c002193da4efe3bc7d015ad230624a2c9aae81982740
a8a8b5ed76019c17add5101b157ab9c288a709a323d8c12dbae934c7ec6e1d14

The post Terror exploit kit goes HTTPS all the way appeared first on Malwarebytes Labs.
