Techie Feeds

Malwarebytes helps take down massive ad fraud botnets

Malwarebytes - Wed, 11/28/2018 - 14:00

On November 27, the US Department of Justice announced the indictment of eight individuals involved in a major ad fraud case that cost digital advertisers millions of dollars. The operation, dubbed 3ve, was the combination of the Boaxxe and Kovter botnets, which the FBI—in collaboration with researchers in the private sector, including one of our own at Malwarebytes—was able to dismantle.

The US-CERT advisory indicates that 3ve controlled over 1.7 million unique IP addresses across Boaxxe and Kovter at any given time. Threat actors rely on different tactics to generate fake traffic and clicks, but one of the most common is to infect legitimate computers and have them silently mimic a typical user’s behavior. By doing so, fraudsters can generate millions of dollars in revenue while eroding trust in the online advertising business.

This criminal enterprise was quite sophisticated, employing many evasion techniques that made it difficult not only to detect the presence of ad fraud, but also to clean up affected systems. Kovter in particular is a unique piece of malware that goes to great lengths to avoid detection and even trick analysts. Its fileless persistence mechanism has also made it more challenging to disable.

Malwarebytes, along with several other companies, including Google, Proofpoint, and ad fraud detection company White Ops, was involved in the global investigation into these ad fraud botnets. We worked with our colleagues at White Ops, sharing our intelligence and samples of the Kovter malware. We were happy to be able to leverage our telemetry, which proved to be valuable for others to act upon.

Even though cybercriminal enterprises can get pretty sophisticated, this successful operation proves that concerted efforts between both the public and private sectors can defeat them and bring perpetrators to justice.

The full report on 3ve, co-authored by Google and White Ops, with technical contributions from Proofpoint and others, can be downloaded here.

The post Malwarebytes helps take down massive ad fraud botnets appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Why Malwarebytes decided to participate in AV testing

Malwarebytes - Tue, 11/27/2018 - 22:44

Starting this month, Malwarebytes began participating in AV-Test.org’s comparison test of antivirus software for Windows. This is uncharted territory for us, as we have refrained from participating in these types of tests since our inception. Although recent testing results show Malwarebytes protecting against more than 97 percent of web vector threats and detecting and removing 99.5 percent of malware during a scan on any machine, we still maintain reservations about the entire testing process.

Why participate now?

In the past, we’ve avoided AV comparison tests because we felt their methods did not allow us to demonstrate how our product works in a real environment. By testing only a small portion of our product’s technologies, AV comparison tests are often unable to replicate Malwarebytes’ overall effectiveness. However, we understand the importance of independent reviews for those considering a Malwarebytes purchase, so we decided to participate.

Malwarebytes is not a traditional antivirus, and detecting files based on signatures—which is what the testing companies review—is only one of the methods we use to protect our customers from threats. We probably never will be the best performer in this category; it simply isn’t our focus. We mostly rely on other methods, such as hardening, application behavior, and vector blocking defenses that disrupt malware earlier in the attack chain.

What did the test miss?

Some of our best technologies block malware before it has the chance to execute. Our application behavior and web protection modules, for example, stop threats earlier in the attack—at the point of delivery instead of the point of execution. However, the URLs tested only represent the final stage of an attack (i.e. the URL pointing to the final payload EXE).

In addition, testers often do not replicate the original infection vector used by malware campaigns, such as malspam, exploits, or redirects. Instead, they download the malware directly, bypassing typical delivery methods. By doing this, they’re controlling the environment, but also missing out on the trigger for many of our detections.

What exactly is checked in these monthly AV-Test.org tests?
  • Detections (specifications)
    • Detection of URLs pointing directly to malware EXEs (i.e. “web and email threats” test)
    • On-demand scan of a directory full of malware EXEs (i.e. “widespread and prevalent malware” test)
  • Performance impact, such as browsing slowdown, application load slowdown, slowdown of file copy operations, etc.
  • Usability test, with focus on false positives

More information about the test procedures can be found at AV-Test.org.

Unsolicited tests

A number of times in the past, Malwarebytes has been included in tests that we were not aware of or in which we didn’t choose to participate. Some even compared our free, limited scanner against fully functional AVs. No surprises there: while the other vendors may have scored higher in their detections, our free scanner still outperformed them in remediation and removal.

Change the tests

If the tests miss out on our best protection modules, you would expect us to try and change the testing methods altogether, right? We did look into this, and it’s not entirely off the table. We feel confident that using live malware or duplicating real-life attacks would demonstrate our strengths, but these conditions are hard to replicate in a controlled and equal testing environment.

What we would like to see is a test for zero-day effectiveness, not a test based on relatively old samples and infection vectors. But again, we also understand that this is hard to achieve for a testing organization that needs to keep some control over the environment in order to create a level playing field.

When and where can we expect to see your test results?

As of November 27, 2018, AV-Test.org will include results for our flagship consumer product, Malwarebytes for Windows versions 3.5 and 3.6. AV-Test.org publishes their results publicly every two months. The November 2018 results are the summary of tests performed during September and October. Our participation is only in the “Windows Antivirus” test for home users.

We still do not believe in the “pay-to-play” model, and especially the “pay-to-see-what-you-missed” model that some organizations use. (AV companies, for an additional fee, can see the samples they did not catch in the test and develop fixes in the product for future tests/use.) Nonetheless, we want to give our customers some idea of what we are capable of, even when the playing field is skewed.

We would just like you to keep in mind that, when reviewing our scores, these tests only show part of the whole picture. Many of our best protection modules have been left out of the test entirely—which basically misses what Malwarebytes is truly capable of.

So what would you rather have: a product that does well on AV tests, or a product that detects, blocks, and cleans up threats in the real world?

The post Why Malwarebytes decided to participate in AV testing appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Malwarebytes’ 2019 security predictions

Malwarebytes - Tue, 11/27/2018 - 16:00

Every year, we at Malwarebytes Labs like to stare into our crystal ball and foretell the future of malware.

Okay, maybe we don’t have a crystal ball, but we do have years and years of experience in observing trends and sensing shifts in patterns. When it comes to security, though, we can only know so much. For example, we guarantee there’ll be some kind of development that we had zero indication would occur. We also can pretty much assure you that data breaches will keep happening—just as the sun rises and sets.

And while all hope is for a malware-free 2019, the reality will likely look a little more like this:

New, high-profile breaches will push the security industry to finally solve the username/password problem. The ineffective username/password authentication scheme has plagued consumers and businesses for years. There are many solutions out there—asymmetric cryptography, biometrics, blockchain, hardware solutions, etc.—but so far, the cybersecurity industry has not been able to settle on a standard to fix the problem. In 2019, we will see a more concerted effort to replace passwords altogether.

IoT botnets will come to a device near you. In the second half of 2018, we saw several thousand MikroTik routers hacked to serve up coin miners. This is only the beginning of what we will likely see in the new year, with more and more hardware devices being compromised to serve up everything from cryptominers to Trojans. Large-scale compromises of routers and IoT devices are going to take place, and they are a lot harder to patch than computers. And patching alone does not fix the problem if the device is already infected.

Digital skimming will increase in frequency and sophistication. Cybercriminals are going after websites that process payments and compromising the checkout page directly. Whether you are purchasing roller skates or concert tickets, if the shopping cart software has been compromised, the information you enter on the checkout page is sent in clear text, allowing attackers to intercept it in real time. Security companies saw evidence of this with the British Airways and Ticketmaster hacks.

Microsoft Edge will be a prime target for new zero-day attacks and exploit kits. As users transition away from IE, Microsoft Edge is gaining more market share, and we expect to see more mainstream Edge exploits targeting this next-generation browser. Firefox and Chrome have done a lot to shore up their own technology, making Edge the next big target.

EternalBlue or a copycat will become the de facto method for spreading malware in 2019. Because they can self-propagate, EternalBlue and other exploits targeting the SMB vulnerability present a particular challenge for organizations, and cybercriminals will exploit this to distribute new malware.

Cryptomining on desktops, at least on the consumer side, will just about die. Again, as we saw in October 2018 with MikroTik routers being hacked to serve up miners, cybercriminals just aren’t getting value out of targeting individual consumers with cryptominers. Instead, attacks distributing cryptominers will focus on platforms that can generate more revenue (servers, IoT) and will fade from other platforms (browser-based mining).

Attacks designed to avoid detection, like soundloggers, will slip into the wild. Keyloggers that record sounds are sometimes called soundloggers, and they are able to listen to the cadence and volume of tapping to determine which keys are struck on a keyboard. Already in existence, this type of attack was developed by nation-state actors to target adversaries. Attacks using this and other new methodologies designed to avoid detection are likely to slip out into the wild against businesses and the general public.

Artificial Intelligence will be used in the creation of malicious executables. While the idea of having malicious AI running on a victim’s system is pure science fiction, at least for the next 10 years, malware that is modified by, created by, and communicating with an AI is a dangerous reality. An AI that communicates with compromised computers and monitors which malware is detected, and how, can quickly deploy countermeasures. AI controllers will enable malware built to modify its own code to avoid being detected on the system, regardless of the security tool deployed. Imagine a malware infection that acts almost like “The Borg” from Star Trek, adjusting and acclimating its attack and defense methods on the fly based on what it is up against.

Bring your own security grows as trust declines. More and more consumers are bringing their own security to the workplace as a first or second layer of defense to protect their personal information. Malwarebytes recently conducted global research and found that nearly 200,000 companies had a consumer version of Malwarebytes installed. Education was the industry most prone to adopting BYOS, followed by software/technology and business services. 

The post Malwarebytes’ 2019 security predictions appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 19 – 25)

Malwarebytes - Mon, 11/26/2018 - 18:21

Last week on Malwarebytes Labs, we took a look at a devastating business email compromise attack, web skimming antics, and the fresh perils of Deepfakes. We also checked out some Chrome bug issues, and took the deepest of deep dives into DNA testing.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (November 19 – 25) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Spoofed addresses and anonymous sending: new Gmail bugs make for easy pickings

Malwarebytes - Wed, 11/21/2018 - 17:53

Tim Cotten, a software developer from Washington, DC, was responding last week to a request for help from a female colleague who believed that her Gmail account had been hacked, when he discovered something phishy. The evidence presented was several emails in her Sent folder, purportedly sent by her to herself.

Cotten was stunned when, upon initial diagnosis, he found that those sent emails didn’t come from her account but from another one, which Gmail—being the organized email service that it is—had simply filed away in her Sent folder. Why would it do that if the email wasn’t from her? It seems that while Google’s filtering and organizing technology worked perfectly, something went wrong when Gmail tried to process the emails’ From fields.

This trick is a treat for phishers

Cotten noted in a blog post that the From header of the emails in his coworker’s Sent folder contained (1) the recipient’s email address and (2) additional text—usually a name, possibly for increased believability. The presence of the recipient’s address caused Gmail to move the email to the Sent folder while also disregarding the email address of the actual sender.

Weird “From” header. Screenshot by Tim Cotten, emphasis (in purple) ours.
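To make the trick concrete, here is a minimal sketch, with entirely hypothetical addresses, of how a From header of this shape can be constructed in Python. The recipient’s own address sits inside the quoted display name, while the real sender hides in the actual address part; any client or filter that keys on the first address-like token it sees would misattribute the message.

```python
from email.message import EmailMessage

# Hypothetical reconstruction of the malformed header described above:
# the recipient's address appears inside the quoted display-name portion,
# while the real sending address sits in the actual address field.
msg = EmailMessage()
msg["From"] = '"Colleague Name <victim@gmail.com>" <attacker@example.com>'
msg["To"] = "victim@gmail.com"
msg["Subject"] = "Re: those links from last week"
msg.set_content("Here are the links we discussed.")

# A filter that naively grabs the first address-like token from the From
# header would attribute this message to victim@gmail.com and file it
# under Sent, which is the behavior described above.
print(msg["From"])
```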

Why would a cybercriminal craft an email that never ends up in a victim’s inbox? This tactic is particularly useful for a phishing campaign that banks on the recipient’s confusion.

“Imagine, for instance, the scenario where a custom email could be crafted that mimics previous emails the sender has legitimately sent out containing various links. A person might, when wanting to remember what the links were, go back into their sent folder to find an example: disaster!” wrote Cotten.

Cotten provided a demo for Bleeping Computer wherein he showed a potentially malicious sender spoofing the From field by displaying a different name to the recipient. This could claim a high number of victims if used in a business email compromise (BEC)/CEO fraud campaign, they noted.

After raising an alert about this bug, Cotten unknowingly opened the floodgates for other security researchers to come forward with their discovered Gmail bugs. Eli Grey, for example, shared the discovery of a bug in 2017 that allowed for email spoofing, which has been fixed in the web version of Gmail but remains a flaw in the Android version. One forum commenter claimed that the iOS Mail app also suffers from the same glitch.

Another one stirs the dust

Days after publicly revealing the Gmail bug, Cotten discovered another flaw wherein malicious actors can potentially hide sender details in the From header by forcing Gmail to display a completely blank field.

Who’s the sender? Screenshot by Tim Cotten, emphasis (in purple) ours.

He pulled this off by replacing a portion of his test case with a long and arbitrary code string, as you can see below:

The string. Screenshot from Tim Cotten, emphasis (in purple) ours.

Average Gmail users may struggle to reveal the true sender because clicking the Reply button and the “Show original” option still yields a blank field.

The sender with no name. Screenshot by Tim Cotten, emphasis (in purple) ours.

There’s nothing there! Screenshot by Tim Cotten, emphasis (in purple) ours.

Missing sender details could increase the likelihood of users opening a malicious email and clicking an embedded link or opening an attachment, especially if it carries a subject line that is both actionable and urgent.

When met with silence

The Gmail vulnerabilities mentioned in this post are all related to user experience (UX), and as of this writing, Google has yet to address them. (Cotten has proposed a possible solution for the tech juggernaut.) Unfortunately, Gmail users can only wait for the fixes.

Spotting phishing attempts or spoofed emails can be tricky, especially when cybercriminals are able to penetrate trusted sources, but a little vigilance can go a long, long way.

The post Spoofed addresses and anonymous sending: new Gmail bugs make for easy pickings appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Are Deepfakes coming to a scam near you?

Malwarebytes - Wed, 11/21/2018 - 16:00

Your boss contacts you over Skype. You see her face and hear her voice, asking you to transfer a considerable amount of money to a firm you’ve never ever heard of. Would you ask for written confirmation of her orders? Or would you simply follow through on her instructions?

I would certainly be taken aback by such a request, but then again, this is not anywhere near a normal transaction for me and my boss. But, given the success rate of CEO fraud (which is a lot less convincing), threat actors would only need to find the right person to contact to successfully fool employees into sending the money.

Imagine the success rate of CEO fraud if the scam artists were able to actually replicate your boss’ face and voice in such a Skype call. Using Deepfake techniques, they may reach that level in the not-too-distant future.

What is Deepfake?

The word “Deepfake” was created by mashing “deep learning” and “fake” together. It is a method of creating human images based on artificial intelligence (AI). Simply put, creators feed a computer data consisting of a lot of facial expressions of a person and find someone who can imitate that person’s voice. The AI algorithm is then able to match the mouth and face to synchronize with the spoken words. All this results in a near-perfect “lip sync” with the matching face and voice.

Compared against the old Photoshop techniques to create fake evidence, this would qualify as “videoshop 3.0.”

Where did it come from?

The first commotion about this technique arose when a Reddit user by the handle DeepFakes posted explicit videos of celebrities that looked realistic. He generated these videos by replacing the original pornographic actors’ faces with those of the celebrities. Because they were generated with deep learning, these “face swaps” were nearly impossible to detect.

DeepFakes posted the code he used to create these videos on GitHub, and soon enough a lot of people were learning how to create their own videos, finding new use cases as they went along. Forums about Deepfakes were immensely popular, which criminals immediately capitalized on: at one point, a user-friendly version of the Deepfake technology was bundled with a cryptominer.

The technology

Deepfake effects are achieved by using a deep learning architecture called an autoencoder. Input is compressed, or encoded, into a small representation, which a decoder can then use to reproduce the original input so that it matches previous images in the same context (here, video). Creators need enough relevant data to achieve this, though. To create a Deepfake image, the producer reproduces face B while using face A as input. So, while the owner of face A is talking on the caller side of the Skype call, the receiver sees face B making the movements. The receiver will observe the call as if B were the one doing the talking.
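For readers who want to see the core idea in code, below is a toy linear autoencoder in Python with NumPy: it learns to squeeze an input into a small code and reconstruct it, which is the principle (though nowhere near the scale) of the face models described above. Face-swap tools built on this idea typically train one shared encoder with a separate decoder per face, so encoding footage of face A and decoding with face B’s decoder produces the swap. Everything here is an illustrative sketch, not code from any Deepfake tool.

```python
import numpy as np

# Toy linear autoencoder: compress 8 values into a 3-value code, then
# try to rebuild the original input from that code. Real face-swap
# models are deep convolutional networks trained on face images.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # stand-in for flattened images
W_enc = rng.normal(scale=0.1, size=(8, 3))    # encoder weights: 8 -> 3
W_dec = rng.normal(scale=0.1, size=(3, 8))    # decoder weights: 3 -> 8
lr = 0.01

for step in range(2000):
    code = X @ W_enc                          # encode: small representation
    X_hat = code @ W_dec                      # decode: reproduce the input
    err = X_hat - X                           # reconstruction error
    grad_dec = code.T @ err / len(X)          # gradient wrt decoder weights
    grad_enc = X.T @ (err @ W_dec.T) / len(X) # gradient wrt encoder weights
    W_dec -= lr * grad_dec                    # gradient descent step
    W_enc -= lr * grad_enc

print("reconstruction MSE:", float((err ** 2).mean()))
```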

The more pictures of the targeted person we can feed the algorithm, the more realistic the facial expressions of the imitation can become.

Given that an AI already exists which can be trained to mimic a voice after listening to it for about a minute, it doesn’t look as if it will take long before the voice impersonator can be replaced with another routine that repeats the caller’s sentences in a reasonable imitation of the voice that the receiver associates with the face on the screen.

Abuse cases

As mentioned earlier, the technology was first used to replace actors in pornographic movies with celebrities. We have also seen some examples of how this technology could be used to create “deep fake news.”

So, how long will it take scammers to get the hang of this to create elaborate hoaxes, fake promotional material, and conduct realistic fraud?

Hoaxes and other fake news are damaging enough as they are in the current state of affairs. By nature, people are inclined to believe what they see. If they can see it “on video” with their own eyes, why would they doubt it?

You may find the story about the “War of the Worlds” broadcast and the ensuing panic funny, but I’m pretty sure the more than one million people who were struck with panic would not agree with you. And that was just a radio broadcast. Imagine something similar with “live footage” using the faces and voices of your favorite news anchors (or, better said, convincing imitations thereof). Imagine if threat actors could spoof a terrorist attack or mass shooting. There are many more nefarious possibilities.

Countermeasures

The Defense Advanced Research Projects Agency (DARPA) is aware of the dangers that Deepfakes can pose.

“While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns.

This manipulation of visual media is enabled by the wide-scale availability of sophisticated image and video editing applications, as well as automated manipulation algorithms that permit editing in ways that are very difficult to detect either visually or with current image analysis and visual media forensics tools. The forensic tools used today lack robustness and scalability, and address only some aspects of media authentication; an end-to-end platform to perform a complete and automated forensic analysis does not exist.”

DARPA has launched the MediFor program to stimulate researchers to develop technology that can detect manipulations and even provide information about how the manipulations were done.

One of the signs that researchers now look for when trying to uncover a doctored video is how often the person in the video blinks. Where a normal person would blink every few seconds, a Deepfake imitation might not do it at all, or not often enough to be convincing. One reason for this effect is that pictures of people with their eyes closed don’t get published that much, so creators would have to use actual video footage as input to get the blinking frequency right.
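A detector built on this observation can be surprisingly simple. The sketch below is a toy heuristic with an illustrative threshold: humans blink roughly 15–20 times per minute, so a face that blinks far less often is worth flagging. A real forensic tool would, of course, first extract the blink events from the video with a computer-vision model.

```python
# Toy heuristic: flag clips whose subject blinks implausibly rarely.
# The threshold is illustrative, not taken from any published detector.
def blink_rate_suspicious(blink_timestamps: list,
                          video_seconds: float,
                          min_blinks_per_minute: float = 6.0) -> bool:
    """Return True if the observed blink rate is suspiciously low."""
    blinks_per_minute = len(blink_timestamps) / (video_seconds / 60.0)
    return blinks_per_minute < min_blinks_per_minute

# Two blinks in a 60-second clip: far below a typical human rate.
print(blink_rate_suspicious([12.5, 48.0], 60.0))  # True
```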

As technology advances, we will undoubtedly see improvements on both the imitating and the defensive sides. What already seems to be evident is that it will take more than the trained eye to recognize Deepfake videos—we’ll need machine learning algorithms to adapt.

Anti-video fraud

With the exceptional speed of developments in the Deepfakes field, it seems likely that you will see a hoax or scam using this method in the near future. Maybe we will even start using specialized anti-video fraud software at some point, in the same way as we have become accustomed to the use of anti-spam and anti-malware protection.

Stay safe and be vigilant!

The post Are Deepfakes coming to a scam near you? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Web skimmers compete in Umbro Brasil hack

Malwarebytes - Tue, 11/20/2018 - 16:51

Umbro, the popular sportswear brand, has had its Umbro Brasil website hacked and injected with not one but two web skimmers tied to Magecart groups.

Magecart has become a household name in recent months due to high profile attacks on various merchant websites. Criminals can seamlessly steal payment and contact information from visitors purchasing products or services online.

Multiple threat actors are competing at different scales to get their share of the pie. As a result, there are many different web skimming scripts and groups that focus on particular types of merchants or geographical areas.

Case in point: in this Umbro Brasil compromise, one of the two skimming scripts checks for the presence of other skimming code and, if it finds any, slightly alters the credit card number entered by the victim. Effectively, the first skimmer will receive wrong credit card numbers, a direct act of sabotage.

Two skimmers go head to head

The Umbro Brasil website (umbro.com[.]br) runs the Magento e-commerce platform. The first skimmer is loaded via a fake BootStrap library domain bootstrap-js[.]com, recently discussed by Brian Krebs. Looking at its code, we see that it fits the profile of threat actors predominantly active in South America, according to a recent report from RiskIQ.

1st skimmer with code exposed in plain sight (conditional with referer check)

This skimmer is not obfuscated and exfiltrates the data in a standard JSON output. However, another skimmer is also present on the same site, loaded from g-statistic[.]com. This time, it is heavily obfuscated as seen in the picture below:

2nd skimmer, showing large obfuscation blurb

No fair play between Magecart groups

Another interesting aspect is how the second skimmer alters the credit card number grabbed by the first skimmer. Before the form data is sent, it takes the credit card number and replaces its last digit with a random number.

The following code snippet shows how certain domain names trigger this mechanism. Here we recognize bootstrap-js[.]com, which belongs to the first skimmer. Then, a random integer ranging from 0 to 9 is generated for later use. Finally, the credit card number is stripped of its last digit, and the previously generated random digit is put in its place.

Code to conditionally swap the last digit of the credit card (decoding courtesy of Willem de Groot)

By tampering with the data, the second skimmer can send an invalid but almost correct credit card number to the competing skimmer. Because only a small part of it was changed, it will most likely pass validation tests and go on sale on black markets. Buyers will eventually realize their purchased credit cards are not working and will not trust that seller again.
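For readability, here is the decoded sabotage logic re-expressed as a short Python sketch rather than the skimmer’s original obfuscated JavaScript. The rival domain comes from the IOCs below; everything else is illustrative.

```python
import random
import re

# Python paraphrase of the decoded skimmer logic: if the rival skimmer's
# script is present on the page, corrupt the last digit of the card number
# before it reaches the competitor's exfiltration server.
RIVAL_DOMAINS = ["bootstrap-js.com"]  # first skimmer's domain (see IOCs)

def sabotage_card_number(card_number: str, page_script_urls: list) -> str:
    """Return the card number as the rival skimmer would receive it."""
    rival_present = any(
        domain in url for url in page_script_urls for domain in RIVAL_DOMAINS
    )
    if not rival_present:
        return card_number
    digits = re.sub(r"\D", "", card_number)          # strip spaces/dashes
    return digits[:-1] + str(random.randint(0, 9))   # randomize last digit

# The forwarded number is wrong roughly nine times out of ten, since the
# random digit occasionally matches the original one.
print(sabotage_card_number("4111 1111 1111 1111",
                           ["https://bootstrap-js.com/bootstrap.min.js"]))
```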

The second skimmer, now being the only one to hold the valid credit card number, uses a special function to encode the data it exfiltrates. Looking at the POST request, we can only see what looks like gibberish sent to its exfiltration domain (onlineclouds[.]cloud):

Encoded data sent back to exfiltration server

This situation, where multiple infections reside on the same host, is not unusual. Indeed, unless the underlying vulnerability in the web server is fixed, a site remains prone to repeated compromise by different perpetrators. Sometimes they can coexist peacefully; sometimes they compete directly for the same resources.

Coolest sport in town

While web skimming has been going on for years, it has now become a very common (re-)occurrence. Security researcher Willem de Groot has aggregated data on some 40,000 affected websites since he began counting in 2015. His study also shows that reinfection among e-commerce sites (a 20 percent reinfection rate) is a problem that needs addressing.

Website owners that handle payment processing need to do due diligence in securing their platform by keeping their software and plugins up-to-date, as well as paying special attention to third-party scripts.

Consumers also need to be aware of this threat when shopping online, even if the merchant is a well known and reputable brand. On top of closely monitoring their bank statements, they should consider ways in which they can limit the damage from malicious withdrawals.

We have informed CERT.br of this compromise and even though the skimmers are still online, Malwarebytes users are covered by our web protection module.

Acknowledgments:

Thanks to Willem de Groot for his assistance in this research.

IOCs

Skimmers

1st skimmer: bootstrap-js[.]com
2nd skimmer: g-statistic[.]com

Exfiltration

1st skimmer's exfil domain: bootstrap-js[.]com
2nd skimmer's exfil domain: onlineclouds[.]cloud

The post Web skimmers compete in Umbro Brasil hack appeared first on Malwarebytes Labs.

Categories: Techie Feeds

What DNA testing kit companies are really doing with your data

Malwarebytes - Tue, 11/20/2018 - 15:00

Sarah* hovered over the mailbox, envelope in hand. She knew as soon as she mailed off her DNA sample, there’d be no turning back. She ran through the information she had looked up on 23andMe’s website one more time: the privacy policy, the research parameters, the option to learn about potential health risks, the warning that the findings could have a dramatic impact on her life.

She paused, instinctively retracting her arm from the mailbox opening. Would she live to regret this choice? What could she learn about her family, or herself, that she might not want to know? How safe did she really feel giving her genetic information away to be studied, shared with others, or even experimented with?

Thinking back to her sign-up experience, Sarah suddenly worried about the massive amount of personally identifiable information she had already handed over to the company. With a background in IT, she knew what a juicy target her data and that of other customers would be for a potential hacker. Realistically, how safe was her data from a potential breach? She tried to recall the specifics of the EULA, but the wall of legalese text melted before her memory.

Pivoting on her heel, Sarah began to turn away from the mailbox when she remembered just why she wanted to sign up for genetic testing in the first place. She was compelled to learn about her own health history after finding out she had a rare genetic disorder, Ehlers-Danlos syndrome, and wanted to submit her DNA for the purpose of further research. In addition, she was on a mission to find her mother’s father. She had a vague idea of who he was, but no clue how to track him down, and believed DNA testing could lead her in the right direction.

Sarah closed her eyes and pictured her mother’s face when she told her she found her dad. With renewed conviction, she dropped the envelope in the mailbox. It was done.

*Not her real name. Subject asked that her name be changed to protect her anonymity.

An informed decision

What if Sarah were you? Would you be inclined to test your DNA to find out about your heritage, your potential health risks, or discover long lost family members? Would you want to submit a sample of genetic material for the purpose of testing and research? Would you care to have a trove of personal data stored in a large database alongside millions of other customers? And would you worry about what could be done with that data and genetic sample, both legally and illegally?

Perhaps your curiosity is powerful enough to sign up without thinking through the consequences. But this would be a dire mistake. Sarah spent a long time weighing the pros and cons of her situation, and ultimately made an informed decision about what to do with her data. But even she was missing parts of the puzzle before taking the plunge. DNA testing is so commonplace now that we’re blindly participating without truly understanding the implications.

And there are many. From privacy concerns to law enforcement controversies to life insurance accessibility to employment discrimination, red flags abound. And yet, this fledgling industry shows no signs of stopping. As of 2017, an estimated 12 million people have had their DNA analyzed through at-home genealogy tests. Want to venture a guess at how many of those read through the 21-page privacy policy to understand exactly how their data is being used, shared, and protected?

Nowadays, security and privacy cannot be assumed. Between hacks of major social media companies and underhanded sharing of data with third parties, there are ways that companies are both negligent of the dangers of storing data without following best security practices and complicit in the dissemination of data to those willing to pay—whether that’s in the name of research or not.

So I decided to dig into exactly what these at-home DNA testing kit companies are doing to protect their customers’ most precious data, since you can’t get much more personally identifiable than a DNA sample. How seriously are these organizations taking the security of their data? What is being done to secure these massive databases of DNA and other PII? How transparent are these companies with their customers about what’s being done with their data?

There’s a lot to unpack with commercial DNA testing—often pages and pages of documents to sift through regarding privacy, security, and design. It can be mind-numbingly difficult to process, which is why so many customers just breeze through agreements and click “Okay” without really thinking about what they’re purchasing.

But this isn’t some app on your phone or software on your computer. It’s data that could be potentially life-changing. Data that, if misinterpreted, could send people into an emotional tailspin, or worse, a false sense of security. And it’s data that, in the wrong hands, could be used for devastating purposes.

In an effort to better educate users about the pros and cons of participating in at-home DNA testing, I’m going to peel back the layers so customers can see for themselves, as clearly as possible, the areas of concern, as well as the benefits of using this technology. That way, users can make informed choices about their DNA and related data, information that we believe should not be taken or given away lightly.

That way, when it’s your turn to stand in front of the mailbox, you won’t be second-guessing your decision.

Area of concern: life insurance

Only a few years ago in the United States, health insurance companies could deny applicants coverage based on pre-existing conditions. While this is thankfully no longer the case, life insurance companies can be more selective about who they cover and how much they charge.

According to the American Council of Life Insurers (ACLI), a life insurance company may ask an applicant for any relevant information about their health—and that includes the results of a genetic test, if one was taken. Any indication of health risk could factor into the price tag of coverage here in the United States.

Of course, there’s nothing that forces an individual to disclose that information when applying for life insurance. But the industry relies on honest communication from its customers in order to effectively price policies.

“The basis of sound underwriting has always been the sharing of information between the applicant and the insurer—and that remains today,” said Dr. Robert Gleeson, consultant for the ACLI. “It only makes sense for companies to know what the applicant knows. There must be a level playing field.”

The ACLI believes that the introduction of genetic testing can actually help life insurers better determine risk classification, enabling them to offer overall lower premiums for consumers. However, the fact remains: if a patient receives a diagnosis or if genetic testing reveals a high risk for a particular disease, their insurance premiums go up.

In Australia, any genetic results deemed a health risk can result in not only increased premiums but denial of coverage altogether. And if you thought Australians could get away with a little white lie of omission when applying for life insurance, they are bound by law to disclose any known genetic test results, including those from at-home DNA testing kits.

Area of concern: employment

Going back as far as Title VII of the Civil Rights Act of 1964, employers cannot discriminate based on race, color, religion, sex, or nationality. Workers with disabilities or other health conditions are protected by the Americans with Disabilities Act, the Rehabilitation Act, and the Family and Medical Leave Act (FMLA).

But these regulations only apply to employees or candidates with a demonstrated health condition or disability. What if genetic tests reveal the potential for disability or health concern? For that, we have GINA.

The Genetic Information Nondiscrimination Act (GINA) prohibits the use of genetic information in making employment decisions.

“Genetic information is protected under GINA, and cannot be considered unless it relates to a legitimate safety-sensitive job function,” said John Jernigan, People and Culture Operations Director at Malwarebytes.

So that’s what the law says. What happens in reality might be a different story. Unfortunately, it’s popular practice for individuals to share their genetic results online, especially on social media. In fact, 23andMe has even sponsored celebrities unveiling and sharing their results. Surely no one will see videos of stars like Mayim Bialik sharing their 23andMe results live and follow suit.

The hiring process is incredibly subjective. It would be almost impossible to point the finger at any employer and say, “You didn’t hire me because of the screenshot I shared on Facebook of my 23andMe results!” It could be entirely possible that the candidate was discriminated against, but in court, any he said/she said arguments will benefit the employer and not the employee.

Our advice: steer clear of sharing the results, especially any screenshots, on social media. You never know how someone could use that information against you.

Area of concern: personally identifiable information (PII)

Consumer DNA tests are clearly best known for collecting and analyzing DNA. However, just as important—arguably more so to their bottom line—is the personally identifiable information they collect from their customers at various points in their relationship. Organizations are absorbing as much as they can about their customers in the name of research, yes, but also in the name of profit.

What exactly do these companies ask for? Besides the actual DNA sample, they collect and store content from the moment of registration, including your name, credit card, address, email, username and password, and payment methods. But that’s just the tip of the iceberg.

Along with the genetic and registration data, 23andMe also curates self-reported content through a hulking, 45-minute long survey delivered to its customers. This includes asking about disease conditions, medical and family history, personal traits, and ethnicity. 23andMe also tracks your web behavior via cookies, and stores your IP address, browser preference, and which pages you click on. Finally, any data you produce or share on its website, such as text, music, audio, video, images, and messages to other members, belongs to 23andMe. Getting uncomfortable yet? These are hugely attractive targets for cybercriminals.

Survey questions gather loads of sensitive PII.

Oh, but there’s more. Companies such as Ancestry or Helix have ways to keep their customers consistently involved with their data on their sites. They’ll send customers a message saying, “You disclosed to us you had allergies. We’re doing this study on allergies—can you answer these questions?” And thus even more information is gathered.

Taking a closer look at the companies’ EULAs, you’ll discover that PII can also be gathered from social media, including any likes, tweets, pins, or follow links, as well as any profile information from Facebook if you use it to log into their web portals.

But the information-gathering doesn’t stop there. Ancestry and others will also search public and historical records, such as newspaper mentions, birth, death, and marriage records related to you. In addition, Ancestry cites a frustratingly vague “information collected from third parties” bullet point in their privacy policy. Make of that what you will.

Speaking of third parties, many of them will get a good glimpse of who you are thanks to policies that allow for commercial DNA testing companies to market new products offers from business partners, including producing targeted ads personalized to users based on their interests. And finally, according to the privacy policy shared among many of these sites, DNA testing companies can and do sell your aggregate information to third parties “in order to perform business development, initiate research, send you marketing emails, and improve our services.”

That’s a lot of marketing emails.

One such partner who benefits from the sharing of aggregate information is Big Pharma: at-home DNA testing kits profit by selling user data to pharmaceutical companies for development of new drugs. For some, this might constitute crossing the line; for others, it represents being able to help researchers and those suffering from disease with their data.

“You have to trust all their affiliates, all their employees, all the people that could purchase the company,” said Sarah, our IT girl who elected to participate in 23andMe’s research. “It’s better to take the mindset that there’s potential that any time this could be seen and accessed by anyone. You should always be willing to accept that risk.”

Sadly, there’s already more than enough reason to assume any of this information could be stolen—because it has.

In June 2018, MyHeritage announced that the data of over 92 million users had been leaked from the company’s website in October of the previous year. Emails and hashed passwords were stolen—thankfully, the DNA and other data of customers was safe. Prior to that, the emails and passwords of 300,000 Ancestry.com users were stolen back in 2015.

But as these databases grow and more information is gathered on individuals, the mark only becomes juicier for threat actors. “They want to create as broad a profile of the target as possible, not just of the individual but of their associates,” said security expert and founder of Have I Been Pwned Troy Hunt, who tipped off Ancestry about their breach. “If I know who someone’s mother, father, sister, and descendants might be, imagine how convincing a phishing email I could create. Imagine how I could fool your bank.”

Cybercriminals can weaponize data not only to resell to third parties, but for blackmail and extortion purposes. By breaching this data, criminals could dangle coveted genetic, health, and ancestral discoveries in front of their victims. You’ve got a sibling—send money here and we’ll show you who. You’re predisposed to a disease, but we won’t tell you which one until you send Bitcoin here. Years later, the Ashley Madison breach is still being exploited in this way.

Doing it right: data stored safely and separately

With so much sensitive data being collected by DNA testing companies, especially content related to health, one would hope these organizations pay special attention to securing it. In this area, I was pleasantly surprised to learn that several of the top consumer DNA tests banded together to create a robust security policy that aims to protect user data according to best practices.

And what are those practices? For starters, DNA testing kit companies store user PII and genetic data in physically separate computing environments, and encrypt the data at rest and in transit. PII is assigned a randomized customer identification number for identification and customer support services, and genetic information is only identified using a barcode system.
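As an illustration of that separation pattern, here is a minimal sketch in Python. The store names, ID formats, and fields are hypothetical; real deployments add encryption at rest, key management, and access controls on top.

```python
import secrets

# Minimal sketch of the pseudonymization pattern described above: PII and
# genetic data live in separate stores, linked only by a random customer ID
# and a sample barcode rather than by name. All names/fields are hypothetical.
pii_store = {}      # random customer ID -> contact/billing details
sample_store = {}   # barcode -> genetic data, with no direct PII attached
id_links = {}       # customer ID -> barcode, kept in a third, restricted store

def register_customer(name: str, email: str, genetic_blob: bytes) -> str:
    customer_id = secrets.token_hex(8)   # randomized customer identifier
    barcode = secrets.token_hex(6)       # sample-tube barcode
    pii_store[customer_id] = {"name": name, "email": email}
    sample_store[barcode] = genetic_blob
    id_links[customer_id] = barcode
    return customer_id

cid = register_customer("Jane Doe", "jane@example.com", b"...raw genotype...")
# A breach of sample_store alone exposes no names; re-identification would
# also require compromising the separately protected id_links mapping.
print(cid, id_links[cid] in sample_store)
```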

Security is baked into the design of the systems that gather, store, and disseminate data, including explicit security reviews in the software development lifecycle, quality assurance testing, and operational deployment. Security controls are also audited on a regular basis.

Access to the data is restricted to authorized personnel, based on job function and role, in order to reduce the likelihood of malicious insiders compromising or leaking the data. In addition, robust authentication controls, such as multi-factor authentication and single sign-on, keep data from flowing in and out like the tides.

For additional safety measures, consumer DNA testing companies conduct penetration testing and offer a bug bounty program to shore up vulnerabilities in their web application. Even more care has been taken with security training and awareness programs for employees, and incident management and response plans were developed with guidance from the National Institute of Standards and Technology (NIST).

In the words of the great John Hammond: They spared no expense.

When Hunt made the call to Ancestry about the breach, he recalls that they responded quickly and professionally, unlike other organizations he’s contacted about data leaks and breaches.

“There’s always a range of ways organizations tend to deal with this. In some cases, they really don’t want to know. They put up the shutters and stick their head in the sand. In some cases, they deny it, even if the data is right there in front of them.”

Thankfully, that does not seem to be the case for the major DNA testing businesses.

Area of concern: law enforcement

At-home DNA testing kit companies are a little vague about when and under which conditions they would hand over your information to law enforcement, using terms such as “under certain circumstances” and “we have to comply with valid requests” without defining the circumstances or indicating what would be considered “valid.” However, they do provide this transparency report that details government requests for data and how they have responded.

Yet, news broke earlier this year that DNA uploaded to the public genealogy database GEDmatch was used to find the Golden State Killer, and it gave consumers collective pause. While putting a serial killer behind bars is a worthy cause, the killer was found because a relative of his had participated in consumer DNA testing, and the DNA was a close enough match to DNA found at the original 1970s crime scenes that investigators were able to pin him down.

This opens up a can of worms about the impact of commercially-generated genetic data being available to law enforcement or other government bodies. How else could this data be used or even abused by police, investigators, or legislators? The success of the Golden State Killer arrest could lead to re-opening other high-profile cold cases, or eventually to law enforcement turning to the consumer DNA databases every time DNA evidence is found at the scene of a crime.

Because so many individuals have now signed up for commercial DNA tests, odds are 60 percent and rising that, if you live in the US and are of European descent, you can be identified by information that your relatives have made public. In fact, law enforcement soon may not need a family member to have submitted DNA in order to find matches. According to a study published in Science, that figure will soon rise to 100 percent as consumer DNA databases reach critical mass.

What’s the big deal if DNA is used to capture criminals, though? Putting on my tinfoil hat for a second, I imagine a Minority-Report-esque scenario of stopping future crimes or misinterpreting DNA and imprisoning the wrong person. While those scenarios are a little far-fetched, I didn’t have to look too hard for real-life instances of abuse.

In July 2018, Vice reported that Canada’s border agency was using data from Ancestry.com and Familytreedna.com to establish nationalities of migrants and deport those it found suspect. In an era of high tensions on race, nationality, and immigration, it’s not hard to see how genetic data could be used against an individual or family for any number of civil or human rights violations.

Area of concern: accuracy of testing results

While this doesn’t technically fall under the umbrella of cybersecurity, the accuracy of test results is of concern because these companies are doling out incredibly sensitive information that has the potential to levy dramatic change on people’s lives. A March 2018 study in Nature found that 40 percent of results from at-home DNA testing kits were false positives, meaning someone was deemed “at risk” for a category that later turned out to be benign. That statistic is validated by the fact that test results from different consumer testing companies can vary dramatically.

The relative inaccuracy of the test results is compounded by the fact that there’s a lot of room to misinterpret them. Whether it’s learning you’re high risk for Alzheimer’s or discovering that your father is not really your father, health and ancestry data can be consumed without context, and with no doctor or genetic counselor on hand to soften the blow.

In fact, consumer DNA testing companies are rather reticent to send their users to genetic counselors—it’s essentially antithetical to their mission, which is to make genetic data more accessible to their customers.

Brianne Kirkpatrick, a genetic counselor and ancestry expert with the National Society of Genetic Counselors (NSGC), said that 23andMe once had a fairly prominent link on their website for finding genetic counselors to help users understand their results. That link is now either buried or gone. In addition, she mentioned that one of her clients had to call 23andMe three times until they finally agreed to recommend Kirkpatrick’s counseling services.

“The biggest drawback is people believing that they understand the results when maybe they don’t,” she said. “For example, people don’t understand that the BRCA1 and BRCA2 testing these companies provide is really only helpful if you’re an Ashkenazi Jew. In the fine print, it says they look at three variants out of thousands, and these three are only for this population. But people rush to make a conclusion because at a high level it looks like they should be either relieved or worried. It’s complex information, which is why genetic counselors exist in the first place.”

But what’s the symbology?

The data becomes even more messy when you move beyond users of European descent. People of color, especially those of Asian or African descent, have had a particularly hard go of it because they are underrepresented in many companies’ data sets. Often, black, Hispanic, or Asian users receive reports that list parts of their heritage as “low confidence” because their DNA doesn’t sufficiently match the company’s points of reference.

DNA testing companies not only offer their customers information that is sometimes incomplete, inaccurate, and easy to misunderstand; they also provide the raw data output that can be downloaded and then sent to third-party websites for even more evaluation. But those sites have not historically been as well-protected as the major consumer DNA testing companies. Once again, the security and privacy of genetic data goes fluttering away into the ether when users upload it, unencrypted and unprotected, to third-party platforms.

Doing it right: privacy policy

Consumer genetic testing is an emerging industry, so there is little in the way of regulation or public policy. Laboratory testing is bound by Medicare and Medicaid clauses, and commercial companies are regulated by the FDA, but DNA testing companies are a little of both, with the added complexity of operating online. The General Data Protection Regulation (GDPR), which launched in May 2018, requires companies to publicly disclose whether they’ve experienced a cyberattack, and imposes heavy fines on those who are not in compliance. But GDPR only applies to companies doing business in Europe.

As far as legal precedent is concerned, the 1990 California Supreme Court case Moore vs. Regents of the University of California found that individuals no longer have claim over their genetic data once they relinquish it for medical testing or other forms of study. So if Ancestry sells your DNA to a pharmaceutical company that then uses your cells to find the cure for cancer, you won’t see a dime of compensation. Bummer.

Despite the many opportunities for data to be stolen, abused, misunderstood, and sold to the highest bidder, the law simply hasn’t caught up to our technology. So the teams developing security and privacy policies for DNA testing companies are doing pioneering work, embracing security best practices and transparency at every turn. This is the right thing to do.

Almost two years ago, founders at Helix started working with privacy experts in order to understand all the key pieces they would need to safeguard—and they recognized that there was a need to form a formal coalition to enhance collaboration across the industry.

Working with the Future of Privacy Forum, an independent think tank, they developed public policy that leaders in the industry could follow. They teamed up with representatives from 23andMe, Ancestry, and others to create a set of standards that primarily hammered on the importance of transparency and clear communication with consumers.

“It is something that we are very passionate about,” said Misha Rashkin, Senior Genetic Counselor at Helix, and an active member of developing the shared privacy policy. “We’ve spent our careers explaining genetics to people, so there’s a years-long held belief that transparent, appropriate education—meaning developing policy at an approachable reading level—has got to be a cornerstone of people interacting with their DNA.”

While the privacy coalition strived for easy-to-understand language, the fact remains that their privacy policy is a 21-page document that most people are going to ignore. Rashkin and other team members were aware of this, so they built more touch points for customers to drill into the data and provide consent, including in-product notifications, emails, blog posts, and infographics delivered to customers as they continued to interact with their data on the platform.

Maps, diagrams, charts, and other visuals help users better understand their data.

After Rashkin and company finalized and published their privacy policy, they turned it into a checklist that partners could use to determine baseline security and privacy standards, and what companies need to do to be compliant. But the work won’t stop there.

“This is just the beginning,” said Elissa Levin, Senior Director of Clinical Affairs and Policy at Helix and a founding member of the privacy policy coalition. “As the industry evolves, we are planning on continuing to work on these standards and progress them. And then we’re actually going out to educate policy makers and regulators and the public in general. We want to help them determine what these policies are and differentiate who are the good players and who are the not-so-good players.”

Biggest area of concern: the unknown

We just don’t know what we don’t know when it comes to technology. When Mark Zuckerberg invented Facebook, he merely wanted an easy way to look at pretty college girls. I don’t think it entered his wildest dreams that his company’s platform could be used to directly interfere with a presidential election, or lead to the genocide of citizens in Myanmar. But because of a lack of foresight and an inability to move quickly to right the ship, we’re now all mired in the mud.

Right now, cybercriminals aren’t searching for DNA on the black market, but that doesn’t mean they won’t. Cybercrime often follows the path of least resistance—what takes the least amount of effort for the biggest payoff? That’s why social engineering attacks still vastly outnumber traditional malware infection vectors.

Because of that, cybercriminals likely believe it’s not worth jumping through hoops to try and break serious encryption for a product (genetic data) that’s not in demand—yet. But as biometrics and fingerprinting and other biological modes of authentication become more popular, I imagine it’s only a matter of time before the wagons start circling.

And yet—does it even matter? Even with all of the red flags exposed, millions of customers have taken the leap of faith because their curiosity overpowers their fear, or the immediate gratification is more satisfying than the nebulous, vague “what ifs” that we in the security community haven’t solved for. With so much data publicly available, do people even care about privacy anymore?

“There are changing sentiments about personal data among generations,” said Hunt. “There’s this entire generation who has grown up sharing their whole world online. This is their new social norm. We’re normalizing the collection of this information. I think if we were to say it’s a bad thing, we’d be projecting our more privacy-conscious viewpoints on them.”

Others believe that, regardless of personal feelings on privacy, this technology isn’t going away, so we—security experts, consumers, policy makers, and genetic testers alike—need to address its complex security and privacy issues head on.

“Privacy is such a personal matter. And while there may be trends, that doesn’t necessarily speak to an entire generation. There are people who are more open and there are people who are more concerned,” said Levin.  “Whether someone is concerned or not, we are going to set these standards and abide by these practices because we think it’s important to protect people, even if they don’t think it’s critical. Fundamentally, it does come down to being transparent and helping people be aware of the risk to at least mitigate surprises.”

Indeed, whether privacy is personally important to you or not, understanding which data is being collected from where and how companies benefit from using your data makes you a more well-informed consumer.

Don’t just check that box. Look deeper, ask questions, and do some self-reflection about what’s important to you. Because right now, if someone steals your data, you might have to change a few passwords or cancel a couple credit cards. You might even be embroiled in identity theft hell. But we have no idea what the consequences will be if someone steals your genetic code.

Laws change and society changes. What’s legal and sanctioned now may not be in the future. But that data is going to be around a long time. And you cannot change your DNA.

The post What DNA testing kit companies are really doing with your data appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 12 – 18)

Malwarebytes - Mon, 11/19/2018 - 17:08

Last week on Malwarebytes Labs, we found out that TrickBot became a top business threat, so we took a deeper look at what’s new with it.

With Christmas just around the corner, the Secret Sister scam returned.

We also touched on the security and privacy (or lack thereof) in smart jewelry, air traffic control compromise, and what security concerns to take note of when automating your business.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (November 12 – 18) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Business email compromise scam costs Pathé $21.5 million

Malwarebytes - Mon, 11/19/2018 - 16:00

Recently released court documents show that European-based cinema chain Pathé lost a small fortune to a business email compromise (BEC) scam in March 2018. How much? An astonishing US$21.5 million (roughly 19 million euros). The attack, which ran for about a month, cost the company 10 percent of its total earnings.

What is business email compromise?

Business email compromise is a type of phishing attack, sprinkled with a dash of targeted social engineering. A scammer pretends to be an organisation’s CEO, then starts bombarding the CFO with urgent requests for a money transfer. The requests are generally for wire transfers (hard to trace), and are often routed through Hong Kong (lots of wire transfers, even harder to trace).

Scammers will sometimes buy domain names to make the fake emails look even more convincing. These attacks rely on the social importance of the CEO: nobody wants to question the boss. If an organisation has no safeguards in place against these attacks, a scammer will likely be very rich indeed. It only takes one successful scam to generate a huge haul, at which point the scammer simply vanishes into the ether.

What happened here?

This particular BEC scam is of interest because it highlights a slightly different approach to the attack. Scammers abandoned pitting the fake CEO against the real CFO in favour of faking French head office missives to the Dutch management.

It all begins with the following mail:

“We are currently carrying out a financial transaction for the acquisition of foreign corporation based in Dubai. The transaction must remain strictly confidential. No one else has to be made aware of it in order to give us an advantage over our competitors.”

Even though the CFO and CEO thought it strange, they pressed on regardless and sent over 800,000 euros. More requests followed, including some while the CFO was on vacation—both executives were fired after the head office noticed. Although they weren’t involved in the fraud, Pathé said they could—and should—have noticed the “red flags.” They didn’t, and there was no safety net in place, so the business email compromise attempt was devastatingly successful.

The shame game

Many instances of BEC fraud go unreported because nobody wants to voluntarily admit they fell victim. As a result, the first you hear of an attack tends to be in court proceedings. It’s hard to guess how much is really lost to BEC fraud, but the FBI has previously floated a figure of $2.1 billion. The actual figure could easily be higher.

How can businesses combat this?
  1. Check the social media accounts and other online portals of your executives, and have those connected to finance make their profiles as private—and secure—as possible. You can certainly reduce a CFO’s online footprint, even if you can’t remove it completely.
  2. Authentication is key. The CFO and CEO, or whoever is responsible for wire authorisation, should have a special process in place for approvals. It shouldn’t be email based, as that’s how people end up in BEC scam trouble in the first place. If you have a unique, secure method of communication, then use it. If you can lock down approvals with additional security like two-factor authentication, then do so. Some organisations make use of bespoke, offline authenticator apps on personal devices (see the sketch after this list). The solution is out there!
  3. If you have many offices, and different branches move money around independently, the same rules apply: find a consistent method of authentication that can be used across multiple locations. This would have almost certainly saved Pathé from losing $21.5 million.
  4. When there’s no other way to lock things down, it’s time to break out the telephone and rely on verbal authentication. While this may cause a small amount of business drag (If you’re on the other side of the world, is your CFO fielding calls at 2:00am?), it’s better than losing everything.
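For organisations that want to prototype such an approval step, here’s a minimal sketch using one-time codes via the pyotp library. It illustrates the general idea only—the function name and transfer logic are hypothetical, not a production implementation:

    import pyotp  # pip install pyotp

    # Shared secret provisioned once on the approver's offline device (hypothetical setup)
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    def approve_transfer(amount: float, code: str) -> bool:
        # The wire only proceeds if a fresh one-time code checks out, forcing
        # a second, non-email channel into the approval loop
        return totp.verify(code)

    print(approve_transfer(800000.00, totp.now()))  # True only with a valid, current code

The point is the out-of-band step: even a scammer who fully controls the CEO’s mailbox cannot produce a valid code.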
A threat worth tackling

Business email compromise continues to grow in popularity among scammers, and it’s up to all of us to combat it. If your organisation doesn’t take BEC seriously, you could easily be on the receiving end of an eye-watering phone call from your bank manager. Keeping your finances in the black is a priority, and BECs are one of the most insidious threats around, whether you distribute movies, IT services, or anything else for that matter. Don’t let malicious individuals decide when to call things a wrap.

The post Business email compromise scam costs Pathé $21.5 million appeared first on Malwarebytes Labs.

Categories: Techie Feeds

6 security concerns to consider when automating your business

Malwarebytes - Fri, 11/16/2018 - 16:00

Automation is an increasingly enticing option for businesses, especially when those in operations are in a perpetual cycle of “too much to do and not enough time to do it.”

When considering an automation strategy, business representatives must be aware of any security risks involved. Here are six concerns network admins and other IT staff should keep in mind.

1. Using automation for cybersecurity in counterproductive ways

The cybersecurity teams at many organizations are overextended, accustomed to taking on so many responsibilities that their overall productivity goes down. Automating some cybersecurity tasks could provide much-needed relief for those team members, as long as those employees use automation strategically.

For example, if cybersecurity team members automate standard operating procedures, they’ll have more time to triage issues and investigate potential vulnerabilities. But, the focus must be on using automation in a way that makes sense for cybersecurity—as well as the other parts of the business. Human intelligence is still needed alongside automation in order to better identify threats, analyze patterns, and quickly make use of available resources. If you build up defenses but leave them unattended, eventually the enemies are going to break through.

2. Giving too many people access to automatic payment services

Forgetting to pay a bill on time is embarrassing and can negatively affect a company’s access to lines of credit. Fortunately, companies can use numerous automatic bill-paying services to deduct the necessary amounts each month, often on a specified day.

Taking that approach prevents business representatives from regularly having to pull credit cards out of their wallets and manually type the numbers into forms. However, it’s a best practice to restrict the number of people who can set up those payments and verify that they happen.

Otherwise, if there are problems with a payment, it’ll become too difficult to investigate what went wrong. In addition, there’s a possibility of insider threats, such as a disgruntled employee or someone looking to get revenge after termination. Malicious insiders could access a payment service and change payment schedules, delete payment methods, withdraw large amounts, or otherwise wreak havoc.

3. Thinking that automation is infallible

One of the especially handy things about automation is that it can reduce the number of errors people make. Statistics indicate that almost 71 percent of workers report being disengaged at the office. Repetitive tasks are often to blame, and automation could reduce the boredom people feel (and the mistakes they make) by freeing them up for more challenging projects.

Regardless of the ways they use automation, IT admins mustn’t fall into the habit of believing that automated tools are foolproof and don’t need to be checked for mistakes. For example, if a company uses automation to deal with financial content, such as invoices, it should not adopt a relaxed approach to keeping that information secure just because a tool is now handling the task.

In all responsibilities that involve keeping data secure, humans still play a vital role in ensuring things are working as they should. After all, people are the ones who set up the processes that automation carries out, and those people could have made mistakes, too.

4. Failing to account for GDPR

The General Data Protection Regulation (GDPR) went into effect in May 2018, and it determines how businesses must treat the data of customers in the European Union. Being in violation could result in substantial fines for businesses, yet some companies aren’t even aware they’re doing something wrong.

Keeping information in a customer relationship management (CRM) database can help maintain GDPR compliance by giving businesses accurate, up-to-date records of their customers, making it easier to ensure they treat that information appropriately. As the GDPR gives customers numerous rights, including the right to have data erased or the right to have data stored but not processed, any automation tools selected by an organization need to be agile enough to accommodate those requests.

Automation—whether achieved through a CRM tool or otherwise—can actually help companies better align with GDPR regulations. In fact, it’s essential that companies not overlook GDPR when they choose ways to automate processes.

5. Not using best practices with password managers

Password managers are incredibly convenient and secure because they store, encrypt, and automatically fill in the proper passwords for any number of respective accounts—as long as users know the correct master password. Some of them even automate filling in billing details by storing payment information in secure online wallets.

However, there are wrong ways to use password managers for business or personal purposes. For example, if a person chooses a master password that she’s already used on multiple other sites or shares that password with others, she’s defeated the purpose of the password manager. Choosing a password manager with multi-factor authentication is our recommendation for the most secure way to log into your accounts.

It’s undoubtedly convenient to visit a site and have it automatically fill in your password for you with one click. But, password managers only work as intended when employees use them correctly.
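On that note, choosing a strong, unique master password doesn’t have to be painful. One approach—sketched below with Python’s secrets module, and using a deliberately tiny, illustrative word list—is a diceware-style passphrase; a real generator should draw from a full wordlist:

    import secrets

    # Illustrative word list only; use a full diceware-style wordlist in practice
    WORDS = ["quartz", "harbor", "veto", "mosaic", "lantern",
             "copper", "saddle", "pylon", "ember", "tundra"]

    # Five cryptographically random words make a memorable, high-entropy passphrase
    passphrase = " ".join(secrets.choice(WORDS) for _ in range(5))
    print(passphrase)  # e.g., "ember veto harbor tundra quartz"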

6. Ignoring notifications to update automation software

Many automation tools display pop-up messages when new software updates are available. Sometimes the updates only encompass new features, but it’s common for them to address bugs that could compromise security. When the goal is to dive into work and get as much done as possible, taking a few minutes to update automation software isn’t always an appealing option.

But, if outdated software ends up leading to an attack and compromising customer records, people will wish they hadn’t procrastinated. It’s best for businesses to get on a schedule, such as checking automation software for updates on a particular day each month (Patch Tuesday, for example).

Fortunately, many software titles allow people to choose the desired time for the update to happen, or in essence, automate the maintenance of automation software. Then, users can set the software to update outside of business hours or during other likely periods of downtime.

Automation is advantageous—if security remains a priority

Although automation can be a tremendous help to businesses, it can also pose risks if misused, neglected, or too heavily relied upon. Staying aware of the security-related issues raised in this article helps organizations of all sizes and in all industries use automated tools safely and effectively.

The post 6 security concerns to consider when automating your business appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Compromising vital infrastructure: air traffic control

Malwarebytes - Thu, 11/15/2018 - 20:12

While most of us know that flying is the safest mode of transport, we still feel that sigh of relief when the plane has made its landing on the runway and we can text our loved ones that we have arrived safe and sound. Accidents may be rare, but they’re often shocking and horrific and accompanied by the loss of many lives. Unfortunately, they also tend to make the news, which only heightens fear.

In this blog post, we look at the dangers related to flying from a cybersecurity perspective. As we know, cybercriminals are motivated mostly by money, power, and ego—and messing with air traffic and air traffic control can boost any of those factors. While the majority of these cybersecurity incidents result in data breaches, make no mistake: Attacks on this vital infrastructure could lead to much more grim consequences.

Air traffic control

Air traffic can roughly be divided into four general categories:

  • Public transport
  • Cargo and express freight
  • Military operations
  • Smaller aircraft (recreational, training, helicopters, and drones)

Organizations like the ATO and EUROCONTROL manage the air traffic across entire continents, communicating with commercial and military bodies to control the coordination and planning of air traffic in their designated territory. These organizations work closely together, as there are many intercontinental flights that pass from one territory to another.

Air traffic control organizations need to react quickly to incidents, and their instructions should be followed to the T. They need flawless communication to work properly, as they are crucial to maintaining the normal flow of air traffic. Therefore, these organizations and their related systems are heavily computerized. This makes them primary targets for cyberattacks.

Public transportation

Using airlines as a means of public transport brings with it certain security-related dangers. Online bookings have led to many data leaks. Recently we have learned about breaches at Cathay Pacific, British Airways, Arik Air, and Air Canada. Some of these breaches were website hacks. Others only concerned users of mobile apps.

Another privacy-related cause for worry is the type of information displayed on an airline ticket or boarding pass. Some people post pictures of their tickets on social media, and the Aztec codes used on those tickets are easy to decipher. This can provide a threat actor with a wealth of personally identifiable information, such as payment method, confirmation numbers, names, and addresses.
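To illustrate how little “deciphering” is actually required: the payload of most boarding pass barcodes is a plain, fixed-width string following the public IATA Bar Coded Boarding Pass (BCBP) layout. The sketch below slices a few fields out of a made-up sample; the offsets follow the published spec, but treat this as an approximation rather than a validated parser:

    # Field offsets per the public IATA BCBP layout; the sample data is fabricated
    def parse_bcbp(s: str) -> dict:
        return {
            "name":    s[2:22].strip(),
            "pnr":     s[23:30].strip(),  # booking reference
            "from":    s[30:33],
            "to":      s[33:36],
            "carrier": s[36:39].strip(),
            "flight":  s[39:44].strip(),
            "seat":    s[48:52].strip(),
        }

    print(parse_bcbp("M1DESMARAIS/LUC       EABC123 YULFRAAC 0834 226F001A0025 100"))
    # {'name': 'DESMARAIS/LUC', 'pnr': 'ABC123', 'from': 'YUL', 'to': 'FRA', ...}

No decryption is involved—anyone who can scan the code can read it—which is why posting a boarding pass photo is riskier than it looks.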

Travelers should also pay extra attention to spam that comes in looking convincingly like a ticket confirmation. This type of spam has been around for a few years, and is usually easy to discard—except when you actually happen to have booked with the same airline being spoofed.

For more travel safety tips, read: Tips for safe summer travels: your cybersecurity checklist

Air cargo

Air cargo is by definition always in a hurry. If delivery of the cargo wasn’t urgent, it would have been put on a less costly mode of transportation. This makes shipment information valuable to both thieves and scammers. How often have you received a phishing mail claiming to be shipment information from one of the major express freighters such as DHL, FedEx, or UPS? If a threat actor were to know you were expecting air cargo or an express delivery from a particular company, these blind attempts could become more targeted and efficient.

Military

In warfare, competition for air supremacy is fierce. It is defined by the USDoD and NATO as the “degree of air superiority wherein the opposing air force is incapable of effective interference.” There are several levels of control of the air, but the general idea is that air supremacy is a major goal on the way to victory.

In modern warfare, you can expect every side to try every possible way to gain control of the air, including cyberattacks on the enemies’ air traffic infrastructure. In such a scenario, the infrastructure includes planes, aircraft factories, airports, air traffic control, and the lines of communications between all of them.

Recreational use of the airways

Recreational air traffic may not be a prime target for cybercriminals, but it can, and has been known to, hinder other forms of air traffic. Drones have been reported in hundreds of near misses with commercial airliners, and one even managed to land on the grounds of the White House. Considering that the number of drones is expected to grow exponentially in years to come—with increasing commercial use cases, such as delivery, photography, inspection, and reconnaissance—expect more interference problems to emerge.

Drones come in many forms and shapes, and the same is true for their level of security. But you can readily assume that most of them can be remotely hacked. In the US, drone operations are not allowed within five miles of an airport unless the operator informs air traffic control. One would expect these rules to become stricter over time.

Terrorist attacks

Aircraft have been hijacked by terrorists in the past, the most famous example being 9/11, when terrorists snuck their way onto four different aircraft, incapacitated the pilots, and flew planes into the World Trade Center and the Pentagon, with the fourth crashing into a field in Pennsylvania. These physical, in-person hijacks are the reason for the extensive security measures that you encounter at every major airport.

But hijackers don’t have to be physically present to cause huge damage. As demonstrated in the past, aircraft can be hacked remotely, and malware can infect the computer systems on board.

Ransomware victims

Like any other industry, you will find many ransomware victims in the aviation and air traffic sector.

The flight information screens at Bristol Airport went dark after the airport’s administration system was the subject of a cyberattack. The attack was suspected to be ransomware, although I could not find official confirmation of this. In this case, flight operations were (thankfully) not affected.

Boeing was also a victim of WannaCry, even though the attack was played down afterward, since the production lines had not been disturbed.

As mentioned in an earlier blog, air and express freight carrier FedEx has been a ransomware victim twice: once through their TNT division hit by NotPetya, and once in their own delivery unit by WannaCry.

Targeted cyberattacks

A targeted attack was suspected when malware was found in the IT network of Boryspil International Airport in Ukraine, which reportedly included the airport’s air traffic control system. Due to rocky relations between Ukraine and Russia, attribution quickly swerved to BlackEnergy, a Russian APT group held responsible for many cyberattacks on Ukraine.

Ukrainian aircraft builder Antonov was also a victim of NotPetya, ransomware that was suspected of targeting Ukrainian users. In hindsight, it may just have looked that way because the malware was spread via the software update system of a Ukrainian tax accounting package called MeDoc.

Budget concerns

In 2017, the Air Traffic Control Association (ATCA) published a white paper issuing the following warning:

Where budgets are concerned, cybersecurity is treated reactively instead of proactively.

This was after a 2016 report by the Ponemon Institute that found organizations did not budget for the technical, administrative, testing, and review activities that are necessary to operate a truly secure system. Instead, at least two-thirds of businesses waited until they had experienced a cyberattack or data breach to hire and retain security vendors to help.

The budgeting process for systems architecture in the aviation industry does not account for built-in security. It would certainly make sense to include it if we want to protect our passengers and cargo making use of this vital infrastructure. It would even be more cost effective, since retroactively securing a system after an attack is usually much more expensive than preventing one.

So, while the physical security at airports has been tightened significantly, it would seem the cybersecurity of this important infrastructure still needs a lot of work, especially when you consider the sheer number of cyberattacks on the industry that have taken place in the last few years.

Those in the aviation, air traffic, and air cargo industries need to include cybersecurity in their budget and design proposals for 2019; otherwise, the excrement might really hit the propeller.

The post Compromising vital infrastructure: air traffic control appeared first on Malwarebytes Labs.

Categories: Techie Feeds

My precious: security, privacy, and smart jewelry

Malwarebytes - Wed, 11/14/2018 - 17:27

Emery had been staring at her computer screen for almost an hour, eyes already lackluster as the full-page Motiv ad looped once more. She was contemplating whether she’d give in and get her boyfriend Ben a new fitness tracker as a present for his upcoming marathon. The phone app he was currently using worked, but Ben never got used to wearing his iPhone on his arm. In fact, the weight of it distracted him.

Emery thought that something lightweight, sturdy, and inconspicuous was what he needed as a replacement. And the Motiv Ring—in elegant slate gray, of course—seemed to be the best option. But at $199, she immediately stepped back. Admittedly, the price tag tempted her to go back to cheaper options.

Reaching for her coffee mug, Emery was reminded of the weight of the Ela Bangle around her wrist. Ben had given it to her as a welcome-home present after her two-week medical mission. He had called it a smart locket, one you can’t wear around your neck. He knew she got homesick easily, so Emery was ecstatic when Ben showed her photos and audio messages near and dear to her, all saved on its rounded-square stone.

At least that was what the brochure said. In reality, her personal files were stored in the cloud associated with the Ela.

Although Emery could only rave about her smart locket, she couldn’t help but wonder if anyone else could see her files. She’s as techie as the next nurse in her ward, but stories of hacking, stolen information, and locked-out files were frequently discussed at the hospital, making her realize that owning technology from a nascent industry can put one in a precarious position.

Emery and her current situation may be fictitious, but her dilemma is real. Smart jewelry has real appeal, but it doesn’t come without risks to security and privacy.

However enamored they may be, potential buyers would be wise to consider one significant detail before they make up their minds: data. Mainly, what happens with the data they freely allow their smart jewels to monitor, collect, analyze, and store. Could it be accessed, retrieved, transported, or used by anyone who has the skills? Could data leak by accident or because of simple manipulation of certain elements (such as incrementing the user ID)? These are some questions we need to continue asking ourselves in this age of breaches.

Not only that, the data collected about a person’s health and well-being is yet another trove that should be under the protection of a statute like HIPAA—but isn’t. It’s no wonder that lawmakers and those working in the cybersecurity and privacy sectors have expressed concern regarding the evident lack of security of not just wearable technology, but the Internet of Things as a whole.

How smart jewelry works

Smart jewelry, or wearable jewelry, is a relatively new form of wearable technology (WT) with limited onboard data processing. And like other WT, it’s generally not a stand-alone device: it requires a paired app to do what it’s designed to do. In a nutshell, this tandem is how smart jewelry—and wearables as a whole—works.

Wearable jewelry that acts as a fitness tracker usually follows the standard model below (a minimal sketch of the transmission step follows the list):

  • Tracking of data using sensors in the wearable, such as an accelerometer, gyroscope, tracker, and others.
  • Transmitting of data from the wearable to the smartphone via Bluetooth Low Energy (BLE) or ANT+.
  • Aggregating, analyzing, processing, and comparing the data in the smartphone.
  • Syncing of data from the smartphone app to its cloud server via an Internet connection.
  • Presenting data to the user via the smartphone.
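As a concrete illustration of the transmission step, reading a standard GATT characteristic from a nearby BLE device takes only a few lines with the bleak library. The device address below is hypothetical, and the sketch assumes the device exposes the standard (readable) Battery Level characteristic:

    import asyncio
    from bleak import BleakClient  # pip install bleak

    BATTERY_LEVEL = "00002a19-0000-1000-8000-00805f9b34fb"  # standard GATT UUID
    ADDRESS = "AA:BB:CC:DD:EE:FF"  # hypothetical wearable address

    async def main():
        async with BleakClient(ADDRESS) as client:
            # No pairing or authentication is required if the device doesn't enforce it
            data = await client.read_gatt_char(BATTERY_LEVEL)
            print(f"Battery: {data[0]}%")

    asyncio.run(main())

If a wearable answers queries this openly, anyone in radio range can make the same request—exactly the “leaky BLE” problem described later in this post.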

In-depth processing and data analysis also happen in the cloud. Manufacturers offer this additional service to users as an option. As you can tell, this is how service providers monetize the data.

Nowadays, smart jewelry is becoming more than just a pretty fitness tracker. Some already function as an extension of the smartphone, providing notifications on incoming calls and new text messages and emails. Others can be used for sleep or sleep apnea monitoring, voice recording, hands-free sharing and communication, unlocking doors, or paying for purchases. A small number of smart jewelry can even act as one’s personal safety device, train or bus pass, bank card, or smart door key.

But while the jewelry gets blingier and the processor—the wearable jewelry’s core computer—gets smarter with time, one is likely to ask: Is smart jewelry getting more secure? Is it protecting my privacy?

Unfortunately, the strong, resounding answer to both is “no.”

Security and privacy challenges faced by smart jewelry

Because of the processor’s size—a necessity to make wearables lightweight, relatively inexpensive, and fit for mass production—manufacturers are limited in the security measures they can build into it. This is an inherent problem in the majority of wearable devices.

In fact, it is safe to say that the vulnerabilities and security shortcomings we find in wearable devices in general can be found in smart jewelry, too.

In the research paper entitled “Wearable Technology Devices Security and Privacy Vulnerability Analysis,” Ke Wan Ching and Manmeet Mahinderjit Singh, researchers at Universiti Sains Malaysia (USM), presented several weaknesses and limitations of wearable devices, which we have grouped into main categories. These are:

  • Little or lacking authentication. A majority of wearables have no way of authenticating or verifying that the person accessing or using them is who they claim to be. These devices are then susceptible to data injection attacks, denial of service (DoS) attacks, and battery drain hacks. For gadgets that do have an authentication scheme in place, the system usually isn’t secure enough, and could quickly be defeated by brute-force attacks.
  • Leaky BLE. Because of this, persons with ill intent can easily track users wearing smart jewelry. And if a location can be determined with ease, then privacy is compromised, too. Other Bluetooth attacks that can work against wearables are eavesdropping, surveillance, and man-in-the-middle (MiTM) attacks.
  • Information leakage. If one’s location can be determined with pinpoint accuracy, it’s possible that hackers can pick up personally identifiable information (PII) and other data just as easily. Information leakage also leads to other security attacks, such as phishing.
  • Lack of encryption. Some wearables are known to send and receive data to or from the app in plain text. It’s highly likely that smart jewelry is doing this, too.
  • Lack of or incomplete privacy policy. Some smart jewelry manufacturers make clear what they do with information they collect from users visiting their website. Yet they hardly mention what they do with the more personal data they receive from their wearables and app. Their privacy policies seldom say what is being collected, when it is collected, what it will be used for, or how long it will be kept.
  • Insecure session. Users access their smart jewelry via its app, and the app saves user accounts. Account-based management is at risk if sessions are handled insecurely. Attackers would be able to guess user accounts to hijack sessions or access data belonging to the user.

It’s also important to note that, unlike smartphones and other mobile devices, smart jewelry owners have no way of tracking their wearable jewelry should they accidentally misplace or lose it.

How smart jewelry manufacturers are addressing challenges

The European Union’s introduction of the General Data Protection Regulation (GDPR) has created a tsunami effect on organizations across industries worldwide. Manufacturers of wearable devices are no exception. Owners of smartwatches, smart wristbands, and other wearable gadgets may already have noticed some tweaking to the privacy policies they agreed to—and this is a good thing.

When it comes to security and privacy, much to the surprise of many, they are not entirely absent from smart jewelry. Manufacturers recognize that wearables can be used to secure data and accounts. They also understand that their wearables need to be secured. And a small number of organizations are already taking steps.

Motiv, the example we used in our introductory narrative, has already incorporated in their devices biometric and two-factor authentication schemes, which they recently revealed in a blog post. The Motiv Ring now includes a feature called WalkID, a verification process that monitors a wearer’s gait. It runs continuously in the background, which means WalkID regularly checks for the wearer’s identity. The ring can also now serve as an added layer of protection to online accounts that are linked to it. In the future, Motiv has promised its users password-free logins, fingerprint scanning, and facial recognition.

Diamonds—and data—are forever

It was in January of this year that Ringly, a pioneer smart jewelry company, bid farewell to the wearable tech industry (probably for good) after only four years. Although it wasn’t revealed why, one mustn’t take this as a sign of a dwindling future ahead for wearable jewelry. On the contrary, many experts forecast an overwhelmingly positive outlook on wearable tech. However, the wearables industry must make a concerted effort to address the many weaknesses found in modern smart jewelry.

So, should you bite the bullet and splurge on some smart jewelry?

The answer still depends on what you need it for. And if you’re seriously intent on getting one, remember there are security measures you can take to minimize those risks. Regularly updating the app and the firmware, taking advantage of additional authentication modes if available, using strong passwords, never sharing your PIN, and turning Bluetooth off when not needed are just some suggestions.

How you choose from the available smart jewelry options plays a key role in safety, too. Make sure that you select a brand that takes security seriously and shows this by continuously improving on the flaws and privacy concerns we mentioned above. First-generation tech is always insecure. What consumers must look out for are future improvements, not just to the look and functionality, but also to how it protects itself and your data.

Lastly, it’s okay to wait. Seriously. You don’t have to have the latest smart ring, necklace, or bracelet if it doesn’t take care of your data or leaves you open to hackers. It would be wise to settle for other alternatives that would address your needs, first and foremost, and make it coordinate with your attire second. After all, the smart jewelry industry is relatively young, so it still has a long way to go. And with every advancement, we can only hope that smart jewelry comes with beefier security measures and privacy-friendly policy implementations.

As for wearables in the business environment—well, that’s another story.

The post My precious: security, privacy, and smart jewelry appeared first on Malwarebytes Labs.

Categories: Techie Feeds

TrickBot takes over as top business threat

Malwarebytes - Wed, 11/14/2018 - 15:00

Last quarter brought with it a maddening number of political ads, shocking and divisive news stories on climate change and gun laws, and mosquitoes. We hate mosquitoes. In related unpleasant news, it also apparently ushered in an era of banking Trojans that, as of this moment, shows no signs of slowing down.

First it was Emotet. But over the last couple months, Emotet has had some stiff competition. In fact, this already dangerous threat went quiet for a few weeks in October while its authors made tweaks to up the ante, adding a new module that exfiltrates email. Why? Because there’s a newer, more sophisticated banking Trojan in town attempting to penetrate business networks and giving Emotet a run for its money.

And its name is TrickBot.

TrickBot has now overtaken Emotet as our top-ranked threat for businesses, with an uptick in activity especially over the last 60 days. It’s hitting North America the hardest, with Europe, the Middle East, and Africa (EMEA) coming in a distant second. The latest surge in TrickBot activity has even prompted the National Cyber Security Centre in the UK to issue an advisory warning to organizations to prepare for attacks by immediately implementing mitigation protocols.

TrickBot features

The authors of TrickBot are agile and creative, regularly developing and rolling out new features, which is what makes this particular banking Trojan so dangerous. It’s likely why Emotet is locked in an arms race of sorts with TrickBot, competing for market share with increasingly sophisticated methods of attack and propagation.

So, what is TrickBot? Developed in 2016, TrickBot is one of the more recent banking Trojans on the market, with many of its original features inspired by Dyreza, another banking Trojan that acts as a data stealer. Besides targeting a wide array of international banks via webinjects, TrickBot can also harvest emails and credentials using the Mimikatz hacking tool. Additional parlor tricks include stealing from Bitcoin wallets.

TrickBot comes in modules accompanied by a configuration file. Each module has a specific task, such as gaining persistence, propagation, stealing credentials, encryption, and so on. The C&Cs are set up on hacked wireless routers.

Infection vectors

TrickBot is typically spread via malicious spam (malspam) campaigns—for example, spear phishing emails disguised as unpaid invoices or requests to update account information.

Example malspam distributing TrickBot

Other methods of propagation include embedded URLs and infected attachments, such as Microsoft Word documents with macros enabled. TrickBot is also seen as a secondary infection dropped by Emotet.

Malicious document with macro

And, because those stolen NSA exploits keep proving their worth, once it has infected a single endpoint, TrickBot can then spread laterally through the network using the SMB vulnerability (MS17-010) and either the EternalBlue, EternalRomance, or EternalChampion exploit.

Clever girl.

Impact to businesses

 The endpoint user will not notice any symptoms of a TrickBot infection. However, a network admin will likely see changes in traffic or attempts to reach out to blacklisted IPs and domains, as the malware will communicate with TrickBot’s command and control infrastructure to exfiltrate data and receive tasks.

TrickBot gains persistence by creating a Scheduled Task. And due to the way it uses the SMB vulnerability to spread through a company’s network, any infected machine on the network will re-infect machines that had been previously cleaned when they rejoin the network.
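On the persistence side, defenders can quickly audit scheduled tasks for suspicious entries—such as tasks launching executables out of %APPDATA%—using the built-in schtasks utility. Here’s a rough Python sketch; the %APPDATA% heuristic is a generic starting point, not a TrickBot-specific signature, and the “Task To Run” column name assumes an English-language Windows:

    import csv
    import io
    import subprocess

    # Dump all scheduled tasks in verbose CSV form via the built-in schtasks.exe
    out = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout

    for row in csv.DictReader(io.StringIO(out)):
        action = row.get("Task To Run", "") or ""
        # Flag tasks that launch something from a user's AppData folder
        if "appdata" in action.lower():
            print(row.get("TaskName"), "->", action)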

Therefore, IT teams need to isolate, patch, and remediate each infected system one-by-one. This can be a long and painstaking process that’s costly on time and resources. Much like with ransomware attacks, the best protection against a threat like TrickBot is to proactively prevent infection in the first place. However, that’s not always possible.

Prevention

According to a 2018 study conducted by Osterman Research on the true cost of cybercrime, 53 percent of US organizations have experienced a phishing attack in the last year. Another 23 percent have witnessed spear phishing attacks. There’s a reason why this attack vector is so popular: It’s still incredibly effective.

On average, four percent of the targets in any given phishing campaign will click on it. But here’s the kicker: the more phishing emails someone has clicked, the more likely they are to click on them again. And even worse—although most folks don’t click on phishing emails, only 27 percent actually report them. In a business environment, this is no bueno.

Think it’s time for organizations to educate their employees on how to spot phishing attempts? We sure do. One of the easiest ways to stop threats like TrickBot, which are initially spread via malspam/phishing campaigns, is to train users to spot that suspicious email from a mile away, even when it’s cloaked in reputable company logos. In addition, teaching users about the importance of immediately reporting a suspicious email to the correct teams will help to reduce the amount of time to detect and respond to phishing attacks.

But the training shouldn’t stop there. Work closely with individuals with access to the most sensitive data, since they will likely be targeted, and provide role-specific education. Show them how to spot a spear phish and emphasize the importance of healthy skepticism. They are the gatekeepers, and they must be on guard.

It’s not just end users who need a little (phish)boning up. Train the responders alongside them. Test their ability to detect a campaign, identify potentially infected hosts, determine which actions were taken on compromised machines, and confirm whether or not data exfiltration took place.

If you periodically drill your employees and response teams and provide follow-up seminars, especially for those who fail to identify or report phishes, your organization will be able to either successfully fend off a phishing attack or limit the impact of a successful phish.

Of course, increasing employee awareness isn’t the only prevention method. Patching for the SMB vulnerability can keep TrickBot and other threats that use these exploits from spreading laterally through the network. And using a comprehensive cybersecurity solution that blocks exploits can keep endpoints from getting infected.

Remediate

Malwarebytes can detect and remove TrickBot on business endpoints without further user interaction. But to be effective on networked machines, you must first follow these steps:

  1. Identify the infected machine(s). If you have unprotected endpoints/machines, you can run Farbar Recovery Scan Tool (FRST) to look for possible Indicators of Compromise (IOC). Besides verifying an infection, FRST can also be used to verify removal before bringing an endpoint/machine back into the network.
  2. Disconnect the infected machines from the network.
  3. Patch each endpoint for MS 17-010.
  4. Disable administrative shares. Windows Server by default installs hidden share folders specifically for administrative access to other machines. The Admin$ shares are used by TrickBot once it has brute forced the local administrator password. A file share server has an IPC$ share that TrickBot queries to get a list of all endpoints that connect to it. These admin shares are normally protected via UAC; however, Windows will let the local administrator through without a prompt. The most recent TrickBot variants use C$ with the admin credentials to move around and re-infect all the other endpoints. It is recommended to disable these Admin$ shares via the registry, as discussed in this Microsoft support article (see the registry sketch after this list). If you do not see this registry key, it can be added manually and set to disabled.
  5. Remove the TrickBot Trojan.
  6. Change account credentials. Repeated re-infections are an indication that the worm was able to guess or brute force the administrator password successfully. Please change all local and domain administrator passwords.
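For step 4, here is a minimal Python sketch of that registry change (run as administrator). The value names—AutoShareWks for workstations, AutoShareServer for servers—follow the Microsoft guidance referenced above, but verify them against that article before rolling anything out:

    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # 0 disables the automatic administrative shares; restart the Server
        # service (or reboot) for the change to take effect
        winreg.SetValueEx(key, "AutoShareWks", 0, winreg.REG_DWORD, 0)
        winreg.SetValueEx(key, "AutoShareServer", 0, winreg.REG_DWORD, 0)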
A threat for the ages

 TrickBot has proven itself to be a most tricky foe, but that doesn’t mean organizations should run screaming. Its modular structure, use of the SMB exploit, and simple, yet sophisticated attack vector (malspam/phishing campaigns) make it dangerous, yes, but we have proven methods for protecting against each of these traits. With dedicated employee training for phishing awareness, a cybersecurity solution that protects against exploits, and some good old-fashioned patching, you can not only keep individuals from losing out on productivity for the week—you can keep your whole network afloat.

And in the age of TrickBot, that’s no small feat.

The post TrickBot takes over as top business threat appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Secret Sister scam returns in time for Christmas

Malwarebytes - Tue, 11/13/2018 - 18:55

The festive season may be imminent, but it’s a Facebook Secret Sister (not Santa) you have to steer clear of. Secret Sister has been a mainstay of Yuletide scams since at least 2015, and has come back around once more. But what is it?

Your office probably has a Secret Santa scheme in place. You draw names from a hat, and you secretly buy the named person a gift. It’s all pretty straightforward, and a great source of unwanted deodorants and novelty kitchenware. Secret Sister isn’t quite as nice, and could drop you in a great deal of trouble. You probably won’t even get your hands on the deodorant.

How the scam works

Usually, chain letters of the Secret Sister variety are jammed through your front door. In this case, the chain letter lands in your digital mailbox as opposed to your real one. You could in theory receive one of these anywhere, and people have reported receiving them everywhere from Reddit and Facebook to various social portals and forums. For whatever reason, Facebook seems to be the scammer’s favourite place to get the ball rolling on this particular scam. The possibility of being able to send it pinging around large social connection chains is too good to resist.

Secret Sister sample 

The messages can vary wildly, but one of the most popular ones going back a year or so reads as follows:

Anyone interested in a Holiday Gift exchange? I don’t care where you live – you are welcome to join. I need 6 (or more) ladies of any age to participate in a secret sister gift exchange. You only have to buy ONE gift valued at $10 or more and send it to one secret sister and you will receive 6-36 in return!

Let me know if you are interested and I will send you the information!

Please don’t ask to participate if you are not willing to spend the $10.

TIS THE SEASON! and its getting closer. COMMENT if You’re IN and I will send you a private message. Please don’t comment if you are not interested and aren’t willing to send the gift!

It might sound promising to many people reading it, but it really won’t do you much good.

From chains to pyramids

Chain letters are essentially pyramid schemes, which funnel money from the bottom of the pyramid to the top, benefiting those at the top and few others. If you’re there from the get-go, your chances of making a good return increase somewhat. For everyone else, you’re probably going to lose out.

Where this becomes complicated, at least in the US, is that these schemes tend to resemble gambling. This means you could easily end up breaking the law. From the US Postal Inspectors website:

They’re illegal if they request money or other items of value and promise a substantial return to the participants. Chain letters are a form of gambling, and sending them through the mail (or delivering them in person or by computer, but mailing money to participate) violates Title 18, United States Code, Section 1302, the Postal Lottery Statute

Secret Sister data harvesting

You definitely won’t receive a pile of free gifts. However, you could be dragged into some sort of dubious postal scam with mail fraud penalties instead. There’s also the risk of identity theft to consider. Mail fraud scammers typically ask for various pieces of personal information. You could end up handing them your name, address, and phone number, alongside a variety of online profiles to tie them to. This could be all an enterprising criminal needs to do some additional damage, especially if they persist in branching out from your profile to those of your friends.

No matter how appealing the prospect of easy free gifts sounds as 2018 slowly draws to a close, don’t fall for it. These types of antics have been around for a long time, and moving into the digital realm doesn’t make them any safer. If you’re not based in the US, you may not have the legal worry to deal with as a result, but that’s scant consolation.

Our advice is to stick to Secret Santa, and give his sister nothing more than a Return to Sender.

The post Secret Sister scam returns in time for Christmas appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (November 5 – November 11)

Malwarebytes - Mon, 11/12/2018 - 17:17

Last week on Malwarebytes Labs, we looked at browser lockers that fly under the radar with complete obfuscation, transport and logistics in our series about compromising vital infrastructure, Google logins now requiring JavaScript, how to create a sticky cybersecurity training program, and an introduction to Process Hacker.

Other cybersecurity news
  • Dutch police have achieved a breakthrough in intercepting encrypted messages between criminals. (Source: TellerReport)
  • If terrorists launch a major cyberattack, we won’t see it coming. (Source: The Atlantic)
  • Why are fake Elon Musk bitcoin scams running rife on Twitter right now? (Source: ZDNet)
  • VirtualBox Guest-to-Host escapes zero-day and exploit released online. (Source: HelpNetSecurity)
  • Microsoft releases info on protecting BitLocker from DMA attacks. (Source: BleepingComputer)
  • Further protections from harmful ad experiences on the web. (Source: Chromium blog)
  • Android users now face forced app updates, thanks to Google’s new dev tools. (Source: ZDNet)
  • Active exploitation of newly patched ColdFusion vulnerability (CVE-2018-15961). (Source: Volexity Threat Research)
  • Spammers hack 100,000 home routers via UPnP vulns to craft email-flinging botnet. (Source: The Register)
  • Google: newer Android versions are less affected by malware. (Source: Security Shelf)

Stay safe, everyone!

The post A week in security (November 5 – November 11) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

What’s new in TrickBot? Deobfuscating elements

Malwarebytes - Mon, 11/12/2018 - 15:00

Trojan.TrickBot has been present in the threat landscape for quite a while. We wrote about its first version in October 2016. From the beginning, it was well-organized, modular malware, written by developers with mature skills. It is often called a banker; however, its modular structure allows its authors to freely add new functionality without modifying the core bot. In fact, the functionality of a banker is represented by just one of its many modules.

With time, the developers extended TrickBot’s capabilities by implementing new modules—for example, one for stealing Outlook credentials. But the evolution of the core bot, which is used for the deployment of those modules, was rather slow. The scripts written to decode modules from the first version worked until recent months, showing that the encryption scheme used to protect them stayed unchanged.

October 2018 marks the end of the second year since TrickBot’s appearance. Possibly the authors decided to celebrate the anniversary with a makeover of some significant elements of the core.

This post will be an analysis of the updated obfuscation used by TrickBot’s main module.

Behavioral analysis

The latest TrickBot starts by disabling Windows Defender’s real-time monitoring, which it does by deploying a PowerShell command:

After that, we can observe the behaviors typical of TrickBot.

As before, the main bot deploys multiple instances of svchost, where it injects the modules.

Persistence is achieved by adding a scheduled task:

It installs itself in %APPDATA%, in a folder with a name that depends on the bot’s version.

Encrypted modules are stored in the Data folder (old name: Modules), along with their configuration:

As it turns out, the encryption of the modules has recently changed (and we had to update our decoding scripts).

The new element in the main installation folder is the settings file, which comes under various names that seem to be randomly chosen from a hardcoded pool. Its most commonly occurring name is settings.ini (hardcoded), but there are other variants, such as profiles.ini, SecurityPreloadState.txt, and pkcs11.txt. The format of the file is new for TrickBot:

We can see many strings that at first look scrambled or encrypted. As it turns out, they are junk entries added for obfuscation. The real configuration is stored in between them, in a string that looks base64 encoded. Its meaning will be explained further on in this post.

Inside

In order to better understand the changes, we need to take a deep dive into the code. As always, the original sample comes packed—this time there are two layers of protection to be removed before we get to the main bot.

The main bot comes with two resources, RES and DIAL, which are analogous to the resources used before.

RES – an encrypted configuration file in XML format. It is encrypted in the same way as before (using AES, with a key derived via hashing rounds), and we can decode it using an old script: trickbot_config_decoder.py. (Mind that the first DWORD in the resource is a size, not part of the encrypted data, so it needs to be removed before using the script.)

DIAL – an elliptic curve public key (ECC curve p-384) that is used to verify the signature of the aforementioned encrypted configuration after it is decrypted.

Obfuscation

In the first edition, TrickBot was not obfuscated at all—we could even find all the strings in the clear. Over its two years of evolution, this has slowly changed. Several months ago, the authors decided to obfuscate all the strings using a custom algorithm (based on base64). All the obfuscated strings are aggregated in a single hardcoded list:

When any of them is needed, it is selected by its index and passed to the decoding function:

Example – string fetched by the index 162:

The deobfuscation process, along with the utility used, was described here. Because the API of the decoding functions hasn’t changed since then, the same method can still be used today. The list of deobfuscated strings extracted from the currently analyzed sample can be found here.

Additionally, we can find other, more common methods of string obfuscation. For example, some strings are divided into chunks, one DWORD each:

The same method was used by GandCrab, and can be deobfuscated with the following script.
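For illustration, the core of such a deobfuscator is simply reassembling consecutive little-endian DWORDs into bytes. A minimal reconstruction of the idea (not the linked script itself):

    import struct

    def dwords_to_string(dwords):
        """Reassemble a string stored as consecutive DWORD immediates."""
        raw = b"".join(struct.pack("<I", d) for d in dwords)
        return raw.rstrip(b"\x00").decode("ascii")

    # Hypothetical values, as they might appear in `mov [ebp+var], imm32` chunks
    print(dwords_to_string([0x6C6C6548, 0x6F]))  # -> "Hello"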

Similarly, the Unicode strings are divided:

Most of the imports used by TrickBot are loaded dynamically. That makes static analysis more difficult, because we cannot directly see the full picture: the pointers are retrieved just before they are used.

We can solve this problem in various ways, e.g., by adding tags with an automated tracer. The CSV/tags file created for one of the analyzed samples is available here (it can be loaded into the IDA database with the help of the IFL plugin).

The picture below shows a fragment of TrickBot’s code after the tags are loaded. As we can see, the addresses of imported functions are retrieved from an internal structure rather than from the standard Import Table, and are then called via registers.

Apart from the obfuscation methods mentioned above, TrickBot has been evolving in the direction of string randomization. Many strings that were hardcoded in the initial versions are now randomized or generated per victim machine—for example, the mutex name:

Used encryption

In the past, modules were encrypted with AES in CBC mode. The key used for encryption was derived by hashing the initial bytes of the buffer. Knowing the algorithm, we could easily decrypt the stored modules along with their configuration.

In the recent update, the authors decided to complicate things a bit. They didn’t change the main algorithm, but introduced an additional XOR layer. Before the data is passed to AES, it is first XORed with a 64-character, dynamically generated string that we will refer to as the bot key:

The bot key is generated per victim machine. First, the GetAdaptersInfo function is used:

The retrieved structure (194 bytes) is hashed with SHA256, and then the hash is converted into a string:

The reconstructed algorithm to generate the Bot Key (and the utility to generate the keys) can be found here.
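For reference, here is a minimal Python sketch of the scheme as described above. Two details are assumptions on my part—the uppercase hex encoding of the hash (matching the settings example below) and the cyclic application of the key—so rely on the linked utility for the reconstructed algorithm:

    import hashlib

    def make_bot_key(adapter_info: bytes) -> str:
        # SHA256 of the structure returned by GetAdaptersInfo, rendered as a
        # 64-character hex string (uppercase, as seen in the dropped settings)
        return hashlib.sha256(adapter_info).hexdigest().upper()

    def xor_layer(data: bytes, bot_key: str) -> bytes:
        # XOR the buffer with the bot key before it is passed to AES,
        # repeating the 64-character key cyclically (assumed)
        key = bot_key.encode("ascii")
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))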

This key is then stored in the dropped settings file.

Encoding settings

As mentioned before, new editions of TrickBot drop a new settings file containing some encoded information. An example of the information stored in the settings:

0441772F66559A1C71F4559DC4405438FC9B8383CE1229139257A7FE6D7B8DE9 1085117245 5 6 13

The elements:

1. the BotKey (generated per machine)

2. a checksum of a test string (bytes 0–256 encoded with the same charset), used for charset validation

3. three random numbers

The whole line is base64 encoded using a custom charset, which is generated based on the hardcoded one: “HJIA/CB+FGKLNOP3RSlUVWXYZfbcdeaghi5kmn0pqrstuvwx89o12467MEDyzQjT”.

Yet even at this point, we can see the authors’ effort to avoid repeatable patterns: the last eight characters of the charset are swapped randomly. The pseudocode of the generation algorithm:

Randomization of the n characters:

Example of the transformation:

inp: “HJIA/CB+FGKLNOP3RSlUVWXYZfbcdeaghi5kmn0pqrstuvwx89o12467MEDyzQjT”

out: “HJIA/CB+FGKLNOP3RSlUVWXYZfbcdeaghi5kmn0pqrstuvwx89o12467jDEzTyQM”

The decoder can be found here: trick_settings_decoder.py
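For illustration, the encoding side might look like the sketch below: standard base64 followed by a character remap into the custom alphabet, with the tail of the charset shuffled. This is a reconstruction from the description above, not TrickBot’s actual code (padding is left untouched here); see trick_settings_decoder.py for the working decoder:

    import base64
    import random

    STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
    HARDCODED = "HJIA/CB+FGKLNOP3RSlUVWXYZfbcdeaghi5kmn0pqrstuvwx89o12467MEDyzQjT"

    def randomize_charset(charset: str, n: int = 8) -> str:
        # Swap the last n characters randomly, per the pseudocode above
        tail = list(charset[-n:])
        random.shuffle(tail)
        return charset[:-n] + "".join(tail)

    def encode_custom(data: bytes, charset: str) -> str:
        # Standard base64 first, then remap each character into the custom alphabet
        std = base64.b64encode(data).decode("ascii")
        return std.translate(str.maketrans(STD, charset))

    print(encode_custom(b"test", randomize_charset(HARDCODED)))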

Slowly improving obfuscation

The authors of TrickBot have never cared much about obfuscation. With time, they slowly started introducing elements of it, but apart from some twists, it’s still nothing really complex. We can expect that this trend will not change rapidly, and that after updating the scripts for the new additions, decoding TrickBot’s elements will be as easy for analysts as it was before.

It seems that the authors believe in success based on quantity of distribution rather than on attempts at being stealthy in the system. They also focus on constantly adding new modules to diversify the functionality (recently, for example, they added a new module for attacking point-of-sale systems).

Scripts

Updated scripts for decoding TrickBot modules for malware analysts:
https://github.com/hasherezade/malware_analysis/tree/master/trickbot

Indicators of compromise

Sample hash:

9b6ff6f6f45a18bf3d05bba18945a83da2adfbe6e340a68d3f629c4b88b243a8

 

The post What’s new in TrickBot? Deobfuscating elements appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Advanced tools: Process Hacker

Malwarebytes - Fri, 11/09/2018 - 16:16

Process Hacker is a very valuable tool for advanced users. It can help them troubleshoot problems or learn more about specific processes running on a given system. It can also help identify malicious processes and tell us more about what they are trying to do.

Background information

Process Hacker is an open source project, and the latest version can be downloaded from here. The site also provides a quick overview of what you can do with Process Hacker and what it looks like. At first sight, Process Hacker looks a lot like Process Explorer, but it has more options. And since it is open source, you can even add some of your own if you are able and willing.

Installing

If you just want to start using the tool it’s as easy as downloading and running the installer:

The current version at the time of writing is 2.39.

During the installation process you will see a few options:

  • Accept the EULA.
  • Choose the destination folder for the program.
  • Select the components you wish to install. They don’t require a lot of space, so there is no need to be picky.
  • Review the additional options. These are worth studying and depend on how you plan to use Process Hacker; most can be changed afterwards, though.

And you are done!

The main screen

When you run Process Hacker, this is the main screen.

In the default settings, it shows you the Processes tab with all the running processes in tree view, and lists:

  • their Process Identifier (PID)
  • their CPU usage percentage (CPU)
  • their I/O total rate
  • their private bytes
  • the username the process is running under
  • a short description

By hovering over one of the process names, you can find more information about it.

The other tabs are Services, Network, and Disk. The last two show more information about the processes’ network and disk usage, respectively. The Services tab shows a full list of present services and drivers.

The meaning of the color-coding of the processes can be found, and changed, under Hacker > Options, on the Highlighting tab.

The option to Replace Task Manager with Process Hacker can be toggled under Hacker > Options, on the Advanced tab.

Releasing handles

One very useful option that Process Hacker has to offer is that it can help you delete those files that just don’t want to go away because they are “in use by another process”.

Process Hacker can help you identify that process and break the tie. Here’s the procedure:

  • In the main menu click on Find handles or DLLs
  • In the Filter bar type the full name of the file or a part of that name, then click on Find
  • In the results look for the exact filename and right-click that line
  • From the right-click menu choose Go to owning process
  • The process will be highlighted in the Processes window
  • Right-click the highlighted process and choose Terminate
  • Consider the warning in the prompt that data might be lost and be aware that Process Hacker can close processes where other task managers might fail
  • If you choose to terminate the process you can try deleting the locked file again

 

Escaping browlocks

Process Hacker can also help you escape some of the sites that use browlock tactics, for example sites that use this login script:

The goal of the threat actors is to get the visitor of the site to first allow notifications and, in the end, to install one of their extensions. Malwarebytes generally detects these extensions as PUP.Optional.ForcedInstalledExtensionFF.Generic. You can find some more details about these extensions in this removal guide for a nameless extension hailing from browser-test.info.

Closing such a tab in Firefox would normally require you to shut down Firefox completely, losing all the other open tabs (or, if you have Restore previous session enabled, getting them all back, including the browlock). With Process Hacker, you can instead look at the Network tab:

Once you have found the guilty party, select the line that shows the connection to the IP or domain of the browlock site. I used tracert to determine the IP for ffkeitlink.cool. Right-click that line and choose Close; this will temporarily break the connection, stopping the script from refreshing the prompt all the time. That gives you the chance to close the tab and carry on without having to force-close the Firefox process.

Dumping strings from memory

You can use Process Hacker to create memory dumps of processes. Analysts search these dumps for strings, and they use scripts or Yara rules to make an initial classification of a process. Is it malware? If so, what kind of malware? Is it after information? If so, which information, and where does it send it?

Right-click on the running process you wish to create a memory dump for, select Create dump file…, and then use the Explorer window to browse to the location where you want to save the dump. The memory dump created by Process Hacker will have the .dmp extension. Depending on what you are looking for, the .dmp file can be opened in a hex editor, a text editor, or mimikatz (if you are after credentials).
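If you prefer to script the string hunting, a few lines of Python will pull printable ASCII runs out of a dump, much like the Unix strings utility. The file name below is just an example:

    # Extract printable ASCII strings from a memory dump for quick
    # triage, similar to the Unix `strings` utility.
    import re

    def dump_strings(path: str, min_len: int = 6):
        with open(path, "rb") as f:
            data = f.read()
        for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
            yield m.group().decode("ascii")

    for s in dump_strings("memory.dmp"):  # example path
        print(s)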

You can create the same memory dumps with Process Explorer, but Process Hacker also includes .NET support, which Process Explorer does not.

Finding resource hogs

As with most programs of this kind, identifying resource hogs is easy.

Click the CPU title above the column, and the column will be sorted by CPU usage, showing you whether a process is slowing you down, and which one. The same can be done for Private bytes and I/O total rate.

Versatile and powerful

Process Hacker is a very versatile tool that has a lot in common with Process Explorer. Since it does have a few more options and is more powerful than Process Explorer, advanced users may prefer Process Hacker. Less advanced users who are afraid of the possible consequences of the extra power might want to stick with Process Explorer. If you require help with Process Hacker they have a fairly active forum where you might find a helping hand.

Notes:

  • Some AVs flag Process Hacker as riskware or potentially unwanted because it is able to terminate many processes, including some that belong to security software. Malwarebytes does not detect Process Hacker as malicious or potentially unwanted.
  • The mbamservice process cannot be permanently stopped by Process Hacker if you have the self-protection module enabled. You can find this setting under Settings > Protection tab > Startup options.

The post Advanced tools: Process Hacker appeared first on Malwarebytes Labs.

Categories: Techie Feeds

How to create a sticky cybersecurity training program

Malwarebytes - Thu, 11/08/2018 - 17:00

Organizations know that training employees on cybersecurity and privacy is not only expensive but also time-consuming. However, given that current threats are targeting businesses more than consumers, introducing and teaching cybersecurity and privacy best practices in the workplace has undoubtedly become an absolute must.

Creating a successful training program is a massive undertaking. It doesn’t just require one to grab “Cybersecurity 101” material from the Internet, stuff it into a PowerPoint presentation, and expect trainees to understand what’s at stake, let alone change unwanted behaviors. It’s more thoughtful and systematic than that.

Putting together a cybersecurity and privacy training program that is not only effective but also sticks requires an incredible amount of time, effort, and thought: finding out employees’ learning needs, planning, creating goals, and identifying where they want to go. Without these, imparting knowledge on cybersecurity and privacy would be ineffective, not to mention costly.

We’re past asking the wrong questions (“Where do I start?”) at this point. But if you’re still weighing your options on whether your organization should do an awareness campaign instead of a full-blown training or education program, we’ll help make it easier for you. Although awareness, training, and education are used interchangeably or compounded together in this field, in reality, these three are entirely different at their core.

Awareness, training, and education

Awareness is at the heart of what we do here at Malwarebytes Labs. We impart learnings by writing about matters related to cybersecurity, including news, opinion pieces, thought leadership, malware analyses, business and consumer guides, and quarterly threat reports. But for companies aiming to solidify specific security skills that employees need to function, awareness might not be enough.

A security awareness campaign aims to make employees realize that particular actions or responses toward, say, an email of questionable origin could actually be dangerous. The National Institute of Standards and Technology (NIST) defines awareness, training, and education as follows:

  • Awareness is not training. The purpose of awareness is simply to focus attention on security. Awareness presentations are intended to allow individuals to recognize IT security concerns and respond accordingly.
  • Training strives to produce relevant and needed security skills and competencies.
  • Education integrates all of the security skills and competencies of the various functional specialties into a common body of knowledge . . . and strives to produce IT security specialists and professionals capable of vision and pro-active response.

The IT Security Learning Continuum as described by NIST, which can serve as an excellent backbone for organizations in creating their own cybersecurity learning framework.

NIST also recognizes that awareness can be the foundation of training. That said, organizations may not have to pick one over the other. They can start with awareness, then training, then education, should they choose to produce specialists. In this blog post, we’ll focus solely on the middle ground: training, from its conception to follow-through.

Why training doesn’t work: Let us count the ways

While training is an excellent approach for building security literacy in an organization, it doesn’t always work out as planned. Reasons why training may be ineffective include: (1) trainees didn’t learn much from the program, (2) trainees didn’t change their behavior after the program, and (3) trainees’ learnings aren’t retained for long. Let’s also list a few additional points to keep in mind before putting together a training program, to keep our expectations in check.

Training is done once a year. Traditionally, organizations hold training sessions once per year, but when it comes to cybersecurity, this would have to change dramatically. It’s important to hold regular sessions, especially for new hires, to stay on top of any new tactics cybercriminals are adopting that might affect the business. Also, organizations must keep in mind that cybersecurity best practices shouldn’t be the only discipline they should train employees on. They should also pay particular attention to addressing insider threats and workplace violence.

Training is optional. Organizations that aren’t serious about cybersecurity but want to tick off a box conduct a training session (usually a short one) where employees are merely encouraged to attend but are not required to. With breaches happening regularly across industries, an employee base that understands cybersecurity has become a necessity in every organization. Governments recognized this need long ago and passed legislation to hold organizations accountable for handling and securing assets. Complying with these laws requires training, especially for employees who directly use data assets as part of their work.

Training is only conducted for non-managerial employees. This is another traditional scenario we see in the workplace. Many may not realize this, but a lot of employees in managerial positions are none the wiser when it comes to cybersecurity. Often, they’re unaware of any security measures and policies the organization already has in place. It is therefore crucial for the organization to include management in the training sessions.

Training is boring/is too long/has too much information to digest. In turn, the learnings the training is supposed to impart are not adequately delivered or received. Trainees often feel that there is too much to digest (especially if this is the first time they have heard all of this), and many of them end up confused or with too many questions. IT directors and educators will recognize that glazed-over look.

Keeping training expectations in check

Below is a list of points organizations must always bear in mind to temper their expectations and avoid making decisions that would set themselves up for failure.

  • While there is no such thing as perfect security, there is also no such thing as the perfect training program. The closest thing to it would be a program that has been custom-designed to fit the specific organization by individuals with a solid background in cybersecurity.
  • Training isn’t the silver bullet that cures all cybersecurity ailments in an organization. Breaches can still happen.
  • Training won’t change your workforce overnight. Change is difficult, and often, not welcomed by employees. So, expect a level of resistance, hang in there, and overdose on repetition.
  • Training employees doesn’t make them experts. But at the very least, organizations must aim at making every employee competent enough in cybersecurity to do their job well while consistently exercising safe computing habits.
Tips for developing an effective cybersecurity training program

PRE-TRAINING

This is the preparation stage where much of the thinking, brainstorming, and planning takes place. The work here requires a tremendous amount of time, and some organizations may find the wait a hard pill to swallow while breaches are happening left and right. At the end of the day, though, companies will realize that pre-planning is worth the wait.

  • Get executive buy-in or sponsorship. In an organization, it’s challenging—if not impossible—for any program to start and achieve momentum if executives don’t support it. So, getting their blessing before doing anything else should be a priority. Furthermore, executives can help direct and shape the future of the company’s overall cybersecurity learning program.
  • Create a training task force. Once approved, the organization must then create a task force, ideally comprised of representatives from different departments, including upper management. The task force’s size should be relative to the size of its organization. It’s essential to have an exclusive group responsible for the organization’s cybersecurity learning needs not only to assume accountability but also to adequately address a need and enforce the program.  At this stage, the organization can also invite an external third-party as a consultant to be part of the task force.
  • Get to know your workforce and the company cultures influencing them. One of the first duties of the task force is to assess their potential trainees (including personnel at the managerial level) and find out what motivates them, how they learn, and what they need to learn.
  • Create a roadmap that includes the results they want to aim for. The task force can now use the information from their assessment to design the training program and decide how content can best be presented for maximum learning. It would also be wise to identify bad computing behaviors they have observed among employees and aim to change them. Note: Regardless of how the organization decides to train employees, the training should be based on teaching cybersecurity and privacy best practices and local and international security standards.
  • Prioritize. Once the task force has identified bad behaviors that need to change, the team must then determine the top three they want to address. Doing so avoids information overload. For example: Since every employee in an organization handles emails, the team can focus on introducing risks one may find in their inboxes, identifying risk indicators, highlighting the current insecure practices of employees, and then providing changes to these behaviors that, in turn, mitigate the risks.

Read: Insider threats in your work inbox

  • Decide on the best training approach or strategies to use for your target trainees. There is no one-size-fits-all approach here. The task force can use the traditional classroom-style method, computer-based training (or e-learning), or a combination of the two—otherwise known as blended learning. As a rule of thumb, e-learning is the ideal method for learnings that may need repetition (especially for new hires). Someone in the task force familiar with adult learning methodologies could significantly contribute to further optimize the learning method. For example, the group can plan on solidifying learning weeks after the initial training by having learners go through simulations. Any failure in the simulations is a new learning opportunity.
  • Communicate. This is the part where the task force should socialize the organization’s plans for training the staff, why this is important, what the organization is trying to achieve, and how the training will be conducted. Get buy-in from stakeholders and official sign-off from executives before the first email goes out alerting employees about the upcoming training and having them block off their calendars. The task force can also help employees by sending out regular notices of upcoming training details (or any changes to them) as the date approaches.
  • Prep trainees with an online course before training. Should the task force decide to take the blended learning route, the task force may want to consider preparing employees by having them undergo an online course as a primer to the training proper. The course could be an introduction to terms they may likely encounter during the training but not have heard of, such as business email compromise (BEC). This way, trainees would be able to pick up the jargon used in the training without having to look up acronyms in the dictionary.
PERI-TRAINING

Now that organizations have identified which employee behaviors they want to focus on improving, and they know how to effectively impart the training, it’s time to deliver the training to employees. The work doesn’t stop here.

  • Think of the trainer as a facilitator, not a lecturer. The person presiding over the program should encourage interaction and openness, as employees retain knowledge better as active participants.
  • Make the training environment open and friendly to allow every trainee to speak up without being forced or put on the spot.
  • Make training more interactive. In this age, there are more creative ways to present information than clicking through PowerPoint slides. Videos, voiceovers and/or podcasts, articles, and infographics are at an organization’s disposal. They can also participate in live demos, role-playing, and simulations where possible.
  • To help trainees retain learnings, use mid-course quizzes, exercises, and post-completion summaries.
  • Consider using an online platform for final exams. In this way, companies can also integrate gamification techniques to make exams more fun and less stressful.

As a side note, the task force may want to consider allowing an informal talk or chat among the facilitator and trainees outside the training room and into, say, the company break room or the coffee shop just across the road. This chat could be a further discussion about the topic or learnings of the day’s training or an opportunity for trainees to ask questions.

Read: How to create an intentional culture of cybersecurity

POST-TRAINING

At this point, the task force’s work is almost done—but not quite.

Activities that happen after the training shouldn’t be optional. In fact, the post-training stage is as indispensable as the pre-training stage. Without this, the effectiveness of the training will not be measured, feedback won’t be acted upon, and eventually, the program will stagnate. Note, however, that post-training doesn’t just involve monitoring effectiveness but looks to produce continuous improvement of the program and company-wide promotion of sound cybersecurity and privacy behaviors.

  • Implement awareness acknowledgments to foster accountability for all employees. Awarding trainees a completion certificate seems to be a popular end result of security training programs. But what if, instead of or in addition to the certificate, organizations had trainees sign acknowledgment documents to promote accountability? After investing in and conducting an effective training program, companies should expect their workforce to comply with the new cybersecurity and privacy policies in place. Nothing can solidify this message more than to have them put their names down on paper acknowledging that they should now know better.
  • Set up a portal within the intranet that contains training materials for employees to revisit at their own comfort. If possible, allow employees to access these materials on any device, anytime and anywhere, but only with a VPN. The task force can also add more skill-building resources, such as simulations, for employees to practice what they have learned at any given time.
  • Consider creating an FAQ page to anticipate the usual questions from employees. Not all questions are raised and answered during or immediately after training. And often, one employee has the same concern as another. Having an accessible go-to page to address questions about cybersecurity helps trainees get a leg up before the training, and keeps answers to important questions handy long after the program has ended.
  • Create visual cues around the workplace to keep the learnings fresh. Posters, emails, newsletters. Basically, the works.
  • Make reporting of cybersecurity and privacy incidents readily available to employees. At some point in the future, that proverbial “potentially malicious” email could drop into your employee’s work inbox. Being trained, they would realize that something is off and likely take caution when dealing with it. If a simple reporting system is in place, such as a dedicated channel in Slack for cybersecurity and privacy incident reporting, it would be easier for anyone to raise questions or concerns about suspicious emails. For emails that may have been sent by imposters, a flagging and verification process should be in place so employees know how to properly handle such an incident.
  • Update training materials when necessary. Expect this to happen on a regular basis, especially when new and appropriate case studies come up or more effective teaching methods come to light.
Technology and training do go together

All too often, organizations recognize weaknesses in their systems and network, so they focus on putting money into the best software to beef up their defenses. And all too often, organizations also recognize flaws in their employees’ online habits due to lack of training or awareness, yet training on matters relating to cybersecurity and privacy doesn’t happen because of budget and resource constraints.

But while it is crucial to address weaknesses in systems that only technology can solve, organizations must realize that, holistically, their security amounts to more than a firewall or AV. Beyond those, there are people who, while lacking awareness, are a threat to security. For employees to transform from liability to last layer of defense, every person in the organization—from the Chief Executive Officer to the receptionist—must undergo at least one general training on cybersecurity and privacy, which only gets more in-depth and tailored for individual departments. Neglecting this could cost organizations more than just money; their reputation as a service provider, competitive advantage, and overall business success are also at stake.

Theodore Roosevelt once said that knowing what’s right doesn’t mean much unless you do what’s right. He’s right. It’s not enough for employees to know what to do. They need to absorb it and apply what they know, changing any bad behaviors and adopting safer online practices. The lessons need to stick. In order to achieve this, organizations must create a highly effective training program that ensures employees know and understand why that training is essential, and guarantees that employees follow requirements by holding them accountable. These are tried-and-tested ways organizations can hold their own in this era of breaches.

The post How to create a sticky cybersecurity training program appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Google logins: JavaScript now required

Malwarebytes - Wed, 11/07/2018 - 16:00

Google users: In news that may sound alarming, it is now a requirement for you to enable JavaScript.

Why? When your username and password are entered on Google’s sign-in page, Google runs a risk assessment and only allows the sign-in if nothing looks suspicious. Recently, Google went about improving this analysis and now requires JavaScript in order to run their assessment. Want to use some of those comprehensive security enhancements for your account? Then JavaScript must be enabled, or you won’t be able to log in. JavaScript is now your forever friend.

What is JavaScript?

If you use websites such as portals or social media platforms, you likely run into JavaScript all the time. It’s a programming language used for all sorts of interactive effects in games and basic operations like logins. It ticks away in the background alongside cascading style sheets and HTML for a solid browsing experience.

It’s now a core slice of the Google login pie, and you will absolutely have to try a slice.

What has changed?

When using the Google sign-in page, you won’t get any further if you have JavaScript disabled. This could be frustrating for some users, given how much important data can be stored in a Google account. Why has the drawbridge come up? In a nutshell, to keep you safe from the many scams and attacks aimed at Google users.

Google accounts have a whole variety of safety measures to keep would-be compromisers out. If someone manages to obtain your password and tries to sign in as you, Google runs some checks. If they flag certain unusual activity, such as logins from another country, they’ll request additional verification.

Google can’t do any of this without JavaScript up and running, so moving forward you’ll have to switch it on.

Is this a problem?

I mean…no, I don’t think it is. JavaScript shows up in a lot of attacks, and we don’t want anybody becoming complacent. It is, however, possible to impede your own preferred browsing behaviour unnecessarily.

There’s one school of security thinking which is a little like security nihilism. Essentially, everything is a threat and we must reduce the attack surface. Okay, fine. The problem is, for some, this turns into a game of “remove absolutely everything from the device.” At what point do we stop and look in wonder at our expensive, utterly non-functional box?

You probably have JavaScript enabled right now, unless you’re highly security-centric or super keen on having the fastest loading times possible. It’s usually one of the most common complaints related to script blocker extensions. “I blocked them, and nothing works. Now what?”

The Sun has the blocker fired directly into its heart, that’s what. If you want to strip out the functionality of browsers, there is always going to be a price to pay. For example, the earliest ad blocker/script blocker tools often made everything nigh on unusable. Thankfully, ad blockers have stepped up their game and are now part of a healthy, balanced cybersecurity hygiene routine.

Good news, the choice is easy

Google estimates the impact of its new JavaScript requirement is likely to be small—supposedly only 0.1 percent of its users have it switched off. At this point, those users are going to have to make a choice.

This isn’t a stark “one thing or the other” decision. There’s absolutely nothing preventing someone from enabling JavaScript purely for logins, then switching it off afterwards. Yes, there are JavaScript exploits out there, but there’s an exploit for pretty much everything anyway. You are unlikely to hit any sort of trouble switching it on just to sign in.

As was mentioned on the Daily Swig blog, surfers such as those using Tor are likely to be the most impacted. If you’re on Tor and trying to use Google services, you may have to force yourself to switch. If you still won’t use an alternate browser for Googling, you may ultimately have to find another provider.

For everyone else, this is a good thing and will help keep your accounts more secure in the long run.

The post Google logins: JavaScript now required appeared first on Malwarebytes Labs.

Categories: Techie Feeds
