Word Up: Virtual vs. Virtualized Security

I have a pet peeve about virtualization and security, and it happens to be a minor thing of syntax. It comes down to this question: what is the difference between virtual and virtualized, and why does it matter in the language of security?

From our friends at TheFreeDictionary:

1. Existing or resulting in essence or effect though not in actual fact, form, or name: the virtual extinction of the buffalo.

2. Existing in the mind, especially as a product of the imagination. Used in literary criticism of a text.

3. Computer Science Created, simulated, or carried on by means of a computer or computer network: virtual conversations in a chatroom.

To boil it down, something that is virtual exists as an idea, but not in reality. When a datacenter or an end-user system is virtualized, it doesn’t become ‘virtual’; it still exists, no matter how virtualized it may be. A virtual machine is still a machine, though it is abstracted from the underlying physical infrastructure. So why is there so much virtual security out there? I propose it is because much of it is exactly as it claims – virtual = vapor.

Virtualization of the datacenter encompasses a massive change in how datacenters are designed, built, and operated. The workloads, whether borne by servers or end-user systems, are being abstracted to sit atop not just a supervisor (the operating system that abstracts applications from hardware) but also a hypervisor (the operating system that abstracts the supervisors from hardware).

Virtualization has gained acceptance because it seamlessly performs this new layer of abstraction while taking advantage of deduplication. For example, if the same memory contents are used by several VMs, the hypervisor keeps a single copy. The same physical RAM that was dedicated to servicing a single supervisor can then pull multiple duty by deduplicating memory contents. Layered above the basic deduplication advantages are the management advantages: moving VMs from host to host, live snapshots, fault tolerance, and so on.
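The deduplication idea can be illustrated with a toy sketch. This is not any hypervisor’s actual page-sharing implementation, just the content-hashing concept behind it: identical pages across VMs are stored once and shared.

```python
# Illustrative sketch of content-based memory page sharing, the idea
# behind hypervisor memory deduplication. Pages with identical contents
# are backed by a single stored copy. (Hypothetical data, toy algorithm.)
import hashlib

def deduplicate(pages):
    """Map each (vm, page) to a single shared copy of identical content."""
    store = {}    # content hash -> the one stored copy
    mapping = {}  # (vm_id, page_no) -> content hash
    for vm_id, page_no, content in pages:
        digest = hashlib.sha256(content).hexdigest()
        store.setdefault(digest, content)  # keep only the first copy
        mapping[(vm_id, page_no)] = digest
    return store, mapping

# Three VMs booted from the same template share most of their pages.
pages = [
    ("vm1", 0, b"kernel code"), ("vm1", 1, b"vm1 private data"),
    ("vm2", 0, b"kernel code"), ("vm2", 1, b"vm2 private data"),
    ("vm3", 0, b"kernel code"),
]
store, mapping = deduplicate(pages)
print(len(pages), "guest pages backed by", len(store), "physical copies")
# 5 guest pages backed by 3 physical copies
```

Real implementations work on fixed-size pages, scan memory lazily, and break sharing with copy-on-write when a VM modifies a shared page, but the space saving comes from exactly this kind of content identity.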

Why, then, is security still virtual, and not virtualized?

To answer this, we need to draw a box. The outside of the box represents the perimeter of a datacenter, or ‘slice’ therein (I’m staying away from networking terms on purpose; let’s keep this conceptual for this discussion). Inside the box are the workloads. Traditionally, physical network devices have kept the outside out, while selectively letting some things in. Endpoint security, that being primarily anti-malware, operated inside the box, happily doing lots of endpoint security things within each endpoint.

Now, consider that box as applied to a virtualized (not virtual!) datacenter. The perimeter may not have changed. In fact, in large environments, hefty and very physical security devices at the edge are likely to remain the norm for years to come. Perimeters within the datacenter may take advantage of virtualization via virtualized versions of perimeter devices, an example being virtual appliances that run IDS/IPS. Edge-worthy throughput negotiated through a supervisor that is, in turn, negotiating with a hypervisor to access hardware sounds like the game of telephone that it is. Dedicated hardware at the edge of large networks shall remain. Within a datacenter, some would call it a software-defined network; let’s agree to call it network stuff that happens within the box.

Inside the box are the workloads. The applications work within the supervisors that work within the hypervisors. Where did the endpoint security go?

From my observations, most organizations that are virtualizing server workloads don’t hesitate to virtualize the endpoint security along with the workload. On some levels it makes sense: while virtualizing the application and the operating system within which the application resides, the endpoint security comes along for the ride. Unfortunately, this has led to a lot of ‘virtual’ endpoint security.

As organizations have moved from piloting server virtualization, through embracing the vision of a virtualized datacenter, and on to private cloud and end-user system virtualization (VDI), they have begun to notice significant problems. Traditional endpoint security does not become ‘virtualized security’ simply because the endpoint the security runs within is virtualized. Nor can traditional endpoint security become virtualization-specific security through the addition of workaround features; virtual security strikes again!

The problem comes down to duplication. A traditional anti-malware agent is designed to treat the operating system it is protecting as an island. Scanning activity is done within that isolated system, and scanning engines and databases must be present and maintained (installed, updated, upgraded, etc.) within it. We all know what impact that has – remember the last time you bought a new desktop or laptop and smiled as it booted in a fraction of the time that your old one did. Then you install anti-malware and sigh as that new-found thrill quickly fades and the boot time seems to double. After your shiny new system finally fires up, you check the resource usage and roll your eyes at the couple of hundred megabytes of memory being consumed solely to secure your system.

That simple problem of duplication leads to bigger problems, which can bring a virtualization project to a grinding halt. With virtualized servers, perhaps twenty or forty instances can be squeezed onto a particularly hefty host; with VDI, two to four hundred. That means twenty to four hundred anti-malware agents happily churning away. Organizations have learned to disable scheduled scans, lest every anti-malware agent grab as much computing power as it can and quickly exhaust the host’s resources. When the consolidation ratio (the holy grail of virtualization) gets high enough, even regular updates can create resource churn, and upgrades are even worse.
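The arithmetic behind this is easy to sketch. A back-of-envelope calculation, assuming a 200 MB per-agent footprint (echoing the couple-of-hundred-megabytes figure above; the real number varies by product):

```python
# Back-of-envelope arithmetic for agent duplication on one host.
# The 200 MB per-agent footprint is an illustrative assumption.
AGENT_MEM_MB = 200

def agent_overhead_gb(instances, per_agent_mb=AGENT_MEM_MB):
    """RAM consumed host-wide just by duplicated security agents."""
    return instances * per_agent_mb / 1024

for instances in (20, 40, 400):
    print(f"{instances:>3} VMs -> "
          f"{agent_overhead_gb(instances):.1f} GB of RAM for agents alone")
```

At 400 VDI instances, duplicated agents alone consume roughly 78 GB of RAM on the host; on a physical desktop the same footprint was a rounding error.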

Organizations typically won’t see these issues when first jumping into virtualization. Pilot projects usually use over-spec’ed hardware. The reasonable assumption is that the pilot will help iron out the wrinkles, and when it is done, more physical systems will be migrated to the hardware. Only when the project moves past the initial stages does this hidden problem become apparent. However, if an organization starts with VDI, the problem is obvious from the start. If you’re wondering who jumps straight to VDI, consider small to medium-sized organizations. It can actually be much simpler for them to virtualize end-user systems with the help of solutions like Citrix’s VDI-in-a-Box.

To try to solve these problems, security vendors have tried a few different workarounds. One of my favorites (I’m being deeply cynical) is the randomized scheduled scan. In other words, a scan that is, well, scheduled, but actually runs at a random time within the scheduled interval, in the hope that not too many agents run the scan simultaneously. Amazingly, the same treatment is applied to updates. Take a moment to ponder that – an industry that has touted near real-time updates of protection now actively hobbles that functionality and calls it a feature.
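A minimal sketch of what such a randomized schedule amounts to (hypothetical window length and agent count, not any vendor’s actual scheduler): each agent picks a random offset inside the scan window, which spreads load but cannot eliminate collisions.

```python
# Sketch of the "randomized scheduled scan" workaround: each agent picks
# a random start offset within a 4-hour window so that not all scans
# start at once. Window length and agent count are illustrative.
import random

def scan_start_times(agents, window_len_min=240, seed=None):
    """Assign each agent a random start time (minutes) within the window."""
    rng = random.Random(seed)
    return {agent: rng.uniform(0, window_len_min) for agent in agents}

starts = scan_start_times([f"vm{i}" for i in range(200)], seed=1)

# Count how many scans still land in the same 10-minute slot.
slots = {}
for t in starts.values():
    slot = int(t // 10)
    slots[slot] = slots.get(slot, 0) + 1
print("busiest 10-minute slot still has", max(slots.values()), "concurrent scans")
```

With 200 agents spread over 24 slots, the busiest slot still averages eight or nine concurrent scans; randomization thins the herd, but the total work done on the host is unchanged.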

Other vendors have taken a more holistic approach. If duplicating a scan engine within each VM doesn’t work, use a single scan engine on a dedicated virtual appliance. The problem then becomes one of remote scanning: how can introspection of activity within a VM be achieved from a virtual appliance?
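The benefit can be sketched conceptually. This is not any vendor’s product, just the caching idea that makes one shared engine cheaper than hundreds of per-VM agents: identical files across VMs are scanned once, and the cached verdict serves everyone.

```python
# Conceptual sketch (hypothetical, not a real product) of why a single
# shared scanning appliance deduplicates work: identical file contents
# across VMs are scanned once and the verdict is cached by content hash.
import hashlib

class SharedScanner:
    def __init__(self):
        self.cache = {}           # content hash -> cached verdict
        self.scans_performed = 0  # how often the real engine had to run

    def scan(self, file_bytes):
        digest = hashlib.sha256(file_bytes).hexdigest()
        if digest not in self.cache:
            self.scans_performed += 1               # real engine runs here
            self.cache[digest] = b"EVIL" in file_bytes  # toy verdict logic
        return self.cache[digest]

scanner = SharedScanner()
# 100 VMs each open the same OS binary: one scan answers all of them.
for vm in range(100):
    verdict = scanner.scan(b"identical OS binary contents")
print("files checked: 100, engine scans actually run:", scanner.scans_performed)
```

Because virtualized workloads are typically cloned from a handful of templates, the hit rate on such a cache is enormous, which is precisely why the appliance model scales where per-VM agents do not.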

VMware has created an API and functionality for remote introspection. It is called vShield Endpoint. Vendors who are willing can create a virtual appliance that is integrated with the vShield API. vShield handles the remote introspection by exposing file system events that are captured by the vShield driver that is embedded in VMware Tools within protected VMs. This means that the single agent on the virtual appliance performs scanning, deduplicating the impact and freeing-up resources.

This approach works very well if the protected system is supported. Currently, that means it is limited to Windows VMs running on ESXi. Also, since the remote introspection is handled through ESXi, the virtual appliance is tied to the host (one per host, and they cannot be moved). Finally, although file system events are exposed, other areas, such as memory, processes, and the registry, are not.

Other vendors have bypassed vShield and created their own remote introspection technologies. Not being tied to a particular hypervisor and the API of the virtualization vendor means that these solutions tend to be hypervisor agnostic. This is especially handy when the VMs that are being protected are running on infrastructure that doesn’t belong to the organization, namely public cloud. This approach also has the possibility of going beyond file system events to include memory and process inspection, and expanding that protection to include Linux.

If these approaches are so great, why isn’t every endpoint anti-malware company doing it?

Simply put, it’s not easy, and it’s all fairly new. Although the scanning engine on the virtual appliance is, more or less, simply a scanning engine, the architecture of the solution around that engine is new. That means doing more than tweaking scan and update schedules – it means building a new product. Vendors that have traditionally purchased innovation have had a hard time, because this approach is new enough that there simply are no start-ups or small players available for acquisition. Also, the core is still an anti-malware engine. Who would pitch a start-up premised on creating a brand-new commodity technology? True, an anti-malware engine could be OEM’ed, but that makes the prospects of acquisition rather murky.

In the end, the existing endpoint anti-malware players need to come up with solutions, not workarounds, for virtualization security. As more organizations expand virtualization projects, ‘virtual security’ isn’t going to cut it. Organizations face a choice: continue with ‘virtual’ security, the product of imagination, or embrace security that is virtualized.


Source: http://www.securityweek.com

Anonymous claims first strikes against North Korea

Members of the Anonymous hacktivist collective claim to have launched the first strikes against North Korea as part of its Operation Free Korea.

Earlier this week, the hacktivist group threatened North Korea with cyber war if the country’s leader Kim Jong-un does not resign and install free democracy in the territory.

Anonymous is also demanding that North Korea abandon its nuclear ambitions and give universal and uncensored internet access to its citizens.

The threat was made in a message posted to Pastebin that also claimed that 15,000 membership records had been stolen from the Uriminzokkiri website.

Anonymous also claimed that it had access to North Korea’s local intranets, mail servers and web servers.

Now the group claims it has forced Uriminzokkiri offline and breached the state-run website’s Twitter and Flickr accounts.

The accounts have stopped sending out typical content; instead, the Flickr account posted a picture of Kim Jong-un’s face with a pig-like snout, according to the Belfast Telegraph.

The paper said the accompanying text reads: “Threatening world peace with ICBMs and nuclear weapons/Wasting money while his people starve to death.”

A series of postings on the Uriminzokkiri Twitter account said “Hacked” followed by a link to different North Korea-related websites.

In addition to taking down Uriminzokkiri and its social accounts, Anonymous has defaced books and music store Ryomyong and a website belonging to a North Korea-linked political group, known as AINDF.

Anonymous claimed in another Pastebin statement that it has members inside North Korea who are aiding them with their attack.

“We have a few guys on the ground who managed to bring the real internet into the country using a chain of long-distance Wi-Fi repeaters with proprietary frequencies, so they’re not jammed (yet),” the group wrote.

“We also have access to some N.K. phone landlines which are connected to Kwangmyong through dial-ups. Last missing piece of puzzle was to interconnect the two networks, which those guys finally managed to do,” the statement said.

The group praised its operatives for “trying to bring the real, free, uncensored internet to the citizens of North Korea” and called on others to help and stand up against governments around the world.

“Citizens of North Korea, South Korea, USA, and the world, don’t allow your governments to separate you.

“We are all one. We are the people. Our enemies are the dictators and regimes, our goals are freedom and peace and democracy. United as one, divided by zero, we can never be defeated,” the group said.

The Anonymous attacks and threats come amid heightened tensions on the Korean Peninsula with South Korea and its US ally after the North’s latest nuclear test, but Anonymous said it is fighting for freedom and does not support the US.


Source: http://www.computerweekly.com

Biggest DDoS Attack in History?

The distributed-denial-of-service operation known as Operation Stophaus has been blamed for major online disruptions last week in Europe. In fact, some media outlets have dubbed it the “biggest cyber-attack in history.”

But some DDoS and online-activity monitoring experts say the attack pales relative to the DDoS activity U.S. banking institutions have been withstanding since the fall of 2012. In short, they say that Operation Stophaus is more hype than reality.

“This was a DNS reflection attack,” Dan Holden of DDoS-mitigation provider Arbor Networks says about the attacks waged against The Spamhaus Project, a Geneva-based not-for-profit organization dedicated to fighting Internet spam operations.

At the height of the attack, which has since subsided, Spamhaus was seeing traffic at an unprecedented pace of 300 gigabits per second, or roughly three times the strength of even the biggest DDoS attacks against U.S. banks, according to Spamhaus hosting partner CloudFlare, which refers to this incident as “The DDoS that almost broke the Internet.”

But some DDoS experts say this attack wasn’t necessarily as menacing as reported, and it has no relationship whatsoever to the bank attacks credited to the hacktivist group Izz ad-Din al-Qassam Cyber Fighters.

Spamhaus Attack

For several weeks, The Spamhaus Project and the countermovement known as Operation Stophaus have been dueling it out in public forums such as Pastebin. Operation Stophaus attackers took aim at Spamhaus, claiming the group was using The Spamhaus Project as a front to conceal an offshore criminal network of Internet terrorists pretending to be spam fighters.

Early on March 28, 10 days after the DDoS assault began, Spamhaus found itself so besieged by press inquiries that it set up an FAQ page to address questions about the attack.

On that FAQ page, Spamhaus claims the DDoS attack has subsided, and declines to point fingers at a single source to blame for the attacks. “A number of people have claimed to be involved in these attacks,” Spamhaus states. “At this moment, it is not possible for us to say whether they are really involved.”

News reports, including one by The New York Times, say the attack began on March 18 after Spamhaus added CyberBunker, a Dutch data storage company, to its blacklist of spammers. CyberBunker has not claimed credit for the attack, which is said to have been so massive that it jammed Internet traffic to the point where users had difficulty accessing Netflix and other consumer sites.

Spamhaus also dodges the question of whether this is truly “the biggest cyber-attack in history,” saying only, “It certainly is the biggest attack ever directed at Spamhaus.”

But the organization is using the incident as a global rallying cry for organizations to improve their abilities to detect and deflect DDoS.

“These attacks should be a call-to-action for the Internet community as a whole to address and fix those problems [that enable DDoS],” Spamhaus says.

‘Almost Broke the Internet’

CloudFlare, retained by Spamhaus to help mitigate the attack, has posted two blogs about the incident. The latest posting, The DDoS that Almost Broke the Internet, goes into great technical detail about the attack, which relied not just on a botnet of PCs, but on the strength of open recursive DNS resolvers, which are used in the DNS process to translate domain names into IP addresses. Using open DNS resolvers gave the attackers massive strength, CloudFlare says.

“Unlike traditional botnets, which could only generate limited traffic because of the modest Internet connections and home PCs they typically run on, these open resolvers are typically running on big servers with fat pipes,” CloudFlare writes in its latest blog. CloudFlare goes on to compare the attack vectors to bazookas, which caused the collateral damage of jamming the Internet for millions of users.
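The amplification arithmetic behind a reflection attack is worth sketching. The byte sizes below are illustrative assumptions in the range commonly cited for open-resolver reflection, not measurements from this incident:

```python
# Rough amplification arithmetic for a DNS reflection attack: a small
# query with a spoofed source address elicits a much larger response,
# which the open resolver "reflects" at the victim. Sizes are
# illustrative assumptions, not measurements from the Spamhaus attack.
QUERY_BYTES = 64       # assumed size of the small spoofed query
RESPONSE_BYTES = 3000  # assumed size of a large DNS response

amplification = RESPONSE_BYTES / QUERY_BYTES
attacker_gbps = 0.75   # bandwidth the attacker must actually originate
victim_gbps = attacker_gbps * amplification

print(f"~{amplification:.0f}x amplification: {attacker_gbps} Gbps sent, "
      f"~{victim_gbps:.0f} Gbps arriving at the victim")
```

This is why open resolvers are so dangerous: an attacker controlling under a gigabit of upstream bandwidth can, under these assumptions, direct tens of gigabits at a target, with the resolvers' fat pipes doing the heavy lifting.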

“If the Internet felt a bit more sluggish for you over the last few days in Europe, this may be part of the reason why,” CloudFlare writes. “What’s troubling is that, compared with what is possible, this attack may prove to be relatively modest.”

Attack Size Relative to Others

Meanwhile, U.S. banking institutions continue to be targeted by DDoS attacks attributed to Izz ad-Din al-Qassam Cyber Fighters. Two new institutions, TD Bank and Key Bank, this week confirmed that they are among the latest DDoS victims, which include more than a dozen U.S. banks and credit unions that have suffered online outages since the attacks began last fall.

But some DDoS experts say the Spamhaus and bank attacks are completely separate.

Holden says the bandwidth consumed during the Spamhaus attack was four to five times greater than what U.S. banking institutions have faced, but that the traffic was just noise whose ripple effect impacted other Internet users.

“Because it was so large, it brought damage to others on the Internet, outside the intended victim,” Holden says. “Any streaming media with a streaming connection, such as Skype or Netflix, could have experienced a disruption.”

But Aaron Rudger, Web performance marketing manager for online-traffic monitoring and performance provider Keynote, says online traffic patterns for the last four weeks reveal the attack was not so large.

“In other words, the Internet appeared to be relatively unclogged throughout most of the DDoS event,” Rudger says. “There is a little blip that shows up [March 26] across the European agents,” but nothing extremely significant, he adds. From March 13 through March 27, those European agents experienced online response times that were 40 percent slower than average, he notes.

Keynote’s KB40 Index, which includes online-uptime traffic measurements for the top 40 websites in the world – including a handful of European agents and three U.S. agents – shows traffic experienced its greatest dip between the hours of 8:30 a.m. PT and 2:30 p.m. PT on March 26. But none of the online outages were that significant.

“I don’t have any reason to not believe in the severity or the size of the attacks, as they’ve been characterized in the media,” Rudger says. “What I have less confidence in is the impact that this attack has had on the rest of the Internet. There does not appear to be that massive slow-down that has been reported, but we cannot substantiate that across our network.”

The Internet is designed to be extremely resilient, he says. So a single focused attack would not have that big of an impact.

“I think the U.S. may just be more used to these types of attacks,” Rudger adds. “This attack does seem to be a little overly exaggerated.”

Carl Herberger, a security expert at DDoS-prevention provider Radware, says the Stophaus attack was not extraordinary. “We don’t see this as being the largest attack ever,” he says. “From our perspective, there’s nothing there that has not become fairly normal, when it comes to online attacks.”

Although he’s reluctant to put any gigabyte size to the attack, since determining a specific size is too subjective, Herberger says the numbers Radware has seen don’t suggest the attack was all that substantial.

Relative to attacks U.S. banks have been facing, this attack was relatively low-grade, Holden says.

“The DNS reflection attacks can consume a great deal of bandwidth, but they are different than what we’ve seen against the banks,” he says. “These guys would not be able to do the sophisticated, targeted attacks that are being launched against U.S. banks.”


Source: www.bankinfosecurity.com

Gartner: Application Layer DDoS Attacks to Increase in 2013

In 2013, less will be more.

Volumetric, blunt-force attacks will remain the primary type of distributed denial-of-service (DDoS) attack in the coming year, but there will be noticeable growth in the incidence of low-and-slow application layer DDoS attacks, according to new research by Gartner.

In a report titled, “Arming Financial and E-Commerce Services Against Top 2013 Cyberthreats,” Gartner forecasts that 25% of ALL DDoS attacks in 2013 will be application-based. These incidents, which send out targeted commands to applications to tax the central processing unit (CPU) and memory and make the application unavailable, are more sophisticated and subtle than typical flooding DDoS assaults, and often pass through network defenses unnoticed.

In late 2012 and continuing into 2013, the financial sector has been dogged by a well-publicized barrage of DDoS attacks. Initially, the attacks were of the nuisance variety, preventing customers from logging on to online banking portals. Of late, Islamic hacktivists, such as the extremist group Izz ad-Din al-Qassam Cyber Fighters, have claimed to have initiated these attacks over a blasphemous YouTube video, and the attacks have been seen as a vehicle for social and political protest.

In general, DDoS attacks have been and continue to be a popular tactic due to their relative simplicity, low cost to conduct, and the large number of potential targets. According to Gartner, in late 2012 attacks grew in size to upwards of 70 Gbps of noisy network traffic blasting at the banks through their Internet pipes. Until this recent spate of attacks, most network-level DDoS attacks consumed only 5 Gbps of bandwidth; the more recent levels made it impossible for bank customers and others using the same pipes to get to their websites.

As these attacks began to proliferate, they became cover for a more criminal element aiming to utilize these DDoS attacks for monetary gain. A recent heist in which attackers apparently pilfered $900,000 from San Francisco-based Bank of the West points to an emerging trend. In this scenario, a DDoS attack was used as a diversion as attackers ended up utilizing remotely accessible malware to siphon money from unwitting users’ accounts.

Gartner’s report, as well as recent alerts issued by federal regulators, echoes these warnings.

“A new class of damaging DDoS attacks and devious criminal social-engineering ploys were launched against U.S. banks in the second half of 2012, and this will continue in 2013 as well-organized criminal activity takes advantage of weaknesses in people, processes and systems,” Avivah Litan, vice president and distinguished analyst at Gartner, said in a press release issued on the report.

This announcement comes at a pivotal time for the financial services industry and reinforces the findings of a recent study issued by the Ponemon Institute, and commissioned by Corero Network Security, that surveyed 650 IT professionals from 351 U.S. banks.

The report, titled “A Study of Retail Banks and DDoS Attacks,” found that while 78% of those surveyed believed that DDoS attacks will continue or significantly increase in 2013, only 30% planned to purchase any additional security infrastructure to combat these attacks – a worrisome sign that these attacks, with their increasing level of sophistication, will continue to expand.

Banks are not alone. In fact, Litan maintains that any entity that uses the Internet to conduct business is at heightened risk.

“Organizations that have a critical Web presence and cannot afford relatively lengthy disruptions in online service should employ a layered approach that combines multiple DOS defenses,” said Litan.

In addition to the application-layer findings, in the press release Gartner warns of these continued developments:

  • “High-bandwidth DDoS attacks are becoming the new norm and will continue wreaking havoc on unprepared enterprises in 2013.”
  • “Hackers use DDoS attacks to distract security staff so that they can steal sensitive information or money from accounts.”
  • “People continue to be the weakest link in the security chain, as criminal social engineering ploys reach new levels of deviousness in 2013.”

Litan added that “2012 witnessed a new level of sophistication in organized attacks against enterprises across the globe. And they will grow in sophistication and effectiveness in 2013.”

A complete copy of the report can be found here.


Source: http://www.securitybistro.com

Kaspersky: New Botnet Discovered; Potential Threat to Chilean Banks

If you have money in any Latin American banks, it might be a good idea to begin storing some of that cash under the mattress.

According to a recent blog post from a Kaspersky Labs expert in Argentina, a new weapon in the emerging Latin American cybercrime space is now targeting two large Chilean banks. AlbaBotnet is designed to unleash phishing attacks with an aim on stealing online account information.

Curiously, the botnet has yet to inflict any financial harm, according to the post. Data analyzed by researchers indicates that AlbaBotnet – which the author of the threat began testing in early 2012 – remains in a trial stage.

The botnet works like many others that have been recently discovered in Latin America: It allows the attacker to customize and automatically deliver emails, thus utilizing a social engineering component to target unsuspecting users.

“The botnet appears to have a similar structure to its Latin American counterparts,” said researcher Jorge Mieres in the post. “As well as the default automated malware builder, it includes a package which automatically sends emails.”

Mieres is referring to three other botnets discovered earlier this year: vOlk (Mexico), S.A.P.Z. (Peru) and PiceBOT (which Kaspersky noted was discovered in use in Chile, Peru, Panama, Costa Rica, Mexico, Colombia, Uruguay, Venezuela, Ecuador, Nicaragua and Argentina). These, including the newly discovered AlbaBotnet, all operate in a similar fashion, leading researchers to believe they share some similar code.

The maker of this botnet is likely after a modest yet consistent haul, since these programs are relatively cheap on the black market (less than $200) and the success rate of these botnets, according to Mieres in an earlier post, has been unusually high.

For more of the technical nature of this botnet, read on here.

Who creates malware and why?

Let us first answer the main question: who benefits from it? Why have computers, networks, and mobile phones become carriers not only of useful information, but also a “habitat” for malicious programs? The answer is not difficult. All (or almost all) inventions and mass-use technologies have, sooner or later, become tools of hooligans, swindlers, blackmailers, and other criminals. As soon as there is an opportunity to misuse something, somebody will find a way to use the technology not as its inventors intended, but in their own interests, or to assert themselves to the detriment of others. Unfortunately, computers, mobile phones, and computer and mobile networks have not escaped this fate. As soon as these technologies came into mass use, the bad guys stepped in. However, the criminalization of these innovations was a gradual process.

Computer vandalism

In the past, the majority of viruses and Trojans were created by students who had just mastered a programming language, wanted to try it out, and failed to find a better platform for their skills. Such virus writers sought only one thing: to raise their self-esteem. Fortunately, a large part of these viruses were never distributed by their authors, and the viruses soon “died away” along with the disks that stored them; in other cases, the authors sent them only to anti-virus companies with a note that the virus would not be spread further.

The second group of virus writers also consists of young people (often students) who have not yet fully mastered the art of programming. An inferiority complex, compensated for by computer hooliganism, is often the only thing prompting them to write viruses. Such “craftsmen” often produce primitive viruses with numerous mistakes (the so-called “student viruses”). Life for such virus writers became much simpler with the growth of the Internet and the emergence of numerous websites teaching how to write a computer virus. Web resources of this kind give detailed recommendations on how to intrude into a system, conceal the code from anti-virus programs, and distribute the virus further. Often they provide ready-made source code requiring only minimal “authorial” changes and compilation as recommended.

When older and more experienced, many virus writers fall into the third and most dangerous group, which creates professional viruses and releases them into the world. These elaborate and smoothly running programs are created by professionals, not infrequently very talented programmers. Such viruses often intrude into system data domains in very unusual ways, exploiting security flaws in operating environments, social engineering, and other tricks.

The fourth group of malware writers is very special: “researchers”, rather shrewd programmers who invent new methods of infection, concealment, and resistance to anti-virus software, as well as ways of intruding into new operating systems. These programmers create viruses not for the sake of the viruses themselves, but rather to research the potential of the “computer fauna”; they produce so-called proof-of-concept (PoC) viruses. Often their authors do not spread these creations, but actively promote their ideas via the numerous Internet resources devoted to virus creation. The danger of such “research viruses” is also very high: when the ideas fall into the hands of the third group of “professionals”, new viruses exploiting them emerge in no time.

“Traditional” viruses created by the people described above are still emerging: as hooligan teenagers grow up, they are constantly replaced by new generations of teenagers. Interestingly enough, “hooligan viruses” have recently become less and less relevant, except when such malicious programs cause global network and e-mail epidemics. The number of new “traditional” viruses is decreasing considerably: 2005-2006 saw a dramatic drop compared with the mid and late 1990s. There are several possible reasons why students are less interested in creating viruses.

  1. It was a lot easier to create viruses for MS-DOS in the 1990s than for the more complex Windows.
  2. Many countries introduced computer-crime articles into their legislation, and arrests of virus writers were widely covered by the press, which definitely cooled students’ interest in viruses.
  3. Moreover, they found a new way to prove their worth: network games. Most probably, modern games shifted the interests of computer-minded young people.

Thus, “traditional” hooligan viruses and Trojans currently account for no more than 5% of all programs registered in anti-virus databases. The remaining 95% are much more dangerous than simple viruses; they are created for the following purposes.

Petty theft

With the emergence and growing popularity of paid Internet services (mail, web, hosting), members of the computer underground began to take an interest in getting network access at somebody else’s expense, i.e. by stealing logins and passwords (often several sets of them, from different infected computers) with specially developed Trojans.

1997 brought the emergence and spread of Trojans designed to steal AOL passwords. In 1998, as Internet services spread further, Trojans of this kind began to target other services as well. Such Trojans, like the viruses themselves, are usually written by young people who cannot pay for Internet services; notably, as those services get cheaper, the proportion of such Trojans decreases accordingly. Even so, Trojans stealing dial-up, AOL and ICQ passwords and access codes to other services still constitute a considerable part of the daily inflow to the labs of anti-virus companies all around the globe.

Petty thieves also create other types of Trojans, which steal account information and key files of various software products and harvest the resources of infected computers for the benefit of their “master”.

In recent years there has been a constant increase in the number of Trojans that steal personal information from network games (virtual gaming property) for unauthorized use or resale. Such Trojans are especially widespread in Asian countries, above all China, Korea and Japan.


The most dangerous group of virus writers consists of hackers, alone or in groups, who intentionally create malicious programs in their own interests: virus and Trojan programs that steal access codes to bank accounts, obtrusively advertise products or services, or illegally use the resources of infected computers (again for money: to build up a spam business or to mount distributed network attacks with subsequent blackmail). The activities of such individuals are many and varied. Let us look at the major types of criminal business in the network in more detail.

Support for spammers

Trojan proxy servers, and multipurpose Trojans with a proxy-server function, are combined into “zombie networks” designed for mass-mailing spam (a proxy server is a utility for anonymous work in the network, usually installed on a dedicated computer). The Trojan proxy servers then receive a spam sample and the addresses to mail it to from their “master”.

By sending spam through thousands (or tens of thousands) of infected computers, spammers achieve several aims:

  • the distribution is anonymous — the message headers and other service information in the letter do not reveal the spammer’s real address;
  • the spam mailing is very fast, as it involves a great many “zombie” computers;
  • “blacklist” technologies for tracking the addresses of infected machines are ineffective — there are too many spam-sending computers to trace them all.

Distributed network attacks

Also referred to as DDoS attacks (Distributed Denial of Service). Network resources (e.g. web servers) can service only a limited number of requests simultaneously — the limit is set both by the capacity of the server itself and by the bandwidth of its connection to the Internet. If the number of requests exceeds that limit, either the server slows down considerably or users’ requests are ignored altogether.
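The capacity argument above can be sketched with a toy model (illustrative only; the capacity figure is an arbitrary assumption, and real servers degrade more gradually):

```python
# Toy model of a server with a fixed service capacity: any requests
# beyond CAPACITY per second are dropped, legitimate ones included.
CAPACITY = 1000  # requests per second the server can handle (assumed figure)

def service(requests_per_second: int) -> dict:
    served = min(requests_per_second, CAPACITY)
    dropped = requests_per_second - served
    return {"served": served, "dropped": dropped,
            "drop_rate": dropped / requests_per_second}

print(service(800))     # normal load: every request is served
print(service(50_000))  # flood: 98% of all requests are dropped
```

Even this crude model shows why a flood hurts legitimate users: once demand exceeds capacity, the server cannot tell a “garbage” request from a real one, so real ones are lost in proportion.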

Taking advantage of this, computer hackers send “garbage” requests to the targeted resource, with the number of requests many times exceeding the capacity of the victim. A “zombie network” thus mounts a mass DDoS attack on one or several Internet resources, causing the attacked network nodes to fail.

As a result, the attacked resource becomes inaccessible to ordinary users. The victims are usually Internet shops, Internet casinos and other businesses highly dependent on the availability of their Internet services. Most often, distributed attacks are arranged either to discredit a competitor’s business or to extort money for stopping the attack — an Internet racket of a sort.

In 2002-2004 this kind of criminal activity was quite common. Later it declined, apparently owing both to successful police investigations (at least several dozen people around the world have been arrested) and to quite successful technical countermeasures against such attacks.


Zombie networks

Special Trojans called “bots” (from “robot”) are created for networks of this kind, which are centrally managed by a remote “master”. Such a Trojan intrudes into thousands, tens of thousands or even millions of computers, which enables the master of the “zombie network” (or “botnet”) to access the resources of all the infected computers and use them for his own ends. Sometimes such networks of “zombie” machines are put on the black Internet market, where they are bought or rented by spammers.

Calls to premium-pay numbers or sending paid SMS

Cybercriminals, alone or in groups, create and distribute special programs that make telephone calls or send SMS messages from mobile phones without the user’s authorization. Beforehand, or in parallel, the same people register a company which signs a premium-rate service contract with a local mobile provider.

Naturally, the provider is not told that the calls will not be authorized by the users. The Trojan then calls the premium-rate number, the mobile company bills the numbers from which the calls originated, and pays the criminal the sum defined by the contract.

Stealing electronic currency

More precisely, this covers the creation, distribution and maintenance of Trojan spy programs designed to steal funds from personal e-wallets (e.g. e-gold, WebMoney). Trojans of this kind collect information on access codes to the accounts and send it to their “master”. The information is usually gathered by finding and decoding the files that store the account owner’s personal data.

Stealing banking information

This is currently one of the most common types of criminal activity on the Internet. At risk here are credit card numbers and access codes to personal (and sometimes even corporate) Internet bank accounts (“Internet banking”). In such attacks Trojan spies use a wide range of methods. For instance, they display a dialogue window or an image duplicating the bank’s web page and ask the user for the login and password to the account, or for a credit card number (similar methods are also typical of phishing — spam mailings whose text imitates a message from the bank or some other Internet service).

Social-engineering tricks are used to get the user to enter his or her personal data: the user is warned of negative consequences of not entering the code (e.g. the Internet bank will stop servicing the account) or promised something very positive (“a large sum is about to be deposited in your account — please confirm your account details”).

Often keylogger Trojans (“keyboard spies”) wait for the user to connect to the genuine banking web page and then capture the characters typed on the keyboard (i.e. the login and password). To do this they monitor the launch and activity of applications: if the user runs a browser, the name of the website is compared against the list of banks hard-coded in the Trojan. If the site is found in the list, the keyboard spy is activated and the captured information (the sequence of keystrokes) is sent to the criminal. Unlike other banking Trojans, Trojans of this type do not reveal their presence in the system in any way.

Stealing other confidential information

Hackers may take an interest not only in financial information but in any other valuable data — databases, technical documentation and so on. Specially developed Trojan spies intrude into the victim computers to access and steal such information.

Legitimate network applications are also known to be used in such attacks: for example, an FTP server may be secretly installed in the system, or file-exchange (peer-to-peer, P2P) software may be covertly deployed, making the computer’s files accessible from outside. Owing to numerous incidents involving the felonious use of P2P networks, they were officially banned in France and Japan in 2006.

Cyber blackmail and cyber extortion

Cybercriminals create Trojans that encrypt a user’s personal files. The Trojan penetrates the system, finds and encrypts the user’s data, and then leaves a message saying that the files cannot be restored and that a decryption program can be obtained by contacting the address given in the message.

Another notorious method of cyber blackmail is to pack the user’s files into an archive encrypted with a long password. Once the original files have been archived they are deleted, followed by a demand to transfer a certain sum of money in exchange for the password to the archive.

From the technical perspective, this type of cybercrime (data encryption) is critically dangerous. Whereas in other cases it is possible to clean the computer of the Trojan and undo the damage, here one has to deal with strong encryption algorithms: if the algorithm and the key (password) are long and strong enough, it becomes technically impossible to restore the files without obtaining the key from the criminal.
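The infeasibility claim above is simple arithmetic. A back-of-envelope sketch (the guess rate is an assumed, deliberately generous figure):

```python
# Rough estimate of exhaustively guessing a random password drawn from
# letters and digits, at a (generously assumed) attacker guess rate.
ALPHABET = 62          # a-z, A-Z, 0-9
GUESSES_PER_SEC = 1e9  # assumed attacker speed

def years_to_exhaust(length: int) -> float:
    keyspace = ALPHABET ** length          # number of possible passwords
    seconds = keyspace / GUESSES_PER_SEC   # worst-case search time
    return seconds / (3600 * 24 * 365)

for n in (8, 16, 32):
    print(f"{n:2d} chars: {years_to_exhaust(n):.2e} years")
```

At these rates an eight-character password falls within days, but sixteen random characters already take on the order of 10^12 years, which is why a long enough archive password leaves the victim with no technical recourse.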

Evolving “delivery methods”

To commit the crimes described above, cybercriminals have created and distributed network worms that have caused numerous Internet epidemics. The main aim of such worms is to install criminal Trojans on as many computers in the global network as possible. Mydoom and Bagle, notorious since 2004, and the Warezov mail worm, which emerged in 2006, are examples of such worms.

In some cases the aim is not maximum coverage — on the contrary, the number of infected computers is apparently limited on purpose, so as not to attract too much attention from law-enforcement agencies. In such cases the victim computers are compromised not by an uncontrolled network worm but, for instance, through an infected web page. The criminals can record the number of visitors to the page and the number of successful infections, and withdraw the Trojan code once the required number of infected computers has been reached.

Targeted attacks

Unlike mass attacks, which aim to infect as many computers as possible, targeted attacks pursue an altogether different goal: to infect the network of a specific company or organization, or to implant a specially developed Trojan agent into a single node (server) of the network infrastructure. Companies holding valuable information, such as banks, billing companies (e.g. telephone operators) and the like, are the ones at risk.

The reason for attacking bank servers or networks is obvious: the criminals want to access banking information and illegally transfer funds (sometimes in very considerable amounts) to their own accounts. When billing companies are attacked, the aim is to access clients’ accounts. Targeted attacks seek any valuable information stored on the network servers — client databases, financial and technical documentation, anything that may interest the attacker.

The targets are usually large companies holding critical and valuable information. Their network infrastructure is quite well protected against external attacks, and intruding into it without inside help is practically impossible. Therefore such attacks are most frequently arranged either by employees of the targeted companies (insiders) or with their direct participation.

Other criminal activity

Other cybercrimes exist but are not yet widespread: harvesting e-mail addresses from infected computers and selling them to spammers; searching for vulnerabilities in operating systems and applications and selling them to other computer criminals; developing and selling custom-made Trojans, and so on. Most probably, as existing Internet services develop and new ones emerge, new cyberspace crimes will appear as well.

Grey market business

Beyond student virus writers and the purely criminal business on the Internet there are “grey” businesses – activities on the brink of the law. Systems that impose electronic advertising, utilities that keep offering the user this or that paid web resource, and other types of unwanted software all require the technical support of a hacker-programmer as well: they need to intrude into the system covertly, update their components regularly, mask themselves in various ways (to avoid deletion from the system) and resist anti-virus programs – aims that almost fully coincide with the functionality of various Trojans.


Adware

Special advertising components penetrate the system, download advertising content from dedicated servers and show it to the user. In most cases (though not always) the intrusion goes unnoticed by the user, and the pop-ups appear only while the web browser is running (so that the advertising systems pass for banner advertisements on websites).

After several US states passed anti-adware regulations, Adware developers effectively found themselves outside the law (and practically all of them are American companies). In response, some of them legalized their products as far as possible: Adware now comes with an installer, puts an icon in the system tray and provides an uninstaller. However, hardly anyone of sound mind would willingly install an advertising system on his computer, so legal Adware has to be “hard-sold” bundled with some free software.

The Adware is installed along with that software: most users click “OK” while ignoring the texts on the screen, and get the advertising programs together with the ones they are actually installing. Since half the desktop and the system tray are often filled with all sorts of icons, the advertising program’s icon gets lost among them. The result is that Adware which is legal de jure is installed without the user’s knowledge and goes unnoticed in the system.

It should also be noted that in some cases it is impossible to remove a legal advertising system without affecting the operation of the main software. This is how Adware producers protect their products from uninstallation.

Pornography and premium-pay resources

To draw users to paid websites, programs are often used which de jure are not categorized as malicious: they do not conceal their presence, and the user ends up on the paid resource only after answering “yes” to a corresponding question. However, such programs are installed without the user’s authorization (for instance, during a visit to a dubious website), and then obtrusively offer to take the user to one paid resource or another.

Rogue antivirus and anti-spyware programs

This is a relatively new type of cybercrime. The user is fobbed off with a small program which reports that spyware or a virus has been detected on the computer. The message appears regardless of the actual situation, even if no software other than Windows is installed on the machine. At the same time the user is offered, for a small sum of money, a “cure” which in fact cures nothing.

HTTPS cookie crypto CRUMBLES AGAIN in hands of stats boffins

Fresh cryptographic weaknesses have been found in the technology used by Google and other internet giants to encrypt online shopping, banking and web browsing.

The attack, developed by security researchers at Royal Holloway, University of London and University of Illinois at Chicago, targets weaknesses in the ageing but popular RC4 stream cipher. RC4 is quick and simple, and is used in the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols of HTTPS to protect sensitive web traffic from prying eyes.

But data encrypted by the algorithm can be carefully analysed to silently extract the original information, such as an authentication cookie used to log into a victim’s Gmail account. Cracking the encryption on a punter’s web traffic is difficult to pull off, though, for the moment.

The boffins explained:

We have found a new attack against TLS that allows an attacker to recover a limited amount of plaintext from a TLS connection when RC4 encryption is used. The attacks arise from statistical flaws in the keystream generated by the RC4 algorithm which become apparent in TLS ciphertexts when the same plaintext is repeatedly encrypted at a fixed location across many TLS sessions.

An attack using the researchers’ findings could work like this: a victim opens a web page containing malicious JavaScript code that tries to log into Google Gmail on behalf of the user via HTTPS; doing so sends the victim’s RC4-encrypted authentication cookie (created the last time the punter logged in) using a new session key. Someone eavesdropping on the network then records the encrypted data sent and the JavaScript terminates the connection; it repeats this continually, forcing new keys to be used each time, and thus allows someone snooping on the connections to build up a treasure trove of encoded messages.

Ideally, this data should appear to be random, but RC4 suffers from statistical biases that will reveal parts of the encrypted sensitive information over time – provided the attacker can gather millions of samples to process. In this way, it is similar to the earlier BEAST attack on SSL connections.
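The kind of statistical bias being exploited can be observed directly. A minimal sketch in plain Python, illustrating only the classic single-byte bias at position 2 of the RC4 keystream (the Mantin-Shamir bias), not the full TLS attack:

```python
import os

def rc4_keystream(key: bytes, n: int) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Over many random keys, the *second* keystream byte is 0 with
# probability ~1/128 -- double the 1/256 a truly random stream gives.
TRIALS = 30_000
zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(TRIALS))
print(f"P(second byte = 0) ~ {zeros / TRIALS:.5f} (uniform: {1/256:.5f})")
```

The measured frequency comes out close to 1/128; the TLS attack aggregates many such biases across the first keystream bytes over millions of sessions of the same plaintext, which is why it needs so much captured traffic.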

The Royal Holloway and Chicago team argue that the most effective countermeasure against the attack is to stop using RC4 in TLS.
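In practice that countermeasure is a server-configuration change. As an illustrative sketch only (nginx syntax; the exact cipher list is an assumption and should be tuned to your own clients' compatibility needs):

```nginx
# Example TLS settings that exclude RC4 (and other weak options)
ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers               'HIGH:!RC4:!aNULL:!MD5';
ssl_prefer_server_ciphers on;
```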

“There are other, less-effective countermeasures against our attacks and we are working with a number of TLS software developers to prepare patches and security advisories,” the computer scientists revealed in an advisory on their research.

RC4 is used by many websites to provide HTTPS encryption – including Google

Dan Bernstein, one of the researchers, unveiled the attack at the Fast Software Encryption conference in Singapore this week.

“Unfortunately, if your connection is encrypted using RC4, as is the case with Gmail, then each time you make a fresh connection to the Gmail site, you’re sending a new encrypted copy of the same cookie,” explained Matthew Green, a cryptographer and research professor at Johns Hopkins University in Maryland, US.

“If the session is renegotiated (ie, uses a different key) between those connections, then the attacker can build up the list of ciphertexts he needs.

“To make this happen quickly, an attacker can send you a piece of JavaScript that your browser will run – possibly on a non-HTTPS tab. This JavaScript can then send many HTTPS requests to Google, ensuring that an eavesdropper will quickly build up thousands, or millions, of requests to analyse.”

Other security experts say there’s no need to panic.

“It’s not a very practical attack in general, requiring at least 16,777,216 captured sessions, but as mentioned, attacks will only improve in time,” said Arnold Yau, lead developer at mobile security firm Hoverkey. “I think it’d be wise for TLS deployments to migrate away from RC4 as advised.”

RC4 was invented by Ron Rivest in 1987. Various attacks have been developed against RC4, which is also used in Wi-Fi WEP protection, but the technology remains widely used. About 50 per cent of all TLS traffic is protected using RC4, and its use is, if anything, growing after another TLS encryption mode, cipher-block chaining (CBC), was broken by experts.

TLS in CBC-mode was cracked by the BEAST and Lucky 13 techniques, which use so-called padding oracle attacks to defeat HTTPS encryption. Cryptographers at Royal Holloway, University of London developed the Lucky 13 breakthrough; BEAST was unleashed by Juliano Rizzo and Thai Duong – who also designed the CRIME attack on HTTPS that exploits the use of data compression in TLS rather than abusing ciphers.

“I will say, it’s funny seeing the RC4 breakers recommend CBC, and vice versa,” said noted security researcher Dan Kaminsky.

Marsh Ray, of PhoneFactor, a recent Microsoft acquisition, offered a different take: “Until I see three practical ways Duong and Rizzo can decrypt a cookie as a stage trick over RC4, I think I’ll continue to recommend it over CBC.”

Separately, another team of crypto-researchers took the wraps off a refinement of the CRIME attack: the TIME (Timing Info-leak Made Easy) technique could be used to decrypt browser cookies to hijack online accounts in the process. Tal Be’ery and Amichai Shulman of Imperva unveiled their research at the Black Hat conference in Amsterdam, the Netherlands.


Source: theregister.co.uk

It’s Time to Think Outside the Sandbox

Attackers are Thinking Outside of the Sandbox and so Must We…

Over the years we’ve all heard claims of ‘silver bullet’ solutions to solve security problems. One of the most recent claims has been around the use of sandboxing technology alone to fight advanced malware and targeted threats.

The idea behind sandboxing is that you limit the impact malware can have by isolating an unknown or untrusted file, constraining it to run in a tightly controlled environment and watching it for suspect or malicious behavior. Sandbox technology can mitigate risk, but it doesn’t remove it entirely.

One of the challenges with deploying a sandbox-only solution to deal with malware is that attackers are making it their job to understand security technologies, how they work, where they are deployed and how to exploit their weaknesses. This includes sandbox detection.

The attack chain, a simplified version of the “cyber kill chain” (the chain of events that leads up to and through the phases of an attack), illustrates how relying on a sandbox-only antimalware solution can create a false sense of security.

Survey: Attackers start with surveillance malware to get a full picture of your environment. This encompasses the extended network that also includes endpoints, mobile devices and virtual desktops and data centers, as well as the security technologies deployed, such as sandboxing.

Write: Based on this intelligence, attackers then create targeted, context-aware malware.

Test: They validate that the malware works as intended by recreating your environment, ensuring that it successfully evades the security tools you have in place: for example, detecting that it is running in a sandbox and behaving differently than it would on a user system, or not executing at all.

Execute: Attackers then navigate through your extended network, environmentally aware, evading detection and moving laterally until reaching the target.

Accomplish the mission: Whether the aim is to gather data or to destroy it, the attacker is positioned to maximize the mission’s success.

Given the attack chain, we can quickly see that motivated and sophisticated attackers can and do defeat even multiple layers of detection technologies. In fact, the Verizon 2012 Data Breach Investigations Report found that in over half of the incidents investigated it took months – sometimes even years – for a breach to be discovered. That’s more than ample time for the attacker to accomplish the mission, remove evidence and establish a beachhead for subsequent attacks.

Detection will always be important, but these technologies scan files only once, at an initial point in time, to determine whether they are malicious. If a file isn’t caught, or if it evolves and becomes malicious after entering your environment, point-in-time detection technologies cease to be a factor in the attacker’s unfolding follow-on activities.

Thwarting attacks can’t be just about detection but also about mitigating the impact once an attacker gets in. You need to take a proactive stance to understand the scope of the damage, contain the event, remediate it and bring operations back to normal. Technologies that also enable continuous analysis and retrospective security are now essential to defeat malware.

• Continuous analysis uses big data analytics to constantly gather and analyze files that have moved across the wire and into the network. Should a file pass through that was thought to be safe but later demonstrates malicious behavior, you can automatically be alerted to take action.

• Retrospective security uses this real-time security intelligence to determine the extent of the damage, contain it and remediate the malware. Compromises that would have gone undetected for weeks or months can be identified, scoped, contained and cleaned up rapidly.
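The two ideas above can be sketched together in a few lines. This is a hypothetical design sketch, not any vendor’s actual API: record the hash of every file observed crossing the wire, and when a verdict later changes, look back at which hosts already received the file.

```python
import hashlib

class FileTrajectory:
    """Toy continuous-analysis store with retrospective lookup."""

    def __init__(self):
        self.seen = {}        # sha256 hex digest -> hosts that received the file
        self.malicious = set()

    def record(self, payload: bytes, host: str):
        # Continuous analysis: log every file as it moves into the network.
        digest = hashlib.sha256(payload).hexdigest()
        self.seen.setdefault(digest, []).append(host)

    def flag_malicious(self, digest: str) -> list:
        # Retrospective security: a later "malicious" verdict immediately
        # yields the list of hosts that must be scoped and cleaned up.
        self.malicious.add(digest)
        return self.seen.get(digest, [])

tracker = FileTrajectory()
tracker.record(b"innocuous-looking dropper", "workstation-17")
tracker.record(b"innocuous-looking dropper", "laptop-03")
digest = hashlib.sha256(b"innocuous-looking dropper").hexdigest()
print(tracker.flag_malicious(digest))  # -> ['workstation-17', 'laptop-03']
```

The point of the design is that the file’s disposition is decoupled from the moment it crossed the perimeter: a verdict that arrives weeks later still maps back to every affected host.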

When it comes to defending our networks today, it’s clear that silver bullet solutions don’t exist. Not a day goes by that we don’t read about another successful breach. Attackers are thinking outside of the sandbox and so must we.

Ongoing “Invoice” Attack Campaign Delivers Booby-trapped PDFs

An ongoing malicious email campaign is masquerading as an unpaid invoice, a Kaspersky Lab researcher said Thursday.

In this recurring campaign, cyber-criminals are sending out emails with a malicious PDF attachment masquerading as notices and reminders to pay overdue bills, Ben Godwood, a researcher with Kaspersky Lab, wrote on the Securelist blog on Thursday. The email campaign appears to have been ongoing since November and follows a set schedule, hitting victims’ inboxes on either the 4th or the 21st of the month.

Kaspersky Lab detected the latest batch of specially crafted PDF messages on March 4, Godwood said. In this latest iteration of the campaign most of the emails were sent from German IP addresses and appear to have come from compromised home computers; previous messages appear to have been sent from infected bots in other countries.

Kaspersky blocked “a large number of emails” whose filenames included the word “invoice” on Feb. 21, Jan. 4, and Nov. 21, Godwood said. The messages originated from various countries, including South Africa, the United States, Australia, and Japan, and the attack code attempted to download additional malware from servers in Germany, the United Kingdom, Sweden, and Israel.

“Looking back through our past feedback data, we noticed similar patterns on the 4th and 21st of several months,” Godwood said.

The attack code in the booby-trapped PDF document exploited an old vulnerability in the image library used by Adobe Acrobat (CVE-2010-0188), Godwood found. The actual exploit was “not easy to spot” because it was buried under two layers of JavaScript, he said. Based on the samples posted on the blog, the actual attack code appears to have been hidden inside binary data. The second layer of JavaScript looks very similar to the code in various samples created by the BlackHole exploit kit last year, Godwood said.

When the victim opened the file, the attack code downloaded an executable file. The Trojan regularly communicates with a remote server after it installs itself.

If you receive an invoice on March 21 or April 4, be extra cautious, Godwood said. However, since the criminals can always change the dates they run the scam, “it’s better to be cautious all the time,” he said.


Source: www.securityweek.com

DDoS Attacks on Banks Resume – Are You Ready?

A new wave of DDoS attacks on banks and financial institutions has started.

The Qassam Cyber Fighters have updated their Pastebin page, announcing a new round of upcoming attacks.
The targets named for this wave are:

  • BB&T
  • Bank Of America
  • Chase Bank
  • PNC
  • Union Bank
  • US Bank
  • Fifth Third Bank
  • Citibank

Many US banks are under attack. The DDoS attacks are severe and cause outages of the victims’ online services.
The attacks are both volumetric and application-level, exploiting weaknesses in the banks’ cybersecurity defenses.
These attacks may soon spread to other territories and to verticals beyond banking.

Security specialists in all organizations must make sure they are ready and need to ask themselves:

  • Do we have a dedicated anti-DDoS solution?
  • Can the anti-DDoS solution protect our firewall, IPS and ADC?
  • Do we have a solution for a volumetric attack that saturates the Internet pipe?
  • Does our solution detect and mitigate SSL-based attacks on our secured online services?

You can read more on these attacks here: