ClearEnergy ransomware can destroy process automation logic in critical infrastructure, SCADA and industrial control systems.


Schneider Electric, Allen-Bradley, General Electric (GE) and other vendors are vulnerable to ClearEnergy ransomware.

Researchers at the CRITIFENCE® Critical Infrastructure and SCADA/ICS Cyber Threats Research Group demonstrated this week a new proof-of-concept ransomware attack that aims to erase (clear) the ladder logic diagram in Programmable Logic Controllers (PLCs). The ransomware, dubbed ClearEnergy, affects a massive range of PLC models from the world’s largest manufacturers of SCADA and industrial control systems, including Schneider Electric Unity series PLCs running Unity OS version 2.6 and later. PLC models from other leading vendors, including GE and Allen-Bradley (MicroLogix family), were also found to be vulnerable to the attack.

Ransomware is a type of malware that infects computers, encrypts their content with strong encryption algorithms, and then demands a ransom to decrypt that data. “The ClearEnergy attack is based on the most comprehensive and dangerous vulnerability ever found in critical infrastructure, SCADA and ICS systems, and affects a wide range of vulnerable products from different manufacturers and vendors. These attacks target the most important assets and critical infrastructure, not just because they are easy to attack but also because they are hard to recover,” says Brig. Gen. (ret.) Rami Ben Efraim, CEO at CRITIFENCE.

In 2016 we saw a rise in ransomware in which the victims were businesses or public organizations that, on one hand, had poor security and, on the other, faced a high cost from losing business continuity. Last year there were reports of targeted ransomware for PCs and other workstations within critical infrastructure, SCADA and industrial control systems. A month ago, scientists from the School of Electrical and Computer Engineering at the Georgia Institute of Technology simulated a limited-scope proof-of-concept ransomware attack (LogicLocker) designed to attack critical infrastructure, SCADA and industrial control systems.

ClearEnergy acts similarly to other malicious ransomware programs that infect computers, encrypt their content with strong encryption algorithms, and then demand a ransom to restore that data to its original form, with one major difference: ClearEnergy is designed to target critical infrastructure and SCADA systems such as nuclear and power plant facilities, water and waste facilities, transportation infrastructure and more.

“Despite the codename ClearEnergy, the vulnerabilities behind the ransomware take us to our worst nightmares, where cyber-attacks meet critical infrastructure. Attackers can now take down our electricity, our water supply and our oil and gas infrastructure by compromising power plants, water dams and nuclear plants. Critical infrastructure is the place where terrorists, activists, criminals and state actors can have the biggest effect. They have the motivation, and ClearEnergy shows that they also have the opportunity,” says Brig. Gen. (ret.) Rami Ben Efraim, CEO at CRITIFENCE.

Once executed on the victim machine, ClearEnergy searches for vulnerable PLCs, grabs the ladder logic diagram from each PLC, and tries to upload it to a remote server. Finally, ClearEnergy starts a timer that triggers a process to wipe the logic diagram from all PLCs after one hour, unless the victim pays to cancel the timer and stop the attack.

SCADA and industrial control systems have proven weak in recent years against numerous types of attacks, resulting in damage in the form of loss of service, which translates to a power outage, or sabotage. The damage a ClearEnergy attack can cause to critical infrastructure is high, since it can cause a power failure and other damage to field equipment, making the recovery process slow in most cases; it might even bring a plant to a halt.

ClearEnergy is based on vulnerabilities CVE-2017-6032 (SVE-82003203) and CVE-2017-6034 (SVE-82003204), discovered by CRITIFENCE security researchers, which expose profound security flaws in Schneider Electric’s UMAS protocol. UMAS appears to suffer from critical vulnerabilities in the form of a badly designed protocol session key, which results in authentication bypass. “UMAS is a kernel-level protocol and an administrative control layer used in the Unity series PLCs and Unity OS from 2.6. It relies on the Modicon Modbus protocol, a common protocol in critical infrastructure, SCADA and industrial control systems, and is used to access both unallocated and allocated memory from the PLC to the SCADA system. What worries our researchers is that it may not be entirely patched within the coming years, since it affects a wide range of hardware and vendors,” says Mr. Eran Goldstein, CTO and Founder of CRITIFENCE.

Following the disclosure, Schneider Electric confirmed that the Modicon family of PLC products is vulnerable to the findings presented by CRITIFENCE and released an Important Cybersecurity Notification (SEVD-2017-065-01). ICS-CERT, under the Department of Homeland Security (DHS), released an important advisory earlier this morning ([April 11, 2017] ICSA-17-101-01). The basic flaw, confirmed by Schneider Electric, allows an attacker to easily guess the weak (1-byte) session key (256 possibilities) or even to sniff it. Using the session key, the attacker gains full control of the controller, can read the controller’s program and rewrite it back with malicious code.
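To see why a 1-byte session key offers essentially no protection, consider the sketch below. It is a conceptual illustration only: it does not speak UMAS or Modbus, and the `try_key` callback is a hypothetical stand-in for whatever mechanism an attacker would use to test a candidate key against a controller. With only 256 possible values, exhaustive search always succeeds almost immediately.

```python
# Conceptual sketch: a 1-byte session key leaves only 256 candidates.
# `try_key` is a hypothetical callback standing in for a real protocol probe.
def guess_session_key(try_key):
    for candidate in range(256):
        if try_key(candidate):
            return candidate
    return None  # no key accepted (e.g., target unreachable)

# Demo against a simulated controller that holds key 0x5A:
secret = 0x5A
found = guess_session_key(lambda k: k == secret)
```

At most 256 attempts are needed, which is why the advisory treats the session key as offering no meaningful authentication.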

“The recovery process from this type of cyber-attack can be very hard and slow in most cases due to the lack of management resources in the field of SCADA and process automation. A slow recovery process, multiplied by the number of devices that need to be fixed, plus configuration restoration, makes recovery very painful,” says Mr. Alexey Baltacov, Critical Infrastructure Architect at CRITIFENCE.

“Recovering from such an attack would be a slow and tedious process, prone to many failures. Every plant using PLCs as part of a production line would have dozens of these devices all around the plant. Let’s assume that each PLC is indeed backed up to its recent configuration. It would take a painstakingly long time to restore each and every one of them to its original status,” says Mr. Eyal Benderski, Head of the Critical Infrastructure and SCADA/ICS Cyber Threats Research Group at CRITIFENCE. “This restoration process would take a long time, during which the plant would be completely shut down. The cost of that shutdown could be substantial, and for critical processes the impact could extend beyond the downtime itself, as is the case with energy plants. Consider a process that relies on keeping a constant temperature for a biological agent or a chemical process: breaking the process chain could require a re-initialization lasting days or weeks. Furthermore, since dealing with the OT network is much more complicated for operational reasons, on many occasions plants don’t even have up-to-date backups, which would require complete reconfiguration of the manufacturing process. Given these complications, plants would very much prefer paying the ransom to betting on the minor chance that the backups will work as expected. Lastly, let’s assume the backups went on-air as soon as possible: what would prevent the same attack from recurring, even after paying?”

About the author:

CRITIFENCE is a leading critical infrastructure, SCADA and industrial control systems cyber security firm. The company developed and provides SCADAGate+, a unique passive cyber security technology designed for critical infrastructure, SCADA and ICS visibility and vulnerability assessment, which allows OT network cyber security events and vulnerabilities to be monitored, controlled and analyzed easily and entirely passively. CRITIFENCE’s development team and its Critical Infrastructure and SCADA/ICS Cyber Threats Research Group combine highly experienced SCADA and cyber security experts and researchers from the IDF’s Technology & Intelligence Unit 8200 (Israel’s NSA) and the Israeli Air Force (IAF).

For more information about CRITIFENCE refer to:







Source code

ClearEnergy | UMASploit –


Schneider Electric – SEVD-2017-065-01

ICS-CERT, Department of Homeland Security (DHS) – ICSA-17-101-01



SecurityAffairs –

0xICF –

VirusGuides –


Flaws in Samsung’s ‘Smart’ Home Let Hackers Unlock Doors and Set Off Fire Alarms





Credit: Andy Greenberg, Wired

[CRITICAL] Nissan Leaf Can Be Hacked Via Web Browser From Anywhere In The World

How The Nissan Leaf Can Be Hacked Via Web Browser From Anywhere In The World

What if a car could be controlled from a computer halfway around the world? Computer security researcher and hacker Troy Hunt has managed to do just that, via a web browser and an Internet connection, with an unmodified Nissan Leaf in another country. While so far the control was limited to the HVAC system, it’s a revealing demonstration of what’s possible.

Hunt writes that his experiment started when an attendee at a developer security conference where Hunt was presenting realized that his car, a Nissan Leaf, could be accessed via the internet using Nissan’s phone app. Using the same methods as the app itself, any other Nissan Leaf could be controlled as well, from pretty much anywhere.

Hunt made contact with another security researcher and Leaf-owner, Scott Helme. Helme is based in the UK, and Hunt is based in Australia, so they arranged an experiment that would involve Hunt controlling Helme’s LEAF from halfway across the world. Here’s the video they produced of that experiment:

As you can see, Hunt was able to access the Leaf in the UK, which wasn’t even on, and gather extensive data from the car’s computer about recent trips, distances of those trips (recorded, oddly, in yards), power usage information, charge state, and so on. He was also able to access the HVAC system to turn on the heater or A/C, and to turn on the heated seats.

It makes sense these functions would be the most readily available, because those are essentially the set of things possible via Nissan’s Leaf mobile app, which people use to heat up or cool their cars before they get to them, remotely check on the state of charge, and so on.

This app is the key to how the Leaf can be accessed via the web, since that’s exactly what the app does. The original (and anonymous) researcher found that by making his computer a proxy between the app and the internet, the requests made from the app to Nissan’s servers can be seen. Here’s what a request looks like:

GET https://[redacted].com/orchestration_1111/gdc/BatteryStatusRecordsRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris&TimeFrom=2014-09-27T09:15:21

If you look at that request, you can see that part of it includes a tag for VIN, which is the Vehicle Identification Number (obfuscated here) of the car. Changing this VIN is really all you need to do to access any particular Leaf. Remember, VINs are visible through the windshield of every car, by law.
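The request above can be reconstructed in a few lines of Python (stdlib only). The hostname is a placeholder, since the real server was redacted in the disclosure, and the function name is mine; the point is that the VIN is the only identifier in the URL — there is no session cookie, token, or signature anywhere.

```python
from urllib.parse import urlencode

# The disclosure redacted the real hostname, so BASE is a placeholder.
BASE = "https://redacted.example/orchestration_1111/gdc"

def battery_status_url(vin):
    """Build the anonymous battery-status request described above.
    Note: the VIN is the only 'credential' the service checked."""
    params = {
        "RegionCode": "NE",
        "lg": "no-NO",
        "DCMID": "",
        "VIN": vin,
        "tz": "Europe/Paris",
        "TimeFrom": "2014-09-27T09:15:21",
    }
    return f"{BASE}/BatteryStatusRecordsRequest.php?{urlencode(params)}"

url = battery_status_url("SJNFAAZE0U60XXXXX")
```

Fetching that URL from any browser or HTTP client returns the battery status, which is exactly what the anonymous researcher observed.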

Hunt describes the process on his site, and notes some alarming details:

This is pretty self-explanatory if you read through the response; we’re seeing the battery status of his LEAF. But what got Jan’s attention is not that he could get the vehicle’s present status, but rather that the request his phone had issued didn’t appear to contain any identity data about his authenticated session.

In other words, he was accessing the API anonymously. It’s a GET request so there was nothing passed in the body nor was there anything like a bearer token in the request header. In fact, the only thing identifying his vehicle was the VIN which I’ve partially obfuscated in the URL above.

So, there’s no real security here to prevent accessing data on a LEAF, nor any attempt to verify the identity on either end of the connection.


And it gets worse. Here, quoting from Hunt’s site, he’s using the name “Jan” to refer to the anonymous Leaf-owning hacker who discovered this:

But then he tried turning it on and observed this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

That request returned this response:



{
    message: "success",
    userId: "******",
    vin: "SJNFAAZE0U60****",
    resultKey: "***************************"
}


This time, personal information about Jan was returned, namely his user ID which was a variation of his actual name. The VIN passed in the request also came back in the response and a result key was returned.

He then turned the climate control off and watched as the app issued this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteOffRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

All of these requests were made without an auth token of any kind; they were issued anonymously. Jan checked them by loading them up in Chrome as well and sure enough, the response was returned just fine. By now, it was pretty clear the API had absolutely zero access controls but the potential for invoking it under the identity of other vehicles wasn’t yet clear.

Even if you don’t understand the code, here’s what all that means: we have the ability to get personal data and control functions of the car from pretty much anywhere with a web connection, as long as you know the target car’s VIN.

Hunt proved this was possible after some work, using a tool to generate Leaf VINs (only the last 5 or 6 digits are actually different) and sending a request for battery status to those VINs. Soon, they got the proper response back. Hunt explains the significance:
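That enumeration step is trivially scriptable. The sketch below is illustrative only: the prefix is taken from the obfuscated VIN in the article, not from a real VIN allocation scheme, and the function name is mine. It simply generates candidate 17-character VINs by varying the numeric tail, as Hunt describes.

```python
# Illustrative enumeration sketch. PREFIX is the obfuscated VIN stem from the
# article; real Leaf VINs share a prefix and differ mainly in the last digits.
PREFIX = "SJNFAAZE0U60"

def candidate_vins(start, count):
    """Yield `count` candidate 17-character VINs from numeric suffix `start`."""
    for n in range(start, start + count):
        yield f"{PREFIX}{n:05d}"

sample = list(candidate_vins(0, 3))
```

Feeding each candidate into the battery-status request and watching which ones answer is all it took to find other people’s cars.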

This wasn’t Jan’s car; it was someone else’s LEAF. Our suspicion that the VIN was the only identifier required was confirmed and it became clear that there was a complete lack of auth on the service.

Of course it’s not just an issue related to retrieving vehicle status, remember the other APIs that can turn the climate control on or off. Anyone could potentially enumerate VINs and control the physical function of any vehicles that responded. That was a very serious issue. I reported it to Nissan the day after we discovered this (I wanted Jan to provide me with more information first), yet as of today – 32 days later – the issue remains unresolved. You can read the disclosure timeline further down but certainly there were many messages and a phone call over a period of more than four weeks and it’s only now that I’m disclosing publicly…


(Now, just to be clear, this is not a how-to guide to mess with someone’s Leaf. You’ll note that the crucial server address has been redacted, so you can’t just type in those little segments of code and expect things to work.)

While at the moment, you can only control some HVAC functions and get access to the car’s charge state and driving history, that’s actually more worrying than you may initially think.

Not only is there the huge privacy issue of having your comings-and-goings logged and available, but if someone wanted to, they could crank the AC and drain the battery of a Leaf without too much trouble, stranding the owner somewhere.

There’s no provision for remote starting or unlocking at this point, but the Leaf is a fully drive-by-wire vehicle. It’s no coincidence that every fully autonomous car I’ve been in that’s made by Nissan has been on the LEAF platform; all of its major controls can be accessed electronically. For example, the steering wheel can be controlled (and was controlled, as I saw when visiting Nissan’s test facility) by the motors used for power steering assist, and it’s throttle (well, for electrons)-by-wire, and so on.

So, at this moment I don’t think anyone’s Leaf is in any danger other than having a drained battery and an interior like a refrigerator, but that’s not to say nothing else will be figured out. This is a huge security breach that Nissan needs to address as soon as possible. (I reached out to Nissan for comment on this story and will update as soon as I get one.)

So far, Nissan has not fixed this after at least 32 days, Hunt said. Here’s how he summarized his contact with Nissan:

I made multiple attempts over more than a month to get Nissan to resolve this and it was only after the Canadian email and French forum posts came to light that I eventually advised them I’d be publishing this post. Here’s the timeline (dates are Australian Eastern Standard time):

  • 23 Jan: Full details of the findings sent and acknowledged by Nissan Information Security Threat Intelligence in the U.S.A.
  • 30 Jan: Phone call with Nissan to fully explain how the risk was discovered and the potential ramifications followed up by an email with further details
  • 12 Feb: Sent an email to ask about progress and offer further support to which I was advised “We’re making progress toward a solution”
  • 20 Feb: Sent details as provided by the Canadian owner (including a link to the discussion of the risk in the public forum) and advised I’d be publishing this blog post “later next week”
  • 24 Feb: This blog published, 4 weeks and 4 days after first disclosure

All in all, I sent ten emails (there was some to-and-fro) and had one phone call. This morning I did hear back with a request to wait “a few weeks” before publishing, but given the extensive online discussions in public forums and the more than one-month lead time there’d already been, I advised I’d be publishing later that night and have not heard back since. I also invited Nissan to make any comments they’d like to include in this post when I contacted them on 20 Feb or provide any feedback on why they might not consider this a risk. However, there was nothing to that effect when I heard back from them earlier today, but I’ll gladly add an update later on if they’d like to contribute.

I do want to make it clear though that especially in the earlier discussions, Nissan handled this really well. It was easy to get in touch with the right people quickly and they made the time to talk and understand the issue. They were receptive and whilst I obviously would have liked to see this rectified quickly, compared to most ethical disclosure experiences security researchers have, Nissan was exemplary.

It’s great Nissan was “exemplary” but it would have been even better if they’d implemented at least some basic security before making their cars’ data and controls available over the internet.


Security via obscurity just isn’t going to cut it anymore, as Troy Hunt has proven through his careful and methodical work. I’m not sure why carmakers don’t seem to be taking this sort of security seriously, but it’s time for them to step up.

After all, doing so will save them from PR headaches like this, and the likely forthcoming stories your aunt will Facebook you about how the terrorists are going to make all the Leafs hunt us down like dogs.

Until they have to recharge, at least.

(Thanks, Matt and Brandon!)



Credit:  Jason Torchinsky

[CRITICAL] CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow

Have you ever been deep in the mines of debugging and suddenly realized that you were staring at something far more interesting than you were expecting? You are not alone! Recently a Google engineer noticed that their SSH client segfaulted every time they tried to connect to a specific host. That engineer filed a ticket to investigate the behavior and after an intense investigation we discovered the issue lay in glibc and not in SSH as we were expecting. Thanks to this engineer’s keen observation, we were able to determine that the issue could result in remote code execution. We immediately began an in-depth analysis of the issue to determine whether it could be exploited, and possible fixes. We saw this as a challenge, and after some intense hacking sessions, we were able to craft a full working exploit!

In the course of our investigation, and to our surprise, we learned that the glibc maintainers had previously been alerted of the issue via their bug tracker in July, 2015. (bug). We couldn’t immediately tell whether the bug fix was underway, so we worked hard to make sure we understood the issue and then reached out to the glibc maintainers. To our delight, Florian Weimer and Carlos O’Donell of Red Hat had also been studying the bug’s impact, albeit completely independently! Due to the sensitive nature of the issue, the investigation, patch creation, and regression tests performed primarily by Florian and Carlos had continued “off-bug.”

This was an amazing coincidence, and thanks to their hard work and cooperation, we were able to translate both teams’ knowledge into a comprehensive patch and regression test to protect glibc users.

That patch is available here.


Issue Summary:

Our initial investigations showed that the issue affects all versions of glibc since 2.9; you should definitely update if you are on an older version, though. If the vulnerability is detected, machine owners may wish to take steps to mitigate the risk of an attack. The glibc DNS client-side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack. Google has found some mitigations that may help prevent exploitation if you are not able to immediately patch your instance of glibc. The vulnerability relies on an oversized (2048+ bytes) UDP or TCP response, which is followed by another response that will overwrite the stack. Our suggested mitigation is to limit the response sizes accepted by the local DNS resolver (e.g., via DNSMasq or similar programs), and to ensure that DNS queries are sent only to DNS servers that limit the response size for UDP responses with the truncation bit set.


Technical information:

glibc reserves 2048 bytes in the stack through alloca() for the DNS answer at _nss_dns_gethostbyname4_r() for hosting responses to a DNS query. Later on, at send_dg() and send_vc(), if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated. Under certain conditions a mismatch between the stack buffer and the new heap allocation will happen. The final effect is that the stack buffer will be used to store the DNS response, even though the response is larger than the stack buffer and a heap buffer was allocated. This behavior leads to the stack buffer overflow. The vectors to trigger this buffer overflow are very common and can include ssh, sudo, and curl. We are confident that the exploitation vectors are diverse and widespread; we have not attempted to enumerate these vectors further.
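The bookkeeping error is easier to see in a toy model. The following Python sketch is a conceptual model of the bug pattern only, not glibc code (the class and method names are invented): it tracks a buffer and its size the way the resolver does, and reproduces the mismatch in which the size field is updated for the new heap allocation while the buffer reference still points at the 2048-byte stack buffer.

```python
STACK_BUF_SIZE = 2048  # matches the alloca() reservation described above

class ResolverState:
    """Toy model of the bookkeeping that goes wrong (not real glibc code)."""
    def __init__(self):
        self.buf = bytearray(STACK_BUF_SIZE)  # stands in for the stack buffer
        self.buf_size = STACK_BUF_SIZE

    def grow_buggy(self, response_len):
        # Buggy path: a heap buffer is allocated and the size is updated,
        # but self.buf is NOT repointed to the new allocation.
        if response_len > self.buf_size:
            heap_buf = bytearray(response_len)  # allocated ...
            self.buf_size = response_len        # ... size updated ...
            # self.buf = heap_buf  <-- missing: buf still = stack buffer

    def write_response(self, data):
        # The caller trusts buf_size, so an oversized write lands in the
        # undersized "stack" buffer. Return how many bytes would spill.
        assert len(data) <= self.buf_size
        return max(0, len(data) - len(self.buf))

state = ResolverState()
state.grow_buggy(4096)                    # oversized (2048+) response arrives
spill = state.write_response(b"A" * 4096) # 2048 bytes past the stack buffer
```

In real glibc the spilled bytes smash the stack, which is exactly the overflow the patch closes by keeping pointer and size updates consistent.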


Remote code execution is possible, but not straightforward. It requires bypassing the security mitigations present on the system, such as ASLR. We will not release our exploit code, but a non-weaponized Proof of Concept has been made available simultaneously with this blog post. With this Proof of Concept, you can verify if you are affected by this issue, and verify any mitigations you may wish to enact. As you can see in the below debugging session we are able to reliably control EIP/RIP.

(gdb) x/i $rip
=> 0x7fe156f0ccce <_nss_dns_gethostbyname4_r+398>: req
(gdb) x/a $rsp
0x7fff56fd8a48: 0x4242424242424242  0x4242424242420042

When code crashes unexpectedly, it can be a sign of something much more significant than it appears; ignore crashes at your peril! Failed exploit indicators, due to ASLR, can range from:

  • Crash on free(ptr) where ptr is controlled by the attacker.
  • Crash on free(ptr) where ptr is semi-controlled by the attacker since ptr has to be a valid readable address.
  • Crash reading from memory pointed by a local overwritten variable.
  • Crash writing to memory on an attacker-controlled pointer.

We would like to thank Neel Mehta, Thomas Garnier, Gynvael Coldwind, Michael Schaller, Tom Payne, Michael Haro, Damian Menscher, Matt Brown, Yunhong Gu, Florian Weimer, Carlos O’Donell and the rest of the glibc team for their help figuring out all details about this bug, exploitation, and patch development.



Credit:  Fermin J. Serna and Kevin Stadmeyer

Another Door to Windows | Hot Potato exploit

Microsoft Windows versions 7, 8, 10, Server 2008 and Server 2012 vulnerable to Hot Potato exploit which gives total control of PC/laptop to hackers

Security researchers from Foxglove Security have discovered that almost all recent versions of Microsoft’s Windows operating system are vulnerable to a privilege escalation exploit. By chaining together a series of known Windows security flaws, the researchers found a way to break into PCs, systems and laptops running Windows 7/8/8.1/10 and Windows Server 2008/2012.

The Foxglove researchers have named the exploit Hot Potato. It relies on three different types of attacks, some of which were discovered back at the start of the new millennium, in 2000. By chaining these together, hackers can gain complete access to PCs and laptops running the above versions of Windows.

Surprisingly, some of the exploits were found way back in 2000 but have still not been patched by Microsoft, with the explanation that by patching them, the company would effectively break compatibility between the different versions of their operating system.

Hot Potato

Hot Potato is the sum of three different security issues in the Windows operating system: a local NBNS (NetBIOS Name Service) spoofing technique that’s 100% effective, the ability for potential hackers to set up fake WPAD (Web Proxy Auto-Discovery Protocol) proxy servers, and an attack against the Windows NTLM (NT LAN Manager) authentication protocol.

Chaining these flaws together allows hackers to gain access to a PC or laptop by elevating an application’s permissions from the lowest rank to system-level privileges, the Windows analog of a Linux/Android root user’s permissions.

Foxglove researchers created their exploit on top of a proof-of-concept code released by Google’s Project Zero team in 2014 and have presented their findings at the ShmooCon security conference over the past weekend.

They have also posted proof-of-concept videos on YouTube in which the researchers break Windows versions such as 7, 8, 10, Server 2008 and Server 2012.

You can also access the proof of concept on Foxglove’s GitHub page here.


The researchers said that using SMB (Server Message Block) signing may theoretically block the attack. Another way to stop the NTLM relay attack is to enable “Extended Protection for Authentication” in Windows.



Credit:  Vijay Prabhu, techworm

BlackEnergy Attacking Ukraine’s Critical Infrastructures

The cybercriminal group behind BlackEnergy, the malware family that has been around since 2007 and made a comeback in 2014 (see our previous blog posts on Back in BlackEnergy *: 2014 Targeted Attacks in Ukraine and Poland and BlackEnergy PowerPoint Campaigns, as well as our Virus Bulletin talk on the subject), was also active in 2015.

ESET recently discovered that the BlackEnergy trojan was used as a backdoor to deliver a destructive KillDisk component in attacks against Ukrainian news media companies and against the electrical power industry. In this blog, we provide details on the BlackEnergy samples ESET has detected in 2015, as well as the KillDisk components used in the attacks. Furthermore, we examine a previously unknown SSH backdoor that was also used as another channel of accessing the infected systems, in addition to BlackEnergy.

BlackEnergy evolution in 2015

Once activated, variants of BlackEnergy Lite allow a malware operator to check specific criteria in order to assess whether the infected computer truly belongs to the intended target. If that is the case, the dropper of a regular BlackEnergy variant is pushed to the system.

The BlackEnergy malware stores its XML configuration data embedded in the binary of the DLL payload.

Figure 1 – The BlackEnergy configuration example used in 2015

Apart from a list of C&C servers, the BlackEnergy config contains a value called build_id. This value is a unique text string used to identify individual infections or infection attempts by the BlackEnergy malware operators. The combinations of letters and numbers used can sometimes reveal information about the campaign and targets.

Here is the list of Build ID values that we identified in 2015:

  • 2015en
  • khm10
  • khelm
  • 2015telsmi
  • 2015ts
  • 2015stb
  • kiev_o
  • brd2015
  • 11131526kbp
  • 02260517ee
  • 03150618aaa
  • 11131526trk

We can speculate that some of them have a special meaning. For example, 2015telsmi could contain the Russian acronym SMI – Sredstva Massovoj Informacii, 2015en could mean Energy, and there’s also the obvious “Kiev”.
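For analysts triaging samples, pulling the build_id and C&C list out of an extracted configuration is a few lines of standard-library Python. The XML layout below is purely hypothetical — the article does not reproduce the real BlackEnergy schema, so the tag names and the sample server address are assumptions for illustration only.

```python
import xml.etree.ElementTree as ET

# Hypothetical config layout for illustration only; the real BlackEnergy
# schema, tag names and server addresses are not reproduced in this article.
SAMPLE_CONFIG = """
<config>
  <servers>
    <addr>https://cc.example/</addr>
  </servers>
  <build_id>2015en</build_id>
</config>
"""

def extract_iocs(xml_text):
    """Return (build_id, list of C&C addresses) from an extracted config."""
    root = ET.fromstring(xml_text)
    return root.findtext("build_id"), [a.text for a in root.iter("addr")]

build_id, servers = extract_iocs(SAMPLE_CONFIG)
```

Collecting build_id values this way across samples is how lists like the one above get assembled and compared between campaigns.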

KillDisk component

In 2014 some variants of the BlackEnergy trojan contained a plugin designed for the destruction of the infected system, named dstr.

In 2015 the BlackEnergy group started to use a new destructive BlackEnergy component detected by ESET products as Win32/KillDisk.NBB, Win32/KillDisk.NBC and Win32/KillDisk.NBD trojan variants.

The main purpose of this component is to do damage to data stored on the computer: it overwrites documents with random data and makes the OS unbootable.

The first known case where the KillDisk component of BlackEnergy was used was documented by CERT-UA in November 2015. In that instance, a number of news media companies were attacked at the time of the 2015 Ukrainian local elections. The report claims that a large number of video materials and various documents were destroyed as a result of the attack.

It should be noted that the Win32/KillDisk.NBB variant used against media companies is more focused on destroying various types of files and documents. It has a long list of file extensions that it tries to overwrite and delete. The complete list contains more than 4000 file extensions.


Figure 2 – A partial list of file extensions targeted for destruction by KillDisk.NBB

The KillDisk component used in attacks against energy companies in Ukraine was slightly different. Our analysis of the samples shows that the main changes made in the newest version are:

  • Now it accepts a command line argument, to set a specific time delay when the destructive payload should activate.
  • It also deletes Windows Event Logs: Application, Security, Setup, System.
  • It is less focused on deleting documents. Only 35 file extensions are targeted.

Figure 3 – A list of file extensions targeted for destruction by the new variant of the KillDisk component

As well as being able to delete system files to make the system unbootable – functionality typical for such destructive trojans – the KillDisk variant detected in the electricity distribution companies also appears to contain some additional functionality specifically intended to sabotage industrial systems.

Once activated, this variant of the KillDisk component looks for and terminates two non-standard processes with the following names:

  • komut.exe
  • sec_service.exe

We didn’t manage to find any information regarding the name of the first process (komut.exe).

The second process name may belong to software called ASEM Ubiquity, a software platform that is often used in industrial control systems (ICS), or to ELTIMA Serial to Ethernet Connector. If the process is found, the malware does not just terminate it, but also overwrites the executable file with random data.
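The two process names are directly usable as indicators of compromise. A minimal, hypothetical sweep in Python (the matching logic is generic; only the names come from the analysis above, and on a live Windows host the list of running processes would come from something like `tasklist` output):

```python
# Process names the KillDisk variant looks for and terminates,
# per the ESET analysis.
KILLDISK_TARGET_PROCESSES = {"komut.exe", "sec_service.exe"}

def suspicious_processes(running, iocs=KILLDISK_TARGET_PROCESSES):
    """Case-insensitively match running process names against the IoC set,
    returning the matches in sorted order."""
    return sorted(name for name in running if name.lower() in iocs)
```

A hit is only a starting point: sec_service.exe in particular is legitimate software, so the signal is the pairing of that process with other BlackEnergy indicators.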

Backdoored SSH server

In addition to the malware families already mentioned, we have discovered an interesting sample used by the BlackEnergy group. During our investigation of one of the compromised servers we found an application that, at first glance, appeared to be a legitimate SSH server called Dropbear SSH.

In order to run the SSH server, the attackers created a VBS file with the following content:

Set WshShell = CreateObject("WScript.Shell")
WshShell.CurrentDirectory = "C:\WINDOWS\TEMP\Dropbear\"
WshShell.Run "dropbear.exe -r rsa -d dss -a -p 6789", 0, false

As is evident here, the SSH server will accept connections on port number 6789. By running SSH on the server in a compromised network, attackers can come back to the network whenever they want.
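Because the rogue Dropbear instance listened on a fixed, non-standard port, a simple connectivity probe can flag hosts worth a closer look. A minimal sketch in Python; a successful connect is only a hint that warrants follow-up, not proof of compromise, since any service could legitimately occupy the port:

```python
import socket

def ssh_backdoor_port_open(host, port=6789, timeout=1.0):
    """Check whether a TCP connection to the given port succeeds.

    Port 6789 is the non-standard port the backdoored Dropbear
    server was configured to listen on.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```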

However, for some reason this was not enough for them. After detailed analysis we discovered that the binary of the SSH server actually contains a backdoor.

Figure 4 – Backdoored authentication function in SSH server

As you can see in Figure 4, this version of Dropbear SSH will authenticate the user if the password passDs5Bu9Te7 was entered. The same situation applies to authentication by key pair – the server contains a pre-defined constant public key and it allows authentication only if a particular private key is used.
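Assuming the password is embedded as a plain string in the trojanized binary (as the disassembly in Figure 4 suggests), a byte-level search across candidate executables is a cheap first-pass check. A hedged sketch:

```python
# Hard-coded backdoor password reported for this trojanized
# Dropbear build.
BACKDOOR_PASSWORD = b"passDs5Bu9Te7"

def contains_backdoor_string(path, needle=BACKDOOR_PASSWORD, chunk=1 << 20):
    """Stream the file in chunks, keeping an overlap so matches that
    straddle a chunk boundary are not missed."""
    overlap = len(needle) - 1
    tail = b""
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                return False
            if needle in tail + block:
                return True
            tail = block[-overlap:]
```

A signature-based scan like this would miss a repacked or re-obfuscated build, so it complements rather than replaces the vendor detection named below.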

Figure 5 – The embedded RSA public key in SSH server

ESET security solutions detect this threat as Win32/SSHBearDoor.A trojan.

Indicators of Compromise (IoC)

IP addresses of BlackEnergy C2-servers:

XLS document with malicious macro SHA-1:

BlackEnergy Lite dropper SHA-1:

BlackEnergy Big dropper SHA-1:

BlackEnergy drivers SHA-1:

KillDisk-components SHA-1:

VBS/Agent.AD trojan SHA-1:

Win32/SSHBearDoor.A trojan SHA-1:

Credit: welivesecurity

Malware Found Inside Downed Ukrainian Grid Management Points to Cyber-attack

The Burshtyn TES power plant in Ivano-Frankivsk Oblast, Ukraine. It’s not clear if Burshtyn was affected, but power outages did affect the grid in the Ivano-Frankivsk Oblast region. Image: Raimond Spekking/Wikimedia Commons


On December 23, a Ukrainian power company announced that a section of the country had gone dark. This temporary outage was not the result of purely physical sabotage—like the case a month earlier where explosives had knocked out power lines to Crimea—but instead, according to Ukrainian officials, was due to a cyberattack.

The country’s SBU security service immediately castigated Russia for the outage, according to Reuters, and Ukraine started an official investigation into what exactly happened.

Over the past few days, more details around the attack have emerged, including an apparent sample of malware found in a network of the regional control center. If that malware was indeed responsible for causing a blackout throughout parts of Ukraine, it would be a signal that industrial control systems (ICS), and in particular electric grids, really are under threat from cyberattacks, something that researchers have been warning for years.

“It was easily recoverable, but obviously it’s a bad thing for the power to go out”

Around a week after the attack announcement, Robert M. Lee, a former US Air Force cyber warfare operations officer as well as the founder and CEO of Dragos Security, wrote on the SANS ICS Security Blog that his team had obtained a sample of the malware found within the affected network.

“The fact that malware was recovered from the network at all, and the fact that it’s newer, gives a high confidence assessment that the cyberattack on Ukraine was legitimate,” Lee told Motherboard in a phone interview. Lee said the malware was “unique,” implying that it likely wasn’t something that just happened to be on the grid network during the outage.

“The malware is a 32 bit Windows executable and is modular in nature indicating that this is a module of a more complex piece of malware,” Lee wrote in his blog post. Lee passed the sample to Kyle Wilhoit, a senior threat researcher at cybersecurity company Trend Micro, who said that the malware had a wiping function that would impact the targeted system.

“The resolution of APIs that are not used elsewhere in the code probably means that some of the code was borrowed from another program,” wrote Jake Williams, founder of Rendition Security and a SANS instructor, to whom Lee also provided the malware. Williams added that the malware appears to have a code “base,” on which modules are then added.

Other pieces of malware have targeted industrial systems in the past: “Havex” has infected technology commonly used in process control systems, such as water pumps and turbines; and “BlackEnergy,” which has been used in straight-up cybercriminal campaigns, has also been used to hit energy engineering facilities.

An Associated Press investigation published in December last year found that “sophisticated foreign hackers” had gained enough access to control power plant networks around a dozen times in the last decade. More broadly, the Wall Street Journal recently revealed that Iranian hackers had breached a New York dam in 2013. At the latest Chaos Communication Congress, a security, politics and art conference in Hamburg, Germany, researchers warned of the serious vulnerabilities in automated railroad systems. All of those require varying degrees of sophistication, with some of them needing expert knowledge of the target network’s protocols and idiosyncrasies.

After Lee’s post, more researchers published their own findings. Analysts from ESET claimed that the malware found in Ukraine was actually the BlackEnergy malware. Others went a step further, and wrote that BlackEnergy has been found within other Ukrainian power companies during the week of Christmas last year.

One group that has made heavy use of the BlackEnergy malware, and has previously targeted power facilities and other ICS, is the alleged Russian hacking group Sandworm. It would be easy to assume that, because of the target and the presence of supposed BlackEnergy malware, Sandworm was behind the attack.

But that’s a logical leap too far, at least with the currently available evidence.

“The BlackEnergy malware has been in existence since 2007 and lots of different actors have used it,” Lee told Motherboard.

“People are saying that this piece of malware is linked to BlackEnergy. I can buy that, and there is some good analysis to say that is likely true,” he added. “But just because the BlackEnergy malware was used, does not mean that it’s linked at all” to Sandworm.

Irrespective of who committed the attack, what appears to have happened is that hackers “caused a power outage that was temporary in nature. It was easily recoverable, but obviously it’s a bad thing for the power to go out,” Lee said. “It’s not trivial—it still takes getting on the system and exploiting all that—but it’s not hard.”

One possible explanation is that the attackers may have remotely accessed a digital control panel located within the control center’s system. Other researchers have pointed towards the data wiping feature of the malware; presumably, wiping out vital data could have a negative impact on the electric grid’s systems. At this point, both of those theories are largely speculative.

But while either of those approaches are relatively easy for a hacker to carry out, attacks that would cause much more impact—that lasted for say, weeks or months—are much less likely to occur.

“Taking down the power grid, or cascading failures, or weeks of impact: that is incredibly hard. People have oversold how easy that is to achieve,” Lee added.

Although experts say it is likely that the power outage in Ukraine was caused by a cyberattack, there are still plenty of questions to be answered. More news is sure to follow in the coming days or weeks, as several research teams now have access to the malware sample.

Correction 1/4/16: This story originally referred to systems being compromised in a power plant or plants on the affected grid. As Michael Toecker pointed out, local sources report it was a regional control center that was affected.


Malware Analysis

The SANS ICS team recently gained access to a sample of malware that came from the network of the Ukrainian site targeted in the cyber attack that led to a power outage. I want to offer a few caveats to this blog post up front.


  • First, this is all developing and the next few days and weeks will add clarity to the situation.
  • Second, with this type of analysis there’s not much that can be definitively stated in terms of attribution or impact. Take everything here as informative only.
  • Third, SANS ICS is not in the business of releasing highly detailed technical analysis of malware. The purpose of this blog is to focus on lessons learned and education for the community. Therefore, I am not going to be sharing the hash of the sample we have but instead talking about the takeaways. There are at least 3 major cybersecurity and threat intelligence vendors I am aware of that have the sample and will be releasing detailed analyses. I do not want us at SANS ICS to impede that by releasing the sample to the wider community right now. However, to any of the major players and researchers that want a sample feel free to reach out to us via the SANS ICS Alumni email distribution and we will provide it to verified sources.

Here I’ll detail the facts, speculation, and takeaways for the community.

The Facts

The SANS ICS team has been researching the cyber attack on the Ukrainian power grid since the event occurred with a mix of interest and a critical viewpoint. The interest was due to the seriousness of the event and the critical viewpoint was taken because while threats are active against ICS there are often otherwise good case-studies that get spun out of control by the media. The idea of a cyber attack on infrastructure that leads to an impact to operations is very serious in nature and must be handled with care, especially when there is geopolitical tension in an area such as Ukraine.

Through trusted contacts in the community, the SANS ICS team came across a lot of amplifying information about the attack, how it could have occurred, and the seriousness with which the Ukrainian government is treating the incident; the focus they are putting on the investigation increases the credibility of their reporting. The SANS ICS team was also passed a sample of malware, taken from the impacted network by responders in country, from trusted sources.

The hash for the malware can also be found on VirusTotal where a user in Ukraine submitted the sample on the 23rd of December. The timing and unique nature of the sample adds some credibility to the sources that collected and passed us the sample of the malware.

The malware is a 32 bit Windows executable and is modular in nature indicating that this is a module of a more complex piece of malware. I passed the malware sample to Kyle Wilhoit, a Senior Threat Researcher at Trend Micro who has done great work in the ICS community before, who confirmed through static analysis that the malware itself has a wiping routine that would impact the infected system. After that I passed the sample to Jake Williams, founder of Rendition Security and a fellow SANS Instructor, who has been analyzing this incident as well for further support. Below is his analysis:


Note that this analysis is based on an extremely limited static analysis of the malware and further analysis may impact these findings. The code appears modular in nature. The attackers take steps to obscure some notable suspicious APIs (e.g. OpenSCManager) from the imports table, but not others (e.g. CreateToolhelp32Snapshot). The string “obfuscation” method is crude and obvious upon manual examination, but effective to thwart string matching. Any of these hyphen separated strings would make an excellent Yara rule.


Notably, the malware does not appear to use all of the functions it imports. Specifically, there are no cross references to service related calls. While this may be due to dynamic call targets, there are significant numbers of cross references to other dynamically resolved APIs (e.g. RegDeleteKey).

The resolution of APIs that are not used elsewhere in the code probably means that some of the code was borrowed from another program. This hints at a development shop with a code base from which to piece modules together. Although the string obfuscation was crude, it was sufficient for the task. The crude string obfuscation should not be taken as an indication that the attacks came from a non-state actor.

Another possible interesting note is the compile timestamp of the executable. It is set to January 6, 1999.


This was likely modified by the attackers, but whether this date is significant in historical context is unknown at this time. It may simply be a random modification.
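The TimeDateStamp lives at a fixed position in the PE/COFF header, so checking it takes only a few lines; as the analysis notes, it is trivially forgeable, which is why the date alone proves little. A minimal sketch in Python:

```python
import struct
from datetime import datetime, timezone

def pe_compile_timestamp(data):
    """Read the TimeDateStamp field from a PE image held in memory.

    The DOS header stores the PE header offset at 0x3C (e_lfanew);
    the 32-bit TimeDateStamp sits 8 bytes past the "PE\\0\\0" signature
    (after the 4-byte signature, 2-byte Machine, 2-byte NumberOfSections).
    """
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    (stamp,) = struct.unpack_from("<I", data, e_lfanew + 8)
    return datetime.fromtimestamp(stamp, tz=timezone.utc)
```

Running this over a suspect executable and comparing the result against file-system timestamps is a quick way to spot an implausible or deliberately altered build date like the 1999 value here.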


There are at least 3 major cybersecurity vendors working on the piece of malware right now in their own analysis and I will simply state that I’m impressed with the quality of work from them I have seen so far. Additionally, folks at the ICS-CERT and E-ISAC are doing great analysis as well and will likely be pushing out information through government sharing channels soon. Simply put, a lot will be known about this in the community soon to further support the analysis or help move on to a better understanding.

The Speculation

It is not currently possible to state that the malware recovered caused the loss of power in Ukraine. Additionally, the wiping functionality of the module recovered is likely for the purpose of cleanup after the attack; it does not itself appear to have been capable of causing the outage. This is important to note, as the wiping capability is not similar in nature to the Shamoon attack but is instead an anti-forensics technique.

Also, it is possible that the incident simply caused responders to look at the network where they found the malware; the malware could be new and yet not be related to the incident. At this time, based on analysis by the SANS ICS team and others around the community, I believe the malware is related to the incident, but this should be categorized as a low-confidence assessment currently.

There has also been speculation that the malware is related to, and potentially a module for, BlackEnergy2. That statement should not be taken as a standalone soundbite: there is very little to support this conclusion right now. If true, though, it would add credibility to Ukraine's SBU, which reported that the malware was launched by Russian security services. Because of the sources concluding the BlackEnergy2 connection, I feel it is important to share this (potentially overstated) speculation with the community, as many organizations around the global community were impacted by that campaign. Just because a campaign is reported on publicly does not mean it is no longer active. Security personnel in ICS organizations should be actively looking for threats; the Ukrainian incident should not be seen as one that impacts only a single site in a foreign country. No panic or alarm is warranted, only due diligence towards defense.

The Takeaways

  • There is a lot of great analysis going on in the community by a number of companies, government organizations, and individual researchers. Each have been contributing some unique aspects to the analysis. Defenders must always work together like this and build off of each other’s strengths. Information sharing in this manner is critical to security.
  • The Ukrainian power outage is more likely to have been caused by a cyber attack than previously thought. Early reporting was not conclusive but a sample of malware taken from the network bolsters the claims. The unique nature of the malware indicate some level of targeting may be possible but much more information is needed to confirm that targeting of ICS or this specific facility was intended.
    • If the malware does end up being related to the BlackEnergy2 campaign then this adds to the possibility that the facility and ICS was specifically targeted
    • Technical data alone is very rarely enough to conclude the intention of an adversary
  • ICS facilities around the world need to take an active defense approach to monitoring ICS networks and responding to threats. Additionally, each should have an ability, or at least contacts to request help from, to perform basic threat and malware analysis to know when to reach out for help to the larger community (my one plug: the identification of, response to, and analysis of threats is the type of skill set we teach in SANS ICS515 and I would encourage organizations to find this or similar type of training for security personnel onsite. Firewalls and boxes on the network alone will not protect an ICS fully).

This incident is an important case-study for the ICS community. If the analysis and follow-on information about the malware and attack are validated, then this will also be a significant event for the international community. The precedent that this event sets reaches far beyond the security community and will need to be analyzed and understood fully. The response by countries to this type of attack, and any attribution obtained, will also be significant in establishing precedent for how these types of events are handled moving forward in the international community.




Credit:  sans, motherboard