Waze | Another way to track your moves

Millions of drivers use Waze, a Google-owned navigation app, to find the best, fastest route from point A to point B. And according to a new study, all of those people run the risk of having their movements tracked by hackers.

Researchers at the University of California-Santa Barbara recently discovered a Waze vulnerability that allowed them to create thousands of “ghost drivers” that can monitor the drivers around them—an exploit that could be used to track Waze users in real-time. They proved it to me by tracking my own movements around San Francisco and Las Vegas over a three-day period.

“It’s such a massive privacy problem,” said Ben Zhao, professor of computer science at UC-Santa Barbara, who led the research team.

Here’s how the exploit works. Waze’s servers communicate with phones using an SSL-encrypted connection, a security precaution meant to ensure that Waze’s computers are really talking to a Waze app on someone’s smartphone. Zhao and his graduate students discovered they could intercept that communication by getting the phone to accept their own computer as a go-between in the connection. Once in between the phone and the Waze servers, they could reverse-engineer the Waze protocol, learning the language that the Waze app uses to talk to Waze’s back-end servers. With that knowledge in hand, the team was able to write a program that issued commands directly to Waze servers, allowing the researchers to populate the Waze system with thousands of “ghost cars”—cars that could cause a fake traffic jam or, because Waze is a social app where drivers broadcast their locations, monitor all the drivers around them.
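The shape of that last step can be sketched in miniature. Once a protocol has been reverse-engineered, impersonating a driver is just a matter of emitting well-formed position reports. The Python sketch below is purely illustrative: every field name is hypothetical (Waze's real wire format is not public), and it builds the payload without sending it anywhere.

```python
import json

def make_ghost_update(session_id, lat, lon, speed_mph):
    """Build one position report in the style of a reverse-engineered
    navigation protocol. Every field name here is hypothetical; the
    real Waze wire format is not public."""
    return json.dumps({
        "session": session_id,
        "lat": round(lat, 6),
        "lon": round(lon, 6),
        "speed": speed_mph,
        "type": "location_update",
    })

# A script could emit thousands of these, one per ghost car, and send
# them over the intercepted channel as if each came from a real phone.
payload = make_ghost_update("ghost-0001", 37.7749, -122.4194, 35)
print(payload)
```

The point of the sketch is scale: once a server accepts messages like these without verifying that a real device produced them, one laptop can speak for thousands of "drivers."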


The attack is similar to one conducted by Israeli university students two years ago, who used emulators to send traffic bots into Waze and create the appearance of a traffic jam. But an emulator, which pretends to be a phone, can only create the appearance of a few vehicles in the Waze system. The UC-Santa Barbara team, on the other hand, could run scripts on a laptop that created thousands of virtual vehicles, which could be sent into multiple grids on a map for complete surveillance of a given area.

In a test of the discovery, Zhao and his graduate students tried the hack on a member of their team (with his permission).

“He drove 20 to 30 miles and we were able to track his location almost the whole time,” Zhao told me. “He stopped at gas stations and a hotel.”


Last week, I tested the Waze vulnerability myself, to see how successfully the UC-Santa Barbara team could track me over a three-day period. I told them I’d be in Las Vegas and San Francisco, and where I was staying—the kind of information a snoopy stalker might know about someone he or she wanted to track. Then, their ghost army tried to keep tabs on where I went.

Users could be tracked right now and never know it.

– Ben Zhao, UC-Santa Barbara computer science professor

The researchers caught my movements on three occasions, including when I took a taxi to downtown Las Vegas for dinner:

And they caught me commuting to work on the bus in San Francisco. (Though they lost me when I went underground to take the subway.)

The security researchers were only able to track me while I was in a vehicle with Waze running in the foreground of my smartphone. Previously, they could track someone even if Waze was just running in the background of the phone. Waze, an Israeli start-up, was purchased by Google in 2013 for $1.1 billion. Zhao informed the security team at Google about the problem and made a version of the paper about their findings public last year. An update to the app in January of this year prevents it from broadcasting your location when the app is running in the background, an update that Waze described as an energy-saving feature. (So update your Waze app if you haven’t done so recently!)

“Waze constantly improves its mechanisms and tools to prevent abuse and misuse. To that end, Waze is regularly in contact with the security and privacy research community—we appreciate their help protecting our users,” said a Waze spokesperson in an emailed statement. “This group of researchers connected with us in 2014, and we have already addressed some of their claims, implementing safeguards in our system to protect the privacy of our users.”

The spokesperson said that “the concept of Waze is that we all work together to share information and impact the world around us” and that “users expect to offer certain information about their route in exchange for unparalleled navigation assistance.” Among the safeguards deployed by Waze is a “system of cloaking” so that a user’s location as displayed “from time to time within the Waze application does not represent such user’s actual, real time location.”

But those safeguards did not prevent real-time tracking in my case. The researchers sent me their tracking data minutes after my trips, with accurate time stamps for each of my locations, which suggests the cloaking system doesn’t work very well.

“Anyone could be doing this [tracking of Waze users] right now,” said Zhao. “It’s really hard to detect.”

Part of what allowed the researchers to track me so closely is the social nature of Waze and the fact that the app is designed to share users’ geolocation information with each other. The app shows you other Waze drivers on the road around you, along with their usernames and how fast they’re going. (Users can opt out of this by going invisible.) When I was in Vegas, the researchers simply populated ghost cars around the hotel I was staying at that were programmed to follow me once I was spotted.
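That stakeout-then-follow behavior is simple to model. The toy Python simulation below, with made-up coordinates and an arbitrary "view radius," shows the logic: ghost observers sit around a known starting point, and whenever one of them sees the target, the whole swarm re-centers on the target's last position.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def follow_target(ghosts, sightings, view_radius=0.5):
    """Toy model of the stakeout: each ghost 'sees' drivers within
    view_radius (arbitrary map units). When any ghost spots the
    target, the swarm re-deploys in a ring around that position."""
    for target_pos in sightings:
        if any(dist(g, target_pos) <= view_radius for g in ghosts):
            n = len(ghosts)
            ghosts = [(target_pos[0] + 0.3 * math.cos(2 * math.pi * i / n),
                       target_pos[1] + 0.3 * math.sin(2 * math.pi * i / n))
                      for i in range(n)]
    return ghosts

# Ghosts staked out around a hotel at (0, 0); the target drives away.
swarm = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]
swarm = follow_target(swarm, [(0.2, 0.2), (0.5, 0.5), (0.8, 0.9)])
print(swarm)  # ring of ghosts centered on the last sighting
```

If the target moves farther than the view radius between sightings (say, by going underground), the swarm loses the trail, which matches what happened when I took the subway.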

“You could scale up to real-time tracking of millions of users with just a handful of servers,” Zhao told me. “If I wanted to, I could easily crawl all of the U.S. in real time. I have 50-100 servers, and could get more from [Amazon Web Services] and then I could track all of the drivers.”

Theoretically, a hacker could use this technique to go into the Waze system and download the activity of all the drivers using it. If they made the data public like the Ashley Madison hackers did, the public would suddenly have the opportunity to follow the movements of the over 50 million people who use Waze. If you know where someone lives, you would have a good idea of where to start tracking them.

Like the Israeli researchers, Zhao’s team was also able to easily create fake traffic jams. They were wary of interfering with real Waze users so they ran their experiments from 2 a.m. to 5 a.m. every night for two weeks, creating the appearance of heavy traffic and an accident on a remote road outside of Baird, Texas.

“No real humans were harmed or even interacted with,” said Zhao. They aborted the experiment twice after spotting real world drivers within 10 miles of their ghost traffic jam.


While Zhao defended the team’s decision to run the experiment live on Waze’s system, he admitted they were “very nervous” about initially making their findings public. They had approval from their IRB, a university ethics board; took precautions not to interfere with any real users; and notified Google’s security team about their findings. They are presenting their paper at MobiSys, a conference focused on mobile systems, at the end of June in Singapore.


“We needed to get this information out there,” said Zhao. “Sitting around and not telling the public and the users isn’t an option. They could be tracked right now and never know it.”

“This is bigger than Waze,” continued Zhao. The same attack could work against any app, he said, turning its servers into an open system that an attacker can mine and manipulate. With Waze the attack is particularly sensitive because users’ location information is broadcast and can be downloaded, but against another app it would let hackers download any information that users broadcast to other users, or flood the app with fake traffic.

“With a [dating app], you could flood an area with your own profile or robot profiles and basically ruin it for your area,” said Zhao. “We looked at a bunch of different apps and nearly all of them had this near-catastrophic vulnerability.”

The scary part, said Zhao, is that “we don’t know how to stop this.” He said that servers that interact with apps in general are not as robust against attack as those that are web-facing.

“Not being able to separate a real device from a program is a larger problem,” said Zhao. “It’s not cheap and it’s not easy to solve. Even if Google wanted to do something, it’s not trivial for them to solve. But I want them to get this on the radar screen and help try to solve the problem. If they lead and they help, this collective problem will be solved much faster than if they don’t.”

“Waze is building their platform to be social so that you can track people around you. By definition this is going to be possible,” said Jonathan Zdziarski, a smartphone forensic scientist, who reviewed the paper at my request. “The crowd sourced tools that are being used in these types of services definitely have these types of data vulnerabilities.”

Zdziarski said there are ways to prevent this kind of abuse—by, for example, rate-limiting data requests. Zhao told me his team has been running its experiments since the spring of 2014, and Waze hasn’t blocked them, even though they have created the appearance of thousands of Waze users in a short period of time coming from just a few IP addresses.
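Rate limiting is a standard server-side defense, and a classic way to implement it is a token bucket. The sketch below is a minimal, illustrative Python version (not Waze's actual defense): each source earns tokens at a steady rate, can burst up to a cap, and is refused once the bucket runs dry—which would blunt one IP address masquerading as thousands of phones.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` requests per second,
    with bursts up to `capacity`. Illustrative sketch only."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # 5 req/s, burst of 10
results = [bucket.allow() for _ in range(100)]
print(results.count(True))   # only the initial burst gets through
```

A server applying a limit like this per IP address or per account would make a hundred rapid-fire requests from one laptop stand out immediately.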

Waze’s spokesperson said the company is “examining the new issue raised by the researchers and will continue to take the necessary steps to protect the privacy of our users.”

In the meantime, if you need to use Waze to get around but are wary of being tracked, you do have one option: set your app to invisible mode. But beware, Waze turns off invisible mode every time you restart the app.

Full paper here.


iBackDoor: High-Risk Code Hits iOS Apps


FireEye mobile researchers recently discovered potentially “backdoored” versions of an ad library embedded in thousands of iOS apps originally published in the Apple App Store. The affected versions of this library embedded functionality in iOS apps that used the library to display ads, allowing for potential malicious access to sensitive user data and device functionality.

These potential backdoors could have been controlled remotely by loading JavaScript code from a remote server to perform the following actions on an iOS device:

  • Capture audio and screenshots
  • Monitor and upload device location
  • Read/delete/create/modify files in the app’s data container
  • Read/write/reset the app’s keychain (e.g., app password storage)
  • Post encrypted data to remote servers
  • Open URL schemes to identify and launch other apps installed on the device
  • “Side-load” non-App Store apps by prompting the user to click an “Install” button

The offending ad library contained identifying data suggesting that it is a version of the mobiSage SDK [1]. We found 17 distinct versions of the potentially backdoored ad library: version codes 5.3.3 to 6.4.4. However, in the latest mobiSage SDK publicly released by adSage [2] – version 7.0.5 – the potential backdoors are not present. It is unclear whether the potentially backdoored versions of the ad library were released by adSage or if they were created and/or compromised by a malicious third party.

As of November 4, we have identified 2,846 iOS apps containing the potentially backdoored versions of mobiSage SDK. Among these, we observed more than 900 attempts to contact an adSage server capable of delivering JavaScript code to control the backdoors. We notified Apple of the complete list of affected apps and technical details on October 21, 2015.

While we have not observed the ad server deliver any malicious commands intended to trigger the most sensitive capabilities such as recording audio or stealing sensitive data, affected apps periodically contact the server to check for new JavaScript code. In the wrong hands, malicious JavaScript code that triggers the potential backdoors could be posted to eventually be downloaded and executed by affected apps.

Technical Details

As shown in Figure 1, the affected mobiSage library included two key components, separately implemented in Objective-C and JavaScript. The Objective-C component, which we refer to as msageCore, implements the underlying functionality of the potential backdoors and exposes interfaces to the JavaScript context through a WebView. The JavaScript component, which we refer to as msageJS, provides high-level execution logic and can trigger the potential backdoors by invoking the interfaces exposed by msageCore. Each component has its own separate version number.

Figure 1: Key components of backdoored mobiSage SDK

In the remainder of this section, we reveal internal details of msageCore, including its communication channel and high-risk interfaces. Then we describe how msageJS is launched and updated, and how it can trigger the backdoors.

Backdoors in msageCore

Communication channel

MsageCore implements a general framework to communicate with msageJS via the ad library’s WebView. Commands and parameters are passed via specially crafted URLs in the format adsagejs://cmd&parameter. As shown in the reconstructed code fragment in Figure 2, msageCore fetches the command and parameters from the JavaScript context and inserts them in its command queue.

Figure 2: Communication via URL loading in WebView
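The cmd&parameter URL scheme is easy to parse. The Python sketch below mirrors the structure described above; the exact encoding the real library uses is not documented, so treat the splitting rule as an assumption for illustration.

```python
from urllib.parse import unquote

def parse_bridge_url(url):
    """Split an adsagejs:// bridge URL into a command and its
    parameters, following the cmd&parameter shape described above.
    The real library's exact encoding is an assumption here."""
    prefix = "adsagejs://"
    if not url.startswith(prefix):
        return None  # not a bridge URL; let the WebView load it normally
    parts = unquote(url[len(prefix):]).split("&")
    return {"cmd": parts[0], "params": parts[1:]}

# The native side intercepts the URL load and queues the command.
queue = []
queue.append(parse_bridge_url("adsagejs://captureAudio&10&40"))
print(queue)
```

This is the essence of a WebView bridge: the JavaScript side "navigates" to a URL it never expects to load, and the native side intercepts it as a message.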

To process a command in its queue, msageCore dispatches the command, along with its parameters, to a corresponding Objective-C class and method. Figure 3 shows portions of the reconstructed command dispatching code.

Figure 3: Command dispatch in msageCore
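The dispatch pattern amounts to a lookup table from command-class names to handler objects. Here is a hedged Python sketch of that pattern; the class names echo Table 1, but the handler bodies and return strings are invented for illustration.

```python
class UIManagerPlugin:
    """Stand-in for MSageCoreUIManagerPlugin (return values invented)."""
    def captureAudio(self, duration, threshold):
        return f"recording {duration}s (threshold {threshold})"

class LocationPlugin:
    """Stand-in for MSageCoreLocation."""
    def start(self):
        return "location updates started"

# Registry mapping command-class names to handler instances,
# mirroring the dispatch structure reconstructed in Figure 3.
REGISTRY = {
    "MSageCoreUIManagerPlugin": UIManagerPlugin(),
    "MSageCoreLocation": LocationPlugin(),
}

def dispatch(class_name, method_name, args):
    """Route a queued command to the matching class and method."""
    handler = REGISTRY.get(class_name)
    if handler is None or not hasattr(handler, method_name):
        return None
    return getattr(handler, method_name)(*args)

print(dispatch("MSageCoreUIManagerPlugin", "captureAudio", [10, 40]))
```

The danger is that any JavaScript the WebView executes, including JavaScript fetched from a remote server, can reach every method the registry exposes.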

At-risk interfaces

Each dispatched command ultimately arrives at an Objective-C class in msageCore. Table 1 shows a subset of msageCore classes and the corresponding interfaces that they expose.

msageCore Class Name: Interfaces

MSageCoreUIManagerPlugin: captureAudio:, captureImage:, openMail:, openSMS:, openApp:, openInAppStore:, openCamera:, openImagePicker:, …

MSageCoreLocation: start:, stop:, setTimer:, returnLocationInfo:webViewId:, …

MSageCorePluginFileModule: createDir, deleteDir:, deleteFile:, createFile:, getFileContent:, …

MSageCoreKeyChain: writeKeyValue:, readValueByKey:, resetValueByKey:

MSageCorePluginNetWork: sendHttpGet:, sendHttpPost:, sendHttpUpload:, …

MSageCoreEncryptPlugin: MD5Encrypt:, SHA1Encrypt:, AESEncrypt:, AESDecrypt:, DESEncrypt:, DESDecrypt:, XOREncrypt:, XORDecrypt:, RC4Encrypt:, RC4Decrypt:, …

Table 1: Selected interfaces exposed by msageCore

The selected interfaces reveal some of the key capabilities exposed by the potential backdoors in the library. They expose the potential ability to capture audio and screenshots while the affected app is in use, identify and launch other apps installed on the device, periodically monitor location, read and write files in the app’s data container, and read/write/reset “secure” keychain items stored by the app. Additionally, any data collected via these interfaces can be encrypted with various encryption schemes and uploaded to a remote server.

Beyond the selected interfaces, the ad library potentially exposed users to additional risks by including logic to promote and install “enpublic” apps as shown in Figure 4. As we have highlighted in previous blogs [footnotes 3, 4, 5, 6, 7], enpublic apps can introduce additional security risks by using private APIs in certain versions of iOS. These private APIs potentially allow for background monitoring of SMS or phone calls, breaking the app sandbox, stealing email messages, and demolishing arbitrary app installations. Apple has addressed a number of issues related to enpublic apps that we have brought to their attention.

Figure 4: Installing “enpublic” apps to bypass Apple App Store review

We can see how this ad library functions by examining the implementations of some of the selected interfaces. Figure 5 shows reconstructed code snippets for capturing audio. Before storing recorded audio to a file audio_xxx.wav, the code retrieves two parameters from the command for recording duration and threshold.

Figure 5: Capturing audio with duration and threshold

Figure 6 shows a code snippet for initializing the app’s keychain before reading. The accessed keychain is in the kSecClassGenericPassword class, which is widely used by apps for storing secret credentials such as passwords.

Figure 6: Reading the keychain in the kSecClassGenericPassword class

Remote control in msageJS

msageJS contains JavaScript code for communicating with a remote server and submitting commands to msageCore. The file layout of msageJS is shown in Figure 7. Inside sdkjs.js, we find a wrapper object called adsage and the JavaScript interface for command execution.

Figure 7: The file layout of msageJS

The command execution interface is constructed as follows:

          adsage.exec(className, methodName, argsList, onSuccess, onFailure);

The className and methodName parameters correspond to classes and methods in msageCore. The argsList parameter can be either a list or dict, and the exact types and values can be determined by reversing the methods in msageCore. The final two parameters are function callbacks invoked when the method exits. For example, the following invocation starts audio capture:

          adsage.exec("MSageCoreUIManager", "captureAudio", ["Hey", 10, 40], onSuccess, onFailure);

Note that the files comprising msageJS cannot be found by simply listing the files in an affected app’s IPA. The files themselves are zipped and encoded in Base64 in the data section of the ad library binary. After an affected app is launched, msageCore first decodes the string and extracts msageJS to the app’s data container, setting index.html shown in Figure 7 as the landing page in the ad library WebView to launch msageJS.

Figure 8: Base64 encoded JavaScript component in Zip format
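The unpack-at-launch scheme is straightforward to reproduce. The Python sketch below builds a stand-in payload (a zip containing a tiny index.html, Base64-encoded) and then decodes and extracts it the same way the library does at app launch; the payload itself is fabricated for the demonstration.

```python
import base64
import io
import tempfile
import zipfile

def extract_embedded_js(b64_blob, dest_dir):
    """Decode a Base64 string and unzip its contents, mirroring the
    unpack-at-launch scheme described above (illustrative only)."""
    raw = base64.b64decode(b64_blob)
    with zipfile.ZipFile(io.BytesIO(raw)) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()

# Build a stand-in payload: a zip holding a tiny index.html.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("index.html", "<html></html>")
blob = base64.b64encode(buf.getvalue()).decode()

names = extract_embedded_js(blob, tempfile.mkdtemp())
print(names)
```

Hiding the JavaScript this way also explains why a simple listing of the IPA's files turns up nothing: the payload only materializes on disk after the first launch.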

When msageJS is launched, it sends a POST request to hxxp://entry.adsage.com/d/ to check for updates. The server responds with information about the latest msageJS version, including a download URL, as shown in Figure 9.

Figure 9: Server response to msageJS update request via HTTP POST

Enterprise Protection

To ensure the protection of our customers, FireEye has deployed detection rules in its Network Security (NX) and Mobile Threat Prevention (MTP) products to identify the affected apps and their network activities.

For FireEye NX customers, alerts will be generated if an employee uses an infected app while their iOS device is connected to the corporate network. FireEye MTP management customers have full visibility into high-risk apps installed on mobile devices in their deployment base. End users will receive on-device notifications of the risky app and IT administrators receive email alerts.


In this blog, we described an ad library that affected thousands of iOS apps with potential backdoor functionality. We revealed the internals of backdoors which could be used to trigger audio recording, capture screenshots, prompt the user to side-load other high-risk apps, and read sensitive data from the app’s keychain, among other dubious capabilities. We also showed how these potential backdoors in ad libraries could be controlled remotely by JavaScript code should their ad servers fall under malicious actors’ control.

[1] http://www.adsage.com/mobisage
[2] http://www.adsage.cn/
[3] https://www.fireeye.com/blog/threat-research/2015/08/ios_masque_attackwe.html
[4] https://www.fireeye.com/blog/threat-research/2015/02/ios_masque_attackre.html
[5] https://www.fireeye.com/blog/threat-research/2014/11/masque-attack-all-your-ios-apps-belong-to-us.html
[6] https://www.fireeye.com/blog/threat-research/2015/06/three_new_masqueatt.html
[7] https://www.virusbtn.com/virusbulletin/archive/2014/11/vb201411-Apple-without-shell

Credit:  Zhaofeng Chen, Adrian Mettler, Peter Gilbert, Yong Kang | Mobile Threats, Threat Research

Newly Discovered Exploit Makes Every iPhone Remotely Hackable

The government would love to get its hands on a foolproof way to break into the new highly encrypted iPhone. And it looks like some clever hackers just gave it to them.

Bug bounty startup Zerodium just announced that a team has figured out how to remotely jailbreak the latest iPhone operating system and will take home a million dollar prize. It’s unclear if Apple will get a peek at the zero-day exploit.

But wait, isn’t that what security researchers are supposed to do? Expose the exploit? Not when there’s this kind of cash on the line.

The hack itself seemed impossible. Zerodium required the exploit to work through Safari, Chrome, a text message, or a multimedia message. This meant that hackers wouldn’t have to find just one vulnerability but rather a chain of them that would enable them to jailbreak an iPhone from afar. Once the phone’s jailbroken, the hackers could ostensibly download apps to the phone or even upload malware. It could also be a killer surveillance tool for anyone from law enforcement to spy agencies, which is what makes the details of this situation even more unsettling.

Zerodium is no ordinary security company. As Motherboard’s Lorenzo Franceschi-Bicchierai explains:

[Founder Chaouki] Bekrar and Zerodium, as well as its predecessor VUPEN, have a different business model. They offer higher rewards than what tech companies usually pay out, and keep the vulnerabilities secret, revealing them only to certain government customers, such as the NSA.

Oh, that sounds bad. But it gets worse:

But there’s no doubt that for some, this exploit is extremely valuable. …This exploit would allow [law enforcement and spy agencies] to get around any security measures and get into the target’s iPhone to intercept calls, messages, and access data stored in the phone.

So unlike a lot of news that comes out of the security industry, this is a real threat. Zero-day vulnerabilities are often shared with the vendor before research is released so that they can have a patch ready. In this case, Zerodium and the winning team of now millionaire hackers will probably keep the bug a secret so that the proprietors of state secrets can take advantage of it. Again, Bekrar and his various ventures have been doing this for years.

There’s a chance Apple will figure out how to patch the vulnerability before the NSA takes off with it. After all, the Cupertino-based purveyor of very expensive gadgets is historically terrific at security. This is actually the first report of a method for jailbreaking an iPhone remotely since iOS 7. Hopefully, it will be the last.



Credit:  Adam Clark Estes – gizmodo

iOS 9 Hack: How to Access Private Photos and Contacts Without a Passcode


Setting a passcode on your iPhone is the first line of defense to help prevent other people from accessing your device. However, it’s pretty easy for anyone to access your personal photographs and contacts from your iPhone running iOS 9 in just 30 seconds or less, even with a passcode and/or Touch ID enabled.


Just yesterday, the security firm Zerodium announced a huge bug bounty of $1 million for zero-day exploits and jailbreaks for iPhones and iPads running iOS 9. Now…


A hacker has found a new and quite simple method of bypassing the security of a locked iOS device (iPhone, iPad or iPod touch) running Apple’s latest iOS 9 operating system, one that could allow anyone to access the device’s photos and contacts in 30 seconds or less. Yes, the passcode on any iOS device running iOS 9.0 can be bypassed by exploiting the benevolent nature of Apple’s personal assistant Siri.


Here’s the List of Steps to Bypass Passcode:

You need to follow these simple steps to bypass passcode on any iOS device running iOS 9.0:
  1. Wake the iOS device and Enter an incorrect passcode four times.
  2. For the fifth attempt, enter all but the last digit (3 or 5 digits, depending on how long your passcode is), then for the final digit, press and hold the Home button to invoke Siri immediately after typing it.
  3. After Siri appears, ask her for the time.
  4. Tap the Clock icon to open the Clock app, and add a new Clock, then write anything in the Choose a City field.
  5. Now double tap on the word you wrote to invoke the copy & paste menu, Select All and then click on “Share“.
  6. Tap the ‘Message‘ icon in the Share Sheet, and again type something random, hit Return and double tap on the contact name on the top.
  7. Select “Create New Contact,” and Tap on “Add Photo” and then on “Choose Photo“.
  8. You’ll now be able to see the entire photo library on the iOS device, which is still locked with a passcode. Now browse and view any photo from the Photo album individually.

Video Demonstration

You can also watch a video demonstration (given below) that shows the whole hack in action.
It isn’t a remote flaw you need to worry about, as this only works if someone has physical access to your iPhone or iOS device. However, such an easy way to bypass any locked iOS device could put users’ personal data at risk.

How to Prevent iOS 9 Hack

Until Apple fixes this issue, iOS users can protect themselves by disabling Siri on the lock screen from Settings > Touch ID & Passcode. Once disabled, you’ll only be able to use Siri after you have unlocked your iOS device using the passcode or your fingerprint.


One in Five Android Apps Is Malware


Bad news, phandroids. Android malware is on the rise.

According to Symantec’s latest Internet Security Threat Report, “17 percent of all Android apps (nearly one million total) were actually malware in disguise.” In 2013, Symantec uncovered roughly 700,000 virus-laden apps.

More than one third of all apps were what Symantec calls “grayware” or “madware” — mobile software whose primary purpose is to bombard you with ads. The company also discovered the first example of mobile crypto-ransomware – software that encrypts your data and holds it hostage until you pay ransom for it – for Android devices.


How to stay safe

The good news is that it’s pretty easy to avoid infection if you obtain your apps from a trusted source, like the Google Play Store. The company doesn’t break out how many of the 1 million+ malware apps were found in the Play Store, but Symantec’s Director of Security Response Kevin Haley admits the number is probably quite low.

“Google does a good job of keeping malware out of the Store,” Haley says. “And if a malicious app does make it in there, they do a good job of finding it and getting rid of it.”

On the other hand, if you visit alternate Android app markets, download apps from app makers’ websites, get them via email links, or find them on BitTorrent sites, you run a much greater risk of infecting your phone, he adds.

Other App Stores

Symantec used its Norton Mobile Insight software to crawl more than 200 Android app stores, downloading and analyzing more than 50,000 apps and app updates each day in 2014.

Most of the malware found by Symantec tries to steal personal data like phone numbers and contact lists, which are then sold on the Internet’s black market, says Haley. Some may cause your phone to send text messages to premium SMS services, automatically adding charges to your monthly bill. Other apps may pelt you with ads that pop up randomly over other applications. Some apps even change your default ringtone to an advertisement, Haley says.

The Android malware problem is greater overseas, especially in regions where users can’t access Google Play and must rely on third-party app marketplaces.


Mobango is one of hundreds of alternate Android app marketplaces in the wild. Be careful out there. (Mobango.com)

If you see unusual charges on your bill for premium texting services or ads start popping up where you don’t expect them, those are good signs you’ve got an infection, he adds. Your best recourse is to use a mobile security app to scan and protect your phone.

As for iOS? Symantec found a grand total of 3 malware apps in 2014. All of them required the iPhone to be jailbroken before the device could be infected. In 2013 it found zero.

“One of the benefits of Android versus iOS is that it gives you a lot more freedom as to where you can download apps,” Haley says. “But that freedom comes with a cost.”




Critical SSL Vulnerability Leaves 25,000 iOS Apps Vulnerable to Hackers

A critical vulnerability in AFNetworking could allow an attacker to cripple the HTTPS protection of 25,000 iOS apps available in Apple’s App Store via man-in-the-middle (MITM) attacks.
AFNetworking is a popular open-source code library that lets developers drop networking capabilities into their iOS and OS X products. But it fails to check the domain name for which the SSL certificate has been issued.
Any Apple iOS application that uses a version of AFNetworking prior to the latest release, 2.5.3, may be vulnerable to the flaw, which could allow hackers to steal or tamper with data even if the app is protected by the SSL (Secure Sockets Layer) protocol.
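The missing check is hostname verification: confirming that the name in the server's certificate actually matches the domain the app meant to reach. The simplified Python matcher below illustrates the idea (supporting only exact names and simple left-most wildcards); real clients should never hand-roll this and should instead rely on their TLS stack's built-in validation, such as Python's ssl module with check_hostname enabled.

```python
def hostname_matches(cert_names, hostname):
    """Check a server hostname against the names in its certificate,
    including simple left-most wildcards (e.g. *.example.com).
    Deliberately simplified; real clients should use the TLS stack's
    built-in hostname verification instead."""
    host_parts = hostname.lower().split(".")
    for name in cert_names:
        parts = name.lower().split(".")
        if len(parts) != len(host_parts):
            continue
        if all(p == "*" or p == h for p, h in zip(parts, host_parts)):
            return True
    return False

# A valid certificate for the wrong domain must be rejected.
print(hostname_matches(["thehackernews.com"], "facebook.com"))   # False
print(hostname_matches(["*.facebook.com"], "www.facebook.com"))  # True
```

Skipping this single check is exactly what makes any trusted certificate, for any domain, good enough to impersonate any server to a vulnerable app.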


Use any SSL Certificate to decrypt users’ sensitive data:
An attacker could use any valid SSL certificate for any domain name in order to exploit the vulnerability, as long as the certificate was issued by a trusted certificate authority (CA), something you can buy for $50.

“This meant that a coffee shop attacker could still eavesdrop on private data or grab control of any SSL session between the app and the Internet,” reports SourceDNA, a startup company that provides code analysis services.

Like, for example, I can pretend to be ‘facebook.com’ just by presenting a valid SSL certificate for ‘thehackernews.com’.
The vulnerability, which is estimated to affect more than 25,000 iOS apps, was discovered and reported by Ivan Leichtling from Yelp.
AFNetworking fixed the issue in its latest release, 2.5.3, after the previous version, 2.5.2, failed to patch a related SSL vulnerability.


Version 2.5.2 Failed to Patch the issue:
Previously, it was believed that the release of AFNetworking 2.5.2 had eliminated the lack of SSL certificate validation that allowed hackers with self-signed certificates to intercept the encrypted traffic from vulnerable iOS apps and view the sensitive data sent to the server.
However, even after that vulnerability was patched, SourceDNA scanned iOS apps for the vulnerable code and found a number of them still vulnerable to the flaw.


Therefore, anyone with a man-in-the-middle position, such as a hacker on an unsecured Wi-Fi network, a rogue employee inside a virtual private network, or a state-sponsored hacker, presenting their own CA-issued certificate can monitor or modify the protected communications.


Apps from Big Developers found to be vulnerable. SERIOUSLY?
In a quick check for iOS apps with domain name validation turned off, the security company found apps from major developers, including Bank of America, Wells Fargo, and JPMorgan Chase, likely to be affected.
SourceDNA also said that iOS apps from top developers such as Yahoo and Microsoft remained vulnerable to the HTTPS-crippling bug.
Prevention against the flaw:
To prevent hackers from exploiting the vulnerability, SourceDNA has not disclosed the list of vulnerable iOS apps.
However, the company advised developers to integrate the latest AFNetworking build (2.5.3) into their products in order to enable domain name validation by default.
SourceDNA is also offering a free check tool that could help developers and end users check their apps for the vulnerability.


Meanwhile, iOS users are advised to immediately check the status of the apps they use, especially those that handle bank account details or other sensitive information.
Until the developers of vulnerable apps release an update, users should avoid the affected versions for the time being.


CIA Has Been Hacking iPhone and iPad Encryption Security Since 2006

Security researchers working for the Central Intelligence Agency (CIA) have spent almost a decade targeting the security keys used to encrypt data stored on Apple devices, in an effort to break the system.


Citing top-secret documents obtained from NSA whistleblower Edward Snowden, The Intercept reported that, in an attempt to crack the encryption keys embedded in Apple’s mobile processors, researchers working for the CIA had created a dummy version of Xcode.


Xcode is Apple’s application development tool, used to create the vast majority of iOS apps. Using the compromised development software, the CIA, NSA, or other spy agencies could potentially inject surveillance backdoors into programs distributed on Apple’s App Store.
In addition, the custom version of Xcode could be used to spy on users, steal passwords and account information, intercept communications, and disable core security features of Apple devices.
The latest documents from the National Security Agency’s internal systems reveal that the researchers’ work was presented at the 2012 edition of its annual gathering, the “Jamboree”, a secretive CIA-sponsored event that has run for nearly a decade, held at a Lockheed Martin facility in northern Virginia.
According to the report, “essential security keys” used to encrypt data stored on Apple’s devices have become a major target of the research team.
Overall, the U.S. government-sponsored researchers are seeking ways to decrypt this data, as well as penetrate Apple’s firmware, using both “physical” and “non-invasive” techniques.
In addition, the researchers presented how they had successfully modified the OS X updater, a program used to deliver updates to laptop and desktop computers, in order to install a “keylogger” on Mac computers.


Another presentation from 2011 showed different techniques that could be used to hack Apple’s Group ID (GID), one of the two encryption keys that Apple places on its iPhones.
One technique involved studying the electromagnetic emissions and power consumption of the iPhone’s processor in order to extract the GID encryption key, while a separate presentation focused on a “method to physically extract the [Apple’s] GID key.”

According to Matthew Green, a cryptography expert at Johns Hopkins University’s Information Security Institute, “Tearing apart the products of U.S. manufacturers and potentially putting backdoors in software distributed by unknowing developers all seems to be going a bit beyond ‘targeting bad guys.’ It may be a means to an end, but it’s a hell of a means.”

Although the documents do not specify how successful these surveillance operations have been against Apple, the revelations once again highlight the ongoing battle between spy agencies and tech companies, as well as the double standard of the US government.


On one hand, President Barack Obama has criticized China for forcing tech companies to install security backdoors for the purpose of government surveillance. On the other hand, The Intercept notes, China is simply following America’s lead.
“Spies gonna spy,” said Steven Bellovin, a computer science professor at Columbia University and former chief technologist for the FTC. “I’m never surprised by what intelligence agencies do to get information. They’re going to go where the info is, and as it moves, they’ll adjust their tactics. Their attitude is basically amoral: whatever works is OK.”


We have already reported on the NSA’s and GCHQ’s various surveillance programs, including PRISM, XKeyscore, DROPOUTJEEP, and many more.




Masque Attack — New iOS Vulnerability Allows Hackers to Replace Apps with Malware


Android has long been a target for cyber criminals, but now it seems they have turned their attention toward iOS devices. Apple has always maintained that hacking its devices is too difficult for cyber crooks, yet a single malicious app can make it possible for anyone to hack an iPhone.
A security flaw in Apple’s mobile iOS operating system has made most iPhones and iPads vulnerable to cyber attacks by hackers seeking access to sensitive data and control of their devices, security researchers warned.
Details about the new vulnerability were published by cyber security firm FireEye on its blog on Monday. The flaw allows hackers to access devices by fooling users into downloading and installing malicious iOS applications on their iPhone or iPad via tainted text messages, emails, and Web links.



The malicious iOS apps can then be used to replace legitimate apps, such as banking or social networking apps, that were installed through Apple’s official App Store, using a technique that FireEye has dubbed “Masque Attack.”

“This vulnerability exists because iOS doesn’t enforce matching certificates for apps with the same bundle identifier,” the researchers said on the company’s blog. “An attacker can leverage this vulnerability both through wireless networks and USB.”
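FireEye’s one-line diagnosis points at the missing check: before letting a new app replace an installed one that shares its bundle identifier, the system should compare the signing certificates. The Python sketch below is purely illustrative (iOS’s installer is of course not written in Python, and the `App` and `may_replace` names are invented), but it captures the logic that would block a Masque Attack.

```python
from dataclasses import dataclass

@dataclass
class App:
    bundle_id: str          # e.g. "com.example.bank"
    cert_fingerprint: str   # fingerprint of the signing certificate

def may_replace(installed: App, incoming: App) -> bool:
    """Allow an install over an existing app with the same bundle ID
    only when both apps were signed by the same certificate."""
    if installed.bundle_id != incoming.bundle_id:
        return True  # unrelated app; normal side-by-side install
    return installed.cert_fingerprint == incoming.cert_fingerprint

bank   = App("com.example.bank", "sha1:aaaa")
update = App("com.example.bank", "sha1:aaaa")   # genuine update, same signer
masque = App("com.example.bank", "sha1:ffff")   # attacker-signed impostor

print(may_replace(bank, update))  # True
print(may_replace(bank, masque))  # False
```

Because iOS at the time skipped the signer comparison, the attacker-signed impostor in the last line would have been installed over the real banking app.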

Masque attacks can be used by cyber criminals to steal banking and email login credentials or users’ other sensitive information.
Security researchers found that the Masque Attack works on iOS 7.1.1, 7.1.2, 8.0, 8.1, and the 8.1.1 beta, and that all iPhones and iPads running iOS 7 or later are at risk, regardless of whether or not the device is jailbroken.
According to FireEye, the vast majority (95 percent) of all iOS devices currently in use are potentially vulnerable to the attack.



The Masque Attack technique is the same one used by “WireLurker,” a malware attack discovered last week by security firm Palo Alto Networks that targeted Apple users in China and allowed unapproved apps downloaded from the Internet to steal information. But this recently discovered threat is reportedly a “much bigger threat” than WireLurker.

“Masque Attacks can pose much bigger threats than WireLurker,” the researchers said. “Masque Attacks can replace authentic apps, such as banking and email apps, using attacker’s malware through the Internet. That means the attacker can steal user’s banking credentials by replacing an authentic banking app with malware that has an identical UI.

“Surprisingly, the malware can even access the original app’s local data, which wasn’t removed when the original app was replaced. These data may contain cached emails, or even login tokens which the malware can use to log into the user’s account directly.”


Apple devices running iOS have long been considered safer from hackers than devices running operating systems like Microsoft’s Windows and Google’s Android, but iOS devices have now become more common targets for cybercriminals.
In order to avoid falling victim to Masque Attack, users can follow some simple steps given below:
  • Do not download any apps offered to you via email, text messages, or web links.
  • Don’t install apps offered on pop-ups from third-party websites.
  • If iOS alerts a user about an “Untrusted App Developer,” click “Don’t Trust” on the alert and immediately uninstall the application.
In short, a simple way to safeguard your devices from this kind of threat is to avoid downloading apps from untrusted sources and to only download apps directly from the App Store.


Credit: thehackernews


Apple disabling the SSL3 support in Push Notification Service

Apple is about to disable SSL3 support in the Apple Push Notification service on Wednesday, October 29.

Developers experiencing issues with the Provider Communication interface in the development environment should consider updating their code immediately. After this date, push notifications sent over SSL3 will stop working.

Apple’s official notification is below:

The Apple Push Notification service will be updated and changes to your servers may be required to remain compatible.

In order to protect our users against a recently discovered security issue with SSL version 3.0 the Apple Push Notification server will remove support for SSL 3.0 on Wednesday, October 29. Providers using only SSL 3.0 will need to support TLS as soon as possible to ensure the Apple Push Notification service continues to perform as expected. Providers that support both TLS and SSL 3.0 will not be affected and require no changes.

To check for compatibility, we have already disabled SSL 3.0 on the Provider Communication interface in the development environment only. Developers can immediately test in this development environment to make sure push notifications can be sent to applications.
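For provider code written against Python’s standard `ssl` module, ruling out SSL 3.0 in the way Apple requires is essentially a one-liner. This is a general illustration of a TLS-only client context, not Apple sample code.

```python
import ssl

# TLS-only client context for an APNs-style provider connection.
# PROTOCOL_TLS_CLIENT negotiates the best TLS version both sides support
# and enables certificate and hostname verification by default.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Belt and braces: explicitly refuse to ever negotiate SSL 3.0,
# the protocol version broken by the POODLE attack Apple is reacting to.
ctx.options |= ssl.OP_NO_SSLv3

print(bool(ctx.options & ssl.OP_NO_SSLv3))  # True
```

A provider would then wrap its socket to the push gateway with `ctx.wrap_socket(sock, server_hostname=...)`; SSL 3.0 can never be negotiated on that connection.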

Apple iOS 8 and Yosemite: The latest privacy invasion

Apple’s latest operating systems, Mac OS X Yosemite and iOS 8, both transmit your search terms back to Apple by default, and the behavior is difficult to disable.

Apple’s latest operating systems, Mac OS X Yosemite and iOS 8, both transmit your search terms back to Apple by default. In fact, the behavior is quite difficult to disable. This means that, if you use the built-in Safari browser, Apple knows not only what you searched for, but also what you thought about searching for, even if you never actually hit the Enter key.

Billed as a “feature” since it was first rolled out in iOS, Apple has enhanced search by causing it to send your search queries to many places, not just one search engine. Typing in the Spotlight box on iOS 8 or Mac OS X Yosemite causes your search terms to go to Apple, Microsoft’s Bing, and whatever search engine you select (e.g., Google or DuckDuckGo). They go to Apple so that you can see results from Apple properties, like the App Store. They are sent to Bing as a way of getting you quick, relevant search results based on partial queries. Finally, when you press the Enter key, the full phrase you typed is sent to the search engine you chose.

We know where you are and what you’re looking for

If you have Location Services enabled, then your computer (or phone or iPad) will transmit your location along with your search terms. The ostensible benefit is that search engines give you better results if they know where you are. For example, if you’re standing in London, searching on “Times” in the app store will probably place the app for “The Times” newspaper in London at the top of search results. In New York, “The New York Times” will probably rank higher, and in Los Angeles “The Los Angeles Times” app will probably be listed first. Likewise, if you’re at a software testing conference where Michael Bolton is speaking and you search “Michael Bolton” you will probably see results about him and his work, rather than the famous singer.

This has the side-effect, though, of giving your location and search terms to Microsoft, Google, and Apple every time you search (whether you’re on your phone or your laptop). On its page about Spotlight Suggestions, Apple said:

“If you do not want your Spotlight search queries and Spotlight Suggestions usage data sent to Apple, you can turn off Spotlight Suggestions. Simply deselect the checkboxes for both Spotlight Suggestions and Bing Web Searches in the Search Results pane of Spotlight preferences in System Preferences on your Mac. If you turn off Spotlight Suggestions and Bing Web Searches, Spotlight will search the contents of only your Mac.”

Opting out

The solution is to follow these instructions. For seriously high-tech users, there is a script that will make the changes directly.

Additionally, there are three separate checkboxes to find, and they are poorly labelled. The two mentioned by Apple are logically grouped with Spotlight, but the third is in Safari’s preferences, where “Include Spotlight Suggestions” also invokes this behavior.
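On Yosemite, the Safari toggle can also be flipped from the command line with `defaults`. The preference keys below are the ones community opt-out scripts reportedly used at the time; they are not documented by Apple, so verify them against your OS version before relying on them.

```shell
# Disable Safari's "Include Spotlight Suggestions" behavior.
# (Preference keys are undocumented; confirm against your OS X version.)
defaults write com.apple.Safari UniversalSearchEnabled -bool false
defaults write com.apple.Safari SuppressSearchSuggestions -bool true
```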

On an iOS 8 device, similar preferences have to be changed: under General, Spotlight Search, both Spotlight Suggestions and Bing Web Results need to be unticked. The third option is buried deep in iOS 8: under Privacy, Location Services, scroll all the way down to System Services and turn off options like Spotlight Suggestions.

Note that none of these options, except one under iOS, is under the heading of privacy. This suggests that Apple does not regard these as privacy-related options.

The necessity to make it clear from the start

Apple is not making it sufficiently clear that opting into a feature like “Spotlight Suggestions” transmits your location data and search queries as you type them (even if you don’t press Enter). Because this is such a significant change, it deserves more explicit language from Apple. Users do not realize that everything they search for (even if they are just using Spotlight to start a program, like Microsoft Word) is being sent to Apple and Microsoft. If Apple wants its desktop operating system to be used in places where “phoning home” is not acceptable, it needs a big switch somewhere labelled “local search only” that turns all the options on or off at once.

Surprisingly, Linux did it first

The first operating system to do this level of search integration was Ubuntu Linux in 2012. By default, the operating system included Amazon product searches as a side effect of searching for data on the Linux desktop. For the next two years there was a steady drumbeat of irritation from users, prompting Micah Lee to create Fixubuntu, a script that disables those Ubuntu features. Starting with Ubuntu 14.10, these search features require an explicit opt-in.

Users who care about privacy hope that Apple responds in less time than the 18 months it took Canonical.



Credit: Paco Hope, techrepublic