Sunday, December 16, 2018

Fuzzing strategies for DOM XSS - Part 1

XSSes are by far still the most common vulnerability in web applications: they are easy to introduce and easier to find than other classes of vulnerabilities. XSSes split into three families: reflected, stored and DOM-based. Reflected XSS is the most prevalent of the three, while DOM-based XSS is the hardest to detect.
To hunt for DOM XSSes, one can take a static approach: parse the JavaScript, identify sources and sinks, propagate taint statically, etc. This approach is hard for JavaScript because of the dynamic nature of the language, which makes it false-positive prone, complex and resource intensive.
Dynamic approaches seem better suited for the task; they require instrumenting the JavaScript to introspect the JavaScript runtime. Possible approaches include (the list is by no means exhaustive):
  • Rewrite the JavaScript code on the fly to inject instrumentation code. This approach is brittle, moderately complex and resource intensive.
  • Instrument the browser's JavaScript engine. This is the most resource-friendly approach, but it requires fiddling with browser internals, which is difficult (because of all the JIT magic most JavaScript engines perform) and costly to maintain in the long run.
  • Use the debugger API, setting breakpoints where appropriate, stepping through the code, etc. This has proven to be slow and not as feature complete as one would hope; good luck setting a breakpoint on all eval calls.
  • Use the coverage API to get a coarse view of what is executing, coupled with monkey patching and Proxy object injection. This approach is performant and easier to implement, but theoretically suffers from some limitations.
In this blog post, we will build a simple PoC of a coverage-guided XSS fuzzer. The fuzzer will use precise coverage information to identify newly executed code paths and use that information to generate new test payloads. It will instrument sink methods, with some limitations (see the eval headache).
To keep the PoC contained, we will focus on postMessage XSS. We will use the Chrome browser and its remote debugger API, write the PoC in Python 3 and use the pychrome library to interact with the browser.
Enough with the introductions, let’s begin.
To interact with the Chrome debugger API, we need to enable it on the command line:
chrome --verbose --window-size=1200,600 --disable-gpu --remote-debugging-port=9222 --user-data-dir=/tmp/foo --disable-web-security
Optionally, you may want to enable headless mode; it saves about 20% of resources and is important if you are building a full-blown XSS fuzzer.
The important flag is remote-debugging-port; the rest you can ignore. disable-web-security disables the XSS auditor (we would like to find XSSes first; we can figure out how to bypass it another day).
To access the APIs, all we have to do is instantiate it this way:
import pychrome

debug_host = '127.0.0.1'
debug_port = 9222
url = f"http://{debug_host}:{debug_port}"
browser = pychrome.Browser(url=url)
Now that we have our stage set, let's figure out what the fuzzer should do:
  1. Create a new tab
  2. Enable precise coverage collection in the page
  3. Visit our target page (might seem easy, turns out not to be the case)
  4. Inject instrumentation code
  5. Inject XSS detection methods
  6. Inject payload
  7. Detect paths executed in the code
  8. Generate new payloads from it
  9. Go to step 6
The first step is straightforward. This is the init of our injector class: it creates a new tab, starts it, then enables a set of APIs in the Chrome debugger to collect certain event types:
def __init__(self, browser):
    self.browser = browser
    self.debugger = browser.new_tab()
    self.debugger.start()
    self.debugger.Page.enable()
    self.debugger.Console.enable()
    self.debugger.Runtime.enable()
Enabling code coverage is easy; making use of its output is a bit more complex. The following piece of code turns the coverage information into something exploitable: it tells us which pieces of code were executed:
class Coverage:

    def __init__(self, debugger):
        self.debugger = debugger
        self.sources = {}
        self.coverages = []
        self.debugger.Profiler.enable()
        self.debugger.Profiler.start()
        self.debugger.Debugger.enable()
        self.debugger.Debugger.setSkipAllPauses(skip=True)
        self.debugger.set_listener('Debugger.scriptParsed', self._on_script_parsed)
        self.debugger.Profiler.startPreciseCoverage(callCount=True, detailed=True)

    def _on_script_parsed(self, scriptId, **kwargs):
        source = self.debugger.Debugger.getScriptSource(scriptId=scriptId)
        self.sources[scriptId] = source
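Once precise coverage is running, Profiler.takePreciseCoverage returns, for each script, the functions and source ranges that executed along with call counts. A small helper can diff two snapshots to surface newly executed ranges; this is a sketch assuming the DevTools Protocol snapshot shape, and the helper names are my own:

```python
def executed_ranges(snapshot):
    """Flatten a Profiler.takePreciseCoverage snapshot into a set of
    (scriptId, startOffset, endOffset) tuples for ranges that executed."""
    ranges = set()
    for script in snapshot.get('result', []):
        for function in script.get('functions', []):
            for r in function.get('ranges', []):
                if r['count'] > 0:
                    ranges.add((script['scriptId'], r['startOffset'], r['endOffset']))
    return ranges


def new_ranges(previous, current):
    """Ranges executed in `current` but not in `previous`."""
    return executed_ranges(current) - executed_ranges(previous)
```

A payload that produces a non-empty diff reached a code path earlier payloads did not, which makes it a good candidate for further mutation.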
Visiting a page is easy, but it doesn't cover all cases. Should we, for instance, need to send a POST request, set certain cookies or add specific headers to the request, more complex code is required, which in some cases means intercepting requests at the network level.
For the sake of the PoC, we will assume the simple case:
self.debugger.Page.navigate(url=url)
Detecting that the page has finished loading is another mess of its own. The debugger API sends events that help detect load completion, but again, for the sake of the PoC, we will simply wait:
self.debugger.wait(2)
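A less blunt alternative is to block on the Page.loadEventFired event with a timeout fallback. Here is a sketch; LoadWaiter is a name of my choosing, and the wiring follows pychrome's set_listener convention:

```python
import threading


class LoadWaiter:
    """Blocks until Chrome reports the page load event, with a timeout
    fallback so a hung page doesn't stall the fuzzer."""

    def __init__(self):
        self._loaded = threading.Event()

    def on_load(self, **kwargs):
        # Callback for the 'Page.loadEventFired' debugger event.
        self._loaded.set()

    def wait(self, timeout=10):
        # Returns True if the load event fired, False on timeout.
        return self._loaded.wait(timeout=timeout)
```

It would be registered before navigating, e.g. `self.debugger.set_listener('Page.loadEventFired', waiter.on_load)`.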
The next step is injecting instrumentation code that MUST execute before the rest of the page starts loading. This is critical if we need to monkey patch sink methods, for instance. The Chrome debugger has an API for that:
source = open('instrument.js', 'r').read()
self.debugger.Page.addScriptToEvaluateOnNewDocument(source=source)
Let’s recap: we can now control the Chrome instance, start a new tab, inject instrumentation code and visit our target page. To avoid overblowing the article, we will cover the remaining steps in the next blog post, namely:
  • How to detect the XSS?
  • How to instrument objects using the Javascript Proxy API? 
  • How to inject payloads?
  • How to exploit the coverage data to detect new branches?
  • How to generate new payloads?

Saturday, December 15, 2018

Hardcoded AWS access keys in Mobile applications

This article is about how to manage AWS access keys when using AWS services in your mobile application.

One of the main reasons that make clients reluctant about the cloud is data control and the security of the full chain. To address this concern, AWS defined in the Shared Responsibility Model the boundaries between AWS and the customer when dealing with security controls, and described the security best practices to follow when using AWS services at each level.

AWS Shared Responsibility Model

In a nutshell, AWS is responsible for the security of the cloud (represented as the orange part), while the customer is responsible for security in the cloud (represented as the green part). So as a customer, you are responsible for managing the users and how they access the services.

When you access AWS programmatically, you use an access key to verify your identity and the identity of your applications. Anyone who has your access key has the same level of access to your AWS resources that you do. This is why you should not, under any circumstances, make your access key public. Uber will not forget this lesson: https://awsinsider.net/articles/2017/11/21/uber-aws-data-breach.aspx

If you are uploading your code to a git repository, you can use git-secrets, which checks for and prevents you from committing access keys into git repositories.

If you are accessing AWS services from your mobile application, hardcoding the access keys is a high-risk vulnerability, since your secrets can be compromised by anyone examining your code.

To check whether your keys are hardcoded in your application, you can use the Ostorlab security scanner, which detects hardcoded access keys in your mobile application (Android and iOS).
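To give a rough idea of what such a check looks like, here is a naive grep-style scan one could run over decompiled sources. This is only an illustration covering the classic AKIA/ASIA/AIDA-style key ID prefixes, nowhere near what a real scanner does:

```python
import re

# Classic AWS access key IDs start with a known prefix (AKIA for IAM user
# keys, ASIA for temporary keys, AIDA for IAM user unique IDs) followed by
# 16 uppercase alphanumeric characters.
ACCESS_KEY_RE = re.compile(r'\b(?:AKIA|ASIA|AIDA)[0-9A-Z]{16}\b')


def find_access_keys(text):
    """Return candidate AWS access key IDs found in a blob of text."""
    return ACCESS_KEY_RE.findall(text)
```

Running it over the first example below would flag the hardcoded KEY string.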

In the following example, I will be using an Android application to upload pictures to AWS S3.
  • The first example uses the keys hardcoded directly in the upload function:

 // KEY and SECRET are obtained when creating an IAM user
 String KEY = "AKIAIUG2TRZ99XFZONBA";
 String SECRET = "31eJbrNp5OJgrbvxTyzz38MUZ/MBMNVkG0irtd/2";

 BasicAWSCredentials credentials = new BasicAWSCredentials(KEY, SECRET);
 AmazonS3Client s3Client = new AmazonS3Client(credentials);

 TransferUtility transferUtility =
         TransferUtility.builder()
                 .context(getApplicationContext())
                 .awsConfiguration(AWSMobileClient.getInstance().getConfiguration())
                 .s3Client(s3Client)
                 .build();
  • The second example stores the keys in a resources config file:
<?xml version="1.0" encoding="utf-8"?>
<keys>
 <key name="aws_access_key_id">AIDAJILINIFRJA2HBEEQ</key>
 <key name="aws_secret_access_key">5syV8k5gCDJXtSCW4BKOjcpCLJHAhgbe/YfSPoJE</key>
</keys>

Ostorlab is able to detect the presence of both keys:


By clicking on "AWS sensitive information hardcoded in the application", we can find the keys in the technical details section:


So if you are developing a mobile application and using AWS services:
  1. Check that you are not hardcoding access keys
  2. Obfuscate your code to prevent trivial retrieval of the access keys (you can check ProGuard)
  3. Store your keys in a different component and retrieve them only when needed, using AWS Secrets Manager for instance.

Sunday, October 28, 2018

DAST (Dynamic Analysis), how does it work?

8:08 PM Posted by ASM






Ostorlab uses dynamic analysis to assess mobile applications and enable vulnerability detection free of false positives.

Dynamic analysis consists of running the application on a real Android or iOS device, monitoring application interaction with different OS components and detecting insecure or potentially unsafe behaviours.

The devices used in dynamic analysis are real hardware phones and tablets equipped with different OS versions for different architectures.

During the installation phase, iOS applications (.ipa) are re-signed and instrumented before installation, while Android applications are installed as is, without any modification.

Once the application is installed, the instrumentation engine is hooked in and starts intercepting a plethora of API calls to detect vulnerable behaviour; this includes crypto APIs, keychain, network, filesystem, SQL, etc.

A monkey UI fuzzer emulates random actions and tries to increase coverage of the application. At the same time, a different instance is installed on a separate device where a human operator emulates complex actions that require multiple steps or interaction with third-party systems, like email registration validation, entering an SMS validation code, etc. The goal is to increase coverage of the application code.

During the analysis step, filesystem, network and API calls are collected, checked passively for vulnerabilities, like the use of a weak encryption scheme, and also dispatched to other systems to perform active testing.

During active testing, all entry points of the application are collected and fuzzed; this includes content providers, broadcast receivers, URI handlers, etc.

Backend systems are also collected during this step and scanned for vulnerabilities, including SQL injection, command injection, XSS, etc.

Some of the collectables are exported in the scan results as artifacts, like network logs and screenshots.

Initially, Ostorlab used jailbroken and rooted devices to run all of its dynamic testing; this proved to be unnecessary and in some cases problematic. The use of robust and fast instrumentation engines allows the collection of all the necessary information, but requires support for hooking native C-level APIs as well as platform-specific APIs, like JavaScript in the case of Cordova and CLI in the case of Xamarin.


Thursday, September 20, 2018

New Features and Roadmap

4:14 PM Posted by ASM


Over the last few months, the Ostorlab team has been hard at work adding exciting new features. Some of these have already hit production; others will do so in the upcoming weeks and months.

The most exciting feature we have been busy with is major work on the backend scanning front. Ostorlab is now able to crawl HTML endpoints, supporting JavaScript heavy websites and single page applications (SPA) based on frameworks like Angular, React or Vue.js.

The new backend is augmented with a new Cross-Site Scripting (XSS) scanner based on headless Chrome and a new backend scanner using a novel probabilistic approach. The new backend scanner supports SQL injection in multiple contexts (where clause, sort clause, group by, string ...), Jinja template injection and command injection, and we are planning to add support for over 100 other backend vulnerabilities in the upcoming months, like Mako template injection, Spring expression injection, etc.

Ostorlab has also gone through a major rework of its infrastructure, changing its scanning scheduler to offer increased scalability and robustness.

Other changes include multiple bug fixes, UI tweaks, false positives fixes and new detection rules, like network security configuration rules.

In the upcoming months, the Ostorlab team will be focused on delivering new features and extending existing ones. All enterprise scans will expose an Artefact section collecting traffic logs, screenshots, decompiled source code, etc. The feature is almost done and will hit production sometime next week.

The Ostorlab team will also focus its efforts on enhancing support for Xamarin. The taint engine will add support for .NET IL and source code decompilation. The backend scanner will continue its progress, adding more rules and enhancing detection of persistent XSS and postMessage XSS.

The Ostorlab team welcomes all feedback and will be happy to answer all your questions.


Monday, January 22, 2018

Reinforcement Learning & Automated Testing - part 1

2:23 PM Posted by ASM
Through a series of blog posts, I will be sharing our past experimentation with the use of reinforcement learning for automated testing, both to chase bugs and to find vulnerabilities.

Our initial goal was to build a generic, intelligent approach to identify vulnerabilities in mobile applications, initially targeting Android Java-based applications and iOS LLVM bitcode-based applications. Our experimentation led us to learn about reinforcement learning and to use it in ways that seemed to give very interesting results.

For those unfamiliar with reinforcement learning, it is a branch of machine learning that relies on a feedback loop to continuously improve results.


Reinforcement learning was for instance used by the AlphaGo project, which used a specialized form of reinforcement learning called deep reinforcement learning that, as the name suggests, uses deep learning.

Reinforcement learning is in reality quite simple and intuitive. An agent performs an action that it sends to an environment; it then collects information about the new state and the action's outcome (referred to as reward or punishment), and finally computes a new action based on that output. The next action is computed using an algorithm referred to as a policy.
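The loop can be sketched in a few lines of Python. Everything below is a toy of my own making, a one-dimensional environment with a greedy policy, purely to make the agent/environment/reward cycle concrete:

```python
def run_episode(steps=10):
    """Toy agent/environment loop: the agent moves left or right on a line
    and is rewarded for getting closer to a goal position."""
    goal, state = 5, 0
    total_reward = 0
    for _ in range(steps):
        # Policy: pick the action expected to reduce distance to the goal.
        action = 1 if state < goal else -1
        new_state = state + action  # environment transition
        # Reward if the action brought us closer, punishment otherwise.
        reward = 1 if abs(goal - new_state) < abs(goal - state) else -1
        state, total_reward = new_state, total_reward + reward
    return state, total_reward
```

A real policy would be learned from the stream of (state, action, reward) observations rather than hardcoded, but the control flow is the same.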



Though the use of reinforcement learning for security testing has, AFAIK, never been mentioned before, except in two very recent academic papers that claim to be the first to do so, in reality reinforcement learning has existed for quite some time in security testing tools and has already proven to produce amazing results. AFL by Michał Zalewski is the best example; it has found a staggering number of vulnerabilities.

AFL is commonly referred to as an evolutionary fuzzer. It generates a test case that gets passed to an instrumented program, then collects the execution trace and generates new inputs that aim at increasing code coverage within the tested program.

Evolutionary fuzzing in general needs to solve 3 problems:
1 - Fast Instrumentation
2 - Smart Code Coverage Algorithm
3 - Efficient Vulnerability Identification

Though the first (Fast Instrumentation) and last (Efficient Vulnerability Identification) problems have nothing to do with reinforcement learning, I find them so interesting that I believe they are worthy of some explanation.

Instrumentation consists simply of tracing a program's execution; the principle is simple, but the execution is notoriously complex. Instrumentation can have several levels of granularity: per function call, per block or even per instruction. The higher the granularity, the slower it gets.
There are different ways to instrument a program:
- Compile-time, which simply delegates to the compiler the task of adding instrumentation instructions. Initially it was the fastest approach, as the compiler has a better understanding of the program, but most importantly because the compiler is able to run its optimizations on the instrumented code.

- Software-based run-time, which is suited to cases where we don't have access to the source code of the program. This is by far the slowest approach, as it requires constant jumping between the instrumented code and the instrumentation code. It is also very error prone and difficult to get right.

- Hardware-based run-time, which is my favorite approach as it brings the best of both worlds: low overhead and no need for source code. Intel and ARM added support in their processors for program tracing with very low overhead, and both AFL and Honggfuzz, for instance, have support for hardware-based instrumentation.
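To get a feel for software-based run-time instrumentation (and why it is slow), here is a toy line tracer built on Python's standard sys.settrace hook. It records which lines of a function execute, which is exactly the kind of signal a coverage-guided fuzzer consumes. This is my own illustration, not how AFL instruments anything:

```python
import sys


def trace_lines(func, *args):
    """Run func and collect the line numbers it executes (relative to its
    definition), in order. Every traced line costs an extra Python call,
    which is why this style of instrumentation is slow."""
    lines = []

    def tracer(frame, event, arg):
        if event == 'line' and frame.f_code is func.__code__:
            lines.append(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, lines


def branchy(x):
    if x > 0:
        return 'positive'
    return 'non-positive'
```

Tracing branchy with different inputs yields different line sequences, i.e. different execution paths.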

Efficient vulnerability identification is yet another complex subject. Initially, most fuzzers relied on the program crashing as a sign of a potential vulnerability. However, crashes may not occur if, for instance, an overflow is too small to overwrite anything interesting.

A more advanced approach is the one used by sanitizers. LLVM sanitizers perform compile-time modifications to a program to make the triggering of a vulnerability more apparent, like the use of memory guards that wrap every memory allocation.

All of these approaches are mainly suited to low-level languages and to hunting low-level vulnerabilities, like overflows or use-after-free.

The second component of evolutionary fuzzers is the algorithm to increase code coverage, and this is where the reinforcement part happens.

AFL uses genetic algorithms to generate input and relies on block-based instrumentation to identify whether an input was capable of triggering a new path in the program.

Genetic algorithms aim at imitating natural selection: produce input, run a set of modifications (crossover and mutation) and select from the generated population a subset that passes a fitness function.



In the case of the AFL genetic algorithm:
- the crossover operation consists, for instance, of block switching between inputs;
- the mutation operation consists, for instance, of bit flipping;
- the fitness function measures the discovery of a new execution path.

AFL also adds an element of curiosity, or an exploration bonus, by privileging input that triggers new execution paths. Genetic algorithms and exploration bonuses are commonly used in modern reinforcement learning solutions.
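Putting the pieces together, here is a toy evolutionary loop: a synthetic "instrumented" target stands in for the real program, mutation is single bit flipping, and the fitness function keeps only inputs that reach new branches. All names and the target itself are invented for illustration:

```python
import random


def target(data):
    """Synthetic 'instrumented' program: returns the set of branch ids the
    input reaches. Stands in for a real target plus its instrumentation."""
    branches = {0}
    if data.startswith(b'PK'):
        branches.add(1)
        if len(data) > 4:
            branches.add(2)
            if data[4] & 0x80:
                branches.add(3)
    return branches


def mutate(data):
    """Flip one random bit of one random byte (AFL's mutations are far richer)."""
    if not data:
        return data
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ (1 << random.randrange(8))]) + data[i + 1:]


def fuzz(seed, iterations=2000):
    """Keep any mutated input that reaches a branch never seen before."""
    corpus, seen = [seed], target(seed)
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        hit = target(candidate)
        if hit - seen:                # fitness: a new execution path was found
            corpus.append(candidate)  # exploration bonus: keep this input
            seen |= hit
    return seen
```

Starting from a seed that already reaches the first branches, random bit flips eventually discover the deepest one, and the coverage-increasing input is retained for further mutation.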

Other approaches, which predate AFL's genetic algorithm, consist of using SMT and SAT solvers. This approach requires highly granular instrumentation and attempts to solve complex equations to discover new execution branches.

SMT solvers have made huge progress in recent years, but other than the non-public SAGE, no fuzzer has reported good results with this approach.

Other fuzzers try a combination of multiple techniques to build on the strengths of each approach. Driller, for instance, which won 2nd place at the DARPA Cyber Grand Challenge, used AFL, a modified QEMU and the Z3 SMT solver.

In the next blog posts, I will dive into some limitations of these approaches and present the use of reinforcement learning for identifying high-level vulnerabilities like SQLi, command injection and XXE.

Wednesday, January 17, 2018

Critical Attack Surface of Mobile Applications

1:24 PM Posted by ASM



LiveOverflow published an interesting video on mobile application security, where he tackled the attack surface of mobile applications and the case of a security 'researcher' who oversells the results of his work.

Mobile Applications are built on top of an environment that has attack surface reduction in mind, through the use of sandboxing, explicit permission model, automated updates and the offering of an API that tries to be secure by default.

Some mobile applications do, however, expose a critical attack surface that requires special attention, either due to the technology stack they are based on, the kind of usage the application offers or the interaction the application has with other components.
Another key factor to take into account is the need for these environments to add more features and functionality. These features offer room for developers to be creative in finding new ways to use our phones, but at the same time increase the attack surface. Take for instance iOS App Extensions: this functionality, similar to Android intents, was added later on to the iOS ecosystem.

These are a few examples of security-critical vulnerabilities we have seen in the past that are very specific to the mobile environment:

Remotely exploitable JavaScript injection in JavaScript-based applications:

Applications developed using JavaScript frameworks, like Cordova or Ionic, might be vulnerable to JavaScript or HTML injection. These vulnerabilities can be leveraged into remote code injection due to the nature of the APIs exposed through these frameworks.

For such a vulnerability to be considered critical, the attacker must have the capacity to send the malicious input to other users without any special interaction.

Take for instance a sports app that allows users to share their progress through a personal wall. If that wall is vulnerable to JavaScript injection (XSS), then any user who views the attacker's wall will be compromised.

Even HTML injection vulnerabilities might be transformed into JavaScript injection via a JavaScript gadget attack.

Memory corruption in native code through untrusted input:

Several applications handle the parsing of binary formats like audio, video and images using native libraries. A memory corruption vulnerability in these libraries, reachable via untrusted input, will result in remote code execution.

For instance, if a chat application that allows sending voice recordings in MP4 format suffers from a memory corruption vulnerability, this will result in remote code execution in the context of the application.

Java, Kotlin, Objective-C and Swift are all memory-safe languages unless unsafe APIs are used; however, linking against libraries in C and C++ opens the door to this kind of vulnerability.

Intent injection in browsable Activities exploited using drive-by attacks in Chrome:


Chrome allows sending intents with extra parameters to activities with the Browsable category. An application vulnerable to injection through the intent extra parameters can be exploited in a drive-by fashion.

The attacker may either entice the victim into visiting his malicious page or serve the attack using ads, for instance. The Firefox browser requires extra user interaction to trigger intent sending, while most other mobile browsers do not support this feature.

Communication over clear-text traffic or using insecure TLS/SSL server certificate validation:

The impact of this vulnerability depends on the nature of the exchanged data. For instance, if either the authentication phase or any session-enabled action is performed over an insecure channel, this will result in the compromise of the user's session.

If the application is developed with a JavaScript framework, the retrieval of remote JavaScript or HTML will result in remote code execution. If the application downloads a shared library (.so, .dex, .jar) over an insecure channel, this will also result in remote code execution.

An example of this vulnerability is the use of ALLOW_ALL_HOSTNAME_VERIFIER, whose implementation doesn't perform any hostname validation:

Session management shared between Mobile and Web application and the lack of Web related protection on the Mobile Backend:

Web applications share the browser with other web applications, which creates opportunities for a set of attacks that are not applicable to mobile applications, like CSRF, session hijacking through all sorts of XSS, and even clickjacking.

In general, these attack vectors are absent from mobile applications, so developers don't need to implement any protection against them.

Some mobile application backends do, however, share the session management system between the web and mobile backends, which creates the opportunity for an attacker to interact with the mobile backend from the browser and exploit these vulnerabilities.

Friday, June 16, 2017

Finding security bugs in Android applications the hard way

Ostorlab is a community effort to build a mobile application vulnerability scanner that helps developers build secure mobile applications. One of the new key components of the scanner's detection capabilities is a shiny new static taint engine for Android Dalvik bytecode, heavily optimized for performance and low false positives.

A simple version of a static taint engine computes how user-controlled input propagates inside an application. Tracking the flow uses tainting of variables and attributes, hence the name 'static taint engine'. This taint information serves to detect vulnerabilities in the application.
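To make the idea concrete, here is a toy worklist propagation over an invented three-address mini-IR. The real engine works on Dalvik bytecode and handles far more (fields, aliasing, call graphs), but the fixed-point idea is the same:

```python
def propagate_taint(instructions, sources, sinks):
    """Each instruction is (dest, op, args). A variable becomes tainted if
    any argument it is derived from is tainted; report sink operations
    that receive a tainted argument."""
    tainted = set(sources)
    changed = True
    while changed:  # iterate until the taint set reaches a fixed point
        changed = False
        for dest, op, args in instructions:
            if any(a in tainted for a in args) and dest not in tainted:
                tainted.add(dest)
                changed = True
    return [(op, args) for dest, op, args in instructions
            if op in sinks and any(a in tainted for a in args)]
```

For instance, a flow mirroring the DIVA finding below (intent-controlled URI segment concatenated into a delete() call) would be reported, while a program with no tainted source reaching the sink yields no finding.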

A few months after shipping the initial version of the engine and scanning over 10,000 mobile applications uploaded by users, these are some of the key results we have collected so far.

The static taint engine has detected over 600 high-risk vulnerabilities, ranging from content provider SQL injection, insecure SSL/TLS server certificate validation (detected statically), command injection, insecure shared preferences, weak cryptography and hardcoded keys to many other classes of vulnerabilities.

These are some examples of the vulnerabilities found using the static taint engine. I was careful to share only examples from voluntarily insecure applications:

Content provider SQL injection:

The second parameter (1 if you count from 0) of the method android.database.sqlite.SQLiteDatabase.delete() will cause a SQL injection if user controlled.
The parameter is exposed by the exported content provider method jakhar.aseem.diva.NotesProvider.delete(), making the application vulnerable to SQL injection:

[TAINT] Parameter '1' ==*==*==*==*==>>> Sink '[u'Landroid/database/sqlite/SQLiteDatabase;', u'delete', u'(Ljava/lang/String; Ljava/lang/String; [Ljava/lang/String;)I', u'1', u'SQL_SINK']'
===========
|__Ljakhar/aseem/diva/NotesProvider;->delete(Landroid/net/Uri; Ljava/lang/String; [Ljava/lang/String;)I / 0
 |__Landroid/content/ContentResolver;->notifyChange(Landroid/net/Uri; Landroid/database/ContentObserver;)V (no childs) / 1
 |__Landroid/content/Context;->getContentResolver()Landroid/content/ContentResolver; (no childs) / 1
 |__Landroid/content/UriMatcher;->match(Landroid/net/Uri;)I (no childs) / 1
 |__Landroid/database/sqlite/SQLiteDatabase;->delete(Ljava/lang/String; Ljava/lang/String; [Ljava/lang/String;)I (no childs) / 1
 |__Landroid/net/Uri;->getLastPathSegment()Ljava/lang/String; (no childs) / 1
 |__Landroid/text/TextUtils;->isEmpty(Ljava/lang/CharSequence;)Z (no childs) / 1
 |__Ljakhar/aseem/diva/NotesProvider;->getContext()Landroid/content/Context; (no childs) / 1
 |__Ljava/lang/IllegalArgumentException;-><init>(Ljava/lang/String;)V (no childs) / 1
 |__Ljava/lang/StringBuilder;-><init>()V (no childs) / 1
 |__Ljava/lang/StringBuilder;->append(C)Ljava/lang/StringBuilder; (no childs) / 1
 |__Ljava/lang/StringBuilder;->append(Ljava/lang/Object;)Ljava/lang/StringBuilder; (no childs) / 1
 |__Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder; (no childs) / 1
 |__Ljava/lang/StringBuilder;->toString()Ljava/lang/String; (no childs) / 1
===========
A user-controlled parameter is used to construct a SQL query, making it vulnerable to SQL injection.
Method jakhar.aseem.diva.NotesProvider.delete():

    public int delete(android.net.Uri p8, String p9, String[] p10)
    {
        int v0;
        switch (jakhar.aseem.diva.NotesProvider.urimatcher.match(p8)) {
            case 1:
                v0 = this.mDB.delete("notes", p9, p10);
                break;
            case 2:
                String v2_6;
                int v3_0 = this.mDB;
                StringBuilder v5_1 = new StringBuilder().append("_id = ").append(p8.getLastPathSegment());
                if (android.text.TextUtils.isEmpty(p9)) {
                    v2_6 = "";
                } else {
                    v2_6 = new StringBuilder().append(" AND (").append(p9).append(41).toString();
                }
                v0 = v3_0.delete("notes", v5_1.append(v2_6).toString(), p10);
                break;
            default:
                throw new IllegalArgumentException(new StringBuilder().append("Divanotes(delete): Unsupported URI ").append(p8).toString());
        }
        this.getContext().getContentResolver().notifyChange(p8, 0);
        return v0;
    }

Command injection:

This is an example of the use of a dangerous command that sets insecurely permissive permissions using mode '777' (read, write and execute for user, group and other); it can't get more permissive than that :/ :

[TAINT] String '/system/bin/chmod -R 0777 F1.txt file12.txt' ==*==*==*==*==>>> Sink '[u'Ljava/lang/Runtime;', u'exec', u'([Ljava/lang/String; [Ljava/lang/String; Ljava/io/File;)Ljava/lang/Process;', u'Object', u'COMMAND_SINK']'
===========
|__Lcom/ibm/android/analyzer/test/cmdinjection/CommandInjection6;->onCreate(Landroid/os/Bundle;)V / 0
 |__Landroid/app/Activity;->onCreate(Landroid/os/Bundle;)V (no childs) / 1
 |__Landroid/content/Intent;->getStringExtra(Ljava/lang/String;)Ljava/lang/String; (no childs) / 1
 |__Lcom/ibm/android/analyzer/test/cmdinjection/CommandInjection6;->cmdRuntime(Ljava/lang/String; I)V / 1
  |__Landroid/content/Context;->getFilesDir()Ljava/io/File; (no childs) / 2
  |__Landroid/util/Log;->i(Ljava/lang/String; Ljava/lang/String;)I (no childs) / 2
  |__Ljava/io/File;->getAbsolutePath()Ljava/lang/String; (no childs) / 2
  |__Ljava/lang/Exception;->printStackTrace()V (no childs) / 2
  |__Ljava/lang/Runtime;->exec(Ljava/lang/String; [Ljava/lang/String; Ljava/io/File;)Ljava/lang/Process; (no childs) / 2
  |__Ljava/lang/Runtime;->exec(Ljava/lang/String; [Ljava/lang/String;)Ljava/lang/Process; (no childs) / 2
  |__Ljava/lang/Runtime;->exec(Ljava/lang/String;)Ljava/lang/Process; (no childs) / 2
  |__Ljava/lang/Runtime;->exec([Ljava/lang/String; [Ljava/lang/String; Ljava/io/File;)Ljava/lang/Process; (no childs) / 2
  |__Ljava/lang/Runtime;->exec([Ljava/lang/String; [Ljava/lang/String;)Ljava/lang/Process; (no childs) / 2
  |__Ljava/lang/Runtime;->exec([Ljava/lang/String;)Ljava/lang/Process; (no childs) / 2
  |__Ljava/lang/Runtime;->getRuntime()Ljava/lang/Runtime; (no childs) / 2
  |__Ljava/lang/StringBuilder;-><init>()V (no childs) / 2
  |__Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder; (no childs) / 2
  |__Ljava/lang/StringBuilder;->toString()Ljava/lang/String; (no childs) / 2
 |__Lcom/ibm/android/analyzer/test/cmdinjection/CommandInjection6;->getIntent()Landroid/content/Intent; (no childs) / 1
 |__Ljava/lang/StringBuilder;-><init>()V (no childs) / 1
 |__Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder; (no childs) / 1
 |__Ljava/lang/StringBuilder;->toString()Ljava/lang/String; (no childs) / 1
===========
The application executes a dangerous command
Method com.ibm.android.analyzer.test.cmdinjection.CommandInjection6.onCreate():

    protected void onCreate(android.os.Bundle p7)
    {
        super.onCreate(p7);
        android.content.Intent v2 = this.getIntent();
        String v0 = v2.getStringExtra("exec");
        if (v0 == null) {
            String v1 = v2.getStringExtra("execR");
            if (v1 == null) {
                this.cmdRuntime("/system/bin/chmod 0777 /data/data/com.ibm.android.analyzer.test/1.txt", 5);
                this.cmdRuntime("/system/bin/chmod -R 0777 F1.txt file12.txt", 5);
            } else {
                this.cmdRuntime(new StringBuilder().append("/system/bin/sh ").append(v1).toString(), 5);
            }
        } else {
            this.cmdRuntime(v0, 5);
        }
        return;
    }
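The trace above shows an Intent extra flowing straight into Runtime.exec. As a hedged sketch (the helper name, allowlist, and command below are hypothetical, not part of the analyzed app), a safer pattern validates untrusted input against a strict allowlist and passes it as a single argument-vector element through ProcessBuilder, so no shell parsing or word splitting occurs:

```java
import java.io.IOException;
import java.util.regex.Pattern;

public class SafeExec {
    // Allowlist: plain file names only, no separators or shell metacharacters.
    private static final Pattern SAFE_ARG = Pattern.compile("[A-Za-z0-9._-]+");

    public static boolean isSafeArg(String s) {
        return s != null && SAFE_ARG.matcher(s).matches();
    }

    // Hypothetical replacement for cmdRuntime(): fixed program and mode flag,
    // the untrusted value is only ever one argv element, never concatenated
    // into a command string.
    public static Process chmodWorldReadable(String fileName) throws IOException {
        if (!isSafeArg(fileName)) {
            throw new IllegalArgumentException("rejected argument: " + fileName);
        }
        return new ProcessBuilder("/system/bin/chmod", "0644", fileName).start();
    }
}
```

Because ProcessBuilder takes an argument vector, a payload such as `1.txt; rm -rf /` is rejected by the allowlist before any process is spawned.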

Hard-coded encryption keys:

The use of hard-coded encryption keys is another example of a common vulnerability we see in mobile applications; in the example below, the string 'superSecurePassword' is used to call an encryption method:

[TAINT] String 'superSecurePassword' ==*==*==*==*==>>> Sink '[u'Ljavax/crypto/spec/SecretKeySpec;', u'', u'([B Ljava/lang/String;)V', u'0', u'CIPHER_SINK']'
===========
|__Lcom/android/insecurebankv2/MyBroadCastReceiver;->onReceive(Landroid/content/Context; Landroid/content/Intent;)V / 0
 |__Landroid/content/Context;->getSharedPreferences(Ljava/lang/String; I)Landroid/content/SharedPreferences; (no childs) / 1
 |__Landroid/content/Intent;->getStringExtra(Ljava/lang/String;)Ljava/lang/String; (no childs) / 1
 |__Landroid/content/SharedPreferences;->getString(Ljava/lang/String; Ljava/lang/String;)Ljava/lang/String; (no childs) / 1
 |__Landroid/telephony/SmsManager;->getDefault()Landroid/telephony/SmsManager; (no childs) / 1
 |__Landroid/telephony/SmsManager;->sendTextMessage(Ljava/lang/String; Ljava/lang/String; Ljava/lang/String; Landroid/app/PendingIntent; Landroid/app/PendingIntent;)V (no childs) / 1
 |__Landroid/util/Base64;->decode(Ljava/lang/String; I)[B (no childs) / 1
 |__Lcom/android/insecurebankv2/CryptoClass;->()V / 1
  |__Ljava/lang/Object;->()V (no childs) / 2
 |__Lcom/android/insecurebankv2/CryptoClass;->aesDeccryptedString(Ljava/lang/String;)Ljava/lang/String; / 1
  |__Landroid/util/Base64;->decode([B I)[B (no childs) / 2
  |__Lcom/android/insecurebankv2/CryptoClass;->aes256decrypt([B [B [B)[B / 2
   |__Ljavax/crypto/Cipher;->doFinal([B)[B (no childs) / 3
   |__Ljavax/crypto/Cipher;->getInstance(Ljava/lang/String;)Ljavax/crypto/Cipher; (no childs) / 3
   |__Ljavax/crypto/Cipher;->init(I Ljava/security/Key; Ljava/security/spec/AlgorithmParameterSpec;)V (no childs) / 3
   |__Ljavax/crypto/spec/IvParameterSpec;->([B)V (no childs) / 3
   |__Ljavax/crypto/spec/SecretKeySpec;->([B Ljava/lang/String;)V (no childs) / 3
  |__Ljava/lang/String;->([B Ljava/lang/String;)V (no childs) / 2
  |__Ljava/lang/String;->getBytes(Ljava/lang/String;)[B (no childs) / 2
 |__Ljava/io/PrintStream;->println(Ljava/lang/String;)V (no childs) / 1
 |__Ljava/lang/Exception;->printStackTrace()V (no childs) / 1
 |__Ljava/lang/String;->([B Ljava/lang/String;)V (no childs) / 1
 |__Ljava/lang/String;->toString()Ljava/lang/String; (no childs) / 1
 |__Ljava/lang/StringBuilder;->()V (no childs) / 1
 |__Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder; (no childs) / 1
 |__Ljava/lang/StringBuilder;->toString()Ljava/lang/String; (no childs) / 1
===========
The application uses a hardcoded key to encrypt the data
Method com.android.insecurebankv2.MyBroadCastReceiver.onReceive():

    public void onReceive(android.content.Context p17, android.content.Intent p18)
    {
        String v12 = p18.getStringExtra("phonenumber");
        String v10 = p18.getStringExtra("newpass");
        if (v12 == null) {
            System.out.println("Phone number is null");
        } else {
            try {
                android.content.SharedPreferences v13 = p17.getSharedPreferences("mySharedPreferences", 1);
                this.usernameBase64ByteString = new String(android.util.Base64.decode(v13.getString("EncryptedUsername", 0), 0), "UTF-8");
                String v8 = new com.android.insecurebankv2.CryptoClass().aesDeccryptedString(v13.getString("superSecurePassword", 0));
                String v2 = v12.toString();
                String v4 = new StringBuilder().append("Updated Password from: ").append(v8).append(" to: ").append(v10).toString();
                android.telephony.SmsManager v1 = android.telephony.SmsManager.getDefault();
                System.out.println(new StringBuilder().append("For the changepassword - phonenumber: ").append(v2).append(" password is: ").append(v4).toString());
                v1.sendTextMessage(v2, 0, v4, 0, 0);
            } catch (Exception v9) {
                v9.printStackTrace();
            }
        }
        return;
    }
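Rather than shipping a constant secret inside the APK, a key can be derived at runtime from a user-supplied secret and a random salt. A minimal sketch using the standard PBKDF2 API (the iteration count and parameters are illustrative, not the app's actual scheme):

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class KeyDerivation {
    // Derive a 256-bit AES key from a runtime secret plus a per-user random
    // salt, instead of hard-coding a string like "superSecurePassword".
    public static SecretKeySpec deriveAesKey(char[] secret, byte[] salt) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] keyBytes = factory.generateSecret(
                new PBEKeySpec(secret, salt, 100_000, 256)).getEncoded();
        return new SecretKeySpec(keyBytes, "AES");
    }
}
```

The salt should be generated once with SecureRandom and stored alongside the ciphertext; only the derived key, never the secret itself, is handed to the cipher.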

Insecure SSL/TLS service certificate validation:

This is an example of a method using the insecure ALLOW_ALL_HOSTNAME_VERIFIER to construct an SSL/TLS certificate validation scheme:

[TAINT] Class 'Lorg/apache/http/conn/ssl/SSLSocketFactory;' ==*==*==*==*==>>> Sink '[u'Lorg/apache/http/conn/ssl/SSLSocketFactory;', u'setHostnameVerifier', u'(Lorg/apache/http/conn/ssl/X509HostnameVerifier;)V', u'Object', u'SSLTLS_SINK']'
===========
|__Lcom/ibm/android/analyzer/test/domainvalidation/InsecureApacheSSFAllowAllHostnameVerifier$1;->call()Ljava/lang/Void; / 0
 |__Landroid/util/Log;->i(Ljava/lang/String; Ljava/lang/String;)I (no childs) / 1
 |__Ljava/lang/Exception;->printStackTrace()V (no childs) / 1
 |__Ljava/net/URL;->(Ljava/lang/String;)V (no childs) / 1
 |__Ljava/net/URL;->openConnection()Ljava/net/URLConnection; (no childs) / 1
 |__Ljava/security/KeyStore;->getDefaultType()Ljava/lang/String; (no childs) / 1
 |__Ljava/security/KeyStore;->getInstance(Ljava/lang/String;)Ljava/security/KeyStore; (no childs) / 1
 |__Ljava/security/KeyStore;->load(Ljava/io/InputStream; [C)V (no childs) / 1
 |__Ljavax/net/ssl/HttpsURLConnection;->connect()V (no childs) / 1
 |__Ljavax/net/ssl/SSLContext;->getInstance(Ljava/lang/String;)Ljavax/net/ssl/SSLContext; (no childs) / 1
 |__Ljavax/net/ssl/SSLContext;->init([Ljavax/net/ssl/KeyManager; [Ljavax/net/ssl/TrustManager; Ljava/security/SecureRandom;)V (no childs) / 1
 |__Lorg/apache/http/conn/ssl/SSLSocketFactory;->(Ljava/security/KeyStore;)V (no childs) / 1
 |__Lorg/apache/http/conn/ssl/SSLSocketFactory;->setHostnameVerifier(Lorg/apache/http/conn/ssl/X509HostnameVerifier;)V (no childs) / 1
===========
Use of the insecure attribute ALLOW_ALL_HOSTNAME_VERIFIER to validate TLS certificates
Method com.ibm.android.analyzer.test.domainvalidation.InsecureApacheSSFAllowAllHostnameVerifier$1.call():

    public Void call()
    {
        try {
            android.util.Log.i(this.this$0.TAG, "1");
            javax.net.ssl.SSLContext.getInstance("TLS").init(0, 0, 0);
            java.net.URL v4_1 = new java.net.URL("https://1.www.s81c.com/i/v17/t/ibm_logo_print.png?dv1");
            android.util.Log.i(this.this$0.TAG, "2");
            javax.net.ssl.HttpsURLConnection v5_1 = ((javax.net.ssl.HttpsURLConnection) v4_1.openConnection());
            java.security.KeyStore v3 = java.security.KeyStore.getInstance(java.security.KeyStore.getDefaultType());
            v3.load(0, 0);
            android.util.Log.i(this.this$0.TAG, "3");
            new org.apache.http.conn.ssl.SSLSocketFactory(v3).setHostnameVerifier(org.apache.http.conn.ssl.SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);
            android.util.Log.i(this.this$0.TAG, "4");
            v5_1.connect();
            android.util.Log.i(this.this$0.TAG, "5");
        } catch (Exception v0) {
            android.util.Log.i(this.this$0.TAG, "exception 1!");
            v0.printStackTrace();
        }
        return 0;
    }
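ALLOW_ALL_HOSTNAME_VERIFIER accepts any certificate name for any host, which enables man-in-the-middle attacks; the fix is simply to keep the platform's default verification. To show what the insecure verifier skips, here is a deliberately simplified sketch of hostname matching (real RFC 2818/6125 matching has additional rules; this is illustrative only):

```java
import java.util.Locale;

public class HostnameCheck {
    // Simplified illustration of the check that ALLOW_ALL_HOSTNAME_VERIFIER
    // bypasses: does the certificate's name cover the requested host?
    public static boolean matches(String certName, String host) {
        certName = certName.toLowerCase(Locale.ROOT);
        host = host.toLowerCase(Locale.ROOT);
        if (certName.startsWith("*.")) {
            // A wildcard covers exactly one leading label ("www.example.com",
            // but not "example.com" or "a.b.example.com").
            String suffix = certName.substring(1); // ".example.com"
            int dot = host.indexOf('.');
            return dot > 0 && host.substring(dot).equals(suffix);
        }
        return certName.equals(host);
    }
}
```

An attacker with any valid certificate (say, for evil.com) passes the ALLOW_ALL check but fails this one; in production code, do not install a custom verifier at all and let HttpsURLConnection perform the check.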

Key concepts:

The engine is the result of almost a full year of effort that started as a PoC in Python. Python allowed for quick prototyping and let us focus on the algorithms and data structures. The current implementation uses a graph representation of the taint propagation inside a single function (see graph).


The graph is used to evaluate the taint of other functions, offering a very fast static taint engine that is usable on real-world applications, while at the same time taking into account the object-oriented aspects of Dalvik bytecode to ensure accurate taint propagation.
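The core idea of evaluating taint over a function can be sketched as a forward pass over straight-line code: a destination register becomes tainted when any of its inputs is tainted, and is cleared when overwritten with clean data. This is a toy illustration of the concept, not Ostorlab's actual engine (which is graph-based and written in C++14):

```java
import java.util.*;

public class TaintProp {
    // One assignment: dst receives a value computed from srcs.
    public static final class Assign {
        final String dst;
        final List<String> srcs;
        public Assign(String dst, String... srcs) {
            this.dst = dst;
            this.srcs = Arrays.asList(srcs);
        }
    }

    // Forward pass: dst is tainted iff any of its sources currently is.
    public static Set<String> propagate(List<Assign> code, Set<String> sources) {
        Set<String> tainted = new HashSet<>(sources);
        for (Assign a : code) {
            if (a.srcs.stream().anyMatch(tainted::contains)) {
                tainted.add(a.dst);
            } else {
                tainted.remove(a.dst); // overwritten with untainted data
            }
        }
        return tainted;
    }
}
```

In the command-injection example above, the register holding getStringExtra("exec") would be the initial source, and taint would flow through the StringBuilder appends into the Runtime.exec argument.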

To generate a taint graph, a list of execution paths is compiled; each path is evaluated individually, then fused into a global function taint.

This approach is, however, limited if the function has an exponential number of execution paths (see graph example). This problem is commonly known as path explosion and is a strong limitation of static analysis methods such as symbolic execution.
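Path explosion is easy to reproduce: each diamond-shaped branch in a control-flow graph doubles the number of entry-to-exit paths, so n sequential branches yield 2^n paths. A small sketch that enumerates all paths through a DAG (node numbering is hypothetical):

```java
import java.util.*;

public class PathEnum {
    // Enumerate every path from entry to exit in a DAG control-flow graph,
    // given as an adjacency list.
    public static List<List<Integer>> paths(Map<Integer, List<Integer>> cfg,
                                            int entry, int exit) {
        List<List<Integer>> out = new ArrayList<>();
        walk(cfg, entry, exit, new ArrayList<>(List.of(entry)), out);
        return out;
    }

    private static void walk(Map<Integer, List<Integer>> cfg, int node, int exit,
                             List<Integer> cur, List<List<Integer>> out) {
        if (node == exit) {
            out.add(new ArrayList<>(cur));
            return;
        }
        for (int next : cfg.getOrDefault(node, List.of())) {
            cur.add(next);
            walk(cfg, next, exit, cur, out);
            cur.remove(cur.size() - 1); // backtrack
        }
    }
}
```

One diamond (0 branching to 1 and 2, both joining at 3) yields 2 paths; chaining a second diamond yields 4, and so on, which is why exhaustive enumeration quickly becomes intractable.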



To remediate this limitation, we turn the problem into a 'search problem' rather than a 'brute-force problem'. A path selection algorithm selects the paths with the highest probability of containing a vulnerability. For instance, if a particular path does not cross any sink function (sink functions may cause a vulnerability when called with user-controlled parameters), then there is no vulnerability to look for, and those execution paths are excluded.
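The pruning step described above can be sketched as a filter over enumerated paths: any path that never reaches a sink node is discarded before the expensive taint evaluation. The node numbering and sink set below are illustrative:

```java
import java.util.*;
import java.util.stream.Collectors;

public class PathSelection {
    // Keep only paths that traverse at least one sink node; all others
    // cannot yield a vulnerability and are dropped up front.
    public static List<List<Integer>> selectPaths(List<List<Integer>> paths,
                                                  Set<Integer> sinks) {
        return paths.stream()
                .filter(p -> p.stream().anyMatch(sinks::contains))
                .collect(Collectors.toList());
    }
}
```

With paths [0, 1, 3] and [0, 2, 3] and the sink at node 2, only the second path survives; the taint engine then evaluates a single path instead of two.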



The current implementation was rewritten in C++14, after evaluating several other programming languages (Rust, Go and C); the rewrite offered an over 200x gain in execution speed.

There is still room to increase performance and code coverage, and to fix several false positives caused by the use of a default over-tainted graph for low-level native methods.

These capabilities are already part of Ostorlab Scanner and are continuously, and silently :), being enhanced.

We urge you to test it and share your feedback. If there is a vulnerability you think we are missing, or a false positive the scanner is reporting, we would love to hear from you and work on a way to detect or fix it.
