
UI call coverage release for dynamic security testing

Ostorlab has released UI call coverage in the analysis environment to show the UI flow exercised during dynamic security testing.

Wed 01 September 2021

This summer, the Ostorlab team has been hard at work, and we are excited to kick off a series of announcements of new capabilities and features over the upcoming weeks. The first of these is the release of UI call coverage and a new, enhanced version of our Monkey Tester.

The newly added feature shows the UI flow exercised during dynamic analysis. It also provides an easy way to validate the coverage of the application and ensure critical use cases are covered.

Demo of the UI call dynamic security testing

The new release comes with numerous enhancements to the monkey tester logic, offering a better understanding of UI components and generating meaningful events that achieve high coverage of the application's logic.

Monkey Testing and UI interaction automation

Monkey testing is an automated testing technique in which a test executor injects inputs and triggers clicks or events on different parts of the application to test several of its aspects, such as finding crashes or errors, tracking performance, or identifying security issues.

During dynamic analysis, Ostorlab's scanner uses real devices on which the monkey tester generates a series of events to interact with the application. The inputs can be either direct user interactions, like a swipe, a button click, or filling a text field, or system interactions like turning Wifi, Bluetooth, or GPS on and off, or sending an IPC message.
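To make the idea concrete, here is a minimal Python sketch of the kind of event stream such a monkey tester might emit. The event names, the `Event` class, and the bias toward user events are illustrative assumptions of ours, not Ostorlab's actual event model.

```python
import random
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical event vocabulary; the real event model is internal to the scanner.
USER_EVENTS = ["swipe", "click", "long_press", "fill_text"]
SYSTEM_EVENTS = ["toggle_wifi", "toggle_bluetooth", "toggle_gps", "send_ipc"]

@dataclass
class Event:
    kind: str                     # "user" or "system"
    name: str                     # e.g. "click" or "toggle_wifi"
    target: Optional[str] = None  # UI component id, for user events only

def next_event(view_components: List[str]) -> Event:
    """Pick either a user interaction on a visible component or a system event."""
    if view_components and random.random() < 0.8:  # assumed bias toward user events
        return Event("user", random.choice(USER_EVENTS),
                     target=random.choice(view_components))
    return Event("system", random.choice(SYSTEM_EVENTS))
```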

Our technology supports both Android and iOS platforms and covers native applications as well as multi-platform frameworks like Xamarin, Cordova, Ionic, and Flutter.

While there are a few open-source tools for UI testing automation, they present several challenges:

  • Focus on a single platform and lack of a common way to run or express tests
  • Poor support for several common platforms and frameworks; some frameworks in particular, like Xamarin and Flutter, have very peculiar approaches to creating UI components.
  • Poor coverage of key usage patterns, like registration with policy acceptance or filling a check-out form.

To overcome these challenges and be able to maximize the coverage of the application across all platforms, we rely on three exploration strategies:

  • Random based strategy
  • Rule-based strategy
  • Evolutionary-based strategy

Exploration Strategies

The use of similar strategies is not unique to Ostorlab; several open-source projects and research papers have implemented comparable approaches. The most notable is Sapienz by Facebook. The open-source version is no longer maintained, but Facebook has given several presentations on the improvements made to the internal version.

There are two key design differentiators from most of these implementations. The first is that the strategies are not executed separately but combined into an ensemble strategy that mixes them, transforming several weakly performing strategies into a single strong one.

The second key differentiator is that these strategies do not generate rigid test cases that may or may not apply, as test reproducibility is often not guaranteed. Instead, they generate test minions with a set of parameters that alter how the minion interacts with the application and which types of actions or sequences it favors.
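As a rough sketch of how an ensemble of parameterized minions could be wired, assuming hypothetical `Minion` fields and strategy names (this is not Ostorlab's internal API):

```python
import random
from dataclasses import dataclass
from typing import Dict

@dataclass
class Minion:
    """A parameterized test agent; the fields here are illustrative."""
    strategy_weights: Dict[str, float]  # how much each strategy is favored
    event_budget: int = 500             # how many events this minion may emit
    coverage: float = 0.0               # UI coverage observed after a run

def pick_strategy(minion: Minion) -> str:
    """Ensemble mixing: each step samples a strategy from the minion's weights,
    so weak strategies still contribute on the views where they perform well."""
    names = list(minion.strategy_weights)
    weights = list(minion.strategy_weights.values())
    return random.choices(names, weights=weights, k=1)[0]

minion = Minion({"random": 0.5, "rule": 0.3, "evolutionary": 0.2})
print(pick_strategy(minion))
```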

Random-based

The random-based strategy is the most basic technique used to interact with the application. It generates a randomized series of events and is suitable for views with multiple independent components, for example a view with different clickables, text articles, and videos.

To illustrate how the random strategy works and measure its efficiency, consider a simple view with 6 UI components. In this video, we can see the different events generated by the monkey tester and how it exercises them:

Example of the random strategy

For a random strategy with 4 types of events (swipe, click, touch, and check), an average of 30 events is needed to interact with the 6 different UI components. To interact in a specific sequence, like Fill Textbox + Enable Checkbox + Click Button 2, an average of 95 events is needed.
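These averages can be sanity-checked with a toy Monte Carlo simulation. The sketch below assumes each event uniformly picks one of the 4 event types and one of the 6 components; this deliberately simplified model is our assumption, not a description of the real tester.

```python
import random

EVENT_TYPES = ["swipe", "click", "touch", "check"]
N_COMPONENTS = 6

def avg_events_to_cover_all(trials: int = 10_000) -> float:
    """Average number of random events until every component has been hit
    at least once (a coupon-collector style estimate)."""
    total = 0
    for _ in range(trials):
        seen, steps = set(), 0
        while len(seen) < N_COMPONENTS:
            random.choice(EVENT_TYPES)                # event type, ignored here
            seen.add(random.randrange(N_COMPONENTS))  # component the event hits
            steps += 1
        total += steps
    return total / trials

print(f"~{avg_events_to_cover_all():.0f} events to touch all 6 components")
```

Under this simplified model the answer is about 15 events; requiring each component to receive an event type it actually responds to, or requiring a specific ordered sequence, pushes the expected count up sharply, in line with the measured averages above.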

The random-based strategy is simple, but it can take a very long time (or fail entirely) to cover complex logical patterns.

Rule-based

The rule-based strategy correlates user interactions with the UI components of the application. It uses search mechanisms to identify specific component types and can apply advanced logic to those components. This technique is suitable for views with predictable actions based on the component types, for example a form view with text fields, several checkboxes, and a clickable button.

Below is an example of a simple login view with a username field, a password field, and a login button.

Example with login rule

In this example, a rule checks whether there is a password field in the current view; a more sophisticated check could identify a credit card form or a map component.

The monkey tester first iterates over all the rules to determine the ones matching the current view and randomly selects one, as in the sketch below. In the example above, the monkey tester first identified that there were multiple text fields and started filling all of them with dummy values as part of the random strategy interaction; it then applied a login rule by injecting the username and password and clicking the login button.
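A minimal sketch of this match-then-select loop, assuming a hypothetical `Component` view model and a single login rule (the real rule engine is internal to the scanner):

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Component:
    kind: str    # e.g. "text_field", "password_field", "button", "checkbox"
    label: str = ""

@dataclass
class Rule:
    name: str
    matches: Callable[[List[Component]], bool]
    apply: Callable[[List[Component]], None]

def fill_login(view: List[Component]) -> None:
    """Inject dummy credentials into the text fields, then click a button."""
    for c in view:
        if c.kind in ("text_field", "password_field"):
            print(f"fill {c.kind} '{c.label}' with a dummy value")
    for c in view:
        if c.kind == "button":
            print(f"click '{c.label}'")
            break

LOGIN_RULE = Rule(
    name="login",
    matches=lambda view: any(c.kind == "password_field" for c in view),
    apply=fill_login,
)

def step(view: List[Component], rules: List[Rule]) -> None:
    """Collect the rules matching the current view and randomly apply one."""
    candidates = [r for r in rules if r.matches(view)]
    if candidates:
        random.choice(candidates).apply(view)

step([Component("text_field", "username"),
      Component("password_field", "password"),
      Component("button", "login")], [LOGIN_RULE])
```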

Evolutionary-based

The evolutionary-based strategy (also called search-based) uses meta-heuristic search algorithms powered by genetic algorithms.

Genetic algorithms are used to optimize test minion settings to maximize coverage. The strategy keeps track of the inputs and the coverage of the application; at every iteration, settings are mutated to increase that coverage, and weakly performing minions are discarded.
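A compact sketch of this loop, assuming hypothetical minion parameters and a stand-in coverage measurement (the real implementation runs the app on a device and records UI coverage):

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Minion:
    params: Dict[str, float]  # e.g. biases controlling event generation
    coverage: float = 0.0

def mutate(minion: Minion, scale: float = 0.1) -> Minion:
    """Perturb each setting slightly to explore nearby configurations."""
    return Minion({k: max(0.0, v + random.gauss(0, scale))
                   for k, v in minion.params.items()})

def evolve(population: List[Minion],
           run_and_measure: Callable[[Minion], float],
           generations: int = 10, survivors: int = 4) -> Minion:
    """Each generation: measure coverage, keep the best minions,
    and refill the population with mutated copies of the survivors."""
    for _ in range(generations):
        for m in population:
            m.coverage = run_and_measure(m)  # run the app, record UI coverage
        population.sort(key=lambda m: m.coverage, reverse=True)
        population = population[:survivors]
        population += [mutate(random.choice(population))
                       for _ in range(survivors)]
    return max(population, key=lambda m: m.coverage)

# Dummy fitness standing in for a real device run: rewards a balanced rule bias.
best = evolve([Minion({"rule_bias": random.random()}) for _ in range(8)],
              run_and_measure=lambda m: 1 - abs(m.params["rule_bias"] - 0.5))
print(best.params)
```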

This strategy is suitable for applications where the view flow differs depending on user inputs, for example a questionnaire where the answers lead to different views.

Summary

While each strategy showed good coverage on specific types of views, the overall coverage of each strategy run separately, tested over 1,000 mobile applications, was relatively low.

The highest average coverage achieved was 35% for the random strategy, 27% for the rule-based strategy, and 38% for the evolutionary-based strategy. However, combining them into an ensemble strategy offered significantly higher coverage of 52% in a shorter duration.

Overall, the revamped monkey tester and the new test case coverage visibility offer higher coverage and make it easier to visualize and understand what happens behind the scenes.
