12.06.2020

Ever thought about using LEGO™ bricks for app qualification?

A short story about the hurdles and the fun when testing Coloride
Marc Schlüter
Senior Test Manager
Martin Webermann
Head of App Development

Coloride, our telematics app

Coloride is Swiss Re's technically advanced, globally available telematics app which assesses real driving behavior of insurance clients. By collecting sensor data from users' smartphones, Coloride is able to calculate a driving score which is based on four components:

Coloride driving score components: distraction, speeding, maneuvers and context
Coloride: My trips

This score can be used to engage users with a flexible, module-based gamification approach. Coloride records trips automatically through a background process and can distinguish between different transport modes, such as trains, trams, planes, buses or boats. The implementation of Coloride follows a white-label approach, which enables clients to roll out a branded app version (using their corporate identity, with their own colors, texts and logos) in a short amount of time. Clients can decide whether to integrate Coloride into their own insurance solution or to use it as a standalone system.
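To illustrate how several component scores might be combined into one overall driving score, here is a minimal sketch. The four component names come from the description above; the weights, the 0-100 scale and the weighted-average formula are illustrative assumptions, not Coloride's actual scoring model.

```python
# Hypothetical sketch: combining four component scores into an overall
# driving score. The weights and the 0-100 scale are invented for
# illustration; Coloride's real model is not public.

WEIGHTS = {
    "distraction": 0.30,
    "speeding": 0.30,
    "maneuvers": 0.25,
    "context": 0.15,
}

def driving_score(components: dict) -> float:
    """Weighted average of component scores, each in [0, 100]."""
    if set(components) != set(WEIGHTS):
        raise ValueError("expected exactly the four known components")
    return sum(WEIGHTS[name] * score for name, score in components.items())

score = driving_score(
    {"distraction": 90, "speeding": 70, "maneuvers": 80, "context": 100}
)
print(round(score, 1))  # 83.0
```

Any real implementation would of course derive the component scores themselves from recorded trip data; this sketch only shows the final aggregation step.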

Testing in an agile way

Our software development process follows an agile approach. Our team of knowledgeable professionals is organized into several smaller teams containing cross-functional experts in all relevant topics. We have software developers, testers, analysts and business experts giving their best to support the whole effort. Depending on which part of the software is delivered by each team to the product, the test focus differs slightly. Because our main sprint goal is to incrementally deliver a valuable product, each team executes unit tests, integration tests and system tests (to mention just a few) on all development environments, to ensure the quality of their deliveries.

Nevertheless, it is useful to have someone keeping an overview of the quality of the whole product. Someone who can step back and be the interface between the testing experts from each team. When it comes to complex products, such as a telematics app, this aspect is essential.

Test strategy in a complex setup

Usually, articles about qualification describe in detail the test strategy, test process and test plan for a software product. We have decided to proceed differently.

In this article, we would like to draw your attention to the testing peculiarities of Coloride - a few testing snapshots - in a way that is also digestible for readers who do not work in software qualification. Since we believe that testing is more than just executing a test specification which returns an OK or NOK result, in the following sections we will give you some insight into the obstacles we ran into when publishing Coloride.

First issue: Smartphone variety

As a mobile app is designed to run on mobile platforms, we paid special attention to device interoperability tests. Mobile phones are produced by many manufacturers, and each of them offers high-end and low-budget devices to cover all segments of their markets. Low-budget phones do not necessarily have high-end modules installed, so we had to take a closer look at the GPS chipsets installed in those devices. In addition, mobile phones' operating systems can limit the options to retrieve proper GPS data. We had to consider that too.

We saw significant differences in the accuracy of phones' chipsets. Our first naïve approach was to use our telematics app in the exact same way on a set of different phones, expecting the same results in terms of trip start and stop and route capturing. We soon realized that the results between iOS and Android devices differed. Secondly, we saw that even the results within our Android devices were not comparable. The cause of this, in our view, relates to the variety of GPS chipsets used by Android phone manufacturers.

To make our situation even more complicated, many vendors equip their Android devices with system add-ons to increase the battery life of their products. This usually works through individual permissions for high-consumption features like GPS, or by restricting background usage of apps. To overcome this hurdle, we had to consider a large number of customised operating system versions.

During our intensive testing sessions, we also realized that some vendors changed options the user had selected, without asking or notifying the user.

With all that in mind, we drew some conclusions that could be of general interest:

  • The smartphone portfolio should be large enough to cover low-budget and high-end devices across several manufacturers.
  • When selecting devices, the markets where the product is deployed have to be considered, since producers and devices can vary across geographies.
  • Measuring battery consumption of trip recording requires having identical devices (manufacturer, model, age, OS etc.), as results from different vendors cannot be compared.
  • One should start with 100% charged devices since some vendors trigger battery saving mechanisms, without notifying the user, when the battery reaches a defined level.
  • Before starting testing, one needs to check the default configuration of the phones, especially in terms of battery saving and background activities and modify it, where appropriate.
  • OS updates can change a device's expected behaviour.
  • A regular check on the internet for known issues of devices can be timesaving.

Second issue: Event detection

In order to assess driving behaviour, Coloride calculates a score from different components, as described in the introduction. But how do we detect harsh driving maneuvers, such as acceleration, braking, cornering, U-turns, roundabouts and steering?

Every modern smartphone is equipped with a set of inertial sensors, such as an accelerometer and a gyroscope, which, combined with GPS, can be used as a robust IMU (Inertial Measurement Unit).

Coloride consumes such data provided by the operating system. This means our app is able to receive a data stream containing acceleration data along the X-, Y- and Z-axes. Every time a trip is recorded on the smartphone, this data is processed by our algorithms. Since ML (Machine Learning) algorithms have to be trained, we had to generate a lot of data, including true and false positives, so that the algorithm could work. For this, all the maneuvers we wanted to detect had to be executed in advance and repeatedly, in a controlled environment such as a racing track, following well-defined protocols. And, although it sounds like a lot of fun, some test drivers felt sick after performing the 30th harsh acceleration in a row.
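To make the X/Y/Z data stream more concrete, here is a deliberately naive sketch of how harsh events could be flagged from raw acceleration samples. Coloride uses trained ML models for this; the fixed 3.0 m/s² threshold and the assumption that gravity has already been removed are purely illustrative.

```python
import math

# Illustrative sketch only: a naive threshold detector for harsh events
# in an X/Y/Z acceleration stream (m/s^2, gravity assumed removed).
# The 3.0 m/s^2 threshold is an arbitrary assumption for demonstration;
# the real app relies on trained ML models, not a fixed cutoff.

HARSH_THRESHOLD = 3.0  # m/s^2, assumed

def harsh_events(samples):
    """Return indices of samples whose acceleration magnitude exceeds
    the threshold."""
    events = []
    for i, (ax, ay, az) in enumerate(samples):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > HARSH_THRESHOLD:
            events.append(i)
    return events

stream = [(0.1, 0.0, 0.2), (0.5, 0.1, 0.0), (3.8, 0.4, 0.1), (0.2, 0.0, 0.1)]
print(harsh_events(stream))  # [2]
```

The gap between this toy detector and production quality is exactly why so much labelled training data had to be collected: a fixed threshold cannot distinguish a braking maneuver from, say, the phone being dropped.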

After all these harsh maneuvers were collected, our ML algorithms had to be trained on regular maneuvers as well. On the one hand, we collected a large amount of data from many volunteers by providing them with Coloride during their regular commutes and asking them, after their rides, to assess whether the maneuvers had been recognized correctly. On the other hand, to make sure our algorithms stayed unbiased, we also equipped our test vehicles with special logging devices from the racing world. We were then able to compare the accelerometer values from the cars with the data streamed by Coloride running on the smartphones, and fine-tune our ML algorithms to properly detect driving maneuvers.

Here is what we learned:

  • Smartphones can move around in the car while driving: they may slide around, for example off the co-driver's seat, causing misleading acceleration data when they fall into the footwell.
  • Picking up the smartphone while driving also causes an acceleration: however, this should not be detected as a maneuver but as a distraction event.
  • Adding special algorithms and using additional sources to enrich the data before further processing, for example cartographic information, can help overcome these problems.
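One simple instance of the "special algorithms" idea above can be sketched with a sliding median filter, which suppresses short, isolated spikes (such as the phone being picked up or falling into the footwell) while preserving sustained maneuvers. The window size is an arbitrary assumption; this is not Coloride's actual pipeline.

```python
import statistics

# Hypothetical illustration: a sliding median filter over a 1-D
# acceleration magnitude signal. Single-sample spikes (phone pickup,
# phone sliding off the seat) are suppressed, while a sustained
# plateau (a real braking maneuver) survives. Window size is assumed.

def median_filter(signal, window=3):
    """Replace each sample with the median of its neighbourhood."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(statistics.median(signal[lo:hi]))
    return out

# The isolated 6.0 spike at index 2 is suppressed, while the sustained
# ~3.5 plateau (indices 4-6) remains.
raw = [0.2, 0.1, 6.0, 0.2, 3.5, 3.6, 3.4, 0.3]
print(median_filter(raw))
```

In practice such filtering would be combined with the cartographic enrichment mentioned above, so that, for example, an "event" recorded while the map says the vehicle is stationary can be discarded.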

Third issue: Start and stop recognition

One key feature of a telematics app is to assess the driving behaviour of users in a convenient way. To achieve this, the correct start and stop location of a trip needs to be detected while the app is running only in the background. To verify that start and stop detection works as expected, movement of the device has to be induced. Generally, there are two ways to do this:

  • Hire testers, equip them with a device and let them move around with the phone (manual).
  • Use simulation options in the development environment (automated).
Phones working with Legos

Both options have their merits. However, we went for a third, semi-automated option, introducing a LEGO™ Mindstorms test setup. This setup allows two devices (e.g. iOS and Android) to be moved in different ways to simulate movements. This stimulates the operating system to detect a movement, which our telematics app uses for start and stop detection. With this installation, it is possible to execute use cases where not just one part of our telematics app is stimulated by stubs or mocks, but the whole app is triggered by real signals from the OS. This gets us closer to real phone usage and can be repeated many times, for many hours, without human interaction.

This article aimed to give only a brief glance into the exciting world of qualifying a telematics app, providing examples of how qualification activities in this area have evolved beyond more traditional approaches. Are you interested in knowing more? Just get in touch with us.
