How to crowdsource your drive tests

Drive testing is a fundamental tool for cellular network planning, optimisation and troubleshooting. Drive test logs provide a microscopic view of the air interface by collecting all the metrics reported by the device, including its location and every signalling message exchanged with the network.

Drive tests are vital to complete the acceptance of a new site or cluster of sites before they go on air and are handed over to the Operations team. They are uniquely suited to detecting common installation and implementation errors such as crossed feeders, missing neighbours or incorrect antenna tilt.

But they are used for a lot more than that. Here’s our classification:

There are a lot of cases here. You may notice that there are “directed” drive tests, used like a surgeon’s knife to dig deeper into a known event and acquire additional data, and “exploratory” drive tests, where we simply drive around and measure, hoping to find something to work on afterwards.

What’s not so good about drive tests?

Drive tests are delicate because of their many moving parts: an adapted vehicle, a laptop, software, special devices, cables, ports, antennae, a driver and sometimes an additional equipment operator, licences… It takes weeks to fully set up the equipment, and even then points of failure abound. Some of them, like a loose cable, are more obvious than others: a wrong Windows setting, like the infamous TCP window scaling option, can ruin days of work and will only be detected post-mortem, if at all.

Drive tests are also limited by design and use:

  • Although handheld equipment is available, the most common use case is measurement collection on roads and streets, away from offices and homes.
  • Drive test teams usually work during daytime and office hours, missing weekends and evenings.
  • Not all devices are compatible with drive test software, so it is hard to benchmark devices or correlate performance with device hardware or software.
  • Drive tests are mostly active tests executed on the network; the experience actually perceived by users is simply ignored.
  • They need to be planned days in advance, and their results take days or weeks to be delivered.

So, if your QoE at the most popular restaurant in town were consistently worse than your competitor’s every Friday night, how would you find out?

Why crowdsource?

Drive tests are very good for actively testing the network, but they are not designed to detect or benchmark QoE events passively, especially short-lived ones. A representative set of users with software clients installed on their devices is far better suited to this purpose.

Compare the fixed picture of a typical drive test to the sample density of a crowdsourced campaign.

Typical drive test plot (courtesy of Computa maps)

Sample distribution for a 10,000 user panel over 3 days.

The difference is even more noticeable if we animate the samples:

A drive test would capture just one frame of this sequence.

Below is another example of how a small panel of just 6 devices can detect coverage holes and locations covered only by the competition, many of them indoors.

This, by the way, can be done every day. A drive test would return very different results depending on the date the campaign was run (see the bar chart below). A coverage hole present one day may be gone the next, whereas a permanent crowdsourced collection of measurements gives clear priorities: the more samples in a hole, the higher its priority.
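As a rough illustration of that prioritisation (a minimal sketch, not any vendor's algorithm; the sample data, threshold and grid size are all hypothetical), binning crowdsourced samples into a coarse location grid and counting how many fall below a signal-strength threshold is enough to rank candidate coverage holes:

```python
from collections import Counter

# Hypothetical crowdsourced samples: (latitude, longitude, RSRP in dBm).
samples = [
    (51.50740, -0.12780, -112.0),
    (51.50741, -0.12782, -115.0),
    (51.52000, -0.10000, -95.0),
    # ... many more, collected passively by on-device clients
]

RSRP_HOLE_THRESHOLD = -110.0  # assumed cut-off for "no usable coverage"
GRID = 0.001                  # roughly 100 m bins at mid latitudes

def grid_cell(lat, lon, grid=GRID):
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / grid) * grid, round(lon / grid) * grid)

# Count weak samples per grid cell: the more samples, the higher the priority.
hole_counts = Counter(
    grid_cell(lat, lon)
    for lat, lon, rsrp in samples
    if rsrp < RSRP_HOLE_THRESHOLD
)

for cell, count in hole_counts.most_common(10):
    print(f"Candidate coverage hole at {cell}: {count} weak samples")
```

In a real deployment the same ranking would simply be refreshed as new samples arrive each day, which is exactly what a one-off drive test cannot do.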

So, crowdsourced measurements give us more accuracy in terms of location and time. Is that all?

Well, no. There are a few other dimensions over which to aggregate data:

  • Device. On-device clients work across device models and OS versions: any device supporting the software will report measurements. All Android, all iOS.
  • Technology. Cellular or WiFi. Create a WiFi coverage map. Compare WiFi vs cellular throughput. What data speeds are your customers getting on their WiFi?
  • Customer segment. As you are monitoring real customers, you can observe their behaviour, which is well beyond the scope of drive tests. Combine your CRM with your on-device clients and find the dropped call rate of your VIPs or the average throughput of your prepaid customers.
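As a sketch of what aggregation over these dimensions could look like (purely illustrative; the column names and figures are made up, not a real schema), a few lines of pandas can slice one measurement table by technology, customer segment or device model:

```python
import pandas as pd

# Hypothetical table of crowdsourced measurements joined with CRM data.
measurements = pd.DataFrame({
    "device_model":     ["Pixel 8", "iPhone 15", "Pixel 8", "iPhone 15"],
    "technology":       ["LTE", "WiFi", "5G NR", "WiFi"],
    "customer_segment": ["VIP", "prepaid", "VIP", "prepaid"],
    "throughput_mbps":  [42.0, 118.5, 210.3, 96.1],
    "call_dropped":     [0, 0, 1, 0],
})

# Average throughput per access technology (cellular vs WiFi).
print(measurements.groupby("technology")["throughput_mbps"].mean())

# Dropped call rate per customer segment, e.g. VIPs vs prepaid.
print(measurements.groupby("customer_segment")["call_dropped"].mean())

# Throughput per device model, the kind of benchmark drive tests struggle with.
print(measurements.groupby("device_model")["throughput_mbps"].describe())
```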

How to crowdsource your drive tests using on-device clients

Your crowdsourced system will contain the following elements:

  • User panel: volunteers that will measure their service and network performance on their devices and share their results with you.
  • On-device client. The piece of software that will sit on the devices and collect measurements. These measurements can be either active or passive.

Active measurements are the result of conducting a test on the network: a specific file download from a specific location, a test call, etc., whereas passive measurements are collected during typical device usage. This is not a small difference: active tests measure Quality of Service, whereas passive measurements capture Quality of Experience (see the sketch after this list).

  • Backend. The servers that will collect the measurements and calculate your KPIs.
  • Reporting platform: where the calculated KPIs and reports are presented to the user (more on this below).
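To make the active/passive distinction concrete, here is a minimal sketch of both kinds of measurement as an on-device client might produce them (illustrative only; the test URL and record fields are assumptions, not any particular client's format):

```python
import time
import urllib.request

TEST_URL = "https://example.com/testfiles/10MB.bin"  # hypothetical test server

def active_throughput_test(url=TEST_URL):
    """Active measurement: run a controlled download and report QoS."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        payload = response.read()
    elapsed = time.monotonic() - start
    mbps = (len(payload) * 8) / (elapsed * 1_000_000)
    return {"type": "active", "throughput_mbps": round(mbps, 2)}

def passive_record(bytes_received, interval_s, rsrp_dbm):
    """Passive measurement: describe what the user's own traffic saw (QoE)."""
    return {
        "type": "passive",
        "throughput_mbps": round((bytes_received * 8) / (interval_s * 1_000_000), 2),
        "rsrp_dbm": rsrp_dbm,
    }

# Both kinds of record would be uploaded to the backend for KPI calculation.
print(passive_record(bytes_received=4_800_000, interval_s=15.0, rsrp_dbm=-101))
```

The active test loads the network on purpose; the passive record only describes traffic the user generated anyway, which is why it reflects the experience as perceived.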

The panel

Recruiting and retaining a statistically significant panel is key to obtaining valid results.

The panel size and composition vary widely with the purpose of the system. The more segmented results you look for, the larger the panel must be. If you would like to measure your own QoE or QoS, you may limit the panel to your customers or employees, but if you intend to conduct benchmarks, then you will need to reach out to your competitors’ customers. The client is key for that purpose.
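As a back-of-the-envelope sketch of why segmentation inflates the panel (standard sample-size arithmetic for a proportion, not a methodology taken from the original; the 5% margin of error and 95% confidence level are assumptions), each segment you want to report on independently needs roughly its own statistically valid sub-panel:

```python
import math

def required_panel(segments, margin_of_error=0.05, confidence_z=1.96, p=0.5):
    """Rough panel size so each segment alone meets the margin of error.

    Uses the standard sample-size formula for a proportion,
    n = z^2 * p * (1 - p) / e^2, applied once per segment.
    """
    per_segment = math.ceil((confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2)
    return per_segment * segments

# One overall figure vs. results split by, say, 3 devices x 2 technologies x 4 segments.
print(required_panel(segments=1))   # ~385 panellists
print(required_panel(segments=24))  # ~9,240 panellists
```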

The client

The client must be able to collect the measurements you’re interested in, but it also needs to remain on the panel’s devices for as long as necessary. Besides its technical features, it must be sticky.

Usually the client takes the form of a speedtest-like app. Unfortunately, these apps do not stay long on users’ phones, so they are only good enough to maintain a small panel for a brief period of time. For large, durable panels the only viable alternative is to embed the client into a host app by means of a Software Development Kit (SDK).

The stickiness of the host app will define the panel. Your own self-care app is ideal for managing your customers’ QoE. To conduct competition benchmarks, you will need access to a trusted third-party app.

The backend

The key factors to consider in a backend are times and costs of deployment and maintenance, together with security and data privacy regulations.

The typical in-house solution gives you the most control, but it takes time to deploy, is more costly to maintain and has a lower ROI. A managed, hosted alternative allows you to ramp up your measurement campaigns quickly, eliminate maintenance procedures and take advantage of the ever-shrinking costs of PaaS.

Physical location is a factor. Measured latency against a test server grows with the distance between the client and the server. Some countries may also not allow data to be stored outside their borders. Check with your legal department.
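To get a feel for how much geography alone contributes, here is a minimal estimate of the round-trip latency floor imposed by server distance (an assumption-laden sketch using ~200,000 km/s as a rule-of-thumb speed of light in fibre, ignoring routing detours and queueing):

```python
FIBRE_SPEED_KM_PER_MS = 200.0  # ~2/3 of c, a common rule of thumb for fibre

def min_rtt_ms(distance_km):
    """Lower bound on round-trip time from propagation delay alone."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

for d in (100, 1_000, 8_000):
    print(f"{d:>5} km away -> at least {min_rtt_ms(d):.1f} ms RTT")
```

Roughly 1 ms of unavoidable RTT per 100 km of distance: a test server on another continent will dominate any latency KPI before the radio network is even involved.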

Reporting

Web-based static reporting is the most common solution nowadays. It contains predefined reports that query the database. Typical reports include throughput, latency, packet loss, signal strength, signal quality, call setups, service states and others aggregated over location, cell, device type, customer segment or access technology.

In the near future we will see more dynamic reporting based on Excel or Qlik Sense, which will allow users to quickly build their own reports, and northbound interfaces to export the measurements to external BI, Big Data or reporting solutions.

Click here for more information on Ciqual's solution or contact us to continue the conversation.