Improving the quality of traffic statistics

Published on: 08.09.2023
Author: Alex Lavrynets

The project to improve the quality of traffic statistics is currently being carried out within the Data Science Competence Center (DSCC). Its aim is to provide staff at the Federal Roads Office (FEDRO) who use the VMON traffic-monitoring application with more reliable data on the traffic load of Switzerland's national roads.

The traffic load measured by sensors on national roads

The VMON traffic-monitoring application set up by FEDRO is a system for controlling and validating data from almost 500 counting stations across Switzerland. Every vehicle passing close to a sensor is recorded for statistical purposes: each record consists of the time of passage, the type of vehicle (passenger car, delivery vehicle, motorbike, lorry, bus, etc.) and its speed. These data are used to estimate traffic trends on national roads. Data are transferred between the sensors and the servers via the telecommunications network, which is occasionally subject to disruption; when the transfer is interrupted, gaps appear in the data. FEDRO wishes to improve both the quality of the traffic statistics and the efficiency with which they are produced, as these statistics form a basis for decisions concerning Swiss mobility policy.
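A minimal sketch of what one such measurement record might look like. The class, field names and station identifier are illustrative assumptions for this post, not FEDRO's actual data schema:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class VehicleClass(Enum):
    """Vehicle categories mentioned in the text (list not exhaustive)."""
    PASSENGER_CAR = "passenger_car"
    DELIVERY_VEHICLE = "delivery_vehicle"
    MOTORBIKE = "motorbike"
    LORRY = "lorry"
    BUS = "bus"

@dataclass
class CountRecord:
    """One vehicle passage recorded by a counting-station sensor."""
    station_id: str          # hypothetical identifier for one of ~500 stations
    passage_time: datetime   # time the vehicle passed the sensor
    vehicle_class: VehicleClass
    speed_kmh: float

# Example record (illustrative values)
record = CountRecord("CH-0042", datetime(2023, 9, 8, 7, 30),
                     VehicleClass.LORRY, 84.5)
```

Aggregating such records per station and per hour yields the traffic-load series that the statistics are built on.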

Data science for traffic monitoring

Data science techniques are used to guarantee the quality of the data. First, the measured traffic data are analysed with outlier detection algorithms to flag anomalous values and data gaps. This initial screening alerts FEDRO staff to any suspect data. Machine learning algorithms are then trained to reconstruct the missing data (data imputation), and their performance is compared in order to select the most appropriate statistical model. The reconstruction obtained with the selected algorithm is finally submitted to FEDRO staff.
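A minimal sketch of this two-step screening on hourly vehicle counts per station. The robust z-score threshold and the same-hour median imputation are illustrative choices only, not the algorithms VMON actually uses:

```python
import statistics

def flag_outliers(counts, z_thresh=3.5):
    """Flag hourly counts far from the median, using a robust
    z-score based on the median absolute deviation (MAD).
    None entries denote gaps and are never flagged here."""
    observed = [c for c in counts if c is not None]
    med = statistics.median(observed)
    mad = statistics.median([abs(c - med) for c in observed]) or 1.0
    return [c is not None and abs(0.6745 * (c - med) / mad) > z_thresh
            for c in counts]

def impute_gaps(history, current):
    """Fill None entries in today's hourly counts with the median
    count for the same hour over past days (a simple baseline
    imputer, standing in for the trained ML models)."""
    return [c if c is not None
            else statistics.median(day[h] for day in history)
            for h, c in enumerate(current)]

# Example: a sensor glitch at hour 4, a transfer gap at hour 1
flags = flag_outliers([100, 102, 98, 101, 5000, 99])
filled = impute_gaps(history=[[100, 200, 300]] * 5,
                     current=[105, None, 290])
```

In practice the imputation models would be compared on held-out data, and the reconstruction of the winning model submitted to FEDRO staff, as described above.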

Re-training algorithms, essential to maintaining their performance

To maintain optimum performance, algorithms based on statistical models need to be retrained periodically: the models must constantly adapt to inherent changes in the data being analysed (usual fluctuations, seasonal trends, etc.). Re-training the algorithms on newly available data adjusts their statistical models and underlying parameters. This approach ensures that rare abnormal events are still detected and missing data reliably imputed despite a constantly changing environment.
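A toy illustration of such periodic re-fitting on a sliding window of recent daily counts. The window length and the simple mean/standard-deviation baseline are assumptions for illustration, not the models used in the project:

```python
def retrain_on_window(daily_counts, window_days=90):
    """Re-fit the model parameters on the most recent window of
    data, so that seasonal drift shifts the learned baseline
    instead of triggering false anomaly alerts."""
    recent = daily_counts[-window_days:]
    mean = sum(recent) / len(recent)
    var = sum((x - mean) ** 2 for x in recent) / len(recent)
    return {"mean": mean, "std": var ** 0.5}

# Re-fitting on fresh data updates the parameters used for
# anomaly detection and imputation in the next period.
params = retrain_on_window([8.0, 12.0] * 60)
```

Scheduling this re-fit at a regular cadence keeps the detection and imputation steps aligned with the current traffic regime.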




Federal Statistical Office
Data Science Competence Center DSCC

Espace de l'Europe 10
CH-2010 Neuchâtel

