
Generating representative test scenarios: The FUSE for Representativity (FUSE4Rep) process model for collecting and analysing traffic observation data

Scenario-based testing is a pillar of assessing the effectiveness of automated driving systems (ADSs). For data-driven scenario-based testing, representative traffic scenarios need to describe real road
traffic situations in compressed form and, as such, cover normal driving along with critical and accident situations originating from different data sources. Nevertheless, in the choice of data sources, a conflict
often arises between sample quality and depth of information. Police accident data (PD) covering accident situations, for example, represent a full survey and thus have high sample quality but low depth of information.
However, for local video-based traffic observation (VO) data collected with drones, which cover normal driving and critical situations, the opposite is true. Only the fusion of both data sources using statistical matching
can yield a representative, meaningful database able to generate representative test scenarios. For successful fusion, which requires as many relevant, shared features in both data sources as possible, the following
question arises: How can VO data be collected by drones and analysed to create the maximum number of relevant, shared features with PD? To answer that question, we used the Find–Unify–Synthesise–Evaluation
(FUSE) for Representativity (FUSE4Rep) process model. We applied the first (“Find”) and second (“Unify”) steps of this model to VO data and conducted drone-based VOs at two intersections in Dresden, Germany,
to verify our results. We observed a three-way and a four-way intersection, both without traffic signals, for more than 27 h, following a fixed sample plan. To generate as much relevant information as possible, the
drone pilots collected 122 variables for each observation (which we published in the ListDB Codebook) as well as, among other information, the behavioural errors of road users. Next, we analysed the videos for
traffic conflicts, which we classified according to the German accident type catalogue and matched with complementary information collected by the drone pilots. Last, we assessed the crash risk for the detected
traffic conflicts using generalised extreme value (GEV) modelling. For example, accident type 211 was predicted to occur 1.3 times per year at the observed four-way intersection. The process ultimately
facilitated the preparation of VO data for fusion with PD. The orientation towards traffic conflicts, the matched behavioural errors and the GEV-based crash-risk estimates enabled the creation of accident-relevant scenarios. Thus, the
model applied to VO data marks an important step towards realising a representative test scenario database and, in turn, safe ADSs.
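For readers unfamiliar with the GEV step, the following minimal Python sketch illustrates how a crash frequency can, in principle, be extrapolated from observed conflict extremes. It assumes, purely for illustration and not taken from the article, that block maxima of negated time-to-collision (TTC) serve as the severity measure, that a crash corresponds to the negated TTC reaching zero, and that 15-minute observation blocks are used; the input data are synthetic.

# Illustrative GEV-based crash-risk sketch (assumptions: negated-TTC block
# maxima as severity measure, crash at -TTC >= 0, 15-minute blocks; the
# input data are synthetic, not the article's measurements).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Synthetic block maxima of negated TTC in seconds, one value per 15-minute
# block (108 blocks, roughly the 27 h of observation reported above).
neg_ttc_maxima = genextreme.rvs(c=-0.05, loc=-3.5, scale=0.3,
                                size=108, random_state=rng)

# Fit a GEV distribution to the block maxima.
c_hat, loc_hat, scale_hat = genextreme.fit(neg_ttc_maxima)

# Probability that a single block contains a crash, i.e. -TTC reaches 0.
p_crash_per_block = genextreme.sf(0.0, c_hat, loc=loc_hat, scale=scale_hat)

# Extrapolate to a year, assuming (illustratively) 12 observable daylight
# hours per day and that the observed blocks are representative of them.
blocks_per_year = 4 * 12 * 365
crashes_per_year = p_crash_per_block * blocks_per_year
print(f"P(crash per block): {p_crash_per_block:.2e}")
print(f"Expected crashes per year: {crashes_per_year:.2f}")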
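Similarly, the statistical matching mentioned above can be pictured as nearest-neighbour (hot-deck) matching on the shared features, where each PD record borrows VO-only variables from its most similar VO record. The sketch below is a simplified illustration using hypothetical features and data, not the article's fusion procedure.

# Illustrative statistical-matching sketch: nearest-neighbour (hot-deck)
# matching of police-data (PD) records to video-observation (VO) records on
# shared features; feature names and values are hypothetical.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Shared numeric features in both sources (hypothetical): hour of day,
# number of involved road users, speed limit in km/h.
pd_shared = np.array([[8, 2, 50], [17, 2, 50], [12, 3, 30]], dtype=float)
vo_shared = np.array([[9, 2, 50], [14, 2, 50], [18, 2, 50], [11, 3, 30]],
                     dtype=float)

# VO-only depth of information to be donated to the PD records, e.g. a
# behavioural-error code observed for each conflict (hypothetical).
vo_behavioural_error = np.array([3, 1, 4, 2])

# Scale the shared features so no single feature dominates the distance.
scaler = StandardScaler().fit(vo_shared)
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(vo_shared))

# Match each PD record to its closest VO donor and transfer the VO-only data.
_, donor_idx = nn.kneighbors(scaler.transform(pd_shared))
fused_error_codes = vo_behavioural_error[donor_idx.ravel()]
print(fused_error_codes)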

Identifier: oai:union.ndltd.org:DRESDEN/oai:qucosa:de:qucosa:89724
Date: 20 February 2024
Creators: Bäumler, Maximilian, Prokop, Günther, Lehmann, Matthias
Contributors: Technische Universität Dresden
Source Sets: Hochschulschriftenserver (HSSS) der SLUB Dresden
Language: German
Detected Language: English
Type: info:eu-repo/semantics/acceptedVersion, doc-type:article, info:eu-repo/semantics/article, doc-type:Text
Rights: info:eu-repo/semantics/openAccess
Relation: 23-0122
