BS EN 16803-2:2020

$215.11

Space. Use of GNSS-based positioning for road Intelligent Transport Systems (ITS) – Assessment of basic performances of GNSS-based positioning terminals

Published By: BSI
Publication Date: 2020
Number of Pages: 90

Like the other documents of the series, this document deals with the use of GNSS-based positioning terminals (GBPT) in road Intelligent Transport Systems (ITS). GNSS-based positioning means that the system providing position data, more precisely Position, Velocity and Time (PVT) data, comprises at least a GNSS receiver and, potentially, additional sensors or sources of information whose data can be hybridized with the GNSS data to improve performance.
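For illustration only, the following Python sketch shows what a single PVT output of a GBPT might look like as a data structure; all field names here are hypothetical assumptions of this sketch and are not defined by the EN 16803 series.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PVTRecord:
    """One Position, Velocity and Time (PVT) output of a GBPT (illustrative)."""
    gnss_time_s: float         # timestamp of the fix, seconds of GNSS time
    latitude_deg: float        # geodetic latitude, degrees
    longitude_deg: float       # geodetic longitude, degrees
    altitude_m: float          # height above the ellipsoid, metres
    speed_ms: float            # speed over ground, m/s
    course_deg: float          # course over ground, degrees
    protection_level_m: Optional[float] = None  # horizontal protection level, if the terminal outputs one
    sources: Tuple[str, ...] = ("GNSS",)        # e.g. ("GNSS", "IMU") for a hybridized terminal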

This new document proposes testing procedures, based on the replay of data recorded during field tests, to assess the basic performances of any GBPT for a given use case described by an operational scenario. These tests address the basic performance features Availability, Continuity, Accuracy and Integrity of the PVT information, as well as the Time-To-First-Fix (TTFF) performance feature, as described in EN 16803-1, assuming that no security attack affects the SIS during operation. This document does not cover assessment tests of the timing performances other than TTFF, which do not need field data and can preferably be executed in the laboratory with current instruments.
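As a purely illustrative sketch of how two of these performance features can be quantified, the Python functions below compute a simplified horizontal accuracy statistic and an availability ratio from DUT fixes compared against a reference trajectory. The exact metric definitions are those of Clause 5 of this document and EN 16803-1; the function names and the equirectangular distance approximation are assumptions of this sketch.

import math

def horizontal_error_m(lat_dut, lon_dut, lat_ref, lon_ref):
    """Approximate horizontal distance in metres between a DUT fix and the
    reference trajectory point at the same epoch (equirectangular
    approximation, adequate for errors of a few metres)."""
    r = 6371000.0  # mean Earth radius, metres
    dlat = math.radians(lat_ref - lat_dut)
    dlon = math.radians(lon_ref - lon_dut) * math.cos(math.radians((lat_dut + lat_ref) / 2))
    return r * math.hypot(dlat, dlon)

def accuracy_95th_percentile_m(errors):
    """95th-percentile horizontal position error over all epochs with a fix."""
    ranked = sorted(errors)
    return ranked[min(len(ranked) - 1, int(0.95 * len(ranked)))]

def availability_ratio(epochs_with_valid_fix, epochs_total):
    """Fraction of epochs for which the DUT delivered a valid PVT output."""
    return epochs_with_valid_fix / epochs_total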

“Record and Replay” (R&R) tests consist of replaying, in a laboratory environment, GNSS SIS data, and potentially additional sensor data, recorded under specific operational conditions using a dedicated test vehicle. The data set comprising the GNSS SIS data and any sensor data resulting from these field tests, together with the corresponding metadata description file, is called a “test scenario”. A data set is composed of several data files.
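To make the notion of a test scenario concrete, the sketch below shows what a metadata description file might contain alongside the data files. This structure and every field name in it are hypothetical; the actual metadata schema is the one standardized by this document.

# Hypothetical metadata description of a test scenario; the real schema
# is defined by EN 16803-2, not by this sketch.
test_scenario_metadata = {
    "scenario_id": "urban-peak-hours",        # illustrative identifier
    "environment": "urban",                   # operating environment of the recording
    "vehicle": "passenger car",
    "recording_date": "2020-01-01",
    "gnss_constellations": ["GPS", "Galileo"],
    "data_files": [
        {"type": "GNSS SIS recording", "file": "sis_rf.bin"},
        {"type": "reference trajectory", "file": "ref_trajectory.csv"},
    ],
}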

This EN 16803-2 addresses the “Replay” part of the test scenario data set. It does not address the “Record” part, although it describes the whole R&R process for information. The “Record” part will be covered by EN 16803-4, currently under preparation.

Although the EN 16803 series concerns GNSS-based positioning terminals and not only GNSS receivers, the present release of this document addresses only the replay process for GNSS-only terminals. The reason is that replaying additional sensor data in the laboratory, especially when these sensors capture the vehicle’s motion, is generally very complex and not yet mature enough to be standardized: it would require open standardized interfaces in the GBPT as well as standardized sensor error models. However, the procedure described in the present EN has been designed so that it can be extended in the future to GBPT hybridizing GNSS and vehicle sensors.

This EN 16803-2 does not address R&R tests when specific radio frequency signals simulating security attacks are added to the SIS. This case is specifically the topic of EN 16803-3.

Once standardized assessment test procedures have been established, minimum performance requirements can be set for various intelligent transport applications. It makes sense, however, to separate the assessment tests from the minimum performance requirements, because the same test procedure may be applicable to many applications, while the minimum performance requirements typically vary from one application to another. This document therefore does not set minimum performance requirements for any application.

PDF Catalog

PDF Pages PDF Title
10 1 Scope
2 Normative references
11 3 Terms and definitions
3.1 Definitions
12 3.2 Acronyms
13 4 Overview of the whole assessment process
4.1 Definition of the general strategy: what kind of tests
4.1.1 Rationale
14 4.1.2 Record and Replay choice
15 4.2 Construction of the operational scenarios: how to configure the tests
4.2.1 General
4.2.2 Basic principles
4.2.2.1 Unique data collection for all the metrics
16 4.2.2.2 Particular case of the integrity risk
4.2.2.3 Same data collection for a flexible list of road applications
17 4.2.3 Definition of the operational scenarios
4.2.3.1 General
4.2.3.2 Standardized definition of an operational scenario
22 4.3 Definition of the test facilities: which equipment to use
4.3.1 For the record phase
23 4.3.2 For the replay phase
4.4 Description of the record phase: how to elaborate the data sets of the test scenarios
4.4.1 General
4.4.2 Test plan
24 4.4.3 Test bench preparation and good functioning verification
4.4.4 Field test execution
4.4.5 Data control and archiving
4.4.5.1 General
4.4.5.2 Field test data archiving
4.4.5.3 Reference trajectory elaboration
4.4.5.4 Segmentation of the data with respect to the environment
25 4.4.5.5 Validation of data by replay on the Benchmark receiver
26 4.5 Replay phase: assessing the DUT performances
5 Definition of the metrics
5.1 General considerations
27 5.2 Basic notation
5.3 Time interpolation procedure
28 5.4 Accuracy metrics
29 5.5 Availability and Continuity metrics
34 5.6 Integrity metrics
5.6.1 Definition of the Protection Level performance metrics
35 5.6.2 Definition of the Misleading Information Rate metrics
36 5.7 Timing metrics
5.7.1 Timestamp resolution
5.7.2 Nominal output latency
5.7.3 Nominal output rate
5.7.4 Output latency stability
37 5.7.5 Output rate stability
38 5.7.6 Time to first fix
39 6 Description of the replay phase: how to assess the DUT performances
6.1 General
6.2 Checking of the content of the test scenario
40 6.3 Setting-up of the replay test-bench
41 6.4 Validation of the data processing HW and SW by the RF test laboratory
6.5 Replaying of the data
44 6.6 Computation of the ACAI performances
6.7 Computation of the TTFF performances
49 6.8 Establishment of the final test report
7 Definition of the validation procedures: how to be sure of the results (checks)
7.1 Definition of the validation
51 7.2 Pass/Fail criteria for the verification of the test procedures
52 8 Definition of the synthesis report: how to report the results of the tests
60 Annex A (informative) Homologation framework
A.1 The road value chain
61 A.2 Roles of the different stakeholders
62 A.3 Responsibilities of the different stakeholders
64 Annex B (informative) Detailed criteria for the testing strategy (trade-off)
B.1 Main criteria for testing strategy
B.2 Metrological quality
B.2.1 Reproducibility
65 B.2.2 Representativeness
B.2.3 Reliability
B.3 Cost efficiency
B.3.1 Cost of test benches
66 B.3.2 Cost of the test operations
B.4 Clarity in the sharing of responsibilities
B.5 Scenario-management authority
68 Annex C (informative) Record and replay testing considerations
C.1 General
C.2 Experimentation considerations
70 C.3 Equipment justification
C.3.1 Equipment for in-field data collection
73 C.3.2 Record and Replay Solutions
75 C.3.3 Recommended equipment
76 C.4 Presentation of a scenario: rush time in Toulouse
78 C.5 Quality of the reference trajectory
79 C.6 Availability, regularity of the DUT’s outputs for the metrics computations
81 Annex D (informative) Perspectives on record and replay of hybridized GBPT
86 Annex E (informative) Considerations on coordinate systems, reference frames and projections