
A Framework to Handle Uncertainties of Machine Learning Models in Compliance with ISO 26262

Assuring safety, and thereby achieving certification, is a key challenge for
many kinds of Machine Learning (ML) models. ML is one of the most
widely used technological solutions for automating complex tasks such as
autonomous driving, traffic sign recognition, and lane keep assist. While the
application of ML is making significant contributions in the automotive
industry, it introduces concerns related to the safety and security of these
systems. ML models should be robust and reliable throughout, and prove
their trustworthiness in all use cases associated with vehicle operation.
Establishing confidence in the safety and security of ML-based systems, and
thereby giving assurance to regulators, certification authorities, and
other stakeholders, is an important task. This paper proposes a framework
to handle uncertainties of ML models to improve the safety level and
thereby certify ML models in the automotive industry.

Identifier: oai:union.ndltd.org:BRADFORD/oai:bradscholars.brad.ac.uk:10454/18707
Date: 10 December 2021
Creators: Vasudevan, Vinod; Abdullatif, Amr R.A.; Kabir, Sohag; Campean, Felician
Source Sets: Bradford Scholars
Language: English
Detected Language: English
Type: Book chapter, Accepted manuscript
Rights: (c) 2022 Springer Cham. Full-text reproduced in accordance with the publisher's self-archiving policy.
