1

A machine learning framework for prediction of Diagnostic Trouble Codes in automobiles

Kopuru, Mohan 01 May 2020
Predictive maintenance is an important answer to rising maintenance costs in industry. With the advent of intelligent computing and the availability of data, predictive maintenance is seen as a way to predict and prevent faults in different types of machines. This thesis presents a detailed methodology, based on a Convolutional Neural Network architecture, for predicting the occurrence of the critical Diagnostic Trouble Codes (DTCs) observed in a vehicle, so that the necessary maintenance actions can be taken before the fault occurs.
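The abstract only names the architecture, but the core idea of sliding a convolutional filter over an encoded DTC sequence can be sketched in miniature. Everything below (the toy DTC vocabulary, filter weights, window size) is illustrative and not taken from the thesis:

```python
# Minimal 1D-convolution forward pass over a one-hot DTC sequence.
# Vocabulary, weights, and window size are all illustrative.
import math

VOCAB = ["P0101", "P0420", "U0100", "C1234"]  # toy DTC vocabulary

def one_hot(seq):
    """Encode a DTC sequence as a list of one-hot vectors."""
    return [[1.0 if code == v else 0.0 for v in VOCAB] for code in seq]

def conv1d(x, kernel):
    """Valid 1D convolution; kernel is a window-by-vocab weight matrix."""
    k = len(kernel)
    out = []
    for i in range(len(x) - k + 1):
        s = sum(kernel[j][c] * x[i + j][c]
                for j in range(k) for c in range(len(VOCAB)))
        out.append(max(0.0, s))  # ReLU activation
    return out

def predict_fault(seq, kernel, w, b):
    """Max-pool the feature map, then a logistic output unit."""
    feats = conv1d(one_hot(seq), kernel)
    pooled = max(feats) if feats else 0.0
    return 1.0 / (1.0 + math.exp(-(w * pooled + b)))
```

A real model would learn the kernel weights by gradient descent over many filters and channels; this sketch only shows the forward pass that turns a DTC sequence into a fault probability.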
2

Finding Patterns in Vehicle Diagnostic Trouble Codes : A data mining study applying associative classification

Fransson, Moa, Fåhraeus, Lisa January 2015
In Scania vehicles, Diagnostic Trouble Codes (DTCs) are collected while driving and later loaded into a central database when the vehicle visits a workshop. These DTCs are used statistically to analyse vehicle health, which is why correctness of the data is desirable. In workshops, however, DTCs can also be triggered by repair work and tests, and they are nevertheless loaded into the database without any notification. To perform an accurate analysis of vehicle health, it would be desirable to find and remove such DTCs. This thesis examines whether this is possible by searching for patterns in DTCs that indicate whether a DTC was generated in a workshop or not. An Associative Classification method was used to categorise the data, chosen for its easily interpretable outcome. The classifier was built using well-known algorithms, and two classification algorithms were then developed to fit the data structure when labelling new data. The final classifier achieved an accuracy above 80 percent, with no distinctive difference between the two algorithms. However, barely 50 percent of all workshop DTCs were found. The conclusion is that either patterns occur in only 50 percent of workshop DTCs, or the classifier can only detect 50 percent of them. The patterns found confirmed previous knowledge regarding workshop-generated DTCs and provided Scania with new information.
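Associative classification mines association rules from labelled transactions and then classifies new items by the rules that match them. A minimal sketch of that idea, with toy DTC codes, thresholds, and a "workshop"/"road" labelling that are illustrative only (the thesis's actual rule-mining algorithms are not reproduced here):

```python
# Toy associative classifier: mine single- and pair-itemset rules from
# labelled DTC transactions, then label new transactions by the most
# confident matching rule. Data and thresholds are illustrative.
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=2, min_conf=0.6):
    """Return {frozenset(dtcs): (label, confidence)} rules that meet
    the minimum support and confidence thresholds."""
    counts, label_counts = Counter(), Counter()
    for dtcs, label in transactions:
        for r in (1, 2):
            for combo in combinations(sorted(dtcs), r):
                counts[frozenset(combo)] += 1
                label_counts[(frozenset(combo), label)] += 1
    rules = {}
    for (items, label), n in label_counts.items():
        conf = n / counts[items]
        if counts[items] >= min_support and conf >= min_conf:
            if items not in rules or conf > rules[items][1]:
                rules[items] = (label, conf)
    return rules

def classify(dtcs, rules, default="road"):
    """Apply the most confident rule whose itemset is present."""
    best = max((rule for items, rule in rules.items()
                if items <= set(dtcs)), key=lambda r: r[1], default=None)
    return best[0] if best else default
```

The interpretability the abstract mentions comes out directly here: each mined rule is a readable statement such as "{W1} implies workshop with confidence 1.0".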
3

Transformer decoder as a method to predict diagnostic trouble codes in heavy commercial vehicles / Transformer decoder som en metod för att förutspå felkoder i tunga fordon

Poljo, Haris January 2021
Diagnostic trouble codes (DTCs) have traditionally been used by mechanics to figure out what is wrong with a vehicle. A vehicle generates a DTC when a specific condition in the vehicle is met; this condition has been defined by an engineer and represents some fault that has occurred. The intuition is therefore that DTCs contain useful information about the health of the vehicle. Due to the sequential ordering of DTCs and the high number of unique values, this modality of data has characteristics that resemble those of natural language. This thesis investigates whether an algorithm that has shown promise in the field of Natural Language Processing can be applied to sequences of DTCs. More specifically, a deep learning model, the transformer decoder, is compared to a baseline n-gram model in terms of how well they estimate the probability distribution of the next DTC conditioned on previously seen DTCs. Estimating such a distribution could be useful for manufacturers of heavy commercial vehicles such as Scania when creating systems that help them ensure a high uptime of their vehicles. The algorithms were compared by first performing a hyperparameter search for both and then comparing the resulting models using the 5x2 cross-validation paired t-test. Three metrics were evaluated: perplexity, Top-1 accuracy, and Top-5 accuracy. There was a significant difference in the performance of the two models, with the transformer decoder being the better method on the metrics used: it achieved a perplexity of 22.1, a Top-1 accuracy of 37.5%, and a Top-5 accuracy of 59.1%, against the n-gram model's perplexity of 37.6, Top-1 accuracy of 7.5%, and Top-5 accuracy of 30%.
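The n-gram baseline and the perplexity metric from this abstract are standard language-modelling tools and can be sketched concretely. The bigram order, add-one smoothing, and toy DTC data below are illustrative choices, not the thesis's actual configuration:

```python
# Toy bigram model over DTC codes with add-one (Laplace) smoothing,
# evaluated by perplexity. Data and smoothing choice are illustrative.
import math
from collections import Counter

def train_bigram(sequences):
    """Count unigrams (as bigram left contexts) and bigrams."""
    uni, bi = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            uni[a] += 1
            bi[(a, b)] += 1
    return uni, bi

def prob(a, b, uni, bi, vocab_size):
    """P(next=b | prev=a) with add-one smoothing."""
    return (bi[(a, b)] + 1) / (uni[a] + vocab_size)

def perplexity(seq, uni, bi, vocab_size):
    """exp of the average negative log-likelihood of next-DTC predictions:
    lower means the model predicts the sequence better."""
    nll = sum(-math.log(prob(a, b, uni, bi, vocab_size))
              for a, b in zip(seq, seq[1:]))
    return math.exp(nll / max(1, len(seq) - 1))
```

Perplexity is what makes the two models in the abstract directly comparable: a transformer decoder trained on the same sequences is scored with the same exponentiated average negative log-likelihood, just with its own estimate in place of the smoothed bigram counts.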
