1

Algorithm to enable intelligent rail break detection

Bhaduri, Sreyoshi 04 February 2014 (has links)
A wavelet-intensity-based algorithm developed previously at Virginia Tech has been extended and paired with an SVM-based classifier. The wavelet intensity algorithm acts as a feature extractor: the wavelet transform is an effective tool because it isolates transient, high-frequency events and localizes them precisely in time. According to prior work in signal processing, the local regularity of a signal can be estimated by a Lipschitz exponent at each time step, and the local Lipschitz exponent can then be used to generate the wavelet intensity factor values. Each vertical acceleration value, corresponding to a specific location on the track, therefore has a corresponding intensity factor. The intensity factor carries break/no-break information and can be used as a feature to classify the vertical acceleration as a fault or no fault. A support vector machine (SVM) performs this binary classification. SVM was chosen because it is a well-studied method with efficient implementations available. Using an SVM rather than a hard threshold on the data is expected to classify better without appreciably increasing the complexity of the system. / Master of Science
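The feature-extraction step described above can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the Haar-style detail computation and the cross-scale boost standing in for the Lipschitz-exponent estimate are assumptions, and all function names are hypothetical.

```python
def haar_detail(signal, scale):
    """Crude Haar-wavelet detail coefficients at one dyadic scale:
    difference of the right and left half-window sums."""
    half = scale // 2
    out = []
    for i in range(len(signal) - scale + 1):
        left = sum(signal[i:i + half])
        right = sum(signal[i + half:i + scale])
        out.append((right - left) / scale)
    return out

def intensity_factors(signal, scales=(2, 4)):
    """Illustrative wavelet-intensity feature per sample: magnitude of the
    finest-scale detail, boosted when the coefficient persists at a coarser
    scale -- a rough proxy for a small local Lipschitz exponent, i.e. a
    sharp transient such as a rail break signature."""
    fine = haar_detail(signal, scales[0])
    coarse = haar_detail(signal, scales[1])
    n = min(len(fine), len(coarse))
    return [abs(fine[i]) * (1.0 + abs(coarse[i])) for i in range(n)]
```

On a simulated acceleration trace with a single step discontinuity, the intensity factor peaks at the sample just before the jump, which is exactly the property the classifier then exploits.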
2

Regression discontinuity design with unknown cutoff: cutoff detection & effect estimation

Khan Tanu, Tanvir Ahmed 27 August 2020 (has links)
Regression discontinuity designs are increasingly popular quasi-experimental research designs among applied econometricians who wish to make causal inferences about the local effect of a treatment, intervention, or policy. They are also widely used in the social, behavioral, and natural sciences. Much of the existing literature relies on the assumption that the discontinuity point, or cutoff, is known a priori, which may not always hold. This thesis seeks to extend the applicability of regression discontinuity designs by proposing a new approach to detecting an unknown discontinuity point using structural-break detection and machine learning methods. The approach is evaluated on both simulated and real data. Estimation and inference based on the estimated cutoff are compared to the counterfactual scenario in which the cutoff is known. Monte Carlo simulations show that the empirical false-detection and true-detection probabilities of the proposed procedure are generally satisfactory. Finally, the approach is illustrated with an empirical application. / Graduate
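The cutoff-detection idea can be sketched as a grid search for the candidate point with the largest jump in the local mean of the outcome. This is a crude stand-in, under invented names and a fixed bandwidth, for the structural-break and machine-learning detectors the thesis actually develops.

```python
def detect_cutoff(x, y, candidates, bandwidth):
    """Scan candidate cutoffs c; for each, compare the mean outcome in the
    window just left of c with the window just right of c, and return the
    candidate with the largest jump along with that jump's size."""
    best_c, best_jump = None, 0.0
    for c in candidates:
        left = [yi for xi, yi in zip(x, y) if c - bandwidth <= xi < c]
        right = [yi for xi, yi in zip(x, y) if c <= xi < c + bandwidth]
        if len(left) < 2 or len(right) < 2:
            continue  # not enough local observations to compare
        jump = abs(sum(right) / len(right) - sum(left) / len(left))
        if jump > best_jump:
            best_c, best_jump = c, jump
    return best_c, best_jump
```

Once a cutoff estimate is in hand, effect estimation proceeds as in a standard sharp RDD, which is the comparison against the known-cutoff counterfactual that the thesis studies.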
3

Integrated Coarse to Fine and Shot Break Detection Approach for Fast and Efficient Registration of Aerial Image Sequences

Jackovitz, Kevin S. 22 May 2013 (has links)
No description available.
4

Devising a Trend-break-detection Algorithm of stored Key Performance Indicators for Telecom Equipment / Utformning av trendbrytningsalgoritm av lagrade nyckelindikatorer för telekomutrustning

Hededal Klincov, Lazar, Symeri, Ali January 2017 (has links)
A prevalent problem for testers at Ericsson is that performance test results are continuously generated but not analyzed. The delay between a problem occurring and the testers learning of it is long and variable, because the manual analysis of log files is time consuming and tedious. The requested solution is automation: an algorithm that analyzes the performance data and issues a notification when problems occur. A binary-classifier algorithm based on statistical methods was developed and evaluated as a solution to the stated problem. Evaluated on simulated data, the algorithm detected trend breaks with an accuracy of 97.54%. Furthermore, a correlation analysis was carried out between performance and hardware to gain insight into how hardware configurations affect test runs.
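A minimal statistical binary classifier in this spirit flags a KPI sample as a trend break when it deviates from a rolling baseline by more than k standard deviations. The window length and threshold below are illustrative assumptions, not the parameters the thesis evaluated.

```python
import statistics

def flag_trend_breaks(kpi, window=10, k=3.0):
    """Label each KPI sample True (trend break) when it falls more than
    k standard deviations from the mean of the preceding window."""
    flags = [False] * len(kpi)
    for i in range(window, len(kpi)):
        base = kpi[i - window:i]
        mu = statistics.mean(base)
        sigma = statistics.pstdev(base) or 1e-9  # guard against a flat baseline
        flags[i] = abs(kpi[i] - mu) > k * sigma
    return flags
```

In an automated pipeline this would run over each stored KPI series after a test run and raise a notification for any flagged index, replacing the manual log inspection described above.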
5

Three Essays in Economics

Daniel G Kebede (16652025) 03 August 2023 (has links)
The overall theme of my dissertation is applying frontier econometric models to interesting economic problems. The first chapter shows how the analysis of individual consumption responses to permanent and transitory income shocks is limited by model misspecification and by the availability of data. The misspecification arises from ignoring unemployment risk while estimating income shocks. I employ the Heckman two-step regression model to estimate income shocks consistently. Moreover, to deal with data sparsity, I propose identifying the partial consumption insurance and the income and consumption volatility heterogeneities at the household level using the Least Absolute Shrinkage and Selection Operator (LASSO). Using PSID data, I estimate partial consumption insurance against permanent shocks of 63% and 49% for white and black household heads, respectively; white and black household heads self-insure against 100% and 90% of transitory income shocks, respectively. I also find that income and consumption volatilities and the partial consumption insurance parameters vary across time. In the second chapter I recast the smooth structural break test proposed by Chen and Hong (2012) in a predictive regression setting. The regressors are characterized using the local-to-non-stationarity framework. I conduct a Monte Carlo experiment to evaluate the finite-sample performance of the test statistic and examine an empirical example to demonstrate its practical application. The Monte Carlo simulations show that the test statistic has better power and size than the popular SupF and LM tests. Empirically, compared to SupF and LM, the test statistic rejects the null hypothesis of no structural break more frequently when a structural break is actually present in the data. The third chapter is a collaboration with James Reeder III. We study the effects of using promotions to drive public-policy diffusion in regions with polarized political beliefs.
We estimate a model that allows for heterogeneous effects at the county level, based upon state-level promotional offerings to drive vaccine adoption during COVID-19. Central to our empirical application is accounting for the endogenous action of state-level agents in generating promotional schemes. To address this challenge, we synthesize various sources of data at the county level and leverage advances in both the Bass diffusion model and machine learning. Studying vaccination rates at the county level within the United States, we find evidence that the use of promotions actually reduced the overall rates of vaccine adoption, a stark difference from other studies examining more localized vaccination rates. The negative average effect is driven primarily by the large number of counties described as Republican-leaning based upon their voting record in the 2020 election. This result stands even after directly accounting for the population's vaccine hesitancy. Thus, our analysis suggests that in the polarized setting of the United States electorate, more localized policies on contentious topics may yield better outcomes than broad, state-level dictates.
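For context on the second chapter's benchmark, a bare-bones SupF-style statistic for a single mean shift at an unknown date, the classical alternative against which smooth-break tests such as Chen and Hong's are compared, can be sketched as follows. The trimming fraction and degrees-of-freedom convention are assumptions of this sketch, not the chapter's specification.

```python
def sup_f(y, trim=0.15):
    """Sup-F statistic for one mean shift at an unknown date: scan interior
    break dates, compute the F statistic comparing a one-mean fit against a
    two-mean fit, and keep the maximum over all candidate dates."""
    n = len(y)
    ybar = sum(y) / n
    ssr0 = sum((v - ybar) ** 2 for v in y)       # restricted (no break) SSR
    best = 0.0
    for k in range(int(n * trim), int(n * (1 - trim))):
        m1 = sum(y[:k]) / k
        m2 = sum(y[k:]) / (n - k)
        ssr1 = (sum((v - m1) ** 2 for v in y[:k])
                + sum((v - m2) ** 2 for v in y[k:]))
        if ssr1 <= 0:
            continue  # perfect fit; skip the degenerate F ratio
        best = max(best, (ssr0 - ssr1) / (ssr1 / (n - 2)))
    return best
```

A series with a genuine level shift produces a sup-F orders of magnitude larger than a stable series, which is the size/power contrast the chapter's Monte Carlo experiments quantify for its own statistic.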
6

Analyse et optimisation de la fiabilité d'un équipement opto-électrique équipé de HUMS / Analysis and optimization of the reliability of an opto-electronic equipment with HUMS

Baysse, Camille 07 November 2013 (has links)
As part of reliability optimization, Thales Optronique now integrates into its equipment systems that monitor its operating state. This function is performed by a HUMS (Health & Usage Monitoring System). The aim of this thesis is to implement in the HUMS a program able to assess the state of the system, detect operating drifts, optimize maintenance operations, and evaluate the risk of mission failure, by combining the processing of operational data (collected on each device by the HUMS) with predictive data (derived from reliability analyses and from the costs of maintenance, repair, and downtime). Three algorithms were developed. The first, based on a hidden Markov model, estimates the state of the system at each instant from operational data and thereby detects a degraded operating mode of the equipment (diagnosis). The second algorithm proposes an optimal, dynamic maintenance strategy: it seeks the best time to perform maintenance given the estimated state of the equipment. This algorithm relies on modeling the system as a piecewise deterministic Markov process (PDMP) and on the principle of optimal stopping; the maintenance date is determined from the operational data, the predictive data, and the estimated state of the system (prognosis).
The third algorithm determines the risk of mission failure and allows the risks incurred under each candidate maintenance policy to be compared. This research, built on sophisticated tools from theoretical and numerical probability, defines a maintenance protocol conditional on the estimated state of the system, in order to improve the maintenance strategy, increase equipment availability at the best cost, raise customer satisfaction, and reduce operating costs.
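The diagnostic step of the first algorithm, estimating the hidden state from observations, can be illustrated with a standard HMM forward filter. The two-state transition and emission probabilities below are invented for illustration, not taken from the thesis.

```python
def hmm_filter(obs, trans, emit, init):
    """Forward-algorithm filtering: after each observation, return the
    normalized posterior probability of every hidden state given the
    observations seen so far."""
    n_states = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n_states)]
    norm = sum(alpha)
    posteriors = [[a / norm for a in alpha]]
    for o in obs[1:]:
        prev = posteriors[-1]
        # predict through the transition matrix, then weight by the emission
        alpha = [emit[s][o] * sum(prev[r] * trans[r][s] for r in range(n_states))
                 for s in range(n_states)]
        norm = sum(alpha)
        posteriors.append([a / norm for a in alpha])
    return posteriors
```

With state 0 as "nominal" and state 1 as an absorbing "degraded" mode, a run of alarm-like observations drives the posterior probability of the degraded state close to one, which is the kind of evidence a HUMS diagnostic would act on.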
