21

Exploration of Non-Linear and Non-Stationary Approaches to Statistical Seasonal Forecasting in the Sahel

Gado Djibo, Abdouramane. January 2016
Water resources management in the Sahel region of West Africa is extremely difficult because of high inter-annual rainfall variability and a general reduction of water availability in the region. Observed changes in streamflow directly disturb key socioeconomic activities such as agriculture, which constitutes one of the main survival pillars of the West African population. Seasonal rainfall forecasting is considered one possible way to increase resilience to climate variability, by providing advance information about the amount of rainfall expected in each upcoming rainy season. Likewise, reliable information about streamflow magnitude a few months before a rainy season would immensely benefit water users who want to plan their activities. Since the 1990s, several studies have attempted to evaluate the predictability of Sahelian weather characteristics and to develop seasonal rainfall and streamflow forecast models that help stakeholders make better decisions. Two decades later, however, forecasting remains difficult and forecasts have limited value for decision-making; the low performance is believed to stem from the limits of the predictors and forecast approaches commonly used for this region.

In this study, new seasonal forecasting approaches are developed and new predictors tested in an attempt to predict seasonal rainfall over the Sirba watershed, located between Niger and Burkina Faso in West Africa. Using combined statistical methods, a pool of 84 predictors physically linked to the West African monsoon and its dynamics was selected, together with their optimal lag times. The pool was first reduced by screening on linear correlation with satellite rainfall over West Africa. Correlation analysis and principal component analysis were then used to keep the most predictive principal components, and linear regression produced synthetic forecasts whose skill served to rank the tested predictors. The three best predictors, air temperature (Pacific Tropical North), sea level pressure (Atlantic Tropical South) and relative humidity (Mediterranean East), were retained as inputs for the seasonal rainfall forecasting models. This thesis departs from the stationarity and linearity assumptions used in most seasonal forecasting methods:
1. Two probabilistic non-stationary methods based on change-point detection were developed and tested, each using one of the three best predictors. Model M1 allows model parameters to change according to annual rainfall magnitude, while M2 allows them to change with time. M1 and M2 were compared to the classical linear model with constant parameters (M3) and to the linear model with climatology (M4). The model allowing changes in the predictand-predictor relationship according to rainfall amplitude (M1), with air temperature as predictor, was the best model for seasonal rainfall forecasting in the study area.
2. Non-linear models, including regression trees, feed-forward neural networks and non-linear principal component analysis, were implemented and tested with the same predictors. Forecast performances were compared using coefficients of determination, Nash-Sutcliffe coefficients and hit rate scores. Non-linear principal component analysis was the best non-linear model (R2: 0.46; Nash: 0.45; HIT: 60.7), while the feed-forward neural network and regression tree models performed poorly.

All the developed rainfall forecasting methods were subsequently used to forecast seasonal annual mean streamflow and maximum monthly streamflow by feeding the forecasted rainfall into a SWAT model of the Sirba watershed. The results are summarized as follows:
1. Non-stationary models: M1 and M2 were compared to M3 and M4. Model M3, using relative humidity as predictor at a lag time of 8 months, was the best method for seasonal annual mean streamflow forecasting, whereas model M1, using air temperature at a lag time of 4 months, was the best model for maximum monthly streamflow in the Sirba watershed. The calibrated SWAT model achieved a Nash value of 0.83.
2. Non-linear models: the seasonal rainfall obtained from the non-linear principal component analysis model was disaggregated into daily rainfall using the method of fragments, then fed into the SWAT hydrological model to produce streamflow. This forecast was fairly acceptable, with a Nash value of 0.58.

The level of risk associated with each seasonal forecast was evaluated with a simple risk measure: the probability of overtopping of the flood protection dykes in Niamey, Niger. A HEC-RAS hydrodynamic model of the Niger River around Niamey was developed for the 1980-2014 period, and a copula analysis was used to model the dependence structure of streamflows and predict the distribution of streamflow in Niamey given the predicted streamflow on the Sirba watershed. The overtopping probabilities of the flood protection dykes were then estimated for each year of the 1980-2014 period. The findings of this study can serve as a guideline for improving seasonal forecasting performance in the Sahel, and the research clearly confirmed the possibility of seasonal rainfall and streamflow forecasting in the Sirba watershed using potential predictors other than sea surface temperature.
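A minimal sketch of the screening pipeline described above (correlation screening, principal component reduction, then a linear synthetic forecast), on synthetic stand-in data; the variable names and sizes are illustrative assumptions, not those of the thesis.

```python
# Hedged sketch: correlation screening + PCA + linear synthetic forecast.
# All data are random placeholders standing in for the 84 lagged predictors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_years, n_predictors = 30, 84
X = rng.normal(size=(n_years, n_predictors))  # candidate predictors at their lags
y = rng.normal(size=n_years)                  # seasonal rainfall index

# 1. Screening: keep the predictors most linearly correlated with rainfall.
corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_predictors)])
kept = np.argsort(-np.abs(corr))[:10]

# 2. PCA on the screened predictors; retain the leading components.
pcs = PCA(n_components=3).fit_transform(X[:, kept])

# 3. Linear regression gives the synthetic forecast; leave-one-out error
#    is one way to rank competing predictor sets.
score = cross_val_score(LinearRegression(), pcs, y, cv=LeaveOneOut(),
                        scoring="neg_mean_squared_error").mean()
print("leave-one-out MSE:", -score)
```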
22

Change-point detection and kernel methods

Garreau, Damien. 12 October 2017
In this thesis, we focus on a method for detecting abrupt changes in a sequence of independent observations belonging to an arbitrary set on which a positive semi-definite kernel is defined. That method, kernel change-point detection, is a kernelized version of a penalized least-squares procedure. Our main contribution is to show that, for any kernel satisfying some reasonably mild hypotheses, this procedure outputs a segmentation close to the true segmentation with high probability. This result is obtained under a boundedness assumption on the kernel, for a linear penalty and for another penalty function coming from model selection. The proofs rely on a concentration result for bounded random variables in Hilbert spaces, and we prove a weaker version of this result under relaxed hypotheses, namely a finite variance assumption. In the asymptotic setting, we show that we recover the usual minimax rate for the change-point locations without additional hypotheses on the segment sizes. These theoretical results are supported by simulations.

Another contribution of this thesis is a detailed presentation of the different notions of distance between segmentations. In particular, we prove that these notions coincide for sufficiently close segmentations. From a practical point of view, we demonstrate that the so-called dimension jump heuristic is a reasonable choice of penalty constant when using kernel change-point detection with a linear penalty. We also show how a key quantity depending on the kernel, which appears in our theoretical results, influences the performance of kernel change-point detection in the case of a single change-point. When the kernel is translation-invariant and parametric assumptions are made, this quantity can be computed in closed form. Thanks to these computations, some of them novel, we are able to study precisely the behavior of the maximal penalty constant. Finally, we study the median heuristic, a popular tool for setting the bandwidth of radial basis function kernels. For a large sample size, we show that it behaves approximately as the median of a distribution that we describe completely in the settings of the kernel two-sample test and kernel change-point detection. More precisely, we show that the median heuristic is asymptotically normal around this value.
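The median heuristic discussed in the last paragraph is simple enough to state in a few lines. The sketch below, on synthetic data, sets the RBF bandwidth to the median of the pairwise distances and builds the resulting Gram matrix; the kernel parametrization is one common convention, not necessarily the one used in the thesis.

```python
# Minimal sketch of the median heuristic for RBF kernels (synthetic data).
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 1, (100, 2)),    # segment before the change
                    rng.normal(3, 1, (100, 2))])   # segment after the change

bandwidth = np.median(pdist(X))        # median of pairwise distances
gamma = 1.0 / (2 * bandwidth ** 2)     # one common RBF parametrization

# Gram matrix of k(x, y) = exp(-gamma * ||x - y||^2), the input to kernel
# change-point detection or a kernel two-sample test.
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * sq_dists)
```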
23

Bootstrap in high dimensional spaces

Buzun, Nazar. 28 January 2021
The objective of this thesis is to explore theoretical properties of various bootstrap methods. We introduce convergence rates for the bootstrap procedure, defined as the difference between the real distribution of a statistic and its resampling approximation. In this work we analyze the distributions of the Euclidean norm of a sum of independent vectors, of the maximum of such a sum in high dimension, of the Wasserstein distance between empirical measures, and of Wasserstein barycenters. To prove bootstrap convergence we use the Gaussian approximation technique, which means that one has to find a sum of independent vectors in the considered statistic such that the bootstrap yields a resampling of this sum. This sum may then be approximated by a Gaussian distribution and compared with the resampling distribution through the difference between covariance matrices. In general it appears to be very difficult to reveal such a sum of independent vectors, because some statistics (for example, the MLE) do not have an explicit expression and may be infinite-dimensional. To handle this difficulty we use some novel results from statistical learning theory, which provide a finite-sample quadratic approximation of the likelihood and a suitable MLE representation. In the last chapter we consider the MLE of the Wasserstein barycenter model; the regularized barycenter model has bounded derivatives and satisfies the necessary conditions for quadratic approximation.

Furthermore, we apply the bootstrap to change-point detection methods. In the parametric case we analyze the likelihood ratio test (LRT) statistic, whose high values indicate changes of the parametric distribution in the data sequence. The maximum of the LRT has a complex distribution, but its quantiles may be calibrated by means of the bootstrap, and we show convergence rates of the bootstrap quantiles to the real quantiles of the LRT distribution. In the non-parametric case we use the Wasserstein distance between empirical measures instead of the LRT. We test the accuracy of the change-point detection methods on synthetic time series and electrocardiography (ECG) data; the ECG experiments illustrate the advantages of the non-parametric approach over complex parametric models and the LRT.
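As an illustration of the resampling idea for the maximum of a sum in high dimension, the sketch below calibrates the statistic max_j |sum_i X_ij| / sqrt(n) with a Gaussian multiplier bootstrap; the data, sizes and choice of multiplier weights are assumptions for illustration, not the thesis' exact construction.

```python
# Hedged sketch: multiplier bootstrap for the maximum of a high-dimensional sum.
import numpy as np

rng = np.random.default_rng(2)
n, d, B = 200, 500, 500
X = rng.normal(size=(n, d))

stat = np.abs(X.sum(axis=0)).max() / np.sqrt(n)   # statistic of interest

# Reweight the centered observations with i.i.d. N(0, 1) multipliers.
Xc = X - X.mean(axis=0)
boot = np.empty(B)
for b in range(B):
    w = rng.normal(size=n)
    boot[b] = np.abs((w[:, None] * Xc).sum(axis=0)).max() / np.sqrt(n)

print("statistic:", stat, " 95% bootstrap quantile:", np.quantile(boot, 0.95))
```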
24

Multiple change point detection – application to physiological signals.

Truong, Charles. 29 November 2018
This work addresses the problem of detecting multiple change points in (univariate or multivariate) physiological signals. Well-known examples of such signals include the electrocardiogram (ECG), the electroencephalogram (EEG), and inertial measurements (accelerations, angular velocities, etc.). The objective of this thesis is to provide change point detection algorithms that (i) can handle long signals, (ii) can be applied in a wide range of real-world scenarios, and (iii) can incorporate the knowledge of medical experts. In particular, a greater emphasis is placed on fully automatic procedures that can be used in daily clinical practice. To that end, robust detection methods as well as supervised calibration strategies are described, and a documented open-source Python package is released.

The first contribution of this thesis is a sub-optimal change point detection algorithm that can accommodate time complexity constraints while retaining most of the robustness of optimal procedures. This algorithm is sequential and alternates between two steps: a change point is estimated, then its contribution to the signal is projected out. In the context of mean shifts, asymptotic consistency of the estimated change points is obtained. We prove that this greedy strategy can easily be extended to other types of changes by using reproducing kernel Hilbert spaces. Thanks to this approach, physiological signals can be handled without strong assumptions on the generative model of the data. Experiments on real-world signals show that these greedy methods are more accurate than standard sub-optimal algorithms and faster than optimal algorithms.

The second contribution of this thesis consists of two supervised algorithms for automatic calibration. Both rely on labeled examples, which in our context are segmented signals. The first approach learns the smoothing parameter for the penalized detection of an unknown number of changes. The second procedure learns a non-parametric transformation of the representation space that improves detection performance. Both supervised procedures yield finely tuned detection algorithms able to replicate the segmentation strategy of an expert. Results show that these supervised algorithms outperform unsupervised ones, especially for physiological signals, where the notion of change heavily depends on the physiological phenomenon of interest.

All algorithmic contributions of this thesis are gathered in "ruptures", an open-source Python library available online. Thoroughly documented, "ruptures" also comes with a consistent interface for all methods.
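A short usage sketch of the released package follows; the calls below (a piecewise-constant test signal, a PELT search with an RBF cost, penalized prediction) follow the package's documented interface, though option names should be checked against the current documentation.

```python
# Usage sketch of the "ruptures" library on a synthetic signal.
import ruptures as rpt

# Piecewise-constant signal with 4 true change points and Gaussian noise.
signal, true_bkps = rpt.pw_constant(n_samples=500, n_features=3,
                                    n_bkps=4, noise_std=1.0)

# Penalized detection of an unknown number of changes with a kernel (RBF) cost.
algo = rpt.Pelt(model="rbf").fit(signal)
predicted_bkps = algo.predict(pen=10)

print("true:", true_bkps, "predicted:", predicted_bkps)
```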
25

Adaptive Measurement Strategies for Network Optimization and Control

Lindståhl, Simon. January 2023
The fifth generation network is rapidly becoming the new network standard, and its new technological capabilities are expected to enable a far wider variety of services compared to the fourth generation. To ensure that these services can co-exist and meet their standardized requirements, the network's resources must be provisioned, managed and reconfigured in a far more complex manner than before. As such, it is no longer sufficient to select a simple, static scheme for gathering the information needed to take decisions. Instead, it is necessary to trade off, adaptively with regard to network system dynamics, the cost of the measurements taken, in terms of power, CPU and bandwidth consumption, against the value their information brings. Orchestration is a wide field, and the way to quantify the value of a given measurement depends heavily on the problem studied. This thesis therefore addresses adaptive measurement schemes for a number of well-defined network optimization problems. It is presented as a compilation: after an introduction detailing the background, purpose, problem formulation, methodology and contributions of our work, each problem is presented separately through papers submitted to several conferences.

First, we study the problem of optimal spectrum access for low-priority services. We assume that the network manager has limited opportunities to measure the spectrum before assigning at most one resource block to the secondary service for transmission, and that each measurement has a known cost. We study this framework through the lens of multi-armed bandits with multiple arm pulls per decision, a framework we call predictive bandits. We analyze such bandits, derive a problem-specific lower bound on their regret, and design an algorithm that meets this bound asymptotically, studying both the case where measurements are perfect and the case where they carry noise of known magnitude. On a synthetic simulated problem, the algorithm performs considerably better than a simple benchmark strategy.

Secondly, we study a variation of admission control in which the controller must select one of multiple slices to admit a new service into. The agent does not initially know the resources available in the slices and must instead measure them, subject to noise. Mimicking three commonly used admission control strategies, we study this as a best arm identification problem, where one or several arms are "correct" (the arm the strategy would choose with full information). Through this framework, we analyze each strategy, derive sample complexity lower bounds, and design algorithms that meet these bounds. In simulations with synthetic data, our measurement algorithms vastly reduce the number of required measurements compared to uniform sampling strategies.

Finally, we study a network monitoring system in which the controller must detect sudden changes in system behavior, such as batch traffic arrivals or handovers, in order to take future action. We approach this through change point detection, but argue that the classical framework cannot capture physical-time aspects such as delay and measurement costs independently, and we present an alternative framework that decouples them, requiring more sophisticated monitoring agents. We show, both through theory and through simulation with synthetic data and data from a 5G testbed, that such adaptive schedules qualitatively and quantitatively improve on classical change point detection schemes in terms of measurement frequency, without losing classical optimality guarantees such as the bound on required measurements after the change.
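The decoupling of measurement cost from detection delay can be pictured with a toy detector. The sketch below is an assumption for illustration, not the thesis' algorithm: a Gaussian CUSUM recursion in which the monitoring agent widens the sampling interval while the statistic is small, so that measurement frequency adapts to the evidence of a change.

```python
# Illustrative sketch: CUSUM change detection with an adaptive sampling schedule.
import numpy as np

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0, 1, 500),      # pre-change behaviour
                       rng.normal(1.5, 1, 200)])   # post-change behaviour

mu0, mu1, h = 0.0, 1.5, 8.0   # pre/post-change means and alarm threshold

def llr(x):
    # Gaussian log-likelihood ratio for a unit-variance mean shift.
    return (mu1 - mu0) * (x - (mu0 + mu1) / 2)

t, S, n_meas = 0, 0.0, 0
while t < len(data):
    S = max(0.0, S + llr(data[t]))    # CUSUM recursion on the sampled point
    n_meas += 1
    if S > h:
        print(f"alarm at t={t} after {n_meas} measurements")
        break
    # Adaptive schedule: sample sparsely while evidence is weak, densely after.
    t += 5 if S < h / 2 else 1
```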
26

Integrate experience feedback for dynamic maintenance strategy

Rozas, Rony. 19 December 2014
The optimization of maintenance strategies is a major issue for many industrial applications. It involves establishing a maintenance plan that ensures high levels of safety, security and reliability at minimal cost while respecting any constraints. The growing number of works on the optimization of maintenance parameters, in particular on the scheduling of preventive maintenance actions, underlines the importance of this issue. A large number of maintenance studies are based on a model of the degradation process of the system under study. Probabilistic Graphical Models (PGMs), and especially Markovian PGMs, provide a framework for modeling complex stochastic processes. The issue with this approach is that the quality of the results depends on that of the model. Moreover, the parameters of the system considered may change over time, usually as the consequence of a change of supplier for replacement parts or a change in operating parameters. This thesis deals with the dynamic adaptation of a maintenance strategy to a system whose parameters change. The proposed methodology is based on change detection algorithms for a stream of sequential data and on a new probabilistic inference method specific to dynamic Bayesian networks. The algorithms proposed in this thesis are implemented within a research project with Bombardier Transportation. The study focuses on the maintenance of the passenger access system of a new multiple-unit train designed to operate on the rail network of Île-de-France. The overall objective is to ensure high levels of safety and reliability during train operation.
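As a sketch of change detection on a stream of sequential data, the snippet below implements the Page-Hinkley test, one standard streaming detector; it stands in for, and is not identical to, the algorithms developed in the thesis, and the tuning constants are assumptions.

```python
# Hedged sketch: Page-Hinkley test on a data stream with a mean shift.
import numpy as np

rng = np.random.default_rng(4)
stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 1, 100)])

delta, lam = 0.05, 10.0        # tolerated drift and alarm threshold (assumed)
mean, cum, cum_min = 0.0, 0.0, 0.0
for t, x in enumerate(stream, start=1):
    mean += (x - mean) / t             # running mean of the stream
    cum += x - mean - delta            # cumulative deviation from the mean
    cum_min = min(cum_min, cum)
    if cum - cum_min > lam:            # Page-Hinkley alarm condition
        print(f"change detected at t={t}")
        break
```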
27

Near-infrared Spectroscopy as an Access Channel: Prefrontal Cortex Inhibition During an Auditory Go-no-go Task

Ko, Linda. 24 February 2009
The purpose of this thesis was to explore the potential of near-infrared spectroscopy (NIRS) as an access channel by establishing reliable signal detection to verify the existence of signal differences associated with changes in activity. This thesis focused on using NIRS to measure brain activity from the prefrontal cortex during an auditory Go-No-Go task. A singular spectrum analysis change-point detection algorithm was applied to identify transition points where the NIRS signal properties varied from previous data points in the signal, indicating a change in brain activity. With this algorithm, latency values for change-points detected ranged from 6.44 s to 9.34 s. The averaged positive predictive values over all runs were modest (from 49.41% to 67.73%), with the corresponding negative predictive values being generally higher (48.66% to 78.80%). However, positive and negative predictive values up to 97.22% and 95.14%, respectively, were achieved for individual runs. No hemispheric differences were found.
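A compact sketch of the singular-spectrum-analysis idea behind such a detector is given below, under assumed window sizes: the signal is embedded into a trajectory (Hankel) matrix, an SVD of a base window spans a signal subspace, and a growing residual of the test window outside that subspace indicates a change.

```python
# Hedged sketch: SSA-style change detection via a trajectory-matrix subspace.
import numpy as np

def trajectory(x, L):
    # L x K Hankel matrix of lagged windows of x.
    K = len(x) - L + 1
    return np.column_stack([x[i:i + L] for i in range(K)])

def ssa_stat(x, base, test, L=20, r=3):
    U, _, _ = np.linalg.svd(trajectory(x[base], L), full_matrices=False)
    P = U[:, :r] @ U[:, :r].T              # projector onto the signal subspace
    T = trajectory(x[test], L)
    resid = T - P @ T                      # part of the test window outside it
    return np.linalg.norm(resid) ** 2 / T.size

rng = np.random.default_rng(5)
x = np.concatenate([np.sin(0.2 * np.arange(300)),    # baseline oscillation
                    np.sin(0.6 * np.arange(200))])   # changed dynamics
x += 0.1 * rng.normal(size=x.size)

print(ssa_stat(x, slice(0, 150), slice(150, 250)))   # no change: small
print(ssa_stat(x, slice(0, 150), slice(300, 400)))   # after change: large
```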
29

Stochastic Modelling of Random Variables with an Application in Financial Risk Management.

Moldovan, Max. January 2003
The problem of determining whether or not a theoretical model is an accurate representation of an empirically observed phenomenon is one of the most challenging in empirical scientific investigation. The following study explores the problem of stochastic model validation. Special attention is devoted to the unusual two-peaked shape of the empirically observed distributions of financial returns conditional on realised volatility. The application of statistical hypothesis testing and simulation techniques leads to the conclusion that returns conditional on realised volatility follow a specific, previously undocumented distribution. The probability density representing this distribution is derived, characterised and applied to the validation of the financial model.
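The validation logic, simulate from a candidate model and test the observed sample against it, can be sketched with a two-sample Kolmogorov-Smirnov test; the candidate model below is a deliberate placeholder, not the thesis' derived density.

```python
# Hedged sketch: validating a candidate distribution by simulation + KS test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
observed = rng.standard_t(df=4, size=1000)   # stand-in for conditional returns

simulated = rng.normal(size=100_000)         # candidate model: Gaussian
stat, p_value = stats.ks_2samp(observed, simulated)
print(f"KS statistic {stat:.3f}, p-value {p_value:.4f}")
# A small p-value rejects the candidate model for the observed returns.
```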
30

Robust Monitoring Procedures for Dependent Data

Chochola, Ondřej. January 2013
Title: Robust Monitoring Procedures for Dependent Data. Author: Ondřej Chochola. Department: Department of Probability and Mathematical Statistics. Supervisor: Prof. RNDr. Marie Hušková, DrSc. (huskova@karlin.mff.cuni.cz). Abstract: In the thesis we focus on sequential monitoring procedures, extending some known results towards more robust methods. Robustness with respect to outliers and heavy-tailed observations is introduced through M-estimation instead of classical least-squares estimation. Another extension is towards dependent and multivariate data: the observations are assumed to be weakly dependent, more specifically to fulfil a strong mixing condition. For several models, appropriate test statistics are proposed and their asymptotic properties are studied both under the null hypothesis of no change and under the alternatives, in order to derive proper critical values and show consistency of the tests. We also introduce retrospective change-point procedures that allow one to verify, in a robust way, the stability of the historical data needed for the sequential monitoring. Finite sample properties of the tests are examined in a simulation study and by application to some real data in the capital asset...
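A toy version of the robustification step, replacing least-squares residuals by a bounded Huber score inside a CUSUM-type monitoring statistic, is sketched below; the statistic, constants and data are illustrative assumptions rather than the thesis' exact procedures.

```python
# Illustrative sketch: sequential monitoring with an M-estimation (Huber) score.
import numpy as np

def huber_psi(x, k=1.345):
    return np.clip(x, -k, k)        # bounded score limits outlier influence

rng = np.random.default_rng(7)
hist = rng.standard_t(df=3, size=200)        # stable historical period
mu = np.median(hist)                         # robust location estimate

monitor = np.concatenate([rng.standard_t(df=3, size=100),          # no change
                          1.0 + rng.standard_t(df=3, size=100)])   # mean shift
cusum = np.abs(np.cumsum(huber_psi(monitor - mu))) / np.sqrt(len(hist))
print("max monitoring statistic:", cusum.max())  # compare to a critical value
```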
