  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Critérios robustos de seleção de modelos de regressão e identificação de pontos aberrantes / Robust model selection criteria in regression and outliers identification

Guirado, Alia Garrudo 08 March 2019 (has links)
Robust regression arises as an alternative to the least squares fit when the errors are contaminated by outliers or there is some evidence that the model assumptions are violated. In classical regression there are well-known model selection criteria and diagnostic measures. The objective of this work is to present the main robust model selection criteria and outlier detection measures, and to analyze and compare their performance under different scenarios in order to determine which of them are best suited to particular situations. Cross-validation criteria based on Monte Carlo simulation and the Bayesian Information Criterion are known to perform well in model identification; this work confirms that finding and shows that their robust alternatives also stand out in this respect. Residual analysis is a strong tool for model diagnostics, and the work found that classical residual analysis applied to the robust linear regression fit, as well as the analysis of the observation weights, are efficient outlier detection measures. The criteria and measures analyzed were applied to a data set obtained from the Meteorological Station of the Institute of Astronomy, Geophysics and Atmospheric Sciences of the University of São Paulo in order to detect which meteorological variables influence the daily minimum temperature over the whole year, and a model was fitted that identifies the days associated with the arrival of frontal systems.
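As a rough illustration of the weight-based outlier diagnostics described above, the sketch below fits a Huber M-estimator alongside ordinary least squares and flags observations that receive small robust weights. It is a minimal example assuming the statsmodels library; the simulated data, the 0.5 weight cutoff, and the variable names are illustrative and not taken from the thesis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 100)
y[:5] += 15.0                      # inject a few artificial outliers
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                              # classical least-squares fit
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # robust (Huber) fit

# Observations down-weighted by the robust fit are candidate outliers;
# the 0.5 cutoff is an arbitrary illustrative threshold.
flagged = np.where(rlm.weights < 0.5)[0]
print("robust slope:", rlm.params[1], "OLS slope:", ols.params[1])
print("flagged observations:", flagged)
```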
22

Uncertainty Assessment of Hydrogeological Models Based on Information Theory / Bewertung der Unsicherheit hydrogeologischer Modelle unter Verwendung informationstheoretischer Grundlagen

De Aguinaga, José Guillermo 17 August 2011 (has links) (PDF)
There is a great deal of uncertainty in hydrogeological modeling. Overparametrized models increase uncertainty since the information in the observations is distributed across all of the parameters. The present study proposes a new option to reduce this uncertainty: select a model that provides good performance with as few calibrated parameters as possible (a parsimonious model) and calibrate it using many sources of information. Akaike’s Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistical criterion based on information theory that allows us to select a parsimonious model. AIC formulates parsimonious model selection as an optimization problem across a set of proposed conceptual models. The AIC assessment is relatively new in groundwater modeling, and applying it with different sources of observations presents a challenge. In this dissertation, important findings on the application of AIC in hydrogeological modeling using different sources of observations are discussed. AIC is tested on groundwater models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. The impact of the following factors is analyzed: the number of observations, the types of observations, and the order of the calibrated parameters. These analyses reveal not only that the number of observations determines how complex a model can be, but also that their diversity allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of the calibrated parameters was properly considered, meaning that parameters which provide larger improvements in model fit should be considered first. The approach of obtaining a parsimonious model by applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. With this additional independent model assessment it was possible to underpin the general validity of this AIC approach.
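To make the selection rule concrete, here is a minimal sketch of how AIC ranks candidate models of increasing complexity. It uses the least-squares form AIC = n ln(RSS/n) + 2k, which holds under Gaussian error assumptions; the polynomial candidates and synthetic data are placeholders, not the hydrogeological models of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = 1.0 + 2.0 * x + rng.normal(0, 0.2, x.size)   # data generated by a 2-parameter model

def aic_least_squares(y, y_hat, k):
    """AIC for a least-squares fit with k calibrated parameters (Gaussian errors)."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Candidate conceptual models: polynomials with 1 to 5 calibrated parameters
scores = {}
for degree in range(5):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    scores[degree + 1] = aic_least_squares(y, y_hat, k=degree + 1)

best_k = min(scores, key=scores.get)   # the parsimonious model minimizes AIC
print(scores, "-> selected number of parameters:", best_k)
```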
23

Criptografia de qubits de férmions de Majorana por meio de estados ligados no contínuo / Encrypting Majorana fermions-qubits as bound states in the continuum

Pereira, Geovane Módena 01 December 2017 (has links)
We theoretically investigate a topological Kitaev chain connected to a double quantum dot (QD) setup hybridized with metallic leads. In this system we observe the emergence of two striking phenomena: (i) a decrypted Majorana fermion (MF) qubit recorded over a single QD, which is detectable by means of conductance measurements due to the asymmetric leakage of the MF state into the QDs; (ii) an encrypted qubit recorded in both QDs when the leakage is symmetric. In the latter regime we have a cryptography-like manifestation, since the MF qubit splits between the QDs as bound states in the continuum (BICs), which are not detectable in conductance experiments.
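For readers unfamiliar with the Kitaev chain underlying this setup, the sketch below diagonalizes a finite open Kitaev chain in Bogoliubov-de Gennes (BdG) form and exhibits the pair of near-zero-energy Majorana modes in the topological phase. It is a generic textbook illustration rather than the QD-lead model studied in the dissertation, and the parameter values are arbitrary.

```python
import numpy as np

def kitaev_bdg(N, mu, t, delta):
    """BdG matrix of an open Kitaev chain in the basis (c_1..c_N, c_1^dag..c_N^dag)."""
    h = (np.diag(-mu * np.ones(N))
         + np.diag(-t * np.ones(N - 1), 1)
         + np.diag(-t * np.ones(N - 1), -1))
    d = np.diag(delta * np.ones(N - 1), 1) - np.diag(delta * np.ones(N - 1), -1)  # antisymmetric pairing block
    return np.block([[h, d], [-d.conj(), -h.T]])

# Topological phase (|mu| < 2t): two eigenvalues pinned near zero -> end Majorana modes
H = kitaev_bdg(N=40, mu=0.2, t=1.0, delta=1.0)
energies = np.sort(np.abs(np.linalg.eigvalsh(H)))
print("two smallest |E|:", energies[:2])   # exponentially small in N
print("bulk gap scale  :", energies[2])
```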
24

Linear properties of inhibited coupling hollow-core photonic crystal fibers / Propriétés linéaires des fibres creuses à cristal photonique à couplage inhibé

Amsanpally, Abhilash 07 July 2017 (has links)
This thesis reports on the guiding principles, linear properties, and conceptual design tools of inhibited-coupling (IC) guiding hollow-core photonic crystal fibers (HC-PCF). IC guidance was shown to be a photonic manifestation of a quasi-bound state in a continuum (Q-BiC) by investigating asymmetric, polarization-dependent Fano profiles with a 30 GHz bandwidth in high-resolution transmission spectra. Using the IC design concept, we report on the linear characterization of state-of-the-art IC HC-PCFs. Based on core-shape optimization, a Kagome IC HC-PCF demonstrated ultra-low loss down to 8.5 dB/km at 1030 nm together with a 225 nm wide 3-dB bandwidth. Another Kagome design with thinner silica struts of 300 nm exhibited a lowest loss of 30 dB/km at 780 nm along with a record fundamental transmission band extending down to 670 nm, able to cover the entire Ti:Sa, Yb, and Er laser spectral ranges. We also report on the design and fabrication of single-ring tubular-lattice IC HC-PCFs. One of these fibers demonstrated transmission down to 220 nm with a record loss of 7.7 dB/km at ~750 nm, while a second one exhibited an ultra-broad fundamental band with 10-20 dB/km loss over more than one octave spanning from 600 to 1200 nm. Finally, the second tubular fiber was further investigated to identify the fundamental loss sources due to surface roughness along its core contour.
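As context for the quoted attenuation figures, fiber loss in dB/km is typically obtained from a cutback measurement: the power transmitted through a long piece of fiber is compared with the power after the fiber is cut back to a short length. The sketch below shows the arithmetic only; the power and length values are invented for illustration and do not come from the thesis.

```python
import math

# Hypothetical cutback data at a fixed wavelength (illustrative values only)
P_long_mW, L_long_m = 0.45, 310.0     # power after the full fiber length
P_short_mW, L_short_m = 0.78, 10.0    # power after cutting back to a short piece

loss_dB = 10.0 * math.log10(P_short_mW / P_long_mW)           # attenuation of the removed length
loss_dB_per_km = loss_dB / ((L_long_m - L_short_m) / 1000.0)  # normalize to dB/km
print(f"attenuation ~ {loss_dB_per_km:.1f} dB/km")            # ~8 dB/km for these numbers
```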
25

Uncertainty Assessment of Hydrogeological Models Based on Information Theory

De Aguinaga, José Guillermo 03 December 2010 (has links)
There is a great deal of uncertainty in hydrogeological modeling. Overparametrized models increase uncertainty since the information in the observations is distributed across all of the parameters. The present study proposes a new option to reduce this uncertainty: select a model that provides good performance with as few calibrated parameters as possible (a parsimonious model) and calibrate it using many sources of information. Akaike’s Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistical criterion based on information theory that allows us to select a parsimonious model. AIC formulates parsimonious model selection as an optimization problem across a set of proposed conceptual models. The AIC assessment is relatively new in groundwater modeling, and applying it with different sources of observations presents a challenge. In this dissertation, important findings on the application of AIC in hydrogeological modeling using different sources of observations are discussed. AIC is tested on groundwater models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. The impact of the following factors is analyzed: the number of observations, the types of observations, and the order of the calibrated parameters. These analyses reveal not only that the number of observations determines how complex a model can be, but also that their diversity allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of the calibrated parameters was properly considered, meaning that parameters which provide larger improvements in model fit should be considered first. The approach of obtaining a parsimonious model by applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. With this additional independent model assessment it was possible to underpin the general validity of this AIC approach.
26

VePMAD: A Vehicular Platoon Management Anomaly Detection System : A Case Study of Car-following Mode, Middle Join and Exit Maneuvers

Bayaa, Weaam January 2021 (has links)
Vehicle communication using sensors and wireless channels plays an important role in allowing vehicles to exchange information. Adding components for exchanging information with the infrastructure enhanced vehicle capabilities and enabled the rise of Cooperative Intelligent Transport Systems (C-ITS). Leveraging these capabilities, applications such as Cooperative Adaptive Cruise Control (CACC) and platooning were introduced. CACC is an enhancement of Adaptive Cruise Control (ACC). It enables longitudinal automated vehicle control and follows the Constant Time Gap (CTG) strategy, where the distance between vehicles is proportional to speed. Platooning differs in that it addresses both longitudinal and lateral control. In addition, it adopts the Constant Distance Gap (CDG) control strategy, with the separation between vehicles unchanged with speed. Platooning requires close coupling and accordingly achieves the goals of increased lane throughput and reduced energy consumption. When only a longitudinal controller is used, the platoon operates in car-following mode and no Platoon Management Protocol (PMP) is used. On the other hand, when both longitudinal and lateral controllers are used, the platoon operates in maneuver mode and coordination between vehicles is needed to perform maneuvers. Exchanging information allows the platoon to make real-time maneuvering decisions. However, all the aforementioned benefits of platooning cannot be achieved if the system is vulnerable to misbehavior (i.e., the platoon behaving incorrectly). Most of the literature attributes this misbehavior to malicious actors, where an attacker injects malicious messages. Standardization efforts have developed security services to authenticate and authorize the sender. However, authenticated users equipped with cryptographic primitives can still mount attacks (e.g., falsification attacks), which accordingly cannot be detected by standard services such as cryptographic signatures. Misbehavior can disturb platoon behavior or even cause collisions. Many Misbehavior Detection Schemes (MDSs) have been proposed in the literature in the context of Vehicular Ad hoc Networks (VANETs) and CACC. These MDSs apply algorithms or rules to detect sudden or gradual changes in the kinematic information disseminated by other vehicles. Reusing these MDSs directly during maneuvers can lead to false positives, since they treat changes in kinematic information during the maneuver as an attack. This thesis addresses this gap by designing a new modular framework that can discern the maneuvering process from misbehavior by leveraging platoon behavior recognition, that is, recognition of the platoon's mode of operation (e.g., car-following mode or maneuver mode). In addition, it can recognize the ongoing maneuver (e.g., middle join or exit). Based on the platoon behavior recognition module, the anomaly detection module detects deviations from expected behavior. Unsupervised machine learning, notably a Hidden Markov Model with Gaussian Mixture Model emissions (GMMHMM), is used to learn the nominal behavior of the platoon during different modes and maneuvers; this is later used by the platoon behavior recognition and anomaly detection modules. The GMMHMM is trained on the nominal behavior of the platoon using multivariate time series representing the kinematic characteristics of the vehicles. Different models are used to detect attacks in different scenarios (e.g., different speeds).
Two approaches to anomaly detection are investigated: Viterbi-algorithm-based anomaly detection and Forward-algorithm-based anomaly detection. The proposed framework managed to detect misbehavior whether the compromised vehicle is the platoon leader or a follower. Empirical results show very high performance, with the platoon behavior recognition module reaching 100% accuracy. In addition, it can predict the ongoing platoon behavior at an early stage and accordingly use the correct model representing the nominal behavior. Forward-algorithm-based anomaly detection, which relies on computing the likelihood, showed better performance, reaching 98% with slight variations in terms of accuracy, precision, recall, and F1 score. Different platooning controllers can be resilient to some attacks, so an attack may result in only a slight deviation from nominal behavior; the anomaly detection module was nevertheless able to detect this deviation.
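A minimal sketch of the Forward-algorithm-based detector described above is given below, assuming the hmmlearn library: a GMMHMM is fit to nominal multivariate kinematic traces, and a window is flagged when its per-sample log-likelihood falls below a threshold. The synthetic data, the number of states and mixtures, and the threshold are placeholders, not the values used in the thesis.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(7)
# Placeholder for nominal kinematic features (e.g., speed, acceleration, spacing)
X_nominal = rng.normal(size=(600, 3))
lengths = [300, 300]                       # two nominal platooning runs

# Learn nominal platoon behavior with an HMM whose emissions are Gaussian mixtures
model = GMMHMM(n_components=4, n_mix=2, covariance_type="diag",
               n_iter=100, random_state=0)
model.fit(X_nominal, lengths)

def is_anomalous(window, threshold=-6.0):
    """Flag a window whose average log-likelihood (Forward algorithm) is too low."""
    return model.score(window) / len(window) < threshold

test_window = rng.normal(loc=5.0, size=(50, 3))   # a deviating (attacked) segment
print(is_anomalous(test_window))
```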
27

Tuning Parameter Selection in L1 Regularized Logistic Regression

Shi, Shujing 05 December 2012 (has links)
Variable selection is an important topic in regression analysis and is intended to select the best subset of predictors. The least absolute shrinkage and selection operator (Lasso) was introduced by Tibshirani in 1996. This method can serve as a tool for variable selection because it shrinks some coefficients to exactly zero by placing a constraint on the sum of the absolute values of the regression coefficients. For logistic regression, Lasso modifies the traditional parameter estimation method, maximum likelihood, by adding the L1 norm of the parameters to the negative log-likelihood function, turning a maximization problem into a minimization one. To solve this problem, we first need to specify the value of the parameter attached to the L1 norm, called the tuning parameter. Since the tuning parameter affects coefficient estimation and variable selection, we want to find its optimal value so as to obtain the most accurate coefficient estimates and the best subset of predictors in the L1-regularized regression model. There are two popular methods for selecting the value of the tuning parameter that yields the best subset of predictors: the Bayesian information criterion (BIC) and cross-validation (CV). The objective of this paper is to evaluate and compare these two methods for selecting the optimal value of the tuning parameter, in terms of coefficient estimation accuracy and variable selection, through simulation studies.
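As a sketch of how the two selection rules can be compared in practice, the code below fits L1-penalized logistic regression over a grid of penalty values (scikit-learn's C = 1/λ), scores each fit by BIC computed from the binomial log-likelihood and the number of active coefficients, and by 5-fold cross-validated log-loss. The simulated dataset and grid are arbitrary; this illustrates the idea rather than reproducing the paper's simulation design.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
n = len(y)
results = []
for C in np.logspace(-2, 2, 20):                       # C = 1/lambda
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    k = np.count_nonzero(clf.coef_) + 1                 # active coefficients + intercept
    p = np.clip(clf.predict_proba(X)[:, 1], 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    bic = k * np.log(n) - 2.0 * loglik                  # smaller is better
    cv_logloss = -cross_val_score(clf, X, y, cv=5, scoring="neg_log_loss").mean()
    results.append((C, bic, cv_logloss))

C_bic = min(results, key=lambda r: r[1])[0]             # tuning parameter chosen by BIC
C_cv = min(results, key=lambda r: r[2])[0]              # tuning parameter chosen by CV
print(f"BIC picks C={C_bic:.3g}, CV picks C={C_cv:.3g}")
```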
28

Risk Management Project

Yan, Lu 02 May 2012 (has links)
In order to evaluate and manage portfolio risk, we separated this project into three sections. In the first section we constructed a portfolio of 15 different stocks and six options with different strategies. The portfolio was implemented in Interactive Brokers and rebalanced weekly through five holding periods. In the second section we modeled the loss distribution of the whole portfolio with normal and Student-t distributions, computed the Value-at-Risk and expected shortfall of the portfolio loss in detail for each holding week, and then evaluated the differences between the normal and Student-t results. In the third section we applied an ARMA(1,1)-GARCH(1,1) model to simulate our assets and compared the polynomial tails obtained with Gaussian and t-distribution innovations.
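The sketch below shows the kind of calculation described in the second section: fitting normal and Student-t distributions to a series of portfolio losses and computing Value-at-Risk and expected shortfall at the 95% level. The simulated losses, the confidence level, and the numerical tail-averaging step are illustrative assumptions, not the project's actual data or code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
losses = 0.02 * rng.standard_t(df=5, size=1000)        # placeholder weekly portfolio losses

alpha = 0.95
# Normal model: closed-form VaR and expected shortfall
mu, sigma = stats.norm.fit(losses)
var_norm = stats.norm.ppf(alpha, mu, sigma)
es_norm = mu + sigma * stats.norm.pdf(stats.norm.ppf(alpha)) / (1 - alpha)

# Student-t model: VaR from the quantile; ES by averaging tail quantiles,
# a numerical version of ES = (1/(1-alpha)) * integral of VaR_u over u in (alpha, 1)
df_t, loc_t, scale_t = stats.t.fit(losses)
var_t = stats.t.ppf(alpha, df_t, loc_t, scale_t)
u = np.linspace(alpha, 1 - 1e-6, 4000)
es_t = stats.t.ppf(u, df_t, loc_t, scale_t).mean()

print(f"normal:    VaR={var_norm:.4f}  ES={es_norm:.4f}")
print(f"Student-t: VaR={var_t:.4f}  ES={es_t:.4f}")
```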
29

Risk Management Project

Shen, Chen 02 May 2012 (has links)
In order to evaluate and manage portfolio risk, we separated this project into three sections. In the first section we constructed a portfolio of 15 different stocks and six options with different strategies. The portfolio was implemented in Interactive Brokers and rebalanced weekly through five holding periods. In the second section we modeled the loss distribution of the whole portfolio with normal and Student-t distributions, computed the Value-at-Risk and expected shortfall of the portfolio loss in detail for each holding week, and then evaluated the differences between the normal and Student-t results. In the third section we applied an ARMA(1,1)-GARCH(1,1) model to simulate our assets and compared the polynomial tails obtained with Gaussian and t-distribution innovations.
30

Testing Lack-of-Fit of Generalized Linear Models via Laplace Approximation

Glab, Daniel Laurence May 2011 (has links)
In this study we develop a new method for testing the null hypothesis that the predictor function in a canonical link regression model has a prescribed linear form. The class of models, which we will refer to as canonical link regression models, constitutes arguably the most important subclass of generalized linear models and includes several of the most popular generalized linear models. In addition to the primary contribution of this study, we will revisit several other tests in the existing literature. The common feature of the proposed test and the existing tests is that they are all based on orthogonal series estimators and are used to detect departures from a null model. Our proposal for a new lack-of-fit test is inspired by the recent contribution of Hart and is based on a Laplace approximation to the posterior probability of the null hypothesis. Despite having a Bayesian construction, the resulting statistic is implemented in a frequentist fashion. The formulation of the statistic is based on characterizing departures from the predictor function in terms of Fourier coefficients, and subsequently testing that all of these coefficients are 0. The resulting test statistic can be characterized as a weighted sum of exponentiated squared Fourier coefficient estimators, where the weights depend on user-specified prior probabilities. The prior probabilities give the investigator the flexibility to examine specific departures from the prescribed model. Alternatively, the use of noninformative priors produces a new omnibus lack-of-fit statistic. We present a thorough numerical study of the proposed test and the various existing orthogonal series-based tests in the context of the logistic regression model. Simulation studies demonstrate that the test statistics under consideration possess desirable power properties against alternatives that have been identified in the existing literature as being important.
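To make the structure of such a statistic concrete, here is a heavily hedged sketch of the general form described above: cosine-basis Fourier coefficients of standardized residuals from the null logistic fit, combined as a prior-weighted sum of exponentiated squared coefficients. The exact standardization, weighting, and calibration in the dissertation may differ; treat the constants, the uniform priors, and the exp(n φ̂²/2) form here as illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=400, n_features=3, n_informative=3,
                           n_redundant=0, random_state=3)
n = len(y)

# Null model: logistic regression with the prescribed linear predictor
p_hat = LogisticRegression(C=1e6).fit(X, y).predict_proba(X)[:, 1]
resid = (y - p_hat) / np.sqrt(p_hat * (1 - p_hat))       # Pearson residuals

# Fourier coefficients of the residuals against a cosine basis in one covariate
t = (np.argsort(np.argsort(X[:, 0])) + 0.5) / n          # ranks mapped to (0, 1)
J = 5
phi = np.array([np.mean(np.sqrt(2) * np.cos(np.pi * j * t) * resid)
                for j in range(1, J + 1)])

weights = np.full(J, 1.0 / J)                            # noninformative prior over departures
statistic = np.sum(weights * np.exp(n * phi**2 / 2.0))   # assumed form of the weighted sum
print("lack-of-fit statistic:", statistic)               # calibrate by simulation under the null
```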
