1

Anomaly diagnosis based on regression and classification analysis of statistical traffic features

Liu, Lei, Jin, X.L., Min, Geyong, Xu, L. 30 September 2013
Traffic anomalies caused by Distributed Denial-of-Service (DDoS) attacks are major threats to both network service providers and legitimate customers. DDoS attacks consume and exhaust the resources of their victims, producing abnormal bursty traffic through end-user systems. In addition, malicious traffic aggregated into normal traffic often produces dramatic changes in the nature and statistical features of the traffic. This study focuses on early detection of traffic anomalies caused by DDoS attacks through analysis of network traffic behavior. Key statistical features, including variance, autocorrelation, and self-similarity, are used to characterize the network traffic. An artificial neural network and a support vector machine, evaluated against the chosen performance metrics, are then employed to predict and classify the abnormal traffic. The proposed diagnosis mechanism is validated through experiments on two groups of datasets. The first is the Massachusetts Institute of Technology Lincoln Laboratory dataset, which contains labeled DoS attacks. The second, collected from DDoS attack simulation experiments, covers three representative traffic shapes produced by varying the attack rate: constant intensity, ramp-up behavior, and pulsing behavior. The experimental results demonstrate that the developed mechanism can effectively and precisely flag abnormal traffic within a short response period.
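A minimal sketch of the kind of feature-and-classify pipeline this abstract describes: per-window variance and lag-1 autocorrelation of a packet-count series fed to an SVM. The window length, feature set, and synthetic traffic below are illustrative assumptions, not the authors' configuration.

```python
# Sketch: per-window statistical features (variance, lag-1 autocorrelation)
# from a packet-count series, classified with an SVM. Window length and the
# synthetic data are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def window_features(counts, window=60):
    """Split a 1-D packet-count series into windows and compute features."""
    feats = []
    for start in range(0, len(counts) - window + 1, window):
        w = counts[start:start + window].astype(float)
        var = w.var()
        centered = w - w.mean()
        denom = (centered ** 2).sum()
        acf1 = (centered[:-1] * centered[1:]).sum() / denom if denom > 0 else 0.0
        feats.append([var, acf1])
    return np.array(feats)

rng = np.random.default_rng(0)
normal = rng.poisson(100, 6000)                          # baseline traffic
attack = rng.poisson(100, 6000) + rng.poisson(80, 6000)  # bursty surplus

X = np.vstack([window_features(normal), window_features(attack)])
y = np.array([0] * 100 + [1] * 100)                      # 0 = normal, 1 = anomalous

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```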
2

Graph Theory and Dynamic Programming Framework for Automated Segmentation of Ophthalmic Imaging Biomarkers

Chiu, Stephanie Ja-Yi January 2014
Accurate quantification of anatomical and pathological structures in the eye is crucial for the study and diagnosis of potentially blinding diseases. Earlier and faster detection of ophthalmic imaging biomarkers also leads to optimal treatment and improved vision recovery. While modern optical imaging technologies such as optical coherence tomography (OCT) and adaptive optics (AO) have facilitated in vivo visualization of the eye at the cellular scale, the massive influx of data generated by these systems is often too large to be fully analyzed by ophthalmic experts without extensive time or resources. Furthermore, manual evaluation of images is inherently subjective and prone to human error.

This dissertation describes the development and validation of a framework called graph theory and dynamic programming (GTDP) to automatically detect and quantify ophthalmic imaging biomarkers. The GTDP framework was validated as an accurate technique for segmenting retinal layers on OCT images. The framework was then extended through the development of the quasi-polar transform to segment closed-contour structures including photoreceptors on AO scanning laser ophthalmoscopy images and retinal pigment epithelial cells on confocal microscopy images.

The GTDP framework was next applied in a clinical setting with pathologic images that are often lower in quality. Algorithms were developed to delineate morphological structures on OCT indicative of diseases such as age-related macular degeneration (AMD) and diabetic macular edema (DME). The AMD algorithm was shown to be robust to poor image quality and was capable of segmenting both drusen and geographic atrophy. To account for the complex manifestations of DME, a novel kernel regression-based classification framework was developed to identify retinal layers and fluid-filled regions as a guide for GTDP segmentation.

The development of fast and accurate segmentation algorithms based on the GTDP framework has significantly reduced the time and resources necessary to conduct large-scale, multi-center clinical trials. This is one step closer towards the long-term goal of improving vision outcomes for ocular disease patients through personalized therapy.
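A minimal sketch of the graph/dynamic-programming idea behind layer segmentation of this kind: each column of a B-scan contributes one boundary pixel, and a minimum-cost path is traced from left to right over an image-derived cost. The cost function (inverted vertical gradient) and 3-neighbor connectivity are illustrative assumptions, not the dissertation's exact graph construction.

```python
# Sketch: dynamic-programming trace of one retinal-layer boundary. The path
# minimizes an image-derived cost while moving at most one row between
# adjacent columns; cost and connectivity are illustrative assumptions.
import numpy as np

def trace_layer(image):
    """Return one row index per column for a minimum-cost left-to-right path."""
    grad = np.abs(np.gradient(image.astype(float), axis=0))
    cost = grad.max() - grad                  # low cost where the gradient is strong
    rows, cols = cost.shape
    acc = cost.copy()                         # accumulated cost table
    back = np.zeros((rows, cols), dtype=int)  # backpointer: best previous row

    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k

    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

# Toy B-scan: a bright horizontal band whose edge the path should follow.
img = np.zeros((60, 80))
img[25:35, :] = 1.0
print(trace_layer(img)[:10])
```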
3

Vehicle classification using inductive loop sensors

Halachkin, Aliaksei January 2017
This project is dedicated to the problem of vehicle classification using inductive loop sensors. We created a dataset containing more than 11,000 labeled inductive-loop signatures collected at different times and in different parts of the world. Multiple classification methods and their optimizations were applied to the vehicle classification task. The final model, which combines K-nearest neighbors and logistic regression, achieves 94% accuracy on a classification scheme with 9 classes. The vehicle classifier was implemented in C++.
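A minimal sketch of one plausible way to combine a K-nearest-neighbors model and a logistic regression into a single classifier, here via soft-probability voting in scikit-learn. The synthetic loop "signatures", resampled length, and hyperparameters are assumptions for illustration, not the thesis pipeline (which was implemented in C++).

```python
# Sketch: fusing K-nearest neighbours and logistic regression with soft voting
# over fixed-length loop signatures. Data and hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_classes, per_class, length = 9, 200, 64   # resampled signature length

# Synthetic inductive-loop "signatures": class-specific bumps plus noise.
t = np.linspace(0, 1, length)
X = np.vstack([
    np.sin(np.pi * t) * (1 + 0.1 * k) + 0.05 * k * np.sin(3 * np.pi * t)
    + rng.normal(0, 0.05, (per_class, length))
    for k in range(n_classes)
])
y = np.repeat(np.arange(n_classes), per_class)

knn = KNeighborsClassifier(n_neighbors=5)
logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
ensemble = VotingClassifier([("knn", knn), ("lr", logreg)], voting="soft")

scores = cross_val_score(ensemble, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```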
4

Statistical impact models in agriculture: from seasonal forecasting to long-term projections, including annual estimates

Mathieu, Jordane 29 March 2018
In agriculture, weather is the main driver of year-to-year variability. This thesis builds large-scale statistical models that estimate the impact of weather conditions on agricultural yields. The scarcity of available agricultural data makes it necessary to construct simple models with few predictors and to adapt model-selection methods to avoid overfitting. Careful validation of the statistical models is a major concern throughout. Neural networks and mixed-effects models are compared, showing the importance of local specificities. Estimates of US corn yield at the end of the year show that temperature and precipitation information accounts for an average of 28% of yield variability; in several more weather-sensitive states, this score rises to nearly 70%. These results are consistent with recent studies on the subject.
Mid-season corn yield forecasts are possible from July onward: as of July, the available meteorological information accounts for an average of 25% of the variability in final yield in the United States, and close to 60% in more weather-sensitive states such as Virginia. The northern and southeastern regions of the United States are the least well predicted. Predicting years with extremely low yields required a dedicated classification method: with only 4 weather predictors, 71% of very low yields are correctly detected on average. The impact of climate change on yields up to 2060 is also studied: the model provides information on how quickly yields will evolve in the different counties of the United States and highlights the areas that will be most affected. For the most affected states (in the south and on the east coast), and assuming constant agricultural practice, the model predicts yields nearly halved by 2060 under the IPCC RCP 4.5 scenario; the northern states would be little affected. The statistical models can support short-term management (seasonal forecasts) or quantify harvest quality before post-harvest surveys are carried out, as an aid to monitoring (end-of-year estimates). The estimates for the next 50 years help anticipate the consequences of climate change on agricultural yields and inform adaptation or mitigation strategies. The methodology used in this thesis generalizes readily to other crops and other regions of the world.
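A minimal sketch of the kind of parsimonious weather-yield regression this abstract describes: a couple of temperature and precipitation predictors, a linear model, and cross-validation to guard against overfitting on scarce data, reporting the share of yield variability explained. The synthetic county-year data and coefficients below are illustrative assumptions, not the thesis models or datasets.

```python
# Sketch: a small weather-yield regression with cross-validated R^2 as the
# "share of variability explained". Synthetic data; all numbers are assumed.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300                                       # e.g. county-year observations

summer_temp = rng.normal(24, 2, n)            # degrees C, growing-season mean
summer_precip = rng.normal(300, 60, n)        # mm, growing-season total

# Synthetic yield anomaly: warmer summers hurt, rain helps, plus noise.
yield_anom = -0.8 * (summer_temp - 24) + 0.01 * (summer_precip - 300) \
             + rng.normal(0, 2.0, n)

X = np.column_stack([summer_temp, summer_precip])
r2 = cross_val_score(LinearRegression(), X, yield_anom, cv=5, scoring="r2")
print("cross-validated share of yield variability explained: %.2f" % r2.mean())
```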
