21

Construction of the Intensity-Duration-Frequency (IDF) Curves under Climate Change

December 2014 (has links)
Intensity-Duration-Frequency (IDF) curves are among the standard design tools for various engineering applications, such as storm water management systems. The current practice is to use IDF curves based on historical extreme precipitation quantiles. A warming climate, however, might change the extreme precipitation quantiles represented by the IDF curves, emphasizing the need to update the IDF curves used for the design of urban storm water management systems in different parts of the world, including Canada. This study attempts to construct future IDF curves for Saskatoon, Canada, under possible climate change scenarios. For this purpose, LARS-WG, a stochastic weather generator, is used to spatially downscale the daily precipitation projected by Global Climate Models (GCMs) from coarse grid resolution to the local point scale. The stochastically downscaled daily precipitation realizations are further disaggregated into ensembles of hourly and sub-hourly (as fine as 5-minute) precipitation series, using a disaggregation scheme developed with the K-nearest neighbour (K-NN) technique. This two-stage modeling framework (downscaling to daily, then disaggregating to finer resolutions) is applied to construct the future IDF curves for the city of Saskatoon.
The sensitivity of the K-NN disaggregation model to the number of nearest neighbours (i.e., the window size) is evaluated over the baseline period (1961-1990). The optimal window size is assigned based on how well the K-NN disaggregation models reproduce the historical IDF curves; two optimal window sizes are selected, one each for the hourly and sub-hourly disaggregation models, that are appropriate for the hydrological system of Saskatoon. Using the simulated hourly and sub-hourly precipitation series and the Generalized Extreme Value (GEV) distribution, future changes in the IDF curves and the associated uncertainties are quantified using a large ensemble of projections obtained from the Canadian and British GCMs (CanESM2 and HadGEM2-ES) under three Representative Concentration Pathways (RCP2.6, RCP4.5, and RCP8.5) available from CMIP5, the most recent product of the Intergovernmental Panel on Climate Change (IPCC). The constructed IDF curves are then compared with curves constructed using another method based on a genetic programming (GP) technique.
The results show that the sign and the magnitude of future variations in extreme precipitation quantiles are sensitive to the selection of GCMs and/or RCPs, and the variations appear to intensify towards the end of the 21st century. Generally, the relative change in precipitation intensities with respect to historical intensities is smaller for CMIP5 climate models (e.g., CanESM2: RCP4.5) than for CMIP3 climate models (e.g., CGCM3.1: B1), which may be due to the inclusion of climate policies (i.e., adaptation and mitigation) in the CMIP5 scenarios. The two-stage downscaling-disaggregation method enables quantification of the uncertainty due to the natural internal variability of precipitation, the choice of GCMs and RCPs, and the downscaling methods. In general, uncertainty in the projections of future extreme precipitation quantiles increases for short durations and long return periods. Both the two-stage method adopted in this study and the GP method reconstruct the historical IDF curves quite successfully over the baseline period (1961-1990), suggesting that these methods can be applied to efficiently construct IDF curves at the local scale under future climate scenarios.
The most notable precipitation intensification in Saskatoon is projected to occur for shorter storm durations, up to one hour, and longer return periods of more than 25 years.
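
A minimal sketch of the final step of this framework, fitting a GEV distribution to annual maximum intensities per duration and reading off return levels for an IDF table. The durations, sample sizes, and data below are illustrative placeholders, not results from the thesis:

```python
# Sketch: deriving IDF points by fitting a GEV distribution to annual
# maximum precipitation intensities. Data are synthetic placeholders.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Hypothetical annual-maximum intensity series (mm/h), one per duration,
# e.g. extracted from disaggregated 5-min to 24-h precipitation series.
annual_maxima = {
    "5min": rng.gumbel(60, 15, size=30),   # 30-year baseline, e.g. 1961-1990
    "1h":   rng.gumbel(25, 6,  size=30),
    "24h":  rng.gumbel(4,  1,  size=30),
}

return_periods = np.array([2, 5, 10, 25, 50, 100])  # years

for duration, maxima in annual_maxima.items():
    c, loc, scale = genextreme.fit(maxima)  # GEV shape, location, scale
    # Return level: intensity exceeded on average once every T years.
    levels = genextreme.ppf(1 - 1 / return_periods, c, loc, scale)
    print(duration, np.round(levels, 1))
```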
22

Classification of COVID-19 Using Synthetic Minority Over-Sampling and Transfer Learning

Ormos, Christian January 2020 (has links)
The 2019 novel coronavirus has been shown to present several unique features on chest X-rays and CT scans that distinguish it from imaging of other pulmonary diseases such as bacterial pneumonia and viral pneumonia unrelated to COVID-19. However, the key characteristics of a COVID-19 infection have proven challenging to detect with the human eye. The aim of this project is to explore whether it is possible to distinguish, from posteroanterior chest X-ray images, a patient with COVID-19 from a patient who is not suffering from the disease, using synthetic minority over-sampling and transfer learning. Furthermore, the report also presents the mechanics of COVID-19, the dataset and models used, and the validity of the results.
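
A minimal sketch of the two techniques named in the abstract, combined in the usual way: SMOTE balances the scarce positive class, and a simple classifier head is trained on features from a pretrained backbone (transfer learning). The feature extraction is stubbed out with random arrays here; the thesis's actual backbone, data, and class sizes are assumptions:

```python
# Sketch: synthetic minority over-sampling (SMOTE) on CNN embeddings,
# followed by a classifier head. Embeddings are random placeholders;
# in practice they would come from a frozen ImageNet-pretrained network.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical 2048-d embeddings: 500 non-COVID and 50 COVID X-rays.
X = np.vstack([rng.normal(0.0, 1.0, (500, 2048)),
               rng.normal(0.5, 1.0, (50, 2048))])
y = np.array([0] * 500 + [1] * 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Over-sample the minority class only on the training split, so the
# test set contains no synthetic examples.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))
```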
23

Rozpoznávání lidské aktivity s pomocí senzorů v chytrém telefonu / Human Activity Recognition Using Smartphone

Novák, Andrej January 2016 (has links)
The number of mobile smartphones continues to grow, and with it the demand for automating and exploiting the capabilities the phone offers, whether in medicine (health care and surveillance) or in user applications (automatic position recognition, etc.). As part of this work, a system for recognizing human activity from smartphone sensor data was designed and implemented, along with the determination of optimal parameters, recognition success rates, and a comparison of the individual evaluations. Other contributions include the design of a format for, and the collection of, a sizeable training set consisting of real recordings and their manual annotation. In addition, a software tool was created for validating the elements of the training set and extracting features from it, together with software that uses deep learning to train models and then test them.
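
A minimal sketch of sensor-based activity recognition, assuming windowed tri-axial accelerometer data; the features, window length, activity labels, and classifier below are illustrative stand-ins, not the thesis's pipeline or deep-learning models:

```python
# Sketch: classify activities from summary features of accelerometer windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window: np.ndarray) -> np.ndarray:
    """Summary statistics per axis for one window of shape (n_samples, 3)."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

rng = np.random.default_rng(1)
# Hypothetical 2-second windows at 50 Hz: label 0 = walking, 1 = sitting.
walking = [rng.normal(0, 2.0, (100, 3)) for _ in range(200)]
sitting = [rng.normal(0, 0.2, (100, 3)) for _ in range(200)]

X = np.array([window_features(w) for w in walking + sitting])
y = np.array([0] * 200 + [1] * 200)

# Train on even-indexed windows, evaluate on the odd-indexed rest.
clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```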
24

Systém pro rozpoznávání APT útoků / System for Detection of APT Attacks

Hujňák, Ondřej January 2016 (has links)
The thesis investigates APT attacks: professional targeted attacks characterised by long-term duration and the use of advanced techniques. The thesis summarises current knowledge about APT attacks and suggests seven symptoms that can be used to check whether an organisation is under an APT attack. It then proposes a system for the detection of APT attacks based on the interaction of those symptoms. This system is elaborated further for the detection of attacks in computer networks, where it uses user behaviour modelling for anomaly detection. The detector uses the k-nearest neighbours (k-NN) method, and its ability to recognise APT attacks in a network environment is verified by implementing and testing it.
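
A minimal sketch of k-NN anomaly detection over user-behaviour features, in the spirit of the network detector described above. The per-session features, the value of k, and the percentile threshold are illustrative assumptions, not the thesis's choices:

```python
# Sketch: flag sessions whose mean distance to the k nearest baseline
# sessions exceeds a threshold learned from normal behaviour.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Hypothetical per-session features: [logins/hour, bytes out (MB), distinct hosts]
normal_sessions = rng.normal([2, 5, 3], [0.5, 2, 1], size=(500, 3))

k = 5
nn = NearestNeighbors(n_neighbors=k).fit(normal_sessions)

def anomaly_score(session: np.ndarray) -> float:
    """Mean distance to the k nearest baseline sessions."""
    dist, _ = nn.kneighbors(session.reshape(1, -1))
    return float(dist.mean())

# Threshold from the empirical distribution of baseline scores.
baseline = np.array([anomaly_score(s) for s in normal_sessions])
threshold = np.percentile(baseline, 99)

suspicious = np.array([2.5, 400.0, 40.0])     # e.g. a large exfiltration burst
print(anomaly_score(suspicious) > threshold)  # True -> flag for analysts
```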
25

Real-Time Automatic Price Prediction for eBay Online Trading

Raykhel, Ilya Igorevitch 30 November 2008 (has links) (PDF)
While Machine Learning is one of the most popular research areas in Computer Science, there are still only a few deployed applications intended for use by the general public. We have developed an exemplary application that can be directly applied to eBay trading. Our system predicts how much an item would sell for on eBay based on that item's attributes. We ran our experiments on the eBay laptop category, with prior trades used as training data. The system implements a feature-weighted k-Nearest Neighbor algorithm, using genetic algorithms to determine feature weights. Our results demonstrate an average prediction error of 16%; we have also shown that this application greatly reduces the time a reseller would need to spend on trading activities, since the bulk of market research is now done automatically with the help of the learned model.
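
A minimal sketch of the feature-weighted k-NN predictor described above. To stay short, the genetic-algorithm weight search is replaced by a crude random search over weight vectors; the laptop features and prices are made up:

```python
# Sketch: price = mean of the k nearest neighbours under a weighted
# Euclidean distance, with feature weights chosen on a validation split.
import numpy as np

def knn_predict(X_train, y_train, x, w, k=5):
    """Predict price as the mean of the k nearest weighted-distance neighbours."""
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
    return y_train[np.argsort(d)[:k]].mean()

rng = np.random.default_rng(3)
# Hypothetical laptop features: [RAM (GB), CPU (GHz), screen (in), age (yr)]
X = rng.uniform([2, 1.0, 11, 0], [16, 3.5, 17, 6], size=(400, 4))
y = 60 * X[:, 0] + 150 * X[:, 1] + 20 * X[:, 2] - 80 * X[:, 3] \
    + rng.normal(0, 50, 400)

X_tr, y_tr, X_va, y_va = X[:300], y[:300], X[300:], y[300:]

def mean_abs_error(w):
    preds = np.array([knn_predict(X_tr, y_tr, x, w) for x in X_va])
    return np.abs(preds - y_va).mean()

# Stand-in for the GA: keep the best of many random weight vectors.
best_w = min((rng.uniform(0, 1, 4) for _ in range(200)), key=mean_abs_error)
print("validation MAE with learned weights:", round(mean_abs_error(best_w), 1))
```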
26

Development of new data fusion techniques for improving snow parameters estimation

De Gregorio, Ludovica 26 November 2019 (has links)
Water stored in snow is a critical contribution to the world's available freshwater supply and is fundamental to the sustenance of natural ecosystems, agriculture and human societies. The importance of snow for the natural environment and for many socio-economic sectors in several mid- to high-latitude mountain regions around the world leads scientists to continuously develop new approaches to monitor and study snow and its properties. The need for new monitoring methods arises from the limitations of in situ measurements, which are pointwise, only possible in accessible and safe locations, and do not allow continuous monitoring of the evolution of the snowpack and its characteristics. These limitations have been overcome by the increasingly used methods of remote monitoring with space-borne sensors, which capture the wide spatial and temporal variability of the snowpack. Snow models, which represent the physical processes that occur in the snowpack, are an alternative to remote sensing for studying snow characteristics. However, the literature shows that remote sensing and snow models each have limitations as well as significant strengths, which are worth exploiting jointly to obtain improved snow products. Accordingly, the main objective of this thesis is the development of novel methods for the estimation of snow parameters that exploit the complementary properties of remote sensing and snow model data. In particular, this thesis presents the following specific novel contributions:
i. A novel data fusion technique for improving snow cover mapping. The proposed method exploits the snow cover maps derived from the AMUNDSEN snow model and the MODIS product, together with their quality layers, in a decision-level fusion approach by means of a machine learning technique, namely the Support Vector Machine (SVM); a minimal sketch of this step follows this abstract.
ii. A new approach for improving the snow water equivalent (SWE) product obtained from AMUNDSEN model simulations. The proposed method exploits auxiliary information from optical remote sensing and from the topographic characteristics of the study area in an approach that differs from classical data assimilation and is instead based on estimating the AMUNDSEN error with respect to ground data through a k-NN algorithm. The new product has been validated against ground measurements and compared with MODIS snow cover maps. In a second step, the contribution of information derived from X-band SAR imagery acquired by the COSMO-SkyMed constellation has been evaluated, using simulations from a theoretical model to enlarge the dataset.
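
A minimal sketch of contribution (i), assuming per-pixel inputs: two binary snow maps plus their quality layers feed an SVM that outputs the fused snow-cover decision. The layer names, error rates, and data are illustrative, not the thesis's actual AMUNDSEN/MODIS products:

```python
# Sketch: decision-level fusion of two snow maps with an SVM, using the
# maps and their quality layers as features. All inputs are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 1000

truth = rng.integers(0, 2, n)                                  # reference snow presence
model_map = np.where(rng.random(n) < 0.85, truth, 1 - truth)   # AMUNDSEN-like map
sensor_map = np.where(rng.random(n) < 0.80, truth, 1 - truth)  # MODIS-like map
model_quality = rng.random(n)                                  # quality layers in [0, 1]
sensor_quality = rng.random(n)

X = np.column_stack([model_map, sensor_map, model_quality, sensor_quality])

# Train on 800 pixels, evaluate the fused decision on the remaining 200.
svm = SVC(kernel="rbf").fit(X[:800], truth[:800])
print("fused map accuracy:", svm.score(X[800:], truth[800:]))
```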
27

Feature extraction and similarity-based analysis for proteome and genome databases

Ozturk, Ozgur 20 September 2007 (has links)
No description available.
28

Development of efficient forest inventory techniques for forest resource assessment in South Korea / Entwicklung effizienter Inventurmethoden zur großräumigen Erfassung von Waldressourcen in Süd-Korea

Yim, Jong-Su 12 December 2008 (has links)
No description available.
29

Stochastic Simulation Of Daily Rainfall Data Using Matched Block Bootstrap

Santhosh, D 06 1900 (has links)
Characterizing the uncertainty in rainfall using stochastic models has been a challenging area of research in the field of operational hydrology for about half a century. Simulated sequences drawn from such models find use in a variety of hydrological applications. Traditionally, parametric models are used for simulating rainfall, but they are not parsimonious and carry uncertainties associated with the identification of the model form, the normalizing transformation, and parameter estimation. None of the models in vogue has gained universal acceptance among practising engineers, whether due to a lack of confidence in the existing models, the inability to adopt models proposed in the literature because of their complexity, or both. In the present study, a new nonparametric Matched Block Bootstrap (MABB) model is proposed for the stochastic simulation of rainfall at the daily time scale. It is based on conditional matching of blocks formed from the historical rainfall data, using a set of predictors (conditioning variables) proposed for matching the blocks. The efficiency of the developed model is demonstrated through application to rainfall data from India, Australia, and the USA. The performance of MABB is compared with that of two nonparametric rainfall simulation models, k-NN and ROG-RAG, for a site in Melbourne, Australia. The results show that the MABB model is a feasible alternative to the ROG-RAG and k-NN models for simulating daily rainfall sequences for hydrologic applications, and that the MABB and ROG-RAG models outperform the k-NN model. The proposed MABB model preserved the summary statistics of rainfall and the fraction of wet days at daily, monthly, seasonal and annual scales, and it provided reasonable performance in simulating spell statistics. MABB is parsimonious, requires less computational effort than the ROG-RAG model, and reproduces the probability density function (marginal distribution) fairly well owing to its data-driven nature. Results obtained for sites in India and the USA show that the model is robust and promising.
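
A rough sketch of the matched-block idea: historical daily rainfall is cut into blocks, and each simulated block is resampled from the historical blocks whose conditioning statistics best match the previously simulated block. The block length and the predictors used here (block mean and wet-day fraction) are illustrative stand-ins for the thesis's choices, and the data are synthetic:

```python
# Sketch: conditionally matched block bootstrap of daily rainfall.
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical historical record: 50 years x 365 days of rainfall (mm).
wet = rng.random((50, 365)) < 0.3
hist = np.where(wet, rng.gamma(0.8, 8.0, (50, 365)), 0.0)

block_len = 30
blocks = hist[:, : (365 // block_len) * block_len].reshape(-1, block_len)

def predictors(block: np.ndarray) -> np.ndarray:
    """Conditioning variables: block mean and fraction of wet days."""
    return np.array([block.mean(), (block > 0).mean()])

feats = np.array([predictors(b) for b in blocks])

def simulate(n_blocks: int, k: int = 10) -> np.ndarray:
    """Chain blocks by sampling among the k nearest matches to the last block."""
    out = [blocks[rng.integers(len(blocks))]]
    for _ in range(n_blocks - 1):
        d = np.linalg.norm(feats - predictors(out[-1]), axis=1)
        out.append(blocks[rng.choice(np.argsort(d)[:k])])
    return np.concatenate(out)

series = simulate(12)  # roughly one year of simulated daily rainfall
print("mean (mm/day):", series.mean(), " wet fraction:", (series > 0).mean())
```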
30

Využití umělé inteligence ve vibrodiagnostice / Utilization of artificial intelligence in vibrodiagnostics

Dočekalová, Petra January 2021 (has links)
The diploma thesis deals with machine learning, expert systems, fuzzy logic, genetic algorithms, neural networks and chaos theory, which fall into the category of artificial intelligence. The aim of this work is to describe and implement three different classification methods and to process a data set with each of them. The GNU Octave software environment was chosen for the data application for licensing reasons. The success of the data classification is then evaluated, including visualization, and the three classification methods are compared against one another on the processed data.
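
A minimal illustration of the central task, comparing three classification methods on a single data set; it is sketched here with scikit-learn and placeholder data, not the thesis's GNU Octave implementation, methods, or data set:

```python
# Sketch: evaluate three classifiers on one data set via cross-validation
# and compare their mean accuracies.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

methods = {
    "k-NN": KNeighborsClassifier(),
    "SVM": SVC(),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
}

for name, clf in methods.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```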
