  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Contrasting Environments Associated with Storm Prediction Center Tornado Outbreak Forecasts using Synoptic-Scale Composite Analysis

Bates, Alyssa Victoria 17 May 2014 (has links)
Tornado outbreaks have significant human impact, so it is imperative that forecasts of these phenomena be accurate. As the synoptic setup lays the foundation for a forecast, synoptic-scale aspects of Storm Prediction Center (SPC) outbreak forecasts of varying accuracy were assessed. The percentages of tornado outbreaks captured within SPC 10% tornado probability polygons were calculated, and false alarm events were considered separately. The outbreaks were separated into quartiles using a point-in-polygon algorithm. Statistical composite fields were created to represent the synoptic conditions of these groups and facilitate comparison. Overall, temperature advection showed the greatest differences between the groups. Additionally, there were significant differences in jet streak strength and the amount of vertical wind shear. The events forecast with low accuracy were associated with the weakest synoptic-scale setups. These results suggest that events with weak synoptic setups should be regarded as areas of concern by tornado outbreak forecasters.
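To illustrate the kind of point-in-polygon test referred to above, the sketch below implements the standard ray-casting check in Python; the polygon vertices and outbreak centroid are hypothetical values, not SPC forecast data.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a horizontal ray
    from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical example: is an outbreak centroid inside a 10% probability polygon?
polygon = [(-98.0, 33.0), (-95.0, 33.5), (-94.5, 36.0), (-97.5, 36.5)]
print(point_in_polygon(-96.0, 34.5, polygon))  # True
```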
12

WASP: An Algorithm for Ranking College Football Teams

Earl, Jonathan January 2016 (has links)
Arrow's Impossibility Theorem outlines the flaws that affect any voting system that attempts to order a set of objects. For its entire history, American college football has determined its champion through a voting system. Much of the literature has dealt with why the voting system used is problematic, but there does not appear to be a large body of work aimed at creating a better, mathematical process. More generally, the inadequacies of ranking in football are a manifestation of the broader problem of ranking a set of objects. Herein, principal component analysis is used as a tool to provide a solution for this problem in the context of American college football. To show its value, rankings based on principal component analysis are compared against the rankings used in American college football. / Thesis / Master of Science (MSc) / The problem of ranking is ubiquitous, appearing everywhere from Google to ballot boxes. One of the more notable areas where this problem arises is in awarding the championship in American college football. This paper explains why this problem exists in American college football and presents a bias-free mathematical solution that is compared against how American college football awards its championship.
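As a rough illustration of ranking with principal component analysis (not the WASP algorithm itself, whose details are not given here), the sketch below scores hypothetical teams by their projection onto the first principal component of a standardized statistics matrix.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical season statistics (rows = teams, columns = per-game metrics
# such as points scored, points allowed (negated), yards gained, turnovers forced).
teams = ["Team A", "Team B", "Team C", "Team D"]
stats = np.array([
    [38.2, -17.1, 480.0, 1.9],
    [31.5, -21.4, 455.0, 1.2],
    [24.8, -28.0, 390.0, 0.8],
    [41.0, -14.5, 510.0, 2.1],
])

# Standardize so no single metric dominates, then project onto the first PC.
z = (stats - stats.mean(axis=0)) / stats.std(axis=0)
scores = PCA(n_components=1).fit_transform(z).ravel()

# Orient the axis so that better raw statistics imply a higher score.
if np.corrcoef(scores, stats.sum(axis=1))[0, 1] < 0:
    scores = -scores

for rank, i in enumerate(np.argsort(-scores), start=1):
    print(rank, teams[i], round(scores[i], 3))
```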
13

A Quantitative Analysis of Pansharpened Images

Vijayaraj, Veeraraghavan 07 August 2004 (has links)
There has been an exponential increase in the availability of satellite image data. Image data are now collected at different spatial, spectral, and temporal resolutions. Image fusion techniques are used extensively to combine different images with complementary information into a single composite. The fused image contains richer information that improves the performance of image analysis algorithms. Pansharpening is a pixel-level fusion technique used to increase the spatial resolution of a multispectral image using spatial information from a high-resolution panchromatic image, while preserving the spectral information in the multispectral image. Resolution merge, image integration, and multisensor data fusion are some equivalent terms used for pansharpening. Pansharpening techniques are applied to enhance features not visible in either data set alone, to detect change using temporal data sets, to improve geometric correction, and to enhance classification. Various pansharpening algorithms are available in the literature, and some have been incorporated in commercial remote sensing software packages such as ERDAS Imagine® and ENVI®. The performance of these algorithms varies both spectrally and spatially. Hence, evaluation of the spectral and spatial quality of pansharpened images using objective quality metrics is necessary. In this thesis, quantitative metrics for evaluating the quality of pansharpened images have been developed. For this study, Intensity-Hue-Saturation (IHS) based sharpening, Brovey sharpening, Principal Component Analysis (PCA) based sharpening, and a wavelet-based sharpening method are used.
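As a rough sketch of one of the methods named above, the Brovey transform scales each multispectral band by the ratio of the panchromatic band to the sum of the multispectral bands; the per-band correlation shown afterwards is a simple stand-in for a spectral quality metric, and all arrays here are synthetic.

```python
import numpy as np

def brovey_sharpen(ms, pan, eps=1e-6):
    """Brovey pansharpening: scale each upsampled multispectral band by the
    ratio of the panchromatic band to the sum of the multispectral bands.
    ms  : array of shape (bands, H, W), already resampled to the pan grid
    pan : array of shape (H, W)
    """
    intensity = ms.sum(axis=0) + eps
    return ms * (pan / intensity)

def band_correlation(a, b):
    """Spectral-quality check: correlation between an original band and the
    corresponding sharpened band (closer to 1 means better preservation)."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Synthetic data standing in for a resampled multispectral cube and a pan band.
rng = np.random.default_rng(0)
ms = rng.uniform(0.1, 1.0, size=(4, 128, 128))
pan = ms.mean(axis=0) + rng.normal(0, 0.02, size=(128, 128))

fused = brovey_sharpen(ms, pan)
for i in range(ms.shape[0]):
    print(f"band {i}: correlation = {band_correlation(ms[i], fused[i]):.3f}")
```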
14

Large Scale Matrix Completion and Recommender Systems

Amadeo, Lily 04 September 2015 (has links)
"The goal of this thesis is to extend the theory and practice of matrix completion algorithms, and how they can be utilized, improved, and scaled up to handle large data sets. Matrix completion involves predicting missing entries in real-world data matrices using the modeling assumption that the fully observed matrix is low-rank. Low-rank matrices appear across a broad selection of domains, and such a modeling assumption is similar in spirit to Principal Component Analysis. Our focus is on large scale problems, where the matrices have millions of rows and columns. In this thesis we provide new analysis for the convergence rates of matrix completion techniques using convex nuclear norm relaxation. In addition, we validate these results on both synthetic data and data from two real-world domains (recommender systems and Internet tomography). The results we obtain show that with an empirical, data-inspired understanding of various parameters in the algorithm, this matrix completion problem can be solved more efficiently than some previous theory suggests, and therefore can be extended to much larger problems with greater ease. "
15

Classification of Genotype and Age of Eyes Using RPE Cell Size and Shape

Yu, Jie 18 December 2012 (has links)
The retinal pigment epithelium (RPE) is a principal site of pathogenesis in age-related macular degeneration (AMD). AMD is a leading cause of vision loss, and even blindness, in the elderly, and there is currently no effective treatment. Our aim is to describe the relationship between the morphology of RPE cells and the age and genotype of the eyes. We use principal component analysis (PCA) or functional principal component analysis (FPCA), support vector machine (SVM), and random forest (RF) methods to analyze the morphological data of RPE cells in mouse eyes in order to classify their age and genotype. Our analyses show that, among all morphometric measures of RPE cells, cell shape measurements (eccentricity and solidity) are good for classification, but the combination of cell shape and size (perimeter) provides the best classification.
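The sketch below shows how morphometric features could feed a PCA-plus-classifier pipeline of the kind described above; the cell measurements are simulated, and the particular pipeline (StandardScaler, 3 components, default SVM and random forest settings) is an assumption for illustration, not the thesis's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Simulated morphometric table: one row per RPE cell with
# [area, perimeter, eccentricity, solidity], plus an age-group label per cell.
rng = np.random.default_rng(2)
n = 300
young = np.column_stack([rng.normal(250, 20, n), rng.normal(60, 5, n),
                         rng.normal(0.55, 0.05, n), rng.normal(0.95, 0.01, n)])
old = np.column_stack([rng.normal(300, 40, n), rng.normal(70, 8, n),
                       rng.normal(0.70, 0.08, n), rng.normal(0.90, 0.02, n)])
X = np.vstack([young, old])
y = np.array([0] * n + [1] * n)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("random forest", RandomForestClassifier(n_estimators=200))]:
    model = make_pipeline(StandardScaler(), PCA(n_components=3), clf)
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```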
16

Comparison of Classification Effects of Principal Component and Sparse Principal Component Analysis for Cardiology Ultrasound in Left Ventricle

Yang, Hsiao-ying 05 July 2012 (has links)
Because heart diseases are associated with the patterns of diastole and systole of the left ventricle, we analyze and classify data gathered from Kaohsiung Veterans General Hospital using cardiology ultrasound images. We make use of the differences between the gray-scale values of diastoles and systoles in the left ventricle to evaluate heart function. Following Chen (2011) and Kao (2011), we modified the approach to the reduction and alignment of the image data, and we also added more subjects to the study. We treat the images in two ways, retaining the parts of concern. Since an ultrasound image, after transformation to data form, is expressed as a high-dimensional matrix, principal component analysis is adopted to retain the important factors and reduce the dimensionality. In this work, we compare the loadings calculated by ordinary and sparse principal component analysis; the factor scores are then used to carry out discriminant analysis, and the classification accuracy is discussed. With the statistical methods in this work, the accuracy, sensitivity, and specificity of the original classifications are over 80%, and the cross-validated values are over 60%.
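A minimal comparison of ordinary and sparse PCA loadings, with the factor scores passed to a linear discriminant classifier, is sketched below on synthetic stand-ins for flattened gray-scale images; the component count and sparsity penalty are arbitrary choices, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for flattened gray-scale difference images
# (rows = subjects, columns = pixels); labels mark two diagnostic groups.
rng = np.random.default_rng(3)
X = rng.normal(size=(80, 400))
X[:40, :50] += 1.0            # give one group extra signal in a pixel block
y = np.array([0] * 40 + [1] * 40)

for name, model in [("PCA", PCA(n_components=5)),
                    ("sparse PCA", SparsePCA(n_components=5, alpha=1.0, random_state=0))]:
    scores = model.fit_transform(X)      # factor scores
    loadings = model.components_         # loadings (sparse PCA zeroes many entries)
    sparsity = np.mean(loadings == 0)
    acc = cross_val_score(LinearDiscriminantAnalysis(), scores, y, cv=5).mean()
    print(f"{name}: fraction of zero loadings = {sparsity:.2f}, "
          f"LDA cross-validated accuracy = {acc:.2f}")
```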
17

Examining the Relationship Between Hydroclimatological Variables and High Flow Events

Fliehman, Ryan Mark January 2012 (has links)
In our study we identify dominant hydroclimatic variables and large-scale patterns that lead to high streamflow events in the Santa Cruz, Salt, and Verde Rivers in Arizona for the period 1979-2009 using Principal Component Analysis (PCA). We used winter (Nov-March) data from the USGS daily streamflow database and 11 variables from the North American Regional Reanalysis (NARR) database, in addition to weather maps from the Hydrometeorological Prediction Center (HPC). Using streamflow data, we identify precipitation events that led to daily streamflow above the 98th percentile and find the dominant hydroclimatic variables associated with these events. We find that upper-level winds and moisture fluxes are the dominant variables characterizing these events. The dominant mode for all three basins is associated with frontal systems, while the second mode is associated with cut-off upper-level low pressure systems. Our goal is to provide forecasting agencies with tools to improve flood forecasting practices.
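The event-selection step described above can be sketched as a simple percentile threshold on a daily streamflow series; the record below is synthetic, and the grouping of consecutive days into events is one plausible convention, not necessarily the study's.

```python
import numpy as np

# Synthetic daily winter streamflow record (cubic feet per second).
rng = np.random.default_rng(4)
flow = rng.lognormal(mean=4.0, sigma=0.8, size=151 * 30)   # ~30 Nov-Mar seasons

# Flag days above the 98th percentile, then group consecutive days into events.
threshold = np.percentile(flow, 98)
high = flow > threshold
event_starts = np.flatnonzero(high & ~np.r_[False, high[:-1]])

print(f"98th percentile threshold: {threshold:.1f} cfs")
print(f"high-flow days: {high.sum()}, distinct events: {event_starts.size}")
```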
18

Resilient Average and Distortion Detection in Sensor Networks

Aguirre Jurado, Ricardo 15 May 2009 (has links)
In this paper a resilient sensor network is built in order to lessen the effects of a small portion of corrupted sensors when an aggregated result such as the average needs to be obtained. By examining the variance in sensor readings, a change in the pattern can be spotted and minimized in order to maintain a stable aggregated reading. Offsets in sensor readings are also analyzed and compensated for, to help reduce bias changes in the average. These two analytical techniques are later combined in a Kalman filter to produce a smooth and resilient average from the readings of the individual sensors. In addition, principal component analysis is used to detect variations in the sensor network. Experiments are conducted using real MICAz sensors, which are used to gather light measurements in a small area and display the average light level generated in that area.
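A one-dimensional Kalman filter for smoothing a stream of aggregated light readings might look like the sketch below; the process and measurement variances (q, r) and the injected spikes are made-up values for illustration, and the variance- and offset-compensation steps of the paper are not reproduced here.

```python
import numpy as np

def kalman_average(readings, q=0.01, r=4.0):
    """Smooth a stream of aggregated sensor readings with a 1-D Kalman filter.
    q is the assumed process variance (how fast the true light level drifts),
    r is the assumed measurement variance of the aggregated reading."""
    x, p = readings[0], 1.0          # initial state estimate and its variance
    smoothed = [x]
    for z in readings[1:]:
        p = p + q                     # predict: uncertainty grows between rounds
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the new aggregated reading
        p = (1 - k) * p
        smoothed.append(x)
    return np.array(smoothed)

# Synthetic light readings: a stable level with a few corrupted spikes.
rng = np.random.default_rng(5)
raw = 500 + rng.normal(0, 2, 200)
raw[[50, 51, 120]] += 300            # a small portion of corrupted sensors
print(kalman_average(raw)[:5].round(1))
```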
19

Extração de características de imagens de faces humanas através de wavelets, PCA e IMPCA / Features extraction of human faces images through wavelets, PCA and IMPCA

Bianchi, Marcelo Franceschi de 10 April 2006 (has links)
Pattern recognition in images is an area of great interest in the scientific world. So-called feature extraction methods are able to extract characteristics from images and to reduce the dimensionality of the data, producing the feature vector. Given a query image, the goal of a human face image recognition system is to search an image database for the image most similar to the query image according to a given criterion. This research addresses the generation of feature vectors for an image recognition system operating over databases of human face images, to support this kind of query. A feature vector is a numerical representation of an image, or part of it, describing its most representative details; it is an n-dimensional vector containing those values. This new representation of the image benefits the recognition process by reducing the dimensionality of the data, and it can be stored in a database to allow fast image retrieval. An alternative way to characterize images for a human face recognition system is a domain transform, whose principal advantage is its effective characterization of local image properties. 
In recent years, research in applied mathematics and signal processing has developed practical wavelet methods for the multiscale representation and analysis of signals. Wavelets differ from traditional Fourier techniques in the way they localize information in the time-frequency plane; in particular, they can trade one type of resolution for another, which makes them especially suitable for the analysis of non-stationary signals, representing a signal in different frequency bands, each with a resolution matching its scale. Wavelets have been successfully applied to image compression, enhancement, analysis, classification, characterization, and retrieval. One privileged area of application where these properties have been found relevant is computer vision, particularly the representation and description of human face images. This work describes an approach to human face image recognition in which feature extraction is based on multiresolution wavelet decomposition using the Haar, Daubechies, Biorthogonal, Reverse Biorthogonal, Symlet, and Coiflet filters. The PCA (Principal Component Analysis) and IMPCA (Image Principal Component Analysis) techniques were tested in combination with these wavelets, and the best results were obtained using the Biorthogonal wavelet with the IMPCA technique.
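A rough sketch of wavelet-based feature extraction followed by a PCA projection is given below, using the PyWavelets package; it uses plain PCA on flattened coefficients rather than IMPCA, the images are random stand-ins for a face database, and the wavelet name, decomposition level, and component count are arbitrary choices.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_features(image, wavelet="bior2.2", level=2):
    """Feature vector from a 2-level wavelet decomposition: the flattened
    approximation coefficients, which summarize the face at a coarse scale."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    approx = coeffs[0]                # low-frequency sub-band
    return approx.ravel()

# Random stand-in for a small face database (64x64 gray-scale images).
rng = np.random.default_rng(6)
images = rng.random((40, 64, 64))

features = np.array([wavelet_features(img) for img in images])
reduced = PCA(n_components=10).fit_transform(features)   # final feature vectors
print(features.shape, "->", reduced.shape)
```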
20

Hypothesis formulation in medical records space

Ba-Dhfari, Thamer Omer Faraj January 2017 (has links)
Patient medical records are a valuable resource that can be used for many purposes, including managing and planning for future health needs as well as clinical research. Health databases such as the Clinical Practice Research Datalink (CPRD) and many other similar initiatives can provide researchers with a useful data source on which they can test their medical hypotheses. However, this can only be the case when researchers have a good set of hypotheses to test on the data. Conversely, the data may have other equally important areas that remain unexplored, and there is a chance that some important signals in the data could be missed. Therefore, further analysis is required to make such hidden areas more obvious and attainable for future exploration and investigation. Data mining techniques can be effective tools in discovering patterns and signals in large-scale patient data sets. These techniques have been widely applied to different areas of the medical domain. Therefore, analysing patient data using such techniques has the potential to explore the data and to provide a better understanding of the information in patient records. However, the heterogeneity and complexity of medical data can be an obstacle in applying data mining techniques, and much of the potential value of this data therefore goes untapped. This thesis describes a novel methodology that reduces the dimensionality of primary care data, to make it more amenable to visualisation, mining and clustering. The methodology involves employing a combination of ontology-based semantic similarity and principal component analysis (PCA) to map the data into an appropriate and informative low dimensional space. The aim of this thesis is to develop a novel methodology that provides a visualisation of patient records. This visualisation provides a systematic method that allows the formulation of new and testable hypotheses which can be fed to researchers to carry out the subsequent phases of research. In a small-scale study based on Salford Integrated Record (SIR) data, I have demonstrated that this mapping provides informative views of patient phenotypes across a population and allows the construction of clusters of patients sharing common diagnoses and treatments. The next phase of the research was to develop this methodology and explore its application using larger patient cohorts. Such data contain more precise relationships between features than small-scale data, and also allow distinct population patterns to be understood and common features to be extracted. For these reasons, I applied the mapping methodology to patient records from the CPRD database. The study data set consisted of anonymised patient records for a population of 2.7 million patients. The work done in this analysis shows that the methodology scales as O(n) and does not require large computing resources. The low dimensional visualisation of high dimensional patient data allowed the identification of different subpopulations of patients across the study data set, where each subpopulation consisted of patients sharing similar characteristics such as age, gender and certain types of diseases. A key finding of this research is the wealth of data that can be produced. In the first use case, looking at the stratification of patients with falls, the methodology yielded important hypotheses; however, this work has barely scratched the surface of how this mapping could be used. 
It opens up the possibility of applying a wide range of data mining strategies that have not yet been explored. What the thesis has shown is one strategy that works, but there could be many more. Furthermore, there is no aspect of the implementation of this methodology that restricts it to medical data. The same methodology could equally be applied to the analysis and visualisation of many other sources of data that are described using terms from taxonomies or ontologies.
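As a toy illustration of mapping coded records into a low-dimensional space, the sketch below uses a set-overlap (Jaccard) score as a stand-in for the ontology-based semantic similarity described above, followed by PCA on the similarity matrix; the patients and coded terms are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical patients, each described by a set of coded terms; a set-overlap
# (Jaccard) score stands in for the ontology-based semantic similarity.
patients = {
    "p1": {"asthma", "salbutamol", "eczema"},
    "p2": {"asthma", "salbutamol", "hay fever"},
    "p3": {"type 2 diabetes", "metformin", "hypertension"},
    "p4": {"type 2 diabetes", "metformin", "statin"},
    "p5": {"hypertension", "statin", "fall"},
}
ids = list(patients)

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Pairwise similarity matrix, then PCA to map each patient to a 2-D point.
S = np.array([[jaccard(patients[i], patients[j]) for j in ids] for i in ids])
coords = PCA(n_components=2).fit_transform(S)

for pid, (x, y) in zip(ids, coords):
    print(f"{pid}: ({x:+.2f}, {y:+.2f})")
```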
