321

Centrality measures in complex networks: correlations, effectiveness and characterization of systems

Ronqui, José Ricardo Furlan 19 February 2014
Centrality measures were developed to evaluate the importance of nodes and links based on the structure of a network. They are essential in the study of complex networks because the systems these networks represent are usually large, making manual analysis of every node and link impossible; identifying the most relevant elements is therefore a vital task. Because nodes and links can be important for different reasons, many measures have been proposed, each intended to highlight elements that the others miss. In this study, we use Pearson's correlation coefficient to measure the similarity between the rankings of nodes and links provided by different centralities, for both real-world networks and theoretical models. We also attack the networks, using these rankings to set the order in which nodes and links are removed, in order to evaluate how effective each measure is and how the systems react to attacks guided by different centralities. Finally, we use the correlation coefficients between pairs of centralities as features of each network and perform a principal component analysis on them, to test whether structural differences among networks can be detected from the correlations alone. Our results show that centrality measures are frequently correlated, meaning that the same elements tend to be highlighted by different centralities; we also observe that the correlations are stronger in models than in real-world networks. The attack experiments show that even strongly correlated measures can affect networks in distinct ways, so the particular set of nodes and links selected by each measure matters. Our last result demonstrates that correlations among centrality measures can be used both to characterize and differentiate networks and to evaluate which models best represent the structure of a specific system.
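As a rough illustration of the correlation analysis this abstract describes, the sketch below builds a model network, computes four classic centralities, and reports the Pearson correlation between each pair of node rankings. The use of networkx and this particular set of centralities is an assumption made for illustration; the thesis does not name its tools.

```python
# Minimal sketch: pairwise Pearson correlations between centrality rankings
# on a Barabási–Albert model network (stand-in for the thesis's networks).
import networkx as nx
from scipy.stats import pearsonr

G = nx.barabasi_albert_graph(500, 3)  # synthetic model network

centralities = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

nodes = list(G.nodes())
names = list(centralities)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        x = [centralities[a][n] for n in nodes]
        y = [centralities[b][n] for n in nodes]
        r, _ = pearsonr(x, y)
        print(f"{a} vs {b}: r = {r:.3f}")
```

On scale-free models such as this one the coefficients tend to be high, consistent with the abstract's observation that correlations are stronger in models than in real systems.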
322

Comparative analysis of XGBoost, MLP and LSTM techniques for the problem of predicting fire brigade interventions

Cerna Ñahuis, Selene Leya January 2019
Advisor: Anna Diva Plasencia Lotufo / Abstract: Many environmental, economic and societal factors are leading fire brigades to be solicited more and more often; as a result, they face an ever-increasing number of interventions, most of the time with constant resources. These interventions are directly related to human activity, which is itself predictable: swimming-pool drownings occur in summer, while road accidents due to ice storms occur in winter. One way to improve the response of firefighters under constant resources is therefore to predict their workload, i.e., their number of interventions per hour, from explanatory variables conditioning human activity. The present work develops and compares three models to determine whether they can predict the firefighters' response load reasonably well. The tools chosen are representative of their respective categories in machine learning: XGBoost, built on decision trees; the Multi-Layer Perceptron, a classic neuron-based method; and Long Short-Term Memory, a more advanced neuron-based algorithm. The entire process is detailed, from data collection to obtaining the predictions. The results show predictions of reasonable quality that can be further improved by data-science techniques such as feature selection and hyperparameter tuning. / Master's
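The sketch below illustrates the kind of comparison the abstract describes, fitting XGBoost and an MLP to synthetic hourly intervention counts. The calendar features, the simulated workload, and the omission of the LSTM (which would need a deep-learning framework) are all simplifications for illustration, not the study's actual setup.

```python
# Hedged sketch: compare two regressors on synthetic hourly counts.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
# Features: hour of day, day of week, day of year (invented stand-ins).
X = np.column_stack([hours % 24, (hours // 24) % 7, (hours // 24) % 365])
# Simulated workload: daily cycle plus Poisson noise.
y = 3 + 2 * np.sin(2 * np.pi * X[:, 0] / 24) + rng.poisson(1.0, len(hours))

# Chronological split, as befits time-series data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)
for name, model in [("XGBoost", XGBRegressor(n_estimators=200)),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500))]:
    model.fit(X_tr, y_tr)  # a real run would also scale features for the MLP
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```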
323

Representation and Interpretation of Manual and Non-Manual Information for Automated American Sign Language Recognition

Parashar, Ayush S 09 July 2003
Continuous recognition of sign language has many practical applications, and it can improve the quality of life of deaf persons by facilitating their interaction with the hearing populace in public situations. This has led to some research in automated continuous American Sign Language (ASL) recognition, but most work has used top-down Hidden Markov Model (HMM) based approaches, and none has used facial information, which is considered fairly important. In this thesis, we explore a bottom-up approach based on Relational Distributions and the Space of Probability Functions (SoPF) for intermediate-level ASL recognition. We also use non-manual information, first to decrease the number of deletion and insertion errors, and second to detect whether a sentence contains 'Negation', for which we use motion trajectories of the face. The experimental results show that the SoPF representation works well for ASL recognition: accuracy based on the number of deletion errors is 95% when considering the 8 most probable signs in a sentence and 88% when considering the 6 most probable signs. Using facial (non-manual) information raises the top-6 accuracy from 88% to 92%, so the face does carry information content. Directly combining manual information (from hand motion) with non-manual (facial) information to improve accuracy is difficult for two reasons. First, manual images are not synchronized with the non-manual images; for example, the same facial expression is not present at the same manual position in two instances of the same sentence. Second, when a strong non-manual indicating 'Assertion' or 'Negation' is present, the facial expressions are dominated by face movements such as head shakes or head nods, making it hard to associate an expression with a sign. With the help of motion trajectories of the face, 27 of the 30 sentences containing 'Negation' are correctly recognized.
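A minimal sketch of the relational-distribution idea underlying the SoPF representation: a 2-D histogram of pairwise offsets between sampled image points, which captures the spatial structure of a pose. The sampling, binning and normalization choices here are assumptions for illustration, not the thesis's exact procedure.

```python
# Sketch: a relational distribution as a normalized 2-D histogram of
# pairwise (dx, dy) offsets between sampled edge/feature points.
import numpy as np

def relational_distribution(points, bins=16, span=100.0):
    """Histogram of all pairwise offsets, normalized to sum to 1."""
    diffs = points[:, None, :] - points[None, :, :]      # all pairwise offsets
    dx, dy = diffs[..., 0].ravel(), diffs[..., 1].ravel()
    hist, _, _ = np.histogram2d(
        dx, dy, bins=bins, range=[[-span, span], [-span, span]])
    return hist / hist.sum()

# Stand-in for edge points extracted from one video frame.
pts = np.random.default_rng(1).uniform(0, 100, size=(50, 2))
P = relational_distribution(pts)
print(P.shape, P.sum())  # (16, 16) 1.0 -- one point in the space of probability functions
```

A sequence of frames then traces a trajectory through this space of probability functions, which is what the recognition stage operates on.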
324

Tree species classification using support vector machine on hyperspectral images

Hedberg, Rikard January 2010
For several years, FORAN Remote Sensing in Linköping has been using pulse-intense laser scanning together with multispectral imaging to develop analysis methods in forestry. One use of these laser scans and images is to classify the species of single trees in forests. The species have been divided into pine, spruce and deciduous trees, classified by a Maximum Likelihood classifier. This thesis presents work on spectrally higher-resolution imagery: hyperspectral images. These images are divided into more, and finer graded, spectral components, but demand more signal processing. A new classifier, the Support Vector Machine, is tested against the previously used Maximum Likelihood classifier to see whether performance can be improved. The classifiers are also set to divide the deciduous trees into aspen, birch, black alder and gray alder. The thesis shows how the new data set is handled and processed for the different classifiers, and shows how a better result can be achieved using a Support Vector Machine.
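A toy sketch of the comparison described above, pitting an RBF-kernel Support Vector Machine against a Gaussian maximum-likelihood classifier (realized here as quadratic discriminant analysis, which fits one Gaussian per class) on simulated per-pixel spectra. The band count, class statistics and use of scikit-learn are illustrative assumptions.

```python
# Sketch: SVM vs. Gaussian maximum-likelihood on synthetic spectra.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_bands = 60
classes = ["pine", "spruce", "birch", "aspen"]
# Each species gets a shifted mean spectrum plus noise (invented statistics).
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(200, n_bands))
               for i, _ in enumerate(classes)])
y = np.repeat(np.arange(len(classes)), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, clf in [("SVM (RBF)", SVC(kernel="rbf", C=10, gamma="scale")),
                  ("Gaussian ML", QuadraticDiscriminantAnalysis())]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```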
325

Traces of Repolarization Inhomogeneity in the ECG

Kesek, Milos January 2005
Repolarization inhomogeneity is arrhythmogenic. QT dispersion (QTd) is an easily accessible ECG variable, related to repolarization and shown to carry prognostic information. It was originally thought to reflect repolarization inhomogeneity; lately, arguments have been raised against this hypothesis. Other measures of inhomogeneity are being investigated, such as nondipolar components from principal component analysis (PCA) of the T-wave. In all populations described here, continuous 12-lead ECG was collected during the initial hours of observation, and secondary parameters were used to describe a large number of ECG recordings. Paper I studied QTd in 548 patients with chest pain, with a median of 985 ECG recordings per patient. Paper II explored a spatial aspect of QTd in 276 patients with unstable coronary artery disease; QTd and a derived localized ECG parameter were compared to angiographical measures. QTd, expressed as the mean value during the observation, was a powerful marker of risk. It was, however, not effective in identifying high-risk patients, and variations in QTd contained no additional prognostic information. In unstable coronary artery disease, QTd was increased by a mechanism unrelated to the localization of the disease. Two conditions relevant for observing repolarization inhomogeneity are conduction disturbances and the initial course of ST-elevation myocardial infarction (STEMI). Paper III compared the PCA parameters of the T-wave in 135 patients with chest pain and conduction disturbance to 665 patients with normal conduction. Nondipolar components were quantified by the medians of the nondipolar residue (TWRabsMedian) and of the ratio of this residue to the total power of the T-wave (TWRrelMedian). Paper IV described the changes in the nondipolar components of the T-wave in 211 patients with thrombolyzed STEMI. TWRabsMedian increased with increasing conduction disturbance and contained a moderate amount of prognostic information. In thrombolyzed STEMI, TWRabsMedian was elevated and had increased variability. A greater decrease in absolute TWR during the initial observation was seen in patients with early ST-resolution. Nondipolar components do not, however, reflect the same ECG properties as ST-elevation, and their changes do not occur at the same time.
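The sketch below shows how a nondipolar residue of the kind summarized by TWRabsMedian and TWRrelMedian can be computed: PCA (via singular values) of a multi-lead T-wave matrix, with the power beyond the first three, dipolar, components taken as the residue. The random stand-in data and preprocessing details are assumptions, not the papers' exact pipeline.

```python
# Sketch: absolute and relative T-wave residue (TWR) from PCA of the T-wave.
import numpy as np

rng = np.random.default_rng(0)
t_wave = rng.normal(size=(8, 120))   # 8 independent leads x 120 samples (stand-in)

# PCA via SVD of the mean-centered lead-by-time matrix.
centered = t_wave - t_wave.mean(axis=1, keepdims=True)
s = np.linalg.svd(centered, compute_uv=False)   # singular values
power = s ** 2                                  # component powers

twr_abs = power[3:].sum()            # power beyond the 3 dipolar components
twr_rel = twr_abs / power.sum()      # fraction of total T-wave power
print(f"absolute TWR = {twr_abs:.3f}, relative TWR = {twr_rel:.4f}")
```

Taking the median of these values over the many recordings per patient yields parameters in the spirit of TWRabsMedian and TWRrelMedian.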
326

Learning in wireless sensor networks for energy-efficient environmental monitoring

Le Borgne, Yann-Aël 30 April 2009
Wireless sensor networks form an emerging class of computing devices capable of observing the world with unprecedented resolution, and promise to provide a revolutionary instrument for environmental monitoring. Such a network is composed of a collection of battery-operated wireless sensors, or sensor nodes, each equipped with sensing, processing and wireless communication capabilities. Thanks to advances in microelectronics and wireless technologies, wireless sensors are small and can be deployed at low cost over different kinds of environments to monitor, over both space and time, the variations of physical quantities such as temperature, humidity, light, or sound. In environmental monitoring studies, many applications are expected to run unattended for months or years. Sensor nodes are, however, constrained by limited resources, particularly energy. Since communication is an order of magnitude more energy-consuming than processing, the design of data collection schemes that limit the amount of transmitted data is recognized as a central issue for wireless sensor networks. An efficient way to address this challenge is to approximate, by means of mathematical models, the evolution of the measurements taken by sensors over space and/or time: whenever a model may be used in place of the true measurements, significant communication gains are obtained by transmitting only the parameters of the model instead of the real measurements. Since in most cases there is little or no a priori information about the variations of the sensor measurements, the models must be identified in an automated manner. This calls for machine learning techniques, which model the variations of future measurements on the basis of past ones. This thesis brings two main contributions to the use of learning techniques in sensor networks. First, we propose an approach that combines time-series prediction and model selection to reduce communication. The rationale of this approach, called adaptive model selection, is to let the sensors determine, in an automated manner, a prediction model that not only fits their measurements but also reduces the amount of transmitted data. The second contribution is a distributed approach for modeling sensed data, based on principal component analysis (PCA). The proposed method transforms the measurements along a routing tree in such a way that (i) most of the variability in the measurements is retained, and (ii) the network load sustained by sensor nodes is reduced and more evenly distributed, which in turn extends the overall network lifetime. The framework can be seen as a truly distributed approach to principal component analysis, and finds applications not only in approximate data collection but also in event detection and recognition tasks.
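A toy sketch of the transmit-only-on-model-failure principle behind approaches like adaptive model selection: the node keeps a trivial constant predictor that the sink also knows, and sends an update only when the reading drifts beyond a tolerance. The actual method selects among richer time-series models; the signal and threshold below are invented for illustration.

```python
# Sketch: a sensor node suppresses transmissions while its shared
# predictor still fits the measurements within a tolerance epsilon.
import numpy as np

rng = np.random.default_rng(0)
readings = 20 + np.cumsum(rng.normal(0, 0.05, 1000))  # slowly drifting signal
epsilon = 0.5                                         # agreed error tolerance

sent = 0
model_value = readings[0]          # last value known to the sink
for x in readings:
    if abs(x - model_value) > epsilon:   # model no longer fits: transmit update
        model_value = x
        sent += 1
print(f"transmitted {sent} of {len(readings)} samples "
      f"({100 * sent / len(readings):.1f}%)")
```

Even this crude predictor suppresses most radio traffic on a slowly varying signal, which is the energy saving the thesis targets with better models.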
327

Geometric algorithms for component analysis with a view to gene expression data analysis

Journée, Michel 04 June 2009
The research reported in this thesis addresses the problem of component analysis, which aims at reducing large data to lower dimensions in order to reveal its essential structure. This problem is encountered in almost all areas of science - from physics and biology to finance, economics and psychometrics - where large data sets need to be analyzed. Several paradigms for component analysis are considered, e.g., principal component analysis, independent component analysis and sparse principal component analysis, which are naturally formulated as optimization problems subject to constraints that endow the problem with a well-characterized matrix manifold structure. Component analysis is thus cast in the realm of optimization on matrix manifolds, and algorithms are derived that take advantage of the geometric structure of the problem. When formalizing component analysis in an optimization framework, three main classes of problems are encountered, and methods are proposed for each. We first consider the problem of optimizing a smooth function on the set of n-by-p real matrices with orthonormal columns. Then, a method is proposed to maximize a convex function on a compact manifold, which generalizes to this context the well-known power method for computing the dominant eigenvector of a matrix. Finally, we address the issue of solving problems defined in terms of large positive semidefinite matrices in a numerically efficient manner, by using low-rank approximations of such matrices. The efficiency of the proposed algorithms is evaluated on the analysis of gene expression data related to breast cancer, which encode the expression levels of thousands of genes measured in hundreds of cancerous cells. Such data provide a snapshot of the biological processes that occur in tumor cells and offer huge opportunities for an improved understanding of cancer. Thanks to an original framework for evaluating the biological significance of a set of components, well-known as well as novel knowledge is inferred about the biological processes that underlie breast cancer. To summarize the thesis in one sentence: we adopt a geometric point of view to propose optimization algorithms for component analysis which, applied to large gene expression data, reveal novel biological knowledge.
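To give a flavor of optimization on the set of n-by-p matrices with orthonormal columns (the Stiefel manifold), the sketch below runs orthogonal iteration, the subspace generalization of the power method mentioned above, using a QR factorization as the retraction back to the manifold after each step. It illustrates the geometry only, not the thesis's specific algorithms.

```python
# Sketch: orthogonal iteration as optimization on the Stiefel manifold.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 100))
A = A @ A.T                                   # symmetric positive semidefinite

n, p = A.shape[0], 3
V, _ = np.linalg.qr(rng.normal(size=(n, p)))  # random point on the manifold
for _ in range(200):
    V, _ = np.linalg.qr(A @ V)                # power step + QR retraction

# Columns of V now span the dominant p-dimensional invariant subspace,
# so diag(V^T A V) approximates the p largest eigenvalues of A.
print(np.round(np.diag(V.T @ A @ V), 2))
```

The QR step here plays the role of the retraction that keeps iterates feasible, the same ingredient the manifold algorithms in the thesis rely on.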
329

Acquiring 3D Full-body Motion from Noisy and Ambiguous Input

Lou, Hui May 2012
Natural human motion is in high demand and widely used in a variety of applications such as video games and virtual reality. However, acquiring full-body motion remains challenging because a system must accurately capture a wide variety of human actions without requiring a considerable amount of time and skill to assemble. For instance, commercial optical motion capture systems such as Vicon can capture human motion with high accuracy and resolution, but they often require post-processing by experts, which is time-consuming and costly. Microsoft Kinect, despite its high popularity and wide application, does not provide accurate reconstruction of complex movements when significant occlusions occur. This dissertation explores two approaches that accurately reconstruct full-body human motion from the noisy and ambiguous input data captured by commercial motion capture devices. The first approach automatically generates high-quality human motion from noisy data obtained from commercial optical motion capture systems, eliminating the need for post-processing. The second accurately captures a wide variety of human motion, even under significant occlusions, using color/depth data captured by a single Kinect camera. The common theme underlying both approaches is the use of prior knowledge embedded in a pre-recorded motion capture database to reduce the reconstruction ambiguity caused by noisy and ambiguous input and to constrain the solution to lie in the natural motion space. More specifically, the first approach constructs a series of spatial-temporal filter bases from pre-captured human motion data and employs them, along with robust statistics techniques, to filter motion data corrupted by noise and outliers. The second approach formulates the problem in a Maximum a Posteriori (MAP) framework and generates the most likely pose that explains the observations and is consistent with the patterns embedded in the pre-recorded motion capture database. We demonstrate the effectiveness of our approaches through extensive numerical evaluations on synthetic data and comparisons against results created by commercial motion capture systems. The first approach can effectively denoise a wide variety of noisy motion data, including walking, running, jumping and swimming, while the second is capable of accurately reconstructing a wider range of motions than Microsoft Kinect.
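A toy sketch of the prior-based denoising idea: temporal bases learned by PCA from a database of clean trajectories are used to project a noisy trajectory back into the space of plausible motions. The actual work builds spatial-temporal filter bases and combines them with robust statistics, which this simplified example does not reproduce.

```python
# Sketch: denoise a 1-D joint trajectory by projecting onto PCA bases
# learned from a (synthetic) clean motion database.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
# Stand-in database: 200 clean trajectories with varying phase.
clean = np.array([np.sin(t + ph) for ph in rng.uniform(0, 2 * np.pi, 200)])

# Learn a small temporal basis from the database.
mean = clean.mean(axis=0)
_, _, Vt = np.linalg.svd(clean - mean, full_matrices=False)
basis = Vt[:5]                                   # top 5 temporal bases

noisy = np.sin(t) + rng.normal(0, 0.3, t.size)   # corrupted input trajectory
coeff = (noisy - mean) @ basis.T                 # project onto the prior space
denoised = mean + coeff @ basis                  # reconstruct within the prior

print("mean error before:", np.abs(noisy - np.sin(t)).mean(),
      "after:", np.abs(denoised - np.sin(t)).mean())
```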
330

The Application of NMR-based Metabolomics in Assessing the Sub-lethal Toxicity of Organohalogenated Pesticides to Earthworms

Yuk, Jimmy 08 January 2013
The extensive agricultural use of organohalogenated pesticides has raised many concerns about their potential hazards, especially in the soil environment. Environmental metabolomics is an emerging field that investigates changes in the metabolic profile of native organisms caused by an environmental stressor. The research presented here explores the potential of Nuclear Magnetic Resonance (NMR)-based metabolomics to examine the exposure of the earthworm Eisenia fetida to sub-lethal concentrations of organohalogenated pesticides. Various one-dimensional (1-D) and two-dimensional (2-D) NMR techniques were compared in a contact filter paper test earthworm metabolomic study using endosulfan, a prevalent pesticide in the environment. The results determined that the 1H Presaturation Utilizing Gradients and Echoes (PURGE) and 1H-13C Heteronuclear Single Quantum Coherence (HSQC) NMR techniques were most effective at discriminating exposed from unexposed earthworms and at identifying significant metabolites. These two NMR techniques were further explored in another metabolomic study exposing E. fetida to various sub-lethal concentrations of endosulfan and of an organofluorine pesticide, trifluralin. Principal component analysis (PCA) showed increasing separation between exposed and unexposed earthworms as the concentration of either contaminant increased. A neurotoxic mode of action (MOA) for endosulfan and a non-polar narcotic MOA for trifluralin were delineated, as many significant metabolites arising from exposure were identified. Earthworm tissue extract is commonly used as the biological medium for metabolomic studies; however, many overlapping resonances appear in a tissue-extract NMR spectrum because of the abundance of metabolites present. To mitigate this spectral overlap, the earthworm's coelomic fluid (CF) was tested as a complementary biological medium in an endosulfan exposure study to identify additional metabolites of stress. Compared to tests on the tissue extract, a plethora of different metabolites were identified in the earthworm CF using 1-D PURGE and 2-D HSQC NMR techniques, and in addition to the neurotoxic MOA identified previously, an apoptotic MOA was postulated for endosulfan exposure. This thesis also applied 1-D and 2-D NMR techniques in a soil metabolomic study of E. fetida exposed to sub-lethal concentrations of endosulfan and its main degradation product, endosulfan sulfate. Both the earthworm's CF and tissue extract were analyzed to maximize the number of significant metabolites identified. The PCA results indicated similar toxicity for both organochlorine contaminants, as the same separation between exposed and unexposed earthworms was detected at various concentrations, and both neurotoxic and apoptotic MOAs were observed through identical fluctuations of significant metabolites. This research demonstrates the potential of NMR-based metabolomics as a powerful environmental monitoring tool for understanding sub-lethal organohalogenated pesticide exposure in soil, using earthworms as living probes.
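The sketch below illustrates the PCA step of such a metabolomic workflow on simulated binned spectra: a handful of "metabolite" bins shift under exposure, and the exposed and control groups separate along the first principal component. All numbers are invented for illustration; real spectra would first be binned, aligned and normalized.

```python
# Sketch: PCA separation of exposed vs. control spectra (simulated data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
bins = 250
control = rng.normal(1.0, 0.1, size=(20, bins))   # 20 control spectra
exposed = rng.normal(1.0, 0.1, size=(20, bins))   # 20 exposed spectra
exposed[:, 40:45] += 0.5    # a few metabolite peaks shift under exposure

X = np.vstack([control, exposed])
scores = PCA(n_components=2).fit_transform(X)     # PCA centers internally

print("control PC1 mean:", scores[:20, 0].mean())
print("exposed PC1 mean:", scores[20:, 0].mean())  # groups separate on PC1
```

The loadings of PC1 would point back to the shifted bins, which is how the significant metabolites behind an MOA are flagged in studies like this one.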
