251 |
Late Pleistocene-Holocene environmental change in Serra do Espinhaço Meridional (Minas Gerais State, Brazil) reconstructed using a multi-proxy characterization of peat cores from mountain tropical mires / Reconstrução paleoambiental da Serra do Espinhaço Meridional (Minas Gerais, Brasil) durante o Pleistoceno tardio e Holoceno usando uma caracterização multi-proxy de testemunhos de turfeiras tropicais de montanha. Terra, Ingrid Horak, 12 February 2014
Peatlands are ecosystems that are extremely sensitive to changes in hydrology and are considered faithful "natural archives of ecological memory". In the Serra do Espinhaço Meridional, Minas Gerais State, Brazil, mountain peatlands have been studied by soil scientists, but multi-proxy studies are still almost absent. The location of these peatlands is ideal because they lie in an area influenced by the activity of the South American Monsoon System (SAMS), which controls the amount and distribution of annual rainfall. The aim of this work was to reconstruct the environmental changes that occurred throughout the late Pleistocene and Holocene, at both the local and the regional scale, using a multi-proxy approach (stratigraphy, physical properties, 14C and OSL dating, pollen and geochemistry). Determining the processes involved in the genesis and evolution of peatland soils was also a necessary step. The physico-chemical properties and elemental composition of five peat cores (PdF-I, PdF-II, SJC, PI and SV) from four selected mires (Pau de Fruta, São João da Chapada, Pinheiros and Sempre Viva) seem to have responded to four main processes: relative accumulation of organic and mineral matter, linked to the evolution of the catchment soils (local erosion); deposition of dust from distant/regional sources; preservation of plant remains; and long- and short-term peat decomposition. The combination of proxies from the PdF-I core defined six main phases of change during the Holocene: (I) 10-7.4 cal kyr BP, wet and cold climate and soil instability in the mire catchment; (II) 7.4-4.2 cal kyr BP, wet and warm, with stable catchment soils and enhanced deposition of regional dust; (III) 4.2-2.2 cal kyr BP, dry and warm, with a reactivation of soil erosion in the catchment; (IV) 2.2-1.2 cal kyr BP, dry with punctuated cooling and enhanced deposition of regional dust; (V) 1.2 cal kyr-400 cal yr BP, sub-humid climate, the lowest inputs of local and regional dust and the largest accumulation of peat in the mire; and (VI) <400 cal yr BP, sub-humid conditions but with greatly increased local and regional erosion. For the late Pleistocene, the combination of proxies applied to the PI core also defined six main phases: (I) 60-39.2 cal kyr BP, from sub-humid to dry amid conditions colder than today, with high soil instability in the mire catchment; (II) 39.2-27.8 cal kyr BP, dry and warm with cooling events, still under high local erosion rates; (III) 27.8-16.4 cal kyr BP, wet and very cold, with a decrease in soil erosion in the catchment; (IV) 16.4-6.6 cal kyr BP, very wet and very cold conditions with low local erosion; (V) 6.6-3.3 cal kyr BP, very dry and warm with increasing rates of local erosion; and (VI) <3.3 cal kyr BP, from dry and warm to sub-humid climate, with a local erosion trend similar to the previous period. Climate is seen as the most important driving force of environmental change, but human activities are likely to have been at least partially responsible for the significant changes recorded over the past 400 years. Given their value as environmental archives, the mires of the Serra do Espinhaço Meridional should be fully protected.
|
252 |
Medidas de centralidade em redes complexas: correlações, efetividade e caracterização de sistemas / Centrality measures in complex networks: correlations, effectiveness and characterization of systems. Ronqui, José Ricardo Furlan, 19 February 2014
Centrality measures were developed to evaluate the importance of nodes and links based on the structure of networks. Centralities are essential in the study of complex networks because these systems are usually large, which makes manual analysis of every node and link impossible; recognizing the most relevant elements is therefore a vital task. Since nodes and links can be considered important for different reasons, a large number of measures have been proposed, each intended to highlight elements that the others overlook. In our study, we use Pearson's correlation coefficient to measure the similarity between the rankings of nodes and links provided by different centralities, for both real-world and model networks. We also attack the networks, using these rankings to determine the order of removal of nodes and links, in order to evaluate and compare the effectiveness of the measures and how the systems react to attacks guided by different centralities. Finally, we use the correlation coefficients between pairs of centralities as properties of the networks and perform a principal component analysis on them, to evaluate whether differences among network structures can be detected from the correlations. Our results show that centrality measures are frequently correlated, which means that the same elements tend to be highlighted by different centralities. We also note that the correlation coefficients are larger in models than in real-world networks. The results of the attack experiments show that, even when two measures are highly correlated, they can affect networks in distinct ways, meaning that the set of nodes and links selected by each measure matters for the study of networked systems. Our last result shows that correlations among centrality measures can be used to characterize networks and to evaluate how well models represent their structure.
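A rough, hedged sketch of this pipeline is given below; it is an illustrative toy example on random graph models rather than the thesis code or its real systems. Four node centralities are computed with networkx, each pair of centrality vectors is correlated with Pearson's coefficient, and the resulting correlation vectors are fed into a PCA to place networks in a low-dimensional characterization space.

```python
import itertools
import numpy as np
import networkx as nx
from sklearn.decomposition import PCA

def centrality_correlations(G):
    """Pearson correlations between all pairs of four node-centrality vectors."""
    measures = [nx.degree_centrality, nx.closeness_centrality,
                nx.betweenness_centrality, nx.eigenvector_centrality_numpy]
    vectors = []
    for measure in measures:
        scores = measure(G)                               # dict: node -> centrality
        vectors.append(np.array([scores[n] for n in G]))
    return [np.corrcoef(a, b)[0, 1] for a, b in itertools.combinations(vectors, 2)]

# One feature vector (6 pairwise correlations) per network; random models stand in
# for the real and model networks studied in the thesis.
graphs = [nx.barabasi_albert_graph(200, 3, seed=s) for s in range(5)] + \
         [nx.erdos_renyi_graph(200, 0.05, seed=s) for s in range(5)]
features = np.array([centrality_correlations(G) for G in graphs])

# PCA over the correlation features characterizes each network in two dimensions.
coords = PCA(n_components=2).fit_transform(features)
print(coords)
```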
|
253 |
Comparative analysis of XGBoost, MLP and LSTM techniques for the problem of predicting fire brigade interventions. Cerna Ñahuis, Selene Leya, January 2019
Advisor: Anna Diva Plasencia Lotufo / Abstract: Many environmental, economic and societal factors are leading fire brigades to be increasingly solicited; as a result, they face an ever-growing number of interventions, most of the time with constant resources. These interventions are directly related to human activity, which is itself predictable: swimming-pool drownings occur in summer, while road accidents due to ice storms occur in winter. One way to improve the response of firefighters operating with constant resources is therefore to predict their workload, i.e., their number of interventions per hour, from explanatory variables conditioning human activity. The present work develops and compares three models to determine whether they can predict the firefighters' response load in a reasonable way. The tools chosen are representative of their respective categories in machine learning: XGBoost, whose core is the decision tree; a classic method, the Multi-Layer Perceptron; and a more advanced algorithm, the Long Short-Term Memory network, the latter two being neuron-based. The entire process is detailed, from data collection to obtaining the predictions. The results show predictions of reasonable quality that can be improved by data-science techniques such as feature selection and hyperparameter tuning. / Master's
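A minimal sketch of this kind of comparison follows. The file name, feature columns and target column are assumptions introduced only for illustration, and the LSTM variant is omitted because it would additionally require framing the hourly counts as sequences; this is not the thesis's actual pipeline or data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor

# Assumed file and column names, standing in for the real intervention data.
df = pd.read_csv("interventions_hourly.csv")
X = df[["hour", "weekday", "month", "temperature"]]   # assumed explanatory variables
y = df["n_interventions"]                             # assumed target: interventions per hour

# Chronological split, since the target is an hourly time series.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

models = {
    "xgboost": XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1),
    "mlp": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```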
|
254 |
Representation and Interpretation of Manual and Non-Manual Information for Automated American Sign Language Recognition. Parashar, Ayush S, 09 July 2003
Continuous recognition of sign language has many practical applications, and it can help to improve the quality of life of deaf persons by facilitating their interaction with the hearing populace in public situations. This has led to some research in automated continuous American Sign Language (ASL) recognition, but most work in continuous ASL recognition has used only top-down Hidden Markov Model (HMM) based approaches, and there is no work using facial information, which is considered fairly important. In this thesis, we explore a bottom-up approach based on Relational Distributions and the Space of Probability Functions (SoPF) for intermediate-level ASL recognition. We also use non-manual information, first to decrease the number of deletion and insertion errors, and second to determine whether an ASL sentence contains 'Negation', for which we use motion trajectories of the face. The experimental results show the following. The SoPF representation works well for ASL recognition: the accuracy based on the number of deletion errors is 95% when considering the 8 most probable signs in a sentence, and 88% when considering the 6 most probable signs. Using facial, or non-manual, information increases the top-6 accuracy from 88% to 92%, so the face does carry information content. It is difficult to directly combine manual information (from hand motion) with non-manual (facial) information to improve accuracy, for two reasons. First, the manual images are not synchronized with the non-manual images: the same facial expression is not present at the same manual position in two instances of the same sentence. Second, a problem in relating a facial expression to a sign occurs when a strong non-manual indicating 'Assertion' or 'Negation' is present in the sentence; in such cases the facial expressions are dominated by head movements, indicated by 'head shakes' or 'head nods'. Of the sentences that contain 'Negation', 27 out of 30 are correctly recognized with the help of the motion trajectories of the face.
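The sketch below is a loose, hedged illustration of the underlying idea rather than the thesis pipeline: pairwise offsets between low-level 2D features are histogrammed into a relational distribution per frame, and PCA over those distributions plays the role of the Space of Probability Functions. The feature points here are random stand-ins for the edge or motion features an actual system would extract.

```python
import numpy as np
from itertools import combinations

def relational_distribution(points, bins=16, span=1.0):
    """2D histogram of pairwise (dx, dy) offsets, normalized to a probability."""
    offsets = np.array([p - q for p, q in combinations(points, 2)])
    hist, _, _ = np.histogram2d(offsets[:, 0], offsets[:, 1],
                                bins=bins, range=[[-span, span], [-span, span]])
    return (hist / hist.sum()).ravel()

rng = np.random.default_rng(0)
frames = [rng.random((80, 2)) for _ in range(50)]          # stand-in feature sets, one per frame
R = np.array([relational_distribution(f) for f in frames])

# The SoPF idea: principal components of the relational distributions; each frame
# is then represented by its low-dimensional projection.
mean = R.mean(axis=0)
_, _, Vt = np.linalg.svd(R - mean, full_matrices=False)
sopf_coords = (R - mean) @ Vt[:5].T
print(sopf_coords.shape)                                    # (50, 5)
```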
|
255 |
Iterative issues of ICA, quality of separation and number of sources: a study for biosignal applications. Naik, Ganesh Ramachandra, ganesh.naik@rmit.edu.au, January 2009
This thesis evaluated the use of Independent Component Analysis (ICA) on surface electromyography (sEMG), focusing on biosignal applications. The research identified and addressed four issues related to the use of ICA for biosignals: (i) the iterative nature of ICA; (ii) the order and magnitude ambiguity of ICA; (iii) estimation of the number of sources based on the dependence or independence of the signals; and (iv) source separation for non-square ICA (undercomplete and overcomplete). The research first establishes the applicability of ICA for sEMG and identifies the shortcomings related to order and magnitude ambiguity. It then develops a mitigation strategy for these issues by using a single unmixing matrix and a neural-network weight matrix corresponding to the specific user. The research reports experimental verification of the technique and an investigation of the impact of inter-subject and inter-experimental variations. The results demonstrate that while using sEMG without separation gives only 60% accuracy, and sEMG separated using traditional ICA gives 65%, this approach gives 99% accuracy on the same experimental data. Besides the marked improvement in accuracy, the other advantages of such a system are that it is suitable for real-time operation and is easy for a lay user to train. The second part of this thesis reports research conducted to evaluate the use of ICA for the separation of bioelectric signals when the number of active sources may not be known. The work proposes using the value of the determinant of the global matrix, generated using sparse sub-band ICA, to identify the number of active sources. The results indicate that the technique successfully identifies the number of active muscles for complex hand gestures, supporting applications such as human-computer interfaces. The thesis also develops a method for determining the number of independent sources in a given mixture and demonstrates that, using this information, it is possible to separate the signals in an undercomplete situation and reduce the redundancy in the data using standard ICA methods. Experimental verification demonstrates that the quality of separation using this method is better than with other techniques such as Principal Component Analysis (PCA) and selective PCA. This has a number of applications, such as audio separation and sensor networks.
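As a hedged illustration of the order and magnitude ambiguity discussed above, the sketch below runs FastICA on synthetic mixtures (not the thesis's sEMG data or its sparse sub-band procedure) and inspects the global matrix, the product of the estimated unmixing matrix and the true mixing matrix, which should be close to a scaled permutation matrix.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
# Three non-Gaussian sources: a sinusoid, a square wave and Laplacian noise.
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t)), rng.laplace(size=t.size)]
A = rng.normal(size=(3, 3))                 # "unknown" mixing matrix
X = S @ A.T                                 # observed mixtures

ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)                # recovered sources (arbitrary order and scale)

# Global matrix G = W A: ideally a scaled permutation matrix; its structure
# exposes the order and magnitude ambiguity of ICA.
G = ica.components_ @ A
print(np.round(G, 2))
```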
|
256 |
Tree species classification using support vector machine on hyperspectral images / Trädslagsklassificering med en stödvektormaskin på hyperspektrala bilder. Hedberg, Rikard, January 2010
For several years, FORAN Remote Sensing in Linköping has been using pulse-intense laser scanning together with multispectral imaging to develop analysis methods in forestry. One area in which these laser scans and images are used is classifying the species of single trees in forests. The species have been divided into pine, spruce and deciduous trees, classified by a Maximum Likelihood classifier. This thesis presents work done on a more spectrally high-resolution imagery, hyperspectral images. These images are divided into more, and more finely graded, spectral components, but demand more signal processing. A new classifier, the Support Vector Machine, is tested against the previously used Maximum Likelihood classifier to see whether the performance can be increased. The classifiers are also set to divide the deciduous trees into aspen, birch, black alder and gray alder. The thesis shows how the new data set is handled and processed for the different classifiers, and shows how a better result can be achieved using a Support Vector Machine.
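A hedged, self-contained sketch of such a classifier is shown below; the band count, the per-tree spectra and the labels are synthetic assumptions standing in for the real FORAN data, so the reported scores are meaningless and only the structure of the pipeline is illustrated.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_bands = 120                                          # assumed number of hyperspectral bands
classes = ["pine", "spruce", "aspen", "birch", "black alder", "gray alder"]

X = rng.random((600, n_bands))                         # one mean spectrum per tree crown (synthetic)
y = rng.choice(classes, size=600)                      # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM with per-band standardization.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```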
|
257 |
Traces of Repolarization Inhomogeneity in the ECG. Kesek, Milos, January 2005
Repolarization inhomogeneity is arrhythmogenic. QT dispersion (QTd) is an easily accessible ECG variable, related to repolarization and shown to carry prognostic information. It was originally thought to reflect repolarization inhomogeneity; lately, arguments have been raised against this hypothesis. Other measures of inhomogeneity are being investigated, such as the nondipolar components from principal component analysis (PCA) of the T-wave. In all populations described here, continuous 12-lead ECG was collected during the initial hours of observation, and secondary parameters were used for the description of a large number of ECG recordings. Paper I studied QTd in 548 patients with chest pain, with a median of 985 ECG recordings per patient. Paper II explored a spatial aspect of QTd in 276 patients with unstable coronary artery disease; QTd and a derived localized ECG parameter were compared to angiographical measures. QTd, expressed as the mean value during the observation, was a powerful marker of risk. It was, however, not effective in identifying high-risk patients, and variations in QTd contained no additional prognostic information. In unstable coronary artery disease, QTd was increased by a mechanism unrelated to the localization of the disease. Two relevant conditions for observing repolarization inhomogeneity are conduction disturbances and the initial course of ST-elevation myocardial infarction (STEMI). Paper III compared the PCA parameters of the T-wave in 135 patients with chest pain and a conduction disturbance to 665 patients with normal conduction. Nondipolar components were quantified by the medians of the nondipolar residue (TWRabsMedian) and of the ratio of this residue to the total power of the T-wave (TWRrelMedian). Paper IV described the changes in the nondipolar components of the T-wave in 211 patients with thrombolyzed STEMI. TWRabsMedian increased with increasing conduction disturbance and contained a moderate amount of prognostic information. In thrombolyzed STEMI, TWRabsMedian was elevated and had an increased variability; a greater decrease in absolute TWR during the initial observation was seen in patients with early ST-resolution. The nondipolar components do not, however, reflect the same ECG properties as the ST elevation, and their changes do not occur at the same time.
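A hedged numerical sketch of the nondipolar-residue computation follows. The T-waves are synthetic, built with a dominant three-dimensional ("dipolar") structure, and the lead count and window length are assumptions; the clinical TWRabsMedian and TWRrelMedian values are medians of such quantities taken over many recordings, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_leads, n_samples = 12, 300                 # 12-lead ECG, samples within the T-wave window
basis = rng.normal(size=(3, n_samples))      # a dominant 3-dimensional structure
T = rng.normal(size=(n_leads, 3)) @ basis + 0.05 * rng.normal(size=(n_leads, n_samples))

# PCA of the T-wave across leads via SVD of the mean-centered lead-by-sample matrix.
Tc = T - T.mean(axis=1, keepdims=True)
_, s, _ = np.linalg.svd(Tc, full_matrices=False)
power = s ** 2

twr_abs = power[3:].sum()                    # absolute nondipolar residue (power beyond 3 PCs)
twr_rel = twr_abs / power.sum()              # residue relative to total T-wave power
print(twr_abs, twr_rel)
```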
|
258 |
Learning in wireless sensor networks for energy-efficient environmental monitoring / Apprentissage dans les réseaux de capteurs pour une surveillance environnementale moins coûteuse en énergie. Le Borgne, Yann-Aël, 30 April 2009
Wireless sensor networks form an emerging class of computing devices capable of observing the world with an unprecedented resolution, and promise to provide a revolutionary instrument for environmental monitoring. Such a network is composed of a collection of battery-operated wireless sensors, or sensor nodes, each of which is equipped with sensing, processing and wireless communication capabilities. Thanks to advances in microelectronics and wireless technologies, wireless sensors are small and can be deployed at low cost over different kinds of environments in order to monitor, over both space and time, the variations of physical quantities such as temperature, humidity, light, or sound.
In environmental monitoring studies, many applications are expected to run unattended for months or years. Sensor nodes are, however, constrained by limited resources, particularly in terms of energy. Since communication is one order of magnitude more energy-consuming than processing, the design of data collection schemes that limit the amount of transmitted data is recognized as a central issue for wireless sensor networks.
An efficient way to address this challenge is to approximate, by means of mathematical models, the evolution of the measurements taken by sensors over space and/or time. Whenever a mathematical model can be used in place of the true measurements, significant communication savings can be obtained by transmitting only the parameters of the model instead of the set of real measurements. Since in most cases there is little or no a priori information about the variations of the sensor measurements, the models must be identified in an automated manner. This calls for machine learning techniques, which make it possible to model the variations of future measurements on the basis of past measurements.
This thesis makes two main contributions to the use of learning techniques in a sensor network. First, we propose an approach that combines time series prediction and model selection to reduce the amount of communication. The rationale of this approach, called adaptive model selection, is to let the sensors determine in an automated manner a prediction model that not only fits their measurements but also reduces the amount of transmitted data.
The second main contribution is the design of a distributed approach for modeling sensed data, based on principal component analysis (PCA). The proposed method transforms the measurements along a routing tree in such a way that (i) most of the variability in the measurements is retained, and (ii) the network load sustained by the sensor nodes is reduced and more evenly distributed, which in turn extends the overall network lifetime. The framework can be seen as a truly distributed approach to principal component analysis, and finds applications not only in approximate data collection tasks, but also in event detection and recognition tasks.
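As a hedged illustration of the compression principle behind this approach, the sketch below performs the PCA centrally on synthetic readings; the distributed aggregation along a routing tree described in the thesis is not reproduced. Projecting each epoch of readings onto a few principal components lets the network transmit k coefficients per epoch instead of one value per sensor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_epochs, k = 50, 500, 3
# Synthetic readings driven by a few shared environmental factors plus noise.
latent = rng.normal(size=(n_epochs, k))
X = latent @ rng.normal(size=(k, n_sensors)) + 0.1 * rng.normal(size=(n_epochs, n_sensors))

mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:k]                          # k principal directions (known at the sink)

coeffs = (X - mean) @ components.T           # k numbers per epoch (what would be transmitted)
X_hat = coeffs @ components + mean           # reconstruction at the sink

compression = k / n_sensors
error = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"compression ratio {compression:.2f}, relative error {error:.3f}")
```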
|
260 |
Acquiring 3D Full-body Motion from Noisy and Ambiguous Input. Lou, Hui, 2012 May 1900
Natural human motion is in high demand and widely used in a variety of applications such as video games and virtual reality. However, acquiring full-body motion remains challenging because the system must be capable of accurately capturing a wide variety of human actions while not requiring a considerable amount of time and skill to assemble. For instance, commercial optical motion capture systems such as Vicon can capture human motion with high accuracy and resolution, but they often require post-processing by experts, which is time-consuming and costly. Microsoft Kinect, despite its high popularity and wide application, does not provide accurate reconstruction of complex movements when significant occlusions occur. This dissertation explores two different approaches that accurately reconstruct full-body human motion from the noisy and ambiguous input data captured by commercial motion capture devices.
The first approach automatically generates high-quality human motion from noisy data obtained from commercial optical motion capture systems, eliminating the need for post-processing. The second approach accurately captures a wide variety of human motion, even under significant occlusions, by using color/depth data captured by a single Kinect camera. The common theme that underlies the two approaches is the use of prior knowledge embedded in a pre-recorded motion capture database to reduce the reconstruction ambiguity caused by noisy and ambiguous input and to constrain the solution to lie in the space of natural motion. More specifically, the first approach constructs a series of spatial-temporal filter bases from pre-captured human motion data and employs them, along with robust statistics techniques, to filter noisy motion data corrupted by noise and outliers. The second approach formulates the problem in a Maximum a Posteriori (MAP) framework and generates the most likely pose that explains the observations while remaining consistent with the patterns embedded in the pre-recorded motion capture database. We demonstrate the effectiveness of our approaches through extensive numerical evaluations on synthetic data and comparisons against results created by commercial motion capture systems. The first approach can effectively denoise a wide variety of noisy motion data, including walking, running, jumping and swimming, while the second approach is shown to be capable of accurately reconstructing a wider range of motions than Microsoft Kinect.
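A hedged sketch of the MAP idea is given below: a PCA prior is fitted to a stand-in "motion database", and a pose is recovered from noisy, partially observed coordinates via the closed-form MAP estimate of a linear-Gaussian model. The dimensions, noise level and database are assumptions; the thesis's spatial-temporal filter bases and Kinect pipeline are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
D, k, N = 60, 5, 2000                        # pose dim (e.g. 20 joints x 3), latent dim, database size

# Stand-in "motion capture database" with low-dimensional structure.
database = rng.normal(size=(N, k)) @ rng.normal(size=(k, D)) \
           + 0.05 * rng.normal(size=(N, D))

mu = database.mean(axis=0)
_, s, Vt = np.linalg.svd(database - mu, full_matrices=False)
B = Vt[:k]                                   # PCA basis spanning "natural" poses
var = (s[:k] ** 2) / N                       # prior variance of each latent coordinate

# Simulate a noisy, partially observed pose (occluded coordinates are missing).
true_pose = mu + (rng.normal(size=k) * np.sqrt(var)) @ B
obs = rng.choice(D, size=40, replace=False)  # indices of observed coordinates
sigma = 0.05
y = true_pose[obs] + sigma * rng.normal(size=obs.size)

# Closed-form MAP estimate of the latent coordinates, then of the full pose.
A = B[:, obs].T                              # maps latent coordinates to observed dims
P = A.T @ A / sigma**2 + np.diag(1.0 / var)
z_map = np.linalg.solve(P, A.T @ (y - mu[obs]) / sigma**2)
pose_map = mu + z_map @ B

print("relative error:", np.linalg.norm(pose_map - true_pose) / np.linalg.norm(true_pose - mu))
```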
|