1 |
Spatial normalization of diffusion models and tensor analysis Ingalhalikar, Madhura Aditya 01 July 2009 (has links)
Diffusion tensor imaging provides the ability to study white matter connectivity and integrity noninvasively. The information contained in the diffusion tensors is complex, so a simple way of dealing with tensors is to compute rotationally invariant scalar quantities. These scalar indices have been used to perform population studies between controls and patients with neurological and psychiatric disorders. Reducing the tensor to scalar values, however, discards much of the information it contains; a group analysis using the full tensors may give a better estimate of the white matter changes that occur in diseased subjects. For spatial normalization of diffusion tensors, it is necessary to interpolate the tensor representation as well as rotate the diffusion tensors after transformation to keep them consistent with the tissue reorientation. Existing reorientation methods cannot be directly applied to higher-order diffusion models (e.g. q-ball imaging). A novel technique called gradient rotation is introduced, in which the rotation is applied directly to the diffusion sensitizing gradients, providing a voxel-by-voxel estimate of the diffusion gradients instead of a volume-by-volume estimate. The technique is validated by comparing it with an existing method in which the transformation is applied to the resulting diffusion tensors. For better matching of diffusion tensors, a novel multichannel registration method is proposed based on a non-parametric diffeomorphic demons algorithm. The channels used for the registration include a T1-weighted volume and the tensor components; a fractional anisotropy (FA) channel is used to define the contribution of each channel. Including the anatomical data together with the tensors allows the registration to accurately match the global brain shape and the underlying white matter architecture simultaneously.
Using this multichannel registration framework, 10 healthy controls and 9 patients with schizophrenia were spatially normalized. For the group analysis, the tensors were transformed to log-Euclidean space, and linear regression analysis was performed on the transformed tensors. Results show a significant difference in anisotropy between patients and controls, especially in anterior regions including the genu of the corpus callosum, the anterior and superior corona radiata, the forceps minor, and the anterior limb of the internal capsule.
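The log-Euclidean step used for the group analysis can be sketched as follows. This is an illustrative fragment, not the thesis code, and the example tensor values are hypothetical:

```python
import numpy as np

def logm_spd(D):
    """Matrix logarithm of a symmetric positive-definite tensor via eigendecomposition."""
    w, V = np.linalg.eigh(D)
    return V @ np.diag(np.log(w)) @ V.T

def expm_sym(L):
    """Matrix exponential of a symmetric matrix (inverse of logm_spd)."""
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(w)) @ V.T

# Hypothetical 3x3 diffusion tensor (units of 1e-3 mm^2/s)
D = np.array([[1.7, 0.1, 0.0],
              [0.1, 0.4, 0.0],
              [0.0, 0.0, 0.3]])

L = logm_spd(D)
# In log-Euclidean space tensors can be averaged and regressed on linearly;
# mapping back with the matrix exponential always yields a positive-definite tensor.
D_back = expm_sym(L)
```

Voxel-wise linear regression then amounts to ordinary regression on the six independent components of the symmetric matrix L.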
|
2 |
Quantitative damage assessment of concrete structures using Acoustic Emission Beck, Paul January 2004 (has links)
This thesis examines the role of Acoustic Emission (AE) as a non-destructive testing technique for concrete structures. The work focuses on the development of experimental techniques and data analysis methods for the detection, location and assessment of AE from the failure of plain and reinforced concrete specimens. Four key topics are investigated:
|
3 |
Neutron Transmutation and Hydrogenation Study of Hg₁₋ₓCdₓTe Zhao, Wei 12 1900 (has links)
Anomalous Hall behavior of HgCdTe refers to a "double cross-over" feature of the Hall coefficient in p-type material, or a peak in the Hall mobility or Hall coefficient in n-type material. A magnetoconductivity tensor approach was utilized to identify the presence of two electron species contributing to the conduction, as well as the transport properties of each in the material. The two-electron model for the mobility shows that the anomalous Hall behavior results from the competition of two electrons: one in the graded-gap region near the CdZnTe/HgCdTe interface, with a large band gap, and the other in the bulk of the LPE film, with a narrow band gap. Hg0.78Cd0.22Te samples grown by LPE on CdZnTe(111B)-oriented substrates were exposed to various doses of thermal neutrons (~1.7 × 10^16 - 1.25 × 10^17 /cm^2) and subsequently annealed at ~220 °C for ~24 h in Hg-saturated vapor to recover damage and reduce the presence of Hg vacancies. Extensive magnetotransport measurements were performed on these samples, and SIMS profiles of the impurities produced by neutron irradiation were also obtained. The purpose of this study is to investigate the influence of neutron irradiation on this material as a basis for further study of HgCdTe74Se. The results show that the total mobility decreases with increased neutron dose and can be fitted by including a mobility term inversely proportional to the neutron dose. The electron introduction rate of thermal neutrons is much smaller than that of fission neutrons, and full recovery of the material is suggested to require longer annealing times. Using Kane's model, we also fitted the carrier concentration change at low temperature by introducing a donor level whose activation energy changes with temperature. Results on Se diffusion in liquid phase epitaxy (LPE) grown HgCdTe epilayers are also reported. The LPE Hg0.78Cd0.22Te samples were implanted with Se at a dose of 2.0×10^14 /cm^2 at 100 keV and annealed at 350-450 °C in mercury-saturated vapor.
Secondary ion mass spectrometry (SIMS) profiles were obtained for each sample. From a Gaussian fit we find that the Se diffusion coefficient D_Se is about one to two orders of magnitude smaller than that of arsenic; the as-implanted Se distribution is taken into account in the Gaussian fitting in the case of small diffusion lengths. Assuming a Te-vacancy-based mechanism, the Arrhenius relationship yields an activation energy of 1.84 eV. Dislocations introduced in HgCdTe materials result in two energy levels, one a donor and one an acceptor. Hydrogenation treatment can effectively neutralize these dislocation defect levels. Both experimental results and theoretical calculation show that the mobility due to dislocation scattering remains constant in the low temperature range (< 77 K) and increases with temperature between 77 K and 150 K. Dislocation scattering has little effect on the electrical transport properties of HgCdTe with an EPD lower than 10^7 /cm^2. Dislocations may also have little effect on carrier concentration in semiconductor materials with the zinc blende structure, due to self-compensation.
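The Arrhenius analysis behind the 1.84 eV activation energy can be sketched as follows. The diffusion coefficients here are synthetic, generated from that reported value rather than taken from the SIMS data, so the fit simply recovers the input:

```python
import numpy as np

# Arrhenius relation: D = D0 * exp(-Ea / (k_B * T)), so ln D is linear in 1/T
# and the slope of the fit gives -Ea / k_B.
k_B = 8.617e-5                               # Boltzmann constant, eV/K
Ea_true, D0_true = 1.84, 1.0e-3              # reported Ea; D0 is hypothetical
T = np.array([623.0, 673.0, 723.0])          # anneal temperatures, K (350-450 C)
D = D0_true * np.exp(-Ea_true / (k_B * T))   # synthetic diffusion coefficients

slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * k_B                            # recovered activation energy, eV
D0 = np.exp(intercept)
print(round(Ea, 2))                          # -> 1.84
```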
|
4 |
Estimation and Uncertainty Quantification in Tensor Completion with Side Information Somnooma Hilda Marie Bernadette Ibriga (11206167) 30 July 2021 (has links)
<div>This work aims to provide solutions to two significant issues in the effective use and practical application of tensor completion as a machine learning method. The first solution addresses the challenge of designing fast and accurate recovery methods for tensor completion in the presence of highly sparse and highly missing data. The second takes on the need for robust uncertainty quantification methods for the recovered tensor.</div><div><br></div><div><b>Covariate-assisted Sparse Tensor Completion</b></div><div><b><br></b></div><div>In the first part of the dissertation, we aim to provably complete a sparse and highly missing tensor in the presence of covariate information along tensor modes. Our motivation originates from online advertising, where users' click-through rates (CTR) on ads over various devices form a CTR tensor that can have up to 96% missing entries and many zeros among the non-missing entries. These features make standalone tensor completion methods unsatisfactory. However, besides the CTR tensor, additional ad features or user characteristics are often available. We propose Covariate-assisted Sparse Tensor Completion (COSTCO) to incorporate covariate information in the recovery of the sparse tensor. The key idea is to jointly extract latent components from both the tensor and the covariate matrix to learn a synthetic representation. Theoretically, we derive the error bound for the recovered tensor components and explicitly quantify the improvements in both the reveal probability condition and the tensor recovery accuracy due to covariates. Finally, we apply COSTCO to an advertisement dataset from a major internet platform, consisting of a CTR tensor and an ad covariate matrix, leading to a 23% accuracy improvement over the baseline methodology.
An important by-product of our method is that clustering analysis on the ad latent components from COSTCO reveals interesting new ad clusters linking different product industries, clusters that are not formed by existing clustering methods. Such findings could be directly useful for better ad planning procedures.</div><div><b><br></b></div><div><b>Uncertainty Quantification in Covariate-assisted Tensor Completion</b></div><div><br></div><div>In the second part of the dissertation, we propose a framework for uncertainty quantification for the imputed tensor factors obtained from completing a tensor with covariate information. We characterize the distribution of the non-convex estimator obtained from the COSTCO algorithm down to fine scales. This distributional theory in turn allows us to construct provably valid and tight confidence intervals for the unseen tensor factors. The proposed inferential procedure enjoys several important features: (1) it is fully adaptive to noise heteroscedasticity, (2) it is data-driven and automatically adapts to unknown noise distributions, and (3) in the high missing data regime, the inclusion of side information in the tensor completion model yields tighter confidence intervals than those obtained from standalone tensor completion methods.</div><div><br></div>
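The joint latent-component idea behind COSTCO can be illustrated, in a deliberately simplified matrix form, by a coupled factorization in which the observed data and the covariates share one latent factor. Everything below (dimensions, rank, step size, penalty weight) is hypothetical; this is a sketch of the idea, not the COSTCO algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, p, r = 30, 20, 5, 3
U_true = rng.normal(size=(n1, r))
X = U_true @ rng.normal(size=(r, n2))        # data matrix (stand-in for the tensor)
C = U_true @ rng.normal(size=(r, p))         # covariates share U's latent space
mask = rng.random(X.shape) < 0.3             # only ~30% of entries observed

U = 0.1 * rng.normal(size=(n1, r))
V = 0.1 * rng.normal(size=(n2, r))
W = 0.1 * rng.normal(size=(p, r))
lr, lam = 0.005, 1.0                         # hypothetical step size and weight

def loss():
    return (np.sum((mask * (U @ V.T - X)) ** 2)
            + lam * np.sum((U @ W.T - C) ** 2))

loss0 = loss()
for _ in range(3000):                        # joint gradient descent on both fits
    R = mask * (U @ V.T - X)                 # residual on observed entries only
    S = U @ W.T - C                          # covariate residual
    U, V, W = (U - lr * (R @ V + lam * S @ W),
               V - lr * R.T @ U,
               W - lr * lam * S.T @ U)
```

Because the fully observed covariate matrix constrains U, the shared factor is better determined than it would be from the sparse observations alone, which is the intuition behind the improved reveal-probability condition.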
|
5 |
Channel estimation techniques applied to massive MIMO systems using sparsity and statistics approaches Araújo, Daniel Costa 29 September 2016 (has links)
ARAÚJO, D. C. Channel estimation techniques applied to massive MIMO systems using sparsity and statistics approaches. 2016. 124 f. Tese (Doutorado em Engenharia de Teleinformática) – Centro de Tecnologia, Universidade Federal do Ceará, Fortaleza, 2016. / Massive MIMO has the potential of greatly increasing the system spectral efficiency
by employing many individually steerable antenna elements at the base station (BS).
This potential can only be achieved if the BS has sufficient channel state information
(CSI) knowledge. The way of acquiring it depends on the duplexing mode employed
by the communication system. Currently, frequency division duplexing (FDD) is the
most widely used in wireless communication systems. However, the amount of overhead
necessary to estimate the channel scales with the number of antennas, which poses a
big challenge for implementing massive MIMO systems with the FDD protocol. To enable
the two to operate together, this thesis tackles the channel estimation problem by proposing
methods that exploit a compressed version of the massive MIMO channel. There are mainly
two approaches used to achieve such a compression: sparsity and second order statistics. To
derive sparsity-based techniques, this thesis uses a compressive sensing (CS) framework to
extract a sparse-representation of the channel. This is investigated initially in a flat channel
and afterwards in a frequency-selective one. In the former, we show that the Cramer-Rao
lower bound (CRLB) for the problem is a function of pilot sequences that lead to a
Grassmannian matrix. In the frequency-selective case, a novel estimator which combines
CS and tensor analysis is derived. This new method uses the measurements obtained from the
pilot subcarriers to estimate a sparse tensor channel representation. Assuming a Tucker3
model, the proposed solution maps the estimated sparse tensor to a full one which describes
the spatial-frequency channel response. Furthermore, this thesis investigates the problem of
updating the sparse basis that arises when the user is moving. In this study, an algorithm
is proposed to track the arrival and departure directions using very few pilots. Besides
the sparsity-based techniques, this thesis investigates the channel estimation performance
using a statistical approach. In such a case, a new hybrid beamforming (HB) architecture
is proposed to spatially multiplex the pilot sequences and to reduce the overhead. More
specifically, the new solution creates a set of beams that is jointly calculated with the
channel estimator and the pilot power allocation using the minimum mean square error
(MMSE) criterion. We show that this provides enhanced performance for the estimation
process in low signal-to-noise ratio (SNR) scenarios.
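As a rough illustration of the sparsity-based approach (not the thesis's estimator, and with hypothetical dimensions), a flat channel that is sparse in an angular (DFT) dictionary can be recovered from a reduced set of pilot measurements with orthogonal matching pursuit:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 64, 32, 3                      # antennas, pilot measurements, paths
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT (angular-domain) dictionary

x = np.zeros(N, dtype=complex)
x[[3, 17, 40]] = [1 + 1j, 2 - 1j, -1.5]  # sparse angular representation (3 paths)
h = F @ x                                # antenna-domain channel
A = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2 * M)
y = A @ h                                # noiseless compressed pilot observations

Phi = A @ F                              # effective sensing dictionary
resid, idx = y.copy(), []
for _ in range(K):                       # orthogonal matching pursuit
    idx.append(int(np.argmax(np.abs(Phi.conj().T @ resid))))
    sol, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    resid = y - Phi[:, idx] @ sol
x_hat = np.zeros(N, dtype=complex)
x_hat[idx] = sol                         # recovered sparse coefficients
```

With M = 32 measurements instead of N = 64 pilot symbols, the sparse angular channel is recovered exactly in this noiseless setting, which is the overhead reduction the compressed-sensing framework targets.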
|
6 |
Mining Tera-Scale Graphs: Theory, Engineering and Discoveries Kang, U 01 May 2012 (has links)
How do we find patterns and anomalies in graphs with billions of nodes and edges, which do not fit in memory? How can we use parallelism for such tera- or peta-scale graphs? In this thesis, we propose PEGASUS, a large-scale graph mining system implemented on top of the HADOOP platform, the open source version of MAPREDUCE. PEGASUS includes algorithms which help us spot patterns and anomalous behaviors in large graphs.
PEGASUS enables structure analysis on large graphs. We unify many different structure analysis algorithms, including the analysis of connected components, PageRank, and radius/diameter, into a general primitive called GIM-V. GIM-V is highly optimized, achieving good scale-up in the number of edges and available machines. We discover surprising patterns using GIM-V, including 7 degrees of separation in one of the largest publicly available Web graphs, with 7 billion edges.
PEGASUS also enables inference and spectral analysis on large graphs. We design an efficient distributed belief propagation algorithm which infers the states of unlabeled nodes given a set of labeled nodes. We also develop an eigensolver for computing the top k eigenvalues and eigenvectors of the adjacency matrices of very large graphs. We use the eigensolver to discover anomalous adult advertisers in the who-follows-whom Twitter graph with 3 billion edges. In addition, we develop an efficient tensor decomposition algorithm and use it to analyze a large knowledge base tensor.
Finally, PEGASUS allows the management of large graphs. We propose efficient graph storage and indexing methods to answer graph mining queries quickly. We also develop an edge layout algorithm for better compression of graphs.
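The GIM-V primitive generalizes iterated matrix-vector multiplication by letting the combine and aggregate operations be user-defined. With ordinary multiply and add plus damping it recovers PageRank, sketched here in single-machine form on a made-up four-node graph (the distributed HADOOP implementation is the thesis's contribution, not shown):

```python
import numpy as np

# Toy adjacency matrix: A[i, j] = 1 if node i links to node j
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
M = (A / A.sum(axis=1, keepdims=True)).T   # column-stochastic transition matrix
n, d = A.shape[0], 0.85                    # damping factor as in classic PageRank
v = np.full(n, 1.0 / n)
for _ in range(100):                       # iterated (generalized) matvec
    v = (1 - d) / n + d * (M @ v)
# node 2 collects links from nodes 0, 1 and 3, so it ends up ranked highest
```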
|
7 |
Three-dimensional stress measurement technique based on electrical resistivity tomography / 電気比抵抗トモグラフィーに基づく三次元応力計測技術 Lu, Zirui 25 September 2023 (has links)
Doctor of Philosophy (Engineering) / Kyoto University
|
8 |
Sur une approche à objets généralisée pour la mécanique non linéaire Saad, Roy 05 December 2011 (has links)
The problems occurring today in computational mechanics and related domains are complex, and may involve several physics at different time and space scales. The numerical treatment of complex problems is in general tough and time-consuming. In this context, the interest of developing methods and tools to accelerate the integration of new formulations into simulation tools is obvious. This work arises from the issue of developing computational tools. The proposed approach covers the development process of numerical models from the variational statement to the simulation tool, and is applied to the finite element method. We have developed generic concepts to automate the development of the finite element method. To achieve this goal, we relied on tensor analysis applied in the context of the finite element method. The mathematical formalism is based on tensor algebra to describe the discretization of a variational formulation. The generic character of the approach is preserved through the object-oriented implementation in Java. We propose a framework based on object-oriented concepts capable of handling symbolic development of the elemental contributions for finite element codes; these contributions are then automatically programmed into a computational code. The advantage of this approach is the generic description, which can be extended naturally to any discretization model in space or time. The concepts are fully validated for simple linear problems (elasticity, heat convection, ...), for the treatment of mixed variational formulations (thermo-mechanical, Navier-Stokes for incompressible flows, ...) and for Lagrangian frameworks (elasticity in large transformations, hyperelasticity, ...).
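The kind of elemental contribution the framework generates automatically can be illustrated by the simplest possible case, written here as a plain numerical sketch rather than the symbolic Java machinery described above:

```python
import numpy as np

# Element stiffness matrix for the 1D Laplace bilinear form
# a(u, v) = integral of u' v' dx on a linear element of length h.
# The structure (shape-function derivatives contracted through a quadrature
# rule) is what a generic elemental-contribution generator would emit.
def element_stiffness(h):
    dN = np.array([-1.0 / h, 1.0 / h])   # derivatives of the two linear shape functions
    # one-point quadrature is exact here because the integrand is constant
    return h * np.outer(dN, dN)

Ke = element_stiffness(2.0)
# expected: (1/h) * [[1, -1], [-1, 1]]
```

Swapping the bilinear form (heat, elasticity, a mixed formulation) changes only the tensor expression contracted inside the quadrature loop, which is the genericity the object-oriented design exploits.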
|
9 |
Reconnaissance d’activités humaines à partir de séquences vidéo / Human activity recognition from video sequences Selmi, Mouna 12 December 2014 (has links)
Human activity recognition (HAR) from video sequences is one of the major active research areas of computer vision. There are numerous applications for HAR systems, including video surveillance, search and automatic indexing of videos, and assistance for the frail elderly. This task remains a challenge because of the huge variations in the way activities are performed, in the appearance of the person, and in the acquisition conditions.
The main objective of this thesis is to develop an efficient HAR method that is robust to these different sources of variability. Approaches based on interest points have shown excellent state-of-the-art performance over the past years. They are generally coupled with global classification methods, as these primitives are temporally and spatially disordered. More recent studies have achieved high performance by modeling the spatial and temporal context of interest points, for instance by encoding the neighborhood of the interest points over several scales. In this thesis, we propose a method of activity recognition based on a hybrid Support Vector Machine - Hidden Conditional Random Field (SVM-HCRF) model that captures the sequential aspect of activities while exploiting the robustness of interest points in real conditions. We first extract the interest points and show their robustness with respect to the person's identity by a multilinear tensor analysis. These primitives are then represented as a sequence of local "bags of words" (BOW): the video is temporally fragmented using the sliding-window technique, and each of the segments thus obtained is represented by the BOW of the interest points belonging to it. The first layer of our hybrid sequential classification system is a Support Vector Machine that converts each local BOW extracted from the video sequence into a vector of activity class probabilities. The sequence of probability vectors thus obtained is used as input to the HCRF. The latter permits a discriminative classification of time series while modeling their internal structures via the hidden states. We have evaluated our approach on various human activity datasets; the results achieved are competitive with those of the current state of the art.
We have demonstrated that the use of a low-level classifier (SVM) improves the performance of the recognition system, since the sequential classifier (HCRF) directly exploits the semantic information of the local BOWs, namely the probability of each activity for the current local segment, rather than mere raw information from interest points. Furthermore, the probability vectors have a low dimension, which significantly reduces the risk of overfitting that can occur when the feature vector dimension is high relative to the training data size, as is precisely the case with BOWs, which generally have a very high dimension. Estimating the HCRF parameters in a space of reduced dimension also shortens the HCRF training phase.
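The first (low-level) stage of the pipeline can be sketched as follows. To keep the example dependency-free, a nearest-centroid softmax stands in for the SVM, and the bag-of-words data are synthetic; the point is only the shape of the interface, a sequence of per-segment class-probability vectors that a sequential model such as an HCRF would consume:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_classes = 30, 3

def make_bow(c):
    """Synthetic normalized bag-of-words for class c: mass concentrated on its own word range."""
    bow = rng.poisson(0.2, n_words).astype(float)
    bow[c * 10:(c + 1) * 10] += rng.poisson(2.0, 10)
    return bow / bow.sum()

# "training": one centroid per activity class
centroids = np.array([np.mean([make_bow(c) for _ in range(20)], axis=0)
                      for c in range(n_classes)])

def class_probabilities(bow, tau=0.05):
    """Softmax over negative distances to centroids (stand-in for SVM probabilities)."""
    d = np.linalg.norm(centroids - bow, axis=1)
    e = np.exp(-d / tau)
    return e / e.sum()

# a "video": sliding-window segments mapped to a sequence of probability vectors
segment_labels = [0, 0, 1, 2]
prob_seq = np.array([class_probabilities(make_bow(c)) for c in segment_labels])
```

Each row of `prob_seq` is a low-dimensional (here 3-dimensional) semantic summary of one temporal segment, which is exactly what keeps the downstream sequential classifier's parameter space small.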
|
10 |
Žemės plutos horizantaliųjų judesių Ignalinos atominės elektrinės rajone tyrimas geodeziniais metodais / Research of the Earth's crust horizontal movements in the Ignalina nuclear power plant region by geodetic methods Stanionis, Arminas 17 January 2006 (has links)
A method was prepared, and an algorithm created, for the computation and evaluation of the relation between Earth's crust horizontal deformations and variations of tectonic stresses, with Hooke's law used to describe the relation. The method of evaluating Earth's crust horizontal deformations was improved, as was the modelling method based on observation data. Elements of the deformation tensors were evaluated by applying the finite element method. New data on Earth's crust horizontal movements, and their geodynamic interpretation in the Ignalina Nuclear Power Plant region, were obtained.
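The chain from geodetic displacements to stress variations can be sketched as follows: a horizontal displacement gradient (as would be estimated from repeated coordinate observations) gives the small-strain tensor, and Hooke's law in plane stress converts strain to stress. The gradient values and elastic constants here are illustrative, not the thesis's data:

```python
import numpy as np

# Hypothetical horizontal displacement gradient G[i, j] = du_i / dx_j
G = np.array([[2.0e-7, 1.0e-7],
              [0.5e-7, -1.0e-7]])
eps = 0.5 * (G + G.T)                  # symmetric small-strain tensor

E, nu = 70e9, 0.25                     # illustrative Young's modulus (Pa), Poisson ratio
c = E / (1 - nu**2)                    # plane-stress stiffness factor
exx, eyy, exy = eps[0, 0], eps[1, 1], eps[0, 1]
sxx = c * (exx + nu * eyy)             # normal stress, x (Pa)
syy = c * (eyy + nu * exx)             # normal stress, y (Pa)
sxy = E / (2 * (1 + nu)) * (2 * exy)   # shear stress from engineering shear strain
```

In the finite element setting described above, the same strain-from-displacement step is evaluated element by element from the nodal displacements.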
|