311

Efficient solutions to Toeplitz-structured linear systems for signal processing

Turnes, Christopher Kowalczyk 22 May 2014 (has links)
This research develops efficient solution methods for linear systems with scalar and multi-level Toeplitz structure. Toeplitz systems are common in one-dimensional signal-processing applications and typically correspond to temporal or spatial invariance in the underlying physical phenomenon. Over time, a number of algorithms have been developed to solve these systems economically by exploiting their structure. These developments began with the Levinson-Durbin recursion, a classical fast method for solving Toeplitz systems that has become a standard algorithm in signal processing. Later, more advanced routines known as superfast algorithms were introduced that solve Toeplitz systems with even lower asymptotic complexity. For multi-dimensional signals, temporally and spatially invariant systems have linear-algebraic descriptions characterized by multi-level Toeplitz matrices, which exhibit Toeplitz structure on multiple levels. These matrices lack the algebraic properties and structural simplicity of their scalar analogs, so it has proven exceedingly difficult to extend the existing scalar Toeplitz algorithms to treat them. This research presents algorithms to solve scalar and two-level Toeplitz systems through a constructive approach, using methods devised for specialized cases to build more general solution methods. These methods extend known scalar Toeplitz inversion results to more general scalar least-squares problems and to multi-level Toeplitz problems. The resulting algorithms have the potential to provide substantial computational gains for a large class of problems in signal processing, such as image deconvolution, non-uniform resampling, and the reconstruction of spatial volumes from non-uniform Fourier samples.
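For orientation, here is a minimal sketch of the classical Levinson-Durbin recursion named in the abstract, applied to the symmetric Toeplitz (Yule-Walker) system built from an autocorrelation sequence; the function name and interface are illustrative, and it is the textbook O(p^2) method rather than the least-squares or multi-level extensions developed in this research.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker (symmetric Toeplitz) system via Levinson-Durbin.

    r     : autocorrelation sequence r[0..order]
    order : prediction order p

    Returns the prediction coefficients a[1..p] (sign convention
    x_hat[n] = -sum_k a[k] x[n-k]) and the final prediction-error power.
    Runs in O(p^2) instead of the O(p^3) of a generic dense solver.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # correlation of the current residual with the next lag
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        # order-update: a_new[i] = a[i] + k * a[m - i], with a_new[m] = k
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1]
        err *= (1.0 - k * k)                # prediction-error update
    return a[1:], err
```

Applied to the autocorrelation of an AR process, the recovered coefficients can be cross-checked against a dense solve of the explicit Toeplitz matrix (for example, built with scipy.linalg.toeplitz and solved with numpy.linalg.solve).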
312

A probabilistic and multi-objective conceptual design methodology for the evaluation of thermal management systems on air-breathing hypersonic vehicles

Ordaz, Irian 18 November 2008 (has links)
This thesis addresses the challenges associated with thermal management system (TMS) evaluation and selection in the conceptual design of hypersonic, air-breathing vehicles with sustained cruise. The proposed methodology identifies analysis tools and techniques that allow proper investigation of the design space for various thermal management technologies. The design space exploration environment and the alternative multi-objective decision-making technique, defined as Pareto-based Joint Probability Decision Making (PJPDM), are based on the approximation of 3-D Pareto frontiers and probabilistic technology effectiveness maps, which are generated through the evaluation of a Pareto fitness function and Monte Carlo analysis. In contrast to Joint Probability Decision Making (JPDM), the proposed PJPDM technique does not require preemptive knowledge of weighting factors for competing objectives or goal constraints, which can introduce bias into the final solution; preemptive bias in a complex problem can degrade the overall capabilities of the final design. The implementation of PJPDM in this thesis also eliminates the need for the numerical optimizer that JPDM requires to improve upon a solution. In addition, a physics-based formulation is presented for quantifying TMS safety effectiveness with respect to debris impact and damage, and for applying it toward risk mitigation. Lastly, a formulation loosely based on non-preemptive Goal Programming with equally weighted deviations is provided for the resolution of the inverse design space. This key step helps link vehicle capabilities to TMS technology subsystems in a top-down design approach. The methodology provides the designer with more knowledge up front to help make proper engineering decisions and assumptions in the conceptual design phase regarding which technologies show the greatest promise and how to guide future technology research.
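As a rough illustration of the Pareto-frontier approximation that PJPDM builds on, the sketch below filters Monte Carlo samples down to their non-dominated set with a plain dominance test; the objective surrogates and design variables in the example are assumptions for illustration only, not the thesis's Pareto fitness function or TMS models.

```python
import numpy as np

def pareto_mask(objs):
    """Boolean mask of non-dominated (Pareto-optimal) rows, assuming every
    objective is to be minimized.

    objs : (n_samples, n_objectives) array of Monte Carlo objective
           evaluations, e.g. three competing TMS metrics.
    """
    n = objs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # row i is dominated if some other row is no worse in every
        # objective and strictly better in at least one
        dominates_i = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return keep

# Toy Monte Carlo exploration: perturb two design variables and evaluate
# three notional, conflicting objective surrogates (all assumed).
rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=(2000, 2))
objs = np.column_stack([x[:, 0] + x[:, 1],          # mass surrogate
                        1.0 / x[:, 0],              # pressure-drop surrogate
                        1.0 / (x[:, 1] + 0.1)])     # cooling-deficit surrogate
front = objs[pareto_mask(objs)]                     # approximate 3-D Pareto frontier
```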
313

Compact physical models for power supply noise and chip/package co-design in gigascale integration (GSI) and three-dimensional (3-D) integration systems

Huang, Gang 25 September 2008 (has links)
The objective of this dissertation is to derive a set of compact physical models addressing power integrity issues in high-performance gigascale integration (GSI) systems and three-dimensional (3-D) systems. The aggressive scaling of CMOS integrated circuits makes the design of power distribution networks a serious challenge: the supply current and clock frequency keep increasing, which increases the power supply noise. Although supply-voltage scaling has slowed in recent years, the logic on the integrated circuit (IC) continues to become more sensitive to supply-voltage variations because the clock period, and therefore the noise margin, keeps shrinking. Excessive power supply noise can lead to severe degradation of chip performance and even to logic failure. Therefore, power supply noise modeling and power integrity validation are of great significance in GSI and 3-D systems. Compact physical models enable quick estimation of the power supply noise without dedicated simulations. In this dissertation, accurate and compact physical models for the power supply noise are derived for power-hungry blocks, hot spots, 3-D chip stacks, and chip/package co-design. The impact of noise on transmission-line performance is also investigated using compact physical modeling schemes. The models can help designers gain physical insight into the complicated power delivery system and trade off important chip and package design parameters during the early stages of design. The models are compared with commercial tools and show high accuracy.
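For context only, a first-order estimate of supply noise combining the resistive IR drop with the inductive L·di/dt droop is sketched below; it is a textbook approximation with assumed example numbers, not one of the compact physical models derived in the dissertation (those additionally capture decoupling capacitance, hot spots, and 3-D chip stacking).

```python
def supply_noise_first_order(i_avg, di, dt, r_grid, l_loop):
    """First-order power supply noise: resistive IR drop plus inductive
    L*di/dt droop. A textbook estimate shown for orientation only.

    i_avg  : average supply current drawn by the block [A]
    di, dt : current swing [A] over the switching transient [s]
    r_grid : effective power-grid resistance seen by the block [ohm]
    l_loop : effective loop inductance of the delivery path [H]
    """
    return i_avg * r_grid + l_loop * di / dt

# Illustrative (assumed) numbers: 10 A block, 5 A swing in 100 ps,
# 1 mOhm grid resistance, 1 pH loop inductance.
noise_v = supply_noise_first_order(10.0, 5.0, 100e-12, 1e-3, 1e-12)
print(f"first-order supply noise: {noise_v * 1e3:.0f} mV")
```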
314

Electromagnetic modeling of interconnections in three-dimensional integration

Han, Ki Jin 14 May 2009 (has links)
As the convergence of multiple functions in a single electronic device drives current trends, the need for higher integration density has become more pronounced than ever. To keep up with this industrial need and realize a new system-integration law, three-dimensional (3-D) integration, known as System-on-Package (SoP), is becoming necessary. However, the commercialization of 3-D integration must overcome several technical barriers, one of which is the difficulty of the electrical design of interconnections. Designing 3-D interconnections is difficult because of the challenge of modeling the electrical coupling among the complicated structures of a large number of interconnections. In addition, mixed-signal design requires broadband modeling that covers a large frequency spectrum for integrated microsystems. With currently available methods, the electrical modeling of 3-D interconnections can be a very challenging task. This dissertation proposes a new method for constructing a broadband model of a large number of 3-D interconnections. The basic idea for handling the many interconnections is to use modal basis functions that capture the electrical effects in the interconnections. Since the use of global modal basis functions eliminates the need to discretize the interconnection structure, the computational cost is reduced considerably. The resulting interconnection model is an RLGC model that describes the broadband electrical behavior, including losses and couplings. The smaller number of basis functions makes the interconnection model simpler and therefore allows the generation of network parameters at reduced computational cost. Focusing on the modeling of bonding wires in stacked ICs and through-silicon via (TSV) interconnections, this research validates the modeling approach against several examples from 3-D full-wave EM simulations.
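To make the RLGC description concrete, the sketch below evaluates the standard telegrapher's-equation relations (characteristic impedance, propagation constant, attenuation) from assumed per-unit-length values; it is a generic illustration of what an RLGC model provides, not the modal-basis-function model constructed in this dissertation.

```python
import numpy as np

def rlgc_line(R, L, G, C, freq, length):
    """Characteristic impedance, propagation constant, and attenuation of a
    uniform line from per-unit-length RLGC parameters (standard
    telegrapher's-equation relations).
    """
    w = 2.0 * np.pi * np.asarray(freq)
    Z = R + 1j * w * L                      # series impedance per unit length
    Y = G + 1j * w * C                      # shunt admittance per unit length
    gamma = np.sqrt(Z * Y)                  # propagation constant alpha + j*beta
    Z0 = np.sqrt(Z / Y)                     # characteristic impedance
    atten_db = 8.686 * gamma.real * length  # attenuation over the segment [dB]
    return Z0, gamma, atten_db

# Roughly 50-ohm-like per-metre values (assumed), evaluated from 1 to 20 GHz
f = np.linspace(1e9, 20e9, 20)
Z0, gamma, loss = rlgc_line(R=10.0, L=2.5e-7, G=1e-3, C=1e-10,
                            freq=f, length=1e-3)  # 1 mm segment
```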
315

Formulation de la tomographie des temps de première arrivée par une méthode de gradient : un pas vers une tomographie interactive / First-arrival traveltime tomography formulated with a gradient method: a step towards interactive tomography

Taillandier, Cédric 02 December 2008 (has links) (PDF)
First-arrival traveltime tomography seeks to estimate a seismic-wave velocity model from first-arrival times picked on seismograms. The resulting velocity model can then support a structural interpretation of the medium or serve as an initial model for other seismic-imaging processes. Applications of the method range, across different scales, from geotechnics to seismology by way of petroleum geophysics. The geophysicist's know-how plays an important role in solving this difficult, non-linear and ill-posed tomographic problem, and much research has sought to facilitate and improve its solution through mathematical or physical approaches. In this work, we develop a pragmatic approach: we consider that the tomographic problem should be solved by an interactive algorithm whose tuning parameters are clearly defined. The interactive nature of the algorithm eases the acquisition of tomographic know-how because it allows many simulations with different parameterizations to be run in a reasonable time. The goal of this thesis is to define, for the specific case of first-arrival traveltime tomography, an algorithm that best satisfies these criteria. The first-arrival traveltime tomography algorithms in common use today do not meet our criteria for a pragmatic approach: their implementation does not exploit the parallel architecture of current supercomputers to reduce computation time, and their use requires a parameterization made complex by the resolution of the tomographic linear system. All of these practical limitations stem from the very formulation of the algorithm around the Gauss-Newton method. This thesis rests on the idea of formulating the solution of the tomographic problem with the steepest-descent method in order to overcome these limitations. The key step of this formulation is the computation of the gradient of the cost function with respect to the model parameters. We use the adjoint-state method and a method based on a posteriori ray tracing to compute this gradient; the two methods differ in their formulation, non-linear and linearized respectively, and in their practical implementation. We then clearly define the parameterization of the new tomography algorithm and validate its practical properties on a supercomputer: direct and efficient parallelization, a memory footprint independent of the number of observed data, and simple implementation. Finally, we present tomography results for refraction-type seismic acquisitions, 2-D and 3-D, synthetic and real, marine and land, which confirm the good behavior of the algorithm in terms of both results and stability. Running a large number of simulations was made possible by the speed of the algorithm, on the order of a few minutes in 2-D.
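A schematic of the steepest-descent loop described above is sketched below; the misfit and gradient callables (for example, an adjoint-state gradient) are hypothetical placeholders, since the eikonal solver and adjoint computation are not reproduced here.

```python
import numpy as np

def steepest_descent_tomography(m0, misfit_fn, grad_fn, n_iter=20, step0=1.0):
    """Schematic steepest-descent loop for traveltime tomography.

    m0        : initial velocity (or slowness) model, flattened to a 1-D array
    misfit_fn : callable returning the traveltime misfit C(m)
    grad_fn   : callable returning dC/dm, e.g. via the adjoint-state method
                (both callables are user-supplied placeholders here)

    Only the current model, misfit, and gradient are stored, which is why the
    memory footprint is independent of the number of observed traveltimes.
    """
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        f0 = misfit_fn(m)
        g = grad_fn(m)
        step = step0
        # crude backtracking line search along the negative gradient
        while step > 1e-8 and misfit_fn(m - step * g) >= f0:
            step *= 0.5
        m = m - step * g
    return m
```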
316

Méthode de mesure tridimensionnelle active appliquée au contexte de l’analyse endoscopique ou coloscopique / Three dimensional measurement method in the context of endoscopic or coloscopic analysis

Dupont, Erwan 10 July 2015 (has links)
This thesis aims at developing a three-dimensional endoscopic measurement device in a double context. The first is industrial, with endoscopic inspection of mechanical parts in constrained (notably tubular) environments at micrometric resolution; the second is medical, with three-dimensional shape detection during colonoscopy to aid the surgeon's diagnosis. In this study, flexible endoscopy is made possible by using image guides, and the three-dimensional reconstruction method is based on active stereovision in which a digital micro-mirror device spatially structures the incoming light. After establishing the state of the art, an optical design and evaluation method applied to stereovision for flexible endoscopic devices is described. The instrumental realization is then detailed and metrologically evaluated, along with an innovative method that allows dynamic switching between active and passive stereovision. Finally, 3D reconstruction algorithms adapted to this endoscopic instrument are proposed. The scientific contributions of this study are multiple. First, an optical analysis method based on the modulation transfer function for designing an endoscopic stereovision system is proposed. An image-processing method for robust calibration in a defocused optical environment and a new phase-shifting algorithm for 3D reconstruction are also contributions of the study. Finally, a realization principle for 3D measurement in flexible endoscopy was derived from the combination of these methods.
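As background for the phase-shifting step mentioned above, the sketch below implements the classical N-step phase-retrieval formula for sinusoidal fringe images; it illustrates the principle only and is not the new phase-shifting algorithm proposed in the thesis, whose adaptation to the defocused, fibre-guided setup is not reproduced here.

```python
import numpy as np

def n_step_phase(images):
    """Classical N-step phase-shifting phase retrieval (N >= 3).

    images : sequence of N fringe images I_n acquired with phase shifts
             2*pi*n/N, n = 0..N-1, each of shape (H, W).
    Returns the wrapped phase map in (-pi, pi].
    """
    imgs = np.asarray(images, dtype=float)
    n = imgs.shape[0]
    shifts = 2.0 * np.pi * np.arange(n) / n
    # weighted sums over the image stack: sum_n I_n sin(d_n), sum_n I_n cos(d_n)
    num = np.tensordot(np.sin(shifts), imgs, axes=1)
    den = np.tensordot(np.cos(shifts), imgs, axes=1)
    # for I_n = A + B cos(phi + d_n), tan(phi) = -num / den
    return np.arctan2(-num, den)
```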
317

MOIRAE : a computational strategy to predict 3-D structures of polypeptides

Dorn, Márcio January 2012 (has links)
Currently, one of the main research problems in Structural Bioinformatics is associated with the study and prediction of the 3-D structure of proteins. The genome projects of the 1990s resulted in a large increase in the number of protein sequences. However, the number of experimentally determined 3-D protein structures has not followed the same growth trend: the number of protein sequences is much higher than the number of known 3-D structures. Many computational methodologies, systems and algorithms have been proposed to address the protein structure prediction problem. However, the problem remains challenging because of the complexity and high dimensionality of the protein conformational search space. This work presents a new computational strategy for the 3-D protein structure prediction problem: a first-principles strategy that uses database information to predict the 3-D structure of polypeptides. The proposed technique extracts structural information from the PDB in order to generate torsion-angle intervals. These intervals are used as input to a genetic algorithm with a local-search operator that searches the protein conformational space and predicts its 3-D structure. Results show that the 3-D structures obtained by the proposed method were topologically comparable to their corresponding experimental structures.
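A minimal sketch of a genetic-algorithm search over torsion-angle vectors constrained to intervals is given below; the energy function, operators, and parameters are illustrative placeholders, not the actual MOIRAE implementation or its local-search operator.

```python
import random

def ga_torsion_search(intervals, energy_fn, pop_size=50, n_gen=100,
                      mut_rate=0.1, elite=2):
    """Genetic-algorithm sketch over torsion-angle vectors.

    intervals : list of (low, high) bounds, one per torsion angle (at least
                two), e.g. derived from PDB statistics as in the strategy above
    energy_fn : callable scoring a conformation (lower is better); a
                hypothetical placeholder for the real scoring function
    """
    def sample():
        return [random.uniform(lo, hi) for lo, hi in intervals]

    pop = [sample() for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=energy_fn)                     # best individuals first
        next_pop = pop[:elite]                      # elitism
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            cut = random.randrange(1, len(intervals))
            child = p1[:cut] + p2[cut:]             # one-point crossover
            for i, (lo, hi) in enumerate(intervals):
                if random.random() < mut_rate:      # bounded mutation
                    child[i] = random.uniform(lo, hi)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=energy_fn)
```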
318

[en] ESTIMATES OF VOLUMETRIC CURVATURE ATTRIBUTES IN SEISMIC DATA / [pt] ESTIMATIVAS DE ATRIBUTOS VOLUMÉTRICOS DE CURVATURA EM DADOS SÍSMICOS

LEONARDO DE OLIVEIRA MARTINS 24 September 2018 (has links)
[en] Curvature attributes are powerful tools for the visualization and interpretation of structural features in seismic data. Such measures may highlight faults and subtle fractures that are not evident in the amplitude data, providing the interpreter with important information for building the geological model of the area of interest. This work presents a method for estimating volumetric curvature attributes in post-stack seismic data. From the amplitude volume, a horizon-identifier attribute is computed so that seismic horizons can be represented as level surfaces. The gradient of this attribute then provides a coherent estimate of the volumetric normal field. Formulas for the curvature of implicit surfaces are used to compute several curvature attributes useful for delineating and predicting important stratigraphic features. Tests with synthetic and real data show that the proposed method provides consistent curvature-attribute estimates at low processing cost. Three horizon-identifier attributes are evaluated: instantaneous phase, vertical derivative, and a ridge attribute.
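The sketch below shows the kind of level-set curvature computation the abstract describes: the gradient of a horizon-identifier volume gives a normal field, and the divergence of the unit normals yields a curvature attribute; unit grid spacing and the specific attribute choice are simplifying assumptions.

```python
import numpy as np

def level_set_curvature(phi, eps=1e-12):
    """Curvature attribute of the level surfaces of a 3-D scalar field phi
    (e.g. a horizon-identifier attribute volume), computed as
    div( grad(phi) / |grad(phi)| ) with central finite differences.

    The returned value is the sum of the two principal curvatures of the
    level surface at each voxel; grid spacing is assumed to be 1 in all
    directions.
    """
    gx, gy, gz = np.gradient(phi)                       # normal field (unnormalized)
    norm = np.sqrt(gx**2 + gy**2 + gz**2) + eps
    nx, ny, nz = gx / norm, gy / norm, gz / norm        # unit normals
    return (np.gradient(nx, axis=0) +
            np.gradient(ny, axis=1) +
            np.gradient(nz, axis=2))                    # divergence of unit normals
```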
320

Robust face recognition based on three dimensional data / La reconnaissance faciale robuste utilisant les données trois dimensions

Huang, Di 09 September 2011 (has links)
The face is one of the best biometrics for person identification and verification applications, because it is natural, non-intrusive, and socially well accepted. Unfortunately, all human faces are similar to one another and hence offer low distinctiveness compared with other biometrics, e.g., fingerprints and irises. Furthermore, when employing facial texture images, intra-class variations due to factors as diverse as illumination and pose changes are usually greater than inter-class ones, making 2D face recognition far from reliable in real conditions. Recently, 3D face data have been extensively investigated by the research community to deal with the unsolved issues in 2D face recognition, i.e., illumination and pose changes. This Ph.D. thesis is dedicated to robust face recognition based on three-dimensional data, covering shape-only 3D face recognition, textured 3D face recognition, and asymmetric 3D-2D face recognition. In shape-only 3D face recognition, since 3D face data such as facial point clouds and facial scans are theoretically insensitive to lighting variations and generally allow easy pose correction using an ICP-based registration step, the key problem lies in how to represent 3D facial surfaces accurately and achieve matching that is robust to facial expression changes. In this thesis, we design an effective and efficient approach to shape-only 3D face recognition. For facial description, we propose a novel geometric representation based on extended Local Binary Pattern (eLBP) depth maps, which comprehensively describes local geometry changes of 3D facial surfaces; a SIFT-based local matching process, further improved by facial-component and configuration constraints, associates keypoints between corresponding facial representations of different facial scans belonging to the same subject. Evaluated on the FRGC v2.0 and Gavab databases, the proposed approach proves its effectiveness. Furthermore, owing to the use of local matching, it does not require registration for nearly frontal facial scans and only needs a coarse alignment for scans with severe pose variations, in contrast to most related approaches, which rely on a time-consuming fine registration step. Considering that most current 3D imaging systems deliver 3D face models along with their aligned texture counterpart, a major trend in the literature is to adopt both the 3D shape and 2D texture modalities, arguing that the joint use of both cues generally provides more accurate and robust performance than either modality alone. Two important factors in this approach are the facial representation applied to both types of data and the fusion of the per-modality results. 
In this thesis, we propose a biological-vision-based facial representation, named Oriented Gradient Maps (OGMs), which can be applied to both facial range and texture images. The OGMs simulate the response of complex neurons to gradient information within a given neighborhood and have the properties of being highly distinctive and robust to affine illumination and geometric transformations. The previously proposed matching process is then adopted to calculate similarity measurements between probe and gallery faces. Because the biological-vision-based facial representation produces an OGM for each quantized orientation of the facial range and texture images, we finally use a score-level fusion strategy that optimizes the weights with a genetic algorithm in a learning process. The experimental results achieved on the FRGC v2.0 and 3DTEC datasets demonstrate the effectiveness of the proposed biological-vision-based facial description and the optimized weighted-sum fusion. [...]
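As background for the eLBP representation discussed above, a minimal sketch of the basic 8-neighbour Local Binary Pattern operator is given below; it illustrates only the underlying principle, while the thesis's eLBP extension (which also encodes the magnitudes of the local depth differences) and the OGM representation are not reproduced.

```python
import numpy as np

def basic_lbp(img):
    """Basic 8-neighbour Local Binary Pattern codes for a 2-D image
    (depth/range or texture). Each pixel is encoded by comparing its 8
    neighbours against the centre value, one bit per neighbour.
    """
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode='edge')
    center = padded[1:-1, 1:-1]
    # neighbour offsets in clockwise order, one bit per neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:padded.shape[0] - 1 + dy,
                       1 + dx:padded.shape[1] - 1 + dx]
        codes += (neigh >= center).astype(np.int64) << bit
    return codes  # values in [0, 255]; histograms of the codes form the descriptor
```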
