361

Surface reconstruction using variational interpolation

Joseph Lawrence, Maryruth Pradeepa 24 November 2005
Surface reconstruction of anatomical structures is an integral part of medical modeling. Contour information is extracted from serial cross-sections of tissue data and is stored as "slice" files. Although there are several reasonably efficient triangulation algorithms that reconstruct surfaces from slice data, the models they generate have a jagged or faceted appearance due to the large inter-slice distance created by the sectioning process. Moreover, inconsistencies in user input aggravate the problem. We therefore created a method that reduces the effective inter-slice distance and ignores inconsistencies in the user input. Our method, called piecewise weighted implicit functions, is based on weighting smaller implicit functions and takes only a few slices at a time to construct each implicit function. It builds on a technique called variational interpolation.

Other approaches based on variational interpolation have the disadvantage of becoming unstable when the model is large, with more than a few thousand constraint points. Furthermore, tracing the intermediate contours becomes expensive for large models. Even though some fast fitting methods handle such instability problems, there is no apparent improvement in contour tracing time, because the value of each data point on the contour boundary is evaluated using a single large implicit function that essentially uses all constraint points. Our method handles both of these problems using a sliding window approach. As it uses only a local domain to construct each implicit function, it achieves a considerable run-time saving over the other methods. The resulting software produces interpolated models from large data sets in a few minutes on an ordinary desktop computer.
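
As an illustration of the underlying machinery only (a sketch, not the authors' software), a variational interpolant over 2-D constraint points can be fitted by solving one dense linear system built from a radial basis kernel; the thin-plate kernel and the helper names below are assumptions made for the example.

    import numpy as np

    def fit_variational_interpolant(points, values):
        """Fit f(x) = sum_i w_i * phi(|x - c_i|) + a0 + a . x through the constraints.

        points : (n, 2) constraint locations (e.g. on-contour and offset normal points)
        values : (n,) prescribed values (0 on the contour, +/-1 at the normal points)
        """
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            A = np.where(d > 0, d**2 * np.log(d), 0.0)      # thin-plate kernel, phi(0) = 0
        P = np.hstack([np.ones((n, 1)), points])            # affine part [1, x, y]
        M = np.block([[A, P], [P.T, np.zeros((3, 3))]])     # bordered interpolation system
        sol = np.linalg.solve(M, np.concatenate([values, np.zeros(3)]))
        return sol[:n], sol[n:]                             # RBF weights, affine coefficients

    def evaluate(x, centers, weights, affine):
        d = np.linalg.norm(np.asarray(x)[None, :] - centers, axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            phi = np.where(d > 0, d**2 * np.log(d), 0.0)
        return phi @ weights + affine[0] + affine[1:] @ np.asarray(x)

The piecewise weighted scheme described above would fit many such small interpolants over a sliding window of slices and blend them, rather than solving one global system over every constraint point.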
362

Low-Frequency RCS (SER) Characterization and Characteristic Modes

Cognault, Aurore 28 April 2009 (has links) (PDF)
The radar cross section (RCS, French SER) is the quantity that measures the reflective power of an object or, conversely, its electromagnetic stealth. Controlling the RCS, and if possible reducing it, is a major challenge in defense aeronautics; in particular, it is a key to the survivability of aircraft. Historically, the RADAR frequencies of interest were those of the Super High Frequency band, corresponding to wavelengths of 2 to 30 centimetres. Suitable analysis tools and facilities for measuring or characterizing RCS were developed for that band and have proven extremely effective, one example being the CAMELIA anechoic chamber at CESTA. At low frequencies, however, accurate measurements are more difficult: for wavelengths of 1 to 5 metres the absorbers are often too thin, and even the dimensions of the anechoic chambers amount to only a few wavelengths. The objective of this thesis was to propose and study new algorithms that improve or facilitate low-frequency RCS characterization.

The notion of characteristic currents, introduced by Harrington and Mautz in the 1970s and later taken up by Y. Morel for perfectly conducting objects, allows any induced current to be decomposed into elementary currents; the characteristic modes are obtained by letting these characteristic currents radiate. However, no tool existed for determining the modes when the object is no longer perfectly conducting, so we built and validated such a tool. We first revisited the mathematical framework that defines the perturbation operator, its mathematical properties and its eigendecomposition, and showed that the discretized operator retains these properties. We then validated our method for directly computing the characteristic modes obtained by diagonalizing the discretized perturbation operator. Next, we carried out phenomenological studies: we observed how the eigenelements of the perturbation operator evolve with the impedance, examined the particular case of an impedance equal to 1, and studied the behaviour as the frequency varies. By concentrating on the eigenvalues we were able to distinguish two types of modes.

Finally, we detailed several concrete applications of this mode-determination method that improve or facilitate low-frequency RCS characterization. The ORFE tool (Outil de Reformulation, Filtrage et Extrapolation de données) attenuates the error terms inherent in any characterization and extrapolates existing data to configurations that were not acquired or are not accessible to measurement; it has led to a patent. A low-frequency RCS interpolation tool was also built, which gives better results than linear interpolation of the RCS. We also developed a low-frequency imaging method that locates possible metallization defects on the object under consideration, using the basis of characteristic currents. Lastly, we presented an RCS characterization methodology that takes the limitations of the measurement facilities into account. We showed that this characterization provides absolute information on the RCS of the object within a domain of validity; a patent has been filed on this method.
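
For reference only (the classical perfectly conducting case, not the thesis's perturbation-operator generalization to impedance surfaces), the Harrington-Mautz characteristic modes come from splitting the method-of-moments impedance operator and solving a generalized eigenvalue problem, after which any induced current expands on the modes:

    Z = R + jX, \qquad X\,\mathbf{J}_n = \lambda_n\, R\,\mathbf{J}_n, \qquad
    \mathbf{J} = \sum_n \frac{\langle \mathbf{J}_n, \mathbf{E}^{i} \rangle}{1 + j\lambda_n}\,\mathbf{J}_n,
    \qquad \langle \mathbf{J}_m, R\,\mathbf{J}_n \rangle = \delta_{mn}.

Modes with small |\lambda_n| radiate efficiently, which is why tracking the eigenvalues (for example versus frequency or surface impedance) is a natural way to classify modes.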
363

Contribution to Improving the Quality of Surfaces Machined on 5-Axis Machining Centres

Tournier, Christophe 01 October 2009 (has links) (PDF)
The topic addressed is the manufacturing of parts with complex shapes on 5-axis machining centres, and more specifically tool-path generation and its execution on the machines. Four distinct themes are covered in the four proposed chapters: taking the kinematic performance of the machine tool/CNC pair into account in CAM, taking the geometric model of the machine into account in CAM, the data exchange and description formats in the digital manufacturing chain, and the industrialization of automatic polishing on 5-axis CNC machine tools. Lastly, the third part gathers the articles published in international journals, which are proposed as reference documents for the developments in the second part.
364

Real-Time View-Interpolation System for Super Multi-View 3D Display

HONDA, Toshio, FUJII, Toshiaki, HAMAGUCHI, Tadahiko 01 January 2003 (has links)
No description available.
365

Nonlinear model reduction via discrete empirical interpolation

January 2012 (has links)
This thesis proposes a model reduction technique for nonlinear dynamical systems based upon combining Proper Orthogonal Decomposition (POD) and a new method, called the Discrete Empirical Interpolation Method (DEIM). The popular method of Galerkin projection with POD basis reduces dimension in the sense that far fewer variables are present, but the complexity of evaluating the nonlinear term generally remains that of the original problem. DEIM, a discrete variant of the approach from [11], is introduced and shown to effectively overcome this complexity issue. State space error estimates for POD-DEIM reduced systems are also derived. These [Special characters omitted.] error estimates reflect the POD approximation property through the decay of certain singular values and explain how the DEIM approximation error involving the nonlinear term comes into play. An application to the simulation of nonlinear miscible flow in a 2-D porous medium shows that the dynamics of a complex full-order system of dimension 15000 can be captured accurately by the POD-DEIM reduced system of dimension 40, with a factor of O(1000) reduction in computational time.
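
A minimal sketch of the DEIM point-selection step, written from the published description of the algorithm rather than from this thesis (the greedy loop picks, for each new basis vector, the location where the current interpolation error is largest):

    import numpy as np

    def deim_indices(U):
        """Greedy DEIM interpolation indices for a basis U of nonlinear-term snapshots.

        U : (n, m) matrix whose columns are POD modes of the nonlinearity.
        The DEIM approximation of a nonlinear vector f is then U @ solve(U[p, :], f[p]).
        """
        n, m = U.shape
        p = [int(np.argmax(np.abs(U[:, 0])))]
        for l in range(1, m):
            c = np.linalg.solve(U[p, :l], U[p, l])   # interpolate column l at chosen points
            r = U[:, l] - U[:, :l] @ c               # residual of that interpolation
            p.append(int(np.argmax(np.abs(r))))      # next point: largest residual entry
        return np.array(p)

Because the reduced nonlinear term is then evaluated only at these m indices, its cost no longer scales with the full dimension (15000 above), which is what decouples the nonlinearity from the size of the original system.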
366

Automatic Identification of Cliffs for Orienteering Maps

Sundlöf, Martin, Persson, Hans January 2011 (has links)
Orienteering is a sport in which the aim is to visit a number of predefined control points with the help of a map. The orienteering map shows objects that exist in the terrain, such as rocks, pits, knolls and cliffs. Producing an orienteering map is expensive and time consuming: roughly 120,000-150,000 SEK and 20-30 h/km² of field work are invested in every map. Since orienteering maps are produced by non-profit clubs, any approach that makes map production cheaper is welcome.

In this degree project a function was created in an existing program called OL Laser. The aim of the function is to automatically identify cliffs in laser data for use as base material in the production of orienteering maps. To be counted as an orienteering cliff, three requirements must be fulfilled: at least 1 m height difference, at least 1 m extent, and a gradient greater than 85°. These requirements were determined by supplementing the restrictions in the International Orienteering Federation's regulations for orienteering maps with our own measurements in three reference areas around Gävle. The function was then programmed so that, at the click of a button, a search through a height raster is started; step by step the raster is scanned for pixels that meet the given parameters for height difference, extent and gradient. The parameter values were determined by calibrating the function against the reference areas, which was necessary to make automatic identification of cliffs possible. The settings used after calibration were a gradient of 42.5°, a height difference of 0.6 m and an extent of at least two connected pixels. The pixels that the function identifies as cliffs constitute the result.

The results show that the function is able to find cliffs automatically, even in areas against which it was not calibrated. To use the cliffs on an orienteering map, a cartographer must verify the result of the function in the field. Using the function saves both time and money in the production of orienteering maps.
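
A rough illustration of the kind of raster test described above (a sketch only, not the OL Laser implementation; the neighbourhood choice, array names and the use of SciPy are assumptions):

    import numpy as np
    from scipy import ndimage

    def find_cliff_pixels(dem, cell_size=1.0, slope_deg=42.5, min_dz=0.6, min_pixels=2):
        """Mark DEM pixels that look like orienteering cliffs.

        dem       : 2-D array of ground heights (m)
        cell_size : raster resolution (m)
        """
        gy, gx = np.gradient(dem, cell_size)
        slope = np.degrees(np.arctan(np.hypot(gx, gy)))           # steepness per pixel

        # Height difference within a 3x3 neighbourhood of each pixel.
        dz = ndimage.maximum_filter(dem, size=3) - ndimage.minimum_filter(dem, size=3)

        candidate = (slope > slope_deg) & (dz >= min_dz)

        # Keep only groups of at least `min_pixels` connected candidate pixels.
        labels, n = ndimage.label(candidate)
        sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
        return np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)

As noted above, a cartographer would still verify the flagged pixels in the field before they are drawn on a map.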
367

Design and Implementation of a high-efficiency low-power analog-to-digital converter for high-speed transceivers

Younis, Choudhry Jabbar January 2012 (has links)
Modern communication systems require higher data rates, which has increased the demand for high-speed transceivers. For a system to work efficiently, all blocks of that system should be fast. Analog interfaces are the main bottleneck in the whole system in terms of speed and power, which has led researchers to develop high-speed analog-to-digital converters (ADCs) with low power consumption. Among all ADC architectures, the flash ADC is the best choice for fast data conversion because of its parallel structure. This thesis work describes the design of such a high-speed, low-power flash ADC for the analog front end (AFE) of a transceiver. A high-speed, highly linear track-and-hold (TnH) circuit is needed in front of the ADC to provide a stable signal at the ADC input for accurate conversion. Two different track-and-hold architectures are implemented: a bootstrap TnH and a switched source follower TnH. Simulations show that high speed with high linearity can be achieved with the bootstrap TnH circuit, which is therefore selected for the ADC design. An averaging technique is employed in the preamplifier array of the ADC to reduce the static offsets of the preamplifiers. Averaging can be made more efficient by using fewer amplifiers, which is achieved with an interpolation technique that reduces the number of amplifiers at the input of the ADC. The reduced number of amplifiers is also advantageous for obtaining a higher bandwidth, since the input capacitance of the first preamplifier stage is reduced. The flash ADC is designed and implemented in 150 nm CMOS technology for a sampling rate of 1.6 GSamples/s. The bootstrap TnH consumes 27.95 mW from a 1.8 V supply and achieves a signal-to-noise-and-distortion ratio (SNDR) of 37.38 dB for an input signal frequency of 195.3 MHz. The ADC with an ideal TnH and comparator consumes 78.2 mW and achieves 4.8 effective number of bits (ENOB).
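
For readers unfamiliar with the figures of merit, SNDR and effective number of bits are related by the standard expression ENOB = (SNDR - 1.76 dB) / 6.02 dB; a one-line check (a hypothetical helper, not code from the thesis) applied to the quoted track-and-hold figure:

    def enob(sndr_db: float) -> float:
        """Effective number of bits from SNDR in dB (standard ideal-quantizer relation)."""
        return (sndr_db - 1.76) / 6.02

    print(round(enob(37.38), 2))   # ~5.92 bits for the TnH alone; the complete ADC reaches 4.8 ENOB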
368

Processing Techniques of Aeromagnetic Data. Case Studies from the Precambrian of Mozambique

Magaia, Luis January 2009 (has links)
During 2002-2006, geological field work was carried out in Mozambique. The purpose was to check the preliminary geological interpretations, to resolve problems that arose during the compilation of the preliminary geological maps, and to collect samples for laboratory studies. In parallel, airborne geophysical data were collected in many parts of the country to support the geological interpretation and the compilation of geophysical maps. In the present work the aeromagnetic data collected in 2004 and 2005 in two small areas in the northwest of Niassa province and another in the eastern part of Tete province are analysed using Geosoft™. The processing of the aeromagnetic data began with the removal of diurnal variations and corrections for the IGRF model of the Earth. The effect of height variations on the recorded magnetic field was examined, along with levelling and interpolation techniques. La Porte interpolation proved to be a good tool for interpolating aeromagnetic data using the measured horizontal gradient. Depth estimation techniques were also used to obtain a semi-quantitative interpretation of geological bodies. It was shown that many features in the study areas are located at shallow depth (less than 500 m) and few geological features are located at depths greater than 1000 m. This interpretation could be used to draw conclusions about the geology or be incorporated into further investigations in these areas.
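
A simplified sketch of the first two corrections mentioned, diurnal removal against a base-station record and subtraction of the IGRF main field (the array names and per-point IGRF values are assumptions; real processing in Geosoft also involves lag, heading and levelling corrections):

    import numpy as np

    def basic_corrections(tmi, fiducial_time, base_time, base_field, igrf):
        """Remove diurnal variation and the IGRF main field from total-field readings.

        tmi           : measured total magnetic intensity along the flight line (nT)
        fiducial_time : acquisition time of each reading
        base_time, base_field : base-station diurnal record (time, nT)
        igrf          : IGRF model value at each reading location and epoch (nT)
        """
        # Interpolate the base-station record to the acquisition times and remove
        # its departure from the daily mean (the diurnal variation).
        diurnal = np.interp(fiducial_time, base_time, base_field) - np.mean(base_field)
        return tmi - diurnal - igrf   # residual (anomaly) field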
369

Efficient Computation with Sparse and Dense Polynomials

Roche, Daniel Steven January 2011 (has links)
Computations with polynomials are at the heart of any computer algebra system and also have many applications in engineering, coding theory, and cryptography. Generally speaking, the low-level polynomial computations of interest can be classified as arithmetic operations, algebraic computations, and inverse symbolic problems. New algorithms are presented in all these areas which improve on the state of the art in both theoretical and practical performance. Traditionally, polynomials may be represented in a computer in one of two ways: as a "dense" array of all possible coefficients up to the polynomial's degree, or as a "sparse" list of coefficient-exponent tuples. In the latter case, zero terms are not explicitly written, giving a potentially more compact representation. In the area of arithmetic operations, new algorithms are presented for the multiplication of dense polynomials. These have the same asymptotic time cost as the fastest existing approaches, but reduce the intermediate storage required from linear in the size of the input to a constant amount. Two different algorithms for so-called "adaptive" multiplication are also presented, which effectively provide a gradient between existing sparse and dense algorithms, giving a large improvement in many cases while never performing significantly worse than the best existing approaches. Algebraic computations on sparse polynomials are considered as well. The first known polynomial-time algorithm to detect when a sparse polynomial is a perfect power is presented, along with two different approaches to computing the perfect power factorization. Inverse symbolic problems are those for which the challenge is to compute a symbolic mathematical representation of a program or "black box". First, new algorithms are presented which improve the complexity of interpolation for sparse polynomials with coefficients in finite fields or approximate complex numbers. Second, the first polynomial-time algorithm for the more general problem of sparsest-shift interpolation is presented. The practical performance of all these algorithms is demonstrated with implementations in a high-performance library and compared to existing software and previous techniques.
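
To make the two representations concrete (a toy sketch, not the thesis's library): a dense polynomial stores every coefficient up to its degree, while a sparse one stores only the nonzero exponent-coefficient pairs, so arithmetic can walk only the nonzero terms.

    # Dense: index = exponent.  5 + 3x^2  ->  [5, 0, 3]
    dense = [5, 0, 3]

    # Sparse: {exponent: coefficient}.  5 + 7x^1000 has only two stored terms.
    sparse = {0: 5, 1000: 7}

    def sparse_mul(f, g):
        """Schoolbook product of two sparse polynomials given as exponent->coefficient dicts."""
        h = {}
        for ef, cf in f.items():
            for eg, cg in g.items():
                e = ef + eg
                h[e] = h.get(e, 0) + cf * cg
        return {e: c for e, c in h.items() if c != 0}

    print(sparse_mul(sparse, {0: 1, 1: -1}))   # (5 + 7x^1000)(1 - x)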
