11

Cascade adaptive array structures

Hanson, Timothy B. January 1990 (has links)
No description available.
12

Design, Analysis and Fabrication of Complex Structures using Voxel-based modeling for Additive Manufacturing

Tedia, Saish 20 November 2017 (has links)
A key advantage of Additive Manufacturing (AM) is the opportunity to design and fabricate complex structures that cannot be made via traditional means. However, this potential is significantly constrained by the use of facet-based geometry representations (e.g., the STL and AMF file formats), which do not contain any volumetric information; often, designing, slicing, and printing complex geometries exceeds the computational power available to the designer and to the AM system itself. To enable efficient design and fabrication of complex, multi-material structures, several algorithms are presented that represent and process solid models as a set of voxels (three-dimensional pixels). Through this, one is able to efficiently realize parts featuring complex geometries and functionally graded materials. This thesis specifically aims to explore applications in three distinct fields, namely (i) Design for AM, (ii) Design for Manufacturing (DFM) education, and (iii) reverse engineering from imaging data, wherein voxel-based representations have proven superior to the traditional AM digital workflow. The advantages demonstrated in this study cannot be easily achieved using traditional AM workflows, and hence this work emphasizes the need for new voxel-based frameworks and systems to fully utilize the capabilities of AM. / MS / Additive Manufacturing (AM), also referred to as 3D printing, is a process by which 3D objects are constructed by successively forming one cross-section at a time. Typically, the input for most AM systems is a surface representation format, most commonly the .STL file format. An STL file is a triangular representation of a three-dimensional surface geometry in which the part surface is logically broken down into a series of small triangles (facets). A key advantage of Additive Manufacturing is the opportunity to design and fabricate complex structures that cannot easily be made via traditional manufacturing techniques. However, this potential is significantly constrained by the use of a facet-based (triangular) geometry representation (e.g., the STL file format described above), which does not contain any volumetric information (e.g., material, texture, or color). Moreover, designing, slicing, and printing complex geometries using these file formats can be computationally expensive. To enable more efficient design and fabrication of complex, multi-material structures, several algorithms are presented that represent and process solid models as a set of voxels (three-dimensional pixels). A voxel is the smallest representable element of volume. In a binary voxel model, a value of '1' means the voxel is 'on' and a value of '0' means it is 'off'. Through this, one is able to efficiently realize parts featuring complex geometries with multiple materials. This thesis specifically aims to explore applications in three distinct fields, namely (i) Design for AM, (ii) Design for Manufacturing (DFM) education, and (iii) fabricating models (reverse engineering) directly from imaging data. In the first part of the thesis, a software tool is developed for automated manufacturability analysis of a part that is to be produced by AM. Through a series of simple computations, the tool provides feedback on infeasible features, the amount of support material, the optimum orientation, and the manufacturing time for fabricating the part.
The results from this tool were successfully validated using a simple case study and a comparison with an existing pre-processing AM software package. Next, the software tool is used for teaching in a sophomore undergraduate classroom to improve students' understanding of design constraints in Additive Manufacturing. Assessments of students' understanding of a variety of manufacturability topics are conducted both before and after the study to gauge the effectiveness of this approach. The third and final part of this thesis explores fabrication of models directly from medical imaging data (such as CT scans and MRI). A novel framework is proposed and validated by fabricating three distinct medical models, a mouse skull, a partial human skull, and a horse leg, directly from the corresponding CT scan data. The advantages demonstrated in this thesis cannot be easily achieved using traditional AM workflows, and hence this work emphasizes the need for new voxel-based frameworks and systems to fully utilize the capabilities of AM.
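As a rough, self-contained illustration of the binary voxel representation described above (not the thesis's actual tool), the Python sketch below stores a part as a 3-D array of 0/1 voxels and uses a crude overhang test to count voxels that would need support material for a given build direction; the grid size, example geometry, and support rule are all assumptions made for the example.

```python
import numpy as np

# Hypothetical binary voxel model: 1 = material, 0 = empty.
# Axes: (x, y, z), with z as the build direction.
nx, ny, nz = 40, 40, 40
voxels = np.zeros((nx, ny, nz), dtype=np.uint8)

# Example geometry (assumed): a sphere resting on a thin pedestal.
x, y, z = np.indices((nx, ny, nz))
sphere = (x - 20) ** 2 + (y - 20) ** 2 + (z - 25) ** 2 <= 10 ** 2
pedestal = (np.abs(x - 20) <= 2) & (np.abs(y - 20) <= 2) & (z < 15)
voxels[sphere | pedestal] = 1

def support_voxels(model: np.ndarray) -> int:
    """Count voxels that would need support: material voxels (above the build
    plate) with no material anywhere below them in the same (x, y) column."""
    below = np.cumsum(model, axis=2) - model      # material count strictly below
    above_plate = np.arange(model.shape[2]) > 0   # z = 0 rests on the build plate
    return int(np.sum((model == 1) & (below == 0) & above_plate))

print("material voxels:", int(voxels.sum()))
print("unsupported voxels (proxy for support material):", support_voxels(voxels))
```

Because every query is a local array operation, this kind of check scales with the number of voxels rather than with geometric complexity, which is the property the thesis exploits.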
13

Highly efficient supply modulator for mobile communication systems

Kim, Eung Jung 20 May 2011 (has links)
Switching-frequency modulation techniques, an inductor current sensing circuit for a fast switching converter, and a dual converter are proposed, and simulation and experimental results are presented. The experimental results for the monotonic and pseudo-random modulation techniques show that the switching noise peak was effectively reduced by as much as 19 dBc. The inductor current sensing circuit accurately tracks the output current of the switching converter, which switches at up to 30 MHz. This current sensing circuit is used to drive the slow converter in the dual converter. The dual converter consists of a fast converter and a slow converter: the fast converter provides only the high-frequency components of the output current, while the slow converter provides the majority of the output current at higher efficiency. Therefore, the dual converter can have a fast transient response without sacrificing efficiency. All chips are fabricated in a standard 0.18 µm CMOS process.
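The Python sketch below is only meant to illustrate the general idea behind pseudo-random switching-frequency modulation, namely that dithering the switching frequency spreads the switching energy over a band and lowers the spectral peak; the sample rate, nominal switching frequency, dither depth, and hold time are illustrative assumptions, not values from the thesis.

```python
import numpy as np

fs = 200e6                                # analysis sample rate (assumed)
t = np.arange(int(2e-3 * fs)) / fs        # 2 ms observation window
f_sw = 10e6                               # nominal switching frequency (illustrative)
rng = np.random.default_rng(0)

def square_wave(freqs_per_sample):
    """Square wave whose instantaneous frequency is given per sample."""
    phase = 2 * np.pi * np.cumsum(freqs_per_sample) / fs
    return np.sign(np.sin(phase))

fixed = square_wave(np.full_like(t, f_sw))

# Pseudo-random dither: +/-5% frequency steps, each held for ~100 switching cycles.
hold = int(fs / f_sw) * 100
steps = rng.uniform(-0.05, 0.05, size=len(t) // hold + 1)
dithered = square_wave(f_sw * (1 + np.repeat(steps, hold)[: len(t)]))

def peak_db(x):
    """Largest spectral bin relative to total signal power, in dB."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    return 10 * np.log10(spec.max() / spec.sum())

print("fixed-frequency spectral peak:    %.1f dB (rel. total power)" % peak_db(fixed))
print("dithered-frequency spectral peak: %.1f dB (rel. total power)" % peak_db(dithered))
```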
14

Détection et caractérisation d'exoplanètes : développement et exploitation du banc d'interférométrie annulante Nulltimate et conception d'un système automatisé de classement des transits détectés par CoRoT / Detection and characterization of exoplanets: development and operation of the Nulltimate nulling interferometry test bench, and design of an automated system for ranking the transits detected by CoRoT

Demangeon, Olivier 28 June 2013 (has links) (PDF)
Among the methods for detecting exoplanets, transit photometry is the one that has grown the most in recent years, thanks to the arrival of the CoRoT (2006) and then Kepler (2009) space telescopes. These two satellites have made it possible to detect thousands of potentially planetary transits. Given their number and the effort required to confirm their nature, it is essential to perform, from the photometric data alone, an efficient ranking that identifies the most promising transits and that can be carried out in a reasonable time. For my thesis, I developed a fast, automated software tool called BART (Bayesian Analysis for the Ranking of Transits) that performs such a ranking by estimating the probability that each transit is of planetary nature. To do this, the tool relies in particular on the Bayesian formalism of probabilities and on the exploration of the free-parameter space with Markov chain Monte Carlo (MCMC). Once exoplanets have been detected, the next step is to characterize them. The study of the solar system has shown, if a demonstration were needed, that spectral information is a key point for understanding the physics and history of a planet. Nulling interferometry is a very promising technological solution that could make this possible. For my thesis, I worked on the Nulltimate optical bench in order to study the feasibility of certain technological objectives related to this technique. Beyond achieving a nulling ratio of 3.7x10^-5 in monochromatic light and 6.3x10^-4 in polychromatic light in the near infrared, as well as a stability of σN30 ms = 3.7x10^-5 estimated over 1 hour, my work put the situation on a sounder footing by producing a detailed error budget, a Gaussian-optics simulation of the bench transmission, and a complete overhaul of the control software. All of this finally allowed me to identify the weaknesses of Nulltimate.
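As a schematic illustration of the MCMC ingredient mentioned above (this is not BART itself), the sketch below runs a one-parameter Metropolis-Hastings chain over the depth of a box-shaped transit in a simulated light curve; the light curve, noise level, prior range, and proposal width are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated light curve (illustrative): flux = 1 with a box-shaped transit of depth 0.002.
n = 500
time = np.linspace(0.0, 1.0, n)
in_transit = (time > 0.45) & (time < 0.55)
flux = 1.0 - 0.002 * in_transit + rng.normal(0.0, 0.001, n)

def log_posterior(depth):
    """Flat prior on 0 <= depth <= 0.05, Gaussian likelihood with known noise."""
    if not (0.0 <= depth <= 0.05):
        return -np.inf
    model = 1.0 - depth * in_transit
    return -0.5 * np.sum(((flux - model) / 0.001) ** 2)

# Metropolis-Hastings over the single free parameter (the transit depth).
samples, depth, lp = [], 0.01, log_posterior(0.01)
for _ in range(20000):
    prop = depth + rng.normal(0.0, 5e-4)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject step
        depth, lp = prop, lp_prop
    samples.append(depth)

samples = np.array(samples[5000:])                 # drop burn-in
print("posterior depth: %.4f +/- %.4f" % (samples.mean(), samples.std()))
```

A ranking tool would compare posteriors like this one against alternative (non-planetary) hypotheses; here the chain simply shows the parameter-space exploration step.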
15

On Process Variation Tolerant Low Cost Thermal Sensor Design

Remarsu, Spandana 01 January 2011 (has links) (PDF)
Thermal management has emerged as an important design issue in a range of systems, from portable devices to servers. Internal thermal sensors are an integral part of such a management system. Process variations in CMOS circuits cause accuracy problems for thermal sensors, which can be fixed with calibration tables; stand-alone thermal sensors are calibrated to correct such problems. However, calibration requires stepping through temperatures on a tester, increasing test application time and cost. Consequently, calibrating the thermal sensors in typical digital designs, including mainstream desktop and notebook processors, increases the cost of the processor. This creates a need for thermal sensor designs whose accuracy does not vary significantly with process variations. Other desired qualities include a low area requirement, so that many sensors may be integrated in a design, and low power dissipation, so that the sensor itself does not become a significant source of heat. In this work, we developed a process variation tolerant thermal sensor design with (i) active compensation circuitry and (ii) a signal-dithering-based self-calibration technique to meet the above requirements in a 32 nm technology. Results show that we achieve 3 °C temperature accuracy with a relatively small design that compares well with designs currently in use.
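The sketch below illustrates the general principle behind dithering-based measurement, not the calibration circuit proposed in the thesis: adding a known zero-mean dither of roughly one quantization step before a coarse quantizer and averaging many readings recovers sub-LSB information. The temperature, step size, and sample count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(x, lsb):
    """Ideal quantizer with step size lsb (models a coarse sensor readout)."""
    return np.round(x / lsb) * lsb

true_temp = 67.3          # degrees C (illustrative)
lsb = 1.0                 # coarse 1-degree quantization step (illustrative)

# Without dither: every quantized reading lands on the same code.
plain = quantize(np.full(1000, true_temp), lsb)

# With dither: add zero-mean noise spanning about one LSB, quantize, then average.
dithered = quantize(true_temp + rng.uniform(-0.5, 0.5, 1000), lsb)

print("plain reading:    %.2f C" % plain.mean())     # ~67.0, sub-LSB information lost
print("dithered average: %.2f C" % dithered.mean())  # ~67.3, recovered by averaging
```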
16

Halvtoning för realtidsrendering i dataspelsutveckling : En litteraturundersökning av forskning kring dithering-algoritmer / Halftoning for realtime rendering in computer game development : A literature review of research on dithering algorithms

Engström, Erik January 2024 (has links)
This thesis explores the use of different halftoning techniques, known as dithering, in real-time rendering for computer game development. Halftoning is a method for creating the illusion of more colors in images with limited color depth by using patterns of dots. What was once a solution to an optimization problem on older computers has, in modern use, become a stylistic choice. Using a literature review, the study fills a knowledge gap around halftoning as an aesthetic tool and discusses its applicability in real-time graphics for modern computer games. A chronological survey of well-known algorithms is presented, in which each algorithm is rated according to its properties. The analysis focuses on the algorithms' efficiency, image quality, coherence between frames, the ability to adjust parameters, and implementation on modern hardware, particularly GPUs. Ethical and societal aspects are discussed, along with the algorithms' potential for future research.
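As one concrete example of the kind of algorithm such a survey covers, the sketch below implements classic ordered dithering with a 4x4 Bayer threshold matrix in Python/NumPy (a GPU fragment-shader version would follow the same per-pixel logic); the image used is a synthetic gradient chosen only for demonstration.

```python
import numpy as np

# 4x4 Bayer threshold matrix, normalized to [0, 1)
BAYER_4 = np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) / 16.0

def ordered_dither(gray: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image in [0, 1] against a tiled Bayer threshold map.
    Each pixel is compared with a fixed, position-dependent threshold, so the
    operation is independent per pixel and frame-coherent (GPU friendly)."""
    h, w = gray.shape
    thresh = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > thresh).astype(np.float32)

# Example: a horizontal gradient dithered down to 1 bit per pixel
gradient = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
print(ordered_dither(gradient)[:4, :16])
```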
17

Robust Repetitive Control of DC/AC Converter

Wang, Sing-han 29 August 2012 (has links)
This thesis applies digital repetitive control to a single-phase DC-to-AC converter, with several proposed designs to improve stability and enhance performance of the converter under various load variations. A practical DC-to-AC converter is required to convert DC power to stable AC power with low harmonic distortion when attached to various linear or nonlinear loads. This thesis combines repetitive control with feedback dithering modulation and optimal state feedback to control the converter. The repetitive control regulates the output power and eliminates harmonics, the feedback dithering modulation switches the power transistors with reduced switching noise, and the state feedback stabilizes the converter under various load variations. The presented control and modulation schemes are implemented on an FPGA (Field Programmable Gate Array). The experiments confirm the excellent performance and robustness of the converter, indicating a total harmonic distortion of less than 0.5% when the converter is attached to various linear or nonlinear loads.
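To make the repetitive-control idea concrete, the sketch below simulates a plug-in repetitive controller on a deliberately simplified unity-gain plant with a periodic third-harmonic disturbance; the plant, gains, and disturbance are assumptions for illustration and are unrelated to the converter and controller designed in the thesis.

```python
import numpy as np

# Toy discrete-time loop: unity plant y[k] = u[k] + d[k], periodic disturbance d.
N = 100                            # samples per fundamental output period
k_r, q = 0.5, 0.98                 # repetitive gain and internal-model damping (assumed)
steps = 20 * N

ref = np.sin(2 * np.pi * np.arange(steps) / N)
dist = 0.3 * np.sin(2 * np.pi * 3 * np.arange(steps) / N)   # 3rd-harmonic disturbance

u_rc = np.zeros(steps)             # repetitive (plug-in) correction
err = np.zeros(steps)
for k in range(steps):
    # repetitive control law: reuse the correction and error from one period ago
    u_rc[k] = (q * u_rc[k - N] + k_r * err[k - N]) if k >= N else 0.0
    y = ref[k] + u_rc[k] + dist[k]     # command = reference + correction
    err[k] = ref[k] - y

print("error RMS, first period: %.4f" % np.sqrt(np.mean(err[:N] ** 2)))
print("error RMS, last period:  %.4f" % np.sqrt(np.mean(err[-N:] ** 2)))
```

Because the controller replays its own output and error one fundamental period later, it builds up a correction at the disturbance harmonics, which is why repetitive control is well suited to eliminating periodic distortion in inverter outputs.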
18

Σχεδίαση και ανάπτυξη ψηφιακά ελεγχόμενου ταλαντωτή (Digitally Controlled Oscillator) στις συχνότητες 1.6-2 GHz / Design and development of a digitally controlled oscillator (DCO) at frequencies of 1.6-2 GHz

Ζωγράφος, Βασίλης 17 July 2014 (has links)
In this work, a digitally controlled oscillator (DCO) was studied and designed for a GSM application. The operating frequencies span 1.6 GHz to 2 GHz with a 20 kHz step. The phase noise is -160 dBc/Hz at a 20 MHz offset. The DCO is controlled entirely digitally, enabling the implementation of an all-digital phase-locked loop (ADPLL) and a complete system-on-chip (SoC) design. The oscillator consumes 4.5 mW, drawing 3.76 mA from a 1.2 V supply. / A digitally controlled oscillator is studied and designed for a GSM application. The operating frequencies are 1.6-2 GHz, giving a tuning range of 400 MHz, with a finest step size of 20 kHz. Fully digital control is achieved, from which arises the opportunity to fabricate an All-Digital Phase-Locked Loop (ADPLL) and a whole system on chip (SoC). The proposed DCO core consumes 3.76 mA from a 1.2 V supply.
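As a quick back-of-the-envelope check of the numbers quoted above, the sketch below computes how many 20 kHz steps span the 1.6-2 GHz range and the minimum control-word width this implies, together with an idealized, perfectly linear word-to-frequency mapping (the real DCO's tuning characteristic is of course not this simple).

```python
import math

f_min, f_max, f_step = 1.6e9, 2.0e9, 20e3        # values from the abstract
n_steps = int(round((f_max - f_min) / f_step))   # 20,000 steps over the 400 MHz range
bits = math.ceil(math.log2(n_steps))             # minimum control-word width

def dco_frequency(word: int) -> float:
    """Idealized, perfectly linear mapping from digital tuning word to frequency."""
    return f_min + word * f_step

print("tuning steps:", n_steps, "-> at least", bits, "control bits")
print("word 0     -> %.6f GHz" % (dco_frequency(0) / 1e9))
print("word 10000 -> %.6f GHz" % (dco_frequency(10000) / 1e9))
```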
19

Metody ditheringu obrazu / Methods of image dithering

Pelc, Lukáš January 2014 (has links)
This master's thesis discusses methods of image dithering. It begins with an explanation of the theory of digital images, color models, color depth, and color gamut, followed by a breakdown of the basic dithering methods: thresholding, random dithering, and matrix (ordered) dithering. Advanced dithering methods based on error distribution are then discussed, the best known being the Floyd-Steinberg method. A comparison of the different methods is included, along with a subjective comparison using a questionnaire. The practical part is a Java applet that demonstrates generating images with the various dithering methods.
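As a reference point for the error-distribution family mentioned above, the sketch below is a straightforward Python implementation of Floyd-Steinberg dithering (not taken from the thesis's Java applet), quantizing each pixel and diffusing the quantization error to its neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights.

```python
import numpy as np

def floyd_steinberg(gray: np.ndarray, levels: int = 2) -> np.ndarray:
    """Floyd-Steinberg error diffusion: quantize each pixel to the nearest of
    `levels` gray values and push the quantization error onto the right and
    lower neighbours with the 7/16, 3/16, 5/16, 1/16 weights."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = np.round(old / step) * step
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return np.clip(img, 0.0, 1.0)

# Example: dither a smooth gradient down to pure black and white
gradient = np.tile(np.linspace(0.0, 1.0, 32), (8, 1))
print(floyd_steinberg(gradient)[:2, :16])
```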
20

Fast luminosity monitoring and feedback using monocrystalline CVD diamond detectors at the SuperKEKB electron-positron collider in Japan / Monitorage rapide et asservissement de la luminosité du collisionneur électron-positron japonais SuperKEKB avec des capteurs diamant CVD monocristallins

Pang, Chengguo 05 September 2019 (has links)
The SuperKEKB collider, dedicated to the Belle II experiment, is designed for a very high luminosity, unequalled to date. Its goal is to deliver an instantaneous luminosity of 8x10³⁵ cm⁻²s⁻¹ by colliding extremely small beams at the interaction point (IP), based on the "nano-beam" scheme. Consequently, excellent control of the beam orbit at the IP is necessary to ensure the optimum geometrical overlap between the two colliding beams and thereby maximize the luminosity. In this context, this thesis presents the development and implementation of a fast luminosity monitoring system for SuperKEKB based on sCVD diamond detectors. To achieve such a high relative precision and cover a large luminosity dynamic range, the radiative Bhabha scattering process at vanishing angle is used, whose interaction cross-section is very large and relatively well known. sCVD diamond detectors, which produce fast signals and have good radiation tolerance, are used to detect the charged particles in the electromagnetic showers induced by the interaction of the scattered, lost Bhabha particles with the beam pipe and other materials, in particular a radiator, at specially chosen locations downstream of the IP in both the LER and HER rings. A start-to-end simulation of the IP beam-orbit feedback system based on our fast, precise luminosity signal was carried out, including: an estimate of the sCVD diamond detector signal based on laboratory measurements with a radioactive source, the construction of signal sequences representative of SuperKEKB including single-beam backgrounds and Bhabha-scattered particles, processing of the luminosity signal, and simulation of the orbit feedback. This made it possible to verify the feasibility of this system for maintaining the very high luminosity of SuperKEKB in the presence of ground motion and to determine the relative precision of the fast luminosity signal obtainable every 1 ms. During the SuperKEKB commissioning periods, Phase 2 and the beginning of Phase 3, our fast luminosity monitor based on sCVD diamond detectors was installed and operated successfully. Beam-loss processes, mainly those originating from the Bremsstrahlung and Touschek processes, were studied in detail and compared with simulation, and good agreement was found. During collision commissioning, luminosity signals integrated every second were continuously provided for tuning the beam parameters at the IP. In addition, a luminosity signal integrated every 1 ms with the expected relative precision was also provided and used as input to the IP orbit feedback system, notably for first successful tests carried out with deliberately introduced horizontal beam offsets. More tests of this feedback system are expected to ensure its proper continuous operation in the future. This thesis presents the development and application of a fast luminosity monitoring system based on sCVD diamond detectors at SuperKEKB. / SuperKEKB is at the foremost frontier of high luminosity e⁺e⁻ colliders, dedicated to the Belle-II experiment.
It aims to provide an instantaneous luminosity of 8x10³⁵ cm⁻²s⁻¹ by colliding extremely tiny beams at the Interaction Point (IP), based on the "nano-beam scheme". Therefore, excellent control of its beam orbit at the IP is required to ensure the optimum geometrical overlap between the two colliding beams, and thereby maximize the luminosity. Besides, effective instrumentation to diagnose the behavior of the beam at the IP and possible beam interactions between bunches along the train is also quite essential during the long and rather difficult process of machine tuning towards the nominal beam parameters. This thesis presents the development and application of a fast luminosity monitoring system based on sCVD diamond detectors at SuperKEKB, including: (1) train-integrated luminosity signals every 1 ms, to be used as input to the dithering orbit feedback system, with a relative precision expected to be better than 1% once the luminosity reaches 10³⁴ cm⁻²s⁻¹; (2) sensitive train-integrated luminosity signals every 1 s over a large luminosity dynamic range, sent to the SuperKEKB control room as an immediate observable for machine collision tuning; and (3) bunch-integrated luminosity signals every 1 s with sufficient relative precision to monitor the collision performance of each individual bunch. To achieve such high relative precision and cover a large luminosity dynamic range, radiative Bhabha events at vanishing scattering angle are measured, whose interaction cross-section is quite large and reasonably well known. The sCVD diamond detectors, which have fast signal formation and good radiation tolerance, were used to detect the charged particles in the secondary showers induced by the interaction between the lost Bhabha-scattered particles and the beam pipe and specific radiator materials at carefully chosen locations downstream of the IP in both the LER and HER. A start-to-end simulation was performed on the dithering orbit feedback system using the fast, precise luminosity signal as input, including: sCVD diamond detector signal estimation based on laboratory measurements with a radioactive source, signal sequence construction at SuperKEKB including single-beam backgrounds and Bhabha-scattered particles, luminosity signal processing, and dithering orbit feedback simulation. It enabled verifying the feasibility of this system to maintain very high luminosity in the presence of ground motion; in particular it determined the relative precision of the fast luminosity signal every 1 ms. Besides, the radiation damage of the sCVD diamond detectors in the LER was also estimated based on a FLUKA simulation and the NIEL hypothesis. During the Phase-2 and early Phase-3 commissioning periods of SuperKEKB, our fast luminosity monitor based on sCVD diamond detectors was installed and operated successfully. Single-beam loss processes, mainly Bremsstrahlung and Touschek, were studied in detail and compared with the simulation, showing good agreement. During the collision commissioning, train- and bunch-integrated luminosity signals every 1 s were provided for machine tuning; for example, the vertical beam sizes were determined with the vertical offset scan technique based on our luminosity signals, both on average and for individual bunches, which is very important and useful for collision and IP local optics tuning during the long and rather difficult process of SuperKEKB machine tuning towards the nominal beam parameters.
Besides, a train integrated luminosity signal every 1 ms with the expected relative precision was also provided and used as input to the dithering orbit feedback system for its first successful tests with deliberately introduced horizontal beam-beam offsets. More tests on the dithering orbit feedback system are expected to ensure its future continuous operation.
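The sub-1% precision figure can be sanity-checked with a simple counting argument: the relative statistical precision of a luminosity signal integrated over a window is roughly one over the square root of the number of detected Bhabha events. The sketch below applies this estimate; the cross-section and effective acceptance used are assumed round numbers for illustration, not the thesis's values.

```python
import math

def relative_precision(lumi_cm2_s, cross_section_cm2, efficiency, window_s):
    """Pure counting estimate: relative precision = 1 / sqrt(expected events)."""
    n_events = lumi_cm2_s * cross_section_cm2 * efficiency * window_s
    return 1.0 / math.sqrt(n_events), n_events

# Illustrative inputs: radiative Bhabha scattering at vanishing angle has a very
# large cross-section, of order hundreds of millibarn; acceptance is assumed.
lumi = 1e34                 # cm^-2 s^-1, the luminosity quoted for the <1% goal
sigma = 150e-27             # cm^2 (150 mb, assumed)
eff = 0.01                  # effective acceptance x efficiency (assumed)

prec, n = relative_precision(lumi, sigma, eff, 1e-3)
print("expected events in 1 ms: %.0f -> relative precision ~ %.2f%%" % (n, prec * 100))
```

With these assumed inputs, the estimate lands below 1%, consistent with the target precision quoted for the 1 ms train-integrated signal.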
