  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Imaging and Object Detection under Extreme Lighting Conditions and Real World Adversarial Attacks

Xiangyu Qu (16385259) 22 June 2023 (has links)
Imaging and computer vision systems deployed in real-world environments face the challenge of accommodating a wide range of lighting conditions. However, the cost, the demand for high resolution, and the miniaturization of imaging devices impose physical constraints on sensor design, limiting both the dynamic range and effective aperture size of each pixel. Consequently, conventional CMOS sensors fail to deliver satisfactory capture in high dynamic range scenes or under photon-limited conditions, thereby impacting the performance of downstream vision tasks. In this thesis, we address two key problems: 1) exploring the utilization of spatial multiplexing, specifically spatially varying exposure tiling, to extend sensor dynamic range and optimize scene capture, and 2) developing techniques to enhance the robustness of object detection systems under photon-limited conditions.

In addition to challenges imposed by natural environments, real-world vision systems are susceptible to adversarial attacks in the form of artificially added digital content. Therefore, this thesis presents a comprehensive pipeline for constructing a robust and scalable system to counter such attacks.
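As a rough illustration of the spatially varying exposure (SVE) tiling mentioned in this abstract, the Python sketch below simulates a checkerboard of short and long exposures and a naive HDR reconstruction. The tiling pattern, exposure values and interpolation scheme are illustrative assumptions, not the sensor model or algorithms developed in the thesis.

```python
import numpy as np

def simulate_sve_capture(irradiance, t_short=1.0, t_long=8.0, full_well=1.0):
    """Simulate a spatially varying exposure (SVE) capture: pixels on a
    checkerboard alternate between a short and a long exposure time."""
    h, w = irradiance.shape
    yy, xx = np.mgrid[0:h, 0:w]
    long_mask = ((yy + xx) % 2 == 0)               # checkerboard tiling
    t = np.where(long_mask, t_long, t_short)       # per-pixel exposure time
    raw = np.clip(irradiance * t, 0.0, full_well)  # saturation clips bright pixels
    return raw, t

def reconstruct_hdr(raw, t, full_well=1.0):
    """Naive HDR reconstruction: divide by exposure, then replace saturated
    pixels with the average of their (differently exposed) 4-neighbours."""
    est = raw / t
    saturated = raw >= full_well
    padded = np.pad(est, 1, mode='edge')
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    est[saturated] = neigh[saturated]
    return est

scene = 0.05 * np.exp(np.random.randn(64, 64))     # synthetic HDR scene (arbitrary units)
raw, t = simulate_sve_capture(scene)
hdr = reconstruct_hdr(raw, t)
```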
132

Exploiter la coopérativité d'assemblages supramoléculaires d'ADN pour contrôler la plage dynamique d'interrupteurs moléculaires / Exploiting the cooperativity of supramolecular DNA assemblies to control the dynamic range of molecular switches

Lauzon, Dominic 04 1900 (has links)
The self-assembly of various biomolecules to form molecular complexes is at the basis of the cellular machinery and its related biological processes. It is typically thought that an assembly of several proteins provides regulatory advantages compared with a similar protein built from one or fewer molecular components. Such assemblies offer, for example, the possibility to control their activity through the direct dependence of assembly on the concentration of the components. Moreover, the cooperativity of interaction between the multiple components opens the door to novel regulation mechanisms. However, the advantages and disadvantages directly related to the number of components involved in an assembly are not fully understood, since proteins have evolved and diverged over millions of years of evolution. The main objective of this thesis is first to create a simplified molecular model that enables a better understanding of the cooperative advantages of biological self-assemblies, and then, inspired by these insights, to develop novel molecular mechanisms for optimizing the dynamic range of self-assembled molecular switches. In doing so, it also highlights some of the evolutionary advantages that have pushed proteins to acquire more molecular components.

The creation of molecular assemblies was first demonstrated by fragmenting a nanostructure into multiple fragments which, through their intermolecular interactions, reassemble into the original structure. Using a simple DNA-based nanostructure, i.e., a three-way junction, it was possible to directly study the impact of the number of components on the functionality and regulation of multimeric assemblies. It was found that, despite its slower assembly rate, a three-component assembly associates more cooperatively and enables new regulatory mechanisms (e.g., extended dynamic range, self-inhibition and molecular timers). This simplified DNA-based system therefore shows that fragmenting a nanostructure into multiple components is a simple method to optimize an artificial or a natural nanosystem. Next, another method of creating molecular assemblies was studied, which consists in fusing interacting domains through a linker. In this strategy, the linker plays an important role in dictating the properties of the assembly. Using the same three-component DNA-based model, it was observed that the properties of the linker (e.g., its length, its composition, or its chemical nature) considerably affect the assembly properties of a three-component system (e.g., its stability, its level of cooperativity, or its dynamic range). Through an exhaustive thermodynamic study of various trimeric DNA-based assemblies, it was determined that an optimal linker stabilizes the association of all components by creating a more compact assembly in which the linkers are buried within the core of the junction. It was also demonstrated that optimizing the linkers allows the dynamic range of the assembly to be programmed precisely.

Finally, these findings on the advantages of a three-component assembly enabled a new design strategy to optimize the dynamic range of molecular switches. In contrast to a classic allosteric activator, which alters the affinity of a ligand (i.e., the KD) by changing the conformation of the switch, a multivalent activator precisely programs the dynamic range of a switch by exploiting a new interaction interface through the formation of a three-component assembly. This strategy was validated using a DNA-based molecular beacon. The proof of concept demonstrates the viability of molecular assemblies for designing novel nanotechnologies with an optimized dynamic range, with potential impact on multiple fields of nanotechnology, including medical diagnostics, controlled drug delivery and molecular imaging.
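As background to the "dynamic range" discussed in this abstract, the standard Hill-equation relation (a general textbook result, not a finding of this thesis) links the steepness of a switch's response to the concentration window over which it turns on:

```latex
% Textbook Hill-equation picture of a molecular switch's dynamic range
% (general background, not a result of the thesis):
\theta([L]) \;=\; \frac{[L]^{n_H}}{K_{1/2}^{\,n_H} + [L]^{n_H}},
\qquad
\frac{[L]_{90\%}}{[L]_{10\%}} \;=\; 81^{1/n_H}
```

A non-cooperative switch (n_H = 1) thus requires an 81-fold change in activator concentration to go from 10% to 90% activation; the cooperative, multi-component assemblies described above are one way to reprogram this window.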
133

Diode laser 1.5 micron de puissance et faible bruit pour l’optique hyperfréquence. / High power, low noise 1.5 micron diode lasers for microwave photonics.

Faugeron, Mickael 22 October 2012 (has links)
This work focuses on the design, realization and characterization of high power, low noise 1.5 µm diode lasers for microwave photonics, in particular for high dynamic range analog optical links in radar systems. The first part of the study deals with the modeling and design of low internal loss DFB laser structures. These structures, called slab-coupled optical waveguide lasers, incorporate a thick layer between the active layer and the substrate; the aim of this waveguide is to enlarge the optical eigenmode and move it away from the p-doped layers. The main difficulty was to find a good trade-off between static performance (optical power, efficiency) and dynamic performance (RIN and modulation bandwidth). We developed high efficiency (0.4 W/A), low noise (RIN ≈ -160 dB/Hz) DFB lasers delivering more than 150 mW with a 3 dB modulation bandwidth up to 7.5 GHz. We then characterized these components in wideband and narrowband analog links and demonstrated state-of-the-art link gain, dynamic range and 1 dB compression point. In the L band (1-2 GHz), for example, we obtained a link with a gain of 0.5 dB, a compression point of 21 dBm and a spurious-free dynamic range (SFDR) of 122 dB·Hz^(2/3). Finally, we applied the same design methodology to slab-coupled optical waveguide structures to develop high power mode-locked lasers for ultra-short pulse generation and for optical and electrical comb generation, demonstrating narrow RF linewidth (550 Hz) lasers with very high power (continuous power > 400 mW and peak power > 18 W).
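The SFDR figure quoted above follows the usual definition for analog links limited by third-order intermodulation. A minimal sketch, with illustrative numbers chosen only to reproduce the same order of magnitude as the abstract (not the measured values of the thesis):

```python
def sfdr_db_hz23(oip3_dbm, noise_floor_dbm_hz):
    """Spurious-free dynamic range of a link limited by third-order
    intermodulation, in dB.Hz^(2/3):  SFDR = (2/3) * (OIP3 - N_out)."""
    return (2.0 / 3.0) * (oip3_dbm - noise_floor_dbm_hz)

# Illustrative numbers only: an output IP3 of ~30 dBm with a -153 dBm/Hz
# output noise floor gives the ~122 dB.Hz^(2/3) scale reported above.
print(sfdr_db_hz23(30.0, -153.0))   # -> 122.0
```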
134

Reverse audio engineering for active listening and other applications / Rétroingénierie du son pour l’écoute active et autres applications

Gorlow, Stanislaw 16 December 2013 (has links)
This work deals with the problem of reverse audio engineering for active listening. The format under consideration corresponds to the audio CD. The musical content is viewed as the result of a chain of composition, recording, mixing and mastering; inverting the two latter stages constitutes the core of the problem at hand. The audio signal is treated as a post-nonlinear mixture: the mixture is "decompressed" before being "decomposed" into audio tracks. The problem is tackled in an informed context, in which the inversion is accompanied by information specific to the content production, significantly improving the quality of the inversion. This information is reduced in size through quantization and coding methods and by exploiting facts about psychoacoustics. The proposed methods are applicable in real time and have low complexity. The obtained results advance the state of the art and contribute new insights.
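A toy illustration of the "decompress, then decompose" view of a post-nonlinear mixture described above. The tanh compressor, the 2×2 mixing matrix and the assumption that both are known exactly are illustrative simplifications, not the informed side-information scheme of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
sources = rng.standard_normal((2, 1000)) * 0.3       # two "tracks"
A = np.array([[0.8, 0.5],
              [0.4, 0.9]])                            # mixing (console) matrix

compress   = np.tanh                                  # stand-in for mastering compression
decompress = np.arctanh                               # its exact inverse

mixture = compress(A @ sources)                       # post-nonlinear mixture

# Informed inversion: the nonlinearity and the mixing matrix are assumed known
# (e.g. transmitted as side information), so "decompress", then "decompose".
linear_mix = decompress(np.clip(mixture, -0.999, 0.999))
recovered  = np.linalg.solve(A, linear_mix)

print(np.max(np.abs(recovered - sources)))            # small reconstruction error
```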
135

Molecularly imprinted polymers for detection of volatile organics associated with fuel combustion

Ngwanya, Olwethu January 2018 (has links)
Magister Scientiae - MSc (Chemistry) / Pollutants such as polycyclic aromatic hydrocarbons (PAHs) are known for their toxic effects, which may lead to degenerative diseases in both humans and animals. PAHs are widespread in the environment and may be found in water, food, and the automotive and petrochemical industries, to name but a few sources. Literature reports have highlighted industrial workplace exposure to PAHs as a leading cause of cancer development in workers; workers in the petrochemical industry are particularly affected, and the incidence of skin and lung cancer in this population group is high. The United States Environmental Protection Agency (EPA) has identified 18 PAHs as priority pollutants in its guidelines. Among these are anthracene, benzo[a]pyrene and pyrene, which were selected as the focal point of this study because of their significance in the petrochemical industry. Due to the carcinogenic and mutagenic properties reported in the literature for certain PAHs, monitoring procedures have been adopted in most countries around the world. The commonly used analytical methods for the detection of PAHs in industrial samples are high performance liquid chromatography (HPLC) coupled to fluorescence detection, membrane filtration, ozonation and reverse osmosis. Analysis of PAHs from the petrochemical industry is typically performed by HPLC, as well as by sono-degradation in the presence of oxygen and hydrogen peroxide.
136

An empirically derived system for high-speed rendering

Rautenbach, Helperus Ritzema 25 September 2012 (has links)
This thesis focuses on 3D computer graphics and the continuous maximisation of rendering quality and performance. Its main focus is the critical analysis of numerous real-time rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shader-based special effects, lighting effects, shadows, reflection and refraction, post-processing effects and the processing of physics. This critical analysis allows us to assess the relationship between rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Using this performance data, gathered during the analysis of various rendering algorithms, we are able to define a selection engine to control the real-time cycling of rendering algorithms and special effects groupings based on environmental conditions. Furthermore, as a proof of concept, to balance Central Processing Unit (CPU) and Graphics Processing Unit (GPU) load for an increased speed of execution, our selection system unifies the GPU and CPU as a single computational unit for physics processing and environmental mapping. This parallel computing system enables the CPU to process cube mapping computations while the GPU can be tasked with calculations traditionally handled solely by the CPU. All analysed and benchmarked algorithms were implemented as part of a modular rendering engine. This engine offers conventional first-person perspective input control, mesh loading and support for shader model 4.0 shaders (via Microsoft’s High Level Shader Language) for effects such as high dynamic range rendering (HDR), dynamic ambient lighting, volumetric fog, specular reflections, reflective and refractive water, realistic physics, particle effects, etc. The test engine also supports the dynamic placement, movement and elimination of light sources, meshes and spatial geometry. Critical analysis was performed via scripted camera movement and object and light source additions – done not only to ensure consistent testing, but also to ease future validation and replication of results. This provided us with a scalable interactive testing environment as well as a complete solution for the rendering of computationally intensive 3D environments. As a full-fledged game engine, our rendering engine is amenable to first- and third-person shooter games, role playing games and 3D immersive environments. Evaluation criteria (identified to assess the relationship between rendering quality and performance), as mentioned, allow us to effectively cycle algorithms based on empirical results and to distribute specific processing (cube mapping and physics processing) between the CPU and GPU, a unification that ensures the following: nearby effects are always of high quality (where computational resources are available), distant effects are, under certain conditions, rendered at a lower quality, and the frames-per-second rendering performance is always maximised. The implication of our work is clear: unifying the CPU and GPU and dynamically cycling through the most appropriate algorithms based on ever-changing environmental conditions allow for maximised rendering quality and performance and show that it is possible to render high-quality visual effects with realism, without overburdening scarce computational resources.
Immersive rendering approaches used in conjunction with AI subsystems, game networking and logic, physics processing and other special effects (such as post-processing shader effects) are immensely processor intensive and can only be successfully implemented on high-end hardware. Only by cycling and distributing algorithms based on environmental conditions and through the exploitation of algorithmic strengths can high-quality real-time special effects and highly accurate calculations become as common as texture mapping. Furthermore, in a gaming context, players often spend an inordinate amount of time fine-tuning their graphics settings to achieve the perfect balance between rendering quality and frames-per-second performance. Using this system, however, ensures that performance vs. quality is always optimised, not only for the game as a whole but also for the current scene being rendered – some scenes might, for example, require more computational power than others, resulting in noticeable slowdowns, slowdowns not experienced thanks to our system’s dynamic cycling of rendering algorithms and its proof of concept unification of the CPU and GPU. / Thesis (PhD)--University of Pretoria, 2012. / Computer Science / unrestricted
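A minimal sketch of the kind of quality-cycling controller the abstract describes, driven here only by measured frame time; the quality tiers, thresholds and control rule are hypothetical and far simpler than the empirically derived selection engine of the thesis:

```python
# Hypothetical quality tiers; names and thresholds are illustrative only.
QUALITY_LEVELS = ["low", "medium", "high", "ultra"]

class AlgorithmSelector:
    """Cycle rendering quality based on the last frame time (a sketch of the
    selection-engine idea, not the thesis implementation)."""
    def __init__(self, target_ms=16.7, margin_ms=2.0):
        self.target_ms = target_ms
        self.margin_ms = margin_ms
        self.level = 1    # start at "medium"

    def update(self, last_frame_ms):
        if last_frame_ms > self.target_ms + self.margin_ms and self.level > 0:
            self.level -= 1          # too slow: drop to a cheaper algorithm set
        elif (last_frame_ms < self.target_ms - self.margin_ms
              and self.level < len(QUALITY_LEVELS) - 1):
            self.level += 1          # headroom available: raise quality
        return QUALITY_LEVELS[self.level]

selector = AlgorithmSelector()
for frame_ms in [15.0, 14.0, 22.0, 16.5]:
    print(selector.update(frame_ms))
```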
137

Dvojitě vyvážený směšovač – laboratorní přípravek / Double-balanced mixer - laboratory equipment

Dušek, Libor January 2008 (has links)
The aim of this work was the implementation of a double-balanced mixer to be used as laboratory equipment. The thesis covers the design of the double-balanced mixer from first theoretical principles to the practical design of the laboratory equipment. For the practical design, the integrated mixer SA612 was used; input signals up to 500 MHz can be applied to the mixer. An external oscillator and a fifth-order low-pass filter were constructed for the required operation. The oscillator was designed for a fixed frequency of 32 MHz, and the fifth-order low-pass filter was inserted between the oscillator and the mixer to filter out higher harmonics. The second aim of the work was to measure the basic parameters of the double-balanced mixer, such as the 1 dB compression point (P-1dB) and the third-order intercept point (IP3). The IP3 measurement required an additional device, consisting of a power combiner for combining two closely spaced signals and a third-order bandpass filter that selects the required frequency band. Finally, the laboratory equipment was fabricated and its real parameters were measured.
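The two-tone IP3 measurement mentioned above follows the standard relations; a small sketch with hypothetical measurement values (not figures from the thesis):

```python
def output_ip3(p_fund_dbm, p_im3_dbm):
    """Third-order intercept point from a two-tone test:
    OIP3 = P_fund + (P_fund - P_IM3) / 2   (power levels in dBm)."""
    return p_fund_dbm + (p_fund_dbm - p_im3_dbm) / 2.0

def input_ip3(oip3_dbm, gain_db):
    """Refer the intercept point to the input: IIP3 = OIP3 - gain."""
    return oip3_dbm - gain_db

# Illustrative two-tone measurement: fundamentals at -10 dBm,
# third-order products at -60 dBm, 17 dB conversion gain.
oip3 = output_ip3(-10.0, -60.0)     # -> 15 dBm
print(oip3, input_ip3(oip3, 17.0))
```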
138

Generátor přesného kmitočtu - DDS / Precise Frequency Generator - DDS

Kratochvíl, Petr January 2009 (has links)
This work deals with frequency generators based on the direct digital synthesis (DDS) method. The basic principles and attributes of a DDS frequency generator are explained, and the text describes the parameters that influence and define the quality of the generated signal. A list of available integrated circuits implementing direct digital synthesis is given. The construction of a DDS generator based on the AD9954 device and its control are described. At the end of the work, the function and parameters of the designed generator are verified.
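For context, the core of any DDS generator such as the AD9954 is a phase accumulator addressing a sine lookup table. The following sketch illustrates the principle with illustrative word lengths and clock rate, not the actual device configuration used in the work:

```python
import numpy as np

def dds_wave(tuning_word, accumulator_bits=32, lut_bits=10, f_clk=100e6, n_samples=1024):
    """Direct digital synthesis sketch: a phase accumulator advances by the
    tuning word each clock and its top bits address a sine lookup table.
    Output frequency: f_out = tuning_word * f_clk / 2**accumulator_bits."""
    lut = np.sin(2 * np.pi * np.arange(2**lut_bits) / 2**lut_bits)
    acc = 0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        acc = (acc + tuning_word) % (1 << accumulator_bits)
        samples[i] = lut[acc >> (accumulator_bits - lut_bits)]   # phase truncated to LUT address
    f_out = tuning_word * f_clk / 2**accumulator_bits
    return samples, f_out

# e.g. a tuning word of 429_496_730 on a 100 MHz clock gives ~10.0 MHz out
_, f = dds_wave(429_496_730)
print(f)
```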
139

Návrh a realizace filtru ADSR / Design and realization of ADSR filter

Pokorný, Martin January 2009 (has links)
The master's thesis is focused on the design of an ADSR filter and a voltage controlled amplifier (VCA). Three additional circuits performing analog signal processing are added. The functionality of the designed circuits is verified in a simulation program, and all designed circuits are practically realized. The thesis includes the complete design of the mentioned circuits and all information necessary for their practical realization. All designed circuits are measured and the results are presented.
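A short digital sketch of the ADSR envelope principle, which the thesis realizes as an analog circuit driving the VCA; segment shapes and parameter values here are illustrative only:

```python
import numpy as np

def adsr_envelope(attack, decay, sustain, release, gate_time, fs=48000):
    """Linear ADSR envelope generator (times in seconds, sustain as a 0-1 level)."""
    a = np.linspace(0.0, 1.0, int(attack * fs), endpoint=False)
    d = np.linspace(1.0, sustain, int(decay * fs), endpoint=False)
    s_len = max(int(gate_time * fs) - len(a) - len(d), 0)
    s = np.full(s_len, sustain)
    r = np.linspace(sustain, 0.0, int(release * fs))
    return np.concatenate([a, d, s, r])

env = adsr_envelope(attack=0.01, decay=0.1, sustain=0.7, release=0.3, gate_time=0.5)
# Multiplying an audio signal by this envelope emulates the VCA behaviour.
```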
140

Analogový vstupní díl pro softwarový přijímač / Front end for software receiver

Slezák, Jakub January 2012 (has links)
This thesis deals with a theoretical analysis of the basic parameters of receivers, input circuit architectures and signal digitization. According to the assignment, a block scheme of the front end for a software receiver is outlined with specified components, and the total budget is calculated. The individual parts of the system are then designed and realized: a set of four input filters for the bands short wave up to 30 MHz, 87.5-108 MHz, 144-148 MHz and 174-230 MHz. The main part of the design is a circuit containing low-noise amplifiers, switches, and two amplifiers with adjustable gain, built mainly around integrated circuits from Analog Devices. To control the various switches and adjustable amplifiers, a separate panel was designed, connected to the main circuit via a cable. In the last phase, the whole system and its components were subjected to measurements; thanks to a number of mounted SMA connectors, it is possible to measure different parts of the system and to modify it partially.
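The "total budget" of such a receiver front end is commonly computed with Friis' cascade formula; a small sketch with hypothetical stage values, not the actual components or figures from the thesis:

```python
import numpy as np

def cascade_noise_figure(gains_db, nfs_db):
    """Friis formula for the noise figure and gain of a receiver chain.
    gains_db / nfs_db: per-stage gain and noise figure in dB, input to output."""
    g = 10 ** (np.asarray(gains_db) / 10.0)   # linear gains
    f = 10 ** (np.asarray(nfs_db) / 10.0)     # linear noise factors
    total = f[0]
    cum_gain = g[0]
    for fi, gi in zip(f[1:], g[1:]):
        total += (fi - 1.0) / cum_gain
        cum_gain *= gi
    return 10 * np.log10(total), 10 * np.log10(cum_gain)

# Hypothetical chain: input filter, LNA, switch, variable-gain amplifier.
nf_db, gain_db = cascade_noise_figure(gains_db=[-1.0, 20.0, -1.5, 15.0],
                                      nfs_db=[1.0, 1.0, 1.5, 6.0])
print(nf_db, gain_db)
```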
