About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Optical Flow for Event Detection Camera

Almatrafi, Mohammed Mutlaq January 2019 (has links)
No description available.
12

Nové organické polovodiče pro bioelektroniku / New organic semiconductors for bioelectronics

Malečková, Romana January 2020 (has links)
This thesis focuses on the characterization of PEDOT:DBSA, a new semiconducting polymer for use in bioelectronic devices, and on surface treatments that enhance its biocompatibility and stability in aqueous environments. For this purpose, the polymer films were crosslinked with two agents, GOPS and DVS. Delamination tests were used to study the ability of these agents to prevent leaching of polymer fractions in an aqueous environment and to bind the polymer molecules to each other and to the glass substrate. The effects of the crosslinkers on film properties essential for the proper function of bioelectronic devices were then studied by contact angle measurements and four-point probe conductivity measurements. In addition, several OECTs were prepared with the original and crosslinked materials as the active layer and were characterized by their transconductance and volumetric capacitance. PEDOT:DBSA is shown to be a suitable material for bioelectronics, but its thin layers need to be stabilized against aqueous environments. DVS appears unsuitable for this purpose: it stabilizes the film insufficiently and increases the hydrophilicity of the film surface, and thus its tendency to interact with water, leading to degradation of the thin layers. In contrast, GOPS, despite some reduction in film conductivity, stabilizes the polymer layer over the long term and thus appears to be a suitable way to stabilize PEDOT:DBSA.
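For reference, the standard relations behind the four-point-probe conductivity measurement mentioned above (textbook formulas for a thin film, not taken from the thesis): with a collinear probe whose spacing is much larger than the film thickness t, the sheet resistance R_s and conductivity sigma follow from the measured voltage V at source current I as

    R_s = \frac{\pi}{\ln 2}\,\frac{V}{I} \approx 4.53\,\frac{V}{I}, \qquad \sigma = \frac{1}{R_s\, t}.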
13

Heated Tool Welding of Thick-Walled Components

Friedrich, Fabian 09 December 2019 (has links)
In apparatus engineering and pipeline construction, wall thicknesses of up to 100 mm are processed and joined by heated tool butt welding. The welding procedure is regarded as well understood. However, no systematic experimental investigations of wall thicknesses above 30 mm have been published. The welding parameters for large wall thicknesses of PE (polyethylene) and PP (polypropylene) are extrapolated, as specified in the guidelines of the DVS (Deutscher Verband für Schweißen und verwandte Verfahren e.V.). However, cases of damage to welded pipelines show that the process management and the weld properties are not yet sufficiently understood for the welding of large pipe dimensions. The present study investigates the welding behaviour of semi-finished products (plates and pipes) with wall thicknesses up to 100 mm. The mechanical properties are tested in short-term tests and in long-term tensile creep tests. The results on the fusion behaviour show a curved melt profile which influences the mechanical behaviour of the welded components. The tensile creep tests establish a tendency towards premature failure of the peripheral regions.
14

Heizelementschweißen von Bauteilen großer Wanddicke / Heated tool welding of thick-walled components

Friedrich, Fabian, Gehde, Michael, Seefried, Andreas 06 December 2023 (has links)
Heated tool welding of thick-walled components has so far received little scientific attention. In the DVS guideline, the parameters for large wall thicknesses were extrapolated linearly. This publication deals with the systematic analysis of heated tool welding for wall thicknesses up to 100 mm. In addition, process strategies are discussed that are intended to improve the long-term strength of heated-tool-welded components with large wall thicknesses.
15

Design and Characterization of SRAMs for Ultra Dynamic Voltage Scalable (U-DVS) Systems

Viveka, K R January 2016 (has links) (PDF)
The ever-expanding range of applications for embedded systems continues to offer new challenges (and opportunities) to chip manufacturers. Applications ranging from high-resolution gaming to routine tasks like temperature control need to be supported on increasingly small devices with shrinking dimensions and tighter energy budgets. These systems benefit greatly from the capability to operate over a wide range of supply voltages, known as ultra dynamic voltage scaling (U-DVS), which refers to systems capable of operating from nominal voltages down to sub-threshold voltages. Memories play an important role in these systems, with future chips estimated to have over 80% of their area occupied by memories. This thesis presents the design and characterization of an ultra dynamic voltage scalable memory (SRAM) that functions from nominal voltages down to sub-threshold voltages without the need for external support. The key contributions of the thesis are as follows: 1) Variation-tolerant reference generation for single-ended sensing: We present a reference generator for U-DVS memories that tracks the memory over a wide range of voltages and is tunable to allow operation down to sub-threshold voltages. Replica columns are used to generate the reference voltage, which allows the technique to track slow changes such as temperature and aging. A few configurable cells in the replica column are found to be sufficient to cover the whole voltage range of interest. The use of a tunable delay line to generate timing is shown to help overcome the effects of process variations. 2) Random-sampling-based tuning algorithm: Tuning is necessary to overcome the increased effects of variation at lower voltages. We present a random-sampling-based BIST tuning algorithm that significantly speeds up tuning, ensuring that the time required to tune is comparable to a single MBIST run. Further, the use of redundancy after delay tuning enables maximum utilization of the redundancy infrastructure to reduce power consumption and enhance performance. 3) Testing and characterization for U-DVS systems: Testing and characterization are important challenges in U-DVS systems that have remained largely unexplored. We propose an iterative technique that allows the realization of an on-chip oscilloscope with minimal area overhead. The all-digital nature of the technique makes it simple to design and implement across technology nodes. Combining the proposed techniques allows the designed 4 Kb SRAM array to function from 1.2 V down to 310 mV, with reads functioning down to 190 mV. This contributes towards moving ultra-wide voltage operation a step closer to implementation in commercial designs.
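The abstract does not spell out the tuning algorithm; purely as an illustration of the general idea of random-sampling-based tuning (all names, the delay codes, and the pass/fail model below are hypothetical, not the thesis's method), a toy sketch:

    import random

    def bist_passes(delay_code, vdd, k=0.9):
        """Toy stand-in for an on-chip MBIST run: assume reads pass once the
        sense-timing delay code exceeds a voltage-dependent threshold.
        (Illustrative model only; real pass/fail comes from the BIST engine.)"""
        return delay_code >= k / vdd

    def tune_delay(codes, vdd, samples=8, seed=0):
        """Randomly sample a few delay-line codes instead of exhaustively
        sweeping them, then keep the fastest (smallest) code that passes."""
        rng = random.Random(seed)
        sampled = rng.sample(codes, min(samples, len(codes)))
        passing = [c for c in sorted(sampled) if bist_passes(c, vdd)]
        return passing[0] if passing else None   # None -> fall back to redundancy

    # Example: 32 possible delay codes, tuned for a 0.5 V supply.
    codes = [0.1 * k for k in range(1, 33)]
    print(tune_delay(codes, vdd=0.5))

The point of sampling only a handful of codes is that the tuning time stays comparable to a single MBIST pass rather than growing with the number of delay settings.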
16

Analysing the Energy Efficiency of Training Spiking Neural Networks / Analysering av Energieffektiviteten för Träning av Spikande Neuronnät

Liu, Richard, Bixo, Fredrik January 2022 (has links)
Neural networks have become increasingly widely adopted over the last few years. Because they consume a lot of energy to train, reducing their energy consumption is desirable from an environmental perspective. Spiking neural networks are a type of neural network, inspired by the human brain, that is significantly more energy efficient than traditional neural networks. However, there is little research on how the hyper-parameters of these networks affect the relationship between accuracy and energy. The aim of this report is therefore to analyse this relationship. To do this, we measure the energy used to train several different spiking network models. The results show that the choice of hyper-parameters does affect the efficiency of the network. While the correlation between any individual factor and energy consumption is inconclusive, this work can serve as a springboard for further research in this area.
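One common way to measure training energy on NVIDIA GPUs (shown only as an illustration; this is not necessarily the authors' measurement setup, and train_snn is a hypothetical placeholder for the SNN training loop) is to read the driver's cumulative energy counter before and after the run:

    import time
    import pynvml  # NVIDIA Management Library bindings (nvidia-ml-py)

    def measure_training_energy(train_fn, device_index=0):
        """Run a training callable and return (result, joules, seconds), using
        the GPU's cumulative energy counter (Volta-class GPUs and newer)."""
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        e0 = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)   # millijoules
        t0 = time.time()
        result = train_fn()            # placeholder for one SNN training run
        elapsed = time.time() - t0
        e1 = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
        pynvml.nvmlShutdown()
        return result, (e1 - e0) / 1000.0, elapsed

    # Hypothetical usage: energy (J) of training one SNN configuration.
    # _, joules, secs = measure_training_energy(lambda: train_snn(hidden=256, beta=0.9))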
17

Analysis and Design of Resilient VLSI Circuits

Garg, Rajesh May 2009 (has links)
The reliable operation of Integrated Circuits (ICs) has become increasingly difficult to achieve in the deep sub-micron (DSM) era. With continuously decreasing device feature sizes, combined with lower supply voltages and higher operating frequencies, the noise immunity of VLSI circuits is decreasing alarmingly. Thus, VLSI circuits are becoming more vulnerable to noise effects such as crosstalk, power supply variations and radiation-induced soft errors. Among these noise sources, soft errors (or errors caused by radiation particle strikes) have become an increasingly troublesome issue for memory arrays as well as combinational logic circuits. Also, in the DSM era, process variations are increasing at an alarming rate, making it more difficult to design reliable VLSI circuits. Hence, it is important to efficiently design robust VLSI circuits that are resilient to radiation particle strikes and process variations. The work presented in this dissertation provides several analysis and design techniques with the goal of realizing VLSI circuits which are tolerant to radiation particle strikes and process variations. This dissertation consists of two parts. The first part proposes four analysis and two design approaches to address radiation particle strikes. The analysis techniques for radiation particle strikes include: an approach to analytically determine the pulse width and the pulse shape of a radiation-induced voltage glitch in combinational circuits, a technique to model the dynamic stability of SRAMs, and a 3D device-level analysis of the radiation tolerance of voltage-scaled circuits. Experimental results demonstrate that the proposed techniques for analyzing radiation particle strikes in combinational circuits and SRAMs are fast and accurate compared to SPICE. Therefore, these analysis approaches can be easily integrated into a VLSI design flow to analyze the radiation tolerance of such circuits and to harden them early in the design flow. From the 3D device-level analysis of the radiation tolerance of voltage-scaled circuits, several non-intuitive observations are made and, correspondingly, a set of guidelines is proposed which are important to consider in realizing radiation-hardened circuits. Two circuit-level hardening approaches are also presented to harden combinational circuits against a radiation particle strike. These hardening approaches significantly improve the tolerance of combinational circuits against low- and very high-energy radiation particle strikes, respectively, with modest area and delay overheads. The second part of this dissertation addresses process variations. A technique is developed to perform sensitizable statistical timing analysis of a circuit, and thereby improve the accuracy of timing analysis under process variations. Experimental results demonstrate that this technique is able to significantly reduce the pessimism due to two sources of inaccuracy which plague current statistical static timing analysis (SSTA) tools. Two design approaches are also proposed to improve the process variation tolerance of combinational circuits and voltage level shifters (which are used in circuits with multiple interacting power supply domains), respectively. The variation-tolerant design approach for combinational circuits significantly improves the resilience of these circuits to random process variations, with a reduction in the worst-case delay and low area penalty.
The proposed voltage level shifter is faster, requires lower dynamic power and area, has lower leakage currents, and is more tolerant to process variations, compared to the best known previous approach. In summary, this dissertation presents several analysis and design techniques which significantly augment the existing work in the area of resilient VLSI circuit design.
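As background for the statistical timing part (this is the standard argument for why statistical analysis is less pessimistic than corner-based analysis, not a result from this dissertation): if the n stage delays of a path are modelled as independent Gaussians d_i ~ N(mu_i, sigma_i^2), the path delay D = sum_i d_i satisfies

    \mu_D = \sum_{i=1}^{n} \mu_i, \qquad
    \sigma_D = \sqrt{\sum_{i=1}^{n} \sigma_i^2} \;\le\; \sum_{i=1}^{n} \sigma_i,

so the statistical bound \mu_D + 3\sigma_D is tighter than the corner-based estimate \sum_i (\mu_i + 3\sigma_i); sensitizable SSTA then tightens the result further by discarding paths that cannot actually be sensitized.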
18

Das "Auto der Zukunft" in der Bibliothek

Goller, Niels 17 January 2007 (has links) (PDF)
On Thursday, 26 October 2006, this year's action day "Innovationen im Automobilbau" (innovations in automotive engineering) was held, organized by the Arbeitskreis DVS-Studenten of the district association for welding and allied processes (DVS)...
19

Computational Delay in Vehicles and Its Effect on Real Time Scheduling

Jain, Abhinna 01 January 2012 (has links) (PDF)
Present research into critical embedded control systems tends to focus on the computational elements and to largely ignore the link between the computational and physical elements. This link is important because the computational capability of the computer can greatly affect the performance and dynamics of the system it controls. The control computer is in the feedback loop of the control system and contributes feedback delay in addition to the already existing mechanical delays. While mechanical delays are compensated for in control design, variable computational delays cause the system to underperform in its intended physical behavior and impose a cost in terms of fuel or time. For this reason, the scheduler in a real-time operating system should not focus only on task deadlines, but also on efficient scheduling that minimizes the effect of computational delay on the controlled plant. The proposed work provides a systematic framework to manage and evaluate the implications of computational delay in vehicles. The work also includes cost-sensitive real-time control task scheduling heuristics and Dynamic Voltage Scaling (DVS) for better energy/thermal control. We show through simulations that our heuristic achieves a significant improvement in cost over the traditional real-time scheduling algorithm Earliest Deadline First (EDF), and that it can adjust to energy constraints imposed on the system.
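For context, the EDF baseline mentioned above simply dispatches the ready job with the nearest deadline. A minimal sketch of that baseline (non-preemptive, with an illustrative task model; this is the comparison point, not the thesis's cost-sensitive heuristic):

    import heapq

    def edf_schedule(jobs):
        """jobs: list of (release, deadline, exec_time) tuples.
        Returns (schedule, missed): schedule is a list of (start, finish, job),
        missed lists the jobs that finish after their deadline."""
        jobs = sorted(jobs)                      # order by release time
        ready, schedule, missed = [], [], []
        t, i = 0, 0
        while i < len(jobs) or ready:
            while i < len(jobs) and jobs[i][0] <= t:
                release, deadline, wcet = jobs[i]
                heapq.heappush(ready, (deadline, release, wcet))
                i += 1
            if not ready:
                t = jobs[i][0]                   # idle until the next release
                continue
            deadline, release, wcet = heapq.heappop(ready)
            start, finish = t, t + wcet          # non-preemptive for simplicity
            schedule.append((start, finish, (release, deadline, wcet)))
            if finish > deadline:
                missed.append((release, deadline, wcet))
            t = finish
        return schedule, missed

    # Example: three control jobs released at t = 0 (release, deadline, exec_time).
    print(edf_schedule([(0, 10, 4), (0, 6, 3), (0, 15, 5)]))

A cost-sensitive heuristic would additionally weigh each candidate job by the control cost its delay imposes on the plant, rather than by deadline alone.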
20

Contribution aux méthodes de reconstruction d'images appliquées à la tomographie d'émission par positrons par l'exploitation des symétries du système / Contribution to image reconstruction methods for positron emission tomography by exploiting the symmetries of the system

Leroux, Jean-Daniel January 2014 (has links)
The desire to achieve high spatial resolution in small-animal medical imaging is driving the development of scanners built from ever smaller detectors. Scanners approaching the theoretical maximum resolution of positron emission tomography (PET) are within reach. To extract the maximum amount of information from these scanners, it is important to use advanced processing methods that take into account all of the physical phenomena involved in PET measurement. The problem is all the more complex because these cameras are made up of thousands of detectors, giving rise to millions of measured lines of response to be processed by an image reconstruction algorithm. This leads to three-dimensional (3D) image reconstruction problems that are difficult to solve, mainly because of the memory and computing limits of modern computers. The work carried out in this thesis addresses two major needs in PET image reconstruction: achieving better image quality and accelerating the computations that produce the image. The first part of the work concerns the development of accurate 3D models of the PET acquisition process, which allow better image quality to be reached. These 3D models take the form of system matrices used by an image reconstruction algorithm. To generate these 3D models for PET, analytical and Monte Carlo (MC) simulation-based computation methods were developed. Hybrid methods, based on both analytical and Monte Carlo strategies, were also implemented to combine the advantages of the two approaches. The proposed methods differ from prior art in that they take advantage of the symmetries of the system to considerably reduce the computation time required to obtain accurate 3D matrices. For the analytical approach, the matrix computation is divided into stages that favour the reuse of precomputed models across the scanner's symmetric lines of response. For the MC simulation approach, reusing the MC events collected across symmetric lines of response increases the statistics used to generate the MC matrix and, at the same time, reduces the simulation time. The proposed hybrid method reduces the MC simulation time even further, without compromising the quality of the system matrix. The second part of the work concerns the development of new image reconstruction methods based on a cylindrical coordinate frame, which reduce the memory requirements and accelerate the computations leading to the image. These methods fall into two distinct categories. The first are so-called iterative methods, which solve the image reconstruction problem through an iterative process that produces a new image estimate at each iteration so as to maximize the likelihood between the image and the scanner's measurement. The second are so-called direct methods, which solve the problem by inverting the system matrix relating the image to the projection measurements through a singular value decomposition (SVD; DVS in the French original) of the matrix. The inverse matrix thus obtained can then be multiplied directly with the measurement to obtain the reconstructed image. Using an image in cylindrical coordinates introduces redundancy among the coefficients of the resulting system matrix. By exploiting these redundancies, a system matrix with a so-called block-circulant structure can be obtained, which can then be transformed into the Fourier domain to accelerate the computations of both the iterative and the SVD-based reconstruction processes. Moreover, for the SVD method, using a factorized block-circulant matrix greatly simplifies the SVD-based matrix inversion, which makes the method applicable to 3D image reconstruction problems. Solving problems of this complexity was previously not possible with prior-art SVD methods, owing to memory constraints and excessive computational load. In short, the combined work ultimately aims to bring together computational speed and optimal image quality in a single algorithm, in order to create an ideal 3D reconstruction tool for use in a clinical context.
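The computational advantage of the block-circulant structure described above rests on a standard linear-algebra identity (a general fact, not specific to this thesis): a block-circulant system matrix A with blocks A_0, ..., A_{N-1} is block-diagonalized by the discrete Fourier transform,

    (F_N \otimes I)\, A\, (F_N^{H} \otimes I)
        = \operatorname{diag}(\hat{A}_0, \dots, \hat{A}_{N-1}),
    \qquad \hat{A}_j = \sum_{k=0}^{N-1} A_k\, e^{-2\pi i jk/N},

where F_N is the unitary DFT matrix. Since F_N \otimes I is unitary, the SVD (and hence the pseudo-inverse) of A reduces to N independent SVDs of the much smaller blocks \hat{A}_j, which can be computed after an FFT along the angular coordinate of the cylindrical grid.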
