141

Toward an energy harvester for leadless pacemakers / Vers un récupérateur d'énergie pour stimulateur intracardiaque

Deterre, Martin 09 July 2013 (has links)
This work concerns the development and design of an energy harvesting device to supply power to new-generation pacemakers: miniaturized, leadless, battery-free implants placed directly in the heart chambers. After analyzing different mechanical energy sources in the cardiac environment and the associated harvesting mechanisms, a concept based on the regular blood pressure variation stood out: an implant with a flexible packaging that transmits blood forces to an internal transducer. Its main advantages over traditional inertial scavengers are greater power density, adaptability to heartbeat frequency changes, and miniaturization potential. Ultra-flexible, 10 µm thin electrodeposited metal bellows have been designed, fabricated and tested; these prototypes, acting as an implant packaging that deforms under blood pressure actuation, have validated the proposed harvesting concept. A new type of electrostatic transducer (a 3D multi-layer out-of-plane overlap structure with interdigitated combs) has been introduced and fully analyzed through analytical and numerical modeling and is currently being fabricated; depending on the associated electronics, it promises a high extracted energy density. Large-stroke optimized piezoelectric spiral transducers, including their complex electrode patterns, have also been studied through design analysis, numerical simulations, prototype fabrication and experimental testing. An energy density of 3 µJ/cm³/cycle has been experimentally achieved, and the identified improvement paths suggest at least a tenfold increase in output power. With the further developments addressed, the proposed device should provide enough energy to power the next generation of pacemakers autonomously and virtually perpetually.
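
For orientation, the reported 3 µJ/cm³ per cardiac cycle converts directly into an average power density once a heart rate is assumed. The sketch below is a rough check of that arithmetic; the 0.5 cm³ transducer volume is an illustrative assumption, not a value from the thesis.

```python
# Back-of-the-envelope conversion of the reported harvested energy density
# (3 µJ/cm^3 per cardiac cycle) into average power at typical heart rates.
# The transducer volume is a hypothetical example, not from the thesis.

ENERGY_DENSITY_J_PER_CM3_PER_CYCLE = 3e-6  # 3 µJ/cm^3/cycle (from the abstract)
transducer_volume_cm3 = 0.5                # assumed device volume

for heart_rate_bpm in (60, 90, 120):
    cycles_per_second = heart_rate_bpm / 60.0
    power_density_w_per_cm3 = ENERGY_DENSITY_J_PER_CM3_PER_CYCLE * cycles_per_second
    device_power_uw = power_density_w_per_cm3 * transducer_volume_cm3 * 1e6
    print(f"{heart_rate_bpm:3d} bpm -> {power_density_w_per_cm3 * 1e6:.1f} µW/cm^3, "
          f"~{device_power_uw:.1f} µW for a {transducer_volume_cm3} cm^3 device")
```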
142

Finite elements for modeling of localized failure in reinforced concrete / Éléments finis pour la modélisation de la rupture localisée dans le béton armé / Končni elementi za modeliranje lokaliziranih porušitev v armiranem betonu

Jukic, Miha 13 December 2013 (has links)
In this work, several beam finite element formulations are proposed for the failure analysis of planar reinforced concrete beams and frames under monotonic static loading. The localized failure of the material is modeled by the embedded strong discontinuity concept, which enhances the standard interpolation of displacement (or rotation) with a discontinuous function associated with an additional kinematic parameter representing the jump in displacement (or rotation). The new parameters are local and are condensed at the element level. One stress-resultant and two multi-layer beam finite elements are derived.
The stress-resultant Euler-Bernoulli beam element has an embedded discontinuity in rotation. The bending response of the bulk of the element is described by an elasto-plastic stress-resultant material model, while the cohesive relation between the moment and the rotational jump at the softening hinge is described by a rigid-plastic model; the axial response is elastic. In the multi-layer beam finite elements, each layer is treated as a bar made of either concrete or steel. The regular axial strain in a layer is computed according to Euler-Bernoulli or Timoshenko beam theory, and an additional axial strain is produced by an embedded discontinuity in axial displacement, introduced individually in each layer. The behavior of concrete bars is described by an elasto-damage model, while an elasto-plasticity model is used for steel bars. The cohesive relation between the stress at the discontinuity and the axial displacement jump is described by a rigid-damage softening model in concrete bars and by a rigid-plastic softening model in steel bars. The shear response of the Timoshenko element is elastic. Finally, the multi-layer Timoshenko beam finite element is upgraded by including viscosity in the softening model.
The computer code implementation is presented in detail for the derived elements. An operator split computational procedure is presented for each formulation, and the expressions required for the local computation of inelastic internal variables and for the global computation of the degrees of freedom are provided. The performance of the derived elements is illustrated on a set of numerical examples, which show that the multi-layer Euler-Bernoulli beam finite element is not reliable, while the stress-resultant Euler-Bernoulli beam and the multi-layer Timoshenko beam finite elements deliver satisfying results.
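
As an illustration of the cohesive laws mentioned above, the sketch below implements a rigid-plastic relation with linear softening between the hinge moment and the rotation jump. Linear softening and the numerical values are illustrative assumptions, not data from the thesis.

```python
# Minimal sketch of a rigid-plastic softening cohesive law at a hinge: the
# moment transmitted across the discontinuity equals the failure moment when
# softening starts and then decreases linearly with the accumulated rotation
# jump until it vanishes. Values and linear softening are assumptions.

def hinge_moment(alpha: float, m_u: float = 25.0e3, k_s: float = 5.0e5) -> float:
    """Cohesive moment [Nm] transmitted at the softening hinge.

    alpha : accumulated rotation jump at the discontinuity [rad]
    m_u   : ultimate (failure) moment that activates the hinge [Nm]
    k_s   : softening modulus [Nm/rad]
    """
    return max(m_u - k_s * alpha, 0.0)

if __name__ == "__main__":
    for alpha in (0.0, 0.01, 0.03, 0.05, 0.06):
        print(f"rotation jump {alpha:.3f} rad -> cohesive moment {hinge_moment(alpha):9.1f} Nm")
```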
143

Development of lightweight and low-cost microwave components for remote-sensing applications

Donado Morcillo, Carlos Alberto 11 January 2013 (has links)
The objective of the proposed research is to design, implement, and characterize low-cost, lightweight front-end components and subsystems in the microwave domain through innovative packaging architectures for remote sensing applications. Particular emphasis is placed on system-on-package (SoP) solutions implemented in organic substrates as a low-cost alternative to conventional, expensive, rigid, and fragile radio-frequency substrates. To this end, the dielectric properties of the organic substrates RT/duroid 5880, 6002 and 6202 are presented from 30 GHz to 70 GHz, covering most of the Ka and V radar bands, also giving a thorough insight into the uncertainty of the microstrip ring resonator method by means of Monte Carlo uncertainty analysis. Additionally, an ultra-thin, high-power antenna-array technology with transmit/receive (T/R) functionality is introduced for mobile applications in the X band. Two lightweight SoP T/R array panels are presented in this work using novel technologies such as Silicon Germanium integrated circuits and microelectromechanical system switches on a hybrid organic package of liquid crystal polymer and RT/duroid 5880LZ. A maximum power of 47 dBm is achieved in a package with a thickness of 1.8 mm without the need for bulky thermal management devices. Finally, to address the thermal limitations of the thin-film substrates of interest (liquid crystal polymer, RT/duroid 6002, alumina and aluminum nitride), a thermal assessment of microstrip structures is presented in the X band, along with the thermal characterization of the dielectric properties of RT/duroid 6002 from 20 °C to 200 °C and from 30 GHz to 70 GHz. Additional high-power, X-band technologies presented in this work include a novel and compact topology for evanescent mode filters, and low-profile Wilkinson power dividers implemented on aluminum nitride using tantalum nitride thin-film resistors.
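
The ring resonator extraction and its Monte Carlo uncertainty analysis mentioned above follow a simple pattern: the resonance condition of the ring is inverted for the effective permittivity and the measurement uncertainties are propagated by sampling. The sketch below illustrates that pattern under the resonance condition 2πr = n·c/(f_n·√ε_eff); the nominal radius, resonant frequency, mode number and their spreads are illustrative assumptions, not measured values from the thesis.

```python
# Rough sketch of ring-resonator permittivity extraction with a Monte Carlo
# uncertainty estimate. All numerical inputs are illustrative assumptions.

import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum [m/s]

def eps_eff(n_mode: int, radius_m: float, f_res_hz: float) -> float:
    """Effective permittivity from the n-th ring resonance."""
    return (n_mode * C0 / (2.0 * np.pi * radius_m * f_res_hz)) ** 2

rng = np.random.default_rng(0)
n_samples = 100_000
radius = rng.normal(5.0e-3, 10e-6, n_samples)   # 5 mm ring radius +/- 10 µm (assumed)
f_res = rng.normal(40.0e9, 20e6, n_samples)     # 40 GHz resonance +/- 20 MHz (assumed)

samples = eps_eff(n_mode=6, radius_m=radius, f_res_hz=f_res)
print(f"eps_eff = {samples.mean():.3f} +/- {samples.std():.3f} (1 sigma, Monte Carlo)")
```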
144

Superconducting Nanostructures for Quantum Detection of Electromagnetic Radiation

Jafari Salim, Amir 06 September 2014 (has links)
In this thesis, superconducting nanostructures for the quantum detection of electromagnetic radiation are studied. In this regard, the electrodynamics of topological excitations in 1D superconducting nanowires and 2D superconducting nanostrips is investigated. Topological excitations in superconducting nanowires and nanostrips lead to crucial deviations from the bulk properties. In 1D superconductors, the topological excitations are phase slips of the order parameter, in which the magnitude of the order parameter locally drops to zero and the phase jumps by an integer multiple of 2π. We investigate the effect of a high-frequency field on 1D superconducting nanowires and derive the complex conductivity. Our study reveals that the rate of quantum phase slips (QPSs) is exponentially enhanced under high-frequency irradiation. Based on this finding, we propose an energy-resolving terahertz radiation detector using superconducting nanowires. In superconducting nanostrips, the topological excitations are magnetic vortices. The motion of magnetic vortices results in dissipative processes that limit the efficiency of devices using superconducting nanostrips. It is shown that in a multi-layer structure the potential barrier for vortices to penetrate the structure is elevated, which results in a significant reduction of the dissipative processes. In superconducting nanowire single photon detectors (SNSPDs), vortex motion causes dark counts and a reduction of the critical current, which leads to low detector efficiency. Based on this finding, we show that a multi-layer SNSPD is capable of approaching the characteristics of an ideal single photon detector in terms of dark counts and quantum efficiency. It is also shown that in a multi-layer SNSPD the photon coupling efficiency is dramatically enhanced due to the increase in the optical path of the incident photon.
145

Utilisation combinée des rayons X et gamma émis lors de l'interaction avec la matière d'ions légers aux énergies intermédiaires : des mécanismes primaires de réaction aux applications / Combined use of X and gamma ray emission induced by the interaction of light charged ions with matter at medium energy: from primary reaction mechanisms to applications

Subercaze, Alexandre 28 November 2017 (has links)
Particle Induced X-ray Emission (PIXE) and Particle Induced Gamma-ray Emission (PIGE) are multi-elemental and non-destructive ion beam analysis techniques. They are based on the detection of the characteristic X-ray and gamma emission induced by the interaction of accelerated charged particles with matter. With PIXE, elements with an atomic number Z > 11 can be quantified down to a detection limit of the order of µg/g (ppm). X-rays emitted by light elements (Z < 11) are strongly attenuated by matter, so PIXE shows little sensitivity for this range of atomic numbers; these elements can be analyzed simultaneously using PIGE. One of the benefits of PIXE/PIGE is its ability to perform analyses with different requirements (elemental concentration mapping, in-depth analysis, valuable objects), on both homogeneous and non-homogeneous samples. High energy PIXE (HEPIXE) has been developed at the ARRONAX cyclotron using particle beams of up to 70 MeV; it makes the analysis of thick samples achievable and, by using high energy beams, reduces the risk of damaging the sample. First, the high energy PIXE/PIGE platform developed at ARRONAX is described. Then, a study of high energy PIGE and the experimental procedure set up for measuring gamma emission cross sections are presented. Finally, the methods developed and the results obtained during the analysis of several types of inhomogeneous samples (multi-layer and granular) are presented and discussed.
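
The cross-section measurements mentioned above reduce, in the thin-target limit, to dividing a measured peak yield by the number of incident particles, the areal atomic density of the target and the absolute detection efficiency. The sketch below shows that arithmetic; it is a generic thin-target estimate, and every numerical input is an illustrative assumption rather than data from the thesis.

```python
# Hedged sketch of a thin-target gamma-emission cross-section estimate:
# sigma = Y / (N_particles * N_target_areal * efficiency).
# All numerical inputs below are illustrative assumptions.

AVOGADRO = 6.022_140_76e23
E_CHARGE = 1.602_176_634e-19  # elementary charge [C]

def areal_atoms_per_cm2(thickness_mg_cm2: float, molar_mass_g_mol: float) -> float:
    """Areal atomic density [atoms/cm^2] of a thin target."""
    return thickness_mg_cm2 * 1e-3 / molar_mass_g_mol * AVOGADRO

def cross_section_mb(peak_counts: float, charge_uC: float, particle_charge: int,
                     thickness_mg_cm2: float, molar_mass_g_mol: float,
                     detection_efficiency: float) -> float:
    """Thin-target gamma-emission cross section in millibarn."""
    n_particles = charge_uC * 1e-6 / (particle_charge * E_CHARGE)
    n_target = areal_atoms_per_cm2(thickness_mg_cm2, molar_mass_g_mol)
    sigma_cm2 = peak_counts / (n_particles * n_target * detection_efficiency)
    return sigma_cm2 / 1e-27  # 1 mb = 1e-27 cm^2

if __name__ == "__main__":
    sigma = cross_section_mb(peak_counts=1.2e4, charge_uC=10.0, particle_charge=1,
                             thickness_mg_cm2=1.0, molar_mass_g_mol=27.0,
                             detection_efficiency=1.0e-3)
    print(f"estimated cross section ~ {sigma:.1f} mb")
```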
146

An Extension Of Multi Layer IPSec For Supporting Dynamic QoS And Security Requirements

Kundu, Arnab 02 1900 (has links) (PDF)
Governments, the military, corporations, financial institutions and others exchange a great deal of confidential information over the Internet these days. Protecting such confidential information and ensuring its integrity and origin authenticity are of paramount importance. Protocols and solutions exist at different layers of the TCP/IP protocol stack to address these security requirements: application-level encryption (e.g., PGP for secure mail transfer), TLS-based secure TCP communication, and IPSec for IP-layer security are among them. Due to its scalability, the wide acceptance of the IP protocol, and its application-independent character, the IPSec protocol has become a standard for providing Internet security. IPSec provides two protocols, namely the Authentication Header (AH) and the Encapsulating Security Payload (ESP). Each protocol can operate in two modes, transport and tunnel mode. The AH provides data origin authentication, connectionless integrity and anti-replay protection. The ESP provides all the security functionality of AH along with confidentiality. The IPSec protocols provide end-to-end security for an entire IP datagram or for the upper-layer protocols of the IP payload, depending on the mode of operation. However, this end-to-end security model restricts performance-enhancement and security-related operations of intermediate networking and security devices, as they cannot access or modify transport and upper-layer headers, nor the original IP headers in the case of tunnel mode. These intermediate devices include routers providing Quality of Service (QoS), TCP Performance Enhancement Proxies (PEP), application-level proxy devices and packet filtering firewalls. The interoperability problem between IPSec and intermediate devices has been addressed in the literature: Transport Friendly ESP (TF-ESP), Transport Layer Security (TLS), splitting a single IPSec tunnel into multiple tunnels, and Multi Layer IPSec (ML-IPSec) are a few of the proposed solutions. Unlike the other solutions, the ML-IPSec protocol solves this interoperability problem without violating the end-to-end security of the data or exposing important header fields. ML-IPSec uses a multi-layer protection model in place of the single end-to-end model. Unlike IPSec, where the scope of encryption and authentication applies to the entire IP datagram, this scheme divides the IP datagram into zones and applies different protection schemes to different zones. When ML-IPSec protects a traffic stream from its source to its destination, it first partitions the IP datagram into zones and applies zone-specific cryptographic protections. During the flow of the ML-IPSec protected datagram through an authorized intermediate gateway, certain type I zones of the datagram may be decrypted and re-encrypted, but the other zones remain untouched. When the datagram reaches its destination, ML-IPSec reconstructs the entire datagram. The ML-IPSec protocol, however, suffers from the static configuration of zones and zone-specific cryptographic parameters before the commencement of communication. Static configuration requires a priori knowledge of the routing infrastructure and manual configuration of all intermediate nodes. While this may not be an issue in a geostationary satellite environment using TCP-PEP, it could pose problems in a mobile or distributed environment, where many stations may be in concurrent use.
Moreover, in a mobile environment the ML-IPSec endpoints may not be trusted by all intermediate nodes for manual configuration without a prior arrangement providing mutual trust. The static zone boundaries of the protocol force one to ignore TCP/IP datagrams with variable header lengths (in the case of TCP or IP headers with OPTION fields); thus ML-IPSec will not function correctly if the endpoints change their use of IP or TCP options, especially in tunnel mode. The zone mapping proposed in ML-IPSec is also static in nature. This forces one to configure the zone mapping before the commencement of communication and prevents the protocol from dynamically changing the zone mapping to provide access to intermediate nodes without terminating the existing ML-IPSec communication. The ML-IPSec endpoints can, of course, configure the zone mapping with the maximum number of zones, but this leads to unnecessary overheads that increase with the number of zones. Again, static zone mapping could pose problems in a mobile or distributed environment, where communication paths may change. Our extension to the ML-IPSec protocol, called Dynamic Multi Layer IPSec (DML-IPSec), proposes a multi-layer variant with the capability of dynamic zone configuration and of sharing cryptographic parameters between IPSec endpoints and intermediate nodes. It also accommodates IP datagrams with variable-length headers. The DML-IPSec protocol redefines some of the IPSec and ML-IPSec fundamentals: it proposes significant modifications to the datagram processing stage of ML-IPSec and a new key sharing protocol to provide the above-mentioned capabilities. DML-IPSec supports the AH and ESP protocols of conventional IPSec, with some modifications required to provide separate cryptographic protection to different zones of an IP datagram. The extended protocol defines a zone as a set of non-overlapping and contiguous partitions of an IP datagram, unlike ML-IPSec, where a zone may consist of non-contiguous portions. Every zone is given cryptographic protection independent of the other zones. DML-IPSec categorizes zones into two types, depending on the accessibility requirements at the intermediate nodes. The first type, called a type I zone, is defined on the headers of the IP datagram and is meant for examination and modification by intermediate nodes; one type I zone may span a single header or a series of contiguous headers of an IP datagram. The second type, called the type II zone, covers the payload portion and is kept secure between the endpoints of the IPSec communication. The single type II zone starts immediately after the last type I zone and spans to the end of the IP datagram. If no intermediate processing is required during the entire IPSec session, the single type II zone may cover the whole IP datagram; otherwise it follows one or more type I zones. The DML-IPSec protocol uses a mapping from the octets of the IP datagram to the different zones, called the zone map, for partitioning an IP datagram into zones. The zone map contains logical boundaries for the zones, unlike the physical, byte-specific boundaries of ML-IPSec; the physical boundaries are derived on the fly, using either the implicit header lengths or the explicit header length fields of the protocol headers. This property of the DML-IPSec zones enables the protocol to accommodate datagrams with variable header lengths.
Another important feature of DML-IPSec zones is that the zone maps need not remain constant throughout the entire lifespan of the IPSec communication; the key sharing protocol may modify any existing zone map to provide service to some intermediate node. DML-IPSec also redefines the Security Association (SA), a relationship between two endpoints of an IPSec communication that describes how the entities will use security services to communicate securely. In the case of DML-IPSec, several intermediate nodes may participate in defining these security protections for the IP datagrams. Moreover, the scope of one particular set of security protections is valid for a single zone only, so a single SA is defined for each zone of an IP datagram. Finally, all these individual zonal SAs are combined to represent the security relationship of the entire IP datagram. The intermediate nodes can hold the cryptographic information of the relevant type I zones, whereas the cryptographic information related to the type II zone is hidden from any intermediate node; the key sharing protocol is responsible for selectively sharing this zone information with the intermediate nodes. The DML-IPSec protocol has two basic components: the first is the processing of datagrams at the endpoints as well as at intermediate nodes, and the second is the key sharing protocol. The endpoints of a DML-IPSec communication involve two types of processing. The first, called outbound processing, is responsible for generating a DML-IPSec datagram from an IP datagram. It first derives the zone boundaries using the zone map and the individual header field lengths. After this partitioning of the IP datagram, zone-wise encryption is applied (in the case of ESP). Finally, zone-specific authentication trailers are calculated and appended after each zone. The other, inbound processing, is responsible for recovering the original IP datagram from a DML-IPSec datagram. The first step of inbound processing, the derivation of the zone boundaries, is significantly different from that of outbound processing because the length fields of the zones remain encrypted: after receiving a DML-IPSec datagram, the receiver keeps decrypting type I zones until it has decrypted the header length field of the header(s). This is followed by zone-wise authentication verification and zone-wise decryption. An intermediate node processes an incoming DML-IPSec datagram depending on the presence of the security parameters for that particular DML-IPSec communication. In the absence of the security parameters, the key sharing protocol is executed; otherwise, all incoming DML-IPSec datagrams are partially decrypted according to the security association and zone mapping at the inbound processing module. After inbound processing, the partially decrypted IP datagram traverses the networking stack of the intermediate node. Before the IP datagram leaves the intermediate node, it is processed by the outbound module to reconstruct the DML-IPSec datagram. The key sharing protocol, which shares zone-related cryptographic information with the intermediate nodes, is the other important component of the DML-IPSec protocol; it is responsible for dynamically enabling intermediate nodes to access zonal information as required for performing specific services relating to quality or security. A sketch of the zone-boundary derivation and per-zone protection idea is given below.
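
The sketch below illustrates the outbound-processing idea just described: a logical zone map names contiguous header groups (type I zones) and a final payload (type II) zone, physical byte boundaries are derived on the fly from header length fields, and each zone is protected with its own key. The header parsing covers only a fixed outer IP header and a TCP header, and the XOR "encryption" is a toy stand-in; none of this reproduces the actual kernel implementation described in the thesis.

```python
# Illustrative sketch of zone-map-driven outbound processing. The parsing and
# the XOR stream are stand-ins for the real ESP processing; function names and
# structures are assumptions for illustration only.

from typing import List, Tuple

def derive_boundaries(datagram: bytes, zone_map: List[List[str]]) -> List[Tuple[int, int]]:
    """Turn a logical zone map (lists of header names) into byte ranges.

    Only an outer IP header (length from the IHL field) and a TCP header
    (length from the data-offset field) are handled, as a minimal example.
    """
    offsets, cursor = {}, 0
    ihl = (datagram[0] & 0x0F) * 4                        # outer IP header length
    offsets["ip"] = (cursor, cursor + ihl); cursor += ihl
    tcp_len = ((datagram[cursor + 12] >> 4) & 0x0F) * 4   # TCP data-offset field
    offsets["tcp"] = (cursor, cursor + tcp_len); cursor += tcp_len
    offsets["payload"] = (cursor, len(datagram))

    boundaries = []
    for zone in zone_map:                                 # each zone = contiguous headers
        start = offsets[zone[0]][0]
        end = offsets[zone[-1]][1]
        boundaries.append((start, end))
    return boundaries

def protect(datagram: bytes, zone_map: List[List[str]], zone_keys: List[bytes]) -> bytes:
    """Apply per-zone protection (a toy XOR stream stands in for ESP encryption)."""
    out = bytearray(datagram)
    for (start, end), key in zip(derive_boundaries(datagram, zone_map), zone_keys):
        for i in range(start, end):
            out[i] ^= key[(i - start) % len(key)]
    return bytes(out)
```

With a zone map such as [["ip"], ["tcp"], ["payload"]], the first two ranges play the role of type I zones and the last one the single type II zone.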
Whenever a DML-IPSec datagram traverses an intermediate node that requires access to some of the type I zones, the inbound security database is searched for cryptographic parameters; if no entry is present, the key sharing protocol is invoked. The very first step of this protocol is a header-inaccessible message from the intermediate node to the source of the DML-IPSec datagram, in whose body the intermediate node lists the protocol headers it requires access to. This first phase of the protocol, called the zone reorganization phase, is responsible for deciding the zone mapping that will provide access to intermediate nodes; if the current zone map cannot serve the header request, the DML-IPSec endpoint reorganizes the existing zone map in this phase. The next phase, called the authentication phase, is responsible for verifying the identity of the intermediate node to the source of the DML-IPSec session. Upon successful authentication, the third phase, called the shared secret establishment phase, commences. This phase establishes a temporary shared secret between the source and the intermediate node, to be used as the key for encrypting the actual transfer of the DML-IPSec security parameters in the next phase. The final phase, called the security parameter sharing phase, is solely responsible for the actual transfer of the security parameters from the source to the intermediate nodes and for updating their security and policy databases. The successful execution of these four phases enables the DML-IPSec protocol to dynamically modify the zone map, providing access to some header portions for intermediate nodes, and to share the cryptographic parameters required for accessing the relevant type I zones, all without disturbing an existing DML-IPSec communication (a schematic view of the four phases is sketched below). We have implemented DML-IPSec for the ESP protocol according to the definition of zones, along with the key sharing algorithm. RHEL version 4 and Linux kernel version 2.6.23.14 were used for the implementation. We implemented the multi-layer IPSec functionality inside the native Linux implementation of the IPSec protocol. The SA structure was updated to hold the necessary SA information for multiple zones instead of the single SA of standard IPSec, and the zone mapping was implemented alongside the kernel implementation of the SA. The inbound and outbound processing modules of the IPSec endpoints were re-implemented to incorporate the multi-layer IPSec capability, and the modules required for partial IPSec processing at the intermediate nodes were also implemented. The key sharing protocol consists of user-space utilities and corresponding kernel-space components; the ICMP protocol is used for the communications required to execute it. At the kernel level, a pseudo character device driver was implemented to update the kernel-space data structures, and the necessary modifications were made to the relevant kernel-space functions. User-space utilities and a corresponding kernel-space interface were provided for updating the security databases. As DML-IPSec ESP uses the same security policy mechanism as IPSec ESP, existing utilities (viz. setkey) are used for updating the security policy. However, the configuration of the SA is significantly different, as it depends on the DML-IPSec zones.
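
The sketch below condenses the four key-sharing phases into toy local functions. Each phase is a stand-in for the ICMP-based message exchange of the real protocol; the function names, the SHA-256 key derivation and the data structures are illustrative assumptions, not the actual implementation.

```python
# Schematic driver for the four-phase key sharing exchange. Each phase is a
# toy local computation standing in for a real message exchange.

import hashlib
import os
from typing import Dict, List, Tuple

def zone_reorganization(current_map: List[List[str]], needed: List[str]) -> List[List[str]]:
    """Phase 1: split type I zones so every requested header gets its own zone."""
    reorganized = [[h] for zone in current_map[:-1] for h in zone]  # headers stay type I
    for header in needed:
        if [header] not in reorganized:
            reorganized.append([header])
    return reorganized + [current_map[-1]]               # type II payload zone stays last

def authenticate(node_id: str, trusted: List[str]) -> bool:
    """Phase 2: the source checks the intermediate node against its trust list."""
    return node_id in trusted

def establish_shared_secret() -> bytes:
    """Phase 3: stand-in for a key agreement; yields a temporary shared secret."""
    return os.urandom(32)

def share_zone_parameters(needed: List[str], secret: bytes) -> Dict[str, bytes]:
    """Phase 4: derive and hand over per-zone keys for the granted type I zones."""
    return {h: hashlib.sha256(secret + h.encode()).digest() for h in needed}

def key_sharing(zone_map, node_id, trusted, needed) -> Tuple[List[List[str]], Dict[str, bytes]]:
    new_map = zone_reorganization(zone_map, needed)
    if not authenticate(node_id, trusted):
        raise PermissionError("intermediate node failed authentication")
    secret = establish_shared_secret()
    return new_map, share_zone_parameters(needed, secret)

if __name__ == "__main__":
    zone_map = [["ip", "tcp"], ["payload"]]              # one type I zone + type II zone
    new_map, keys = key_sharing(zone_map, "IN", trusted=["IN"], needed=["tcp"])
    print(new_map, list(keys))
```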
The DML-IPSec ESP implementation uses the existing utilities (setkey and racoon) for the configuration of the sole type II zone, while the type I zones are configured using the DML-IPSec application; the key sharing protocol also uses this application to reorganize the zone mapping and the zone-wise cryptographic parameters. This lets one use the default IPSec mechanism for the configuration of the sole type II zone. For the experimental validation of DML-IPSec, we used a testbed in which an ESP tunnel is configured between two gateways, GW1 and GW2; IN acts as an intermediate node and is installed with several intermediate applications, and clients C11 and C21 are connected to GW1 and GW2, respectively. We carried out detailed experiments to validate our solution with respect to a firewalling service, using stateful packet filtering with iptables and its string match extension at IN. First, we configured the firewall to allow only FTP communication between C11 and C21 (using the port information of the TCP header and the IP addresses of the inner IP header). In the second experiment, we configured the firewall to allow only a Web connection between C11 and C21 using the Web address of C11 (using the HTTP header, the port information of the TCP header and the IP addresses of the inner IP header). In both experiments, we initiated the FTP and Web sessions before executing the key sharing protocol; the sessions could not be established, as access to the upper-layer headers was denied. After executing the key sharing protocol, the sessions could be established, showing that the protocol headers had become available to the iptables firewall at IN following the successful key sharing. We used the record route option of the ping program to validate the claim of handling datagrams with variable header lengths. This option records, in the IP OPTION field, the IP addresses of all the nodes traversed during a round trip. As we used ESP in tunnel mode between GW1 and GW2, the IP addresses would be recorded inside the encrypted inner IP header. We executed ping between C11 and C21 and observed the record route output: before the execution of the key sharing protocol, the IP addresses of IN were absent from the record route output; after its successful execution, they were present. The DML-IPSec protocol introduces some processing overhead and also increases the datagram size compared to standard IPSec; this increase in IP datagram size is present in the case of ML-IPSec as well, and it grows with the number of zones. We obtained experimental results on the processing delay introduced by DML-IPSec processing. For this purpose, we executed the ping program from C11 to C21 in the testbed for the following cases: (1) ML-IPSec with one type I and one type II zone, and (2) DML-IPSec with one type I and one type II zone. We observed around a 10% increase in RTT for DML-IPSec with two dynamic zones over ML-IPSec with two static zones; this overhead is due to the on-the-fly derivation of the zone lengths and the related processing. The above experiment analyzes the processing delay at the endpoints without intermediate processing; we also analyzed the effect of intermediate processing due to the dynamic zones of DML-IPSec, using the iptables firewall in the above-mentioned experiment.
In this case, the RTT for DML-IPSec with dynamic zones increases by less than 10% over that of ML-IPSec with static zones. To summarize, we have proposed an extension to the multi-layer IPSec protocol, called Dynamic Multi Layer IPSec (DML-IPSec). It is capable of dynamically modifying zones and of sharing cryptographic parameters between endpoints and intermediate nodes using a key sharing protocol, and it also accommodates datagrams with variable header lengths. These features enable any intermediate node to dynamically access the required header portions of DML-IPSec protected datagrams; consequently, they make DML-IPSec well suited for providing IPSec over mobile and distributed networks. We also provide a complete implementation of the ESP protocol and an experimental validation of our work, and we find that it provides dynamic support for QoS and security services without any significant extra overhead compared to ML-IPSec. The thesis begins with an introduction to communication security requirements in TCP/IP networks. Chapter 2 provides an overview of communication security protocols at different layers and describes the IPSec protocol suite in detail. Chapter 3 studies the interoperability issues between IPSec and intermediate devices and discusses the different solutions. Our proposed extension to the ML-IPSec protocol, called Dynamic ML-IPSec (DML-IPSec), is presented in Chapter 4. The design and implementation details of DML-IPSec in a Linux environment are presented in Chapter 5, which also provides the experimental validation of the protocol. In Chapter 6, we summarize the research work, highlight its contributions and discuss directions for further research.
147

Pokovování polyetylentereftalátu mědí a realizace vodivých struktur / Polyethylene Terephthalate Copper Plating for Conductive Structures Realisation

Chmela, Ondřej January 2013 (has links)
This master's thesis deals with methods of pretreating and coating the surface of PET to produce conductive copper structures, and with the associated quality control. The thesis also includes a theoretical analysis of these methods. The theoretical part discusses physical and chemical surface pretreatment techniques, as well as methods for making the substrate surface conductive, the subsequent galvanic copper plating, quality control of the coating, and testing of the adhesion between the layers. The experimental part focuses on two methods of pretreating the polymer surface. The properties of these pretreatments were evaluated using atomic force microscopy and by determining the surface energy from wetting and contact angle measurements. The surface is made conductive by cathode sputtering, followed by electrochemical copper plating. The adhesion of the layers is tested mainly with the scratch test, complemented by other methods. The results of these sub-operations are used for the realization of multi-layer conductive structures.
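
The surface energy determination from contact angles mentioned above is commonly done with a two-liquid evaluation such as the Owens-Wendt method; the abstract does not name the model, so Owens-Wendt is an assumption here. In the sketch below, the contact angles are made-up example values, while the liquid parameters are common literature figures (mN/m).

```python
# Hedged sketch of the Owens-Wendt two-liquid surface energy evaluation:
# gamma_L * (1 + cos(theta)) = 2 * (sqrt(g_s^d * g_L^d) + sqrt(g_s^p * g_L^p)),
# solved for the solid's dispersive and polar components from two liquids.
# Contact angles below are hypothetical example values.

import math
import numpy as np

# (total, dispersive, polar) surface tension components of the test liquids [mN/m]
LIQUIDS = {
    "water":         (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}

def owens_wendt(contact_angles_deg: dict) -> tuple:
    """Return (dispersive, polar, total) surface energy of the solid in mN/m."""
    rows, rhs = [], []
    for name, theta in contact_angles_deg.items():
        total, disp, polar = LIQUIDS[name]
        rows.append([math.sqrt(disp), math.sqrt(polar)])
        rhs.append(total * (1.0 + math.cos(math.radians(theta))) / 2.0)
    x, y = np.linalg.solve(np.array(rows), np.array(rhs))  # x = sqrt(g_s^d), y = sqrt(g_s^p)
    gamma_d, gamma_p = x * x, y * y
    return gamma_d, gamma_p, gamma_d + gamma_p

if __name__ == "__main__":
    # Hypothetical angles for untreated vs. pretreated PET.
    for label, angles in (("untreated", {"water": 78.0, "diiodomethane": 35.0}),
                          ("pretreated", {"water": 52.0, "diiodomethane": 30.0})):
        d, p, tot = owens_wendt(angles)
        print(f"{label}: dispersive {d:.1f}, polar {p:.1f}, total {tot:.1f} mN/m")
```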
148

Detekce hran pomocí neuronové sítě / Neural Network Based Edge Detection

Janda, Miloš January 2010 (has links)
The aim of this thesis is to describe neural network based edge detection methods that can substitute for classic detection methods using edge operators. The first chapters discuss the general issues of image processing, edge detection and neural networks. The main part shows the process of generating synthetic images and extracting training datasets, and discusses suitable neural network topologies for edge detection. The last part of the thesis is dedicated to evaluating and measuring the accuracy of the neural network.
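
A training pipeline of the kind described above can be sketched as follows: synthetic images are generated, edge labels are derived with a classical operator, and fixed-size pixel neighborhoods become the network inputs. The patch size, the Sobel-based labeling and the image content are illustrative assumptions, not the thesis's actual setup.

```python
# Minimal sketch of building an edge-detection training set from synthetic
# images: Sobel-derived binary edge labels, 3x3 pixel patches as inputs.

import numpy as np
from scipy import ndimage

def synthetic_image(size: int = 64, seed: int = 0) -> np.ndarray:
    """A grayscale image with a random bright rectangle on a dark background."""
    rng = np.random.default_rng(seed)
    img = rng.normal(0.1, 0.02, (size, size))
    r0, c0 = rng.integers(8, size // 2, 2)
    img[r0:r0 + size // 3, c0:c0 + size // 3] += 0.8
    return np.clip(img, 0.0, 1.0)

def training_pairs(img: np.ndarray, patch: int = 3):
    """Return (N, patch*patch) input patches and binary edge targets."""
    edges = (np.hypot(ndimage.sobel(img, 0), ndimage.sobel(img, 1)) > 0.5).astype(float)
    half = patch // 2
    xs, ys = [], []
    for r in range(half, img.shape[0] - half):
        for c in range(half, img.shape[1] - half):
            xs.append(img[r - half:r + half + 1, c - half:c + half + 1].ravel())
            ys.append(edges[r, c])
    return np.array(xs), np.array(ys)

X, y = training_pairs(synthetic_image())
print(X.shape, y.mean())   # feature matrix shape and fraction of edge pixels
```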
149

Development of a Software Reliability Prediction Method for Onboard European Train Control System

Longrais, Guillaume Pierre January 2021 (has links)
Software reliability prediction is a complex area, as there are no accurate models to represent reliability throughout the use of software, unlike hardware reliability. In the context of the software reliability of on-board train systems, ensuring good software reliability over time is all the more critical given the current density of rail traffic and the risk of accidents resulting from a software malfunction. This thesis proposes to use soft computing methods and historical failure data to predict the software reliability of on-board train systems. For this purpose, four machine learning models (Multi-Layer Perceptron, Imperialist Competitive Algorithm Multi-Layer Perceptron, Long Short-Term Memory Network and Convolutional Neural Network) are compared to determine which has the best prediction performance. We also study the impact of having one or more features represented in the dataset used to train the models. The performance of the different models is evaluated using the Mean Absolute Error, Mean Squared Error, Root Mean Squared Error and R Squared. The report shows that the Long Short-Term Memory Network is the best performing model on the data used for this project. It also shows that datasets with a single feature achieve better prediction. However, the small amount of data available to conduct the experiments in this project may have impacted the results obtained, which makes further investigation necessary.
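
For reference, the four evaluation metrics named above are computed below for a toy prediction of failure counts; the numbers are made up and only the metric definitions are the point.

```python
# MAE, MSE, RMSE and R^2 for a toy reliability prediction.

import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

y_true = np.array([3, 5, 9, 14, 20, 27], dtype=float)   # observed failures per period
y_pred = np.array([4, 5, 8, 15, 19, 29], dtype=float)   # model output (illustrative)
print({k: round(v, 3) for k, v in evaluate(y_true, y_pred).items()})
```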
150

Radar based tank level measurement using machine learning : Agricultural machines / Nivåmätning av tank med radar sensorer och maskininlärning

Thorén, Daniel January 2021 (has links)
Agriculture is becoming more dependent on computerized solutions to make the farmer's job easier. The big step that many companies are working towards is fully autonomous vehicles that work the fields. To that end, the equipment fitted to said vehicles must also adapt and become autonomous. Making this equipment autonomous takes many incremental steps, one of which is developing an accurate and reliable tank level measurement system. In this thesis, a system for tank level measurement in a seed planting machine is evaluated. Traditional systems use load cells to measure the weight of the tank; however, these types of systems are expensive to build and cumbersome to repair. They also add a lot of weight to the equipment, which increases the fuel consumption of the tractor. Thus, this thesis investigates the use of radar sensors together with a number of Machine Learning algorithms. Fourteen radar sensors are fitted to a tank at different positions, data is collected, and a preprocessing method is developed. Then, the data is used to test the following Machine Learning algorithms: Bagged Regression Trees (BG), Random Forest Regression (RF), Boosted Regression Trees (BRT), Linear Regression (LR), Linear Support Vector Machine (L-SVM), and Multi-Layer Perceptron Regressor (MLPR). The model with the best 5-fold cross-validation scores was Random Forest, closely followed by Boosted Regression Trees. A robustness test, using 5 previously unseen scenarios, revealed that the Boosted Regression Trees model was the most robust. The radar position analysis showed that 6 sensors together with the MLPR model gave the best RMSE scores. In conclusion, the models performed well on this type of system, which shows that they might be a competitive alternative to load cell based systems.
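
A model comparison of the kind described above can be reproduced in outline with scikit-learn counterparts of the listed algorithms and 5-fold cross-validated RMSE. The synthetic stand-in data (14 radar-derived features per sample, fill level as target) and the default hyperparameters are assumptions, not the thesis setup.

```python
# Hedged sketch of comparing the listed regressors with 5-fold CV RMSE on
# synthetic stand-in data. Data generation and hyperparameters are assumed.

import numpy as np
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
level = rng.uniform(0.0, 1.0, 400)                       # true tank fill level
X = level[:, None] + rng.normal(0.0, 0.05, (400, 14))    # 14 noisy radar-derived features

models = {
    "Bagged Regression Trees": BaggingRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Boosted Regression Trees": GradientBoostingRegressor(random_state=0),
    "Linear Regression": LinearRegression(),
    "Linear SVM": make_pipeline(StandardScaler(), LinearSVR(max_iter=10_000)),
    "MLP Regressor": make_pipeline(StandardScaler(), MLPRegressor(max_iter=2_000, random_state=0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, level, cv=5, scoring="neg_root_mean_squared_error")
    print(f"{name:26s} RMSE = {-scores.mean():.4f} +/- {scores.std():.4f}")
```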
