201

Monocular depth estimation in images and sequences using occlusion cues

Palou Visa, Guillem 21 February 2014 (has links)
When humans observe a scene, they are able to perfectly distinguish the different parts composing it. Moreover, humans can easily reconstruct the spatial position of these parts and conceive a consistent structure. The mechanisms involved in visual perception have been studied since the beginning of neuroscience but, still today, not all the processes composing it are known. In usual situations, humans can make use of three different methods to estimate the scene structure. The first one, the so-called divergence, makes use of both eyes. When objects lie in front of the observer at distances of up to a hundred meters, subtle differences in the image formed in each eye can be used to determine depth. When objects are not in the field of view of both eyes, other mechanisms must be used. In these cases, both visual cues and prior learned information can be used to determine depth. Even if these mechanisms are less accurate than divergence, humans can almost always infer the correct depth structure when using them. As examples of visual cues, occlusion, perspective or object size provide a great deal of information about the structure of the scene. A priori information depends on each observer, but it is normally used subconsciously by humans to detect commonly known regions such as the sky, the ground or different types of objects. In recent years, as technology has become able to handle the processing burden of vision systems, many efforts have been devoted to designing automated scene-interpretation systems. In this thesis we address the problem of depth estimation using only one point of view and only occlusion depth cues. The objective is to detect the occlusions present in the scene and combine them with a segmentation system so as to generate a relative depth-order map of the scene. We explore both static and dynamic situations: single images, frames within sequences, and full video sequences. For the case where a full image sequence is available, a system exploiting motion information to recover the depth structure is also designed. Results are promising and competitive with respect to the state-of-the-art literature, but there is still much room for improvement when compared to human depth perception performance.
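As a rough illustration of how pairwise occlusion cues can be turned into a relative depth order (the thesis combines the cues with a segmentation system; the exact ordering procedure is not described in this abstract), the sketch below topologically sorts a hypothetical occlusion graph whose edges mean "the first region occludes, and therefore lies in front of, the second". The region names and cues are made up.

```python
from collections import defaultdict, deque

def relative_depth_order(occlusions):
    """Order regions from nearest to farthest given pairwise occlusion cues.

    `occlusions` is an iterable of (front, back) pairs meaning "front occludes
    back".  A topological sort of this graph yields one consistent relative
    depth ordering; cycles (contradictory cues) raise an error.
    """
    graph = defaultdict(set)
    indegree = defaultdict(int)
    regions = set()
    for front, back in occlusions:
        regions.update((front, back))
        if back not in graph[front]:
            graph[front].add(back)
            indegree[back] += 1
    queue = deque(r for r in regions if indegree[r] == 0)  # unoccluded regions first
    order = []
    while queue:
        r = queue.popleft()
        order.append(r)
        for nxt in graph[r]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(regions):
        raise ValueError("contradictory occlusion cues (cycle in the graph)")
    return order

# Hypothetical cues: the person occludes the car, the car occludes the building.
print(relative_depth_order([("person", "car"), ("car", "building"), ("person", "building")]))
# -> ['person', 'car', 'building']  (nearest to farthest)
```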
202

Characterization of nanomechanical resonators based on silicon nanowires

Sansa Perna, Marc 23 July 2013 (has links)
Nanomechanical mass sensors have attracted interest in recent years thanks to their unprecedented sensitivities, which arise from the small dimensions of the resonator that forms the sensing element. This thesis deals with the fabrication and characterization of nanomechanical resonators for mass sensing applications. This objective comprises three different aspects: 1) the development of a fabrication technology for nanomechanical resonators based on silicon nanowires (SiNW), 2) the characterization of their frequency response by electrical methods and 3) the evaluation of their performance as mass sensors. During this work, we have fabricated nanomechanical resonators based on SiNW clamped-clamped beams, using two different approaches: bottom-up growth of SiNWs and top-down definition by lithography methods. By exploiting the advantages of each technique, we have succeeded in fabricating nanowires of small lateral dimensions, on the order of 50 nanometers, and with a high number of devices per chip, achieving a high throughput given the dimensions of these structures. We have applied advanced electrical detection schemes based on frequency down-mixing techniques to characterize the frequency response of the devices. We have found that the frequency modulation (FM) detection method provides the best efficiency in transducing the mechanical oscillation into an electrical signal. This technique has enabled the detection of multiple resonance modes of the resonator at frequencies up to 590 MHz. The detection of higher resonance modes is important to address one of the issues in nanomechanical mass sensing: decoupling the effects of the position and the mass of the deposited species. Moreover, by combining the information obtained from the experimental characterization of the frequency response with FEM simulations, we have quantified the stress accumulated in the SiNWs during fabrication. We have studied the electromechanical transduction mechanisms in SiNW resonators by comparing the performance of three electrical detection methods: the aforementioned FM and two further techniques (the two-source, 1ω and the two-source, 2ω methods). We have shown that two different transduction mechanisms coexist in bottom-up grown SiNWs: linear (in which the transduced signal is proportional to the motion of the resonator) and quadratic (in which the transduced signal is proportional to the square of the motion of the resonator). In the top-down nanowires, on the other hand, only the linear transduction mechanism is present. It is this linear transduction that enables the outstanding performance of the FM detection method when characterizing the frequency response of SiNW resonators. For the use of nanomechanical resonators in mass sensing applications, real-time tracking of their resonance frequency is needed. We have designed and implemented a novel closed-loop configuration based on the FM detection technique and a slope detection algorithm. It allows the monitoring of changes in the magnitude and the frequency of the resonator response, enabling not only the real-time detection of mass but also the characterization of the temporal stability of the system.
In this way, its overall performance for mass sensing applications has been characterized. The mass sensitivity of the system for the smallest resonators is in the range of 6 Hz/zg (1 zg = 10⁻²¹ g), and the frequency stability measurements in the closed-loop configuration reveal a mass resolution of 6 zg at room temperature.
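For context, the textbook point-mass relations below show how a sensitivity quoted in Hz/zg and a closed-loop frequency stability combine into a mass resolution. They are a standard first-order model assumed here for illustration, not formulas quoted from the thesis.

```latex
% First-order point-mass model of a resonant mass sensor (assumed, not from the thesis):
\delta f \approx -\frac{f_0}{2\,m_\mathrm{eff}}\,\delta m
\quad\Longrightarrow\quad
\mathcal{R} \equiv \left|\frac{\partial f}{\partial m}\right| \approx \frac{f_0}{2\,m_\mathrm{eff}},
\qquad
\delta m_\mathrm{min} \approx \frac{\delta f_\mathrm{min}}{\mathcal{R}} .
% Under this relation, a responsivity of about 6 Hz/zg together with the reported
% 6 zg resolution would correspond to resolving frequency shifts of a few tens of Hz.
```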
203

Adaptive self-mixing interferometry for metrology applications

Atashkhooei, Reza 4 November 2013 (has links)
Among the laser-based techniques proposed for metrology applications, classical interferometers offer the highest-precision measurements. However, the cost of some of the elements involved and the number of optical components used in the setup complicate their use in many industrial applications. Apart from cost, the complexity of optical alignment and the required quality of the environmental conditions can be quite restrictive for those systems. Within the category of optical interferometers, optical feedback interferometry (OFI), also called self-mixing interferometry (SMI), has the potential to overcome some of the complexities of classical interferometry. It is compact in size, cost-effective, robust, self-aligned, and it does not require a large number of optical components in the experimental configuration. In OFI, a portion of the emitted laser beam re-enters the laser cavity after backreflection from the target, causing the wavelength of the laser to change and modifying the power spectrum and consequently the emitted output power, which can be detected for measurement purposes. Thus, the laser operates simultaneously as the light source, the light detector, and an ultra-sensitive coherent sensor for optical path changes. This PhD work pursued improving the performance of OFI-based sensors using a novel and compact optical system. A solution using an adaptive optical element, in the form of a voltage-programmable liquid lens, was proposed for automated focus adjustment. The amount of backreflected light re-entering the laser cavity could be controlled, and the laser feedback level was tuned to the most favourable condition in each situation, so that the power signal could be acquired under the best possible conditions for measurement. Feedback control enabled a novel solution called differential OFI, which improved the measurement resolution down to the nanometre order, for the first time in OFI sensors, even when the displacements were below half a wavelength of the laser. Another relevant part of the PhD was devoted to the analysis of speckle-affected optical power signals in feedback interferometers. The speckle effect appears when the displacements of the target are large, and it introduces an undesired modulation of the amplitude of the signal. After an analysis of the speckle-affected signal and the main factors contributing to it, two novel solutions were proposed for the control of speckle noise. The adaptive optical head developed previously was used in a real-time setup to control the presence of the speckle effect, by tracking the signal-to-noise ratio of the emitted power and modifying the spot size on the target when required using a feedback loop. In addition, a sensor-diversity solution was proposed to improve signal detection with fast targets, when real-time control could not be applied. Finally, two industrial applications of the technique, with different levels of speckle noise present, have been demonstrated. A complete measurement methodology for the control of motor shaft runout in permanent-magnet electrical motors, enabling full monitoring of the shaft displacement, was developed and implemented in practice. Results here are validated against those obtained with a commercial laser Doppler vibrometer, a significantly more expensive instrument. A second application, monitoring the displacement of polymer-reinforced beams used in civil engineering under dynamic loading, was also demonstrated.
Results here are validated using a conventional contact probe (a Linear Variable Differential Transformer, LVDT). Both applications show that, with controlled speckle features, OFI performs adequately in industrial environments as a non-contact proximity probe, with resolution limited by the constraints defined by the setup.
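For reference, the relations below are the widely used textbook self-mixing model; they are assumptions included for illustration and are not necessarily the exact formulation used in the thesis.

```latex
% Textbook self-mixing model (an illustrative assumption).  The round-trip phase
% to a target at distance L(t) is
\phi_0(t) = \frac{4\pi L(t)}{\lambda_0},
% and optical feedback perturbs the lasing phase \phi_F through the excess-phase equation
\phi_0 = \phi_F + C \sin\!\bigl(\phi_F + \arctan\alpha\bigr),
% where C is the feedback level and \alpha the linewidth-enhancement factor.
% The emitted power is then modulated as
P(t) = P_0\bigl[\,1 + m\cos\phi_F(t)\,\bigr],
% so each 2\pi change of \phi_F (a \lambda_0/2 displacement of the target)
% produces one interferometric fringe in the monitored power.
```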
204

Radio and computing resource management in SDR clouds

Gómez, Ismael 19 December 2013 (has links)
The aim of this thesis is to define and develop the concept of efficient management of radio and computing resources in an SDR cloud. The SDR cloud breaks with today's cellular architecture: a set of distributed antennas is connected by optical fibre to data processing centres. The radio and computing infrastructure can be shared between different operators (virtualization), reducing costs and risks while increasing capacity and creating new business models and opportunities. The data centre centralizes the management of all system resources: antennas, spectrum, computing, routing, etc. Especially relevant is computing resource management (CRM), whose objective is to dynamically provide sufficient computing resources for the real-time execution of signal-processing algorithms. Current CRM techniques are not designed for wireless applications. We demonstrate that this imposes a limit on the wireless traffic a CRM entity is capable of supporting. Based on this, a distributed management scheme is proposed, where multiple CRM entities each manage a cluster of processors whose optimal size is derived from the traffic density. Radio resource management (RRM) techniques also need to be adapted to the characteristics of the new SDR cloud architecture. We introduce a linear cost model to measure the cost associated with the infrastructure resources consumed under the pay-per-use model. Based on this model, we formulate the efficiency maximization power allocation (EMPA) problem. The operational costs per transmitted bit achieved by EMPA are 6 times lower than with traditional power allocation methods. Analytical solutions are obtained for the single-channel case, with and without channel state information at the transmitter. It is shown that the optimal transmission rate is an increasing function of the product of the channel gain and the operational costs divided by the power costs. The EMPA solution for multiple channels has the form of water-filling, present in many power allocation problems. In order to gain insight into how the optimal solution behaves as a function of the problem parameters, a novel technique based on ordered statistics has been developed. This technique allows general water-filling problems to be solved based on the channel statistics rather than their realization. This approach has allowed the design of a low-complexity EMPA algorithm (2 to 4 orders of magnitude faster than state-of-the-art algorithms). Using the ordered-statistics technique, we have shown that the behaviour of the optimal transmission rate with respect to the average channel gains and cost parameters is equivalent to that of the single-channel case, and that the efficiency increases with the number of available channels. The results can be applied to design more efficient SDR clouds. As an example, we have derived the optimal ratio of antennas per user that maximizes the efficiency. As users enter and leave the network, this ratio should be kept constant by enabling and disabling antennas dynamically. This approach exploits the dynamism and elasticity provided by the SDR cloud. In summary, this dissertation aims to promote a change in the communications-system management model (typically RRM), considering the introduction of a new infrastructure model (the SDR cloud), new business models (based on cloud computing) and a more integrated view of efficient resource management that is not focused solely on optimizing spectrum usage.
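The abstract notes that the multi-channel EMPA solution "has the form of water-filling". As a hedged illustration of that general form only, the sketch below implements the classical rate-maximizing water-filling allocation; it is not the thesis's cost-aware EMPA algorithm, and the channel gains and power budget are made up.

```python
import numpy as np

def water_filling(gains, total_power):
    """Classical water-filling power allocation over parallel channels.

    Allocates p_i = max(0, mu - 1/g_i) so that sum(p_i) == total_power.
    This illustrates the *form* of the solution only; the thesis's EMPA
    objective (efficiency per cost) is different.
    """
    gains = np.asarray(gains, dtype=float)
    inv = 1.0 / gains
    order = np.argsort(inv)
    # Try water levels with a decreasing number of active channels.
    for k in range(len(gains), 0, -1):
        active = order[:k]
        mu = (total_power + inv[active].sum()) / k   # candidate water level
        if mu > inv[active].max():                    # all k channels stay above water
            return np.maximum(0.0, mu - inv)
    return np.zeros_like(gains)

# Hypothetical channel gains and power budget.
p = water_filling([2.0, 1.0, 0.2], total_power=1.0)
print(p, p.sum())  # stronger channels receive more power; the total equals the budget
```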
205

Acoustic event detection and localization using distributed microphone arrays

Chakraborty, Rupayan 18 December 2013 (has links)
Automatic acoustic scene analysis is a complex task that involves several functionalities: detection (time), localization (space), separation, recognition, etc. This thesis focuses on both acoustic event detection (AED) and acoustic source localization (ASL), when several sources may be simultaneously present in a room. In particular, the experimental work is carried out in a meeting-room scenario. Unlike previous works, which either employed models of all possible sound combinations or additionally used video signals, in this thesis the problem of temporally overlapping sounds is tackled by exploiting the signal diversity that results from the use of multiple microphone-array beamformers. The core of this thesis work is a rather computationally efficient approach that consists of three processing stages. In the first, a set of (null-)steering beamformers is used to carry out diverse partial signal separations, using multiple arbitrarily located linear microphone arrays, each composed of a small number of microphones. In the second stage, each beamformer output goes through a classification step, which uses models for all the targeted sound classes (HMM-GMM, in the experiments). Then, in a third stage, the classifier scores, whether intra- or inter-array, are combined using a probabilistic criterion (such as MAP) or a machine-learning fusion technique (the fuzzy integral (FI), in the experiments). This processing scheme is applied in the thesis to a set of problems of increasing complexity, defined by the assumptions made regarding the identities (plus time endpoints) and/or positions of the sounds. In fact, the thesis report starts with the problem of unambiguously mapping the identities to the positions, continues with AED (positions assumed) and ASL (identities assumed), and ends with the integration of AED and ASL in a single system, which does not need any assumption about identities or positions. The evaluation experiments are carried out in a meeting-room scenario where two sources are temporally overlapped; one of them is always speech and the other is an acoustic event from a pre-defined set. Two different databases are used: one produced by merging signals actually recorded in the UPC's department smart-room, and another consisting of overlapping sound signals directly recorded in the same room in a rather spontaneous way. From the experimental results with a single array, it can be observed that the proposed detection system performs better than either the model-based system or a blind-source-separation-based system. Moreover, the product-rule-based combination and the FI-based fusion of the scores resulting from the multiple arrays improve the accuracies further. On the other hand, the posterior position assignment is performed with a very small error rate. Regarding ASL, and assuming an accurate AED system output, the single-source localization performance of the proposed system is slightly better than that of the widely used SRP-PHAT system working in an event-based mode, and it performs significantly better than the latter in the more complex two-source scenario. Finally, though the joint system suffers a slight degradation in classification accuracy with respect to the case where the source positions are known, it shows the advantage of carrying out the two tasks, recognition and localization, with a single system, and it allows the inclusion of information about the prior probabilities of the source positions.
It is also worth noting that, although the acoustic scenario used for experimentation is rather limited, the approach and its formalism were developed for a general case, where the number and identities of the sources are not constrained.
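As a rough sketch of the score-combination stage described above, the snippet below applies the product rule (with an optional prior, i.e. a MAP-style decision) to per-array classifier scores; the class names, score values and uniform prior are hypothetical, and the fuzzy-integral alternative is not shown.

```python
import numpy as np

def product_rule_fusion(score_matrix, priors=None):
    """Combine per-array class likelihoods with the product rule.

    `score_matrix` has shape (n_arrays, n_classes): each row holds one
    beamformer/classifier's likelihood (or posterior) for every class.
    Returns the fused posterior and the winning class index (MAP decision).
    """
    scores = np.asarray(score_matrix, dtype=float)
    priors = np.ones(scores.shape[1]) if priors is None else np.asarray(priors, float)
    # Work in the log domain for numerical stability, then renormalize.
    log_fused = np.log(scores + 1e-12).sum(axis=0) + np.log(priors + 1e-12)
    fused = np.exp(log_fused - log_fused.max())
    fused /= fused.sum()
    return fused, int(fused.argmax())

# Hypothetical scores from 3 arrays for the classes (speech, door_slam, keyboard).
scores = [[0.6, 0.3, 0.1],
          [0.5, 0.2, 0.3],
          [0.7, 0.2, 0.1]]
posterior, decision = product_rule_fusion(scores)
print(posterior, decision)  # the product rule favours the consistently supported class
```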
206

Radio Frequency Identification (RFID) Tags and Reader Antennas Based on Conjugate Matching and Metamaterial Concepts

Zamora González, Gerard 02 October 2013 (has links)
Radio frequency identification (RFID) is a fast-developing technology that provides wireless identification and tracking capability by using simple devices that tag objects or people on one end of the link, called tags, and more complex devices on the other end, called readers. RFID is an emerging technology and one of the most rapidly growing segments of today's automatic identification and data capture (AIDC) industry, used at present for hundreds, if not thousands, of applications. RFID is revolutionizing supply chain management, replacing bar codes as the main object-tracking system, and it is rapidly becoming a cost-effective technology. However, the design of tags able to cover all the regulated UHF-RFID bands while providing appropriate read performance is an important challenge, and there is a lack of systematization in the design methodology of UHF-RFID tags. Another problem that prevents a faster expansion of UHF-RFID technology is found in retail item management: the difficulty of simultaneously supporting payment control of items in stores and inventory of the items present in the store. Cost reduction is a particular concern in the implementation of microwave RFID systems, since they typically use active tags whose power consumption should be minimized. The main objective of this thesis is to provide solutions to the aforementioned problems, thereby contributing to the progress and improvement of RFID technology.
This is achieved by proposing new strategies and a simple methodology for the design of UHF-RFID tags based on conjugate matching, and RFID reader antennas based on metamaterial concepts.
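For context, the relations below state the standard conjugate-matching condition and power-transmission coefficient used in UHF-RFID tag design; they are textbook formulas assumed here for illustration, not expressions quoted from the thesis.

```latex
% Conjugate matching between the tag antenna (Z_a = R_a + jX_a) and the chip
% (Z_c = R_c + jX_c) maximizes the power delivered to the chip:
Z_c = Z_a^{*} \quad\Longleftrightarrow\quad R_c = R_a,\; X_c = -X_a .
% The fraction of the available power actually delivered to the chip is the
% power-transmission coefficient
\tau = \frac{4 R_c R_a}{\left|Z_c + Z_a\right|^{2}}, \qquad 0 \le \tau \le 1,
% which directly scales the tag's read range (\tau = 1 at perfect conjugate match).
```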
207

Distributed consensus algorithms for wireless sensor networks: convergence analysis and optimization

Silva Pereira, Silvana 26 January 2012 (has links)
Wireless sensor networks are developed to monitor areas of interest with the purpose of estimating physical parameters and/or detecting emergency events in a variety of military and civil applications. A wireless sensor network can be seen as a distributed computer, where spatially deployed sensor nodes are in charge of gathering measurements from the environment to compute a given function. The research areas for wireless sensor networks extend from the design of small, reliable hardware to low-complexity algorithms and energy-saving communication protocols. Distributed consensus algorithms are low-complexity iterative schemes in which neighboring nodes communicate locally to compute the average of an initial set of measurements; they have received increasing attention in different fields due to their wide range of applications. Energy is a scarce resource in wireless sensor networks and therefore the convergence of consensus algorithms, characterized by the total number of iterations needed to reach a steady-state value, is an important topic of study. This PhD thesis addresses the problem of convergence and optimization of distributed consensus algorithms for the estimation of parameters in wireless sensor networks. The impact of quantization noise on convergence is studied in networks with fixed topologies and symmetric communication links. In particular, a new scheme including quantization is proposed, whose mean square error with respect to the average consensus converges. The limit of the mean square error admits a closed-form expression, and an upper bound for this limit depending on general network parameters is also derived. The convergence of consensus algorithms in networks with random topology is studied focusing particularly on convergence in expectation, mean square convergence and almost sure convergence. Closed-form expressions useful for minimizing the convergence time of the algorithm are derived from the analysis. Regarding random networks with asymmetric links, closed-form expressions are provided for the mean square error of the state assuming equally probable uniform link weights, and mean square convergence to the statistical mean of the initial measurements is shown. Moreover, an upper bound for the mean square error is derived for the case of different connection probabilities for the links, and a practical scheme with randomized transmission power is proposed that improves performance in terms of energy consumption with respect to a fixed network with the same average consumption. The mean square error expressions derived provide a means to characterize the deviation of the state vector with respect to the initial average when the instantaneous links are asymmetric. A useful criterion to minimize the convergence time in random networks with spatially correlated links is considered, establishing a sufficient condition for almost sure convergence to the consensus space. This criterion, valid also for topologies with spatially independent links, is based on the spectral radius of a positive semidefinite matrix for which closed-form expressions are derived assuming uniform link weights. The minimization of this spectral radius is a convex optimization problem and therefore the optimum link weights minimizing the convergence time can be computed efficiently. The expressions derived are general and apply not only to random networks with instantaneous directed topologies but also to random networks with instantaneous undirected topologies.
Furthermore, the general expressions can be particularized to obtain known protocols found in the literature, showing that they can be seen as particular cases of the expressions derived in this thesis.
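As a minimal, hedged sketch of the kind of iteration the abstract analyzes, the code below runs the average-consensus recursion x(k+1) = W x(k) on a small fixed undirected graph using Metropolis link weights; the graph, the weight rule and the stopping tolerance are illustrative choices, not the optimized weights derived in the thesis.

```python
import numpy as np

def metropolis_weights(adj):
    """Build a doubly stochastic weight matrix W from an undirected adjacency matrix.

    Metropolis rule: w_ij = 1 / (1 + max(d_i, d_j)) for each edge, with the
    self-weight absorbing the remainder of each row.  This is one common choice
    that makes x(k+1) = W x(k) converge to the average on a connected graph.
    """
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    n = adj.shape[0]
    W = np.zeros_like(adj)
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def average_consensus(x0, W, tol=1e-9, max_iter=10_000):
    """Iterate x(k+1) = W x(k) until the nodes (numerically) agree."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_next = W @ x
        if np.max(np.abs(x_next - x)) < tol:
            return x_next, k + 1
        x = x_next
    return x, max_iter

# Hypothetical 4-node ring network and initial measurements.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
x0 = [3.0, 7.0, 1.0, 5.0]
x_final, iters = average_consensus(x0, metropolis_weights(adj))
print(x_final, iters)  # every entry converges to the initial average, 4.0
```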
208

Acoustic event detection and classification

Temko, Andriy 23 January 2007 (has links)
The human activity that takes place in meeting rooms or classrooms is reflected in a rich variety of acoustic events, either produced by the human body or by objects handled by humans, so determining both the identity of sounds and their position in time may help to detect and describe that human activity. Additionally, the detection of sounds other than speech may be useful to enhance the robustness of speech technologies like automatic speech recognition. Automatic detection and classification of acoustic events is the objective of this thesis work. It aims at processing the acoustic signals collected by distant microphones in meeting-room or classroom environments to convert them into symbolic descriptions corresponding to a listener's perception of the different sound events present in the signals and of their sources. First of all, the task of acoustic event classification is faced using Support Vector Machine (SVM) classifiers, a choice motivated by the scarcity of training data. A confusion-matrix-based variable-feature-set clustering scheme is developed for the multiclass recognition problem and tested on the gathered database. With it, a higher classification rate than with the GMM-based technique is obtained, achieving a large relative reduction of the average error with respect to the best result from the conventional binary-tree scheme. Moreover, several ways of extending SVMs to sequence processing are compared, in an attempt to avoid the drawback of SVMs when dealing with audio data, i.e. their restriction to fixed-length vectors; it is observed that dynamic-time-warping kernels work well for sounds that show a temporal structure. Furthermore, concepts and tools from fuzzy theory are used to investigate, first, the importance of and degree of interaction among features, and second, ways to fuse the outputs of several classification systems.
The developed AEC systems are also tested by participating in several international evaluations from 2004 to 2006, and the results are reported. The second main contribution of this thesis work is the development of systems for the detection of acoustic events. The detection problem is more complex, since it includes both classification and determination of the time intervals where the sound takes place. Two system versions are developed and tested on the datasets of the two CLEAR international evaluation campaigns, in 2006 and 2007. Two kinds of databases are used: two databases of isolated acoustic events, and a database of interactive seminars containing a significant number of acoustic events of interest. Our developed systems, which consist of SVM-based classification within a sliding window plus post-processing, were the only submissions not using HMMs, and each of them obtained competitive results in the corresponding evaluation. Speech activity detection was also pursued in this thesis since, in fact, it is an especially important particular case of acoustic event detection. An enhanced SVM training approach for the speech activity detection task is developed, mainly to cope with the need to reduce the very large training dataset. The resulting SVM-based system is tested on several NIST Rich Transcription (RT) evaluation datasets, and it shows better scores than our GMM-based system, which ranked among the best systems in the RT06 evaluation. Finally, it is worth mentioning a few side outcomes of this thesis work. As it has been carried out in the framework of the CHIL EU project, the author has been responsible for the organization of the above-mentioned international evaluations in acoustic event classification and detection, taking a leading role in the specification of acoustic event classes, databases, and evaluation protocols, and, especially, in the proposal and implementation of the various metrics that have been used. Moreover, the detection systems have been implemented in the UPC's smart-room, where they work in real time for purposes of testing and demonstration.
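As a minimal, hedged sketch of "SVM-based classification within a sliding window", the snippet below trains a generic SVM and classifies each window of an incoming signal; scikit-learn is assumed, and the toy features, window length, labels and random data are placeholders rather than the thesis's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC

def frame_features(signal, sr, win=0.5, hop=0.25):
    """Cut the signal into overlapping windows and compute toy features.

    Placeholder features (log-energy and zero-crossing rate per window); the
    thesis systems use richer spectral features.
    """
    n, h = int(win * sr), int(hop * sr)
    feats, times = [], []
    for start in range(0, len(signal) - n + 1, h):
        frame = signal[start:start + n]
        log_energy = np.log(np.sum(frame ** 2) + 1e-12)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        feats.append([log_energy, zcr])
        times.append(start / sr)
    return np.array(feats), np.array(times)

# Hypothetical training data: feature vectors with event labels (e.g. 0=speech, 1=keyboard).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# Detection = classify each sliding window of an (illustrative) incoming signal.
sr = 16000
signal = rng.normal(0, 0.1, 5 * sr)
X_test, times = frame_features(signal, sr)
labels = clf.predict(X_test)              # per-window decisions
print(list(zip(times[:4], labels[:4])))   # post-processing would merge adjacent windows
```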
209

Avalanche Ruggedness of Local Charge Balance Power Super Junction Transistors

Villamor Baliarda, Ana 10 July 2013 (has links)
The main objective of the thesis is to increase the reliability of high-voltage (600 V) power MOSFETs based on the Super Junction concept when they are subjected to the most extreme conditions in DC/DC converters and Power Factor Correction circuits, where the intrinsic body diode has to handle a large amount of energy in a very short period of time. The research has been carried out in the framework of a collaboration between the Institut de Microelectrònica de Barcelona (IMB-CNM-CSIC) and ON Semiconductor (Oudenaarde, Belgium). The process technology of the new Super Junction power MOSFET transistors designed at ON Semiconductor (named UltiMOS) has been optimized with the aim of enhancing their robustness, which has to be totally independent of the charge balance (CB) in the device. The transistors are intended for 400 V line applications that require a voltage capability above 600 V and a minimal on-state resistance to operate at high frequency.
The thesis starts with an introduction to the state of the art of Super Junction transistors, including a description of the different process technologies used in the commercial counterparts. Afterwards, the most relevant electrical and technological parameters are introduced and linked to the electrical characterization of the UltiMOS transistor. The research is centered on the study of the physics involved in the failure mechanisms, combining TCAD simulations and experimental measurements, from which it is concluded that a technological solution is needed to increase the energy capability of UltiMOS transistors while keeping a wide CB manufacturability window. Different devices derived from the UltiMOS structure (conventional trench-gate UMOS transistors, SJ diodes and SJ bipolar transistors) were fabricated in ON Semiconductor's clean room and tested under the same avalanche conditions as the UltiMOS transistors. All the results derived from complementary techniques (Unclamped Inductive Switching, Emission Microscopy, Thermal Infrared Thermography, Transmission Line Pulse, Transient Interferometric Mapping, etc.) lead to the same conclusion: the current focalizes in certain regions of the UltiMOS transistor, favouring the activation of the parasitic bipolar transistor inherent to the structure. Two approaches are proposed to increase the energy capability of UltiMOS transistors and, once their effectiveness had been demonstrated, they were incorporated into the process technology of the device designed to go into production.
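For context on the avalanche (Unclamped Inductive Switching) tests mentioned above, the expression below is the textbook single-pulse UIS avalanche energy; it is a standard relation assumed here for illustration, not a formula taken from the thesis.

```latex
% Single-pulse Unclamped Inductive Switching (UIS): at turn-off the device must absorb
% the energy stored in the load inductor L plus the contribution delivered by the supply
% V_{DD} while the inductor discharges through the device clamped at its breakdown
% voltage V_{(BR)}:
E_{AV} \;=\; \frac{1}{2}\,L\,I_{AV}^{2}\,\frac{V_{(BR)}}{V_{(BR)}-V_{DD}} ,
% where I_{AV} is the peak (avalanche) current.  Failure occurs when this energy,
% concentrated by current focalization in a small area, triggers the parasitic bipolar
% transistor or exceeds the thermal limit of the device.
```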
210

Nonlinear pulse compression

Grün, Alexander 07 November 2014 (has links)
In this thesis I investigate two methods for generating ultrashort laser pulses in spectral regions that are ordinarily difficult to reach with existing techniques. Such pulses are especially attractive for the study of ultrafast (few-femtosecond) atomic and molecular dynamics. The first method involves Optical Parametric Amplification (OPA) mediated by four-wave mixing in a gas and supports the generation of ultrashort pulses in the near-infrared (NIR) to mid-infrared (MIR) spectral region. By combining pulses at a centre wavelength of 800 nm and their second harmonic in an argon-filled hollow-core fibre, we demonstrate near-infrared pulses, peaked at 1.4 µm, with 5 µJ energy and 45 fs duration at the fibre output. The four-wave-mixing process involved in the OPA is expected to yield carrier-envelope-phase-stable pulses, which is of great importance for applications in extreme nonlinear optics. These NIR to MIR pulses can be used directly for nonlinear light-matter interactions that exploit their long-wavelength characteristics. The second method allows the compression of intense femtosecond pulses in the ultraviolet (UV) region by sum-frequency mixing two bandwidth-limited NIR pulses in a noncollinear phase-matching geometry under particular conditions of group-velocity mismatch. Specifically, the crystal has to be chosen such that the group velocities of the NIR pump pulses, v1 and v2, and of the sum-frequency generated pulse, vSF, meet the condition v1 < vSF < v2. In the case of strong energy exchange and an appropriate pre-delay between the pump waves, the leading edge of the faster pump pulse and the trailing edge of the slower one are depleted. In this way the temporal overlap region of the pump pulses remains narrow, resulting in a shortened upconverted pulse. The noncollinear beam geometry allows the relative group velocities to be controlled while maintaining the phase-matching condition. To ensure parallel wavefronts inside the crystal and that the sum-frequency generated pulses emerge untilted, pre-compensation of the NIR pulse-front tilts is essential. I show that these pulse-front tilts can be achieved using a very compact setup based on transmission gratings and a more complex setup based on prisms combined with telescopes. UV pulses as short as 32 fs (25 fs) have been generated by noncollinear nonlinear pulse compression in a type-II phase-matching BBO crystal, starting from NIR pulses of 74 fs (46 fs) duration. This is of interest because there is no crystal that can be used for nonlinear pulse compression at wavelengths near 800 nm in a collinear geometry. Compared to state-of-the-art compression techniques based on self-phase modulation, pulse compression by sum-frequency generation is free of aperture limitations and thus scalable in energy. Such femtosecond pulses in the visible and in the ultraviolet are strongly desired for studying the ultrafast dynamics of a variety of (bio)molecular systems.
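A compact restatement of the conditions described above, written as the standard sum-frequency-generation relations (the notation is mine, not quoted from the thesis):

```latex
% Sum-frequency generation of the two NIR pump pulses (energy and momentum conservation):
\omega_{SF} = \omega_1 + \omega_2, \qquad \mathbf{k}_{SF} = \mathbf{k}_1 + \mathbf{k}_2 ,
% with the crystal and noncollinear angle chosen so that the group velocities satisfy
v_1 \;<\; v_{SF} \;<\; v_2 .
% The faster pump then continually slides forward and the slower one backward with
% respect to the upconverted pulse; with strong depletion, only a narrow temporal
% overlap region keeps generating signal, which shortens the SF pulse.
```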
