
Variability-aware architectures based on hardware redundancy for nanoscale reliable computation

Aymerich Capdevila, Nivard 16 December 2013 (has links)
During the last decades, human beings have experienced a significant enhancement in quality of life, thanks in large part to the fast evolution of Integrated Circuits (ICs). This unprecedented technological race, along with its significant economic impact, has been grounded on the production of complex processing systems from highly reliable component devices. However, the fundamental assumption of nearly ideal devices, which held throughout past CMOS technology generations, now seems to be coming to an end. As MOSFET technology scales into the nanoscale regime, it approaches fundamental physical limits and starts to experience higher levels of variability, performance degradation, and higher rates of manufacturing defects. At the same time, ICs with an increasing number of transistors require a decrease in the failure rate per device in order to maintain overall chip reliability. As a result, the development of circuit architectures capable of providing reliable computation while tolerating high levels of variability and defect rates is becoming increasingly important. The main objective of this thesis is to analyze and propose new fault-tolerant architectures based on redundancy for future technologies. Our research is founded on the principles of redundancy established by von Neumann in the 1950s and extends them in three new dimensions. 1. Heterogeneity: most work on redundancy-based fault-tolerant architectures assumes homogeneous variability across the replicas, as in von Neumann's original work. Instead, we explore the possibilities of redundancy when heterogeneity between replicas is taken into account, and propose compensating mechanisms that select the weighting of the redundant information so as to maximize overall reliability. 2. Asynchrony: each replica of a redundant system may have a different processing delay due to variability and degradation, especially in future nanotechnologies. If the system is designed to work locally in asynchronous mode, different voting policies can be applied to the redundant information: depending on how many replicas are collected before a decision is taken, different trade-offs between processing delay and reliability are obtained. We propose a mechanism providing these facilities and analyze and simulate its operation. 3. Hierarchy: finally, we explore the possibilities of redundancy applied at the different hierarchy layers of complex processing systems. We propose to distribute redundancy across the various hierarchy layers and analyze the benefits that can be obtained. Drawing on the scenario of future IC technologies, we push the concept of redundancy to its fullest expression through the study of realistic nano-device architectures. Most of the redundant architectures considered so far do not properly face the era of terascale computing and current nanotechnology trends. Since von Neumann first applied redundancy to electronic circuits, effects as common in nanoelectronics as degradation and interconnection failures had never been treated directly from the standpoint of redundancy. In this thesis we address in a comprehensive manner the reliability of digital processing systems in the upcoming technology generations.
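As a rough illustration of the heterogeneity idea, the following Python sketch weights each replica's vote by an estimate of its own error rate, so that more reliable replicas dominate the decision. The log-likelihood weighting and the error-rate figures are illustrative assumptions, not the specific compensating mechanism proposed in the thesis.

```python
import numpy as np

def weighted_vote(bits, error_rates):
    """Reliability-weighted majority vote over N redundant replicas.

    bits: 0/1 outputs of the replicas; error_rates: estimated p_i per replica.
    Uses the log-likelihood-ratio weights that are optimal for independent
    binary channels with known per-replica error rates.
    """
    bits = np.asarray(bits, dtype=float)
    p = np.asarray(error_rates, dtype=float)
    w = np.log((1.0 - p) / p)             # more reliable replicas weigh more
    score = np.dot(w, 2.0 * bits - 1.0)   # map {0,1} -> {-1,+1} and accumulate
    return int(score > 0)

# Three heterogeneous replicas: the third is much noisier than the others,
# so the two reliable replicas outvote it.
print(weighted_vote([1, 1, 0], [0.05, 0.10, 0.40]))  # expected output: 1
```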

Compressive sensing based candidate detector and its applications to spectrum sensing and through-the-wall radar imaging

Lagunas Targarona, Eva 07 March 2014 (has links)
Signal acquisition is a main topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of conventional analog-to-digital converters, stating that, for perfect recovery, any signal has to be sampled at a constant rate of at least twice the highest frequency present in the signal. However, the Shannon-Nyquist theorem provides a worst-case rate bound for any bandlimited data. In this context, Compressive Sensing (CS) is a new framework in which data acquisition and data processing are merged. CS allows data to be compressed while it is sampled, by exploiting the sparsity present in many common signals; in so doing, it provides an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years, with thousands of technical publications and applications being developed in areas such as channel coding, medical imaging, computational biology and many more. Unlike the majority of the CS literature, this thesis surveys CS theory applied to signal detection, estimation and classification, which does not necessarily require perfect signal reconstruction or approximation. In particular, a novel CS-based detection technique that exploits prior information about some features of the signal is presented. The basic idea is to scan the domain where the signal is expected to lie with a candidate signal estimated from the known features. The proposed detector is called a candidate-based detector because its main goal is to react only when the candidate signal is present. The CS-based candidate detector is applied to two topical detection problems. First, CS theory is used to deal with the sampling bottleneck in wideband spectrum sensing for open spectrum scenarios. The radio spectrum is a natural resource that is becoming scarce due to the current spectrum assignment policy and the increasing number of licensed wireless systems. To deal with the crowded spectrum problem, a new spectrum management philosophy is required, and the revolutionary Cognitive Radio (CR) emerges as a solution. CR benefits from the poor usage of the spectrum by allowing temporarily unused licensed spectrum to be used by secondary users who hold no spectrum licenses. The procedure that identifies available spectrum is commonly known as spectrum sensing. However, one of the most important problems spectrum sensing techniques must face is the scanning of wide frequency bands, which implies high sampling rates. The proposed CS-based candidate detector exploits prior knowledge of the primary users, not only to relax the sampling bottleneck, but also to estimate the candidate signals' frequency, power and angle of arrival without reconstructing the whole spectrum. The second application is Through-the-Wall Radar Imaging (TWRI). Sensing through obstacles such as walls, doors, and other visually opaque materials using microwave signals is emerging as a powerful tool supporting a range of civilian and military applications. High-resolution imaging is achieved if large-bandwidth signals and long antenna arrays are used, but this implies acquiring and processing large volumes of data. Decreasing the number of acquired samples is also helpful in TWRI from a logistic point of view, as some of the data measurements in space and frequency can be difficult, or impossible, to attain.
In this thesis, we address the problem of imaging building interior structures using a reduced number of measurements. The proposed technique for determining the building layout is based on prior knowledge about common construction practices: interior walls, and the dihedrals formed at the intersections of two interior walls, serve as the candidate signals for the proposed detector, and interior scenes typically contain few such structures, which makes CS applicable. Real data-collection experiments in a laboratory environment, using the Radar Imaging Lab facility at the Center for Advanced Communications, Villanova University, USA, are conducted to validate the proposed approach.
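As a toy illustration of the candidate-based detection idea, the sketch below correlates compressive measurements directly with a compressed candidate template, deciding presence without reconstructing the signal. The signal sizes, sensing matrix, candidate model and threshold are all illustrative assumptions rather than the thesis's actual detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Work directly on compressive measurements y = A @ x and correlate them
# with the compressed candidate template A @ c, skipping reconstruction.
n, m = 256, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix

c = np.zeros(n); c[40] = 1.0                   # candidate built from priors
t = A @ c                                      # compressed template

def detect(y, thresh=0.5):
    """Normalized correlation test against the compressed candidate."""
    return abs(t @ y) / (t @ t) > thresh       # threshold is illustrative

x_present = c + 0.05 * rng.standard_normal(n)  # candidate buried in noise
x_absent = 0.05 * rng.standard_normal(n)       # noise only

print(detect(A @ x_present))   # expected: True  (candidate detected)
print(detect(A @ x_absent))    # expected: False (detector stays quiet)
```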

Privacy protection of user profiles in personalized information systems

Parra Arnau, Javier 02 December 2013 (has links)
In recent times we are witnessing the emergence of a wide variety of information systems that tailor their information-exchange functionality to the specific interests of their users. Most of these personalized information systems capitalize on, or lend themselves to, the construction of profiles, either directly declared by a user or inferred from past activity. The ability of these systems to profile users is what enables such intelligent functionality, but at the same time it is the source of serious privacy concerns. Although a broad range of privacy-enhancing technologies aims to mitigate many of these concerns, their use is far from widespread. The main reason is a certain ambiguity about these technologies and their effectiveness in terms of privacy protection. Moreover, since these technologies normally come at the expense of system functionality and utility, it is challenging to assess whether the gain in privacy compensates for the cost in utility. Assessing the privacy provided by a privacy-enhancing technology is thus crucial to determine its overall benefit, to compare its effectiveness with other technologies, and ultimately to optimize the privacy-utility trade-off it poses. Considerable effort has consequently been devoted to investigating both privacy and utility metrics. However, most of these metrics are specific to concrete systems and adversary models, and hence are difficult to generalize or translate to other contexts. Moreover, in applications involving user profiles there are few proposals for the evaluation of privacy, and those that exist do not appropriately justify the choice of metric. The first part of this thesis approaches the fundamental problem of quantifying user privacy. First, we present a theoretical framework for privacy-preserving systems, endowed with a unifying view of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. Our theoretical analysis shows that numerous privacy metrics emerging from a broad spectrum of applications are bijectively related to this estimation error, which permits interpreting and comparing these metrics under a common perspective. Second, we tackle the issue of measuring privacy in personalized information systems. Specifically, we propose two information-theoretic quantities as measures of the privacy of user profiles, and justify these metrics by building on Jaynes' rationale behind entropy-maximization methods and on fundamental results from the method of types and hypothesis testing. Equipped with quantifiable measures of privacy and utility, the second part of this thesis investigates privacy-enhancing, data-perturbative mechanisms and architectures for two important classes of personalized information systems. In particular, we study the elimination of tags in semantic-Web applications, and the combination of the forgery and the suppression of ratings in personalized recommendation systems. We design these mechanisms to achieve the optimal privacy-utility trade-off, in the sense of maximizing privacy for a desired utility, or vice versa, proceeding systematically by drawing upon the methodology of multiobjective optimization. Our theoretical analysis finds a closed-form solution to the problem of optimal tag suppression, and to the problem of optimal forgery and suppression of ratings. In addition, we provide an extensive theoretical characterization of the trade-off between the contrasting aspects of privacy and utility. Experimental results in real-world applications show the effectiveness of our mechanisms in terms of privacy protection, system functionality and data utility.
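As a small, hedged illustration of information-theoretic profile metrics of the kind the thesis proposes, the sketch below scores a user profile (a histogram over interest categories) by its Shannon entropy and by its Kullback-Leibler divergence from the population's average profile. The specific use of these two standard quantities here is illustrative, not the thesis's exact formulation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """KL divergence D(p || q) in bits; q must be positive where p is."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

user = [0.7, 0.1, 0.1, 0.1]              # profile concentrated on one category
population = [0.25, 0.25, 0.25, 0.25]    # average (uniform) profile

print(entropy(user))                     # low entropy -> more revealing profile
print(kl_divergence(user, population))   # distance from the anonymous crowd
```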

Experimental validation of optimal real-time energy management system for microgrids

Marzband, Mousa 20 January 2014 (has links)
Nowadays, power production, reliability, quality, efficiency, and the penetration of renewable energy sources are among the most important topics in power system analysis, and optimal power management and economic dispatch must be achieved at the same time. Interest in extracting optimum performance, minimizing the market clearing price (MCP) for consumers, and providing better utilization of renewable energy sources has been increasing in recent years. Because energy balance must be maintained despite fluctuations in load demand and the non-dispatchable nature of renewable sources, implementing an energy management system (EMS) is of great importance in Microgrids (MGs). The appearance of new technologies such as energy storage (ES) has intensified the effort to develop new and modified optimization methods for power management. Renewable power generation can be predicted precisely only over short horizons. Hence, to increase the efficiency of optimization algorithms on large-dimension problems, new methods should be proposed, especially for short-term scheduling. Powerful optimization methods need to be applied in such a way as to achieve maximum efficiency, enhance economic dispatch, and provide the best performance for these systems. Thus, real-time energy management within the MG is an important factor for operators to guarantee optimal and safe operation of the system. The proposed EMS should be able to schedule MG generation with the minimum information shared by the generation units. To achieve this, the present thesis proposes an operational architecture for real-time operation (RTO) of an MG operating in both islanded and grid-connected modes. The presented architecture is flexible and can be used for different MG configurations in different scenarios. A general formulation is also presented to estimate the optimum operation strategy, the cost-optimization plan, and the reduction of consumed electricity when demand response (DR) is applied. The problem is formulated as an optimization problem with nonlinear constraints that minimizes the cost of generation sources and responsive loads as well as reducing the MCP. Several optimization methods, including mixed linear programming, pivot source, imperialist competition, artificial bee colony, particle swarm, ant colony, and gravitational search algorithms, are utilized to achieve the specified objectives. The main goal of the thesis is to validate experimentally the design of a real-time energy management system for MGs in both operating modes, suitable for different sizes and types of generation resources and storage devices with a plug-and-play structure. As a result, the system is capable of adapting itself to changes in the generation and storage assets in real time, and of quickly delivering optimal operation commands to the assets, using a local energy market (LEM) structure based on single-side or double-side auctions. The study aims to determine the optimum operation of micro-sources and to decrease the electricity production cost through hourly day-ahead and real-time scheduling. Experimental results show the effectiveness of the proposed methods for optimal operation with minimum cost and plug-and-play capability in an MG. Moreover, these algorithms are computationally feasible and offer many advantages, such as reducing peak consumption, optimal operation and scheduling of the generation units, and minimizing the electricity generation cost. Furthermore, capabilities such as system development, reliability, and flexibility are also considered in the proposed algorithms. The plug-and-play capability in real-time applications is investigated using different scenarios.
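For intuition, a minimal dispatch step can be posed as a linear program, as in the sketch below. The marginal costs, capacities and demand are invented numbers, and the thesis's actual formulation is nonlinear and additionally covers storage, demand response and market clearing.

```python
from scipy.optimize import linprog

# Minimize generation cost for one interval subject to the energy balance.
costs = [0.04, 0.09, 0.12]           # $/kWh: PV-backed storage, micro-CHP, grid
demand = 90.0                        # kW to be served this interval

res = linprog(
    c=costs,
    A_eq=[[1, 1, 1]], b_eq=[demand],      # supply must equal demand
    bounds=[(0, 40), (0, 35), (0, 100)],  # per-unit capacity limits (kW)
)
print(res.x, res.fun)                # cheapest units fill up first: 40, 35, 15 kW
```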

Use of locator/identifier separation to improve the future internet routing system

Jakab, Loránd 04 July 2011 (has links)
The Internet evolved from its early days as a small research network to become a critical infrastructure that many organizations and individuals rely on. One dimension of this evolution is the continuous growth in the number of participants in the network, far beyond what the initial designers had in mind. While it does work today, it is widely believed that the current design of the global routing system cannot scale to accommodate future challenges. In 2006 an Internet Architecture Board (IAB) workshop was held to develop a shared understanding of the Internet routing system scalability issues faced by the large backbone operators. The participants documented in RFC 4984 their belief that "routing scalability is the most important problem facing the Internet today and must be solved." A potential solution to the routing scalability problem is to end the semantic overloading of Internet addresses by separating node location from identity. Several proposals exist to apply this idea to current Internet addressing, among which the Locator/Identifier Separation Protocol (LISP) is the only one already shipped in production routers. Separating locators from identifiers results in another level of indirection, and introduces a new problem: how to determine location when the identity is known. The first part of our work analyzes existing proposals for systems that map identifiers to locators and proposes an alternative system within the LISP ecosystem. We created a large-scale Internet topology simulator and used it to compare the performance of three mapping systems: LISP-DHT, LISP+ALT and the proposed LISP-TREE; we analyzed and contrasted their architectural properties as well. The monitoring projects that supplied Internet routing table growth data over a large timespan inspired us to create LISPmon, a monitoring platform aimed at collecting, storing and presenting data gathered from the LISP pilot network, early in the deployment of the LISP protocol. The project web site and collected data are publicly available and will assist researchers in studying the evolution of the LISP mapping system. We also document how the newly introduced LISP network elements fit into the current Internet, the advantages and disadvantages of different deployment options, and how the proposed transition scenarios could affect the evolution of the global routing system. This work is currently available as an active Internet Engineering Task Force (IETF) Internet Draft. The second part looks at the problem of efficient one-to-many communications, assuming a routing system that implements the above-mentioned locator/identifier split paradigm. We propose a network-layer protocol for efficient live streaming that is incrementally deployable, with changes required only in the same border routers that should be upgraded to support locator/identifier separation. Our proof-of-concept Linux kernel implementation shows the feasibility of the protocol, and our comparison with popular peer-to-peer live streaming systems indicates important savings in inter-domain traffic. We believe LISP has considerable potential for adoption, and an important aspect of this work is how it might contribute towards a better mapping system design, by showing the weaknesses of the current favorites and proposing alternatives. The presented results are an important step towards addressing the routing scalability problem described in RFC 4984 and improving the delivery of live streaming video over the Internet.
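The extra level of indirection can be illustrated with a toy mapping lookup, sketched below: a stable identifier (EID) prefix is resolved to its current locator set (RLOCs) before packets can be encapsulated and forwarded. The map contents, addresses and function names are illustrative, not actual LISP protocol machinery.

```python
# Toy identifier-to-locator map, standing in for a mapping system such as
# LISP-DHT, LISP+ALT or LISP-TREE. Entries are illustrative documentation
# addresses, not real deployments.
mapping_system = {
    "198.51.100.0/24": ["203.0.113.1", "203.0.113.9"],  # EID prefix -> RLOCs
}

def lookup(eid_prefix):
    """Resolve an EID prefix to its locator set, as a map-resolver would."""
    rlocs = mapping_system.get(eid_prefix)
    if rlocs is None:
        # In a real system a miss triggers a map-request to the mapping system.
        raise KeyError(f"no mapping for {eid_prefix}")
    return rlocs

print(lookup("198.51.100.0/24"))   # pick one RLOC, e.g. for load balancing
```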

Resource Management in Multicarrier Based Cognitive Radio Systems

Shaat, Musbah M. R. 09 March 2012 (has links)
The ever-increasing growth of wireless applications and services affirms the importance of effective usage of the limited radio spectrum. Existing spectrum management policies have led to significant spectrum under-utilization: recent measurements showed that large ranges of the spectrum are sparsely used both temporally and spatially. This conflict between the inefficient usage of the spectrum and the continuous evolution of wireless communication calls for the development of more flexible management policies. Cognitive radio (CR) with dynamic spectrum access (DSA) is considered a key technology for resolving this conflict by allowing a group of secondary users (SUs) to share the radio spectrum originally allocated to primary users (PUs). The operation of CR should not negatively alter the performance of the PUs. Therefore, interference control, along with the highly dynamic nature of PU activity, opens up new resource allocation problems in CR systems. Resource allocation algorithms should ensure an effective sharing of the temporarily available frequency bands and deliver solutions in a timely fashion to cope with quick changes in the network. This dissertation considers the resource management problem in multicarrier-based CR systems and focuses on three main issues: 1) the design of efficient resource allocation algorithms that allocate subcarriers and powers among SUs such that no harmful interference is introduced to PUs; 2) the comparison of the spectral efficiency of different multicarrier schemes in the CR physical layer, specifically orthogonal frequency division multiplexing (OFDM) and filter bank multicarrier (FBMC) schemes; and 3) the investigation of the impact of the different constraint values on the overall performance of the CR system. Three scenarios are considered, namely downlink transmission, uplink transmission, and relayed transmission. For every scenario, the optimal solution is examined and efficient suboptimal algorithms are proposed to reduce the computational burden of obtaining the optimal solution. The suboptimal algorithms are developed by separating the subcarrier and power allocation into two steps in the downlink and uplink scenarios. In the relayed scenario, a dual decomposition technique is used to obtain an asymptotically optimal solution, and a joint heuristic algorithm is proposed to find the suboptimal solution. Numerical simulations show that the proposed suboptimal algorithms achieve near-optimal performance and perform better than existing algorithms designed for cognitive and non-cognitive systems. Finally, the ability of FBMC to overcome the drawbacks of OFDM and achieve higher spectral efficiency, while causing less interference to the primary systems, is verified, which recommends the consideration of FBMC in future CR systems.
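As a sketch of the power-allocation step in the two-step suboptimal approach, the following capped water-filling routine distributes a power budget over already-assigned subcarriers; the cap loosely stands in for a per-subcarrier interference limit toward the PUs, and all figures are illustrative assumptions.

```python
import numpy as np

def capped_waterfilling(gains, p_total, p_cap, iters=60):
    """Distribute p_total over subcarriers by water-filling, capped at p_cap."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / g.min()     # bracket for the water level
    for _ in range(iters):                    # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.clip(mu - 1.0 / g, 0.0, p_cap)
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return p

# Channel gains on the subcarriers assigned to one SU in step one;
# good subcarriers get more power, but no subcarrier exceeds the cap.
print(capped_waterfilling([2.0, 0.8, 1.5, 0.3], p_total=4.0, p_cap=2.0))
```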

Optimization of emerging extended FTTH WDM/TDM PONs and financial overall assessment

Chatzi, Sotiria 26 November 2013 (has links)
Optical access technology has experienced a boost in recent years, thanks to the multimedia services that continue to migrate onto the Internet. Although the technologies used for deploying Fiber-To-The-x (FTTx) and Fiber-to-the-Home (FTTH) are mostly based either on active solutions or, in the case of Passive Optical Networks (PONs), on Time Division Multiplexing (TDM), an evolution towards hybrid solutions such as Wavelength Division Multiplexing/Time Division Multiplexing (WDM/TDM) can be foreseen. What needs to be researched and finally established are the exact designs for this important step of integration, optimized in terms of transmission performance and cost, so as to address all the requirements of next-generation passive optical networks. As the most critical elements of the optical access network, the design and its cost are the main topics of this discussion. The covered topics span a wide range, including the cost estimation and comparison of several optical network technologies and architectures, as well as design optimization. In this last category, in-line remote amplification, the use of an alternative and an extended frequency band, and dispersion compensation and equalization techniques have been examined, as well as combinations of the aforementioned means of network optimization. Beyond the proof of principle of the proposed techniques, their benefits are highlighted in different case studies, and the most representative designs are discussed further.
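The reach/split trade-off that motivates extended, amplified designs can be sketched with a simple power-budget calculation, as below. The attenuation, splitter-loss and sensitivity figures are typical textbook values assumed for illustration, not the thesis's case-study parameters.

```python
import math

def link_loss_db(km, split_ratio, connectors=4):
    """Rough downstream loss: fiber + 1:N splitter + connector margin (dB)."""
    fiber = 0.25 * km                               # dB/km, SSMF near 1550 nm
    splitter = 10 * math.log10(split_ratio) + 1.0   # ideal 1:N loss + excess
    return fiber + splitter + 0.5 * connectors      # 0.5 dB per connector

launch, sensitivity = 3.0, -28.0    # dBm: transmitter power, receiver limit
budget = launch - sensitivity       # 31 dB available

for km in (20, 60, 100):
    loss = link_loss_db(km, split_ratio=64)
    print(km, round(loss, 1), "needs amplification" if loss > budget else "ok")
```

With a 1:64 split, the budget covers a conventional 20 km reach but not the extended 60-100 km spans, which is where in-line remote amplification enters the design space.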

Resource management research in ethernet passive optical networks

Garfias Hernández, Paola 25 November 2013 (has links)
Over the last decades, we have witnessed several phenomena in the telecommunications sector. One of them is the widespread use of the Internet, which has brought a sharp increase in traffic, forcing suppliers to continuously expand the capacity of their networks. In the near future, the Internet will be composed of long-range high-speed optical networks, a number of wireless networks at the edge, and, in between, several access technologies. Today, one of the main problems of the Internet is the bottleneck in the access segment. To address this issue, Passive Optical Networks (PONs) are very likely to succeed, owing to their simplicity, low cost, and increased bandwidth. A PON is made up of fiber-optic cabling and passive splitters and couplers that distribute an optical signal to connectors terminating each fiber segment. Among the different PON technologies, the Ethernet PON (EPON) is a strong alternative for satisfying operator and user needs, due to its cost, flexibility and interoperability with other technologies. One of the most interesting challenges in such technologies is the scheduling and allocation of resources on the upstream (shared) channel, i.e., resource management. The aim of this thesis is to study and evaluate current contributions and to propose new, efficient solutions to the resource management issues, mainly in EPON. Key issues in this context are future end-user needs, quality-of-service (QoS) support, energy saving, and optimized service provisioning for real-time and elastic flows. This thesis also identifies research opportunities, issues recommendations, and proposes novel mechanisms for access networks based on optical fiber technologies.
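As a flavor of upstream resource management in EPON, the sketch below sizes grants in the style of the well-known IPACT limited-service discipline: each ONU is granted what it reported, capped per polling cycle, so heavy users cannot starve the shared channel. The cap value and the simplified interface are illustrative assumptions, not one of the thesis's proposed mechanisms.

```python
W_MAX = 15_000            # bytes: per-ONU grant cap per polling cycle

def limited_service(requests):
    """requests: REPORTed queue sizes per ONU -> granted bytes per ONU."""
    return [min(r, W_MAX) for r in requests]

print(limited_service([4_000, 40_000, 0, 12_000]))
# -> [4000, 15000, 0, 12000]; leftover demand waits for the next cycle
```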

Contribution to spectrum management in cognitive radio networks: a cognitive management framework

Bouali, Faouzi 06 September 2013 (has links)
To overcome the current under-utilization of spectrum resources, the Cognitive Radio (CR) paradigm has gained increasing interest as a means to perform so-called Dynamic Spectrum Access (DSA). In this respect, Cognitive Radio Networks (CRNs) have been strengthened with cognitive management support to push forward their deployment and commercialization. This dissertation assesses the relevance of exploiting several cognitive management functionalities in various scenarios and case studies. Specifically, it constructs a generic cognitive management framework, based on the fittingness factor concept, to support spectrum management in CRNs. Under this framework, the dissertation addresses two of the most promising CR applications, namely Opportunistic Spectrum Access (OSA) to licensed bands and open sharing of license-exempt bands. For the former application, several strategies that exploit the temporal statistical dependence between primary activity/inactivity durations to perform proactive spectrum selection are discussed, and a set of guidelines is provided for selecting the most relevant strategy for a given environment. For the latter, a fittingness-factor-based spectrum selection strategy is proposed to efficiently exploit the different bands; several formulations of the fittingness factor are compared, and their relevance is assessed under different settings. Drawing inspiration from these applications, a more general proactive strategy exploiting a characterization of spectrum resources in both the time and frequency domains is developed to jointly assist the spectrum selection (SS) and spectrum mobility (SM) functionalities. Several variants of the proposed strategy, each combining different implementation choices and options, are compared to identify which of its components have the most significant impact on performance depending on the working conditions of the CRN. To assess the rationality of the proposed strategy with respect to other strategies, a cost-benefit analysis is conducted, weighing the gain in user satisfaction against the incurred signaling cost. Finally, the dissertation analyzes practicality aspects in terms of robustness to environment uncertainty and applicability to realistic environments. Robustness is assessed in front of two sources of uncertainty, namely imperfection of the acquisition process and non-stationarity of the environment, and additional functionalities are developed, where needed, to improve robustness. Applicability is addressed by applying the proposed framework to a Digital Home (DH) environment to validate the key findings under realistic conditions.
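As a toy rendering of fittingness-factor-based selection, the sketch below scores each candidate band by how closely its expected capacity fits the session's requirement, discounted by the band's availability statistics. The scoring function is an illustrative stand-in, not one of the thesis's actual fittingness-factor formulations.

```python
def fittingness(capacity, required, availability):
    """Penalize both under- and over-provisioning, weighted by availability."""
    fit = min(capacity / required, required / capacity)
    return fit * availability

bands = {                     # band -> (expected capacity Mb/s, availability)
    "A": (2.0, 0.9),          # too slow, even though usually free
    "B": (12.0, 0.6),         # wasteful overkill, often busy
    "C": (5.0, 0.8),          # close fit, reasonably free
}
required = 4.0
best = max(bands, key=lambda b: fittingness(bands[b][0], required, bands[b][1]))
print(best)                   # -> "C": the band that best fits the session
```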

Channel selection and reverberation-robust automatic speech recognition

Wolf, Martin 11 November 2013 (has links)
If speech is acquired by a close-talking microphone in a controlled and noise-free environment, current state-of-the-art recognition systems often show an acceptable error rate. The use of close-talking microphones, however, may be too restrictive in many applications. Alternatively, distant-talking microphones, often placed several meters from the speaker, may be used. Such a setup is less intrusive, since the speaker does not have to wear any microphone, but Automatic Speech Recognition (ASR) performance is strongly affected by noise and reverberation. This thesis focuses on ASR applications in a room environment, where reverberation is the dominant source of distortion, and considers both single- and multi-microphone setups. If speech is recorded in parallel by several microphones arbitrarily located in the room, the degree of distortion may vary from one channel to another. The difference in signal quality between the recordings may be even more evident if the microphones have different characteristics: some hanging on the walls, others standing on the table, or others built into the personal communication devices of the people present in the room. In such a scenario, the ASR system may benefit strongly if the signal with the highest quality is used for recognition. To find that signal, a process commonly referred to as Channel Selection (CS), several techniques have been proposed, which are discussed in detail in this thesis. In essence, CS aims to rank the signals according to their quality from the ASR perspective. To create such a ranking, a measure is needed that either estimates the intrinsic quality of a given signal or how well it fits the acoustic models of the recognition system. This thesis provides an overview of the CS measures presented in the literature so far and compares them experimentally. Several new techniques are introduced that surpass the former ones in terms of recognition accuracy and/or computational efficiency. A combination of different CS measures is also proposed to further increase recognition accuracy, or to reduce the computational load without any significant performance loss. Besides, we show that CS may be used together with other robust ASR techniques, such as matched-condition training or mean and variance normalization, and that the recognition improvements are cumulative up to some extent. An online real-time version of the channel selection method based on the variance of the speech sub-band envelopes, developed in this thesis, was designed and implemented in a smart-room environment. When evaluated in experiments with real distant-talking microphone recordings and with moving speakers, a significant improvement in recognition performance was observed. Another contribution of this thesis, which does not require multiple microphones, was developed in cooperation with colleagues from the chair of Multimedia Communications and Signal Processing at the University of Erlangen-Nuremberg, Erlangen, Germany. It deals with the problem of feature extraction within REMOS (REverberation MOdeling for Speech recognition), a generic framework for robust distant-talking speech recognition. In this framework, the use of conventional methods to obtain decorrelated feature-vector coefficients, like the discrete cosine transform, is constrained by the inner optimization problem of REMOS, which may become unsolvable in a reasonable time. A new feature extraction method based on frequency filtering was proposed to avoid this problem.
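An illustrative rendering of the sub-band envelope-variance measure underlying the online channel selection method is sketched below: reverberation smears the amplitude envelopes, lowering their variance, so the channel whose sub-band envelopes vary the most is preferred. The band edges, filter order and scoring details are assumptions for illustration, not the thesis's exact implementation.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def envelope_variance_score(signal, fs, bands=((300, 800), (800, 2500))):
    """Mean normalized variance of the sub-band amplitude envelopes."""
    score = 0.0
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, signal)))   # sub-band envelope
        score += np.var(env / (env.mean() + 1e-12))   # scale-invariant variance
    return score / len(bands)

def select_channel(channels, fs):
    """Return the index of the channel with the highest envelope variance."""
    return max(range(len(channels)),
               key=lambda i: envelope_variance_score(channels[i], fs))
```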
