101

Fault diagnosis and fault tolerant control of multiphase voltage source converters for application in traction drives

Salehifar, Mehdi 15 July 2014 (has links)
There is an increasing demand for vehicles with lower environmental impact and higher fuel efficiency. To meet these requirements, transportation electrification has been pursued in both academia and industry in recent years. Electric vehicles (EVs) and hybrid electric vehicles (HEVs) are two practical examples in transportation systems. The typical power train in an EV consists of three main parts: the energy source, the power electronics and an electrical motor. Regarding the machine, permanent magnet (PM) motors are the dominant industrial choice for light-duty hybrid vehicles due to their higher efficiency and power density. To operate the power train, the electrical machine can be supplied and controlled by a voltage source inverter (VSI). The converter is subject to various fault types; according to statistics, 38% of faults in a motor drive are due to the power converter. On the other hand, the electrical power train must meet a high level of reliability. Multiphase PM machines can meet these reliability requirements thanks to their fault-tolerant characteristics: the machine can remain operational with faults in multiple phases. Consequently, realizing a multiphase fault-tolerant motor drive requires the development of three main concepts: fault detection (FD), fault isolation and fault-tolerant control. This PhD thesis therefore focuses on FD and fault-tolerant control of a multiphase VSI. To achieve this research goal, existing FD and control methods for power converters are first investigated thoroughly through a literature review. Following that, the operating conditions of the multiphase converter supplying the electrical machine are studied. Regarding FD methods for multiphase converters, three new algorithms are presented in this thesis, and each proposed FD method is embedded in a new fault-tolerant control algorithm. In the first step, a novel model-based FD method is proposed to detect multiple open-switch faults.
This FD method is incorporated into the adaptive proportional-resonant control algorithm developed for the power converter. In the second step, two signal-based FD methods are proposed, and fault-tolerant control of the power converter with a conventional PI controller is discussed; furthermore, the theory of sliding mode control (SMC) is developed. In the last step, finite control set (FCS) model predictive control (MPC) of the five-phase brushless direct current (BLDC) motor is discussed for the first time in this thesis, and a simple FD method is derived from the control signals. The inputs to all developed methods are the five phase currents of the motor. The theory of each method is explained and compared with available methods. To validate the theory developed in each part, the FD algorithm is embedded in the fault-tolerant control algorithm and experimental results are obtained on a five-phase BLDC motor drive. The motor used in the experiments has an in-wheel outer-rotor structure, making it suitable for electric vehicles. At the end of each part, the key points and conclusions are presented.
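The thesis's FD algorithms are model-based or derived from the converter control signals; as a rough, self-contained illustration of the signal-based flavor of open-switch fault detection from phase currents, the sketch below flags a phase whose current has lost one polarity. The normalized-mean criterion and the 0.45 threshold are illustrative assumptions, not the method developed in the thesis.

```python
import numpy as np

def detect_open_switch(phase_currents, threshold=0.45):
    """Flag phases whose normalized mean current deviates from zero.

    A healthy phase current is (near) zero-mean over a fundamental period;
    an open switch clips one polarity, shifting the normalized mean toward +/-1.
    """
    faults = {}
    for k, i_k in enumerate(phase_currents):
        i_k = np.asarray(i_k, dtype=float)
        scale = np.mean(np.abs(i_k))
        metric = 0.0 if scale < 1e-9 else np.mean(i_k) / scale
        faults[k] = bool(abs(metric) > threshold)
    return faults

# Healthy phase: symmetric sinusoid; faulty phase: positive half-waves only.
t = np.linspace(0, 1, 1000, endpoint=False)
healthy = np.sin(2 * np.pi * 5 * t)
faulty = np.clip(healthy, 0, None)   # lower switch open: negative lobe lost
print(detect_open_switch([healthy, faulty]))  # {0: False, 1: True}
```

In a real drive this statistic would be computed over sliding windows synchronized to the fundamental period, and combined with isolation logic before reconfiguring the control.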
102

Wideband cognitive radio: monitoring, detection and sparse noise subspace communication

Font Segura, Josep 24 July 2014 (has links)
We are surrounded by electronic devices that take advantage of wireless technologies, from computer mice, which require small amounts of information, to cellphones, which demand increasingly high data rates. Until today, the coexistence of such a variety of services has been guaranteed by a fixed assignment of spectrum resources by regulatory agencies. This has led to an impasse, as wireless spectrum has become an expensive and scarce resource. However, recent measurements in dense areas paint a very different picture: the spectrum is actually underutilized by legacy systems. Cognitive radio holds tremendous promise for increasing the spectral efficiency of future wireless systems. Ideally, new secondary users would have a complete picture of spectrum usage and would communicate opportunistically over the available resources without degrading the primary systems. In practice, however, monitoring the spectrum resources, detecting resources available for opportunistic communication, and transmitting over those resources are hard tasks. This thesis addresses the tasks of monitoring, detecting and transmitting in challenging scenarios involving wideband signals, nonuniform sampling, inaccurate side information, and frequency-selective fading channels. For the first task, monitoring the spectrum resources, this thesis derives periodogram and Capon spectral estimates under nonuniform sampling by exploiting correlation-matching fitting of linearly projected data. It is shown that nonuniform sampling incurs noise enhancement, which the proposed spectral estimates circumvent through a denoising process; the effect is further characterized theoretically for Bernoulli nonuniform sampling by establishing an equivalence between nonuniform sampling and signal-to-noise ratio (SNR).
For the second task, detecting the available resources, this thesis considers the problems of multi-frequency signal detection, asymptotic performance, and cyclostationary signal detection. In multi-frequency signal detection, a unified framework based on the generalized likelihood ratio test (GLRT) is derived by considering different degrees of side information and performing maximum likelihood (ML) and correlation-matching estimation of the unknown parameters in uniform and nonuniform sampling, respectively. The asymptotic performance of signal detection is considered from two perspectives: Stein's lemma, which reveals the influence of the main parameters on the error exponents of the error probabilities; and the asymptotic statistical characterization of the GLRT in Bernoulli nonuniform sampling, which allows the derivation of sampling walls under noise uncertainty, i.e., sampling densities below which the target detection probabilities cannot be guaranteed. Finally, this thesis exploits the cyclostationarity properties of primary signals by deriving the quadratic sphericity test (QST), the ratio between the squared mean and the arithmetic mean of the eigenvalues of the autocorrelation matrix of the observations, and the optimal GLRT in a parameterized model of the frequency-selective channel, which exploits the low-rank structure of small spectral covariance matrices. For the last task, transmitting over the available resources, a cyclostationary secondary waveform scheme is first proposed to mitigate the interference that an active cognitive radio may cause to an inactive cognitive radio performing spectrum sensing, by projecting the oversampled observations onto a reduced subspace. Second, this thesis derives and statistically characterizes the sphericity minimum description length (MDL) criterion for estimating the primary signal subspace.
Third, this thesis considers the minimum-norm waveform optimization problem with imperfect side information, whose benefits are those of linear predictors: flat frequency response and rotational invariance.
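The GLRT detectors in this thesis are derived for nonuniform sampling with varying degrees of side information; as a self-contained baseline for the detection task they generalize, the sketch below implements the classical energy detector with a threshold calibrated empirically to a target false-alarm probability. All parameter values (window length, SNR, trial counts) are illustrative, and this is not the thesis's detector.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_detector(x, noise_var, threshold):
    """Declare the band occupied if the normalized energy exceeds the threshold."""
    return np.sum(x**2) / noise_var > threshold

N = 128              # samples per sensing window
pfa_target = 0.05    # target false-alarm probability

# Calibrate the threshold empirically under H0 (noise only, unit variance).
null_stats = np.array([np.sum(rng.normal(0, 1, N)**2) for _ in range(20000)])
threshold = np.quantile(null_stats, 1 - pfa_target)

# Evaluate detection probability under H1 (signal + noise) at 0 dB SNR.
trials = 2000
detections = 0
for _ in range(trials):
    signal = rng.standard_normal(N)        # unit-power "primary" signal
    x = signal + rng.normal(0, 1, N)
    detections += energy_detector(x, noise_var=1.0, threshold=threshold)
print(f"Pd at 0 dB SNR with N={N}: {detections / trials:.2f}")
```

The noise-uncertainty phenomenon studied in the thesis appears when `noise_var` is only known to within an interval; below a certain SNR (or, under Bernoulli sampling, below a sampling-density "wall") no threshold meets the target probabilities.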
103

Variability-aware architectures based on hardware redundancy for nanoscale reliable computation

Aymerich Capdevila, Nivard 16 December 2013 (has links)
During the last decades, human beings have experienced a significant enhancement in quality of life, thanks in large part to the fast evolution of integrated circuits (ICs). This unprecedented technological race, along with its significant economic impact, has been grounded in the production of complex processing systems from highly reliable constituent devices. However, the fundamental assumption of nearly ideal devices, which held throughout past CMOS technology generations, now seems to be coming to an end. In fact, as MOSFET technology scales into the nanoscale regime, it approaches fundamental physical limits and starts experiencing higher levels of variability, performance degradation, and higher rates of manufacturing defects. On the other hand, ICs with increasing numbers of transistors require a decreasing failure rate per device in order to maintain overall chip reliability. As a result, developing circuit architectures capable of providing reliable computation while tolerating high levels of variability and defect rates is becoming increasingly important. The main objective of this thesis is to analyze and propose new fault-tolerant architectures based on redundancy for future technologies. Our research is founded on the principles of redundancy established by von Neumann in the 1950s and extends them along three new dimensions. 1. Heterogeneity: most work on redundancy-based fault-tolerant architectures assumes homogeneous variability among the replicas, as in von Neumann's original work. Instead, we explore the possibilities of redundancy when heterogeneity between replicas is taken into account; in this sense, we propose compensating mechanisms that select the weighting of the redundant information so as to maximize overall reliability. 2. Asynchrony: each replica of a redundant system may have a different associated processing delay due to variability and degradation, especially in future nanotechnologies.
If the system is designed to work locally in asynchronous mode, different voting policies can be considered for handling the redundant information: depending on how many replica outputs are collected before a decision is taken, different trade-offs between processing delay and reliability are obtained. We propose a mechanism that provides these facilities and analyze and simulate its operation. 3. Hierarchy: finally, we explore the possibilities of redundancy applied at different hierarchy layers of complex processing systems. We propose distributing redundancy across the various hierarchy layers and analyze the benefits that can be obtained. Drawing on the scenario of future IC technologies, we push the concept of redundancy to its fullest expression through the study of realistic nano-device architectures. Most of the redundant architectures considered so far do not properly face the era of terascale computing and current nanotechnology trends. Since von Neumann first applied redundancy to electronic circuits, effects as common in nanoelectronics as degradation and interconnection failures had never been treated directly from the standpoint of redundancy. This thesis addresses in a comprehensive manner the reliability of digital processing systems in the upcoming technology generations.
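The homogeneous starting point that dimension 1 generalizes can be made concrete: for n independent replicas, each correct with probability p, the reliability of a majority voter is the binomial tail that von Neumann analyzed. A minimal sketch:

```python
from math import comb

def majority_reliability(p, n):
    """Probability that a majority vote over n i.i.d. replicas is correct,
    when each replica is independently correct with probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

# Von Neumann-style redundancy only helps when p > 0.5.
print(round(majority_reliability(0.9, 3), 4))   # TMR: 0.972
print(round(majority_reliability(0.9, 5), 4))   # 0.9914
print(round(majority_reliability(0.4, 3), 4))   # below 0.5, voting hurts: 0.352
```

For heterogeneous replicas with differing reliabilities p_i, a known result is that weighting each replica's vote by its log-odds log(p_i / (1 - p_i)) maximizes the probability of a correct decision, which is the spirit of the compensating mechanisms proposed in the thesis.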
104

Compressive sensing based candidate detector and its applications to spectrum sensing and through-the-wall radar imaging

Lagunas Targarona, Eva 07 March 2014 (has links)
Signal acquisition is a central topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of conventional analog-to-digital converters, stating that a signal must be sampled at a constant rate of at least twice the highest frequency present in the signal in order to be perfectly recovered. However, the Shannon-Nyquist theorem provides a worst-case rate bound for bandlimited data. In this context, compressive sensing (CS) is a new framework in which data acquisition and data processing are merged. CS allows data to be compressed as it is sampled by exploiting the sparsity present in many common signals, providing an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years, with thousands of technical publications and applications being developed in areas such as channel coding, medical imaging and computational biology, among many others. Unlike the majority of the CS literature, this PhD thesis studies CS theory applied to signal detection, estimation and classification, which does not necessarily require perfect signal reconstruction or approximation. In particular, a novel CS-based detection technique that exploits prior information about some features of the signal is presented. The basic idea is to scan the domain where the signal is expected to lie with a candidate signal estimated from the known features. The proposed detector is called a candidate-based detector because its main goal is to react only when the candidate signal is present. The CS-based candidate detector is applied to two topical detection problems. First, CS theory is used to deal with the sampling bottleneck in wideband spectrum sensing for open-spectrum scenarios. The radio spectrum is a natural resource that is becoming scarce due to the current spectrum assignment policy and the increasing number of licensed wireless systems.
To deal with the crowded-spectrum problem, a new spectrum management philosophy is required. In this context, cognitive radio (CR) emerges as a solution. CR takes advantage of the poor usage of the spectrum by allowing secondary users, who hold no spectrum licenses, to use temporarily unused licensed spectrum. The procedure for identifying available spectrum is commonly known as spectrum sensing. However, one of the most important problems that spectrum sensing techniques must face is the scanning of wide frequency bands, which implies high sampling rates. The proposed CS-based candidate detector exploits prior knowledge about the primary users, not only to relax the sampling bottleneck but also to estimate the frequency, power and angle of arrival of the candidate signals without reconstructing the whole spectrum. The second application is through-the-wall radar imaging (TWRI). Sensing through obstacles such as walls, doors, and other visually opaque materials using microwave signals is emerging as a powerful tool supporting a range of civilian and military applications. High-resolution imaging is achieved when large-bandwidth signals and long antenna arrays are used; however, this implies the acquisition and processing of large data volumes. Decreasing the number of acquired samples is also helpful in TWRI from a logistic point of view, as some of the measurements in space and frequency can be difficult, or even impossible, to obtain. This thesis addresses the problem of imaging interior building structures using a reduced number of measurements. The proposed technique for determining the building layout is based on prior knowledge about common construction practices: interior walls, and the dihedrals formed where interior walls intersect, serve as the candidate signals for the proposed detector. Experiments on real data collected in a laboratory environment, using the Radar Imaging Lab facility at the Center for Advanced Communications, Villanova University, USA, validate the proposed approach.
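The thesis applies CS to detection without requiring full reconstruction; for context on the recovery machinery it builds upon, the sketch below shows the standard Orthogonal Matching Pursuit (OMP) greedy algorithm recovering a sparse vector from far fewer measurements than its ambient dimension. The dimensions and the Gaussian sensing matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedy recovery of a sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with residual
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # re-orthogonalize
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

n, m, k = 256, 64, 4                           # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, sparsity=k)
print(np.linalg.norm(x_hat - x_true))          # near-zero recovery error
```

A candidate-based detector in the spirit of the thesis would instead correlate the compressed measurements directly against candidate signatures (e.g. a primary-user waveform, or a wall/dihedral return in TWRI), reacting only when a candidate is present.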
105

Privacy protection of user profiles in personalized information systems

Parra Arnau, Javier 02 December 2013 (has links)
In recent times we are witnessing the emergence of a wide variety of information systems that tailor the information-exchange functionality to meet the specific interests of their users. Most of these personalized information systems capitalize on, or lend themselves to, the construction of profiles, either directly declared by a user, or inferred from past activity. The ability of these systems to profile users is therefore what enables such intelligent functionality, but at the same time, it is the source of serious privacy concerns. Although there exists a broad range of privacy-enhancing technologies aimed at mitigating many of those concerns, the fact is that their use is far from widespread. The main reason is that there is a certain ambiguity about these technologies and their effectiveness in terms of privacy protection. Besides, since these technologies normally come at the expense of system functionality and utility, it is challenging to assess whether the gain in privacy compensates for the cost in utility. Assessing the privacy provided by a privacy-enhancing technology is thus crucial to determine its overall benefit, to compare its effectiveness with other technologies, and ultimately to optimize it in terms of the privacy-utility trade-off it poses. Considerable effort has consequently been devoted to investigating both privacy and utility metrics. However, most of these metrics are specific to concrete systems and adversary models, and hence are difficult to generalize or translate to other contexts. Moreover, in applications involving user profiles, there are few proposals for the evaluation of privacy, and those that exist either are not appropriately justified or fail to justify the choice of metric. The first part of this thesis approaches the fundamental problem of quantifying user privacy.
Firstly, we present a theoretical framework for privacy-preserving systems, endowed with a unifying view of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. Our theoretical analysis shows that numerous privacy metrics emerging from a broad spectrum of applications are bijectively related to this estimation error, which permits interpreting and comparing these metrics under a common perspective. Secondly, we tackle the issue of measuring privacy in the enthralling application of personalized information systems. Specifically, we propose two information-theoretic quantities as measures of the privacy of user profiles, and justify these metrics by building on Jaynes' rationale behind entropy-maximization methods and fundamental results from the method of types and hypothesis testing. Equipped with quantifiable measures of privacy and utility, the second part of this thesis investigates privacy-enhancing, data-perturbative mechanisms and architectures for two important classes of personalized information systems. In particular, we study the elimination of tags in semantic-Web applications, and the combination of the forgery and the suppression of ratings in personalized recommendation systems. We design such mechanisms to achieve the optimal privacy-utility trade-off, in the sense of maximizing privacy for a desired utility, or vice versa. We proceed in a systematic fashion by drawing upon the methodology of multiobjective optimization. Our theoretical analysis finds a closed-form solution to the problem of optimal tag suppression, and to the problem of optimal forgery and suppression of ratings. In addition, we provide an extensive theoretical characterization of the trade-off between the contrasting aspects of privacy and utility. 
Experimental results in real-world applications show the effectiveness of our mechanisms in terms of privacy protection, system functionality and data utility.
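As a rough illustration of an information-theoretic privacy measure for user profiles (the quantities actually proposed in the thesis differ in their justification and detail), the divergence of a user's category profile from the population's average profile can serve as a privacy-risk score: perturbing the profile toward the population average lowers it. The profiles and the suppression factor below are invented.

```python
import numpy as np

def kl_divergence(p, q):
    """D(p||q) in bits; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

population = np.array([0.40, 0.30, 0.20, 0.10])  # average interest profile
user = np.array([0.10, 0.10, 0.20, 0.60])        # this user's profile

# Partially suppressing the most revealing category and renormalizing
# pulls the apparent profile toward the population average.
perturbed = user.copy()
perturbed[3] *= 0.5
perturbed /= perturbed.sum()

d_user = kl_divergence(user, population)
d_pert = kl_divergence(perturbed, population)
print(round(d_user, 3), round(d_pert, 3))
```

A profile identical to the population average scores zero, matching the intuition that a perfectly "average" user discloses nothing distinctive.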
106

Experimental validation of optimal real-time energy management system for microgrids

Marzband, Mousa 20 January 2014 (has links)
Nowadays, power production, reliability, quality, efficiency and the penetration of renewable energy sources are among the most important topics in power systems analysis, and the needs for optimal power management and economic dispatch arise at the same time. Interest in extracting optimum performance, minimizing the market clearing price (MCP) for consumers and making better use of renewable energy sources has been increasing in recent years. Because energy balance must be maintained despite fluctuations in the load demand and the non-dispatchable nature of renewable sources, implementing an energy management system (EMS) is of great importance in microgrids (MGs). The appearance of new technologies such as energy storage (ES) has intensified the effort to develop new and modified optimization methods for power management. Precise prediction of renewable power generation is only possible over short horizons; hence, to increase the efficiency of the optimization algorithms on large-dimension problems, new methods are needed, especially for short-term scheduling. Powerful optimization methods must be applied to achieve maximum efficiency, enhance the economic dispatch and provide the best performance for these systems. Thus, real-time energy management within the MG is an important factor for operators to guarantee optimal and safe operation of the system. The proposed EMS should be able to schedule the MG generation with minimal information shared by the generation units. To achieve this, the present thesis proposes an operational architecture for real-time operation (RTO) of a MG operating in both islanded and grid-connected modes. The presented architecture is flexible and can be used for different MG configurations in different scenarios.
A general formulation is also presented to determine the optimum operation strategy, the cost-optimization plan and the reduction of consumed electricity when demand response (DR) is applied. The problem is formulated as an optimization problem with nonlinear constraints that minimizes the cost of the generation sources and the responsive load while reducing the MCP. Several optimization methods, including mixed-integer linear programming, pivot source, imperialist competition, artificial bee colony, particle swarm, ant colony and gravitational search algorithms, are used to achieve these objectives. The main goal of the thesis is to validate experimentally the design of a real-time energy management system for MGs in both operating modes, suitable for different sizes and types of generation resources and storage devices with a plug-and-play structure. As a result, the system is capable of adapting itself to changes in the generation and storage assets in real time, and of quickly delivering optimal operation commands to the assets, using a local energy market (LEM) structure based on single-sided or double-sided auctions. The study aims to determine the optimum operation of the micro-sources and to decrease the electricity production cost through hourly day-ahead and real-time scheduling. Experimental results show the effectiveness of the proposed methods for optimal operation with minimum cost and plug-and-play capability in a MG. Moreover, these algorithms are computationally feasible and offer many advantages, such as reducing the peak consumption, optimal operation and scheduling of the generation units, and minimizing the electricity generation cost. Furthermore, capabilities such as system development, reliability and flexibility are also considered in the proposed algorithms. The plug-and-play capability in real-time applications is investigated using different scenarios.
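A minimal sketch of the economic-dispatch idea, not the thesis's constrained optimization formulation: a greedy merit-order dispatch that serves the demand from the cheapest sources first. The unit names, marginal costs and capacities below are invented for illustration.

```python
def economic_dispatch(demand_kw, units):
    """Greedy merit-order dispatch: serve demand from cheapest units first.
    units: list of (name, marginal_cost_per_kwh, capacity_kw)."""
    schedule, cost, remaining = {}, 0.0, demand_kw
    for name, mc, cap in sorted(units, key=lambda u: u[1]):
        p = min(cap, remaining)
        if p > 0:
            schedule[name] = p
            cost += p * mc
            remaining -= p
    if remaining > 1e-9:
        raise ValueError("demand exceeds available generation capacity")
    return schedule, cost

# Invented units: free PV, cheap storage, expensive diesel backup.
units = [("battery", 0.08, 20), ("pv", 0.0, 30), ("diesel", 0.25, 50)]
sched, cost = economic_dispatch(60, units)
print(sched, cost)
```

A real EMS adds constraints this greedy pass ignores (ramp rates, storage state of charge, DR, MCP minimization), which is why the thesis resorts to the heavier optimization machinery listed above.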
107

Use of locator/identifier separation to improve the future internet routing system

Jakab, Loránd 04 July 2011 (has links)
The Internet evolved from its early days of being a small research network to become a critical infrastructure many organizations and individuals rely on. One dimension of this evolution is the continuous growth of the number of participants in the network, far beyond what the initial designers had in mind. While it does work today, it is widely believed that the current design of the global routing system cannot scale to accommodate future challenges. In 2006 an Internet Architecture Board (IAB) workshop was held to develop a shared understanding of the Internet routing system scalability issues faced by the large backbone operators. The participants documented in RFC 4984 their belief that "routing scalability is the most important problem facing the Internet today and must be solved." A potential solution to the routing scalability problem is ending the semantic overloading of Internet addresses, by separating node location from identity. Several proposals exist to apply this idea to current Internet addressing, among which the Locator/Identifier Separation Protocol (LISP) is the only one already being shipped in production routers. Separating locators from identifiers results in another level of indirection, and introduces a new problem: how to determine location, when the identity is known. The first part of our work analyzes existing proposals for systems that map identifiers to locators and proposes an alternative system, within the LISP ecosystem. We created a large-scale Internet topology simulator and used it to compare the performance of three mapping systems: LISP-DHT, LISP+ALT and the proposed LISP-TREE. We analyzed and contrasted their architectural properties as well. The monitoring projects that supplied Internet routing table growth data over a large timespan inspired us to create LISPmon, a monitoring platform aimed at collecting, storing and presenting data gathered from the LISP pilot network, early in the deployment of the LISP protocol. 
The project web site and the collected data are publicly available and will assist researchers in studying the evolution of the LISP mapping system. We also document how the newly introduced LISP network elements fit into the current Internet, the advantages and disadvantages of different deployment options, and how the proposed transition scenarios could affect the evolution of the global routing system. This work is currently available as an active Internet Engineering Task Force (IETF) Internet Draft. The second part looks at the problem of efficient one-to-many communication, assuming a routing system that implements the locator/identifier split paradigm described above. We propose a network-layer protocol for efficient live streaming that is incrementally deployable, with changes required only in the same border routers that would be upgraded to support locator/identifier separation. Our proof-of-concept Linux kernel implementation shows the feasibility of the protocol, and our comparison with popular peer-to-peer live streaming systems indicates important savings in inter-domain traffic. We believe LISP has considerable potential for adoption, and an important aspect of this work is how it might contribute towards a better mapping-system design by showing the weaknesses of the current favorites and proposing alternatives. The presented results are an important step towards addressing the routing scalability problem described in RFC 4984 and improving the delivery of live streaming video over the Internet.
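A toy sketch of the core operation every such mapping system must support, resolving an endpoint identifier (EID) to routing locators (RLOCs) by longest-prefix match, assuming an already-populated local map cache; the prefixes and RLOC addresses below are illustrative only.

```python
import ipaddress

# Hypothetical EID-to-RLOC map cache: a /24 nested inside a /16.
MAP_TABLE = {
    ipaddress.ip_network("10.1.0.0/16"): ["192.0.2.1"],
    ipaddress.ip_network("10.1.2.0/24"): ["198.51.100.7", "198.51.100.8"],
}

def lookup_rlocs(eid):
    """Return the RLOCs of the most specific (longest) matching EID prefix."""
    addr = ipaddress.ip_address(eid)
    matches = [net for net in MAP_TABLE if addr in net]
    if not matches:
        return None  # a real tunnel router would issue a Map-Request here
    best = max(matches, key=lambda net: net.prefixlen)
    return MAP_TABLE[best]

print(lookup_rlocs("10.1.2.9"))  # matches the more specific /24
print(lookup_rlocs("10.1.9.9"))  # falls back to the covering /16
```

The mapping systems compared in the thesis (LISP-DHT, LISP+ALT, LISP-TREE) differ precisely in how this lookup is distributed across the Internet rather than served from one local table.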
108

Resource Management in Multicarrier Based Cognitive Radio Systems

Shaat, Musbah M. R. 09 March 2012 (has links)
The ever-increasing growth of wireless applications and services affirms the importance of effective usage of the limited radio spectrum. Existing spectrum management policies have led to significant spectrum under-utilization: recent measurements showed that large portions of the spectrum are sparsely used in both the temporal and the spatial domain. This conflict between the inefficient usage of the spectrum and the continuous evolution of wireless communication calls for the development of more flexible management policies. Cognitive radio (CR) with dynamic spectrum access (DSA) is considered a key technology for resolving this conflict, by allowing a group of secondary users (SUs) to share the radio spectrum originally allocated to the primary users (PUs). The operation of CR should not negatively alter the performance of the PUs; therefore, interference control, along with the highly dynamic nature of PU activity, opens up new resource allocation problems in CR systems. The resource allocation algorithms should ensure an effective sharing of the temporarily available frequency bands and deliver solutions in a timely fashion to cope with quick changes in the network. In this dissertation, the resource management problem in multicarrier-based CR systems is considered. The dissertation focuses on three main issues: 1) the design of efficient resource allocation algorithms that allocate subcarriers and power among SUs such that no harmful interference is introduced to the PUs; 2) a comparison of the spectral efficiency of different multicarrier schemes in the CR physical layer, specifically orthogonal frequency division multiplexing (OFDM) and filter bank multicarrier (FBMC); and 3) an investigation of the impact of the different constraint values on the overall performance of the CR system. Three scenarios are considered in this dissertation, namely downlink transmission, uplink transmission, and relayed transmission.
For every scenario, the optimal solution is examined and efficient sub-optimal algorithms are proposed to reduce the computational burden of obtaining the optimal solution. The sub-optimal algorithms are developed by separating subcarrier and power allocation into two steps in the downlink and uplink scenarios. In the relayed scenario, a dual-decomposition technique is used to obtain an asymptotically optimal solution, and a joint heuristic algorithm is proposed to find the sub-optimal solution. Numerical simulations show that the proposed sub-optimal algorithms achieve near-optimal performance and perform better than existing algorithms designed for cognitive and non-cognitive systems. Finally, the ability of FBMC to overcome the drawbacks of OFDM and achieve higher spectral efficiency is verified, which recommends considering FBMC for future CR systems.
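A hedged sketch of one classic building block such power-allocation steps rely on, water-filling across subcarriers; the thesis's algorithms additionally enforce per-subcarrier interference limits towards the PUs, modeled here only as an optional uniform cap. Channel gains, power budget and cap are made-up numbers.

```python
def waterfill(gains, total_power, p_max=None):
    """Maximize sum log(1 + g_i * p_i) subject to sum p_i <= total_power,
    by bisection on the water level mu; p_i = max(0, mu - 1/g_i).
    Optional cap p_max crudely models an interference limit per subcarrier."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(100):  # bisection converges well within 100 iterations
        mu = (lo + hi) / 2
        p = [max(0.0, mu - 1.0 / g) for g in gains]
        if p_max is not None:
            p = [min(pi, p_max) for pi in p]
        if sum(p) > total_power:
            hi = mu
        else:
            lo = mu
    return p

gains = [2.0, 1.0, 0.25]          # hypothetical channel-gain-to-noise ratios
p = waterfill(gains, total_power=3.0)
print([round(pi, 3) for pi in p])  # strongest subcarriers get the most power
```

With these numbers the weakest subcarrier receives no power at all, the essence of why per-subcarrier treatment beats uniform allocation.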
109

Optimization of emerging extended FTTH WDM/TDM PONs and financial overall assessment

Chatzi, Sotiria 26 November 2013 (has links)
Optical access technology has experienced a boost in the last years, thanks to the continuously growing multimedia services offered over the Internet. Although the technologies used for deploying Fiber-To-The-x (FTTx) and Fiber-to-the-Home (FTTH) are mostly based either on active solutions or, as far as Passive Optical Networks (PONs) are concerned, on Time Division Multiplexing (TDM), an evolution towards hybrid solutions such as Wavelength Division Multiplexing/Time Division Multiplexing (WDM/TDM) can be foreseen. What needs to be researched and finally established are the exact designs for this important step of integration, which should be optimized in terms of transmission performance and cost to address all requirements of next-generation passive optical networks. As the most critical elements of an optical access network, the design and its cost are the main topics of this discussion. The covered topics span a wide range and include cost estimation of several optical network technologies and architectures and their comparison, as well as subjects of design optimization. In this last category, in-line remote amplification, the use of an alternative and an extended frequency band, and dispersion compensation and equalization techniques have been examined, as well as combinations of the aforementioned means of network optimization. Alongside the proof of principle of the proposed techniques, the benefits are highlighted in different case studies, while the most representative designs are further discussed.
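A back-of-the-envelope sketch of the kind of loss-budget check behind such design decisions; all figures (splitter loss per stage, fiber attenuation, transmitter power, receiver sensitivity) are illustrative defaults, not values from the thesis. A negative margin indicates that in-line remote amplification or a different design is needed.

```python
import math

def pon_power_budget(tx_dbm, rx_sens_dbm, split_ratio, fiber_km,
                     atten_db_per_km=0.35, margin_db=1.5):
    """Rough downstream loss budget; assumes ~3.5 dB per 1:2 split stage,
    i.e. log2(split_ratio) stages -- illustrative figures only."""
    split_loss_db = 3.5 * math.log2(split_ratio)
    total_loss_db = fiber_km * atten_db_per_km + split_loss_db + margin_db
    return tx_dbm - total_loss_db - rx_sens_dbm  # > 0: link closes unamplified

# A 1:64 split at 20 km barely closes; an extended 60 km reach does not,
# which is exactly what motivates in-line remote amplification.
print(round(pon_power_budget(3.0, -28.0, 64, 20), 2))
print(round(pon_power_budget(3.0, -28.0, 64, 60), 2))
```

The same arithmetic, swept over reach and split ratio, is the starting point for the cost-versus-design trade-offs the abstract describes.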
110

Resource management research in ethernet passive optical networks

Garfias Hernández, Paola 25 November 2013 (has links)
Over the last decades, we have witnessed several phenomena in the telecommunications sector. One of them is the widespread use of the Internet, which has brought a sharp increase in traffic, forcing suppliers to continuously expand the capacity of their networks. In the near future, the Internet will be composed of long-range high-speed optical networks, a number of wireless networks at the edge, and, in between, several access technologies. Today one of the main problems of the Internet is the bottleneck in the access segment. To address this issue, Passive Optical Networks (PONs) are very likely to succeed, due to their simplicity, low cost, and increased bandwidth. A PON is made up of fiber-optic cabling and passive splitters and couplers that distribute an optical signal to connectors terminating each fiber segment. Among the different PON technologies, the Ethernet PON (EPON) is a great alternative to satisfy operator and user needs, owing to its cost, flexibility and interoperability with other technologies. One of the most interesting challenges in such technologies relates to the scheduling and allocation of resources in the upstream (shared) channel, i.e., resource management. The aim of this thesis is to study and evaluate current contributions and to propose new, efficient solutions to the resource management issues, mainly in EPON. Key issues in this context are future end-user needs, quality-of-service (QoS) support, energy saving and optimized service provisioning for real-time and elastic flows. This thesis also identifies research opportunities, issues recommendations and proposes novel mechanisms for access networks based on optical-fiber technologies.
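As a minimal sketch of one classic upstream scheduling policy in EPON (IPACT's "limited service" discipline, used here purely as an illustration, not as the thesis's proposal): each ONU reports its queue length and is granted what it requested, capped at a fixed maximum window. All byte figures are invented.

```python
def limited_dba(requests_bytes, w_max_bytes):
    """IPACT-style 'limited service' grant sizing: grant each ONU its
    request, capped at a maximum window so no ONU can starve the others."""
    return [min(r, w_max_bytes) for r in requests_bytes]

grants = limited_dba([500, 15000, 2000], w_max_bytes=10000)
print(grants)  # the 15000-byte request is capped at the 10000-byte window
```

More elaborate DBA schemes of the kind this thesis studies redistribute the unused portion of small grants, add QoS classes, or switch transceivers off for energy saving, but all start from this grant-sizing step.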
