  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Experimental evidence for the quantum condensation of ultracold dipolar excitons

Alloing, Mathieu 28 May 2014
In this thesis, we report experimental evidence of a "gray" condensate of excitons, as predicted theoretically by M. Combescot et al. Most importantly, the condensate is characterized by a macroscopic population of dark excitons coherently coupled to a weak population of bright excitons through fermion exchanges. Such quantum condensation results from the excitons' internal structure, with a dark, i.e. optically inactive, ground state. It is in fact very similar to what occurs in the phases of superfluid 3He or in the more recent spinor condensates of ultracold atomic Bose gases. While we believe that such a "gray" condensate will eventually be observed in other excitonic systems, our study focuses on its appearance together with the macroscopic self-organization of dipolar excitons. Specifically, we study fragmented exciton rings in an electrically biased GaAs single quantum well. This very striking pattern was first observed independently by the groups of L. Butov and D. Snoke and was interpreted as the result of ambipolar diffusion of carriers in the quantum wells. The fragmentation of the macroscopic ring observed at low temperature by Butov and coworkers, and the subsequent evidence for long-range spatial coherence together with complex patterns of polarization, led Butov et al. to interpret the fragmentation as evidence of a transition to a quantum regime where coherent exciton transport dominates. Our experiments led us to a very different interpretation. Indeed, we show that for our sample the formation of the fragmented ring is dominated by the diffusion of dipolar excitons in an optically induced electrostatic landscape. This potential landscape arises from the modulation of the internal electric field by excess charges injected into the QW by the same excitation beam that induces the ring.
Dipolar excitons then explore a potential landscape characterized by a wide anti-trap inside the ring and, more strikingly, by microscopic traps distributed along the circumference of the ring. There, i.e. in the outer vicinity of the ring, a confining potential is responsible for the formation of "islands" where the population of dark excitons is dominant. Due to the small energy splitting between the bright and dark excitonic states in our sample, the observation of a dominant population of dark excitons signals that excitons condense in the low-lying dark states. To confirm this interpretation we show that the weak photoluminescence emitted in the outer vicinity exhibits macroscopic spatial coherence, up to 10 times larger than the de Broglie wavelength. Islands of extended coherence are in fact identified and quickly disappear as the bath temperature is increased, so that the coherence length depends strongly on temperature. Finally, we show that the photoluminescence emitted in the vicinity of the fragmented ring is dominantly linearly polarized and also organized in islands outside the ring. All these observations confirm the predicted signatures of a "gray" condensate, as formulated by M. and R. Combescot.
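The comparison between the measured coherence length and the de Broglie wavelength can be made concrete with a back-of-the-envelope estimate of the thermal de Broglie wavelength, lambda = h / sqrt(2 pi m kB T). The sketch below is purely illustrative: the exciton mass (0.2 electron masses) and the temperatures are assumed values, not figures taken from the thesis.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34      # Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K
ME = 9.1093837015e-31   # electron mass, kg

def thermal_de_broglie_wavelength(mass_kg, temperature_k):
    """Thermal de Broglie wavelength: lambda = h / sqrt(2*pi*m*kB*T)."""
    return H / math.sqrt(2.0 * math.pi * mass_kg * KB * temperature_k)

# Illustrative values: exciton mass ~0.2 electron masses, sub-kelvin bath
m_exciton = 0.2 * ME
for T in (0.33, 1.0, 4.0):
    lam = thermal_de_broglie_wavelength(m_exciton, T)
    print(f"T = {T:5.2f} K  ->  lambda_dB = {lam * 1e9:6.1f} nm")
```

At sub-kelvin temperatures the wavelength is a few hundred nanometers, so a coherence length ten times larger is indeed on the micron scale.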
302

Light generation and manipulation from nonlinear randomly distributed domains in SBN

Yao, Can 25 June 2014
Disordered media with refractive index variations can be found in the atmosphere, the ocean, and in many materials and biological tissues. Several technologies that make use of such random media, such as image formation, satellite communication, astronomy, and microscopy, must deal with unavoidable light scattering or diffusion. This is why light propagation through random media has been a subject of intensive study for many years. Interesting phenomena such as speckle, coherent backscattering, and random lasing have been discovered and studied. More recently, researchers have begun to investigate mechanisms to control light propagation through such media in order to enhance light transmission and sharpen the focus. On the other hand, it has been known for several years that nonlinear random structures are able to generate light in an ultra-broad frequency range, without the need for angle or temperature tuning. Particularly interesting is the nonlinear light diffusion observed from materials with no change in the refractive index, which appear fully diffusionless to linear light propagation. However, a comprehensive understanding of the scattering that takes place during a nonlinear interaction has not yet been given. The core of the thesis focuses on the study of nonlinear light generation and propagation from crystalline structures with disordered nonlinear domains but a homogeneous refractive index. A random distribution of nonlinear domains is found naturally in the ferroelectric crystal Strontium Barium Niobate (SBN). As opposed to other mono-domain nonlinear optical crystals commonly used for frequency up-conversion, such as Potassium Titanyl Phosphate (KTP) or Lithium Niobate (LiNbO3), in SBN the nonlinear domain size is typically on the order of the coherence length and many times smaller than the size of the whole crystal. Such domains are usually several times longer along the c-axis than in the plane perpendicular to it.
Adjacent domains exhibit antiparallel polarization along this crystalline axis, with no change in refractive index. In Chapter 1 we give a brief introduction to light generation and propagation in random media, describing speckle, light manipulation, and second harmonic generation (SHG). In Chapter 2, we study nonlinear light generation and manipulation from a transparent SBN crystal. In the theoretical description we use a two-dimensional random structure consisting of a homogeneous background polarized in one direction with uniform rectangular boundaries, and a group of square reverse-polarization domains of random sizes located at random positions. The SHG from each domain is obtained using the Green's function formalism. In the experiments, we alter the ferroelectric domain structure of the SBN crystal by electric field poling or by thermal treatments at different temperatures. The SBN crystal structures after these treatments are shown to be characterized by their SHG patterns. In Chapter 3, by measuring the spatial distribution of the second harmonic light in the c-plane, we demonstrate that the randomness in the nonlinear susceptibility results in a speckle pattern. We explain the observations as the result of linear interference among the second harmonic waves generated in all directions by each of the nonlinear domains. In Chapter 4, we report on our experimental implementation of the wave-front phase modulation method to control and focus the SHG speckle from the random SBN crystal. This research builds a bridge between light phase modulation and nonlinear optics. Finally, we perform a theoretical analysis to demonstrate enhanced efficiencies for nonlinear light focusing by the wave-front phase modulation method in different directions. Various types of nonlinear structures are considered, including a homogeneous rectangular crystal, a group of random domains, and the combination of both.
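The picture of SH speckle as linear interference of waves radiated by randomly placed domains can be illustrated numerically. The sketch below is a toy model, not the thesis's Green's function calculation: it sums far-field phasors from randomly located scatterers with random antiparallel polarization signs (random +/- effective chi(2)) and checks that the resulting angular intensity has the near-unity contrast of fully developed speckle. All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not fitted to SBN): N random domains in a 2-D slab
N = 500
wavelength_sh = 0.4          # second-harmonic wavelength, arbitrary units
k = 2 * np.pi / wavelength_sh
positions = rng.uniform(0, 100, size=(N, 2))   # random domain centers
signs = rng.choice([-1.0, 1.0], size=N)        # antiparallel domains -> +/- sign

# Far-field SH amplitude: linear superposition of the waves from each domain
angles = np.linspace(-np.pi / 6, np.pi / 6, 2000)
directions = np.stack([np.sin(angles), np.cos(angles)], axis=1)
phases = k * directions @ positions.T          # (n_angles, N) path-length phases
field = (signs * np.exp(1j * phases)).sum(axis=1)
intensity = np.abs(field) ** 2

# Fully developed speckle has intensity contrast (std / mean) close to 1
contrast = intensity.std() / intensity.mean()
print(f"speckle contrast = {contrast:.2f}")
```

The random signs remove any coherent forward peak, so the far field is governed entirely by interference, which is the mechanism invoked in Chapter 3.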
303

Fault diagnosis and fault tolerant control of multiphase voltage source converters for application in traction drives

Salehifar, Mehdi 15 July 2014
There is an increasing demand for vehicles with less environmental impact and higher fuel efficiency. To meet these requirements, transportation electrification has been pursued in both academia and industry in recent years. The electric vehicle (EV) and the hybrid electric vehicle (HEV) are two practical examples in transportation systems. The typical power train in an EV consists of three main parts: the energy source, the power electronics, and an electrical motor. Regarding the machine, permanent magnet (PM) motors are the dominant choice for light-duty hybrid vehicles in industry due to their higher efficiency and power density. To operate the power train, the electrical machine can be supplied and controlled by a voltage source inverter (VSI). The converter is subject to various fault types; according to statistics, 38% of faults in a motor drive are due to the power converter. On the other hand, the electrical power train must meet a high level of reliability. Multiphase PM machines can meet these reliability requirements thanks to their fault-tolerant characteristics: the machine can remain operational with faults in multiple phases. Consequently, to realize a multiphase fault-tolerant motor drive, three main concepts must be developed: fault detection (FD), fault isolation, and fault-tolerant control. This PhD thesis is therefore focused on FD and fault-tolerant control of a multiphase VSI. To achieve this research goal, existing FD and control methods for the power converter are thoroughly investigated through a literature review. Following that, the operating condition of the multiphase converter supplying the electrical machine is studied. Regarding FD methods for multiphase drives, three new algorithms are presented in this thesis, and these proposed FD methods are also embedded in new fault-tolerant control algorithms. In the first step, a novel model-based FD method is proposed to detect multiple open-switch faults.
This FD method is included in the developed adaptive proportional resonant control algorithm of the power converter. In the second step, two signal-based FD methods are proposed; fault-tolerant control of the power converter with a conventional PI controller is discussed, and the theory of sliding mode control (SMC) is developed. In the last step, finite control set (FCS) model predictive control (MPC) of a five-phase brushless DC (BLDC) motor is discussed for the first time in this thesis, and a simple FD method is derived from the control signals. Inputs to all the developed methods are the five phase currents of the motor. The theory of each method is explained and compared with available methods. To validate the developed theory, each FD algorithm is embedded in the corresponding fault-tolerant control algorithm, and experimental results are obtained on a five-phase BLDC motor drive. The electrical motor used in the experiments has an in-wheel outer-rotor structure and is suitable for electric vehicles. At the end of each part, the main points and conclusions are presented.
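The idea of signal-based FD from the phase currents can be illustrated with a textbook criterion (a generic sketch, not one of the thesis's three algorithms): a healthy phase current averages to roughly zero over a fundamental period, while an open switch blocks one current polarity and shifts the normalized average markedly. All waveform parameters below are made up for illustration.

```python
import numpy as np

# One 50 Hz fundamental period of a balanced five-phase current set
t = np.linspace(0, 0.02, 2000, endpoint=False)
n_phases = 5
currents = np.array([10 * np.sin(2 * np.pi * 50 * t - 2 * np.pi * k / n_phases)
                     for k in range(n_phases)])

# Inject an open-switch fault in phase 2: the positive half-cycle cannot conduct
currents[2] = np.minimum(currents[2], 0.0)

def detect_open_switch(phase_currents, threshold=0.15):
    """Flag phases whose normalized average current deviates from zero."""
    peak = np.abs(phase_currents).max(axis=1, keepdims=True)
    score = phase_currents.mean(axis=1, keepdims=True) / peak
    return [k for k, s in enumerate(score.ravel()) if abs(s) > threshold]

print("faulty phases:", detect_open_switch(currents))
```

For an ideal sinusoid with one polarity removed, the normalized average is about 1/pi (roughly 0.32), comfortably above the 0.15 threshold assumed here, while healthy phases sit near zero.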
304

Wideband cognitive radio: monitoring, detection and sparse noise subspace communication

Font Segura, Josep 24 July 2014
We are surrounded by electronic devices that take advantage of wireless technologies, from our computer mice, which require small amounts of information, to our cellphones, which demand increasingly higher data rates. Until today, the coexistence of such a variety of services has been guaranteed by a fixed assignment of spectrum resources by regulatory agencies. This has led to a dead end, as the wireless spectrum has become an expensive and scarce resource. However, recent measurements in dense areas paint a very different picture: the spectrum is actually underutilized by legacy systems. Cognitive radio holds tremendous promise for increasing the spectral efficiency of future wireless systems. Ideally, new secondary users would have a perfect panorama of spectrum usage and would opportunistically communicate over the available resources without degrading the primary systems. Yet in practice, monitoring the spectrum resources, detecting available resources for opportunistic communication, and transmitting over those resources are hard tasks. This thesis addresses the tasks of monitoring, detecting, and transmitting in challenging scenarios involving wideband signals, nonuniform sampling, inaccurate side information, and frequency-selective fading channels. For the first task, monitoring the spectrum resources, this thesis derives the periodogram and Capon spectral estimates under nonuniform sampling, exploiting a correlation-matching fit from linearly projected data. It is shown that nonuniform sampling incurs noise enhancement; the proposed spectral estimates circumvent it by implementing a denoising process, and the effect is theoretically characterized for Bernoulli nonuniform sampling by establishing an equivalence between nonuniform sampling and signal-to-noise ratio (SNR).
For the second task, detecting the available resources, this thesis considers the problems of multi-frequency signal detection, asymptotic performance, and cyclostationary signal detection. In multi-frequency signal detection, a unified framework based on the generalized likelihood ratio test (GLRT) is derived by considering different degrees of side information and performing maximum likelihood (ML) and correlation-matching estimation of the unknown parameters in uniform and nonuniform sampling, respectively. The asymptotic performance of signal detection is considered from two perspectives: Stein's lemma, which reveals the influence of the main parameters on the error exponents of the error probabilities; and the asymptotic statistical characterization of the GLRT in Bernoulli nonuniform sampling, which allows the derivation of sampling walls under noise uncertainty, i.e., sampling densities below which the target detection probabilities cannot be guaranteed. Finally, this thesis exploits the cyclostationarity properties of primary signals by deriving the quadratic sphericity test (QST), the ratio between the quadratic mean and the arithmetic mean of the eigenvalues of the autocorrelation matrix of the observations, and the optimal GLRT in a parameterized model of the frequency-selective channel, which exploits the low-rank structure of small spectral covariance matrices. For the last task, transmitting over the available resources, a cyclostationary secondary waveform scheme is first proposed to mitigate the interference that an active cognitive radio may cause to an inactive cognitive radio performing spectrum sensing, by projecting the oversampled observations onto a reduced subspace. Second, this thesis derives and statistically characterizes the sphericity minimum description length (MDL) criterion for estimating the primary signal subspace.
Third, this thesis considers the minimum-norm waveform optimization problem with imperfect side information, whose benefits are those of linear predictors: a flat frequency response and rotational invariance.
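The sphericity idea behind the QST, using the eigenvalue spread of the sample autocorrelation matrix as a whiteness test, can be sketched numerically. This is a simplified reading of the statistic with arbitrary dimensions, not the thesis's exact formulation: the ratio of the quadratic mean to the arithmetic mean of the eigenvalues equals 1 for white noise and grows when a primary signal concentrates energy in a few eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

def qst(samples):
    """Sphericity-type statistic: quadratic mean over arithmetic mean of the
    eigenvalues of the sample autocorrelation matrix. Close to 1 for white
    (spherical) data; approaches sqrt(M) when one eigenvalue dominates.
    A simplified sketch of the idea, not the thesis's exact test."""
    R = samples @ samples.T / samples.shape[1]
    lam = np.linalg.eigvalsh(R)
    return np.sqrt(np.mean(lam ** 2)) / lam.mean()

M, N = 8, 4000                       # channels x snapshots (arbitrary)
noise = rng.standard_normal((M, N))  # H0: white noise only
s = rng.standard_normal((1, N))      # H1: rank-one primary signal in noise
signal_plus_noise = noise + 3.0 * np.ones((M, 1)) @ s

print(f"H0 (noise only)     : {qst(noise):.3f}")             # near 1
print(f"H1 (signal present) : {qst(signal_plus_noise):.3f}")  # well above 1
```

Thresholding this ratio gives a simple detector that needs no knowledge of the noise power, which is the appeal of sphericity-based tests under noise uncertainty.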
305

Homogeneous and heterogeneous aqueous phase oxidation of phenol with fenton-like processes

Messele, Selamawit Ashagre 22 July 2014
In recent decades, various chemical oxidation techniques have been developed to overcome the inconveniences associated with the conventional treatment of industrial wastewaters. Advanced oxidation processes (AOPs) have been reported to be effective for the degradation of soluble organic contaminants in wastewaters containing non-biodegradable organic pollutants, because they can often provide almost total degradation under reasonably mild conditions of temperature and pressure. Among them, the Fenton process is widely implemented, although it has many drawbacks, such as pH sensitivity, formation of sludge, and loss of active species. Therefore, this work focuses on different alternatives for overcoming these drawbacks using Fenton-like processes for the homogeneous and heterogeneous oxidation of phenol. The addition of chelating agents allowed broadening the pH range of efficient operation. In turn, the use of nano zero-valent iron supported on carbon materials enhances the removal performance and eliminates the subsequent separation of iron hydroxide sludge.
306

Variability-aware architectures based on hardware redundancy for nanoscale reliable computation

Aymerich Capdevila, Nivard 16 December 2013
During the last decades, human beings have experienced a significant enhancement in quality of life, thanks in large part to the fast evolution of Integrated Circuits (IC). This unprecedented technological race, along with its significant economic impact, has been grounded in the production of complex processing systems from highly reliable compounding devices. However, the fundamental assumption of nearly ideal devices, which held true throughout past CMOS technology generations, now seems to be coming to an end. In fact, as MOSFET technology scales into the nanoscale regime it approaches fundamental physical limits and starts experiencing higher levels of variability, performance degradation, and higher rates of manufacturing defects. On the other hand, ICs with increasing numbers of transistors require a decrease in the failure rate per device in order to maintain the overall chip reliability. As a result, the development of circuit architectures capable of providing reliable computation while tolerating high levels of variability and defect rates is becoming increasingly important. The main objective of this thesis is to analyze and propose new fault-tolerant architectures based on redundancy for future technologies. Our research is founded on the principles of redundancy established by von Neumann in the 1950s and extends them along three new dimensions: 1. Heterogeneity: Most works on fault-tolerant architectures based on redundancy assume homogeneous variability among the replicas, as in von Neumann's original work. Instead, we explore the possibilities of redundancy when heterogeneity between replicas is taken into account. In this sense, we propose compensating mechanisms that select the weighting of the redundant information to maximize the overall reliability. 2. Asynchrony: Each replica of a redundant system may have a different processing delay due to variability and degradation, especially in future nanotechnologies.
If we design the system to work locally in asynchronous mode, we may consider different voting policies for the redundant information: depending on how many replicas we collect before taking a decision, we obtain different trade-offs between processing delay and reliability. We propose a mechanism providing these facilities and analyze and simulate its operation. 3. Hierarchy: Finally, we explore the possibilities of redundancy applied at different hierarchy layers of complex processing systems. We propose to distribute redundancy across the various hierarchy layers and analyze the benefits that can be obtained. Drawing on the scenario of future IC technologies, we push the concept of redundancy to its fullest expression through the study of realistic nano-device architectures. Most of the redundant architectures considered so far do not properly face the era of terascale computing and current nanotechnology trends. Since von Neumann first applied redundancy to electronic circuits, effects as common in nanoelectronics as degradation and interconnection failures had never been treated directly from the standpoint of redundancy. In this thesis we address the reliability of digital processing systems in the upcoming technology generations in a comprehensive manner.
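The compensating mechanism of the heterogeneity dimension, weighting redundant replicas instead of counting their votes equally, can be illustrated with the classical log-odds weighted vote. This is a standard fusion rule used here purely as a sketch; the replica reliabilities are made-up numbers, not results from the thesis.

```python
import math
import random

# Each replica i reports a bit with its own reliability p_i. The log-likelihood
# fusion weights each vote by log(p_i / (1 - p_i)), unlike von Neumann's
# homogeneous majority vote, which counts all replicas equally.
def weighted_vote(bits, reliabilities):
    score = sum((1 if b else -1) * math.log(p / (1 - p))
                for b, p in zip(bits, reliabilities))
    return score > 0

random.seed(4)
reliabilities = [0.95, 0.70, 0.55]   # one good replica, two mediocre ones
truth = True
trials = 20000
maj_ok = wgt_ok = 0
for _ in range(trials):
    bits = [random.random() < p for p in reliabilities]  # correct w.p. p_i
    maj_ok += (sum(bits) >= 2) == truth
    wgt_ok += weighted_vote(bits, reliabilities) == truth

print(f"plain majority : {maj_ok / trials:.3f}")
print(f"weighted vote  : {wgt_ok / trials:.3f}")
```

With heterogeneous replicas the weighted rule lets the reliable replica dominate, which is exactly the effect the proposed compensating mechanisms exploit.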
307

Compressive sensing based candidate detector and its applications to spectrum sensing and through-the-wall radar imaging

Lagunas Targarona, Eva 07 March 2014 (has links)
Signal acquisition is a central topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of any conventional analog-to-digital converter, stating that a signal must be sampled at a constant rate of at least twice the highest frequency present in the signal in order to be perfectly recovered. However, the Shannon-Nyquist theorem provides a worst-case rate bound for any bandlimited data. In this context, Compressive Sensing (CS) is a new framework in which data acquisition and data processing are merged. CS allows data to be compressed while they are sampled by exploiting the sparsity present in many common signals, thereby providing an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years, with thousands of technical publications and applications being developed in areas such as channel coding, medical imaging, computational biology, and many more. Unlike the majority of the CS literature, this Ph.D. thesis surveys CS theory applied to signal detection, estimation, and classification, which do not necessarily require perfect signal reconstruction or approximation. In particular, a novel CS-based detection technique which exploits prior information about some features of the signal is presented. The basic idea is to scan the domain where the signal is expected to lie with a candidate signal estimated from the known features. The proposed detector is called a candidate-based detector because its main goal is to react only when the candidate signal is present. The CS-based candidate detector is applied to two topical detection problems. First, CS theory is used to deal with the sampling bottleneck in wideband spectrum sensing for open spectrum scenarios. The radio spectrum is a natural resource which is becoming scarce due to the current spectrum assignment policy and the increasing number of licensed wireless systems. 
To deal with the crowded spectrum problem, a new spectrum management philosophy is required. In this context, the revolutionary Cognitive Radio (CR) paradigm emerges as a solution. CR takes advantage of the poor usage of the spectrum by allowing temporarily unused licensed spectrum to be used by secondary users who hold no spectrum licenses. The procedure of identifying available spectrum is commonly known as spectrum sensing. However, one of the most important problems that spectrum sensing techniques must face is the scanning of wide frequency bands, which implies high sampling rates. The proposed CS-based candidate detector exploits prior knowledge of the primary users, not only to relax the sampling bottleneck, but also to provide an estimate of the candidate signals' frequency, power, and angle of arrival without reconstructing the whole spectrum. The second application is Through-the-Wall Radar Imaging (TWRI). Sensing through obstacles such as walls, doors, and other visually opaque materials using microwave signals is emerging as a powerful tool supporting a range of civilian and military applications. High-resolution imaging is achieved if large-bandwidth signals and long antenna arrays are used; however, this implies the acquisition and processing of large data volumes. Decreasing the number of acquired samples can also be helpful in TWRI from a logistic point of view, as some of the data measurements in space and frequency can be difficult, or impossible, to obtain. In this thesis, we address the problem of imaging building interior structures using a reduced number of measurements. The proposed technique for determining the building layout is based on prior knowledge about common construction practices. Real data collection experiments in a laboratory environment, using the Radar Imaging Lab facility at the Center for Advanced Communications, Villanova University, USA, are conducted to validate the proposed approach. 
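The candidate-detection idea can be illustrated with a minimal sketch: instead of reconstructing the signal from compressed measurements, one correlates the measurements directly with the compressed version of the candidate (sometimes called a "smashed filter" in the CS literature). The dimensions, the random sensing matrix, and the threshold below are illustrative assumptions, not the thesis's actual detector:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 64                                    # ambient dim, measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix

# Hypothetical candidate: a tone at a known frequency bin
candidate = np.cos(2 * np.pi * 37 * np.arange(n) / n)

def detect(y, threshold=0.5):
    """Correlate compressed measurements y = Phi @ x with the compressed
    candidate; no reconstruction of the full signal is needed."""
    ref = Phi @ candidate
    stat = abs(y @ ref) / (np.linalg.norm(y) * np.linalg.norm(ref))
    return stat > threshold

x_present = candidate + 0.1 * rng.standard_normal(n)  # candidate + noise
x_absent = rng.standard_normal(n)                     # noise only
print(detect(Phi @ x_present))  # True: candidate found in 64 measurements
print(detect(Phi @ x_absent))   # False: statistic stays near zero
```

Because random projections approximately preserve inner products, the correlation statistic computed from only m = 64 measurements behaves much like the one computed from all n = 512 samples, which is the essence of detection without reconstruction.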
308

Privacy protection of user profiles in personalized information systems

Parra Arnau, Javier 02 December 2013 (has links)
In recent times we are witnessing the emergence of a wide variety of information systems that tailor their information-exchange functionality to the specific interests of their users. Most of these personalized information systems capitalize on, or lend themselves to, the construction of user profiles, either directly declared by a user or inferred from past activity. The ability of these systems to profile users is therefore what enables such intelligent functionality, but at the same time it is the source of serious privacy concerns. Although there exists a broad range of privacy-enhancing technologies aimed at mitigating many of those concerns, their use is far from widespread. The main reason is a certain ambiguity about these technologies and their effectiveness in terms of privacy protection. Besides, since these technologies normally come at the expense of system functionality and utility, it is challenging to assess whether the gain in privacy compensates for the cost in utility. Assessing the privacy provided by a privacy-enhancing technology is thus crucial to determine its overall benefit, to compare its effectiveness with that of other technologies, and ultimately to optimize it in terms of the privacy-utility trade-off posed. Considerable effort has consequently been devoted to investigating both privacy and utility metrics. However, most of these metrics are specific to concrete systems and adversary models, and hence are difficult to generalize or translate to other contexts. Moreover, in applications involving user profiles there are few proposals for the evaluation of privacy, and those that exist are either not appropriately justified or fail to justify the choice of metric. The first part of this thesis approaches the fundamental problem of quantifying user privacy. 
Firstly, we present a theoretical framework for privacy-preserving systems, endowed with a unifying view of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. Our theoretical analysis shows that numerous privacy metrics emerging from a broad spectrum of applications are bijectively related to this estimation error, which permits interpreting and comparing these metrics under a common perspective. Secondly, we tackle the issue of measuring privacy in the enthralling application of personalized information systems. Specifically, we propose two information-theoretic quantities as measures of the privacy of user profiles, and justify these metrics by building on Jaynes' rationale behind entropy-maximization methods and fundamental results from the method of types and hypothesis testing. Equipped with quantifiable measures of privacy and utility, the second part of this thesis investigates privacy-enhancing, data-perturbative mechanisms and architectures for two important classes of personalized information systems. In particular, we study the elimination of tags in semantic-Web applications, and the combination of the forgery and the suppression of ratings in personalized recommendation systems. We design such mechanisms to achieve the optimal privacy-utility trade-off, in the sense of maximizing privacy for a desired utility, or vice versa. We proceed in a systematic fashion by drawing upon the methodology of multiobjective optimization. Our theoretical analysis finds a closed-form solution to the problem of optimal tag suppression, and to the problem of optimal forgery and suppression of ratings. In addition, we provide an extensive theoretical characterization of the trade-off between the contrasting aspects of privacy and utility. 
Experimental results in real-world applications show the effectiveness of our mechanisms in terms of privacy protection, system functionality and data utility.
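As a rough illustration of the information-theoretic profile metrics discussed above, the Shannon entropy of a user's normalized interest histogram and its KL divergence from the population's average profile can serve as privacy measures: a flat profile reveals little about the user, while a skewed one stands out. The profiles below are made up for illustration and are not data from the thesis:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a normalized profile histogram."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def kl_divergence(p, q):
    """KL divergence D(p || q) in bits: how much a user profile p
    deviates from the population's average profile q."""
    return sum(x * math.log2(x / y) for x, y in zip(p, q) if x > 0)

# Hypothetical tag profiles over three interest categories
flat_profile = [1/3, 1/3, 1/3]    # maximally uncommitted -> high privacy
skewed_profile = [0.8, 0.1, 0.1]  # revealing -> low privacy
population = [1/3, 1/3, 1/3]

print(entropy(flat_profile))                       # log2(3), the maximum
print(entropy(skewed_profile))                     # strictly lower
print(kl_divergence(skewed_profile, population))   # > 0: profile stands out
```

Under metrics of this kind, a tag-suppression or rating-forgery mechanism can be tuned to push a profile toward the population average (raising entropy, lowering divergence) while suppressing or forging as little as the desired utility allows.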
309

Experimental validation of optimal real-time energy management system for microgrids

Marzband, Mousa 20 January 2014 (has links)
Nowadays, power production, reliability, quality, efficiency, and the penetration of renewable energy sources are amongst the most important topics in power systems analysis. Optimal power management and economic dispatch must be achieved at the same time. Interest in extracting optimum performance, minimizing the market clearing price (MCP) for consumers, and providing better utilization of renewable energy sources has been increasing in recent years. Owing to the need to maintain energy balance in the face of load-demand fluctuations and the non-dispatchable nature of renewable sources, implementing an energy management system (EMS) is of great importance in Microgrids (MG). The appearance of new technologies such as energy storage (ES) has spurred efforts to develop new and modified optimization methods for power management. Precise prediction of renewable power generation is only possible over short horizons. Hence, to increase the efficiency of optimization algorithms on large-dimension problems, new methods should be proposed, especially for short-term scheduling. Powerful optimization methods need to be applied in such a way as to achieve maximum efficiency, enhance the economic dispatch, and provide the best performance for these systems. Thus, real-time energy management within a MG is an important factor for operators to guarantee optimal and safe operation of the system. The proposed EMS should be able to schedule the MG generation with minimal information shared by the generation units. To achieve this ability, the present thesis proposes an operational architecture for real-time operation (RTO) of a MG operating in both islanded and grid-connected modes. The presented architecture is flexible and can be used for different configurations of MGs in different scenarios. 
A general formulation is also presented to determine the optimum operation strategy, the cost optimization plan, and the reduction of consumed electricity when demand response (DR) is applied. The problem is formulated as an optimization problem with nonlinear constraints that minimizes the cost of the generation sources and the responsive load, as well as the MCP. Several optimization methods, including mixed linear programming, pivot source, imperialist competition, artificial bee colony, particle swarm, ant colony, and gravitational search algorithms, are utilized to achieve the specified objectives. The main goal of the thesis is to validate experimentally the design of a real-time energy management system for MGs in both operating modes, suitable for different sizes and types of generation resources and storage devices with a plug-and-play structure. As a result, the system is capable of adapting itself to changes in the generation and storage assets in real time and of quickly delivering optimal operation commands to the assets, using a local energy market (LEM) structure based on single-sided or double-sided auctions. The study aims to determine the optimum operation of micro-sources and to decrease the electricity production cost through hourly day-ahead and real-time scheduling. Experimental results show the effectiveness of the proposed methods for optimal operation with minimum cost and plug-and-play capability in a MG. Moreover, these algorithms are computationally feasible while offering many advantages, such as reducing peak consumption, operating and scheduling the generation units optimally, and minimizing the electricity generation cost. Furthermore, capabilities such as system development, reliability, and flexibility are also considered in the proposed algorithms. The plug-and-play capability in real-time applications is investigated using different scenarios.
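For intuition, the economic-dispatch component of such an EMS can be sketched in its simplest form: with linear (constant marginal) generation costs and only capacity constraints, least-cost dispatch reduces to filling demand in merit order, cheapest unit first. The asset names, costs, and capacities below are hypothetical, and the thesis's actual formulation involves nonlinear constraints, DR, and market clearing:

```python
def economic_dispatch(units, demand):
    """Least-cost dispatch for linear generation costs: serve demand
    in merit order (cheapest unit first, up to its capacity limit).
    units: list of (name, marginal_cost_per_kWh, capacity_kW)."""
    schedule, remaining = {}, demand
    for name, cost, cap in sorted(units, key=lambda u: u[1]):
        power = min(cap, remaining)
        schedule[name] = power
        remaining -= power
    if remaining > 1e-9:
        raise ValueError("insufficient generation capacity")
    return schedule

# Hypothetical microgrid assets (illustrative numbers only)
units = [("PV", 0.00, 30), ("battery", 0.05, 20), ("diesel", 0.30, 50)]
print(economic_dispatch(units, demand=60))
# -> {'PV': 30, 'battery': 20, 'diesel': 10}
```

Once unit costs depend nonlinearly on output, or storage couples decisions across hours, this greedy rule no longer suffices, which is why the thesis turns to the heuristic and mathematical-programming methods listed above.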
310

Use of locator/identifier separation to improve the future internet routing system

Jakab, Loránd 04 July 2011 (has links)
The Internet evolved from its early days as a small research network into a critical infrastructure that many organizations and individuals rely on. One dimension of this evolution is the continuous growth in the number of participants in the network, far beyond what the initial designers had in mind. While it does work today, it is widely believed that the current design of the global routing system cannot scale to accommodate future challenges. In 2006, an Internet Architecture Board (IAB) workshop was held to develop a shared understanding of the Internet routing system scalability issues faced by the large backbone operators. The participants documented in RFC 4984 their belief that "routing scalability is the most important problem facing the Internet today and must be solved." A potential solution to the routing scalability problem is to end the semantic overloading of Internet addresses by separating node location from identity. Several proposals exist to apply this idea to current Internet addressing, among which the Locator/Identifier Separation Protocol (LISP) is the only one already shipping in production routers. Separating locators from identifiers results in another level of indirection and introduces a new problem: how to determine location when the identity is known. The first part of our work analyzes existing proposals for systems that map identifiers to locators and proposes an alternative system within the LISP ecosystem. We created a large-scale Internet topology simulator and used it to compare the performance of three mapping systems: LISP-DHT, LISP+ALT, and the proposed LISP-TREE. We analyzed and contrasted their architectural properties as well. The monitoring projects that supplied Internet routing table growth data over a long timespan inspired us to create LISPmon, a monitoring platform aimed at collecting, storing, and presenting data gathered from the LISP pilot network, early in the deployment of the LISP protocol. 
The project web site and the collected data are publicly available and will assist researchers in studying the evolution of the LISP mapping system. We also document how the newly introduced LISP network elements fit into the current Internet, the advantages and disadvantages of different deployment options, and how the proposed transition scenarios could affect the evolution of the global routing system. This work is currently available as an active Internet Engineering Task Force (IETF) Internet Draft. The second part looks at the problem of efficient one-to-many communications, assuming a routing system that implements the above-mentioned locator/identifier split paradigm. We propose a network-layer protocol for efficient live streaming. It is incrementally deployable, with changes required only in the same border routers that should be upgraded to support locator/identifier separation. Our proof-of-concept Linux kernel implementation shows the feasibility of the protocol, and our comparison with popular peer-to-peer live streaming systems indicates important savings in inter-domain traffic. We believe LISP has considerable potential for adoption, and an important aspect of this work is how it might contribute towards a better mapping system design, by showing the weaknesses of the current favorites and proposing alternatives. The presented results are an important step forward in addressing the routing scalability problem described in RFC 4984 and in improving the delivery of live streaming video over the Internet.
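The extra level of indirection that locator/identifier separation introduces can be sketched as a toy map resolver: endpoint identifiers (EIDs) are matched against registered EID prefixes to obtain a set of routing locators (RLOCs). The mapping entries below are illustrative, not real registrations, and the real mapping systems compared in this work (LISP+ALT, LISP-DHT, LISP-TREE) distribute this lookup across many nodes:

```python
import ipaddress

# Hypothetical mapping-system state: EID prefix -> RLOC set
MAPPING_SYSTEM = {
    ipaddress.ip_network("153.16.0.0/16"): ["198.51.100.1", "203.0.113.7"],
}

def map_request(eid):
    """Return the RLOC set of the longest EID prefix covering `eid`,
    or None (a negative map-reply) if the EID is not registered."""
    addr = ipaddress.ip_address(eid)
    matches = [net for net in MAPPING_SYSTEM if addr in net]
    if not matches:
        return None
    best = max(matches, key=lambda net: net.prefixlen)  # longest match
    return MAPPING_SYSTEM[best]

print(map_request("153.16.1.2"))  # -> ['198.51.100.1', '203.0.113.7']
print(map_request("192.0.2.9"))   # -> None
```

The performance question the thesis studies is precisely how this lookup behaves at Internet scale: latency, caching behavior, and load distribution differ markedly depending on whether the mapping database is organized as a DHT, a BGP-like overlay, or a DNS-like tree.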
