51

Optimized descriptive profile, acceptance and physicochemical parameters of red table wines

Olenka, Ketlyn Lucyani 27 November 2015
Table wine plays a fundamental role in the national wine industry as a source of income for small, medium, and large producers. In southwestern Paraná, the municipality that stands out in the production of red table wines is Salgado Filho; however, there are few studies of their characteristics. Many techniques and thousands of studies address red wines, but the Optimized Descriptive Profile (ODP) is a recent methodology, and no published work is known to have used it to describe wines. For the wine industry to promote regional and agro-industrial development, it is necessary to know its possibilities and problems; on that basis, manufacturing processes can be established according to defined criteria, yielding a safe product of proven physical, chemical, and sensory quality that can win the market. The objective was therefore to apply the Optimized Descriptive Profile, characterize the physicochemical parameters, and verify the acceptance of red wines produced in the municipality of Salgado Filho, Paraná. A further objective was to verify the wines' compliance with current legislation and to correlate sensory and instrumental measurements. Samples of eight different red table wines produced in Salgado Filho, all made from the Bordô variety, were used. The ODP followed the methodology proposed by Silva (2012). Acceptance and purchase-intention tests were used to analyze sensory acceptance. The physicochemical variables analyzed were density, alcohol content, volatile acidity, total acidity, reducing sugars, and free and total sulfur dioxide. Data were analyzed by analysis of variance (ANOVA), Tukey's test (p = 5%), and Pearson correlation. The results show that, in the ODP, the wines differed in color, acid taste, and body, and showed no significant differences in aroma, sweet taste, or astringency; the red table wines analyzed exhibit a burgundy color and grape aroma of high intensity, medium intensity for sweet and acid taste, light body, and low astringency. In the acceptance test there were no significant differences between the samples, and all obtained a high acceptability index. In general, the physicochemical parameters met the Identity and Quality Standards established by Brazilian law. The instrumental and sensory variables showed some strong positive and negative correlations, demonstrating the value of using both kinds of measurement to reduce the time and cost of analysis.
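As an illustration of the statistical pipeline named above (one-way ANOVA across samples, Tukey's HSD at p = 5%, and Pearson correlation between sensory and instrumental variables), here is a minimal sketch; all scores and measurements are hypothetical, not data from the thesis.

```python
# Hypothetical illustration of the analysis pipeline described above:
# one-way ANOVA across wine samples, Tukey's HSD (alpha = 0.05), and a
# Pearson correlation between sensory and instrumental measurements.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical acid-taste scores for 3 of the 8 wines (10 panelists each).
scores = {f"wine_{i}": rng.normal(loc=5 + i, scale=1.0, size=10) for i in range(3)}

# One-way ANOVA: do the wines differ in this attribute at all?
f_stat, p_value = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD at alpha = 0.05 identifies which pairs of wines differ.
values = np.concatenate(list(scores.values()))
labels = np.repeat(list(scores.keys()), 10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Pearson correlation between a sensory attribute and an instrumental one,
# e.g. perceived acid taste vs. measured total acidity (hypothetical values).
acid_taste = np.array([4.2, 5.1, 6.3, 5.8, 4.9, 6.0, 5.5, 4.7])
total_acidity = np.array([55, 62, 74, 70, 60, 72, 66, 58])  # hypothetical
r, p = stats.pearsonr(acid_taste, total_acidity)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```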
52

A development of secure and optimized AODV routing protocol using ant algorithm

Simaremare, Harris 29 November 2013
Wireless networks have become an important technology in the telecommunications sector. One of the most popular wireless network technologies is the mobile ad hoc network (MANET): a decentralized, self-organizing, infrastructure-less network in which every node acts as a router, forwarding packets over wireless links to find or establish communication routes. Because there is no administrative node to control the network, every participating node is responsible for the reliable operation of the whole network, and the topology can change rapidly and unpredictably under high mobility. MANET routing protocols include Ad Hoc On-Demand Distance Vector (AODV), Optimized Link State Routing (OLSR), Topology Dissemination Based on Reverse-Path Forwarding (TBRPF), and Dynamic Source Routing (DSR). Given the characteristics of mobile ad hoc networks, the major issues in routing-protocol design are security and network performance. In terms of performance, AODV outperforms the other MANET routing protocols. In terms of security, secure routing protocols fall into two categories: cryptographic mechanisms and trust-based mechanisms; we chose a trust mechanism to secure the protocol because it offers better performance than cryptographic methods. This thesis therefore develops a secure and optimized routing protocol based on AODV. In the first part, we combine the gateway feature of AODV+ with the reverse method of R-AODV to obtain an optimized protocol for hybrid networks, called AODV-UI. The reverse-request mechanism of R-AODV is employed to optimize the performance of AODV, and the gateway module of AODV+ is added to communicate with infrastructure nodes. We evaluate AODV-UI in NS-2 simulations using packet delivery rate, end-to-end delay, and routing overhead as performance metrics; the results show that AODV-UI outperforms AODV+. Energy consumption and performance are then evaluated in scenarios with different numbers of source nodes, different maximum speeds, and different mobility models, comparing the Random Waypoint (RWP) and Reference Point Group Mobility (RPGM) models. Under the RWP mobility model, AODV-UI consumes less energy as the speed and the number of nodes accessing the gateway increase, and it performs better under RWP than under RPGM; overall, AODV-UI is better suited to the RWP mobility model. In the second part, we propose a new secure AODV protocol called Trust AODV, in which communication packets are sent only to trusted neighbor nodes. Trust is computed from the behavior and activity information of each node and is divided into Trust Global (TG) and Trust Local (TL). TG is a trust calculation based on the total routing packets received and the total routing packets sent; TL is a comparison between the total packets received from a specific neighbor node and the total packets it forwards. Nodes derive the overall trust level of their neighbors by accumulating the TL and TG values. When a node is suspected of being an attacker, the security mechanism isolates it from the network before communication is established. [...]
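A minimal sketch of the TG/TL trust bookkeeping described above; the counter names, the equal TG/TL weighting, and the isolation threshold are all assumptions for illustration, not values taken from the thesis.

```python
# Hypothetical sketch of the Trust AODV trust calculation described above.
# TG: ratio built from routing packets received vs. sent; TL: per-neighbor
# ratio of packets the neighbor forwarded vs. packets handed to it.
from dataclasses import dataclass, field

@dataclass
class NeighborStats:
    routing_rx: int = 0      # routing packets received from this neighbor
    routing_tx: int = 0      # routing packets sent to this neighbor
    data_given: int = 0      # data packets handed to the neighbor to forward
    data_forwarded: int = 0  # of those, packets it was observed forwarding

@dataclass
class TrustTable:
    neighbors: dict[str, NeighborStats] = field(default_factory=dict)
    threshold: float = 0.5   # hypothetical: below this, isolate the node

    def trust_global(self, nid: str) -> float:
        s = self.neighbors[nid]
        total = s.routing_rx + s.routing_tx
        return s.routing_rx / total if total else 0.0

    def trust_local(self, nid: str) -> float:
        s = self.neighbors[nid]
        return s.data_forwarded / s.data_given if s.data_given else 0.0

    def total_trust(self, nid: str) -> float:
        # Nodes accumulate TL and TG into an overall trust level.
        return 0.5 * self.trust_global(nid) + 0.5 * self.trust_local(nid)

    def is_trusted(self, nid: str) -> bool:
        return self.total_trust(nid) >= self.threshold

table = TrustTable({"n1": NeighborStats(40, 42, 100, 93),
                    "n2": NeighborStats(35, 40, 100, 12)})  # n2 drops packets
for nid in table.neighbors:
    verdict = "trusted" if table.is_trusted(nid) else "isolate"
    print(nid, f"trust={table.total_trust(nid):.2f}", verdict)
```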
53

Success factors for cross-functional process work: A case study at Ericsson AB (Framgångsfaktorer för arbete med funktionsöverskridande processer: En fallstudie vid Ericsson AB)

Kekonius, Henrik, Martinsson, Gustaf January 2008
A product often consists largely of purchased materials, so to reduce the total cost of production, companies often focus on reducing their purchase prices. Pushing down supplier prices in this way is a popular way to improve a company's short-term profits. Ericsson negotiates annually with its suppliers to reduce its purchase prices, which is expected to reduce purchasing costs and thus production costs. This is done by the sourcing function in a process known as the VPA process. The VPA process requires information from the local supply functions, which, however, have no documented way of working for these activities. The purpose of this study is therefore to create a support process for sourcing's VPA process. We have also critically reviewed the existing process from a theoretical standpoint. What we found is that Ericsson's way of working may be sub-optimized: focusing on achieving internal goals within one function creates obstacles for other functions. Often, if not always, each function focuses on its own goals and lacks a holistic view of organizational performance. The organization as a whole may then suffer, because instead of the process output being optimized, local optimization leads to total sub-optimization. To prevent sub-optimization, functions need to build an understanding of the other functions' goals, so communication between functions is critical. This alone is not enough, however; shifting focus from functional goals to process goals requires that senior management create the opportunity to do so.
54

Design of a bistatic nearfield array for an expanded volume

Terrell, Stephen John 18 April 2005
Achieving acceptable plane-wave uniformity throughout an expanded volume is necessary to conduct scattering measurements on a large target in a controlled environment. An expanded volume is large relative to the size of the nearfield array configuration used to produce the plane-wave uniformity. The optimum set of shading coefficients for a nearfield array may not produce acceptable plane-wave uniformity as the volume and the frequency domain are expanded for a given array configuration; choosing a single frequency as the frequency domain for an optimum set of coefficients will produce plane-wave uniformity throughout the largest possible volume for a given array configuration. This study determines the acceptability of the uniformity produced by an optimum set of frequency-dependent coefficients throughout an expanded volume for two array configurations that together form a system for measuring bistatic target strength in the nearfield.
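As a hedged illustration of what "optimum shading coefficients" can mean in practice (one common least-squares formulation, not necessarily the method used in this thesis): sample the desired plane-wave field at points filling the test volume, build a propagation matrix from each array element to each point, and solve for the element weights. The geometry, frequency, and names below are hypothetical.

```python
# Hypothetical least-squares sketch: choose shading coefficients w so the
# field radiated by a small array best matches a desired plane wave over
# sample points in the test volume. Geometry is illustrative only.
import numpy as np

c = 1500.0             # sound speed in water, m/s
f = 10e3               # single design frequency, Hz (hypothetical)
k = 2 * np.pi * f / c  # wavenumber

# Array: 8 point sources along x in the origin plane (hypothetical layout).
elems = np.stack([np.linspace(-0.7, 0.7, 8), np.zeros(8), np.zeros(8)], axis=1)

# Sample points filling the target volume a few metres away (hypothetical).
gx, gy, gz = np.meshgrid(np.linspace(-1, 1, 10),
                         np.linspace(-1, 1, 10),
                         np.linspace(4, 6, 10), indexing="ij")
pts = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)

# Propagation matrix: spherical spreading e^{-jkr}/r, element to point.
r = np.linalg.norm(pts[:, None, :] - elems[None, :, :], axis=2)
A = np.exp(-1j * k * r) / r

# Desired field: unit-amplitude plane wave travelling in +z.
d = np.exp(-1j * k * pts[:, 2])

# Optimum coefficients in the least-squares sense.
w, *_ = np.linalg.lstsq(A, d, rcond=None)

residual = np.linalg.norm(A @ w - d) / np.linalg.norm(d)
print(f"relative field error over the volume: {residual:.3f}")
```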
55

Volumetric Phased Arrays for Satellite Communications

Barott, William Chauncey 07 July 2006
The high volume of scientific and communications data produced by low-earth-orbiting satellites necessitates economical methods of communicating with these satellites. A volumetric phased array was developed and used to demonstrate horizon-to-horizon electronic tracking of the NASA satellite EO-1. As part of this research, methods of optimizing the elemental antenna as well as the antenna on board the satellite were investigated. Using these optimized antennas removes the variations in received signal strength that are due to the angularly dependent propagation loss exhibited by the communications link. An exhaustive study using genetic algorithms characterized two antenna architectures and included optimizations for radiation pattern, bandwidth, impedance, and polarization. Eleven antennas were constructed and their measured characteristics were compared to those of the simulated antennas. Additional studies were conducted regarding the optimization of aperiodic arrays. A pattern-space representation of volumetric arrays was developed and used with a novel tracking algorithm for these arrays; the algorithm allows high-resolution direction finding using a small number of antennas while mitigating aliasing ambiguities. Finally, a method of efficiently applying multiple-beam synthesis using the Fast Fourier Transform to aperiodic arrays was developed. This algorithm enables the operation of phased arrays combining the benefits of aperiodic element placement with the efficiency of FFT multiple-beam synthesis. Results of this research are presented along with the characteristics of the volumetric array used to track EO-1. Experimental data and interpretations of that data are presented, and possible areas of future research are discussed.
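A hedged sketch of one common way to apply FFT multiple-beam synthesis to an aperiodic array (an illustration of the technique class, not necessarily the dissertation's algorithm): snap each aperiodic element onto the nearest bin of a fine uniform grid, so a single FFT over the grid evaluates the array factor for many beam directions at once. Element positions and grid parameters are hypothetical.

```python
# Hypothetical sketch: FFT multiple-beam synthesis for an aperiodic linear
# array by snapping elements onto a fine uniform grid. One FFT then yields
# the array factor at many beam angles simultaneously.
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0                                  # wavelength (normalized)
x = np.sort(rng.uniform(0, 16 * lam, 24))  # 24 aperiodic element positions
w = np.ones_like(x)                        # uniform element weights

# Fine uniform grid; each element is rounded to its nearest grid bin.
dx = lam / 8                               # grid step, fine vs. lambda/2
n_fft = 1024
grid = np.zeros(n_fft, dtype=complex)
np.add.at(grid, np.round(x / dx).astype(int), w)  # accumulate shared bins

# The FFT over the grid evaluates the array factor
#   AF(u) = sum_i w_i * exp(j*2*pi*x_i*u/lambda)
# at the u = sin(theta) values implied by the grid spacing.
af = np.fft.fft(grid)
u = np.fft.fftfreq(n_fft, d=dx) * lam      # u = sin(theta); keep |u| <= 1
visible = np.abs(u) <= 1.0
peak_u = u[visible][np.argmax(np.abs(af[visible]))]
print(f"{visible.sum()} beam directions evaluated with one FFT; "
      f"mainlobe at u = sin(theta) = {peak_u:.2f}")
```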
56

Dehydration Of Aqueous Aprotic Solvent Mixtures By Pervaporation

Sarialp, Gokhan 01 February 2012
Aprotic solvents are organic solvents that do not easily react with substances dissolved in them and do not exchange protons, despite their strong power to dissolve ions and polar groups. This characteristic makes aprotic solvents very suitable intermediates in many industries producing pharmaceuticals, textile auxiliaries, plasticizers, stabilizers, adhesives, and ink. Dehydration of these mixtures and recirculation of the valuable materials are substantial issues in industrial applications. The conventional method for recovering aprotic solvents has been distillation, which requires an excessive amount of energy to achieve the desired recovery. Hydrophilic pervaporation, a membrane-based dehydration method with low energy consumption, may be an alternative. Because of the high dissolving power of aprotic solvents, only inorganic membranes can be employed for this application. In this study, three types of inorganic membrane (NaA zeolite, optimized silica, and HybSi) were employed. The main objective was to investigate the effect of membrane type and operating parameters (feed composition in the range 50-5% and temperature in the range 50-100 °C) on the pervaporative dehydration of the aprotic solvents dimethylacetamide, dimethylformamide, and N-methylpyrrolidone. During the experiments, feed samples were analyzed by the Karl Fischer titration method and permeate samples by gas chromatography. The experiments showed that proper dehydration of aqueous aprotic solvent mixtures was achieved with all three membranes investigated. In the target feed water-content range (50 to 20 wt%), permeate water contents were higher than 98 wt% for all membranes, which was quite acceptable. Moreover, the NaA zeolite membrane exhibited higher fluxes than optimized silica and HybSi in the composition range of 50 to 15% water at 50 °C. It was also observed that the HybSi membrane had higher fluxes and permeate water contents than the optimized silica membrane for all solvents. On the other hand, the rate of decrease of the permeate flux depended on the solvent for the optimized silica and HybSi membranes: with both membranes, the permeate flux for dimethylformamide decreased much more slowly than that for N-methylpyrrolidone. Furthermore, the results showed that the permeate fluxes of the HybSi membrane increased with increasing operating temperature due to the change of solvent activity in the mixture; an Arrhenius-type equation was used to describe the change of flux with temperature, and the activation energy of water for diffusion through the HybSi membrane was calculated as 8980 cal/mol.
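The Arrhenius-type relation referred to above has the standard form shown below (a reconstruction from the surrounding text, not an equation copied from the thesis). With the reported Ea ≈ 8980 cal/mol and R = 1.987 cal/(mol·K), raising the feed temperature from 50 °C (323 K) to 100 °C (373 K) would multiply the water flux by roughly exp(1.88) ≈ 6.5, assuming a single activation energy holds over that range.

```latex
J = J_0 \exp\!\left(-\frac{E_a}{RT}\right)
\quad\Longrightarrow\quad
\ln\frac{J_2}{J_1} = -\frac{E_a}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right),
\qquad E_a \approx 8980\ \text{cal/mol}.
```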
58

Performance Study of ZigBee-based Green House Monitoring System

Nawaz, Shah January 2015
A wireless sensor network (WSN) is an emerging multi-hop wireless network technology, and greenhouse monitoring is one of the key applications of WSNs, in which parameters such as temperature, humidity, pressure, and power can be monitored. Here we study the performance of a simulation-based greenhouse monitoring system. To design the greenhouse monitoring system we used ZigBee-based devices (end devices, routers, coordinators, and actuators). The proposed greenhouse monitoring network was designed and simulated using the OPNET Modeler network simulator. The investigation is split into two parts: first, finding the optimal transmit (Tx) power for the sensor nodes, and second, studying how increasing the number of sensor nodes in the same greenhouse network affects overall network performance. Four ZigBee-based greenhouse network scenarios were simulated in OPNET Modeler: Scenario 1 covered 22 different Tx power levels (22 cases), while Scenarios 2, 3, and 4 used 63, 126, and 189 sensor nodes respectively. Network performance was evaluated in terms of network load, throughput, packets sent/received, and packet loss under varied Tx power and increasing numbers of sensor nodes. From the simulation results for the 22 Tx power cases in Scenario 1, packets sent/received and packet loss perform best with Tx power in the range 0.9 to 1.0 mW, perform moderately with Tx power in the range 0.05 to 0.8 mW, and perform worst below 0.05 mW, particularly with respect to dropped packets. For instance, for dropped packets (generated at the application layer but unable to join the network for lack of Tx power), 384 packets were dropped at a Tx power of 0.01 mW, 366 packets at 0.02 and 0.03 mW, and 336 packets at 0.04 and 0.05 mW. When the number of sensor nodes was increased, as in Scenarios 2, 3, and 4 (63, 126, and 189 nodes respectively), the MAC load, MAC throughput, and packets sent/received in Scenario 2 performed better than in Scenarios 3 and 4, while packet loss in Scenarios 2, 3, and 4 was 15%, 12%, and 83% respectively.
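To make the reported Scenario 1 results concrete, the small sketch below encodes the dropped-packet counts and performance bands quoted in the abstract and classifies a candidate Tx power; the function name and band boundaries simply restate the reported findings.

```python
# Reported Scenario 1 results from the abstract: application-layer packets
# dropped (unable to join the network) at low Tx power levels, plus the
# qualitative performance bands reported for higher Tx powers.
dropped_at_tx_mw = {0.01: 384, 0.02: 366, 0.03: 366, 0.04: 336, 0.05: 336}

def tx_band(tx_mw: float) -> str:
    """Classify a Tx power into the performance bands reported above."""
    if 0.9 <= tx_mw <= 1.0:
        return "best (highest packets sent/received, lowest loss)"
    if 0.05 <= tx_mw <= 0.8:
        return "moderate"
    return "worst (significant join failures / dropped packets)"

for tx, dropped in dropped_at_tx_mw.items():
    print(f"Tx = {tx} mW: {dropped} packets dropped -> {tx_band(tx)}")
print(f"Tx = 0.95 mW -> {tx_band(0.95)}")
```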
59

Microwave-energy harvesting at 5.8 GHz for passive devices

Valenta, Christopher Ryan 27 August 2014
The wireless transfer of power is the enabling technology for realizing a true internet of things. Broad sensor networks capable of monitoring environmental pollutants, health-related biological data, and building utility usage are just a small fraction of the myriad applications that are part of an ever-evolving ubiquitous lifestyle. Realizing these systems requires a means of powering their electronics without batteries. Removing the batteries from the billions or trillions of these envisioned devices not only reduces their size and cost but also avoids an ecological catastrophe. Increasing the efficiency of microwave-to-DC power conversion in energy-harvesting circuits extends the range and reliability of passive sensor networks. Multi-frequency waveforms are one technique that helps overcome the diode voltage threshold of energy-harvesting circuits, which limits energy-conversion efficiency at the low RF input powers typically encountered by sensors at the fringe of their coverage area. This thesis discusses a systematic optimization approach to the design of energy-conversion circuits along with multi-frequency waveform excitation. Using this methodology, a low-power 5.8 GHz rectenna showed an output-power improvement of over 20 dB at -20 dBm input power using a 3-POW (power-optimized waveform) compared to continuous waveforms (CW). The resulting efficiency is the highest reported for low-power 5.8 GHz energy harvesters. Additionally, new theoretical models help predict the maximum possible range of the next generation of passive electronics based upon trends in the semiconductor industry. These models predict improvements in diode turn-on power of over 20 dB using modern Schottky diodes; near the threshold, this improvement in turn-on power corresponds to an improvement in output power of hundreds of dB compared to CW.
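As a hedged illustration of how diode turn-on power bounds harvester range (a textbook Friis-equation calculation, not the thesis's model): the maximum powering distance is where the received power falls to the rectifier's turn-on threshold, so a 20 dB (100×) improvement in turn-on power extends range by a factor of 10. The transmitter power and antenna gains below are hypothetical.

```python
# Hypothetical Friis-equation sketch: maximum powering range of a passive
# 5.8 GHz harvester is set by the power at which its rectifier turns on.
# P_rx = P_tx * G_tx * G_rx * (lambda / (4*pi*d))**2  ->  solve for d.
import math

c = 3e8
f = 5.8e9                      # operating frequency from the thesis
lam = c / f

def max_range_m(p_tx_w, g_tx, g_rx, p_turnon_w):
    """Distance at which received power equals the turn-on threshold."""
    return (lam / (4 * math.pi)) * math.sqrt(p_tx_w * g_tx * g_rx / p_turnon_w)

p_tx = 1.0                     # 1 W transmitter (hypothetical)
g_tx, g_rx = 10.0, 3.0         # antenna gains, linear (hypothetical)

for p_on_dbm in (-10, -20, -30):   # a 20 dB threshold improvement ...
    p_on_w = 10 ** (p_on_dbm / 10) / 1000
    d = max_range_m(p_tx, g_tx, g_rx, p_on_w)
    print(f"turn-on {p_on_dbm:+d} dBm -> range {d:.1f} m")
# ... multiplies the maximum range by a factor of 10, since sqrt(100) = 10.
```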
60

Storage and aggregation for fast analytics systems

Amur, Hrishikesh 13 January 2014
Computing in the last decade has been characterized by the rise of data-intensive scalable computing (DISC) systems. In particular, recent years have witnessed a rapid growth in the popularity of fast analytics systems. These systems exemplify a trend where queries that previously involved batch processing (e.g., running a MapReduce job) on a massive amount of data are increasingly expected to be answered in near real-time with low latency. This dissertation addresses the problem that existing designs for various components used in the software stack for DISC systems do not meet the requirements demanded by fast analytics applications. In this work, we focus specifically on two components: 1. Key-value storage: Recent work has focused primarily on supporting reads with high throughput and low latency. However, fast analytics applications require that new data entering the system (e.g., newly crawled web pages, currently trending topics) be quickly made available to queries and analysis codes. This means that along with supporting reads efficiently, these systems must also support writes with high throughput, which current systems fail to do. In the first part of this work, we solve this problem by proposing a new key-value storage system, called the WriteBuffer (WB) Tree, that provides up to 30× higher write performance and similar read performance compared to current high-performance systems. 2. GroupBy-Aggregate: Fast analytics systems require support for fast, incremental aggregation of data with low-latency access to results. Existing techniques are memory-inefficient and do not support incremental aggregation efficiently when aggregate data overflows to disk. In the second part of this dissertation, we propose a new data structure called the Compressed Buffer Tree (CBT) to implement memory-efficient in-memory aggregation. We also show how the WB Tree can be modified to support efficient disk-based aggregation.
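A minimal sketch of the general buffered-aggregation idea behind write-optimized structures like the WB Tree and CBT (an illustration of the technique class under stated assumptions, not the dissertation's data structures): updates are appended cheaply to a small buffer and merged into the aggregate store in batches, so writes stay fast while aggregates remain incrementally queryable. All names are hypothetical.

```python
# Hypothetical sketch of buffered incremental GroupBy-Aggregate: cheap
# appends into a small buffer, periodically compacted into the main store.
# Illustrates the general write-optimized technique, not the CBT itself.
from collections import defaultdict

class BufferedAggregator:
    def __init__(self, combine, buffer_limit=4):
        self.combine = combine            # associative merge, e.g. addition
        self.buffer = []                  # unsorted (key, value) appends: O(1)
        self.buffer_limit = buffer_limit
        self.store = defaultdict(int)     # compacted aggregates

    def insert(self, key, value):
        self.buffer.append((key, value))  # fast path: no lookup, no sort
        if len(self.buffer) >= self.buffer_limit:
            self._compact()

    def _compact(self):
        # Batch-merge buffered updates into the aggregate store.
        for key, value in self.buffer:
            self.store[key] = self.combine(self.store[key], value)
        self.buffer.clear()

    def query(self, key):
        # Low-latency reads must see buffered updates too: flush first.
        self._compact()
        return self.store[key]

agg = BufferedAggregator(combine=lambda a, b: a + b)
for word in ["the", "web", "the", "page", "the"]:
    agg.insert(word, 1)
print(agg.query("the"))  # -> 3
```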
