About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
621

Cooperation and resource allocation in relay and multicarrier systems

Shi, Jia January 2015
In modern wireless communications, various techniques have been developed to exploit the dynamics inherent in wireless channels. Diversity has been recognized as one of the key techniques, with the potential to significantly increase the capacity and reliability of wireless communication systems. Relay communication with possible cooperation among some nodes is capable of achieving spatial diversity by forming a virtual antenna array for reception and/or transmission. Dynamic resource allocation is capable of taking advantage of the time-varying characteristics of wireless channels and of the systems themselves, yielding a promising increase in energy- and spectrum-efficiency. This thesis focuses on cooperation and resource allocation in relay and multicarrier systems, with the aim of designing low-complexity algorithms that achieve the highest possible spectrum-efficiency and reliability. First, we investigate and compare the error performance of a two-hop communication link (THCL) system with multiple relays, when distributed and cooperative relay processing schemes are respectively employed. Our main objectives are to find general and relatively simple ways to estimate the error performance, and to demonstrate the trade-offs of using cooperative relay processing. The error performance of the THCL employing various relay processing schemes is investigated, with emphasis on the cost of cooperation among relays. In order to analyze the error performance of THCL systems, novel approximation approaches, including two Nakagami approximation methods and one Gamma approximation method, are proposed. With the aid of these approximation approaches, a range of closed-form formulas for the error rate of THCL systems is derived. Our studies show that cooperation among relays may consume a significant portion of system energy, which should not be ignored in the design of cooperative systems.
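The abstract does not detail the Gamma approximation, but a common way to fit a Gamma distribution to a positive channel statistic is moment matching; the sketch below illustrates the idea on a distribution whose Gamma fit is known exactly (the choice of moment matching and the test distribution are assumptions for illustration, not the thesis's actual derivation):

```python
import numpy as np

def gamma_fit_moments(samples):
    """Moment-match a Gamma(shape k, scale theta) to positive samples:
    k = mu^2 / var, theta = var / mu."""
    mu = samples.mean()
    var = samples.var()
    return mu**2 / var, var / mu

rng = np.random.default_rng(0)
# A sum of 4 i.i.d. Exp(1) variables is exactly Gamma(k=4, theta=1),
# so moment matching should recover those parameters.
x = rng.exponential(1.0, size=(200_000, 4)).sum(axis=1)
k, theta = gamma_fit_moments(x)
```

The same two-moment fit can then be applied to the combined channel gain of a multi-relay link, for which no exact closed form may exist.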
Second, resource allocation, including both power- and subcarrier-allocation, is investigated in the context of single-cell downlink orthogonal frequency-division multiple-access (OFDMA) and multicarrier direct-sequence code-division multiple-access (MC DS-CDMA) systems. Our resource allocation aims to maximize the system reliability without trading off the attainable spectrum-efficiency of the system, while keeping the complexity as low as possible. For the sake of low-complexity implementation, we carry out power- and subcarrier-allocation separately in two stages, which has been shown to incur little performance loss. On this topic, we propose a range of subcarrier-allocation algorithms and study their performance in the OFDMA and MC DS-CDMA systems. In general, our proposed algorithms are designed either to avoid, as far as possible, assigning users the worst subchannels, or to assign users the best possible subchannels. Our studies show that all the proposed algorithms belong to the family of low-complexity subcarrier-allocation algorithms, and that they outperform all the other suboptimal reference algorithms considered, in terms of both error and spectrum-efficiency performance. Furthermore, some of our proposed subcarrier-allocation algorithms are capable of achieving performance close to that of the optimum subcarrier-allocation algorithm. Finally, based on our subcarrier-allocation algorithms, we investigate resource allocation in multicell downlink OFDMA and MC DS-CDMA systems, with emphasis on the mitigation of intercell interference (InterCI). Specifically, we extend the subcarrier-allocation algorithms proposed for the single-cell systems to the multicell scenarios, in which each base station (BS) independently carries out the subcarrier-allocation. After the subcarrier-allocation, minimal BS cooperation is then introduced to efficiently mitigate the InterCI.
In the multicell downlink OFDMA systems, two novel InterCI mitigation algorithms are proposed, both of which set up space-time block coding (STBC) aided cooperative transmissions to the users with poor signal-to-interference ratio (SIR). Our studies show that both proposed algorithms can significantly increase the spectrum-efficiency of the multicell downlink OFDMA systems. In the multicell MC DS-CDMA systems, after the subcarrier-allocation, we propose two low-complexity code-allocation algorithms, which only require the BSs to share the large-scale fading information, comprising the propagation pathloss and the shadowing effect. Our studies show that both code-allocation algorithms are highly efficient, and that they are capable of achieving significantly better error and spectrum-efficiency performance than random code-allocation (i.e., the case without code-allocation).
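As one hedged illustration of the "avoid the worst subchannels" idea above, a greedy rule can give the weakest user first pick (this particular heuristic is an assumption for illustration, not the thesis's actual algorithm):

```python
import numpy as np

def greedy_subcarrier_allocation(gains):
    """Assign one subcarrier per user from a (users x subcarriers)
    channel-gain matrix.  Each round, the unserved user whose best
    remaining subcarrier is weakest chooses first, so no user is
    forced onto only its bad subchannels."""
    n_users, n_sub = gains.shape
    free = set(range(n_sub))
    unserved = set(range(n_users))
    alloc = {}
    while unserved:
        # best remaining subcarrier for each unserved user
        best = {u: max(free, key=lambda s: gains[u, s]) for u in unserved}
        # the user whose best option is weakest picks first
        u = min(unserved, key=lambda v: gains[v, best[v]])
        alloc[u] = best[u]
        free.remove(best[u])
        unserved.remove(u)
    return alloc

# User 1's channels are all weak, so it gets priority for subcarrier 0,
# leaving user 0 its still-decent second choice.
alloc = greedy_subcarrier_allocation(np.array([[0.9, 0.5], [0.2, 0.1]]))
```

A pure max-gain greedy would hand user 0 subcarrier 0 and leave user 1 on its worst subchannel; the weakest-first ordering avoids that.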
622

Quantum-assisted multi-user wireless systems

Botsinis, Panagiotis January 2015
The high complexity of numerous optimal classical communication schemes, such as the Maximum Likelihood (ML) and Maximum A posteriori Probability (MAP) Multi-User Detector (MUD) designed for coherent detection, or the ML and MAP Multiple-Symbol Differential Detectors (MSDD) conceived for non-coherent receivers, often prevents their practical implementation. In this thesis we commence with a review and tutorial on Quantum Search Algorithms (QSA) and propose a number of hard-output and iterative Quantum-assisted MUDs (QMUD) and MSDDs (QMSDD). We employ a QSA termed the Dürr–Høyer Algorithm (DHA), which finds the minimum of a function, in order to perform near-optimal detection with a quadratic reduction in computational complexity compared to that of the ML MUD / MSDD. Two further techniques conceived for reducing the complexity of the DHA-based QMUD are also proposed. These novel QMUDs / QMSDDs are employed in the uplink of various multiple-access systems, such as Direct-Sequence Code Division Multiple Access systems, Space Division Multiple Access (SDMA) systems, as well as Direct-Sequence Spreading and Slow Subcarrier Hopping SDMA systems amalgamated with Orthogonal Frequency Division Multiplexing and Interleave Division Multiple Access systems. Furthermore, we follow a quantum approach to achieve the same performance as the optimal Soft-Input Soft-Output (SISO) classical detectors by replacing them with a Quantum Weighted Sum Algorithm (QWSA), which estimates the weighted sum of all the evaluations of a function. We propose a SISO QMUD / QMSDD scheme, which is the quantum-domain equivalent of the MAP MUD / MSDD. Both our EXtrinsic Information Transfer (EXIT) charts and Bit Error Ratio (BER) curves show that the computational complexity of the proposed QMUD / QMSDD is significantly lower than that of the MAP MUD / MSDD, whilst their performance remains equivalent.
Moreover, we propose two additional families of iterative DHA-based QMUD / QMSDDs for performing near-optimal MAP detection exhibiting an even lower tunable complexity than the QWSA QMUD. Several variations of the proposed QMUD / QMSDDs have been developed and they are shown to perform better than the state-of-the-art low-complexity MUDs / MSDDs at a given complexity. Their iterative decoding performance is investigated with the aid of non-Gaussian EXIT charts.
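The Dürr–Høyer minimum search referred to above has a simple control loop: repeatedly lower a threshold by (quantum) sampling a uniformly random element that beats it. In the purely classical sketch below, the Grover subroutine is replaced by uniform sampling and its cost merely tallied with the usual sqrt(N/M) model, so this mimics only the loop structure, not the quantum speed-up:

```python
import math
import random

def durr_hoyer_sketch(values, rng):
    """Classical mimic of the Durr-Hoyer minimum search.

    Each round, a Grover search would return a uniformly random index
    whose value beats the current threshold, at a cost of roughly
    sqrt(N / M) oracle calls, M being the number of better elements.
    Here that sampling is done classically and the cost only tallied."""
    n = len(values)
    best = rng.randrange(n)      # random initial threshold
    cost = 0.0
    while True:
        better = [i for i in range(n) if values[i] < values[best]]
        if not better:
            return best, cost    # no element beats the threshold: done
        cost += math.sqrt(n / len(better))  # modelled Grover cost
        best = rng.choice(better)

vals = [17, 3, 42, 8, 99, 1, 25]
idx, cost = durr_hoyer_sketch(vals, random.Random(0))
```

Whatever the random choices, the loop can only terminate at the global minimum, which is what makes the detector near-optimal.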
623

Addressing the computational issues of the Shapley value with applications in the smart grid

Maleki, Sasan January 2015
We consider the computational issues that arise when using the Shapley value in practical applications. Calculating the Shapley value involves computing the value of an exponential number of coalitions, which poses a significant computational challenge in two cases: (i) when the number of agents (players) is large (e.g., more than 20), and (ii) when the time complexity of the characteristic function is high. However, to date, researchers have aimed to address only the first case, although with limited success. To address the first issue, we focus on approximating the Shapley value. In more detail, building upon the existing sampling-based approaches, we propose an improved error bound for approximating the Shapley value using simple random sampling (SRS), which can be used in any superadditive game. Moreover, we put forward the use of stratified sampling, which can lead to smaller standard errors. We propose two methods for minimising the standard error in supermodular games and in a class of games that have a property we call order-reflecting. We show that, among others, newsvendor games, which have applications in the smart grid, exhibit this property. Furthermore, to evaluate our approach, we apply our stratified sampling methods to an instance of newsvendor games consisting of 100 agents using real data. We find that the standard error of stratified sampling in our experiments is on average 48% lower than that of SRS. To address the second issue, we propose that the characteristic function of the game be approximated. This way, calculating the Shapley value becomes straightforward. However, in order to maintain fairness, we argue that, in distributing the value of the grand coalition, the agents' contribution to the complexity of the characteristic function must be taken into account.
As such, we propose the bounded rational Shapley value, which, using the additivity axiom of the Shapley value, ensures that the share of each agent reflects its contribution to the difficulty of computing the coalition values. We demonstrate the usefulness of this approach in a demand response scenario where a number of apartments want to fairly divide the discount they receive for coordinating their cooling loads.
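The simple-random-sampling estimator that the stratified methods refine averages each player's marginal contribution over random permutations; a minimal sketch (the additive test game is an assumption, chosen because the estimator is exact for it):

```python
import random

def shapley_srs(players, value, n_samples, rng):
    """Approximate Shapley values by averaging each player's marginal
    contribution over `n_samples` uniformly random permutations."""
    est = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = list(players)
        rng.shuffle(order)
        coalition = []
        prev = value(frozenset())
        for p in order:
            coalition.append(p)
            v = value(frozenset(coalition))
            est[p] += v - prev   # marginal contribution of p
            prev = v
    return {p: s / n_samples for p, s in est.items()}

# In an additive game the marginal contribution of a player is always
# its own weight, so the estimate is exact for any sample size.
weights = {'a': 1.0, 'b': 2.0, 'c': 3.0}
phi = shapley_srs(list(weights), lambda S: sum(weights[p] for p in S),
                  200, random.Random(0))
```

Stratifying the samples by the position of a player in the permutation, as the thesis proposes, reduces the variance of this estimator without changing its expectation.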
624

Compositional specification and reachability checking of net systems

Stephens, Owen January 2015
Concurrent systems are frequently scrutinised using automated model checking, routinely using Petri nets as a model. While for small system models it is often sufficient to give the system specification in a monolithic manner, for larger systems this approach is infeasible. Instead, a compositional, or component-wise, specification can be used. However, while existing model checking techniques sometimes allow the specification of nets in terms of components, the techniques used for checking properties of the system all consider the composed, global net. In this thesis, we investigate and advocate compositional system specification and an alternative approach to model checking that uses the structural compositional information to its advantage, vastly improving efficiency in many examples. In particular, we examine the categorical structure of component nets and their semantics, showing that compositionality arises as the functoriality of a map between the two categories. We introduce contextual Petri Nets with Boundaries (PNBs), adding read arcs, which naturally model behaviour that non-destructively reads the token state of a place. Furthermore, we introduce a type-checked specification language that allows us to compositionally construct systems to be modelled using PNBs, whilst ensuring that only correct compositions are expressible. We then discuss and implement compositional state-space generation, which can be used to check reachability. Via optimisations using weak language equivalence and memoisation, we obtain substantial speed-ups and demonstrate that our checker outperforms the current state-of-the-art on several examples. A final contribution is the compositional specification of existing benchmark examples, in a more natural, component-wise style.
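The monolithic baseline that the compositional approach improves upon is exploration of the global net's marking graph; it can be sketched as a memoised breadth-first search (the toy two-place net is an assumption for illustration):

```python
from collections import deque

def reachable(initial, transitions):
    """Breadth-first exploration of a Petri net's reachable markings.

    initial: tuple of token counts per place.
    transitions: list of (consume, produce) per-place count tuples.
    Visited markings are memoised so each is expanded only once."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for consume, produce in transitions:
            # a transition is enabled if every place holds enough tokens
            if all(m[i] >= consume[i] for i in range(len(m))):
                m2 = tuple(m[i] - consume[i] + produce[i]
                           for i in range(len(m)))
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen

# Two places, one transition moving a token from place 0 to place 1.
marks = reachable((2, 0), [((1, 0), (0, 1))])
```

The global marking graph grows exponentially with the number of components, which is exactly why reasoning over the composition structure instead pays off.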
625

Optical properties of metal nanoparticles and their influence on silicon solar cells

Temple, Tristan Leigh January 2009
The optical properties of metal nanoparticles have been investigated by simulation and experimental techniques. The aim of this investigation was to identify how to use metal nanoparticles to improve light-trapping in silicon solar cells. To do this we require nanoparticles that exhibit a high scattering efficiency and low absorption (i.e. high radiative efficiency) at near-infrared wavelengths. The simulation results identified Ag, Au, Cu and Al as potential candidates for use with silicon solar cells. The optical properties of Ag, Au and Cu nanoparticles are very similar above 700 nm. Below this wavelength Ag was found to be the preferred choice due to a decreased effect from interband transitions in comparison with Au and Cu. Al nanoparticles were found to exhibit markedly different optical properties from identical noble-metal nanoparticles, with broader, weaker resonances that can be excited further into the UV. However, Al nanoparticles were found to exhibit higher absorption than noble metals in the NIR due to a weak interband region centred at around 825 nm. Tuning of the resonance position into the NIR was demonstrated by many methods, and extinction peaks exceeding 1200 nm can be achieved with all of the metals studied. However, it is important that the method used to red-shift the extinction peak does not also decrease the radiative efficiency. Core-shell nanoparticles, triangular nanoparticles and platelet-type nanoparticles were found to be unsuitable for silicon solar cell applications due to their low radiative efficiencies. Instead, we propose the use of large (> 150 nm) Ag spheroids with moderate aspect ratios. A maximum radiative efficiency of 0.98 was found for noble-metal nanospheres when the diameter exceeded 150 nm.
The optical properties of Au and Al nanoparticles fabricated by electron-beam lithography were found to be in good agreement with simulations, provided that the substrate and local dielectric environment were accounted for by inclusion of an effective medium in the model. Cr adhesion layers were found to substantially weaken the extinction peaks of Au nanoparticles, and also to cause a strong decrease in radiative efficiency. Adhesion layers were not required for Al nanoparticles. The morphological and optical properties of Ag island films were found to be highly dependent on the layer thickness, deposition speed and anneal temperature. Dense arrays with average particle sizes ranging from 25 nm to 250 nm were achieved using anneal temperatures lower than 200 °C. The largest nanoparticles were found to exhibit high extinction from 400 nm to 800 nm. Depositing Ag nanoparticles onto a-Si:H solar cells was found to have two effects on the spectral response. At short wavelengths the QE was decreased due to absorption by small particles or back-scattering by larger particles. At longer wavelengths large maxima and minima are present in the QE spectra. This latter effect is not due to excitation of surface plasmons, but is instead related to modification of interference effects in the thin-film layer stack.
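The radiative efficiency discussed above is the fraction of extinguished light that is scattered rather than absorbed. In the quasi-static (dipole) limit it can be estimated from the polarisability, as sketched below; the permittivity value is a rough Ag-like NIR placeholder rather than a fitted material model, and the approximation degrades for the large particles the thesis favours:

```python
import math

def radiative_efficiency(eps_particle, eps_medium, radius, wavelength):
    """Quasi-static dipole estimate eta = C_sca / (C_sca + C_abs) for a
    small sphere, with polarisability
    alpha = 4*pi*r^3 * (eps - eps_m) / (eps + 2*eps_m)."""
    k = 2 * math.pi * math.sqrt(eps_medium) / wavelength
    alpha = (4 * math.pi * radius**3
             * (eps_particle - eps_medium) / (eps_particle + 2 * eps_medium))
    c_sca = k**4 * abs(alpha)**2 / (6 * math.pi)  # dipole scattering
    c_abs = k * alpha.imag                        # material absorption
    return c_sca / (c_sca + c_abs)

# Loosely Ag-like permittivity at 800 nm (illustrative numbers only)
# for a 150 nm diameter sphere in air.
eta = radiative_efficiency(-20 + 1j, 1.0, 75e-9, 800e-9)
```

Even this crude estimate lands above 0.9 for a low-loss metal at that size, consistent in magnitude with the 0.98 figure quoted for large Ag spheres.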
626

A decentralised graph-based framework for electrical power markets

Cerda Jacobo, Jaime January 2010
One of the main tools used to clear electrical power markets across the world is the DC optimal power flow. Nevertheless, the classical model designed for vertically integrated power systems is now under pressure, as new issues have to be addressed: partial information introduced by the deregulation process, scalability demands posed by multiple small renewable generation units as well as microgrids, and market integration. This dissertation presents a graph-based decentralised framework for the electrical power market based on the DC optimal power flow, in which Newton's method is solved using graph techniques. On this basis, the main principles associated with the solution of systems of linear equations using a proper graph representation are presented. Then, the burden that enforcing or relaxing inequality constraints imposes on the matrix representation, through the handling of rows and columns, is addressed in the graph-based model. To this end the model is extended by introducing the notion of conditional links. Next, this model is enhanced to address graph decentralisation by introducing the weak-link concept as a means to disregard some links in the solution process while still allowing the exact gradient to be computed. Then, recognising that the DC optimal power flow is a quadratic separable program, the model is generalised to quadratic separable programs. Finally, an agent-oriented approach is proposed in order to implement the graph decentralisation. Here the agents clear the market by interchanging some economic information as well as some non-strategic information. The main contribution presented in this document is the application of graph methods to solve quadratic separable optimisation problems using Newton's method. This approach leads to a graph model whose structure is well defined.
Furthermore, when applied to the DC optimal power flow, this representation leads to a graph whose solution is totally embedded within the graph, as both the Hessian and the gradient information can be accessed directly from the graph topology. In addition, the graph can be decentralised by providing a means to evaluate the exact gradient. As a result, when applied to the DC optimal power flow, the network interconnectivity is converted into local intercommunication tasks. This leads to a decentralised solution in which the intercommunication is based mainly on economic information.
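For a quadratic separable program with only equality constraints, Newton's method converges in a single step, because the KKT conditions form one linear system. A two-generator economic-dispatch sketch of that solve (the cost coefficients and the 10 MW demand are illustrative assumptions, and network line limits are ignored):

```python
import numpy as np

# Generator costs C_i(p_i) = 0.5*a_i*p_i^2 + b_i*p_i; the two units
# must jointly meet a 10 MW demand: minimise cost s.t. p1 + p2 = 10.
a = np.array([2.0, 1.0])
b = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
d = np.array([10.0])

# KKT system [Q A^T; A 0][p; lam] = [-b; d]; for a quadratic
# objective one Newton step solves it exactly.
Q = np.diag(a)
kkt = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(kkt, np.concatenate([-b, d]))
p, lam = sol[:2], sol[2]
```

At the optimum both units run at the same marginal cost (here 25/3), the shadow price of the demand constraint; the thesis's contribution is performing and decentralising this solve over the graph itself rather than on assembled matrices.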
627

Performance of network coded systems supported by automatic repeat request

Qin, Yang January 2012
Inspired by network information theory, network coding was invented in 2000. Since then, the theory and application of network coding have been intensively researched, and various network coding schemes have been proposed and studied. It has been demonstrated that packet-level network coding has the potential to outperform traditional routing strategies in packet networks. By taking advantage of the information carried by the packets sent to different receivers (sinks) in a packet network, packet-level network coding is capable of reducing the number of packets transmitted over the network. Therefore, packet-level network coding has the potential to boost the throughput of packet networks. By contrast, symbol-level network coding, which is also referred to as physical-layer network coding, is capable of exploiting interference instead of avoiding it, thereby improving the channel capacity and/or enhancing the reliability of communications. In this thesis, our focus is on packet-level network coding. The performance of communication systems with network coding has been widely investigated from different perspectives, mainly under the assumption that packets are transmitted over networks reliably and without errors. However, in practical communication networks, transmission errors always occur, and error-detection or error-correction techniques are required in order to ensure reliable communications. Therefore, in this thesis, we focus our attention mainly on studying the performance of communication networks with packet-level network coding, where Automatic Repeat reQuest (ARQ) schemes are employed for error protection. Three typical ARQ schemes are invoked in our research: Stop-and-Wait ARQ (SW-ARQ), Go-Back-N ARQ (GBN-ARQ) and Selective-Repeat ARQ (SR-ARQ). Our main concern is the impact of network coding on the throughput performance of network coding nodes or networks containing network coding nodes.
Additionally, the impact of network coding on the delay performance of network coding nodes or coded networks is also addressed. In a little more detail, in Chapter 3 of the thesis, we investigate the performance of networks employing packet-level network coding, when assuming that transmission from one node to another is not ideal and that a certain ARQ scheme is employed for error control. Specifically, the delay characteristics of a general network coding node are first analyzed. Our studies show that, when a coding node invokes more incoming links, the average delay for successfully forming coded packets increases. Then, the delay performance of the Butterfly network is investigated, which shows that the delay generated by a Butterfly network is dominated by the communication path containing the network coding node. Finally, the performance of the Butterfly network is investigated by simulation approaches, when the Butterfly network employs SW-ARQ, GBN-ARQ, or SR-ARQ for error control. The achievable throughput, the average delay, as well as the standard deviation of the delay are considered. Our performance results show that, for a given Packet Error Rate (PER), the SR-ARQ scheme is capable of attaining the highest throughput and the lowest delay among these three ARQ schemes. In Chapter 4, the steady-state throughput of general network coding nodes is investigated, when the SW-ARQ scheme is employed. We start by considering a Two-Input-Single-Output (2ISO) network coding node without queueing buffers. Expressions for computing the steady-state throughput are derived. Then, we extend our analysis to general H-Input-Single-Output (HISO) network coding nodes without queueing buffers. Finally, our analytical approaches are further extended to HISO network coding nodes with queueing buffers. A range of expressions for evaluating the steady-state throughput is obtained.
The throughput performance of the HISO network coding nodes is investigated by both analytical and simulation approaches. Our studies in this chapter show that the throughput of a network coding node decreases as the number of its incoming links increases. This property implies that, in a network coding system, the coding nodes may form bottlenecks for information delivery. Furthermore, the studies show that adding buffers to a network coding node may improve the throughput performance of a network coding system. Then, in Chapters 5 and 6, we investigate the steady-state throughput performance of general network coding nodes, when GBN-ARQ (Chapter 5) or SR-ARQ (Chapter 6) is employed. Again, analytical approaches for evaluating the steady-state throughput of general network coding nodes are developed, and a range of analytical results is obtained. Furthermore, the throughput performance of the network coding nodes supported by GBN-ARQ or SR-ARQ is investigated by both simulation and numerical approaches. Finally, in Chapter 7, the conclusions drawn from the research are summarized and possible directions for future research are proposed.
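The bufferless 2ISO case has a simple Monte Carlo counterpart: under stop-and-wait ARQ each input needs a geometric number of slots, and a coded packet forms only once both inputs have arrived, costing max(G1, G2) slots. The slot-level model below (instantaneous, error-free ACKs) is an illustrative assumption, not the thesis's exact analysis:

```python
import random

def two_iso_throughput(per, n_packets, rng):
    """Monte Carlo throughput (coded packets per slot) of a bufferless
    2-input network coding node fed by two stop-and-wait ARQ links,
    each with packet error rate `per`."""
    def slots_until_success():
        n = 1
        while rng.random() < per:   # retransmit until received
            n += 1
        return n

    total_slots = sum(max(slots_until_success(), slots_until_success())
                      for _ in range(n_packets))
    return n_packets / total_slots

t_half = two_iso_throughput(0.5, 20_000, random.Random(1))
```

For per = 0.5 the expected cost per coded packet is E[max(G1, G2)] = 2 + 2 - 4/3 = 8/3 slots, i.e. a throughput of 3/8: slower than either link alone, which is exactly the coding-node bottleneck effect described above.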
628

Modified transmission and fluorescence in aperiodic and biomimetic photonic crystals

Pollard, Michael E. January 2011
Complete photonic bandgaps (PBGs) are more readily achieved in highly-symmetric photonic crystals (PhCs). Aperiodic crystals (quasicrystals) with arbitrarily high orientational order are promising candidates for lowering the dielectric contrast necessary to open PBGs. This thesis investigates the connection between the structural and optical properties of four PhC lattices by studying the effects on transmission and fluorescence spectra. In order of increasing structural isotropy these lattices are: hexagonal, Archimedean-like, Stampfli, and a biomimetic 'sunflower'. High structural isotropy is associated with weaker diffraction. The sunflower's Fourier spectrum is defined by a dense ring of weak reciprocal lattice vectors. Its local morphology, which is everywhere unique, continuously transforms between localised 4- and 6-fold symmetry. All other crystals are spatially uniform with pure point spectra. Although structurally similar to the Archimedean lattice, the Stampfli improves isotropy without sacrificing diffraction efficiency. TM gaps of high-contrast (Δε = 8.61) rod-type PhCs are shown by FDTD simulations to be nearly independent of the lattice geometry. The primary gaps are sensitive to random rod sizes, which disrupt the coherent coupling between the individual rod resonances. Transmission spectra for TE polarisation or hole-type PhCs are more dependent on Bragg reflection due to weak or non-existent Mie resonances. In small samples, the TM gap is typically wider in less isotropic crystals. Much larger samples demonstrate the importance of structural isotropy and long-range interactions in low-Δε PhCs. The sunflower's 21% TM gap is, to date, the widest TM PBG reported for Δε = 1. The Stampfli also supports a TE gap in the same range as its 14% TM gap, thus yielding a 4.6% absolute PBG. Further band diagram calculations on an 'approximant' of the sunflower reveal the presence of intrinsic dipolar and monopolar defect states.
Microwave characterisation of rod-type samples (Δε = 8.61) showed complete TM PBGs (> 60 dB) with gap ratios ranging from 37.28% (hexagonal) to 25.85% (sunflower). Low-contrast samples (Δε = 1.6) showed complete TM PBGs (> 30 dB) with gap ratios rising from 10.37% (hexagonal) to an ambiguous value of either 10.48% or 20.95% for the sunflower, due to the unusual spiral structuring of the transmission spectra. The Stampfli also supports a complete TE gap (> 10 dB) that coincides with its 14.19% TM gap for a 3.55% absolute gap that, to the author's knowledge, represents the first conclusive demonstration of an absolute PBG for Δε = 1.6. A larger sunflower sample was shown to have an extremely large experimental (simulated) TM gap of 33.33% (23.16%), erroneously broadened by the non-parallel rods. A new approach to enhance the efficiency of upconversion pumping in RE-doped media is proposed, based on PBG suppression of emission from intermediate levels. Preliminary results indicate that visible emission from hexagonal and sunflower PhC slabs in 0.2 wt% Er:GLSO pumped at 808 nm is enhanced by up to 1.6x at 550 nm, or up to 4.5x at 525 nm. Subsequent analysis appears to rule out suppression of IR emission, and suggests modified thermal properties as the cause.
629

Theory and practice of coordination algorithms exploiting the generalised distributive law

Delle Fave, Francesco Maria January 2012
A key challenge for modern computer science is the development of technologies that allow interacting computer systems, typically referred to as agents, to coordinate their decisions whilst operating in an environment with minimal human intervention. By so doing, the decision-making capabilities of each of these agents should be improved by making decisions that take into account what the remaining agents intend to do. Against this background, the focus of this thesis is to study and design new coordination algorithms capable of achieving this improved performance. In this line of work, there are two key research challenges that need to be addressed. First, the current state-of-the-art coordination algorithms have only been tested in simulation. This means that their practical performance still needs to be demonstrated in the real world. Second, none of the existing algorithms are capable of solving problems where the agents need to coordinate over complex decisions, which typically require trading off several parameters such as multiple objectives, the parameters of a sufficient statistic, or the sample value and bounds of an estimator. However, such parameters typically characterise the agents' interactions within many real-world domains. For this reason, deriving algorithms capable of addressing such complex interactions is a key challenge in bringing research on coordination algorithms one step closer to successful deployment. The aim of this thesis is to address these two challenges. To achieve this, we make two types of contribution. First, we develop a set of practical contributions to address the challenge of testing the performance of state-of-the-art coordination algorithms in the real world.
More specifically, we perform a case study on the deployment of the max-sum algorithm, a well-known coordination algorithm, on a system that allows the first responders at the scene of a disaster to request imagery-collection tasks over the most relevant areas from a team of unmanned aerial vehicles (UAVs). These agents then coordinate to complete the largest number of tasks. In more detail, max-sum is based on the generalised distributive law (GDL), a well-known algebraic framework that has been used in disciplines such as artificial intelligence, machine learning and statistical physics to derive effective algorithms for solving optimisation problems. Our contribution is the deployment of max-sum on real hardware and the evaluation of its performance in a real-world setting. More specifically, we deploy max-sum on two UAVs (hexacopters) and test it in a number of different settings. These tests show that max-sum does indeed perform well when confronted with the complexity and unpredictability of the real world. The second category of contributions is theoretical in nature. More specifically, we propose a new framework and a set of solution techniques to address the complex-interactions requirement. To achieve this, we move back to theory and tackle a new class of problems involving agents engaged in complex interactions defined by multiple parameters. We name this class partially ordered distributed constraint optimisation problems (PO-DCOPs). Essentially, this generalises the well-known distributed constraint optimisation problem (DCOP) framework to settings in which agents make decisions over multiple parameters such as multiple objectives, the parameters of a sufficient statistic, or the sample value and bounds of an estimator.
To measure the quality of these decisions, it becomes necessary to strike a balance between these parameters; to achieve this, the outcome of these decisions is represented using partially ordered constraint functions. Given this framework, we present three sub-classes of PO-DCOPs, each focusing on a different type of complex interaction. More specifically, we study (i) multi-objective DCOPs (MO-DCOPs), in which the agents' decisions are defined over multiple objectives, (ii) risk-aware DCOPs (RA-DCOPs), in which the outcome of the agents' decisions is not known with certainty and thus the agents need to carefully weigh the risk of making decisions that might lead to poor and unexpected outcomes, and (iii) multi-armed bandit DCOPs (MAB-DCOPs), where the agents need to learn the outcome of their decisions online. To solve these problems, we again exploit the GDL framework. In particular, we employ the flexibility of the GDL to obtain either optimal or bounded approximate algorithms for solving PO-DCOPs. The key insight is to use the algebraic properties of the GDL to instantiate well-known DCOP algorithms such as DPOP, Action-GDL or bounded max-sum to solve PO-DCOPs. Given the properties of these algorithms, we derive a new set of solution techniques. To demonstrate their effectiveness, we study the properties of these algorithms empirically on various instances of MO-DCOPs, RA-DCOPs and MAB-DCOPs. Our experiments emphasise two key traits of the algorithms. First, bounded approximate algorithms perform well in terms of our requirements. Second, optimal algorithms incur an increase in both the computation and communication load necessary to solve PO-DCOPs, because they are trying to optimally solve a problem that is potentially more complex than canonical DCOPs.
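The GDL property that max-sum, DPOP and their relatives all exploit is that + distributes over max, so a joint maximisation factorises into cheap local eliminations; a minimal variable-elimination sketch over a three-variable chain (the random table values are illustrative):

```python
import itertools
import random

rng = random.Random(0)
dom = range(3)
# Two pairwise constraint functions on a chain x1 - x2 - x3.
f12 = {(a, b): rng.uniform(0, 1) for a in dom for b in dom}
f23 = {(b, c): rng.uniform(0, 1) for b in dom for c in dom}

# Brute force: maximise over all 27 joint assignments.
brute = max(f12[a, b] + f23[b, c]
            for a, b, c in itertools.product(dom, dom, dom))

# GDL / variable elimination: since + distributes over max, push the
# maximisations over x1 and x3 inwards as messages to x2.
m1 = {b: max(f12[a, b] for a in dom) for b in dom}   # eliminate x1
m3 = {b: max(f23[b, c] for c in dom) for b in dom}   # eliminate x3
gdl = max(m1[b] + m3[b] for b in dom)
```

The elimination touches 3 + 9 + 9 entries instead of 27, and the gap widens exponentially with chain length; swapping (max, +) for other commutative semirings yields the other GDL algorithms the thesis instantiates.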
630

Design, modelling and characterisation of impact based and non-contact piezoelectric harvesters for rotating objects

Manla, Ghaithaa January 2010
This thesis highlights two different methods of extracting electrical energy from rotational forces using impact-based and non-contact piezoelectric harvesters. In this work, the centripetal force is used as the main acting force that causes the piezoelectric harvesters to produce output power. In order to achieve this, the harvesters are mounted in a horizontal position while the rotational forces are applied. The impact-based piezoelectric harvester consists of a tube with one pre-stressed piezoelectric beam mounted at each end. A ball bearing that is free to move between the two ends of the tube generates an impact force on the piezoelectric structures due to the effect of the centripetal force. The impact-based piezoelectric harvester is modelled, and its behaviour is analysed and verified experimentally. For the non-contact piezoelectric harvester, the applied force on the piezoelectric element is produced by a magnetic levitation system without the need for direct physical contact. The impact of the magnet size and shape is studied, and the results serve as a guideline for designing and optimising the piezoelectric harvester. The model of the non-contact piezoelectric harvester is derived and verified experimentally in order to analyse its behaviour under different boundary conditions. A comparison between the two harvesters is carried out, highlighting the advantages and the limitations of each.
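The driving force in both harvesters scales quadratically with rotation speed, F = m ω² r; a quick worked example (the 10 g bearing mass and 5 cm radius are hypothetical numbers, not the thesis's prototype dimensions):

```python
import math

def centripetal_force(mass_kg, radius_m, rpm):
    """F = m * omega^2 * r for a mass rotating at `rpm`."""
    omega = 2 * math.pi * rpm / 60.0   # rad/s
    return mass_kg * omega**2 * radius_m

# A hypothetical 10 g ball bearing at a 5 cm rotation radius.
f_slow = centripetal_force(0.010, 0.05, 600)    # ~2 N
f_fast = centripetal_force(0.010, 0.05, 1200)   # 4x larger
```

Doubling the rotation speed quadruples the force on the piezoelectric element, which is why the output power of such harvesters is strongly speed-dependent.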
