441

Synthesizing imperative distributed-memory implementations from functional data-parallel programs

Aubrey-Jones, Tristan January 2015 (has links)
Distributed memory architectures such as Linux clusters have become increasingly common but remain difficult to program. We target this problem and present a novel technique to automatically generate data distribution plans, and subsequently MPI implementations in C++, from programs written in a functional core language. This framework encodes distributed data layouts as types, which are then used both to search (via type inference) for optimal data distribution plans and to generate the MPI implementations. The main novelty of our approach is that it supports multiple collection types (distributed arrays, maps, and lists) rather than just arrays. We introduce the core language and explain our formalization of distributed data layouts. We describe how to search for data distribution plans using a type inference algorithm, and how we generate MPI implementations in C++ from such plans. We then show how our types can be extended to support local data layouts and improved array distributions. We also show how a theorem prover and suitable equational theories can be used to yield a better (i.e., more complete) type inference algorithm. We then describe the design of our implementation, and explain how we use a runtime performance-feedback directed search algorithm to find the best data distribution plans for different input programs. Finally, we present a conceptual and experimental evaluation which analyses the capabilities of our approach, and shows that our implementation can find distributed-memory implementations of several example programs, and that the performance of the generated programs is similar to that of hand-coded versions.
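As a rough, hand-written illustration of the kind of MPI/C++ code targeted here (not the thesis's actual generated output), the following sketch shows how a functional program such as sum (map square xs) might map onto a block-distributed array with a local loop and a collective reduction; the array length, block distribution and names are assumptions:

// Illustrative only: a hand-written sketch of the kind of MPI/C++ code the
// thesis generates automatically from a functional program such as
// sum (map square xs). The block distribution, array length and names are
// assumptions, not the thesis's actual generated output.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 1000000;               // global array length (assumed divisible by size)
    const long local_n = n / size;        // block distribution: each rank owns one slice

    // Each rank materialises only its own block of the distributed array.
    std::vector<double> local(local_n);
    for (long i = 0; i < local_n; ++i) {
        double x = static_cast<double>(rank * local_n + i);
        local[i] = x * x;                 // the "map square" stage, purely local
    }

    // Local partial sum followed by a collective reduction for the "sum" stage.
    double partial = 0.0;
    for (double v : local) partial += v;
    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) std::printf("sum = %.6e\n", total);
    MPI_Finalize();
    return 0;
}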
442

Broadband adaptive beamforming with low complexity and frequency invariant response

Koh, Choo Leng January 2009 (has links)
This thesis proposes different methods to reduce the computational complexity as well as to increase the adaptation rate of adaptive broadband beamformers. This is performed exemplarily for the generalised sidelobe canceller (GSC) structure. The GSC is an alternative implementation of the linearly constrained minimum variance beamformer, which can utilise well-known adaptive filtering algorithms, such as the least mean square (LMS) or the recursive least squares (RLS), to perform unconstrained adaptive optimisation. A direct DFT implementation, by which broadband signals are decomposed into frequency bins and processed by independent narrowband beamforming algorithms, is thought to be computationally optimum. However, this setup fails to converge to the time domain minimum mean square error (MMSE) if signal components are not aligned to frequency bins, resulting in a large worst-case error. To mitigate this problem of the so-called independent frequency bin (IFB) processor, overlap-save based GSC beamforming structures have been explored. These systems address the minimisation of the time domain MMSE, with a significant reduction in computational complexity when compared to time-domain implementations, and show better convergence behaviour than the IFB beamformer. By studying the effects that the blocking matrix has on the adaptive process for the overlap-save beamformer, several modifications are carried out to enhance both the simplicity of the algorithm and its convergence speed. These modifications result in a GSC beamformer with significantly lower computational complexity compared to the time domain approach while offering similar convergence characteristics. In certain applications, especially in the area of acoustics, there is a need to maintain constant resolution across a wide operating spectrum that may extend across several octaves. Attaining constant beamwidth is difficult, particularly if uniformly spaced linear sensor arrays are employed for beamforming, since spatial resolution is reciprocally proportional to both the array aperture and the frequency. A scaled aperture arrangement is introduced for the subband based GSC beamformer to achieve near uniform resolution across a wide spectrum, whereby an octave-invariant design is achieved. This structure can also be operated in conjunction with adaptive beamforming algorithms. Frequency-dependent tapering of the sensor signals is proposed in combination with the overlap-save GSC structure in order to achieve an overall frequency-invariant characteristic, and an adaptive version of the frequency-invariant overlap-save GSC beamformer is proposed. Broadband adaptive beamforming algorithms based on the family of least mean squares (LMS) algorithms are known to exhibit slow convergence if the input signal is correlated. To improve the convergence of the GSC when based on LMS-type algorithms, we propose the use of a broadband eigenvalue decomposition (BEVD) to decorrelate the input of the adaptive algorithm in the spatial dimension, for which an increase in convergence speed can be demonstrated over other decorrelating measures, such as the Karhunen-Loève transform. In order to address the remaining temporal correlation after BEVD processing, this approach is combined with subband decomposition through the use of oversampled filter banks. The resulting spatially and temporally decorrelated GSC beamformer provides further enhanced convergence speed over spatial or temporal decorrelation methods on their own.
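For readers unfamiliar with the adaptive building block, the following minimal sketch shows a generic normalised LMS (NLMS) filter update of the kind the GSC's unconstrained stage can employ; it is a plain single-channel time-domain filter, not the thesis's overlap-save, subband or BEVD-based structures, and the step size and tap count are placeholders:

// Minimal sketch of a normalised LMS (NLMS) update of the kind a GSC-style
// beamformer can use for its unconstrained adaptive stage. This is a generic
// single-channel time-domain filter; step size and tap count are placeholders.
#include <cstddef>
#include <vector>

class NlmsFilter {
public:
    NlmsFilter(std::size_t taps, double mu) : w_(taps, 0.0), x_(taps, 0.0), mu_(mu) {}

    // Push one input sample, return the filter output, and adapt towards `desired`.
    double step(double input, double desired) {
        for (std::size_t i = x_.size(); i-- > 1; ) x_[i] = x_[i - 1];  // shift delay line
        x_[0] = input;

        double y = 0.0, power = 1e-12;    // small constant avoids division by zero
        for (std::size_t i = 0; i < w_.size(); ++i) {
            y += w_[i] * x_[i];
            power += x_[i] * x_[i];
        }

        double e = desired - y;           // error signal
        double g = mu_ * e / power;       // normalised step
        for (std::size_t i = 0; i < w_.size(); ++i) w_[i] += g * x_[i];
        return y;
    }

private:
    std::vector<double> w_, x_;           // weights and tapped delay line
    double mu_;                           // adaptation step size, e.g. 0.1
};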
443

Design considerations of harvested-energy management

Ali, Mustafa January 2012 (has links)
Using energy harvesting to power autonomous sensor systems can meet the goal of perpetual operation. However, the uncertainty in the system's supply, coupled with size constraints, presents challenges in the design of such systems. To address these challenges, this thesis is concerned with effective management of harvested energy for matching supply and demand in order to operate perpetually with uniform performance. The thesis focuses on two fundamental design considerations in addressing these challenges: (i) managing the variability of the energy harvesting source, and (ii) matching the demand with the energy supply under the influence of the non-ideal characteristics of the harvesting system. To address the problem of variability of the energy source, the thesis focuses on effective prediction of harvested energy. An effective approach for evaluating the accuracy of solar energy prediction algorithms is proposed, and optimised values of the prediction algorithm parameters are determined to minimise prediction error. The problem of achieving uniform performance under supply variability is addressed by proposing a new prediction-based energy management policy. The results of the proposed policy are compared with other recently reported policies, and it is shown that the proposed policy achieves up to 41% lower variance in performance and 30% lower dead time of the system, which is important to achieve the goal of perpetual operation. To address the problem of effective matching of supply and demand, the thesis considers the design of the photovoltaic energy harvesting supply and storage subsystem in terms of its components' non-ideal characteristics. The influence of these characteristics on supply and demand is identified by modelling losses and component interdependencies, and empirically validated using a reference system design. Using the proposed modelling, the performance of recently reported energy management policies is evaluated to show that these are ineffective in achieving the goal of perpetual operation, and optimisations are proposed to address this.
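As context for the prediction component, the sketch below shows a slot-based exponentially weighted moving average (EWMA) predictor of harvested solar energy, a common baseline in this literature; it is not necessarily the algorithm evaluated or optimised in the thesis, and the number of slots and the smoothing factor are placeholder assumptions:

// Sketch of a slot-based exponentially weighted moving average (EWMA) predictor
// of harvested solar energy, a common baseline in the energy-harvesting
// literature. The slot count and smoothing factor below are placeholders, and
// this is not necessarily the algorithm evaluated in the thesis.
#include <array>
#include <cstddef>

template <std::size_t Slots = 48>         // e.g. 48 half-hour slots per day
class EwmaSolarPredictor {
public:
    explicit EwmaSolarPredictor(double alpha = 0.5) : alpha_(alpha) { history_.fill(0.0); }

    // Predicted harvest for a slot: the smoothed history of that slot on previous days.
    double predict(std::size_t slot) const { return history_[slot % Slots]; }

    // Once the slot has elapsed, fold the observed harvest into the per-slot average.
    void observe(std::size_t slot, double harvested_joules) {
        double& h = history_[slot % Slots];
        h = alpha_ * harvested_joules + (1.0 - alpha_) * h;
    }

private:
    double alpha_;                        // smoothing factor: responsiveness vs. noise rejection
    std::array<double, Slots> history_;
};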
444

Suspended gate silicon nanodot memory

Garcia Ramirez, Mario Alberto January 2011 (has links)
The non-volatile memory market has been driven by Flash memory since its invention more than three decades ago. Today, this non-volatile memory is used in a wide variety of devices and systems, from pen drives and mp3 players to cars, planes and satellites. However, the conventional floating gate memory technology in use for flash memory is facing a serious scalability issue: the tunnel oxide thickness cannot be reduced to less than 7 nm, as pointed out in the latest international technology roadmap for semiconductors (ITRS 2010) [1]. The limit imposed on the tunnel oxide layer constrains the programming and erasing times, the scalability and the endurance, among other parameters. To overcome these inherent issues, this research is focused on the co-integration of nano-electromechanical systems (NEMS) with metal-oxide-semiconductor (MOS) technology in order to create a new non-volatile and high-speed memory. The memory device that we are proposing is a high-speed non-volatile memory structure called the Suspended Gate Silicon Nanodot Memory (SGSNM) cell. This non-volatile memory device features a MOSFET as a readout element, a silicon nanodot (SiND) monolayer as the floating gate, and a movable suspended control gate isolated from the floating gate by an oxide layer and by an air gap. The fundamental novelty of this device is the introduction of a doubly-clamped beam as a movable control gate, through which the programming and erasing operations take place. To understand the behaviour of the doubly-clamped beam structure, it is analysed using analytical models, such as the parallel-plate capacitor model, as well as two- and three-dimensional (2D and 3D) finite element method (FEM) analysis. The programming and erasing operations within the SGSNM occur when the suspended control gate is in contact with the tunnel oxide layer. This is the point at which the quantum-mechanical tunnelling mechanism (Fowler-Nordheim) takes place. Through this mechanism, electrons are allowed to tunnel from the suspended control gate into the memory node and vice versa as a function of the applied voltage (bias). The tunnelling process is numerically analysed by implementing the Tsu-Esaki equation and the transfer matrix method within a homemade program which calculates the current density as a function of the tunnel oxide material and thickness. Both the suspended control gate and the tunnelling process are implemented as analog behavioural models within the SGSNM cell, which is simulated using a commercial circuit simulator. From a transient analysis of the suspended control gate, it was found that the suspended control gate takes 0.8 ns to pull in onto the tunnel oxide layer for a 1 μm-long doubly-clamped structure, whereas the time that the memory node takes to charge and discharge is 1.7 ns. Hence, the programming and erasing times are a combination of the mechanical pull-in time and the charging time, giving 2.5 ns, since both operations are symmetrical. Moreover, the suspended control gate was successfully fabricated and suspended. This was performed by depositing a thin layer of aluminium (500 nm) over the sacrificial layer (poly-Si) using an e-beam evaporator, and patterning it with doubly-clamped beam features through a photolithographic process. Using a combination of wet and dry etching processes, the aluminium and the sacrificial layer were successfully removed without affecting the substrate (Si-based) or the suspended control gate beam.
In addition, capacitance-voltage measurements were performed on a set of doubly-clamped beams, from which the pull-in effect was successfully observed. Finally, the footprints for the memory device fabrication process were developed and sketched within the document, as well as the design of three photomasks.
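For orientation, the following back-of-the-envelope sketch evaluates the classic Fowler-Nordheim form J = A E^2 exp(-B/E) for the tunnelling current density through the oxide; the constants A and B are placeholders rather than fitted values, and the thesis itself uses the more general Tsu-Esaki and transfer-matrix treatment:

// Back-of-the-envelope sketch of the Fowler-Nordheim tunnelling current density
// J = A * E^2 * exp(-B / E), where E is the electric field across the tunnel
// oxide. A and B are material-dependent constants; the values below are
// placeholders, and the thesis uses the more general Tsu-Esaki / transfer-matrix
// treatment rather than this closed form.
#include <cmath>
#include <cstdio>

double fowler_nordheim_J(double E_V_per_m, double A, double B) {
    return A * E_V_per_m * E_V_per_m * std::exp(-B / E_V_per_m);
}

int main() {
    const double A = 1.0e-6;   // A/V^2, placeholder pre-exponential constant
    const double B = 2.5e10;   // V/m, placeholder exponential slope constant
    // Sweeping the oxide field shows the strongly non-linear onset of tunnelling.
    for (double E = 0.5e9; E <= 1.5e9; E += 0.25e9)
        std::printf("E = %.2e V/m  ->  J = %.3e A/m^2\n", E, fowler_nordheim_J(E, A, B));
    return 0;
}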
445

Algorithms for scientific computing

O'Brien, Neil January 2012 (has links)
There has long been interest in algorithms for simulating physical systems. We are concerned with two areas within this field: fast multipole methods and meshless methods. Since Greengard and Rokhlin's seminal paper in 1987, considerable interest has arisen in fast multipole methods for finding the energy of particle systems in two and three dimensions, and more recently in many other applications where fast matrix-vector multiplication is called for. We develop a new fast multipole method that allows the calculation of the energy of a system of N particles in O(N) time, where the particles' interactions are governed by the 2D Yukawa potential, which takes the form of a modified Bessel function K_v. We then turn our attention to meshless methods. We formulate and test a new radial basis function finite difference method for solving an eigenvalue problem on a periodic domain. We then apply meshless methods to modelling photonic crystals. After an initial background study of the field, we detail the Maxwell equations, which govern the interaction of light with the photonic crystal, and show how photonic band gaps arise. We present a novel meshless weak-strong form method with reduced computational cost compared to the existing meshless weak form method. Furthermore, we develop a new radial basis function finite difference method for photonic band gap calculations. Throughout the work we demonstrate the application of cutting-edge technologies such as cloud computing to the development and verification of algorithms for physical simulations.
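As a point of reference for the fast multipole work, the following sketch computes the 2D Yukawa pair energy by direct summation using the order-zero modified Bessel function K_0; this O(N^2) baseline is the computation that the proposed fast multipole method reduces to O(N), and the particle layout and screening parameter are assumptions:

// Direct-sum baseline for the 2D Yukawa (screened Coulomb) pair energy, where
// the pair potential is K_0(lambda * r), the order-zero modified Bessel function
// of the second kind. This O(N^2) loop is the computation that a fast multipole
// method reduces to O(N); the charges and screening parameter lambda are
// placeholders.
#include <cmath>      // std::cyl_bessel_k (C++17), std::hypot
#include <cstddef>
#include <vector>

struct Particle { double x, y, q; };

double yukawa_energy_direct(const std::vector<Particle>& p, double lambda) {
    double energy = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i)
        for (std::size_t j = i + 1; j < p.size(); ++j) {
            double r = std::hypot(p[i].x - p[j].x, p[i].y - p[j].y);
            energy += p[i].q * p[j].q * std::cyl_bessel_k(0.0, lambda * r);
        }
    return energy;
}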
446

Risk analysis of user satisfaction in online communities

Hiscock, Philippa January 2015 (has links)
No description available.
447

Design and fabrication of a new 3D AC-electroosmotic micropump

Rouabah, Hamza A. January 2010 (has links)
Integrated microsystems, such as MicroTAS or the lab-on-a-chip, require integrated fluid handling. Advances in microelectronics fabrication processes have allowed the miniaturization of fluid handling devices such as micropumps. In biomedical technology, pumps for handling extremely small fluid volumes are becoming more and more important, and microsystems for biological analysis routinely use solid-state electrokinetic micropumps. AC electrokinetic micropumps, in particular AC electroosmotic pumps, can be used to pump fluids using planar electrodes which induce electrical forces on the fluid. However, planar electrodes limit the pumping capability of the micropump. In this thesis a new design for the AC electroosmotic micropump is introduced. The new AC electroosmotic design marks the transition from planar microelectrode arrays to planar arrays with high aspect ratio (HAR) pillars in order to increase the surface area of the electrodes. The physical mechanism of AC electroosmosis is the motion of induced electrical double layers on microelectrodes, driven by the electric field generated by the electrodes. Since AC electroosmosis is a surface-driven effect, increasing the surface area increases the power coupled into the fluid movement. By taking the channel volume and filling it with conductive pillars, the surface area increases while the volume remains the same, increasing the drive per unit volume. This has the effect of increasing the pressure generated by the pump. To explore and realize the proposed pumping principle we drew on the available expertise of Professor Marc J. Madou, who specializes in the Bio-MEMS field and microfabrication techniques. Prof. Madou and his team at UC Irvine have been able to construct large-dimension, high-aspect-ratio carbon pillars made out of pyrolysed SU-8 using the Carbon-MEMS process. This technique for converting a polymer into a conductive material was adopted and applied to our proposed smaller-dimension 3D electrode design. The planar electrode designs studied previously were made out of gold, and it was desirable to make the pillars out of gold as well. However, due to microfabrication limitations, and since gold pillars undergo chemical reactions involving dissolution and redeposition, pyrolysed pillars are more suitable for our process. Although pyrolysed SU-8 pillars are less conductive than gold, they are perfectly polarisable, which is ideal for AC electroosmosis. In this particular area of interest, we have investigated, in collaboration with Prof. Madou and his team, the fabrication of high-aspect-ratio carbon pillars with different aspect ratios and dimensions, and applied them to AC electroosmotic pumping. Carbon electrodes were successfully fabricated and shown to generate local fluid motion and drive fluid, and the new 3D AC electroosmotic micropump has shown an improvement of five times over the previous planar electrode design.
448

Investigation into low power and reliable system-on-chip design

Shafik, Rishad Ahmed January 2010 (has links)
It is likely that the demand for multiprocessor systems-on-chip (MPSoCs) with low power consumption and high reliability in the presence of soft errors will continue to increase. However, low power and reliable MPSoC design is challenging due to the conflicting trade-off between power minimisation and reliability objectives. This thesis is concerned with the development and validation of techniques to facilitate effective design of low power and reliable MPSoCs. Special emphasis is placed upon system-level design techniques for MPSoCs with voltage-scaling-enabled processors, highlighting the trade-offs between performance, power consumption and reliability. An important aspect of system-level design is to validate reliability in the presence of soft errors through simulation. The first part of the thesis addresses the development of a SystemC fault injection simulator based on a novel fault injection technique. Using an MPEG-2 decoder and other examples, it is shown that the simulator benefits from minimal design intrusion and high fault representation. The simulator is used throughout the thesis to facilitate the study of the reliability of MPSoCs. The on-chip communication architecture plays a vital role in determining the performance and reliability of MPSoCs. The second part of the thesis focuses on a comparative study between two types of on-chip communication architectures: network-on-chip (NoC) and the advanced microcontroller bus architecture (AMBA). The comparisons are carried out using real application traffic based on an MPEG-2 video decoder, demonstrating the trade-off between performance and reliability. The third part of the thesis concentrates on low power and reliable system-level design techniques. Two new techniques are presented, which are capable of generating optimised designs in terms of low power consumption and reliability. The first technique demonstrates power minimisation through appropriate voltage scaling of the MPSoC cores, such that real-time constraints are met and reliability is maintained at an acceptable level. The second technique deals with the joint optimisation of power minimisation and reliability improvement for time-constrained MPSoCs. Extensive experiments are conducted for these two new techniques using different applications, including the MPEG-2 video decoder. It is shown that the proposed techniques give significant power reductions and reliability improvements compared to existing techniques.
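As a loose illustration of the simulator's core operation (soft-error injection), the sketch below flips a randomly chosen bit of a data word; the thesis's technique injects such faults non-intrusively inside SystemC models, which this standalone fragment does not attempt to reproduce:

// Loose illustration of the core operation behind soft-error (single-event
// upset) injection: flipping one randomly chosen bit of a data word. The
// thesis's simulator performs such injections non-intrusively inside SystemC
// models; this standalone fragment only shows the bit-flip mechanics.
#include <cstdint>
#include <cstdio>
#include <random>

std::uint32_t inject_bit_flip(std::uint32_t word, std::mt19937& rng) {
    std::uniform_int_distribution<int> bit(0, 31);
    return word ^ (std::uint32_t{1} << bit(rng));   // flip one randomly selected bit
}

int main() {
    std::mt19937 rng(42);                           // fixed seed for repeatable campaigns
    std::uint32_t reg = 0x0000ABCDu;                // stand-in for a register or memory word
    std::uint32_t corrupted = inject_bit_flip(reg, rng);
    std::printf("original  = 0x%08X\ncorrupted = 0x%08X\n",
                static_cast<unsigned>(reg), static_cast<unsigned>(corrupted));
    return 0;
}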
449

Coherent versus non-coherent space-time shift keying for co-located and distributed MIMO systems

Sugiura, Shinya January 2010 (has links)
In this thesis, we propose the novel Space-Time Coding (STC) concept of Space-Time Shift Keying (STSK) and explore its characteristics in the contexts of both co-located and cooperative Multiple-Input Multiple-Output (MIMO) systems using both coherent and non-coherent detection. Furthermore, we conceive new serially-concatenated turbo-coding-assisted STSK arrangements for the sake of approaching the channel capacity limit, which are designed with the aid of EXtrinsic Information Transfer (EXIT) charts. The basic STSK concept is first proposed for the family of co-located MIMO systems employing coherent detection. More specifically, in order to generate space-time codewords, these Coherent STSK (CSTSK) encoding schemes activate one out of Q dispersion matrices. The CSTSK scheme is capable of striking an attractive trade-off between the achievable diversity gain and the transmission rate, hence having the potential of outperforming other classic MIMO arrangements. Since no inter-channel interference is imposed at the CSTSK receiver, the employment of single-stream-based Maximum Likelihood (ML) detection becomes realistic. Furthermore, for the sake of achieving an infinitesimally low Bit-Error Ratio (BER) at low SNRs, we conceive a three-stage concatenated turbo CSTSK scheme. In order to mitigate the effects of potential Channel State Information (CSI) estimation errors as well as the high pilot overhead, the Differentially-encoded STSK (DSTSK) philosophy is conceived with the aid of the Cayley transform and differential unitary space-time modulation. The DSTSK receiver benefits from low-complexity non-coherent single-stream-based ML detection, while retaining the CSTSK scheme's fundamental benefits. In order to create a more flexible STSK architecture, the above-mentioned co-located CSTSK scheme is generalized so that P out of Q dispersion matrices are activated during each space-time block interval. Owing to its highly flexible structure, this generalized STSK scheme subsumes diverse other MIMO arrangements. Finally, the STSK concept is combined with cooperative MIMO techniques, which are capable of attaining the maximum achievable diversity gain by eliminating the undesired performance limitations imposed by correlated fading. More specifically, considering the usual twin-phase cooperative transmission regime constituted by a broadcast phase and a cooperative phase, the CSTSK and DSTSK schemes developed for co-located MIMO systems are employed during the cooperative transmission phase.
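To make the encoding concrete, the following sketch constructs a coherent STSK codeword as the selected constellation symbol multiplied by the selected dispersion matrix, so that each block conveys log2(Q) + log2(L) bits; the dispersion matrices and constellation here are placeholders, not the optimised sets designed in the thesis:

// Minimal sketch of coherent STSK codeword construction: each space-time block
// conveys log2(Q) + log2(L) bits by selecting one of Q dispersion matrices A_q
// and one of L constellation symbols s_l, and transmitting S = s_l * A_q over
// M antennas and T time slots. The dispersion matrices and constellation here
// are placeholders, not the optimised sets designed in the thesis.
#include <complex>
#include <cstddef>
#include <vector>

using cd = std::complex<double>;
using Matrix = std::vector<std::vector<cd>>;   // M rows (antennas) x T columns (time slots)

Matrix stsk_codeword(const std::vector<Matrix>& dispersion,   // Q matrices, each M x T
                     const std::vector<cd>& constellation,    // L symbols, e.g. QPSK
                     std::size_t q, std::size_t l) {
    Matrix S = dispersion[q];                  // copy the selected dispersion matrix
    for (auto& row : S)
        for (auto& entry : row)
            entry *= constellation[l];         // scale every entry by the selected symbol
    return S;
}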
450

Technology enhanced accessible interaction framework and a method for evaluating requirements and designs

Angkananon, Kewalin January 2015 (has links)
The motivation for this thesis was the lack of any existing comprehensive framework or method to help developers with the gathering and evaluation of requirements and the design and evaluation of technology solutions for accessible interactions between people, technology and objects, particularly in face-to-face situations involving people with disabilities. A Technology Enhanced Interaction Framework (TEIF) and a TEIF Method for enhancing interactions between people, technology and objects through the use of technology were developed and successfully validated by three developer experts, three accessibility experts, and an HCI professor. The TEIF's main components are people, objects, technology, interactions, time/place, and context, while the TEIF Method involves requirement questions with multiple-choice answers and technology suggestions, Interaction Diagrams and Use Case Diagrams. For the evaluation of the TEIF Method, an example scenario involving a hearing-impaired visitor to a small local Thai museum was developed, along with corresponding requirement questions and answers, technology suggestions, technology solutions, an Interaction Diagram and a Use Case Diagram. While the TEIF has all the necessary components and sub-components to be a general framework, the TEIF Method is focused on accessible interactions, and the content of the method used in this research was focused on people with hearing impairment because of time limitations. An experiment with 36 developers showed that they were able to use the TEIF Method to evaluate requirements for technology solutions to problems involving interaction with hearing-impaired people better than with the Other Methods. The TEIF Method helped developers select the best solution significantly more often than the Other Methods and rate the best solution significantly closer to expert ratings than the Other Methods. The TEIF Method also helped developers differentiate between solutions more closely in line with experts' differentiation than the Other Methods for some solutions and requirements. Questionnaire results showed that the developers thought that the TEIF Method helped them to evaluate requirements and technology solutions for interaction problems involving hearing-impaired people, and would also help with gathering requirements and designing technology solutions for people with other disabilities. The developers also thought that the TEIF Method helped improve a developer's awareness of interaction issues and understanding of how environmental context affects interaction. Suggestions for future developments include extending the TEIF Method to other disabilities, developing a more nuanced multi-level classification of how well different technologies meet different requirements, and using the TEIF and TEIF Method as an index for case-based solutions.
