1

Performance optimisation of cdma2000 1x network

Alwandy, Sammaer January 2009 (has links)
The traffic load and the rapid changes in the coverage area (and even the seasons) have a huge influence on the radio interface in CDMA networks. The work presented in this thesis utilises and explores the trade-off between capacity and quality versus coverage in CDMA networks. This thesis presents a method to enhance the QoS of Erbil’s WLL network. Erbil’s coverage area is about 45 sq km of urban terrain and 20 sq km of suburb. The city is a generally hilly area with some flat land around the urban city centre; this geographical topology has been taken into account in the optimisation procedure carried out in this work. The work proposes an enhancement of the radio interface and its key performance indicators, a reduction of radio interference and a considerable improvement of the required QoS, particularly for voice services. These have generally been achieved by adjusting antenna tilt and azimuth, taking into consideration the area’s physiognomy, and by conducting repeated field drive tests with data collection and analysis. The radio interface performance is represented by four indicators captured by drive-test equipment: Ec/Io, Rx, Tx and FER. The network performance statistics (QoS) are represented by three indicators: call drop rate, call setup success ratio and soft handoff success rate. Coverage problem spots are resolved during drive tests by applying one or a combination of several techniques: tilt and azimuth adjustment, relocating the site, and rechecking the handoff list or RSSI parameters. In this thesis, the influence of each sector’s coverage area on the chosen action is presented in order to solve both the coverage problems and the network performance degradation. The results of this work are clearly reflected in the enhancements of the radio interface performance. By adopting the proposed approach, a considerable improvement in network performance has been achieved. The improvements are clearly reflected in the statistics collected after optimisation, with a reduction in call drop rate of 34% compared with the non-optimised network. For an operator, network performance is always a critical issue, and the operational improvements achievable through the appropriate actions need to be carefully assessed. This thesis extends to constructing the best network optimisation procedure in the course of operation. From the perspective of system planning, the choice of the best optimisation procedure and deployment options depends on the current and future demands for system performance and the available financial resources.
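As a concrete illustration of the network performance (QoS) indicators named above, the following is a minimal Python sketch, not taken from the thesis, of how call drop rate, call setup success ratio and soft handoff success rate might be computed from switch counters; all counter values are hypothetical and chosen only to reproduce a reduction of roughly 34% in call drop rate.

```python
# Illustrative sketch (hypothetical counters, not thesis data): the three QoS
# indicators used to characterise network performance in the abstract above.

def call_drop_rate(dropped_calls: int, established_calls: int) -> float:
    """Fraction of established calls that ended abnormally."""
    return dropped_calls / established_calls if established_calls else 0.0

def call_setup_success_ratio(successful_setups: int, setup_attempts: int) -> float:
    """Fraction of call setup attempts that completed."""
    return successful_setups / setup_attempts if setup_attempts else 0.0

def soft_handoff_success_rate(successful_handoffs: int, handoff_attempts: int) -> float:
    """Fraction of soft handoff attempts that succeeded."""
    return successful_handoffs / handoff_attempts if handoff_attempts else 0.0

# Comparing call drop rate before and after an optimisation pass.
before = call_drop_rate(dropped_calls=820, established_calls=25_000)
after = call_drop_rate(dropped_calls=541, established_calls=25_000)
print(f"relative reduction: {(before - after) / before:.0%}")  # -> 34%
```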
2

Reliability management techniques in SSD storage systems

Mir, Irfan Faisal January 2014 (has links)
Solid State Drives (SSDs) are becoming ubiquitous in many embedded devices thanks to features such as having no moving parts, shock and temperature resistance, and low power consumption. The reliability, performance, lifespan, and verification of these devices are of increasing concern in many application domains. Redundant Array of Independent Disks (RAID) architectures have previously been used to increase the reliability of magnetic disk devices. Recent works have proposed re-using RAID architectures to address these issues in SSDs; however, these have an inherent problem in that all flash memory chips wear out at the same rate owing to the even distribution of write operations. Existing solutions partly solve this problem using an uneven parity distribution across the array, but suffer age-variation problems under random and sequential writes, thereby decreasing the reliability of the array as well as increasing the cost. The aim of this thesis is to enhance the use of RAID mechanisms in SSD storage systems, and to do so in a reliable and efficient manner. In this thesis two novel mechanisms are explored that enhance data reliability in an SSD array regardless of I/O workload characteristics. The first mechanism solves the age-convergence problem in RAID systems and quickly achieves steady-state age convergence. The second mechanism reduces page writes, thereby increasing the lifespan of each element in the array. The SSD controller (ANFS) architecture and the associated RAID architecture (flash-RAID) are presented. The embedded Flash Translation Layer (FTL), RAID controller, SSD low-level controller, and specialized host interface are developed on an FPGA in synthesizable Verilog. In these architectures the concept of a forced random write is introduced, which is used to solve the age-distribution problem of pure sequential writes. The design further employs a log-structured approach to control on-chip wear-levelling, and a power-fail reliability mechanism. In addition, a new flash management framework is presented that increases the performance of SSD storage systems by exploiting both multi-chip parallelism and out-of-order execution. The contributions of this thesis can be summarised as follows: the presentation of new algorithms that enable efficient and reliable RAID techniques in SSDs, the development of a number of techniques that enhance the performance and reliability of flash-based file systems, the implementation of a controller in synthesizable Verilog that employs these techniques, and the provision of a complete test bed supporting the experiments.
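By way of background to the uneven-parity idea mentioned above, the following is a minimal Python sketch, hypothetical and not the thesis's ANFS/flash-RAID design, of how parity can be distributed unevenly across the chips of an SSD array so that they age at deliberately different rates rather than wearing out together.

```python
# Illustrative sketch (hypothetical parameters): uneven parity placement across
# an SSD array so per-chip write counts diverge and wear-out is staggered.

import itertools

NUM_CHIPS = 4                    # chips in a RAID-5-like array
PARITY_WEIGHTS = [4, 2, 1, 1]    # chip 0 holds parity four times as often as chip 2 or 3

def parity_chip_sequence(weights):
    """Yield the chip index that stores parity for each successive stripe."""
    schedule = [chip for chip, w in enumerate(weights) for _ in range(w)]
    return itertools.cycle(schedule)

def simulate(num_stripes: int):
    """Count page writes per chip: every chip receives a data page per stripe,
    and the selected parity chip absorbs an extra write for the parity page."""
    writes = [0] * NUM_CHIPS
    parity_of = parity_chip_sequence(PARITY_WEIGHTS)
    for _ in range(num_stripes):
        p = next(parity_of)
        for chip in range(NUM_CHIPS):
            writes[chip] += 2 if chip == p else 1
    return writes

print(simulate(10_000))  # [15000, 12500, 11250, 11250] -> staggered ageing
```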
3

Designing interactive applications using active and passive EEG-based BCI systems

Vi, Chi Thanh January 2014 (has links)
A brain-computer interface (BCI) is a communication system that allows users to control computers or external devices by detecting and interpreting brain activity. The initial goal of BCI was to help severely disabled people, such as people with "locked-in" syndrome, to communicate with the outside world by interpreting their brain signals as corresponding external commands. Nowadays, state-of-the-art BCIs, especially those using Electroencephalography (EEG), bring benefits to normal and healthy computer users in a way that enriches their everyday Human-Computer Interaction (HCI) experience. Although EEG may be used in the same manner for continuous control and communication, it has also been extended to assist and measure the inner states of users in a more passive way. Because of this, a new categorisation of BCI systems has been proposed, dividing BCI applications in general, and EEG-based systems in particular, into active, reactive, and passive BCI. This thesis focuses on how portable, commodity EEG headsets, despite their limited capabilities compared with clinical and expensive headsets, can benefit the majority of HCI users. Our investigations focus on active and passive EEG-based BCI systems. We first investigate how to use task engagement as an additional input beside traditional input methods in the context of active BCI. We then move to the passive use of BCI, using task engagement to evaluate an application while the user is taking part in an interaction. We further extend our investigation to Event-Related Potentials, where in particular Error-Related Negativity is used to detect users' moments of error awareness. We show that, using EEG signals captured by Emotiv headsets, moments of users' error awareness (Error-Related Negativity, ERN) can be detected on a single-trial basis. We then show that the classification rates are sufficient to benefit HCI in single-user settings. Next, we show that ERN patterns can be detected in observation tasks, where they not only appear in the observers' EEG but also show an anticipation effect in collaborative settings. Based on these results, we propose different scenarios in which task designers can employ these findings to enhance interactive applications, combining them with popular HCI settings and input methods.
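To make the single-trial ERN detection step concrete, the following is a minimal Python sketch under assumed parameters (14 channels and 128 Hz, typical of consumer headsets such as the Emotiv EPOC, and a 0-500 ms post-response window); it is illustrative only and not the classification pipeline used in the thesis.

```python
# Illustrative sketch (assumed parameters, synthetic data): single-trial
# classification of error vs. correct responses from response-locked EEG epochs.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 128                 # Hz, assumed sampling rate
WINDOW = (0.0, 0.5)      # seconds after the response; the ERN peaks early in this window

def extract_epochs(eeg, response_samples):
    """Cut fixed-length epochs (trials x channels x samples) after each response."""
    start, stop = int(WINDOW[0] * FS), int(WINDOW[1] * FS)
    return np.stack([eeg[:, s + start:s + stop] for s in response_samples])

def classify(epochs, labels):
    """Cross-validated accuracy of an LDA classifier on flattened epochs."""
    X = epochs.reshape(len(epochs), -1)
    return cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()

# Synthetic example: 14 channels, 2 minutes of signal, 40 labelled responses.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((14, FS * 120))
responses = rng.integers(FS, FS * 119, size=40)
labels = rng.integers(0, 2, size=40)   # 1 = error trial, 0 = correct trial
print(classify(extract_epochs(eeg, responses), labels))
```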
4

Softcore stream processor for FPGA-based DSP

Wang, P. January 2014 (has links)
Modern DSP applications present increasingly high computational requirements and keep evolving in nature. Field Programmable Gate Arrays (FPGAs) host a vast array of logic, hardwired DSP slices and memory resources combined with reconfigurability, and have emerged as a promising platform for DSP implementations. However, the current manner of programming FPGAs still relies on the design of dedicated circuits, which is time-consuming and complex. This has prompted the emergence of 'soft' processor architectures hosted on the FPGA's reconfigurable fabric. However, existing softcore processors are still constrained in terms of performance, resource efficiency and applicability. In this thesis, these issues are addressed by a proposed Softcore Stream Processor (SSP). The SSP is used to achieve the first recorded software-defined IEEE 802.11ac FFT architecture with real-time processing ability for 8 channels and all required bandwidths. More importantly, it demonstrates that, beyond offering flexible real-time processing, it also achieves reductions in resource cost of, on average, 65% compared to dedicated circuit designs. Sliding-window applications, an important subdomain of DSP applications, are also targeted in this thesis. The implementations achieve over an order of magnitude higher resource efficiency when compared to the current best metrics achieved by soft vector processors. In addition to the novel softcore architecture, a model-level SSP platform synthesis flow is presented to allow the generation of high-quality real-time DSP on the SSP in a systematic and automated way.
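For context on the workload named above, the following is a small Python/NumPy sketch, illustrative only and not the SSP architecture itself, showing the FFT sizes that IEEE 802.11ac OFDM requires per channel bandwidth and a functional reference computation for 8 independent channels.

```python
# Illustrative functional reference (not the FPGA SSP): per-channel FFTs for the
# IEEE 802.11ac bandwidths, applied to 8 channels of placeholder data.

import numpy as np

FFT_SIZE = {20: 64, 40: 128, 80: 256, 160: 512}   # channel bandwidth (MHz) -> FFT points

def reference_fft(symbols: np.ndarray) -> np.ndarray:
    """FFT applied independently to each channel; input shape (channels, fft_points)."""
    return np.fft.fft(symbols, axis=-1)

# 8 channels of 80 MHz OFDM symbols (random complex samples as placeholders).
rng = np.random.default_rng(1)
n = FFT_SIZE[80]
symbols = rng.standard_normal((8, n)) + 1j * rng.standard_normal((8, n))
print(reference_fft(symbols).shape)   # (8, 256)
```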
5

The instruction systolic array (ISA) and simulation of parallel algorithms

Muslih, Ossama K. January 1989 (has links)
Systolic arrays have proved to be well suited to Very Large Scale Integration (VLSI) technology, since they consist of a regular network of simple processing cells, use only local communication between the processing cells, and exploit a maximal degree of parallelism. However, systolic arrays have one main disadvantage compared with other parallel computer architectures: they are special-purpose architectures capable of executing only one algorithm; for example, a systolic array designed for sorting cannot be used to perform matrix multiplication. Several approaches have been taken to make systolic arrays more flexible, in order to be able to handle different problems on a single systolic array. In this thesis an alternative concept to a VLSI architecture, the Soft-Systolic Simulation System (SSSS), is introduced and developed as a working model of a virtual machine with the power to simulate hard systolic arrays and more general forms of concurrency such as the SIMD and MIMD models of computation. The virtual machine includes a processing element consisting of a soft-systolic processor implemented in the virtual machine language. The processing element considered here is a very general element which allows the choice of a wide range of arithmetic and logical operators and the simulation of a wide class of algorithms; in principle extra processing cells can be added to form a library, and this library can be tailored to individual needs. The virtual machine chosen for this implementation is the Instruction Systolic Array (ISA). The ISA has a number of interesting features: firstly, it has been used to simulate all SIMD algorithms and many MIMD algorithms by a simple program transformation technique; further, the ISA can also simulate the so-called wavefront processor algorithms, as well as many hard systolic algorithms. The ISA removes the need for the broadcasting of data, which is a feature of SIMD algorithms (limiting the size of the machine and its cycle time), and also presents a fairly simple communication structure for MIMD algorithms. The model of systolic computation developed from the VLSI approach to systolic arrays is such that the processing surface is fixed, as are the processing elements or cells, by virtue of their being embedded in the processing surface. The VLSI approach therefore freezes instructions and hardware relative to the movement of data, while the virtual machine and soft-systolic programming retain the VLSI constructions for array design, such as regularity, simplicity and local communication, and allow the movement of instructions with respect to data. Data can be frozen into the structure with instructions moving systolically; alternatively, both the data and the instructions can move systolically around the virtual processors (which are deemed fixed relative to the underlying architecture). The ISA is implemented in OCCAM programs whose execution and output implicitly confirm the correctness of the design. The soft-systolic preparation comprises the usual operating system facilities for the creation and modification of files during the development of new programs and ISA processor elements. We allow any concurrent high-level language to be used to model the soft-systolic program. Consequently, the Replicating Instruction Systolic Array Language (RISAL) was devised to provide a very primitive programming environment for the ISA, but one adequate for testing.
RISAL accepts instructions in an assembler-like form, but is fairly permissive about the format of statements, subject of course to syntax. The RISAL compiler is used to transform the soft-systolic program description (RISAL) into a form suitable for the virtual machine (simulating the algorithm) to run. Finally, we conclude that the principles outlined here can form the basis for a soft-systolic simulator using an orthogonally connected mesh of processors; the wide range of algorithms which the ISA can simulate makes it suitable for a virtual simulating grid.
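As a point of reference for the kind of hard systolic computation such a simulator reproduces, the following is a minimal Python sketch, illustrative only and not the OCCAM/RISAL implementation described above, of matrix multiplication on an orthogonally connected mesh with output-stationary cells.

```python
# Illustrative sketch: cycle-by-cycle simulation of an N x N output-stationary
# systolic mesh. A streams in from the left (row i delayed by i cycles), B from
# the top (column j delayed by j cycles); every cell multiply-accumulates and
# forwards its operands one cell right/down per cycle.

import numpy as np

def systolic_matmul(A, B):
    N = A.shape[0]
    C = np.zeros((N, N))
    a_reg = np.zeros((N, N))              # operand flowing rightwards
    b_reg = np.zeros((N, N))              # operand flowing downwards
    for t in range(3 * N - 2):            # enough cycles for the last product
        a_reg = np.roll(a_reg, 1, axis=1)        # shift right
        b_reg = np.roll(b_reg, 1, axis=0)        # shift down
        for i in range(N):                       # inject skewed boundary inputs
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < N else 0.0
            b_reg[0, i] = B[k, i] if 0 <= k < N else 0.0
        C += a_reg * b_reg                # all cells accumulate in parallel
    return C

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert np.allclose(systolic_matmul(A, B), A @ B)
```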
6

MINDtouch : ephemeral transference : liveness in networked performance with mobile devices

Baker, Camille January 2010 (has links)
This practice-based thesis investigates the four key qualities of 'liveness', 'feltness', 'embodiment' and 'presence' in mobile media performance, in order to shed light on the qualities of use and the sensations that emerge when mobile technologies are used in tandem with wearable devices in performance contexts. The research explores mobile media as a non-verbal and visual communication tool that functions by repurposing the mobile phone device and its connection to a wireless network, not only for communication but explicitly for the expression of 'emotion' in the form of a video file representing an interpersonal connection shared over distance. The research aims to identify and supplement existing scholarly discourse on the nature of these four key strands of kinaesthetic philosophy made 'live' in the online network, applying knowledge gained through the practice of enhancing participants' experience of the use of simple, ubiquitous mobile tools with bespoke biofeedback sensors and an online repository for the playback of users' visual expressions. This enhanced toolkit enables participants to share personal relationships and social interactions in an immediate way with collaborators at a distance. The selected methodology of active research using kinaesthetic tools in live performance seeks to identify and clarify new ways of simulating or emulating a non-verbal, visual exchange within a social participatory context, with particular attention paid to a sense of 'feltness' as an element of 'presence' or 'liveness', and to the experience of a sense of 'co-presence' arising in real-time collaborative mobile performances at a distance. To best explore these concepts, as well as the bodily sensations involved for participants, the thesis analyses original data gleaned from a larger R&D project (conducted in tandem with this thesis project and sponsored by the BBC) as its major case study. The project, called MINDtouch, created a series of unique practice-based new media performance events played out in real-time networked contexts. The MINDtouch events were framed as a means for participants to simulate dream exchange or telepathic thought transfer using mobile phones and biofeedback devices, linked to a bespoke video file protocol for archiving and sharing visual results. The corporeal, non-verbal forms of communication and visual interaction observed when participants use such devices within participatory performance events are examined by way of demonstrating the impact of specific live encounters and experiences of users in this emerging playing field between real-time and asynchronous, live and technologised forms expressing liveness/presence/distance. The thesis benefits from access to the larger MINDtouch project and its original data, providing this research with a set of process-based evidence files in both video and transcript form (contained in the thesis appendices). By analysing this unique data set and applying the theoretical contexts of kinaesthetic philosophies where appropriate, the thesis demonstrates both the practical and the critical/contextual effectiveness of the media facilitation process for the participants, and shares their senses of 'liveness' and 'presence' (of themselves and of others) when using technology to externalise visual expressions of internalised experiences.
This thesis makes an original contribution to scholarship in the fields of Performance and New Media, with additional contributions to the cognate fields of Philosophy and Technology, and locates its arguments at the intersection of Performance Art, Mobile Performance/Locative Media, Philosophies of the Body and Communications. The thesis uses methods, practices and tools from Phenomenology, Ethnography, Practice-As-Research, and Experience Design, bringing together the relevant aspects of these divergent areas of new media research and media art/performance practices. The research demonstrates that there is a need for new technological tools to express viscerally felt emotion and to communicate more directly. It is hoped that this study will be of use to future scholars in the arts and technology, and also that it may help to demonstrate a way of communicating rich emotion through felt and embodied interactions shared with others across vast distances (thus supporting political movements aimed at reducing global travel in the age of global warming).
7

A novel self-routing reconfigurable and fault-tolerant cell array

She, Xiaoxuan January 2007 (has links)
No description available.
8

Design and synthesis of modern integrated filter networks : a computer-aided approach

Teplechuk, Mykhaylo A. January 2005 (has links)
No description available.
9

Calibration of full-waveform airborne laser scanning data for 3D object segmentation

Abed, Fanar Mansour Abed January 2012 (has links)
Airborne Laser Scanning (ALS) is a fully commercial technology which has seen rapid uptake by the photogrammetry and remote sensing community to classify surface features and enhance automatic object recognition and extraction processes. 3D object segmentation is considered one of the major research topics in the field of laser scanning for feature recognition and object extraction applications. The demand for automatic segmentation has increased significantly with the emergence of full-waveform (FWF) ALS, which potentially offers an unlimited number of return echoes. FWF has shown potential to improve available segmentation and classification techniques by exploiting the additional physical observables which are provided alongside the standard geometric information. However, use of this additional FWF information is not recommended without prior radiometric calibration, taking into consideration all the parameters affecting the backscattered energy. The main focus of this research is to calibrate the additional information from FWF in order to unlock the potential of the point clouds for segmentation algorithms. Echo amplitude normalisation as a function of local incidence angle was identified as a particularly critical aspect, and a novel echo amplitude normalisation approach, termed the Robust Surface Normal (RSN) method, has been developed. Following the radar equation, a comprehensive radiometric calibration routine is introduced to account for all variables affecting the backscattered laser signal. Thereafter, a segmentation algorithm is developed which utilises the raw 3D point clouds to estimate the normal for individual echoes based on the RSN method. The segmentation criterion is selected as the normal vector augmented by the calibrated backscatter signals. The developed segmentation routine aims to fully integrate FWF data to improve feature recognition and 3D object segmentation applications. The routine was tested over various feature types from two datasets with different properties to assess its potential. The results are compared to those delivered by utilising only geometric information, without the additional FWF radiometric information, to assess performance against existing methods. The results confirmed the potential of the additional FWF observables to improve segmentation algorithms. The new approach was validated against manual segmentation results, revealing a successful automatic implementation and achieving an accuracy of 82%.
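As an indication of the general form such a correction takes, the following is a simplified Python sketch, illustrative only and not the thesis's RSN method or its full radiometric routine, of normalising an echo amplitude for range (under the extended Lambertian-target assumption of the radar equation) and for the local incidence angle derived from an estimated surface normal; all numerical values are hypothetical.

```python
# Illustrative sketch (hypothetical values): relative radiometric correction of a
# FWF echo amplitude for range and local incidence angle before segmentation.

import numpy as np

def incidence_angle(normal, beam_dir):
    """Angle between the local surface normal and the laser beam direction."""
    normal, beam_dir = np.asarray(normal, float), np.asarray(beam_dir, float)
    cos_t = abs(np.dot(normal, beam_dir)) / (np.linalg.norm(normal) * np.linalg.norm(beam_dir))
    return np.arccos(np.clip(cos_t, 0.0, 1.0))

def corrected_amplitude(amp, range_m, normal, beam_dir, ref_range_m=1000.0):
    """Range-normalise (R^2, extended Lambertian target) and remove the
    cos(incidence angle) dependence of the recorded echo amplitude."""
    theta = incidence_angle(normal, beam_dir)
    return amp * (range_m / ref_range_m) ** 2 / max(np.cos(theta), 1e-3)

# Example: an echo at 850 m range on a surface tilted 30 degrees to the beam.
print(corrected_amplitude(amp=120.0, range_m=850.0,
                          normal=[0.0, 0.5, 0.866], beam_dir=[0.0, 0.0, -1.0]))
```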
10

Design of asynchronous microprocessor for power proportionality

Rykunov, Maxim January 2014 (has links)
Microprocessors continue to get exponentially cheaper for end users following Moore’s law, while the costs involved in their design keep growing, also at an exponential rate. The reason is the ever-increasing complexity of processors, which modern EDA tools struggle to keep up with. This makes further scaling for performance subject to a high risk to system reliability. To keep this risk low, yet improve performance, CPU designers try to optimise various parts of the processor. The Instruction Set Architecture (ISA) is a significant part of the whole processor design flow, and its optimal design for a particular combination of available hardware resources and software requirements is crucial for building processors with high performance and efficient energy utilisation. This is a challenging task involving a lot of heuristics and high-level design decisions. Another issue impacting CPU reliability is the continuous scaling of power consumption. For the last decades CPU designers have been mainly focused on improving performance while “keeping energy and power consumption in mind”. The consequence of this was the development of energy-efficient systems, where energy was considered a resource whose consumption should be optimised. As CMOS technology progressed, with feature size decreasing and the power delivered to circuit components becoming less stable, the energy resource turned from an optimisation criterion into a constraint, sometimes a critical one. At this point power proportionality becomes one of the most important aspects of system design. Developing methods and techniques which address the problem of designing a power-proportional microprocessor, capable of adapting at runtime to varying operating conditions (such as low or even unstable voltage levels) and application requirements, is one of today’s grand challenges. In this thesis this challenge is addressed by proposing a new design flow for the development of an ISA for microprocessors which can be altered to suit a particular hardware platform or a specific operating mode. This flow uses an expressive and powerful formalism for the specification of processor instruction sets called the Conditional Partial Order Graph (CPOG). The CPOG model captures large sets of behavioural scenarios at the microarchitectural level in a computationally efficient form amenable to formal transformations for synthesis, verification and automated derivation of asynchronous hardware for the CPU microcontrol. The feasibility of the methodology, the novel design flow and a number of optimisation techniques was proven in a full-size asynchronous Intel 8051 microprocessor and its demonstrator silicon. The chip showed the ability to work in a wide range of operating voltages and environmental conditions. Depending on application requirements and the power budget, our ASIC supports several operating modes, including one optimised for energy consumption and another for performance. This was achieved by extending a traditional datapath structure with an auxiliary control layer for adaptable and fault-tolerant operation. These and other optimisations resulted in a reconfigurable and adaptable implementation, which was proven by measurements, analysis and evaluation of the chip.
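To illustrate the idea behind the CPOG formalism mentioned above, the following is a toy Python sketch, hypothetical and not the thesis's toolchain, of a single graph whose vertices and arcs carry Boolean conditions over an opcode variable, so that each opcode projects to the partial order of micro-operations for that instruction.

```python
# Illustrative sketch: a toy Conditional Partial Order Graph for a two-instruction
# machine (ADD when x = 0, LOAD when x = 1). Conditions are predicates over the
# opcode; projecting an opcode yields that instruction's partial order.

always = lambda code: True

vertices = {
    'IFU': always,                        # instruction fetch, present in every scenario
    'RegRead': lambda c: c['x'] == 0,     # only the ADD scenario reads two registers
    'ALU': always,                        # ADD executes here; LOAD computes its address here
    'Memory': lambda c: c['x'] == 1,      # only the LOAD scenario accesses memory
}
arcs = {
    ('IFU', 'RegRead'): lambda c: c['x'] == 0,
    ('IFU', 'ALU'):     lambda c: c['x'] == 1,
    ('RegRead', 'ALU'): lambda c: c['x'] == 0,
    ('ALU', 'Memory'):  lambda c: c['x'] == 1,
}

def project(opcode):
    """Return the vertices and ordering arcs that are active for a given opcode."""
    v = {name for name, cond in vertices.items() if cond(opcode)}
    e = {a for a, cond in arcs.items() if cond(opcode) and a[0] in v and a[1] in v}
    return v, e

print(project({'x': 0}))   # ADD : IFU -> RegRead -> ALU
print(project({'x': 1}))   # LOAD: IFU -> ALU -> Memory
```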
