  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
631

Learning and development in Kohonen-style self organising maps.

Keith-Magee, Russell January 2001 (has links)
This thesis presents a biologically inspired model of learning and development. This model decomposes the lifetime of a single learning system into a number of stages, analogous to the infant, juvenile, adolescent and adult stages of development in a biological system. This model is then applied to Kohonen's SOM algorithm.

In order to better understand the operation of Kohonen's SOM algorithm, a theoretical analysis of self-organisation is performed. This analysis establishes the role played by lateral connections in organisation, and the significance of the Laplacian lateral connections common to many SOM architectures.

This analysis of neighbourhood interactions is then used to develop three key variations on Kohonen's SOM algorithm. Firstly, a new scheme for parameter decay, known as Butterworth Step Decay, is presented. This decay scheme provides training times comparable to the best achievable with traditional linear decay, but removes the need for a priori knowledge of likely training times. In addition, this decay scheme allows Kohonen's SOM to learn in a continuous manner.

Secondly, a method is presented for establishing core knowledge in the fundamental representation of a SOM. This technique, known as Syllabus Presentation, uses a selected training syllabus to reinforce knowledge known to be significant. A method for developing a training syllabus, known as Percept Masking, is also presented.

Thirdly, a method is presented for preventing the loss of trained representations in a continuously learning SOM. This technique, known as Arbor Pruning, restricts the weight update process to prevent the loss of significant representations. It can be used if the data domain varies within a known set of dimensions; however, it cannot control forgetfulness if dimensions are added to or removed from the data domain.
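The interplay of neighbourhood interactions and parameter decay described above can be sketched as a minimal Kohonen SOM training loop. This is an illustrative reconstruction, not the thesis's code: a plain linear decay is shown where the thesis would substitute its Butterworth Step Decay schedule.

```python
import numpy as np

def train_som(data, grid=(10, 10), dim=2, steps=1000, seed=0):
    """Minimal Kohonen SOM sketch with a pluggable decay schedule."""
    rng = np.random.default_rng(seed)
    w = rng.random((grid[0], grid[1], dim))        # weight vectors on the map grid
    ii, jj = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
    for t in range(steps):
        x = data[rng.integers(len(data))]
        # best-matching unit (BMU): node whose weights are closest to the input
        d = np.linalg.norm(w - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)
        # decayed learning rate and neighbourhood radius (linear decay shown;
        # the thesis replaces this schedule with Butterworth Step Decay)
        frac = 1.0 - t / steps
        lr = 0.5 * frac
        sigma = max(grid[0] / 2 * frac, 0.5)
        # Gaussian neighbourhood update centred on the BMU
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
        w += lr * h[:, :, None] * (x - w)
    return w
```

Because the decay is isolated in two lines, a continuous-learning schedule can be dropped in without touching the neighbourhood update.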
632

Hardware-based text-to-braille translation

Zhang, Xuan January 2007 (has links)
Braille, as a special written method of communication for the blind, has been accepted globally for years. It gives blind people another chance to learn and to communicate more efficiently with the rest of the world, and it makes possible the translation of printed languages into a written form recognisable by blind people. Braille has recently been decreasing in popularity due to alternative technologies such as speech synthesis; however, as a form of literacy, Braille still plays a significant role in the education of people with visual impairments. With the development of electronic technology, Braille has turned out to be well suited to computer-aided production because of its coded form. Software-based text-to-Braille translation has proved a successful solution in Assistive Technology (AT). However, the feasibility and advantages of algorithm reconfiguration based on a hardware implementation have rarely been discussed in depth. A hardware-based translation system with algorithm reconfiguration can supply greater throughput than a software-based system, and is also expected to serve as a single component in a multi-functional Braille system on a chip.

Therefore, this thesis presents the development of a system for text-to-Braille translation implemented in hardware. Unlike most commercial methods, this translator carries out the translation in hardware instead of software. To find a translation algorithm suitable for a hardware-based solution, the history of, and previous contributions to, Braille translation are introduced and discussed. It is concluded that Markov systems, a formal language theory, are highly suitable for hardware-based Braille translation. Furthermore, the text-to-Braille algorithm is reconfigured for parallel processing to accelerate the translation. The characteristics and advantages of Field Programmable Gate Arrays (FPGAs), and the application of the Very High Speed Integrated Circuit Hardware Description Language (VHDL), are introduced to explain how the translation algorithm can be mapped to hardware. Using a Xilinx hardware development platform, the text-to-Braille algorithm is implemented and the structure of the translator is described hierarchically.
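A Markov system in this sense is an ordered list of rewrite rules applied left to right, with earlier rules taking priority. A minimal software sketch of that mechanism follows; the rule table is a toy stand-in, not a real Grade 2 Braille contraction table.

```python
def translate(text, rules):
    """Left-to-right rewriting with rule priority (a simple Markov-system scheme).

    `rules` is an ordered list of (pattern, replacement) pairs; at each input
    position the first matching pattern wins. Characters matched by no rule
    are copied through unchanged.
    """
    out, i = [], 0
    while i < len(text):
        for pat, rep in rules:
            if text.startswith(pat, i):
                out.append(rep)
                i += len(pat)
                break
        else:                       # no rule matched: copy the character
            out.append(text[i])
            i += 1
    return "".join(out)

# Toy rule table for illustration only (not real Braille contractions)
TOY_RULES = [("and", "&"), ("the", "!"), ("er", ">")]
```

Because every step is a table lookup over a fixed rule set, the same scheme maps naturally onto parallel hardware comparators, which is the property the thesis exploits.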
633

Joint non-linear inversion of amplitudes and travel times in a vertical transversely isotropic medium using compressional and converted shear waves

Nadri, Dariush January 2008 (has links)
Massive shales and fractures, produced by sedimentary or tectonic processes, are the main causes of seismic anisotropy in the uppermost part of the crust. Neglecting seismic anisotropy in seismic processing algorithms may lead to incorrectly imaged reflectors. It also affects quantitative amplitude analysis, such as acoustic or elastic impedance inversion and amplitude-versus-offset analysis. It is therefore important to obtain anisotropy parameters from seismic data. Conventional layer-stripping inversion schemes and reflector-based reflectivity inversion methods depend solely on a specific reflector, without considering the effect of the other layers. This neither takes the effect of transmission into account in reflectivity inversion, nor uses the information from the waves travelling towards the lower layers. I provide a framework that, for each specific layer, integrates the information from all the rays that have travelled across it. To estimate anisotropy parameters I have implemented unconstrained minimisation algorithms such as nonlinear conjugate gradients and variable metric methods; I also provide a nonlinear least-squares method based on the Levenberg-Marquardt algorithm. For a stack of horizontal transversely isotropic layers with a vertical axis of symmetry, where the layer properties are laterally invariant, I provide two different inversion schemes: traveltime inversion and waveform inversion.

Both inversion schemes utilise compressional and joint compressional and converted shear waves. A new exact traveltime equation has been formulated for a dipping transversely isotropic system of layers. These traveltimes are also parametrised by the ray parameters of each ray element. I use Newton's method of minimisation to estimate the ray parameter, starting from a random prior model drawn from a uniform distribution. Numerical results show that, under the assumption of weak anisotropy, Thomsen's anisotropy parameters can be estimated with high accuracy. The inversion algorithms have been implemented as a software package in a C++ object-oriented environment.
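The Levenberg-Marquardt scheme named above interpolates between gradient descent and Gauss-Newton by damping the normal equations. A generic sketch of the iteration is given below; the thesis's actual traveltime residuals and Jacobians are not reproduced, so a simple exponential fit stands in for them.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, iters=100, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: damped Gauss-Newton steps.

    residual(p) -> r (m,), jacobian(p) -> J (m, n). The damping factor lam
    is relaxed after accepted steps and increased after rejected ones.
    """
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        A = J.T @ J + lam * np.eye(len(p))     # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        p_new = p - step
        if np.sum(residual(p_new) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5          # accept: relax damping
        else:
            lam *= 10.0                        # reject: increase damping
    return p
```

Large `lam` makes the step a short gradient-descent move; small `lam` recovers the fast Gauss-Newton step near the minimum.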
634

Wavelets and C*-algebras

Wood, Peter John, drwoood@gmail.com January 2003 (has links)
A wavelet is a function used to construct a specific type of orthonormal basis. We are interested in using C*-algebras and Hilbert C*-modules to study wavelets. A Hilbert C*-module is a generalisation of a Hilbert space in which the inner product takes its values in a C*-algebra instead of the complex numbers. We study wavelets in an arbitrary Hilbert space, and construct Hilbert C*-modules over group C*-algebras, in particular over C*-algebras generated by groups of translations, which are used to study the properties of wavelets. We examine how this construction works in both the Fourier and non-Fourier domains, and also make use of Hilbert C*-modules over the space of essentially bounded functions on tori. These Hilbert C*-modules are used to study wavelet and scaling filters, the fast wavelet transform, and the cascade algorithm. We furthermore use Hilbert C*-modules over matrix C*-algebras to study multiwavelets.
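Under the standard dyadic conventions (the thesis may normalise differently), the orthonormal basis generated by a wavelet, and the C*-valued inner product that distinguishes a Hilbert C*-module from a Hilbert space, can be written as:

```latex
% Dyadic wavelet system: an orthonormal basis of L^2(\mathbb{R})
\psi_{j,k}(x) = 2^{j/2}\,\psi\!\left(2^{j}x - k\right), \qquad j,k \in \mathbb{Z}.

% Hilbert C*-module over a C*-algebra A: the inner product is A-valued
% and A-linear in its second argument,
\langle \xi, \eta\, a \rangle = \langle \xi, \eta \rangle\, a,
\qquad \langle \xi, \eta \rangle \in A, \quad a \in A.
```

Replacing the scalar-valued inner product by an A-valued one is what lets translation groups act through the algebra A rather than through the Hilbert space itself.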
635

Partial Volume Correction in PET/CT

Åkesson, Lars January 2008 (has links)
In this thesis, a two-dimensional pixel-wise deconvolution method for partial volume correction (PVC) in combined Positron Emission Tomography and Computed Tomography (PET/CT) imaging has been developed. The method is based on Van Cittert's deconvolution algorithm and includes a noise reduction method based on adaptive smoothing and median filters. Furthermore, a technique to take into account the position-dependent PET point spread function (PSF) and to reduce ringing artifacts is also described. The quantitative and qualitative performance of the proposed PVC algorithm was evaluated using phantom experiments with varying object size, background and noise level. PVC results in increased activity recovery as well as image contrast enhancement. However, the quantitative performance of the algorithm is impaired by the presence of background activity and image noise. When the correction was applied to clinical PET images, the result was an increase in standardized uptake values of up to 98% for small tumors in the lung. These results suggest that the PVC described in this work significantly improves activity recovery without producing excessive amounts of ringing artifacts or noise amplification. The main limitations of the algorithm are the restriction to two dimensions and the lack of regularization constraints based on anatomical information from the co-registered CT images.
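Van Cittert's algorithm iteratively adds back the residual between the measured image and the current estimate re-blurred with the PSF. A one-dimensional sketch of the basic iteration follows; the thesis works in 2-D and adds the adaptive smoothing and median filtering for noise, which is omitted here.

```python
import numpy as np

def van_cittert(blurred, psf, iters=20, alpha=1.0):
    """Van Cittert iteration: f_{k+1} = f_k + alpha * (g - psf * f_k).

    blurred: measured signal g; psf: normalised blur kernel.
    Converges when the PSF's frequency response stays in (0, 2/alpha).
    """
    f = blurred.copy()
    for _ in range(iters):
        reblurred = np.convolve(f, psf, mode="same")   # re-blur current estimate
        f = f + alpha * (blurred - reblurred)          # add back the residual
    return f
```

Without the thesis's regularisation steps, the plain iteration amplifies noise at frequencies where the PSF response is small, which is why the noise-suppression filters matter in practice.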
636

Classification of busses and lorries in an automatic road toll system / Klassificering av bussar och lastbilar i ett automatiskt vägtullsystem

Jarl, Adam January 2003 (has links)
An automatic road toll system enables passing vehicles to change lanes, and no stop is needed for payment. Because personal cars, busses, lorries (trucks) and other vehicles differ in weight, they affect the road in different ways. It is therefore of interest to categorise vehicles into classes depending on their weight, so that the right fee can be set. An automatic road toll system developed by Combitech Traffic Systems AB (now Kapsch TrafficCom AB), Joenkoping, Sweden, classifies vehicles with the help of a so-called height image. This is a three-dimensional image produced from two photographs of a vehicle. The two cameras capture the same view but are mounted a small distance apart, and this spacing makes it possible to create a height image. The existing classification uses only length, width and height to divide vehicles into classes, so vehicles of the same dimensions belong to the same class regardless of their weight. An important example is busses and lorries (trucks), which often have the same dimensions; but lorries often weigh more and should therefore incur a larger fee. This work describes methods for separating busses from lorries with the help of height images. The methods search for variations in width and height, and for other features specific to busses and lorries respectively.
637

MIMO Multiplierless FIR System

Imran, Muhammad, Khursheed, Khursheed January 2009 (has links)
The main issue in this thesis is to minimise the number of operations and the energy consumption per operation in the computation (arithmetic) part of DSP circuits, such as Finite Impulse Response (FIR) filters, the Discrete Cosine Transform (DCT), and the Discrete Fourier Transform (DFT). More specifically, the focus is on the elimination of the most frequent common sub-expression (CSE) in the binary, Canonic Signed Digit (CSD), two's-complement or signed-digit representation of the coefficients of a non-recursive multiple-input multiple-output (MIMO) FIR system, which can be realised using shift-and-add operations only. The possibilities to reduce the complexity, i.e. the chip area and the energy consumption, have been investigated.

We have proposed an algorithm, implemented in MATLAB, which finds the most frequent common sub-expression in the binary/CSD/two's-complement/signed-digit representation of the coefficients of non-recursive MIMO multiplierless FIR systems. We have also proposed different tie-breakers for selecting the most frequent common sub-expression, which affect the complexity (area and power consumption) of the overall system. One tie-breaker selects, when several patterns are equally frequent, the pattern that results in the minimum number of delay elements, reducing the area of the overall system. Another chooses the pattern that results in the minimum adder depth (the number of cascaded adders). Minimum adder depth yields the fewest glitches, which are the main factor in the power consumption of MIMO multiplierless FIR systems: switching activity increases when glitches propagate to subsequent adders, which occurs when the adder depth is high. As power consumption is proportional to switching activity (glitches), we use the sub-expression that results in the lowest adder depth for the overall system.
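The core counting step can be illustrated for two-term sub-expressions: a "pattern" is a pair of nonzero digits at a given relative shift, and the most frequent pattern is the candidate for sharing. This plain-binary sketch ignores the CSD/signed-digit forms and the delay-element and adder-depth tie-breakers described above.

```python
from collections import Counter

def most_frequent_cse(coeffs, bits=8):
    """Return (relative_shift, count) of the most frequent 2-term pattern.

    A 2-term sub-expression is a pair of set bits within one coefficient;
    its signature here is just the distance between the bits, so equal
    shifts across coefficients can share one shift-and-add structure.
    """
    counts = Counter()
    for c in coeffs:
        positions = [i for i in range(bits) if (c >> i) & 1]
        for a in range(len(positions)):
            for b in range(a + 1, len(positions)):
                counts[positions[b] - positions[a]] += 1
    # Counter.most_common breaks ties by insertion order; the thesis
    # substitutes tie-breakers minimising delay elements or adder depth.
    shift, freq = counts.most_common(1)[0]
    return shift, freq
```

Each occurrence of the winning pattern is then replaced by a single shared adder output, shortening the shift-and-add network.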
638

Aspects of List-of-Two Decoding

Eriksson, Jonas January 2006 (has links)
We study the problem of list decoding with focus on the case when the list size is limited to two. Under this restriction we derive general lower bounds on the maximum possible size of a list-of-2-decodable code. We study the set of correctable error patterns in an attempt to obtain a characterization. For a special family of Reed-Solomon codes - which we identify and name 'class-I codes' - we give a weight-based characterization of the correctable error patterns under list-of-2 decoding. As a tool in this analysis we use the theoretical framework of Sudan's algorithm. The characterization is used in an exact calculation of the probability of transmission error on the symmetric channel when list-of-2 decoding is used. The results from the analysis, together with complementary simulations for QAM systems, show that a list-of-2 decoding gain of nearly 1 dB can be achieved.

Further, we study Sudan's algorithm for list decoding of Reed-Solomon codes in the special case of class-I codes. For these codes, algorithms are suggested for both the first and second steps of Sudan's algorithm, and hardware solutions for both steps based on the derived algorithms are presented.
639

Identification of Driving Styles in Buses

Karginova, Nadezda January 2010 (has links)
It is important to detect faults in bus components at an early stage. Because the driving style affects the wear and breakdown of different components in the bus, identification of the driving style is important to minimise the number of failures in buses.

The identification of the driver's driving style was based on input data containing examples of driving runs of each class. K-nearest-neighbour and neural-network algorithms were used, and different models were tested.

It was shown that the results depend on the selected driving runs. A hypothesis was suggested that examples from different driving runs have different parameters which affect the results of the classification.

The best results were achieved by using a subset of variables chosen with the help of a forward feature selection procedure. The percentage of correct classifications is about 89-90% for the k-nearest-neighbour algorithm and 88-93% for the neural networks.

Feature selection allowed a significant improvement in the results of the k-nearest-neighbour algorithm, and in the results of the neural-network algorithm when the training and testing data sets were selected from different driving runs. On the other hand, feature selection did not affect the neural-network results when the training and testing data sets were selected from the same driving runs.

Another way to improve the results is smoothing: computing the average class over a number of consecutive examples decreased the error.
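The forward feature selection wrapper described above greedily grows the variable subset as long as validation accuracy improves. A sketch with a plain k-nearest-neighbour classifier follows; the data is synthetic, and the thesis's bus signals and neural-network branch are not reproduced.

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=3):
    """Plain k-NN: majority vote among the k nearest training points."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in ytr[idx]])

def forward_selection(Xtr, ytr, Xval, yval, k=3):
    """Greedy forward feature selection: repeatedly add the feature that
    most improves validation accuracy; stop when nothing improves."""
    chosen, best_acc = [], 0.0
    remaining = list(range(Xtr.shape[1]))
    while remaining:
        scores = []
        for f in remaining:
            cols = chosen + [f]
            acc = np.mean(knn_predict(Xtr[:, cols], ytr, Xval[:, cols], k) == yval)
            scores.append((acc, f))
        acc, f = max(scores)
        if acc <= best_acc:
            break
        chosen.append(f)
        remaining.remove(f)
        best_acc = acc
    return chosen, best_acc
```

Because the wrapper scores whole feature subsets on held-out runs, it naturally captures the run-dependence effect the abstract reports: the chosen subset can differ when validation data comes from different driving runs.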
640

Investigation of an optimal utilization of Ultra-wide band measurements for position purposes

Siripi, Vishnu Vardhan January 2006 (has links)
Ultra-wideband (UWB) communication systems are systems whose bandwidth is many times greater than that of "narrowband" systems (signals occupying only a small portion of the radio spectrum). Thanks to its extremely large bandwidth, immunity to multi-path fading, and penetration through concrete blocks and other obstacles, UWB can be used indoors for high-data-rate communications, or at very low data rates over substantial link distances. UWB can also be used for short-distance ranging, with applications including asset location in a warehouse, position location for wireless sensor networks, and collision avoidance.

In order to verify analytical and simulation results against real-world measurements, experimental UWB systems are needed. The Institute of Communications Engineering [IANT] has developed a low-cost experimental UWB positioning system to test UWB-based positioning concepts. The mobile devices use the avalanche effect of transistors for simple generation of bi-phase pulses and are TDMA multi-user capable. The receiver is implemented in software and employs coherent cross-correlation with peak detection to localise the mobile unit via Time-Difference-Of-Arrival (TDOA) algorithms. Since the signal power of a proposed UWB system is spread over a very wide bandwidth, multiple existing narrowband systems may interfere with the UWB spectrum at their allocated frequencies. The goal of the filters discussed in this project is to cancel or suppress the interference while not distorting the desired signal. To investigate the interference, we develop an algorithm to calculate the interference tones. In this thesis, we assume the interference to be narrowband interference (NBI), modelled as sinusoidal tones with unknown amplitude, frequency and phase. If the interference tones were known, they could be removed using a simple notch filter; here, we choose an adaptive filter so that it can track and cancel the interference tones automatically. I tested adaptive filtering techniques for interference cancellation, namely the LMS algorithm and the Adaptive Noise Cancellation (ANC) technique, and compared the performance of both filters.
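The approach above can be sketched as a two-weight adaptive noise canceller whose reference input is a quadrature pair at the interference frequency, with LMS adapting the weights. This assumes the tone frequency has already been estimated; its unknown amplitude and phase are absorbed by the two weights.

```python
import numpy as np

def lms_notch(received, f0, fs, mu=0.01):
    """LMS adaptive noise cancellation of a single narrowband tone.

    The filter output w @ ref tracks the interference; the error signal
    (received minus tracked tone) is the cleaned wideband signal.
    """
    n = np.arange(len(received))
    ref = np.column_stack([np.sin(2 * np.pi * f0 * n / fs),
                           np.cos(2 * np.pi * f0 * n / fs)])
    w = np.zeros(2)
    err = np.empty(len(received))
    for i in range(len(received)):
        y = w @ ref[i]                  # current estimate of the interference
        err[i] = received[i] - y        # error = desired wideband signal
        w += 2 * mu * err[i] * ref[i]   # LMS weight update
    return err
```

With a sinusoidal reference this structure behaves as a notch filter centred at f0 whose bandwidth is set by the step size mu, which is why it can follow slow drifts in the tone without manual retuning.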
