41

Dynamic Classification Using the Adaptive Competitive Algorithm

Deldadehasl, Maryam 01 December 2023 (has links) (PDF)
The Vector Quantization (VQ) model offers a powerful solution for data clustering. Its design combines concepts from machine learning and dynamical systems theory to classify input data into distinct groups. The model evolves over time to better match the distribution of the input data. This adaptive feature is a strength of the model, as it allows the cluster centers to shift according to the input patterns, effectively quantizing the data distribution. It is a gradient dynamical system that uses the energy function V as its Lyapunov function, and thus possesses convergence and stability properties. These characteristics make the VQ model a promising tool for complex data analysis tasks, including those encountered in machine learning, data mining, and pattern recognition. In this study, we apply the dynamic model to the "Breast Cancer Wisconsin Diagnostic" dataset, a comprehensive collection of features derived from digitized images of fine needle aspirates (FNA) of breast masses. This dataset, comprising various diagnostic measurements related to breast cancer, poses a unique challenge for clustering due to its high dimensionality and the critical nature of its application in medical diagnostics. By employing the model, we aim to demonstrate its efficacy in handling complex, multidimensional data, especially in the realm of medical pattern recognition and data mining. This integration not only highlights the model's versatility across domains but also showcases its potential to contribute significantly to medical diagnostics, particularly breast cancer identification and classification.
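
The abstract stops short of the update rule; as a rough sketch (not the thesis's exact formulation), adaptive competitive VQ can be read as stochastic gradient descent on the quantization energy V. The learning-rate schedule and cluster count below are illustrative assumptions:

```python
import numpy as np

def competitive_vq(X, n_clusters=2, n_epochs=50, lr0=0.5, seed=0):
    """Online competitive VQ: each input pulls its nearest center toward it.

    This is stochastic gradient descent on the quantization energy
    V(W) = 0.5 * sum_i min_j ||x_i - w_j||^2, which serves as a Lyapunov
    function: V decreases in expectation, so the centers converge.
    """
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_clusters, replace=False)].copy()
    for epoch in range(n_epochs):
        lr = lr0 / (1 + epoch)           # decaying step size (assumed schedule)
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(((W - x) ** 2).sum(axis=1))  # winning center
            W[j] += lr * (x - W[j])      # move the winner toward the input
    return W

# Toy usage on two Gaussian blobs standing in for the WDBC feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])
centers = competitive_vq(X, n_clusters=2)
labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
print(centers.round(2), np.bincount(labels))
```

Because each update reduces the winner's squared distance to the input, the cluster centers settle into the data distribution, which is the convergence property the abstract attributes to the gradient dynamical system.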
42

Design of Keyword Spotting System Based on Segmental Time Warping of Quantized Features

Karmacharya, Piush January 2012 (has links)
Keyword Spotting in general means identifying a keyword in a verbal or written document. In this research, a novel approach to designing a simple spoken Keyword Spotting/Recognition system based on Template Matching is proposed, which differs from the Hidden Markov Model based systems most widely used today. The system can be used equally efficiently on any language, as it does not rely on an underlying language model or grammatical constraints. The proposed method for keyword spotting is based on a modified version of classical Dynamic Time Warping, which has been a primary method for measuring the similarity between two sequences varying in time. For processing, a speech signal is divided into small stationary frames. Each frame is represented by a quantized feature vector, so both the keyword and the speech utterance are represented as 1-dimensional sequences of codebook indices. The utterance is divided into segments, and the warped distance is computed for each segment and compared against the test keyword. A distortion score for each segment is computed as a likelihood measure of the keyword. The proposed algorithm is designed to take advantage of multiple instances of the test keyword (if available) by merging the scores for all keywords used. The training method for the proposed system is completely unsupervised, i.e., it requires neither a language model nor a phoneme model for keyword spotting. Prior unsupervised training algorithms were based on computing Gaussian Posteriorgrams, making the training process complex, but the proposed algorithm requires minimal training data, and the system can also be adapted to a different environment (language, noise level, recording medium, etc.) by re-training the original cluster on additional data. Techniques for designing a model keyword from multiple instances of the test keyword are discussed. System performance over variations of different parameters, such as the number of clusters and the number of available keyword instances, was studied in order to optimize the speed and accuracy of the system. The system performance was evaluated for fourteen different keywords from the CallHome and Switchboard speech corpora. Results varied for different keywords, and a maximum accuracy of 90% was obtained, which is comparable to other methods using the same time warping algorithms on Gaussian Posteriorgrams. Results are compared for different parameter variations with suggestions of possible improvements. / Electrical and Computer Engineering
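
As an illustration of the segmental matching idea, here is a minimal sketch under assumed details (a 0/1 symbol distance between codebook indices and a fixed segment stride, neither of which is specified above):

```python
import numpy as np

def dtw_distance(a, b, dist):
    """Classic DTW between two index sequences with a symbol-distance table."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist[a[i - 1], b[j - 1]]
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)  # length-normalized distortion score

def spot_keyword(utterance, keyword, dist, stride=5):
    """Slide a keyword-sized segment over the utterance; low score = likely hit."""
    L = len(keyword)
    scores = []
    for start in range(0, max(1, len(utterance) - L + 1), stride):
        seg = utterance[start:start + L]
        scores.append((dtw_distance(seg, keyword, dist), start))
    return min(scores)  # (best distortion, position in the utterance)

# Toy usage: 16-word codebook, distance = 0/1 (match/mismatch) between indices.
K = 16
dist = 1.0 - np.eye(K)
rng = np.random.default_rng(0)
keyword = rng.integers(0, K, 12)
utterance = np.concatenate([rng.integers(0, K, 40), keyword, rng.integers(0, K, 40)])
print(spot_keyword(utterance, keyword, dist))  # finds the embedded keyword
```

Merging scores from multiple keyword instances, as the abstract suggests, would amount to taking the minimum (or average) of the per-instance distortion scores at each position.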
43

Speech Coder using Line Spectral Frequencies of Cascaded Second Order Predictors

Namburu, Visala 14 November 2001 (has links)
A major objective in speech coding is to represent speech with as few bits as possible. Usual transmission parameters include autoregressive parameters, pitch parameters, excitation signals, and excitation gains. The pitch predictor makes these coders sensitive to channel errors. Aiming for robustness to channel errors, we do not use pitch prediction and compensate for its absence with a better representation of the excitation signal. We propose a new speech coding approach, Vector Sum Excited Cascaded Linear Prediction (VSECLP), based on code excited linear prediction. We implement forward linear prediction using five cascaded second-order sections, parameterized in terms of line spectral frequencies, in place of the conventional tenth-order filter. The line spectral frequency parameters estimated by the Direct Line Spectral Frequency (DLSF) adaptation algorithm are closer to the true values than those estimated by the Cascaded Recursive Least Squares - Subsection algorithm. A simplified version of DLSF is proposed to further reduce computational complexity. Split vector quantization is used to quantize the line spectral frequency parameters, and vector sum codebooks are used to quantize the excitation signals. The effect of an increased number of bits and of different split combinations on reconstructed speech quality and transmission rate is analyzed by testing VSECLP on the TIMIT database. Quantizing the excitation vectors using the discrete cosine transform resulted in a segmental signal-to-noise ratio of 4 dB at 20.95 kbps, whereas the same quality was obtained at 9.6 kbps using vector sum codebooks. / Master of Science
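
To make the cascade concrete, below is a minimal sketch of a prediction-error filter built from five second-order FIR sections parameterized directly by frequencies. The mapping 1 - 2cos(w)z^-1 + z^-2 is one standard LSF-style parameterization and is an assumption here; it is not the thesis's DLSF adaptation algorithm:

```python
import numpy as np
from scipy.signal import lfilter

def cascade_prediction_error(x, lsf):
    """Run x through a cascade of second-order FIR sections, one per frequency.

    Each section 1 - 2*cos(w)*z^-1 + z^-2 places a zero pair on the unit
    circle at angles +/- w, so five sections form a 10th-order prediction-
    error filter parameterized by five frequencies (illustrative assumption).
    """
    e = x
    for w in lsf:
        e = lfilter([1.0, -2.0 * np.cos(w), 1.0], [1.0], e)
    return e

# Toy usage: whiten a synthetic two-tone signal.
fs = 8000
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
lsf = 2 * np.pi * np.array([300, 1200, 2000, 2800, 3400]) / fs  # radians/sample
residual = cascade_prediction_error(x, lsf)
print(np.var(x), np.var(residual))  # residual energy drops sharply
```

The appeal of the cascade form, as the abstract notes, is that each section is controlled by a single frequency parameter, which is convenient for adaptation and for split vector quantization.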
44

Shortening time-series power flow simulations for cost-benefit analysis of LV network operation with PV feed-in

López, Claudio David January 2015 (has links)
Time-series power flow simulations are consecutive power flow calculations on each time step of a set of load and generation profiles that represent the time horizon under which a network needs to be analyzed. These simulations are one of the fundamental tools for carrying out cost-benefit analyses of grid planning and operation strategies in the presence of distributed energy resources; unfortunately, their execution time is quite substantial. In the specific case of cost-benefit analyses, the execution time of time-series power flow simulations can easily become excessive, as typical time horizons are on the order of a year and different scenarios need to be compared, which results in time-series simulations that require a rather large number of individual power flow calculations. It is often the case that only a set of aggregated simulation outputs is required for assessing grid operation costs, examples of which are total network losses, power exchange through MV/LV substation transformers, and total power provision from PV generators. Exploring alternatives to running time-series power flow simulations with complete input data, ones that approximate the required results with an accuracy suitable for cost-benefit analyses but require less time to compute, can thus be beneficial. This thesis explores and compares different methods for shortening time-series power flow simulations by reducing the amount of input data and thus the required number of individual power flow calculations, and focuses on two of them: one reduces the time resolution of the input profiles through downsampling, while the other finds similar time steps in the input profiles through vector quantization and simulates them only once. The results show that considerable execution time reductions and sufficiently accurate results can be obtained with both methods, but vector quantization requires much less data to produce the same level of accuracy as downsampling. Vector quantization delivers a far superior trade-off between data reduction, time savings, and accuracy when the simulations consider voltage control or when more than one simulation with the same input data is required, as in such cases the data reduction process needs to be carried out only once. One disadvantage of these methods is that they do not reproduce peak values in the result profiles accurately, due to the way downsampling disregards certain time steps in the input profiles and to the averaging effect vector quantization has on them. This disadvantage makes the shortened simulations less precise, for example, for detecting voltage violations.
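
A minimal sketch of the vector-quantization shortcut described above: cluster the per-step (load, PV) pairs, run the expensive calculation once per centroid, and weight each result by its cluster population. The power flow here is a toy stand-in; a real study would call a network solver:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def expensive_power_flow(step):
    """Stand-in for a full power flow; returns network losses for one step."""
    load, pv = step
    net = load - pv
    return 0.02 * net ** 2  # toy quadratic loss model (assumption)

rng = np.random.default_rng(0)
hours = np.arange(8760)  # one year of hourly steps
load = 1.0 + 0.4 * np.sin(2 * np.pi * hours / 24) + 0.05 * rng.standard_normal(8760)
pv = np.clip(np.sin(2 * np.pi * (hours % 24 - 6) / 24), 0, None)
steps = np.column_stack([load, pv])

# Full simulation: one power flow per time step (8760 calls).
full_losses = sum(expensive_power_flow(s) for s in steps)

# VQ-shortened simulation: one power flow per centroid, weighted by count.
k = 50
centroids, labels = kmeans2(steps, k, seed=0, minit="++")
counts = np.bincount(labels, minlength=k)
vq_losses = sum(counts[j] * expensive_power_flow(centroids[j]) for j in range(k))

print(full_losses, vq_losses)  # aggregated outputs agree closely
```

The averaging effect the abstract mentions is visible here: each centroid is a mean of similar time steps, so aggregated outputs are well approximated while extreme individual steps (peaks) are smoothed away.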
45

An empirical analysis of scenario generation methods for stochastic optimization

Löhndorf, Nils 17 May 2016 (has links) (PDF)
This work presents an empirical analysis of popular scenario generation methods for stochastic optimization, including quasi-Monte Carlo, moment matching, and methods based on probability metrics, as well as a new method referred to as Voronoi cell sampling. Solution quality is assessed by measuring the error that arises from using scenarios to solve a multi-dimensional newsvendor problem, for which analytical solutions are available. In addition to the expected value, the work also studies scenario quality when minimizing the expected shortfall using the conditional value-at-risk. To quickly solve problems with millions of random parameters, a reformulation of the risk-averse newsvendor problem is proposed which can be solved via Benders decomposition. The empirical analysis identifies Voronoi cell sampling as the method that provides the lowest errors, with particularly good results for heavy-tailed distributions. A controversial finding concerns evidence for the ineffectiveness of widely used methods based on minimizing probability metrics under high-dimensional randomness.
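
As a worked illustration of how scenario quality can be measured against an analytical optimum, the sketch below uses a one-dimensional newsvendor, where the true solution is the critical-fractile quantile. K-means quantization stands in for the quantization-based methods discussed (it is not the thesis's Voronoi cell sampling), and the demand distribution and costs are assumptions:

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import lognorm

# One-dimensional newsvendor: order q at cost c, sell min(q, demand) at price p.
c, p = 3.0, 5.0
crit = (p - c) / p                      # critical fractile
demand_dist = lognorm(s=0.8)            # heavy-tailed demand (assumed)
q_star = demand_dist.ppf(crit)          # analytic optimum

def profit(q, demand, weights):
    return np.sum(weights * (p * np.minimum(q, demand) - c * q))

def scenario_optimum(demand, weights):
    # With discrete scenarios, the optimum is a weighted critical-fractile quantile.
    order = np.argsort(demand)
    cum = np.cumsum(weights[order])
    return demand[order][np.searchsorted(cum, crit)]

rng = np.random.default_rng(0)
sample = demand_dist.rvs(100_000, random_state=rng)
w_full = np.full(len(sample), 1 / len(sample))
best = profit(q_star, sample, w_full)

for n in (10, 50, 250):
    # Plain Monte Carlo scenarios...
    mc = demand_dist.rvs(n, random_state=rng)
    q_mc = scenario_optimum(mc, np.full(n, 1 / n))
    # ...versus quantization-based scenarios with cell-probability weights.
    cent, lab = kmeans2(sample.reshape(-1, 1), n, seed=0, minit="++")
    q_vq = scenario_optimum(cent.ravel(), np.bincount(lab, minlength=n) / len(sample))
    print(n, best - profit(q_mc, sample, w_full), best - profit(q_vq, sample, w_full))
```

The printed gaps are the suboptimality errors the study measures; quantization-based scenario sets typically shrink them faster than plain sampling, in line with the reported results.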
46

Vektorkvantisering för kodning och brusreducering / Vector quantization for coding and noise reduction

Cronvall, Per January 2004 (has links)
This thesis explores the possibilities of avoiding the issues generally associated with compression of noisy imagery through the use of vector quantization. By utilizing the learning aspects of vector quantization, image processing operations such as noise reduction can be implemented in a straightforward way. Several techniques are presented and evaluated. A direct comparison shows that for noisy imagery, vector quantization, in spite of its simplicity, has clear advantages over MPEG-4 encoding.
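
A minimal sketch of the denoising mechanism the abstract alludes to: quantizing image patches to a learned codebook averages many noisy patches into each codeword, so the reconstruction suppresses noise. The patch size, codebook size, and the use of scipy's kmeans2 as the codebook learner are assumptions:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def vq_denoise(img, patch=4, codebook_size=64, seed=0):
    """Denoise by quantizing non-overlapping patches to a learned codebook.

    Each codeword is the mean of many noisy patches, so substituting the
    nearest codeword for each patch averages the noise away. Patch and
    codebook sizes are illustrative assumptions.
    """
    h, w = (img.shape[0] // patch) * patch, (img.shape[1] // patch) * patch
    blocks = (img[:h, :w]
              .reshape(h // patch, patch, w // patch, patch)
              .swapaxes(1, 2)
              .reshape(-1, patch * patch))
    codebook, labels = kmeans2(blocks, codebook_size, seed=seed, minit="++")
    return (codebook[labels]
            .reshape(h // patch, w // patch, patch, patch)
            .swapaxes(1, 2)
            .reshape(h, w))

# Toy usage: a smooth gradient image corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.add.outer(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = vq_denoise(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

The same codebook substitution performs compression (store only the codeword indices), which is why coding and noise reduction combine naturally in one scheme.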
48

An Improved C-Fuzzy Decision Tree and its Application to Vector Quantization

Chiu, Hsin-Wei 27 July 2006 (has links)
Over the last hundred years, mankind has invented many convenient tools in pursuit of a beautiful and comfortable living environment. The computer is one of the most important of these inventions, and its computational ability far exceeds that of humans. Because computers can process large amounts of data quickly and accurately, this advantage is used to imitate human thinking, and artificial intelligence has developed extensively. Methods such as neural networks, data mining, and fuzzy logic are applied in many fields (e.g., fingerprint recognition, image compression, and antenna design). We investigate prediction techniques based on decision trees and fuzzy clustering. The fuzzy decision tree classifies data using a fuzzy clustering method and then constructs a decision tree to make predictions on data. However, in the original distance function, the target difference entered in inverse proportion, which can cause problems on some datasets. In addition, representing the output model of each leaf node by a constant restricts its capability to represent the data distribution in the node. We propose a more reasonable definition of the distance function that considers both input and target differences with a weighting factor. We also extend the output model of each leaf node to a local linear model and estimate the model parameters with a recursive SVD-based least squares estimator. Experimental results show that our improved version produces higher recognition rates and smaller mean square errors for classification and regression problems, respectively.
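
The two fixes the abstract describes can be sketched briefly: a distance that weights input and target differences (rather than letting the target difference enter inversely), and a local linear leaf model fitted by least squares. The weighting factor alpha and the batch lstsq fit (used here in place of the recursive SVD-based estimator) are assumptions:

```python
import numpy as np

def weighted_distance(x, y, center_x, center_y, alpha=0.7):
    """Clustering distance combining input and target differences.

    d = alpha * ||x - center_x||^2 + (1 - alpha) * (y - center_y)^2
    Weighting both terms avoids the original formulation, where the target
    difference appeared inversely and could misbehave on some datasets.
    (alpha is an assumed, tunable weighting factor.)
    """
    return alpha * np.sum((x - center_x) ** 2) + (1 - alpha) * (y - center_y) ** 2

def local_linear_fit(X, y):
    """Least-squares local linear model for a leaf node.

    Replaces the constant leaf output with y_hat = [1, x] @ beta; this is the
    batch analogue of the recursive SVD-based estimator in the thesis.
    """
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Toy usage: fit a leaf model and score a point against a cluster center.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 2))
y = 3.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.05 * rng.standard_normal(50)
print(local_linear_fit(X, y).round(2))  # approx [3, 2, -1]
print(weighted_distance(X[0], y[0], X.mean(axis=0), y.mean()))
```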
49

Projeto de classificadores de padrões baseados em protótipos usando evolução diferencial / On the efficient design of a prototype-based classifier using differential evolution

Luiz Soares de Andrade Filho 28 November 2014 (has links)
In this Master's dissertation we introduce an evolutionary approach for the efficient design of prototype-based classifiers using differential evolution (DE). For this purpose we amalgamate ideas from the Learning Vector Quantization (LVQ) framework for supervised classification by Kohonen (KOHONEN, 2001) with the DE-based automatic clustering approach by Das et al. (DAS; ABRAHAM; KONAR, 2008) in order to evolve supervised classifiers. The proposed approach is able to determine both the optimal number of prototypes per class and the corresponding positions of these prototypes in the data space. By means of comprehensive computer simulations on benchmark datasets, we show that the resulting classifier, named LVQ-DE, achieves results equivalent to, and often better than, state-of-the-art prototype-based classifiers, while using a much smaller number of prototypes.
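
A minimal sketch of the LVQ-DE idea under simplifying assumptions: each DE candidate encodes a fixed number of prototypes per class (the dissertation also evolves this number, which is omitted here), and fitness is the nearest-prototype classification error:

```python
import numpy as np

def nearest_prototype_error(protos, proto_labels, X, y):
    """Fraction misclassified by the nearest-prototype rule."""
    d = ((X[:, None, :] - protos[None]) ** 2).sum(-1)
    return np.mean(proto_labels[np.argmin(d, axis=1)] != y)

def lvq_de(X, y, per_class=2, pop=30, gens=100, F=0.8, CR=0.9, seed=0):
    """DE/rand/1/bin over flattened prototype positions (fixed count per class)."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.unique(y), per_class)   # class label of each prototype
    dim = len(labels) * X.shape[1]
    lo, hi = X.min(0), X.max(0)
    P = rng.uniform(np.tile(lo, len(labels)), np.tile(hi, len(labels)), (pop, dim))
    fit = np.array([nearest_prototype_error(p.reshape(-1, X.shape[1]), labels, X, y)
                    for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = P[a] + F * (P[b] - P[c])          # differential mutation
            cross = rng.random(dim) < CR               # binomial crossover mask
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, P[i])
            f = nearest_prototype_error(trial.reshape(-1, X.shape[1]), labels, X, y)
            if f <= fit[i]:                            # greedy selection
                P[i], fit[i] = trial, f
    best = P[np.argmin(fit)].reshape(-1, X.shape[1])
    return best, labels, fit.min()

# Toy usage: two Gaussian classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(3, 1, (60, 2))])
y = np.repeat([0, 1], 60)
protos, plabels, err = lvq_de(X, y)
print(err)  # training error of the evolved prototype set
```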
50

On Perception-Based Image Compression Schemes

Ramasubramanian, D 03 1900 (has links) (PDF)
No description available.
