
Partial Update Adaptive Filtering

Xie, Bei 25 April 2012 (has links)
Adaptive filters play an important role in fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity, O(N), and its simplicity of implementation. The least-squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), converge faster and reach a lower steady-state mean square error (MSE) than LMS, but their O(N^2) computational complexity makes them unsuitable for many real-time applications. A well-known approach to controlling computational complexity is to apply a partial update (PU) method, which reduces the complexity of an adaptive algorithm by updating only part of the weight vector instead of the entire vector, or by updating only part of the time; an analysis of the different PU adaptive filter algorithms is therefore necessary and meaningful. The deficient-length adaptive filter addresses a related situation in system identification, in which the estimated filter is shorter than the actual unknown system. It can be viewed as a PU adaptive filter in that it also updates only part of the weight vector; however, it updates the same part at every iteration, whereas a PU adaptive filter updates a different part at each iteration, so its performance differs. In this dissertation, basic PU methods are applied to adaptive filter algorithms that have not been fully addressed in the literature, including CMA1-2, NCMA, Least Squares CMA (LSCMA), EDS, and CG. A new PU method, the selective-sequential method, is developed for LSCMA. Mathematical derivation and performance analysis are provided, including the convergence condition, the steady-state mean and mean-square performance for a time-invariant system, and the steady-state mean and mean-square (tracking) performance for a time-varying system. Computational complexity is calculated for each PU method and each adaptive filter algorithm, and numerical examples compare the complexity of the PU adaptive filters with that of the full-update filters. Computer simulations, including system identification and channel equalization examples, verify the mathematical analysis and show the convergence behavior of the PU adaptive filter algorithms. The deficient-length RLS and EDS are also analyzed and compared with the PU adaptive filters.
Performance is compared between the original adaptive filter algorithms and the different partial-update methods, and among similar PU least-squares algorithms such as PU RLS, PU CG, and PU EDS. The performance of the deficient-length filter is also compared with that of the partial update filter. In addition to the generic applications of system identification and channel equalization, two special applications of PU adaptive filters are presented: detecting Global System for Mobile Communication (GSM) signals in a local GSM system built on the Open Base Transceiver Station (OpenBTS) and the Asterisk Private Branch Exchange (PBX), and image compression in a system combining hyperspectral image compression and classification. Overall, the PU adaptive filters can usually achieve performance comparable to the full-update filters, including a similar steady-state MSE, while significantly reducing the computational complexity. Among the PU methods, the MMax method has a convergence rate very close to that of the full update, while the sequential and stochastic methods converge more slowly. However, the MMax method does not always perform well with the LSCMA algorithm, for which the sequential method gives the best performance among the PU LSCMA variants. The PU CMA may even perform better than the full-update CMA in tracking a time-varying system. The MMax EDS converges faster than the MMax RLS and CG, and it reaches the same steady-state MSE at a lower computational complexity. The PU LMS and PU EDS also perform slightly better in the combined hyperspectral image compression and classification system.
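As a concrete illustration of the partial-update idea, below is a minimal Python sketch of an MMax PU-LMS filter: at each iteration only the M taps whose regressor entries have the largest magnitude are updated, which is what keeps the convergence rate close to full-update LMS. The filter length N, update count M, and step size mu are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def mmax_pu_lms(x, d, N=16, M=4, mu=0.01):
    """MMax partial-update LMS (generic sketch, not the dissertation's
    exact implementation): per iteration, update only the M taps whose
    regressor entries have the largest magnitude."""
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        u = x[n - N + 1:n + 1][::-1]       # regressor, most recent first
        e[n] = d[n] - w @ u                # a-priori error
        idx = np.argsort(np.abs(u))[-M:]   # indices of the M largest |u_i|
        w[idx] += mu * e[n] * u[idx]       # update only the selected taps
    return w, e
```

Setting M = N recovers full-update LMS, so the complexity/convergence trade-off is controlled by a single parameter.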

Improving audio intelligibility in intercom devices: Implementing an adaptive filter for noise reduction

Tran, Hieu, Lundqvist, Thomas January 2024 (has links)
Intercoms are often used in noisy environments. An example of such an environment is windy areas, where the operator inside a room may find it difficult to perceive speech from a user speaking through an intercom due to the surrounding high noise levels. Many intercoms and other similar devices encounter challenges and limitations, especially in terms of speed, size, resource management, and handling of dynamic signals. This project was carried out at a Swedish company specializing in network-based solutions for video surveillance and physical security. The project's objective was to study and implement an adaptive filter with an adaptive algorithm in C programming to complement a digital signal processing system, as a strategy to enhance sound quality by reducing noise in intercoms in challenging environments. By applying a suitable adaptive filter on a Raspberry Pi to simulate an intercom, the project aims to reduce noise and optimize speech clarity. Some of the most common filtering algorithms used in previous research to improve sound quality are Least mean square, Normalized least mean square, and Recursive least square, all of which are evaluated in this study. After thorough study, the Normalized least mean square algorithm was selected for implementation in this project. The performance of the algorithm is assessed using computation time, mean squared error, and signal-to-noise ratio in Matlab, along with user testing to ensure quality.
This project achieved its goals by developing a functional adaptive filter. It is recommended to implement the filter in an intercom where the microphones are not placed close to each other to prevent the capture of similar duplicate signals. Throughout the project, the system continuously handled data streams effectively in practical tests, confirming that it operated without delays. This demonstrated the adaptive filter's effectiveness in real applications, particularly in noisy environments.
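A two-microphone noise canceller of the kind described here can be sketched in a few lines. This is a minimal NLMS illustration under the assumption of a separate noise-reference microphone, not the thesis's C implementation; the filter length and step size are illustrative.

```python
import numpy as np

def nlms_noise_canceller(primary, reference, N=32, mu=0.5, eps=1e-8):
    """NLMS adaptive noise canceller (generic sketch): the filter estimates
    the noise component of the primary (speech + noise) channel from the
    reference (noise-only) channel; the error is the enhanced speech."""
    w = np.zeros(N)
    enhanced = np.zeros(len(primary))
    for n in range(N - 1, len(primary)):
        u = reference[n - N + 1:n + 1][::-1]   # reference regressor
        e = primary[n] - w @ u                 # enhanced speech sample
        w += (mu / (eps + u @ u)) * e * u      # power-normalized update
        enhanced[n] = e
    return enhanced
```

The normalization by the regressor power is a standard motivation for preferring NLMS over plain LMS when input levels vary, as they do in windy outdoor environments.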

Stochastic density ratio estimation and its application to feature selection / Estimação estocástica da razão de densidades e sua aplicação em seleção de atributos

Braga, Ígor Assis 23 October 2014 (has links)
The estimation of the ratio of two probability densities is an important statistical tool in supervised machine learning. In this work, we introduce new methods of density ratio estimation based on the solution of a multidimensional integral equation involving cumulative distribution functions. The resulting methods use the novel V-matrix, a concept that does not appear in previous density ratio estimation methods. Experiments demonstrate the good potential of this new approach relative to previous methods. Mutual Information (MI) estimation is a key component in feature selection and essentially depends on density ratio estimation. Using one of the methods of density ratio estimation proposed in this work, we derive a new estimator, VMI, and compare it experimentally to previously proposed MI estimators. Experiments conducted solely on mutual information estimation show that VMI compares favorably to previous estimators, and experiments applying MI estimation to feature selection in classification tasks show that better MI estimation leads to better feature selection performance. Parameter selection greatly impacts the classification accuracy of kernel-based Support Vector Machines (SVM). However, this step is often overlooked in experimental comparisons, for it is time consuming and requires familiarity with the inner workings of SVM. In this work, we propose procedures for SVM parameter selection that are economical in their running time. In addition, we propose the use of a non-linear kernel function, the min kernel, that can be applied to both low- and high-dimensional cases without adding another parameter to the selection process. The combination of the proposed parameter selection procedures and the min kernel yields a convenient way of economically extracting good classification performance from SVM. The Regularized Least Squares (RLS) regression method is another kernel method that depends on proper selection of its parameters. When training data is scarce, traditional parameter selection often leads to poor regression estimation. To mitigate this issue, we explore a kernel that is less susceptible to overfitting, the additive INK-splines kernel. We then consider alternative parameter selection methods to cross-validation that have been shown to perform well for other regression methods. Experiments conducted on real-world datasets show that the additive INK-splines kernel outperforms both the RBF and the previously proposed multiplicative INK-splines kernel. They also show that the alternative parameter selection procedures fail to consistently improve performance. Still, we find that the Finite Prediction Error method with the additive INK-splines kernel performs comparably to cross-validation.
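The min kernel referred to above has the simple closed form k(x, z) = sum_i min(x_i, z_i) for non-negative features (it is also known as the histogram intersection kernel). A minimal sketch of using it with a precomputed-kernel SVM follows; the data and hyperparameters are illustrative, not the author's experimental setup.

```python
import numpy as np
from sklearn.svm import SVC

def min_kernel(X, Z):
    """Gram matrix of the min (histogram-intersection) kernel:
    k(x, z) = sum_i min(x_i, z_i). Assumes non-negative features."""
    return np.minimum(X[:, None, :], Z[None, :, :]).sum(axis=2)

# illustrative usage with scikit-learn's precomputed-kernel interface
rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((100, 5)))   # toy non-negative features
y = (X.sum(axis=1) > 4).astype(int)         # toy labels
clf = SVC(kernel="precomputed", C=1.0).fit(min_kernel(X, X), y)
pred = clf.predict(min_kernel(X, X))        # Gram matrix: test rows vs. training columns
```

Because the kernel has no width parameter of its own, only the SVM's C needs to be selected, which is the property the abstract highlights.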

Estimativa robusta da frequência cardíaca a partir de sinais de fotopletismografia de pulso / Robust heart rate estimation from pulse photoplethysmography signals

Benetti, Tiago 31 August 2018 (has links)
Heart rate monitoring using Photoplethysmography (PPG) signals acquired from the individual's pulse has become popular due to the emergence of numerous low-cost wearable devices. However, monitoring during physical activity is hindered by the influence of motion artifacts on PPG signals. The objective of this work is to introduce a new algorithm capable of removing motion artifacts and estimating heart rate from pulse PPG signals. Normalized Least Mean Square (NLMS) and Recursive Least Squares (RLS) algorithms are proposed for an adaptive filtering structure that uses acceleration signals as a reference to remove motion artifacts. The algorithm uses the periodogram of the filtered signals to extract heart rate estimates, which are combined with a PPG signal quality index at the input of a Kalman filter. Specific heuristics and the quality index allow the Kalman filter to provide a heart rate estimate with high accuracy and robustness to measurement uncertainty. The algorithm was validated against the heart rate obtained from electrocardiography signals, and the proposed method with the RLS algorithm presented the best results, with a mean absolute error of 1.54 beats per minute (bpm) and a standard deviation of 0.62 bpm, recorded for 12 individuals running on a treadmill at varying speeds. These results make the performance of the algorithm comparable to, and in some cases better than, several recently developed methods in this field. In addition, the algorithm presented a low computational cost, suitable for the time interval in which the heart rate estimate is produced. It is therefore expected that this algorithm can improve heart rate estimation in currently available wearable devices.
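The final fusion stage described above can be illustrated with a scalar Kalman filter in which the measurement noise is inflated when the PPG signal quality index is low, so poor-quality periodogram readings pull the estimate less. This is a minimal sketch of that idea; the noise values and the inverse-quality weighting are illustrative assumptions, not the thesis's tuned heuristics.

```python
import numpy as np

def kalman_heart_rate(hr_meas, quality, q=0.05, r0=4.0):
    """Scalar Kalman filter for heart-rate tracking (generic sketch).
    hr_meas: periodogram-based heart-rate measurements in bpm.
    quality: signal quality index per measurement, in (0, 1]."""
    hr = hr_meas[0]       # state estimate (bpm), random-walk model
    p = 1.0               # estimate variance
    track = []
    for z, sqi in zip(hr_meas, quality):
        p += q                           # predict step
        r = r0 / max(sqi, 1e-3)          # low quality -> large meas. noise
        k = p / (p + r)                  # Kalman gain
        hr += k * (z - hr)               # update step
        p *= (1.0 - k)
        track.append(hr)
    return np.array(track)
```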

Contrôle adaptatif des feux de signalisation dans les carrefours : modélisation du système de trafic dynamique et approches de résolution / Adaptive traffic signal control at intersections: dynamic traffic system modeling and algorithms

Yin, Biao 11 December 2015 (has links)
Adaptive traffic signal control is a decision-making optimization problem, studied continually in order to relieve traffic congestion at urban intersections, and intelligent algorithms are widely used to improve control performance measures such as traffic delay. In this thesis, we study this problem comprehensively with a microscopic, discrete-time dynamic model and investigate the related algorithms both for isolated intersections and for distributed network control. We first focus on dynamic modeling for adaptive traffic signal control and network loading problems. The proposed adaptive phase sequence (APS) mode is highlighted as one of the signal phase control mechanisms. The signal control problem at intersections is formulated as a Markov decision process (MDP), and the concept of a tunable system state is proposed for traffic network coordination. Moreover, a new vehicle-following model supports the network loading environment. Based on this model, the signal control methods in the thesis are studied through optimal and near-optimal algorithms in turn. Two exact dynamic programming (DP) algorithms are investigated, and the results show limitations of the DP solution when a large state space appears in complex cases.
Because of the computational burden and the unknown model information in dynamic programming (notably the exact vehicle-arrival information at the intersection), an approximate dynamic programming (ADP) approach is suggested. Finally, an online near-optimal algorithm using ADP with RLS-TD(λ) is adopted. In simulation experiments, especially with the integration of APS, the proposed algorithm shows clear advantages in performance measures and computational efficiency.
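For reference, a minimal sketch of the RLS-TD(λ) value-function update used in the ADP stage is given below. It follows the generic form of the algorithm (a recursive least-squares solution of the TD fixed point with eligibility traces); the feature map, discount, trace decay, and initialization are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

class RLSTDLambda:
    """RLS-TD(lambda) linear value-function estimation (generic sketch)."""
    def __init__(self, n_features, gamma=0.95, lam=0.8, delta=10.0):
        self.theta = np.zeros(n_features)     # value-function weights
        self.P = delta * np.eye(n_features)   # inverse-correlation matrix
        self.z = np.zeros(n_features)         # eligibility trace
        self.gamma, self.lam = gamma, lam

    def update(self, phi, phi_next, reward):
        """One transition (phi -> phi_next, reward); returns V(phi)."""
        self.z = self.gamma * self.lam * self.z + phi
        d = phi - self.gamma * phi_next       # TD feature difference
        Pz = self.P @ self.z
        k = Pz / (1.0 + d @ Pz)               # RLS gain vector
        self.theta += k * (reward - d @ self.theta)
        self.P -= np.outer(k, d @ self.P)     # rank-1 downdate
        return phi @ self.theta
```

Relative to gradient TD(λ), the recursive least-squares form trades O(n^2) memory for much faster convergence per sample, which matters when each sample is an expensive traffic-simulation step.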

Model-Based Stripmap Synthetic Aperture Radar Processing

West, Roger D 01 May 2011 (has links)
Synthetic aperture radar (SAR) is a type of remote sensor that provides its own illumination and is capable of forming high resolution images of the reflectivity of a scene. The reflectivity that is measured depends on the choice of carrier frequency; different carrier frequencies will yield different images of the same scene. There are different modes for SAR sensors; two common modes are spotlight mode and stripmap mode. Furthermore, SAR sensors can either transmit a signal continuously or transmit pulses at some pulse repetition frequency (PRF). The work in this dissertation is for pulsed stripmap SAR sensors. The resolvable limit of closely spaced reflectors in range is determined by the bandwidth of the transmitted signal, and the resolvable limit in azimuth is determined by the bandwidth of the induced azimuth signal, which is strongly dependent on the length of the physical antenna on the SAR sensor. The point-spread function (PSF) of a SAR system is determined by these resolvable limits and is limited by the physical attributes of the SAR sensor. The PSF of a SAR system can be defined in different ways. For example, it can be defined in terms of the SAR system including the image processing algorithm. Under this definition, the PSF is an algorithm-specific sinc-like function and produces the bright, star-like artifacts that are noticeable around strong reflectors in the focused image. The PSF can also be defined in terms of just the SAR system, before any image processing algorithm is applied. This second definition of the PSF is used in this dissertation. Under this definition, the bright, algorithm-specific, star-like artifacts are denoted the inter-pixel interference (IPI) of the algorithm. In other words, the second definition of the PSF combined with the algorithm-dependent IPI is a decomposition of the first definition of the PSF. A new comprehensive forward model for stripmap SAR is derived in this dissertation. New image formation methods that invert this forward model are derived, and it is shown that the IPI that corrupts traditionally processed stripmap SAR images can be removed. The removal of the IPI can increase the resolvability to the resolution limit, thus making image analysis much easier. SAR data is inherently corrupted by uncompensated phase errors. These phase errors lower the contrast of the image and corrupt the azimuth processing, which inhibits proper focusing (to the point of the reconstructed image being unusable). If these phase errors are not compensated for, the images formed by system inversion are useless as well. A model-based autofocus method is also derived in this dissertation that complements the forward model and corrects these phase errors before system inversion.
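The difference between matched filtering and model inversion can be seen in a toy linear model. The sketch below uses a random complex matrix as a stand-in for a discretized SAR forward operator (purely illustrative, not the dissertation's model): backprojection A^H y leaves sidelobe-like inter-pixel interference around the point reflectors, while a regularized least-squares inverse suppresses it.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50
# Hypothetical discretized forward operator: scene reflectivity -> data
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x = np.zeros(n, dtype=complex)
x[[10, 30]] = [1.0, 0.5]                       # two point reflectors
noise = 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
y = A @ x + noise                              # measured data

backproj = A.conj().T @ y                      # matched filter: IPI-like sidelobes
lam = 1e-2                                     # Tikhonov regularization weight
x_hat = np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ y)
```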

Blind Adaptive DS-CDMA Receivers with Sliding Window Constant Modulus GSC-RLS Algorithm Based on Min/Max Criterion for Time-Variant Channels

Chang, Shih-chi 26 July 2006 (has links)
The code division multiple access (CDMA) system implemented by the direct-sequence (DS) spread spectrum (SS) technique is one of the most promising multiplexing technologies for wireless communications services. SS communication transmits the information over a bandwidth much wider than that strictly necessary. In the DS-CDMA system, performance may degrade due to the interference inherent in the system structure, referred to as multiple access interference (MAI). Moreover, for DS-CDMA systems over frequency-selective fading channels, inter-symbol interference (ISI) arises, so a multiuser RAKE receiver has to be employed to combat the ISI as well as the MAI. In a practical wireless communication environment, several communication systems may operate in the same area at the same time. In this thesis, we consider a DS-CDMA environment in which asynchronous narrowband interference (NBI) from other systems suddenly joins the CDMA system. In general, when a system with adaptive detectors is working in a stable state, a suddenly joined NBI signal can cause the system performance to collapse. Under such circumstances, the existing conventional adaptive RAKE detectors may not track the rapidly changing NBI well in the presence of ISI and MAI. Adaptive filtering algorithms based on sliding window linearly constrained recursive least squares (SW LC-RLS) are known to be very attractive in rapidly changing environments. The main contribution of this thesis is a novel sliding window constant modulus RLS (SW CM-RLS) algorithm, based on the min/max criterion, to deal with the NBI for DS-CDMA systems over multipath channels. For simplicity and lower system complexity, the generalized sidelobe canceller (GSC) structure is employed, and the resulting scheme is referred to as the SW CM-GSC-RLS algorithm. The SW CM-GSC-RLS algorithm is designed to alleviate the effect of NBI. It has the advantages of faster convergence and better tracking ability, and it achieves the desired performance even when the NBI suddenly joins the system under channel mismatch. At the end of the thesis, we extend the idea of the proposed algorithm to the space-time DS-CDMA RAKE receiver, in which an adaptive beamformer is combined with the temporal-domain DS-CDMA receiver. Computer simulation results show that the new proposed schemes outperform the conventional CM GSC-RLS algorithm as well as the GSC-RLS algorithm (the so-called LCMV approach) in terms of the mean square error of the channel impulse response estimate, the output signal-to-interference-plus-noise ratio, and the bit error rate.
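For context, the constant-modulus criterion underlying the proposed detector penalizes deviations of the filter output from a constant envelope, J = E[(|y|^2 - R2)^2]. The sketch below is the baseline stochastic-gradient CMA equalizer that minimizes this cost; the thesis instead solves a sliding-window least-squares version of the criterion within a GSC structure, and the filter length and step size here are illustrative.

```python
import numpy as np

def cma_equalizer(x, N=11, mu=1e-3, R2=1.0):
    """Baseline CMA(2,2) blind equalizer (generic gradient sketch):
    minimizes E[(|y|^2 - R2)^2] without a training sequence."""
    w = np.zeros(N, dtype=complex)
    w[N // 2] = 1.0                        # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(N - 1, len(x)):
        u = x[n - N + 1:n + 1][::-1]       # received-signal regressor
        y[n] = np.vdot(w, u)               # filter output w^H u
        err = y[n] * (np.abs(y[n])**2 - R2)
        w -= mu * np.conj(err) * u         # stochastic-gradient CM update
    return w, y
```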

Adaptive dim point target detection and tracking in infrared images

DeMars, Thomas V. 12 1900 (has links)
Approved for public release; distribution is unlimited / The thesis deals with the detection and tracking of dim point targets in infrared images. Research topics include image process modeling with adaptive two-dimensional Least Mean Square (LMS) and Recursive Least Squares (RLS) prediction filters. Target detection is performed by significance testing the prediction error residual. A pulse tracker is developed which may be adjusted to discriminate target dynamics. The methods are applicable to detection and tracking in other spectral bands. / http://archive.org/details/adaptivedimpoint00dema / Major, United States Marine Corps
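The detection scheme can be sketched as follows, assuming a causal 2-D LMS predictor and a global residual-variance threshold (both generic choices for illustration, not the thesis's exact filter support or test statistic): pixels that the adaptive predictor cannot explain, such as dim point targets against slowly varying clutter, leave large residuals and are flagged.

```python
import numpy as np

def lms2d_detect(img, K=2, mu=1e-4, thresh=4.0):
    """2-D LMS prediction-error detector (generic sketch): predict each
    pixel from the K rows above it, adapt the weights, and flag pixels
    whose residual exceeds `thresh` residual standard deviations."""
    H, W = img.shape
    w = np.zeros((K, 2 * K + 1))            # causal support above the pixel
    resid = np.zeros((H, W))
    for i in range(K, H):
        for j in range(K, W - K):
            patch = img[i - K:i, j - K:j + K + 1]   # already-scanned pixels
            e = img[i, j] - np.sum(w * patch)       # prediction residual
            w += mu * e * patch                     # 2-D LMS update
            resid[i, j] = e
    return np.abs(resid) > thresh * resid.std()     # detection mask
```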

A Cable-Actuated Robotic Lumbar Spine as the Haptic Interface for Palpatory Training of Medical Students

Karadogan, Ernur January 2011 (has links)
No description available.
