21 |
Test Case Selection in Continuous Integration Using Reinforcement Learning with Linear Function Approximator. Salman, Younus, January 2023 (has links)
Continuous Integration (CI) has become an essential practice in software development, allowing teams to integrate code changes frequently and detect issues early. However, selecting the proper test cases for CI remains a challenge, as it requires balancing the need for thorough testing with the minimization of execution time and resources. This study proposes a practical and lightweight approach that leverages Reinforcement Learning with a linear function approximator for test case selection in CI. Several models are created, each focusing on a different feature set. The proposed method aims to optimize the selection of test cases by learning from past CI outcomes (both the historical data of the test cases and the coverage data of the source code) and by dynamically adapting the models to newly encountered test cases and modified source code. Through experimentation and comparison between the models, the study demonstrates which feature set is optimal and efficient. The results indicate that Reinforcement Learning with a linear function approximator using coverage information can effectively assist in selecting test cases in CI, leading to enhanced software quality and development efficiency.
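As a rough illustration of the kind of model the abstract describes, the Python sketch below scores test cases with a linear function approximator and updates its weights from a CI outcome. The feature set (recent failure rate, time since last execution, coverage overlap with changed files), the reward definition, and the step size are assumptions made here for illustration, not the models evaluated in the thesis.

    import numpy as np

    # Hypothetical per-test-case features: recent failure rate, time since last
    # execution, and coverage overlap with the changed files (all assumed).
    def score(theta, features):
        # Linear value estimate used to rank a test case.
        return float(np.dot(theta, features))

    def update(theta, features, reward, alpha=0.05):
        # One gradient step toward the observed CI outcome; reward is 1.0 when
        # the executed test case failed (i.e., it was worth selecting), else 0.0.
        error = reward - score(theta, features)
        return theta + alpha * error * features

    theta = np.zeros(3)
    candidates = {
        "test_login": np.array([0.4, 0.9, 0.7]),
        "test_report": np.array([0.0, 0.2, 0.1]),
    }

    # Rank the candidates, "run" the top one, then learn from its outcome.
    ranked = sorted(candidates, key=lambda t: score(theta, candidates[t]), reverse=True)
    theta = update(theta, candidates[ranked[0]], reward=1.0)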
|
22 |
An Approach Based on Wavelet Decomposition and Neural Network for ECG Noise Reduction. Poungponsri, Suranai, 01 June 2009 (has links) (PDF)
Electrocardiogram (ECG) signal processing has been the subject of intense research in recent years, due to its strategic place in the detection of several cardiac pathologies. However, the ECG signal is frequently corrupted by different types of noise, such as 60 Hz power-line interference, baseline drift, electrode movement, and motion artifacts. In this thesis, a hybrid two-stage model combining wavelet decomposition and an artificial neural network is proposed for ECG noise reduction, exploiting the excellent localization properties of the wavelet transform and the adaptive learning ability of the neural network. Results from the simulations validate the effectiveness of the proposed method. Simulation results on actual ECG signals from the MIT-BIH arrhythmia database [30] show that this approach yields an improvement over the unfiltered signal in terms of signal-to-noise ratio (SNR).
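For intuition only, the Python sketch below (assuming the PyWavelets package is available) splits a synthetic noisy signal into wavelet subbands and adapts a single linear combining layer against a clean reference. The thesis itself uses a multilayer neural network and actual MIT-BIH recordings, so the wavelet choice, the signal model, and the single-layer combiner here are simplifying assumptions.

    import numpy as np
    import pywt  # PyWavelets, assumed available

    def subband_signals(x, wavelet="db4", level=4):
        # Reconstruct each wavelet subband of x as a separate full-length signal.
        coeffs = pywt.wavedec(x, wavelet, level=level)
        bands = []
        for i in range(len(coeffs)):
            keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            bands.append(pywt.waverec(keep, wavelet)[: len(x)])
        return np.array(bands)  # shape: (level + 1, len(x))

    # Synthetic stand-in for a noisy ECG: a clean reference plus 60 Hz
    # interference and baseline drift (assumed values, not MIT-BIH data).
    t = np.arange(0, 5, 1 / 360)  # 5 s at 360 Hz
    clean = np.sin(2 * np.pi * 1.2 * t)
    noisy = clean + 0.3 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 0.3 * t)

    bands = subband_signals(noisy)
    w = np.zeros(bands.shape[0])  # adaptive combiner weights
    mu = 0.01
    for n in range(len(t)):       # LMS-style adaptation toward the clean reference
        y = w @ bands[:, n]
        w += mu * (clean[n] - y) * bands[:, n]

    denoised = w @ bands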
|
23 |
Hierarchical Reinforcement Learning with Function Approximation for Adaptive Control. Skelly, Margaret Mary, 08 April 2004 (has links)
No description available.
|
24 |
Built-In Self Training of Hardware-Based Neural Networks. Anderson, Thomas, January 2017 (has links)
No description available.
|
25 |
Non-Wiener Effects in Narrowband Interference Mitigation Using Adaptive Transversal Equalizers. Ikuma, Takeshi, 25 April 2007 (has links)
The least mean square (LMS) algorithm is widely expected to operate near the corresponding Wiener filter solution. An exception to this popular perception occurs when the algorithm is used to adapt a transversal equalizer in the presence of additive narrowband interference. The steady-state LMS equalizer behavior does not correspond to that of the fixed Wiener equalizer: the mean of its weights is different from the Wiener weights, and its mean squared error (MSE) performance may be significantly better than the Wiener performance. The contributions of this study serve to better understand this so-called non-Wiener phenomenon of the LMS and normalized LMS adaptive transversal equalizers.
The first contribution is the analysis of the mean of the LMS weights in steady state, assuming a large interference-to-signal ratio (ISR). The analysis is based on the Butterweck expansion of the weight update equation. The equalization problem is transformed to an equivalent interference estimation problem to make the analysis of the Butterweck expansion tractable. The analytical results are valid for all step-sizes. Simulation results are included to support the analytical results and show that the analytical results predict the simulation results very well, over a wide range of ISR.
The second contribution is a new MSE estimator based on the expression for the mean of the LMS equalizer weight vector. The new estimator shows a vast improvement over the Reuter-Zeidler MSE estimator. For the development of the new MSE estimator, the transfer function approximation of the LMS algorithm is generalized for the steady-state analysis of the LMS algorithm. This generalization also revealed the cause of the breakdown of the MSE estimators when the interference is not strong: the assumption that the variation of the weight vector around its mean is small relative to the mean of the weight vector itself no longer holds.
Both the expression for the mean of the weight vector and the MSE estimator are first derived for the LMS algorithm. The results are then extended to the normalized LMS algorithm by a simple redefinition of the adaptation step-size. / Ph. D.
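For reference, a minimal Python sketch of the standard LMS transversal-equalizer adaptation that the abstract builds on is given below. The toy BPSK symbols, two-tap channel, equalizer length, decision delay, and step size are assumptions, and the sketch does not reproduce the narrowband-interference setting or the Butterweck-expansion analysis of the thesis.

    import numpy as np

    def lms_equalizer(received, desired, num_taps=11, mu=0.01, delay=5):
        # Standard LMS adaptation of a transversal (FIR) equalizer:
        # w[n+1] = w[n] + mu * e[n] * u[n], with e[n] = d[n - delay] - w[n]^T u[n].
        w = np.zeros(num_taps)
        out = np.zeros(len(received))
        for n in range(num_taps - 1, len(received)):
            u = received[n - num_taps + 1 : n + 1][::-1]  # regressor, newest sample first
            out[n] = w @ u
            e = desired[n - delay] - out[n]               # training with a decision delay
            w += mu * e * u
        return w, out

    # Toy setup (assumed): BPSK symbols through a simple 2-tap channel plus noise.
    rng = np.random.default_rng(0)
    symbols = rng.choice([-1.0, 1.0], size=5000)
    received = (np.convolve(symbols, [1.0, 0.4])[: len(symbols)]
                + 0.05 * rng.standard_normal(len(symbols)))
    weights, equalized = lms_equalizer(received, symbols)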
|
26 |
Low Power and Low Complexity Shift-and-Add Based Computations. Johansson, Kenny, January 2008 (has links)
The main issue in this thesis is to minimize the energy consumption per operation for the arithmetic parts of DSP circuits, such as digital filters. More specifically, the focus is on single- and multiple-constant multiplications, which are realized using shift-and-add based computations. The possibilities to reduce the complexity, i.e., the chip area, and the energy consumption are investigated. Both serial and parallel arithmetic are considered. The main difference, which is of interest here, is that shift operations in serial arithmetic require flip-flops, while shifts can be hardwired in parallel arithmetic.

The possible ways to connect a given number of adders are limited. Thus, for single-constant multiplication, the number of shift-and-add structures is finite. We show that it is possible to save both adders and shifts compared to traditional multipliers. Two algorithms for multiple-constant multiplication using serial arithmetic are proposed. For both algorithms, the total complexity is decreased compared to one of the best-known algorithms designed for parallel arithmetic. Furthermore, the impact of the digit-size, i.e., the number of bits to be processed in parallel, is studied for FIR filters implemented using serial arithmetic. Case studies indicate that the minimum energy consumption per sample is often obtained for a digit-size of around four bits.

The energy consumption is proportional to the switching activity, i.e., the average number of transitions between the two logic levels per clock cycle. To achieve low power designs, it is necessary to develop accurate high-level models that can be used to estimate the switching activity. A method for computing the switching activity in bit-serial constant multipliers is proposed.

For parallel arithmetic, a detailed complexity model for constant multiplication is introduced. The model counts the required number of full and half adder cells. It is shown that the complexity can be significantly reduced by considering the interconnection between the adders. A main factor for energy consumption in constant multipliers is the adder depth, i.e., the number of cascaded adders. The reason for this is that the switching activity will increase when glitches are propagated to subsequent adders. We propose an algorithm where all multiplier coefficients are guaranteed to be realized at the theoretically lowest depth possible. Implementation examples show that the energy consumption is significantly reduced using this algorithm compared to solutions with fewer word-level adders.

For most applications, the input data are correlated since real-world signals are processed. A data-dependent switching activity model is derived for ripple-carry adders. Furthermore, a switching activity model for the single-adder multiplier is proposed. This is a good starting point for accurate modeling of shift-and-add based computations using more adders.

Finally, a method to rewrite an arbitrary function as a sum of weighted bit-products is presented. It is shown that for many elementary functions, a majority of the bit-products can be neglected while still maintaining reasonably high accuracy, since the weights are significantly smaller than the allowed error. The function approximation algorithms can be implemented using a low-complexity architecture, which can easily be pipelined to an arbitrary degree for increased throughput.
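As a small illustration of the shift-and-add idea for single-constant multiplication, the Python sketch below recodes a constant into canonical signed-digit (CSD) form and realizes the product with shifts plus additions/subtractions, then compares the adder count against the plain binary decomposition. The example constants are arbitrary, and this is not the thesis's multiple-constant or minimum-depth algorithm.

    def csd_digits(c):
        # Canonical signed-digit recoding of a non-negative integer (LSB first).
        # No two adjacent digits are non-zero, which tends to reduce the number
        # of adders/subtractors needed.
        digits = []
        while c:
            if c & 1:
                d = 2 - (c & 3)      # +1 if c mod 4 == 1, -1 if c mod 4 == 3
                c -= d
            else:
                d = 0
            digits.append(d)
            c >>= 1
        return digits

    def shift_add_multiply(x, c):
        # Multiply x by the constant c using only shifts, additions and subtractions.
        acc = 0
        for k, d in enumerate(csd_digits(c)):
            if d:
                acc += d * (x << k)  # each non-zero digit costs one adder/subtractor
        return acc

    for c in (7, 45, 231):
        ones = bin(c).count("1")
        nz = sum(1 for d in csd_digits(c) if d)
        print(c, shift_add_multiply(3, c) == 3 * c,
              f"binary adders: {ones - 1}, CSD adders: {nz - 1}")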
|
27 |
Continuous Time and Discrete Time Fractional Order Adaptive Control for a Class of Nonlinear Systems. Aburakhis, Mohamed Khalifa I, Dr, 26 September 2019 (has links)
No description available.
|
28 |
Simulation-based optimization of Hybrid Systems Using Derivative Free Optimization Techniques. Jayakumar, Adithya, 27 December 2018 (has links)
No description available.
|
29 |
Particle Swarm Optimization Stability Analysis. Djaneye-Boundjou, Ouboti Seydou Eyanaa, January 2013 (has links)
No description available.
|
30 |
Proposição e avaliação de algoritmos de filtragem adaptativa baseados na rede de Kohonen / Proposition and evaluation of adaptive filtering algorithms based on the Kohonen network. Luis Gustavo Mota Souza, 02 June 2007 (has links)
Because it employs an unsupervised learning algorithm, the Kohonen Self-Organizing Map (SOM) has traditionally been applied in signal processing to vector quantization tasks, while MLP (Multi-Layer Perceptron) and RBF (Radial Basis Function) networks dominate applications that require the approximation of input-output mappings. This type of application is commonly found in adaptive filtering tasks, which can be framed from the perspective of direct and inverse system modeling, such as the identification and equalization of communication channels. In this dissertation, the range of applications of the SOM is extended through the proposition of neural adaptive filters based on this network, showing that they are viable alternatives to the nonlinear filters based on MLP and RBF networks. This becomes possible thanks to a recently proposed technique called Vector-Quantized Temporal Associative Memory (VQTAM), which essentially uses the SOM training philosophy to perform simultaneous vector quantization of the input and output spaces of the filtering problem at hand. Building on the VQTAM technique, three SOM-based adaptive filter architectures are proposed, and their performance is evaluated in identification and equalization tasks for nonlinear channels. The channel used in the simulations was modeled as a first-order Gauss-Markov autoregressive process, contaminated with white Gaussian noise and subject to a saturation (sigmoidal) nonlinearity. The results show that SOM-based adaptive filters perform as well as or better than traditional linear transversal filters and the nonlinear filters based on the MLP network.
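To make the VQTAM idea concrete, the Python sketch below implements a 1-D SOM whose prototypes are split into an input part (used for winner selection) and an output part (used as the filter output), trained on a toy saturating channel. The map size, learning-rate and neighborhood schedules, and the synthetic data are assumptions made here for illustration rather than the dissertation's actual setup.

    import numpy as np

    class VQTAMFilter:
        # Minimal 1-D SOM with split codebooks (VQTAM-style): w_in is matched
        # against the input regressor, w_out stores the associated desired output.
        def __init__(self, n_units=20, dim_in=5, seed=0):
            rng = np.random.default_rng(seed)
            self.w_in = rng.standard_normal((n_units, dim_in)) * 0.1
            self.w_out = np.zeros(n_units)
            self.n = n_units

        def _neighborhood(self, winner, sigma):
            idx = np.arange(self.n)
            return np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))

        def train_step(self, x, d, lr=0.1, sigma=2.0):
            winner = np.argmin(np.linalg.norm(self.w_in - x, axis=1))  # input part only
            h = self._neighborhood(winner, sigma)
            self.w_in += lr * h[:, None] * (x - self.w_in)
            self.w_out += lr * h * (d - self.w_out)

        def predict(self, x):
            winner = np.argmin(np.linalg.norm(self.w_in - x, axis=1))
            return self.w_out[winner]

    # Toy use: identify a saturating (tanh) channel from input/output pairs (assumed data).
    rng = np.random.default_rng(1)
    u = rng.standard_normal(2000)
    y = np.tanh(np.convolve(u, [1.0, 0.5])[: len(u)]) + 0.01 * rng.standard_normal(len(u))

    f = VQTAMFilter()
    for n in range(4, len(u)):
        x = u[n - 4 : n + 1]                               # regressor u[n-4..n]
        f.train_step(x, y[n], lr=0.1, sigma=2.0 * np.exp(-n / 1000))
    y_hat = f.predict(u[-5:])                              # estimate for the last sample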
|