741

A Study of Process Parameter Optimization for BIC Steel

Tsai, Jeh-Hsin 06 February 2006 (has links)
Taguchi methods, also called quality engineering, form a systematic methodology for product design (modification) and process design (improvement) that saves cost and time while satisfying customer requirements. Taguchi's parameter design, also known as robust design, has the merits of low cost and high efficiency; it supports the activities of product quality design, management, and improvement, and consequently reinforces the competitive ability of a business. How to apply parameter design effectively, shorten research time, bring low-cost and high-quality products to market early, and reinforce competitive advantage is therefore a worthwhile research topic. However, parameter design optimization problems are difficult in practice because (1) complex and nonlinear relationships exist among the system's inputs, outputs, and parameters; (2) interactions may occur among parameters; (3) in Taguchi's two-phase optimization procedure, the adjustment factor cannot be guaranteed to exist; and (4) for various reasons, data may be lost or never available, and Taguchi's method does not handle such incomplete data well. Neural networks have learning capacity, fault tolerance, and model-free characteristics, which make them a competitive tool for multivariable input-output modeling; successful application fields include diagnostics, robotics, scheduling, decision-making, and prediction. In the search for an optimum, a genetic algorithm can avoid being trapped in local optima and so enhances the possibility of reaching the global optimum. This study drew the key parameters from spheroidizing theory, and L18 and L9 orthogonal experimental arrays were applied to determine the optimal operating parameters through signal-to-noise (S/N) analysis. The conclusions are summarized as follows: 1. The spheroidizing of AISI 3130 used to have the highest rate of unqualified product and required a second annealing treatment; the operational record before improvement showed that 83 tons of 3130 steel required the second treatment. The optimal operating parameters were determined with an L18(6^1×3^5) orthogonal experimental array. The control parameter of the annealing temperature was at B2
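As an illustration of the S/N analysis step, the following is a minimal sketch — not the thesis's actual procedure or data — that computes a larger-the-better S/N ratio for hypothetical replicated responses at three levels of one control factor and picks the level with the highest ratio:

```python
import numpy as np

def sn_larger_the_better(y):
    """Signal-to-noise ratio (dB) for a 'larger-the-better' response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicated responses (e.g., spheroidizing quality scores)
# for three levels of one control factor, two runs per level.
runs = {
    "B1": [82.0, 85.0],
    "B2": [93.0, 95.0],
    "B3": [88.0, 86.0],
}

# Pick the level with the highest S/N ratio as the (tentative) optimum.
sn = {level: sn_larger_the_better(y) for level, y in runs.items()}
best = max(sn, key=sn.get)
for level, value in sn.items():
    print(f"{level}: S/N = {value:.2f} dB")
print("Best level by S/N:", best)
```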
742

Generation of Fuzzy Classification Systems using Genetic Algorithms

Lee, Cheng-Tsung 20 February 2006 (has links)
In this thesis, we propose an improved fuzzy GBML (genetic-based machine learning) algorithm to construct an FRBCS (fuzzy rule-based classification system) for pattern classification problems. The existing hybrid fuzzy GBML algorithm consumes more computational time because it uses the SS fuzzy model and combines the Michigan-style algorithm with the Pittsburgh-style algorithm to increase the latter's convergence rate. By contrast, our improved fuzzy GBML algorithm consumes less computational time because it uses the MW fuzzy model and replaces the role of the Michigan-style algorithm with a heuristic procedure. Experimental results show that the improved fuzzy GBML algorithm achieves shorter computational time, a faster convergence rate, and a slightly better classification rate.
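To make the FRBCS idea concrete, here is a minimal, hypothetical fuzzy rule-based classifier — toy membership functions, rule base, and weights, not the MW or SS fuzzy models used in the thesis — that assigns the class of the rule with the highest firing strength:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy partitions per feature: (a, b, c) for "small" and "large".
partitions = {"small": (0.0, 0.0, 1.0), "large": (0.0, 1.0, 1.0)}

# Hypothetical rule base: antecedent label per feature -> (class, rule weight).
rules = [
    (("small", "small"), ("class_A", 0.9)),
    (("large", "small"), ("class_B", 0.8)),
    (("large", "large"), ("class_B", 0.7)),
]

def classify(x):
    """Winner-take-all: the rule with the highest firing strength decides."""
    best_class, best_score = None, 0.0
    for antecedent, (label, weight) in rules:
        strength = weight
        for value, term in zip(x, antecedent):
            strength *= tri(value, *partitions[term])
        if strength > best_score:
            best_class, best_score = label, strength
    return best_class

print(classify([0.2, 0.3]))  # class_A under these toy rules
print(classify([0.9, 0.8]))  # class_B under these toy rules
```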
743

Modified GML Algorithm with Simulated Annealing for Estimation of Signal Arrival Time in WPAN Systems

Chang, Lun-Kai 27 July 2006 (has links)
The main purpose of this thesis is to estimate the signal arrival time in low-rate wireless personal area network systems. In a dense multipath environment, the generalized maximum-likelihood (GML) algorithm can be used for time-of-arrival (TOA) estimation. Nevertheless, the GML algorithm is very time-consuming and sometimes fails to converge. Hence, a simplified scheme that improves the algorithm is investigated. In the simplified scheme, the search is executed sequentially, and two threshold parameters determine the stopping condition: one threshold is on the arrival time of the estimated path, while the other is on its fading amplitude. The thresholds can be determined from the minimum error probability, defined as the sum of the false-alarm probability and the missing probability. Root-mean-square error statistics are then used to refine the threshold settings, with candidate threshold pairs evaluated over an appropriate range. Because computing the root-mean-square error for every candidate pair is expensive, simulated annealing is adopted to search for the best threshold pair, avoiding an exhaustive evaluation of all possible solutions over a large range. Simulation results show that, when the signal-to-noise ratio is greater than or equal to 4 dB, the proposed scheme achieves better performance than the root-mean-square error statistics scheme.
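The threshold search can be illustrated with a generic simulated annealing sketch; the cost function below is a placeholder stand-in, not the thesis's channel-simulation RMSE, and the bounds and cooling schedule are assumptions:

```python
import math
import random

def cost(th_time, th_amp):
    """Placeholder for the RMSE of TOA estimates obtained with this
    threshold pair; in the thesis it would come from channel simulations."""
    return (th_time - 3.0) ** 2 + (th_amp - 0.4) ** 2

def anneal(bounds, t0=1.0, alpha=0.95, steps=500):
    """Simulated annealing over a (time, amplitude) threshold pair."""
    x = [random.uniform(lo, hi) for lo, hi in bounds]
    fx = cost(*x)
    best, best_cost = list(x), fx
    temp = t0
    for _ in range(steps):
        # Propose a small random move, clipped to the search bounds.
        cand = [min(max(xi + random.gauss(0, 0.1 * (hi - lo)), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        fc = cost(*cand)
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < best_cost:
                best, best_cost = list(x), fx
        temp *= alpha
    return best, best_cost

thresholds, rmse = anneal(bounds=[(0.0, 10.0), (0.0, 1.0)])
print("best (time, amplitude) thresholds:", thresholds, "with cost", rmse)
```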
744

An Automated Method for Resource Testing

Chen, Po-Kai 27 July 2006 (has links)
This thesis introduces a method that combines automated test data generation techniques with high volume testing and resource monitoring. High volume testing repeats test cases many times, simulating extended execution intervals. These testing techniques have been found useful for uncovering errors resulting from component coordination problems, as well as system resource consumption (e.g. memory leaks) or corruption. Coupling automated test data generation with high volume testing and resource monitoring could make this approach more scalable and effective in the field.
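A rough sketch of high-volume testing with resource monitoring, using Python's standard tracemalloc module and a deliberately leaky toy function; both are illustrative assumptions, not the thesis's tooling:

```python
import tracemalloc

def leaky_operation(cache=[]):
    """Hypothetical unit under test; the shared default-list cache leaks memory."""
    cache.append(bytearray(1024))

def high_volume_test(test_case, repetitions=10_000, sample_every=1_000):
    """Repeat one test case many times and sample traced memory to spot
    monotonic growth that a single execution would not reveal."""
    tracemalloc.start()
    samples = []
    for i in range(1, repetitions + 1):
        test_case()
        if i % sample_every == 0:
            current, _peak = tracemalloc.get_traced_memory()
            samples.append(current)
    tracemalloc.stop()
    return samples

samples = high_volume_test(leaky_operation)
growing = all(b >= a for a, b in zip(samples, samples[1:]))
print("memory samples (bytes):", samples)
print("suspected leak:", growing)
```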
745

Empirical study on strategy for Regression Testing

Hsu, Pai-Hung 03 August 2006 (has links)
Software testing plays a necessary role in software development and maintenance, and is performed to support quality assurance. Most test engineers design a number of test suites to exercise their programs manually, but designing test data by hand is an expensive and labor-intensive process. For this reason, how to generate software test data automatically has become a hot issue, and most studies use meta-heuristic search methods such as genetic algorithms or simulated annealing to obtain the test data. In most circumstances, test engineers generate a test suite when they have a new program; when they debug it or change some code to produce a new version, they design yet another test suite, and hardly anyone retains the original test data for reuse. In this research, we investigate whether it is useful to store the original test data.
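A toy illustration of the question studied here — storing a generated test suite and re-running it on a modified program version. The programs, file name, and oracle below are hypothetical, not taken from the thesis:

```python
import json
from pathlib import Path

def program_v1(x):
    return x * 2

def program_v2(x):          # modified version under regression test
    return x * 2 if x >= 0 else 0

SUITE = Path("suite.json")

def save_suite(inputs, oracle):
    """Store generated inputs together with their expected outputs."""
    cases = [{"input": x, "expected": oracle(x)} for x in inputs]
    SUITE.write_text(json.dumps(cases))

def rerun_suite(program):
    """Reuse the stored suite on a new program version."""
    cases = json.loads(SUITE.read_text())
    failures = [c for c in cases if program(c["input"]) != c["expected"]]
    return len(cases), failures

save_suite(inputs=[-3, 0, 5, 11], oracle=program_v1)
total, failures = rerun_suite(program_v2)
print(f"{total - len(failures)}/{total} stored cases still pass")
for case in failures:
    print("regression on input", case["input"])
```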
746

The Validity Problem of Reverse Engineering Dynamic Systems

Chen, Jian-xun 15 August 2006 (has links)
High-throughput measurement devices for DNA, RNA, and proteins produce large amounts of information-rich data from biological dynamic systems. There is a need to reverse engineer these data to reveal the parameter/structure and behavior relationships implicit in them; ultimately, the complex interactions between the components that make up a system can then be better understood. However, issues of reverse engineering in bioinformatics, such as the algorithms used, the number of temporal samples, and whether the input data are continuous or discrete, have been discussed, but rarely with respect to the validity problem. We argue that, since the data available in reality are not perfect, the result of reverse engineering is affected by the imperfect data; if this is true, knowing how and to what extent this impacts the results of reverse engineering is an important issue. We choose parameter estimation as our reverse engineering task and develop a novel method to investigate this validity problem: the data we use deviate slightly from the real data at each data point, and we compare the results of reverse engineering with the target parameters, expecting that more error in the data introduces a more serious validity problem. Three artificial systems are used as a test bed to demonstrate our approach. The experimental results show that a minor deviation in the data may introduce a large deviation in the estimated parameters, so data error should not be ignored in reverse engineering. To learn more about this phenomenon, we further develop an analytical procedure to analyze the dynamics of the systems and see which characteristics contribute to this impact. Sensitivity tests, propagation analysis, and impact factor analysis are applied to the systems, and some qualitative rules that describe the relationship between the results of reverse engineering and the dynamics of the system are summarized. The findings of this exploratory research need further study for confirmation. Along this line of research, the biological meaning and the possible relationship between robustness and parameter variation in reverse engineering are worth studying in the future, and a better reverse engineering algorithm that avoids this validity problem is another topic for future work.
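The core experiment can be sketched generically: fit a simple model to data perturbed by increasing noise and observe how the estimated parameters drift. The decay model, noise levels, and use of scipy's curve_fit below are illustrative assumptions, not the thesis's systems or estimation method:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, k, a):
    """Simple first-order decay, a stand-in for one equation of a
    biological dynamic system."""
    return a * np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 20)
true_k, true_a = 0.8, 2.0
clean = model(t, true_k, true_a)

for noise in (0.0, 0.01, 0.05):
    data = clean + rng.normal(0.0, noise, size=t.shape)  # perturbed "measurements"
    (k_est, a_est), _ = curve_fit(model, t, data, p0=(1.0, 1.0))
    print(f"noise={noise:.2f}  k={k_est:.3f}  a={a_est:.3f}")
```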
747

GA-based Fractal Image Compression and Active Contour Model

Wu, Ming-Sheng 01 January 2007 (has links)
In this dissertation, several GA-based approaches for fractal image compression and the active contour model are proposed. The main drawback of classical fractal image compression is the long encoding time, and two methods are proposed here to address it. First, a schema genetic algorithm (SGA), in which the Schema Theorem is embedded in the GA, is proposed to reduce the encoding time. In the SGA, the genetic operators are adapted according to the Schema Theorem during the evolutionary process performed on the range blocks; we find that such a method can indeed speed up the encoder while preserving image quality. Moreover, based on the self-similarity of natural images, a spatial correlation genetic algorithm (SC-GA) is proposed to further reduce the encoding time. The SC-GA method has two stages: the first stage makes use of spatial correlations in the image, for both the domain pool and the range pool, to exploit local optima; the second stage operates on the whole image to explore more adequate similarities if the local optima are not satisfactory. Thus not only is the encoding further accelerated, but a higher compression ratio is also achieved: because the search space is limited to positions relative to previously matched blocks, fewer bits are required to record the offset of the domain block instead of its absolute position. Experimental comparisons of the two methods with full search, the traditional GA, and other GA search methods demonstrate that they can indeed reduce the encoding time substantially. The main drawback of the traditional active contour model (ACM) for extracting the contour of a given object is that the snake cannot converge to the concave regions of the object under consideration. An improved ACM algorithm is proposed in this dissertation to solve this problem. The algorithm is composed of two stages: in the first stage, the ACM with the traditional energy function guides the snake to converge to the object boundary except for the concave regions; in the second stage, for the control points that remain outside the concave regions, a proper energy template is chosen and added to the external energy, and the modified energy function moves the snake toward the concave regions so that the object of interest can be completely extracted. The experimental results show that, using this method, the snake can indeed completely extract the boundary of the given object at very low extra cost. In addition, for the case where the snake cannot precisely extract the object contour because the number of control points on the snake is insufficient, a GA-based ACM algorithm is presented. First, the improved ACM algorithm guides the snake to approximately extract the object boundary; then, using the evolutionary strategy of the GA, we attempt to extract the object boundary precisely by adding a few control points to the snake. Experimental results are likewise provided to show the performance of the method.
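As a rough sketch of the GA search behind fractal block matching — not the SGA or SC-GA algorithms themselves — the following evolves the origin of a domain block so that its 2x2-averaged contraction matches one range block of a toy random image; all sizes and GA settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64)).astype(float)  # toy grey image
R = 8                                                      # range block size
range_block = image[0:R, 0:R]                              # block to encode
LIMIT = image.shape[0] - 2 * R                             # max domain origin

def fitness(gene):
    """Negative MSE between the contracted domain block and the range block."""
    x, y = gene
    domain = image[y:y + 2 * R, x:x + 2 * R]
    contracted = domain.reshape(R, 2, R, 2).mean(axis=(1, 3))  # 2x2 averaging
    return -float(np.mean((contracted - range_block) ** 2))

def evolve(pop_size=20, generations=40, p_mut=0.3):
    pop = [tuple(int(v) for v in rng.integers(0, LIMIT + 1, size=2))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                        # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            pa, pb = rng.choice(len(elite), size=2, replace=False)
            child = [elite[pa][0], elite[pb][1]]            # one-point crossover
            if rng.random() < p_mut:                        # bounded mutation
                idx = int(rng.integers(0, 2))
                child[idx] = int(np.clip(child[idx] + rng.integers(-4, 5), 0, LIMIT))
            children.append(tuple(child))
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print("best domain block origin:", best, "MSE:", -fitness(best))
```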
748

Mobile Location Method Using Least Range and Clustering Techniques for NLOS Environments

Wang, Chien-chih 09 February 2007 (has links)
Mobile location has become a popular research topic since the number of applications relying on location information is growing rapidly. The U.S. Federal Communications Commission (FCC) ruling of 1996 mandating the location of mobile phones is one of the driving forces behind research and solutions in this area. In wireless communication systems, however, non-line-of-sight (NLOS) propagation is a key and difficult issue in improving mobile location estimation. We propose an efficient location algorithm that can mitigate the influence of NLOS error. First, based on the geometric relationship between the known positions of the base stations, the "Fermat point" theorem is used to collect candidate positions (CPs) of the mobile station. Then, a set of weighting parameters is computed using a density-based clustering method. Finally, the location of the mobile station is estimated by solving for the optimal solution of the weighted objective function. Different distributions of NLOS error models are used to evaluate the performance of this method. Simulation results show that the proposed least range measure (LRM) algorithm performs slightly better than the density-based clustering algorithm (DCA) and is superior to the range-based linear lines of position (LLOP) and range scaling (RSA) algorithms in location accuracy under different NLOS environments. The simulation results also satisfy the location accuracy requirements of Enhanced 911 (E-911).
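One ingredient of the method, the Fermat point of the base-station geometry, can be approximated by the Weiszfeld iteration for the geometric median. The coordinates below are hypothetical, and this sketch is only that single ingredient, not the full LRM algorithm:

```python
import numpy as np

def fermat_point(points, iterations=100, eps=1e-9):
    """Weiszfeld iteration for the geometric median (Fermat point) of the
    base-station positions."""
    pts = np.asarray(points, dtype=float)
    x = pts.mean(axis=0)                       # start at the centroid
    for _ in range(iterations):
        d = np.linalg.norm(pts - x, axis=1)
        d = np.maximum(d, eps)                 # avoid division by zero
        w = 1.0 / d
        x_new = (pts * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

# Hypothetical base-station coordinates (metres).
stations = [(0.0, 0.0), (1000.0, 0.0), (500.0, 900.0)]
print("Fermat point estimate:", fermat_point(stations))
```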
749

Some Common Subsequence Problems of Multiple Sequences and Their Applications

Huang, Kuo-Si 14 July 2007 (has links)
The longest common subsequence (LCS) problem is a famous and classical problem in computer science and molecular biology. The common subsequence of multiple sequences reveals the identical and similar parts of these sequences. This dissertation focuses on approximate algorithms for finding the LCS of $k$ input sequences (the $k$-LCS problem), the merged LCS problem, and the mosaic LCS problem. These three problems seek, respectively, the identical relationships among $k$ sequences, the interleaving relationship between a target sequence and a merged sequence built from a pair of sequences, and the mosaic relationship between a target sequence and a set of sequences. Given $k$ input sequences, the $k$-LCS problem is to find the LCS common to all of them. We first propose two $\sigma$-approximate algorithms for the $k$-LCS problem with time complexities $O(\sigma k n)$ and $O(\sigma^{2} k n + \sigma^{3} n)$, respectively, where $\sigma$ and $n$ are the alphabet size and the length of the sequences. Experimental results show that our algorithms for 2-LCS can serve as a good filter for selecting candidate sequences in database searching. Given a target sequence $T$ and a pair of merging sequences $A$ and $B$, the merged LCS problem is to find the LCS of $T$ and the optimally merged sequence obtained by merging $A$ and $B$ alternately; its goal is to find a merging that captures the interleaving relationship of the sequences. We first propose an algorithm with $O(n^{3})$ time for solving this problem, where $n$ is the sequence length. We further add block information about the input sequences in the blocked merged LCS problem, for which we propose an algorithm with time complexity $O(n^{2}m_{b})$, where $m_{b}$ is the number of blocks. Based on the S-table technique, we design an improved algorithm with $O(n^{2} + nm_{b}^{2})$ time. Additionally, we wish to capture the relationship between one sequence and a set of sequences. Given a target sequence $T$ and a set $S$ of source sequences, the mosaic LCS problem is to find the LCS of $T$ and a mosaic sequence $C$ composed of $k$ (possibly repeated) sequences from $S$. Based on the concept of break points in $T$, a divide-and-conquer algorithm is proposed with $O(n^{2}m|S| + n^{3}\log k)$ time, where $n$ and $m$ are the length of $T$ and the maximal length of sequences in $S$, respectively. Again, based on the S-table technique, an improved algorithm with $O(n(m+k)|S|)$ time is obtained by applying an efficient preprocessing step.
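For reference, a straightforward textbook sketch of the classic two-sequence LCS dynamic programme that these problems generalize; it is not the dissertation's approximate algorithms, and the example strings are arbitrary:

```python
def lcs(a, b):
    """Classic O(nm) dynamic programme for the LCS of two sequences."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover one LCS.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("GATTACA", "GCATGCU"))  # one LCS of length 4 for these toy strings
```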
750

A Fast Method with the Genetic Algorithm to Evaluate Power Delivery Networks

Lee, Fu-Tien 20 July 2007 (has links)
In recent high-speed digital circuits, simultaneous switching noise (SSN), or ground bounce noise (GBN), is induced by transient currents flowing between the power and ground planes during the state transitions of logic gates. To analyze the effect of GBN on power delivery systems effectively and accurately, the power/ground impedance is an important evaluation index: over the operating frequency band, the power impedance must stay below the target impedance. The typical way to suppress the SSN is to add decoupling capacitors that create a low-impedance path between the power and ground planes. Using the admittance matrix method, we can evaluate the effect of decoupling capacitors mounted on a PCB quickly and accurately, reducing the time spent in the empirical trial-and-error design cycle. To reduce the cost of decoupling capacitors, a genetic algorithm is employed to optimize their placement for suppressing the GBN. Decoupling capacitors are not effective in the GHz frequency range because of their inherent lead inductance, whereas an electromagnetic bandgap (EBG) structure can produce a stopband that prevents the noise from spreading at higher frequencies. Decoupling capacitors are therefore combined with the EBG structure, and the genetic algorithm is used to find the optimum placement for suppression of the SSN.
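A minimal sketch of the GA idea for decoupling-capacitor placement; the impedance model below is a toy surrogate (the thesis uses the admittance matrix method), and the site count, target impedance, and capacitor-count penalty are illustrative assumptions:

```python
import random

N_SITES = 16          # candidate decap mounting locations on the board
TARGET_Z = 0.1        # target impedance (ohms), illustrative value

def peak_impedance(placement):
    """Toy surrogate for the admittance-matrix evaluation of the board's
    peak power/ground impedance; decaps at low-index sites help most."""
    reduction = sum(1.0 / (i + 1) for i, bit in enumerate(placement) if bit)
    return 1.0 / (1.0 + reduction)

def cost(placement):
    z = peak_impedance(placement)
    penalty = 0.05 * sum(placement)               # cost of extra capacitors
    return max(z - TARGET_Z, 0.0) + penalty

def genetic_search(pop_size=30, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_SITES)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = genetic_search()
print("placement:", best, "peak Z:", round(peak_impedance(best), 3))
```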
