  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Generating Implicit Functions Model from Triangles Mesh Model by Using Genetic Algorithm

Chen, Ya-yun 09 October 2005 (has links)
The implicit function model is nowadays widely applied in fields that require 3D graphics, such as computer games, animation, and special-effects films. So far, most hardware still supports the polygon-mesh model rather than the implicit function model, so the polygon mesh remains the mainstream representation in computer graphics. Translation between the two representations has therefore become a new research topic. This paper presents a new method to translate a triangle mesh model into an implicit function model. The main idea is to use a binary space-partitioning (BSP) tree to divide the points and patches of the triangle mesh into a hierarchical structure. For each leaf node of this hierarchy we generate a corresponding implicit function, and these implicit functions are produced by a genetic algorithm. The internal nodes of the hierarchy are then combined with blending operators, which make the surface smooth and continuous. The proposed method reduces the amount of data substantially, because only the coefficients of the implicit surfaces are stored, and the genetic algorithm avoids the high computational complexity of a direct fit.
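As a rough illustration of the per-leaf fitting step described above (a sketch only; it does not reproduce the thesis's BSP tree or blending operators, and the ellipse, coefficients, and GA settings are all assumptions), a genetic algorithm can evolve the coefficients of an implicit function so that it vanishes on a leaf node's sample points:

```python
import random

# Toy sketch: evolve coefficients (a, b) of f(x, y) = a*x^2 + b*y^2 - 1 so
# that f vanishes on sample points from the ellipse x^2/4 + y^2 = 1
# (ideal individual: a = 0.25, b = 1.0).
random.seed(0)
points = [(2.0, 0.0), (0.0, 1.0), (1.4142, 0.7071), (-1.4142, 0.7071)]

def fitness(ind):
    a, b = ind
    # Sum of absolute residuals at the sample points; 0 means the implicit
    # curve passes exactly through all of them (smaller is better).
    return sum(abs(a * x * x + b * y * y - 1.0) for x, y in points)

def evolve(pop_size=40, generations=80):
    pop = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)  # blend crossover
            if random.random() < 0.5:                # gaussian mutation
                child = (child[0] + random.gauss(0, 0.05),
                         child[1] + random.gauss(0, 0.05))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best_coeffs = evolve()
```

In a full pipeline of this kind, one such fit would run per leaf node, and only the resulting coefficient tuples would be stored, which is the data reduction the abstract refers to.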
132

A Study of Process Parameter Optimization for BIC Steel

Tsai, Jeh-Hsin 06 February 2006 (has links)
Taguchi methods, also called quality engineering, form a systematic methodology for product design (modification) and process design (improvement) that saves the most cost and time while satisfying customer requirements. Taguchi's parameter design, also known as robust design, has the merits of low cost and high efficiency; it supports product quality design, management, and improvement, and consequently strengthens a business's competitiveness. How to apply parameter design effectively, shorten research time, bring low-cost, high-quality products to market early, and reinforce competitive advantage is therefore a worthwhile research topic. In practice, however, parameter design optimization problems are difficult because (1) complex, nonlinear relationships exist among the system's inputs, outputs, and parameters; (2) interactions may occur among parameters; (3) in Taguchi's two-phase optimization procedure, the existence of an adjustment factor cannot be guaranteed in practice; and (4) data may be lost or never available, and Taguchi's method cannot handle such incomplete data well. Neural networks have learning capacity, fault tolerance, and model-free characteristics, which make them a competitive tool for multivariable input-output modeling; successful applications include diagnostics, robotics, scheduling, decision-making, and prediction. In the search for an optimum, a genetic algorithm can avoid being trapped in local optima and thus increases the chance of finding the global optimum. This study identified the key parameters from spheroidizing theory, and L18 and L9 orthogonal experimental arrays were applied to determine the optimal operating parameters through signal-to-noise (S/N) analysis. The conclusions are summarized as follows: 1.
The spheroidizing of AISI 3130 used to have the highest rate of unqualified product, requiring a second annealing treatment; operational records before the improvement show that 83 tons of 3130 steel required the second treatment. The optimal operating parameters were determined with an L18(6^1 x 3^5) orthogonal experimental array. The control parameter for the annealing temperature was at B2
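The S/N analysis mentioned above follows the standard Taguchi formulas; the sketch below shows the smaller-the-better and larger-the-better variants and ranks two hypothetical factor levels (the run data are made up for illustration, not taken from the thesis):

```python
import math

# Standard Taguchi signal-to-noise ratios; a higher S/N is always better.
def sn_smaller_is_better(ys):
    # S/N = -10 * log10(mean(y^2)); for responses that should be minimal,
    # e.g. an unqualified-product rate.
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_larger_is_better(ys):
    # S/N = -10 * log10(mean(1 / y^2)); for responses that should be maximal.
    return -10 * math.log10(sum(1 / (y * y) for y in ys) / len(ys))

# Rank two hypothetical annealing-temperature levels by S/N ratio
# (smaller-the-better, since the response is a defect rate):
level_runs = {"B1": [0.12, 0.15, 0.11], "B2": [0.05, 0.06, 0.04]}
best_level = max(level_runs, key=lambda k: sn_smaller_is_better(level_runs[k]))
```

In a real L18 analysis the same ranking is done per column of the orthogonal array, averaging S/N over all runs at each level of each factor.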
133

Generation of Fuzzy Classification Systems using Genetic Algorithms

Lee, Cheng-Tsung 20 February 2006 (has links)
In this thesis, we propose an improved fuzzy GBML (genetic-based machine learning) algorithm to construct an FRBCS (fuzzy rule-based classification system) for pattern classification problems. The existing hybrid fuzzy GBML algorithm consumes more computational time because it uses the SS fuzzy model and combines the Michigan-style algorithm with the Pittsburgh-style algorithm to raise the latter's convergence rate. By contrast, our improved fuzzy GBML algorithm consumes less computational time because it uses the MW fuzzy model and replaces the role of the Michigan-style algorithm with a heuristic procedure. Experimental results show that the improved fuzzy GBML algorithm achieves shorter computational time, a faster convergence rate, and a slightly better classification rate.
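For readers unfamiliar with the FRBCS being evolved, the sketch below shows the kind of rule base a fuzzy GBML algorithm searches for (the rules here are hand-written and the membership functions are plain triangles; the thesis's SS/MW fuzzy models and the GBML search itself are not reproduced):

```python
# Minimal fuzzy rule-based classifier: two features, two hand-written rules.
def tri(x, a, b, c):
    # Triangular membership function rising on [a, b] and falling on [b, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Each rule: (membership params for feature 0, params for feature 1, class).
rules = [
    ((0.0, 0.25, 0.5), (0.0, 0.25, 0.5), "A"),   # "if x0 is low and x1 is low"
    ((0.5, 0.75, 1.0), (0.5, 0.75, 1.0), "B"),   # "if x0 is high and x1 is high"
]

def classify(x0, x1):
    # Single-winner inference: the rule with the highest compatibility grade
    # (product of memberships) decides the class.
    best = max(rules, key=lambda r: tri(x0, *r[0]) * tri(x1, *r[1]))
    return best[2]
```

A GBML algorithm evolves exactly these ingredients: which fuzzy sets appear in each antecedent, and which class each rule predicts.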
134

A hybrid genetic algorithm for automatic test data generation

Wang, Hsiu-Chi 13 July 2006 (has links)
Automatic test data generation is a hot topic in recent software testing research, and various techniques have been proposed with different emphases. Most of them are based on genetic algorithms (GA), yet whether the GA is the best metaheuristic for this problem remains unclear. In this paper we take another approach and arm a GA with an intensive local searcher, yielding what recent terminology calls a memetic algorithm (MA). The idea of incorporating a local searcher is based on observations from many real-world programs. The results turn out to outperform many other known metaheuristic methods so far. We argue the need for local search in software testing in the discussion section of the paper.
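A minimal sketch of the idea, under stated assumptions: the branch predicate, the branch-distance fitness, and all GA settings below are invented for illustration and are not the paper's benchmark programs. The point is the structure: a GA population whose every individual is first refined by an intensive local search.

```python
import random

# Memetic search for a test input: evolve an integer x so that the branch
# `if x * 17 == 221` in a (hypothetical) function under test is taken.
random.seed(1)

def branch_distance(x):
    # Standard branch-distance fitness: 0 means the branch is taken.
    return abs(x * 17 - 221)

def local_search(x):
    # Intensive +/-1 hill climbing; this step is what makes the GA memetic.
    while True:
        better = min((x - 1, x + 1, x), key=branch_distance)
        if better == x:
            return x
        x = better

def memetic_search(pop_size=10, generations=20):
    pop = [random.randint(-500, 500) for _ in range(pop_size)]
    for _ in range(generations):
        pop = [local_search(x) for x in pop]       # refine every individual
        pop.sort(key=branch_distance)
        if branch_distance(pop[0]) == 0:
            return pop[0]                          # branch covered
        parents = pop[:pop_size // 2]
        pop = parents + [random.choice(parents) + random.randint(-50, 50)
                         for _ in range(pop_size - len(parents))]
    return pop[0]

test_input = memetic_search()
```

On this toy predicate the distance landscape is smooth, so the local searcher alone reaches the solution (x = 13); on real programs the landscape has plateaus, which is why the GA's global exploration is still needed.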
135

An Automated Method for Resource Testing

Chen, Po-Kai 27 July 2006 (has links)
This thesis introduces a method that combines automated test data generation with high-volume testing and resource monitoring. High-volume testing repeats test cases many times, simulating extended execution intervals. Such techniques have been found useful for uncovering errors caused by component coordination problems, as well as by system resource consumption (e.g., memory leaks) or corruption. Coupling automated test data generation with high-volume testing and resource monitoring could make this approach more scalable and effective in the field.
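The monitoring side of this idea can be sketched in a few lines (a toy illustration, not the thesis's tooling: the leaky and clean functions are stand-ins for a component under test, and Python's `tracemalloc` stands in for whatever resource monitor is actually used):

```python
import tracemalloc

# High-volume testing sketch: run one test case many times while watching
# heap usage. A leaking case shows steadily growing usage across repetitions;
# a clean one stays flat.
_leak = []

def leaky_case():
    _leak.append(bytearray(10_000))   # "forgets" to release its buffer

def clean_case():
    buf = bytearray(10_000)           # released when the function returns
    del buf

def heap_growth(case, repetitions=200):
    tracemalloc.start()
    case()                            # warm-up call sets the baseline
    first, _ = tracemalloc.get_traced_memory()
    for _ in range(repetitions):
        case()
    last, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return last - first               # bytes gained over the repetitions

growth_leaky = heap_growth(leaky_case)
growth_clean = heap_growth(clean_case)
```

A single execution of `leaky_case` would pass any functional oracle; only the repetition plus the resource measurement exposes the defect, which is the abstract's point.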
136

The Validity Problem of Reverse Engineering Dynamic Systems

Chen, Jian-xun 15 August 2006 (has links)
High-throughput measurement devices for DNA, RNA, and proteins produce large amounts of information-rich data from biological dynamic systems. There is a need to reverse engineer these data to reveal the parameter/structure and behavior relationships implicit in them; ultimately, the complex interactions between the components that make up a system can then be better understood. However, while issues of reverse engineering in bioinformatics, such as the choice of algorithm, the number of temporal samples, and continuous versus discrete input data, are often discussed, the validity problem is rarely addressed. We argue that, since the data available in reality are imperfect, the results of reverse engineering are affected by that imperfection. If this is true, it is important to know how, and to what extent, the results are affected. We choose parameter estimation as our reverse engineering task and develop a novel method to investigate this validity problem. The data we use deviate slightly from the true values at each data point, and we compare the results of reverse engineering with the target parameters; the more error in the data, the more serious the validity problem becomes. Three artificial systems serve as a test bed to demonstrate our approach. The experimental results show that a minor deviation in the data may introduce a large deviation in the estimated parameters, and we conclude that data error must not be ignored in reverse engineering. To understand this phenomenon further, we develop an analytical procedure that examines the dynamics of the systems to see which characteristics contribute to the impact. Sensitivity tests, propagation analysis, and impact factor analysis are applied to the systems, and some qualitative rules describing the relationship between the reverse engineering results and the system dynamics are summarized.
All the findings of this exploratory research need further study for confirmation. Along this line of research, the biological meaning of the parameter variation, and the possible relationship between robustness and that variation, are worth studying in the future; a better reverse engineering algorithm that avoids this validity problem is another topic for future work.
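The experimental design can be made concrete with a toy version (all assumptions mine: a one-parameter exponential-decay model, 2% multiplicative noise, and a grid-search estimator stand in for the thesis's three artificial systems and its estimation method):

```python
import math
import random

# Validity-experiment sketch: generate data from y(t) = exp(-k*t) with a known
# k, perturb each point slightly, re-estimate k, and measure the drift.
random.seed(3)
K_TRUE = 0.5
times = [0.5 * i for i in range(1, 9)]

def model(k, t):
    return math.exp(-k * t)

def estimate(data):
    # Crude grid-search least squares over k in (0, 2]; enough to show the effect.
    grid = [i / 1000 for i in range(1, 2001)]
    return min(grid, key=lambda k: sum((model(k, t) - y) ** 2
                                       for t, y in zip(times, data)))

clean = [model(K_TRUE, t) for t in times]
noisy = [y * (1 + random.gauss(0, 0.02)) for y in clean]   # ~2% data error

k_clean = estimate(clean)        # recovers k exactly
k_noisy = estimate(noisy)        # drifts away from K_TRUE
deviation = abs(k_noisy - K_TRUE)
```

This decaying exponential is well-conditioned, so the drift stays modest; the thesis's finding is precisely that some system dynamics amplify the same small data error into large parameter error, which is what its sensitivity and propagation analyses diagnose.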
137

GA-based Fractal Image Compression and Active Contour Model

Wu, Ming-Sheng 01 January 2007 (has links)
In this dissertation, several GA-based approaches to fractal image compression and the active contour model are proposed. The main drawback of classical fractal image compression is the long encoding time, and two methods are proposed here to address it. First, a schema genetic algorithm (SGA), in which the Schema Theorem is embedded in the GA, is proposed to reduce the encoding time: the genetic operators are adapted according to the Schema Theorem during the evolutionary process performed on the range blocks. We find that this method indeed speeds up the encoder while preserving image quality. Second, based on the self-similarity of natural images, a spatial correlation genetic algorithm (SC-GA) is proposed to reduce the encoding time further. The SC-GA has two stages. The first stage exploits spatial correlations in the image, for both the domain pool and the range pool, to find local optima. If the local optima are not satisfactory, the second stage operates on the whole image to explore more adequate similarities. Not only is encoding accelerated further, but a higher compression ratio is also achieved: because the search space is limited to positions relative to previously matched blocks, fewer bits are needed to record the offset of a domain block than its absolute position. Experimental comparisons of the two methods with full search, traditional GA, and other GA search methods demonstrate that they reduce the encoding time substantially. The main drawback of the traditional active contour model (ACM) for extracting the contour of a given object is that the snake cannot converge into concave regions of the object. An improved ACM algorithm, composed of two stages, is proposed to solve this problem.
In the first stage, the ACM with the traditional energy function guides the snake to converge to the object boundary everywhere except the concave regions. In the second stage, for the control points that remain outside the concave regions, a proper energy template is chosen and added to the external energy; the modified energy function moves the snake into the concave regions, so the object of interest can be completely extracted. The experimental results show that with this method the snake can indeed extract the complete boundary of the given object at very low extra cost. In addition, when the snake has too few control points to extract the object contour precisely, a GA-based ACM algorithm is presented to deal with the problem: the improved ACM algorithm first guides the snake to an approximate object boundary, and then the evolutionary strategy of the GA adds a few control points to the snake to extract the boundary precisely. Experimental results are again provided to show the performance of the method.
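The search that the SGA and SC-GA accelerate can be sketched on a toy example (assumptions: a 1-D "image", a plain elitist GA rather than the dissertation's schema-adapted or spatially correlated operators, and no scale/offset fitting, which a real fractal encoder would also perform):

```python
import random

# For one range block, evolve the position of a domain block (twice the size,
# shrunk by pairwise averaging) whose contracted copy best matches it.
random.seed(2)
signal = [((i * 7) % 13) / 13 for i in range(64)]   # stand-in "image" row
R = 4                                               # range block size

def contract(pos):
    # Average adjacent pairs of the length-2R domain block starting at `pos`.
    return [(signal[pos + 2 * i] + signal[pos + 2 * i + 1]) / 2 for i in range(R)]

def collage_error(pos, range_block):
    # The quantity a fractal encoder minimizes for each range block.
    return sum((a - b) ** 2 for a, b in zip(contract(pos), range_block))

def ga_match(range_block, pop_size=12, generations=25):
    lo, hi = 0, len(signal) - 2 * R
    pop = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: collage_error(p, range_block))
        elite = pop[:pop_size // 2]                 # elitist selection
        pop = elite + [min(hi, max(lo, p + random.randint(-4, 4)))
                       for p in random.choices(elite, k=pop_size - len(elite))]
    return min(pop, key=lambda p: collage_error(p, range_block))

rb = signal[0:R]
best_pos = ga_match(rb)
```

The compressed representation then stores only `best_pos` (or, in the SC-GA, its offset from the previous match, which needs fewer bits) plus the transform coefficients, rather than any pixel data.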
138

A Fast Method with the Genetic Algorithm to Evaluate Power Delivery Networks

Lee, Fu-Tien 20 July 2007 (has links)
In recent high-speed digital circuits, simultaneous switching noise (SSN), also called ground bounce noise (GBN), is induced by the transient currents flowing between the power and ground planes during logic-gate state transitions. To analyze the effect of GBN on power delivery systems effectively and accurately, the impedance of the power/ground planes is an important index: within the operating frequency bandwidth, the power impedance must stay below the target impedance. The typical way to suppress SSN is to add decoupling capacitors, creating a low-impedance path between the power and ground planes. Using the admittance matrix method, we can evaluate the effect of decoupling capacitors mounted on a PCB quickly and accurately, reducing the time spent on empirical or trial-and-error design cycles. To reduce the cost of the decoupling capacitors, a genetic algorithm is employed to optimize their placement for GBN suppression. Decoupling capacitors are not effective in the GHz range because of their inherent lead inductance; an electromagnetic bandgap (EBG) structure can produce a stopband that prevents the noise from spreading at higher frequencies. Finally, decoupling capacitors are combined with an EBG structure, and the genetic algorithm is used to find the optimum placement for SSN suppression.
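The placement optimization has the shape sketched below (heavily simplified: a 1-D grid of candidate pads and a made-up distance-based "impedance" stand in for the admittance matrix method, which this sketch does not implement):

```python
import random

# Choose positions for N decoupling capacitors so that the worst-case
# "impedance" over a set of probe points is minimized.
random.seed(4)
PADS = 20            # candidate mounting positions on the board
N_CAPS = 3
probes = [2, 9, 16]  # points where the impedance is evaluated

def peak_impedance(placement):
    # Toy model: impedance at a probe falls with distance to the nearest cap.
    return max(min(abs(p - c) for c in placement) for p in probes)

def ga_place(pop_size=20, generations=40):
    pop = [random.sample(range(PADS), N_CAPS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=peak_impedance)
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            pool = list(set(p1) | set(p2))           # uniform-style crossover
            child = random.sample(pool, N_CAPS) if len(pool) >= N_CAPS else p1[:]
            if random.random() < 0.2:                # mutate one position
                child[random.randrange(N_CAPS)] = random.randrange(PADS)
            children.append(child)
        pop = elite + children
    return min(pop, key=peak_impedance)

best_placement = ga_place()
```

In the actual method, evaluating an individual means computing the power/ground impedance from the admittance matrix with that capacitor set mounted and comparing it against the target impedance over the operating bandwidth.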
139

A Genetic Algorithm For Structural Optimization

Taskinoglu, Evren Eyup 01 December 2006 (has links) (PDF)
In this study, a design procedure incorporating a genetic algorithm (GA) is developed for the optimization of structures. The objective function considered is the total weight of the structure, which is minimized subject to displacement and strength requirements. To evaluate the design constraints, finite element analyses are performed either with conventional finite element solvers (i.e., MSC/NASTRAN®) or with in-house codes. The application of the algorithm is shown through a number of design examples. Several strategies for reproduction, mutation, and crossover are tested, and several conclusions drawn from the research results are presented.
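The weight-minimization setup can be illustrated on a one-bar toy structure (my assumption, replacing the finite element analysis: a tension bar of length L under force F, whose weight is rho·A·L and whose tip displacement is F·L/(A·E), with the GA searching the cross-section area A; units and numbers are illustrative):

```python
import random

# Minimize structural weight subject to a displacement limit, with the
# constraint enforced through a heavy penalty on the fitness.
random.seed(5)
L, F, E, RHO = 1.0, 1000.0, 2.1e11, 7850.0   # m, N, Pa, kg/m^3
DELTA_MAX = 1e-6                              # allowed displacement, m

def displacement(area):
    return F * L / (area * E)

def penalized_weight(area):
    weight = RHO * area * L
    violation = max(0.0, displacement(area) - DELTA_MAX)
    return weight + 1e12 * violation          # infeasible designs score badly

def ga_min_area(pop_size=30, generations=80):
    pop = [random.uniform(1e-6, 1e-2) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=penalized_weight)
        elite = pop[:pop_size // 2]
        pop = elite + [max(1e-8, random.choice(elite) * random.gauss(1.0, 0.1))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=penalized_weight)

best_area = ga_min_area()
```

In the study's procedure, `displacement` would instead come from a finite element solve of the whole structure, and the design vector would hold many member sizes rather than one area; the penalty-driven fitness is the common structure.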
140

A Genetic Algorithm For 2d Shape Optimization

Chen, Weihang 01 August 2008 (has links) (PDF)
In this study, an optimization code based on genetic algorithms coupled with finite element modeling has been developed for the shape optimization of plane stress problems. In genetic algorithms, constraints are mostly handled through penalty functions, which penalize infeasible solutions by reducing their fitness values in proportion to the degree of constraint violation. Here, an improved GA penalty scheme is used: the evaluation function gives infeasible individuals fitness values that reflect how near they are to the feasible region. The objective function in this study is the area of the structure, which is minimized subject to the von Mises stress criterion. To minimize the objective function, one-point crossover with a roulette-wheel selection approach is used. Optimum dimensions for four problems available in the literature have been obtained with the developed code. The algorithm is tested under several strategies, such as different initial population sizes and different mutation and crossover probabilities. The results are compared with those in the literature, and conclusions are drawn accordingly.
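The two operators named above have standard textbook forms; the sketch below shows them on bit-string individuals (the shape-optimization fitness itself is not reproduced, so any non-negative fitness can be plugged into the selection routine):

```python
import random

# Roulette-wheel selection (probability proportional to fitness) and
# one-point crossover, as used by the study's GA.
random.seed(6)

def roulette_select(pop, fitnesses):
    # Spin a wheel whose slice widths equal the (non-negative) fitness values.
    total = sum(fitnesses)
    r = random.uniform(0, total)
    acc = 0.0
    for ind, f in zip(pop, fitnesses):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

def one_point_crossover(p1, p2):
    # Cut both parents at the same interior point and swap the tails.
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

parents = ["110011", "001100"]
child_a, child_b = one_point_crossover(*parents)
```

With a penalty scheme like the one described above, the fitness fed to `roulette_select` is the penalized objective, so near-feasible individuals keep a realistic chance of reproducing while strongly infeasible ones are crowded out.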
