1 |
Reducing the computational effort associated with evolutionary optimisation in single component design. Vekeria, Harish Dhanji. January 1999.
The dissertation presents innovative Evolutionary Search (ES) methods for reducing the computational expense associated with the optimisation of high-dimensional design spaces. The objective is to develop a semi-automated system that successfully negotiates complex search spaces. Such a system would be highly desirable to a human designer, providing optimised design solutions in realistic time. The design domain represents a real-world industrial problem concerning the optimal material distribution on the underside of a flat roof tile with varying load and support conditions. The designs utilise a large number of design variables (circa 400). Because detailed evaluation relies on computationally expensive analysis, such as finite element analysis, the number of calls to the evaluation model must be kept to a minimum if "good" design solutions are to be produced within an acceptable period of time. The objective, therefore, is to minimise the number of calls to the analysis tool whilst still achieving an optimal design solution. To minimise the number of model evaluations for detailed shape optimisation, several evolutionary algorithms are investigated. The better-performing algorithms are combined with multi-level search techniques which have been developed to further reduce the number of evaluations and improve the quality of design solutions. Multi-level techniques utilise a number of levels of design representation; the solutions of the coarse representations are injected into the more detailed designs for fine-grained refinement. The techniques developed include Dynamic Shape Refinement (DSR), Modified Injection Island Genetic Algorithm (MiiGA) and Dynamic Injection Island Genetic Algorithm (DiiGA). The multi-level techniques are able to handle large numbers of design variables (i.e. > 100). Based on the performance characteristics of the individual algorithms and multi-level search techniques, distributed search techniques are proposed. These techniques utilise different evolutionary strategies in a multi-level environment and were developed as a way of further reducing computational expense and improving design solutions. The results indicate considerable potential for a significant reduction in the number of evaluation calls during evolutionary search. In general, this allows more efficient integration with computationally intensive analytical techniques during detailed design, and contributes significantly to those preliminary stages of the design process where a greater degree of analysis is required to validate results from more simplistic preliminary design models.
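To make the coarse-to-fine injection idea concrete, the following minimal sketch (not the thesis's DSR/MiiGA/DiiGA code) evolves a coarse binary material layout with a simple genetic algorithm and then upsamples the best coarse individuals to seed a finer-level population; the evaluate function, grid sizes and GA parameters are invented placeholders standing in for the expensive finite element evaluation and the actual design representation.

```python
# Minimal sketch of the coarse-to-fine injection idea: evolve a coarse binary
# material layout, then upsample the best individuals to seed a finer level.
# Not the thesis's DSR/MiiGA/DiiGA implementation; `evaluate` is a placeholder
# standing in for the expensive finite element analysis call.
import random

def evaluate(layout):
    # placeholder objective (assumption): minimise the amount of material used
    return sum(sum(row) for row in layout)

def upsample(layout):
    # inject a coarse solution into a grid twice as fine in each direction
    return [[cell for cell in row for _ in (0, 1)] for row in layout for _ in (0, 1)]

def simple_ga(population, generations=50, mutation_rate=0.02):
    for _ in range(generations):
        population.sort(key=evaluate)        # each generation costs len(population) evaluations
        parents = population[: len(population) // 2]
        children = [[[1 - c if random.random() < mutation_rate else c for c in row]
                     for row in parent] for parent in parents]
        population = parents + children
    return sorted(population, key=evaluate)

# search the coarse (10 x 10) representation first, then refine at 20 x 20
coarse_pop = [[[random.randint(0, 1) for _ in range(10)] for _ in range(10)] for _ in range(20)]
ranked_coarse = simple_ga(coarse_pop)
fine_seeds = [upsample(ind) for ind in ranked_coarse[:5]]        # injection step
fine_pop = fine_seeds + [[[random.randint(0, 1) for _ in range(20)] for _ in range(20)]
                         for _ in range(15)]
best_fine = simple_ga(fine_pop)[0]           # fine-grained refinement of the injected designs
```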
|
2 |
Job Search Strategies and Wage Effects for Immigrants. Olli Segendorf, Åsa. January 2005.
Recruiting Through Networks - Wage Premiums and Rewards to Recommenders
This paper examines the firm's use of recommenders in its recruiting process. In the model, recommenders possess personal information about the worker's ability and about the workplace. In view of this private information, the firm may reward recommenders for good recruiting, thus using recommenders as a screening device. In equilibrium the expected skill of a worker is higher if recruitment has occurred through a recommender rather than through the market, but there is no wage premium. Swedish survey data supports the absence of a wage premium for recommended workers. It has not been possible to test the expected skill or the firm's reward policy vis-à-vis the recommender.
Job Search by Immigrants in Sweden
This paper analyses the job search strategies of immigrants born outside Europe and compares these with the search strategies of the native population. The analysis uses unique Swedish data gathered during 1998. Two clear patterns can be traced in the empirical analysis: immigrants search more intensively than natives, and this greater search intensity is a requisite for getting a job. Specifically, the first analysis shows that immigrants who got jobs were likely to have used networks or direct contact with employers to a greater extent than natives. Immigrants who got jobs had submitted more applications and spent more time on job search than natives, while those who did not get jobs had not spent more time on job search than natives. The fourth and last analysis looks at the number of methods used in job search. Immigrants who left unemployment had not used more methods than natives. On the other hand, immigrants who remained unemployed had used significantly more methods than natives, indicating that it is not necessarily productive to use too many methods.
Wage Effects of Search Methods for Immigrants and Natives in Sweden
Using unique cross-section survey data collected in 1998, this study examines whether successful job-search methods differ between natives and immigrants from outside Europe, and whether there is a wage difference between the two groups associated with the search method used. It is found that those individuals from outside Europe who got jobs did relatively better when using formal methods than when using informal ones. Next, a wage analysis has been performed, which shows that there is an overall wage discount for those born outside Europe. The discount is larger when using informal methods rather than formal ones. To explore this further, the informal method measure is divided into two parts: one for contacts through friends and family and one for contacts with the employer. The penalty for immigrants from outside Europe using an informal method as a successful job-search device is partly explained by contact with the employer, suggesting that the penalty for using informal methods has been underestimated in previous studies. An attempt has also been made to control for the effect of unobservable characteristics on wages, but this did not have any significant impact.
|
3 |
Heuristic multi-sequence search methods. Jochumsson, Thorvaldur. January 2001.
With the increasing size of sequence databases, heuristic search approaches have become necessary. Hidden Markov models are the best-performing search methods known today with respect to discriminative power, but their time complexity makes them impractical when searching large sequence databases. This report presents and experimentally validates heuristic algorithms that reduce the search space before searching with the traditional search algorithms of hidden Markov models. The results of the validation show that the heuristic search algorithms speed up the searches without decreasing their discriminative power.
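The two-stage idea (a cheap filter followed by the expensive HMM scoring) can be sketched as follows. This is a generic illustration rather than the report's actual heuristics; hmm_score, the k-mer filter and the keep fraction are assumed placeholders.

```python
# Generic two-stage search sketch (illustration only, not the report's algorithms):
# a cheap k-mer filter prunes the database before the expensive HMM scoring
# (forward/Viterbi) is applied to the surviving sequences.
def kmer_set(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def heuristic_score(query, target, k=3):
    # fraction of query k-mers also present in the target; fast stand-in for full scoring
    q = kmer_set(query, k)
    return len(q & kmer_set(target, k)) / max(len(q), 1)

def hmm_score(model, seq):
    # placeholder for the expensive forward/Viterbi computation of a profile HMM
    return 0.0

def filtered_search(model, consensus, database, keep=0.05):
    ranked = sorted(database, key=lambda s: heuristic_score(consensus, s), reverse=True)
    survivors = ranked[: max(1, int(keep * len(ranked)))]
    return [(seq, hmm_score(model, seq)) for seq in survivors]   # expensive step on few sequences
```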
|
4 |
Water quality modeling and rainfall estimation: a data driven approach. Roz, Evan Phillips. 01 July 2011.
Water is vital to man, and its quality is a serious topic of concern. Addressing sustainability issues requires new understanding of water quality and water transport. Past research in hydrology has focused primarily on physics-based models to explain hydrological transport and water quality processes. The widespread use of in situ hydrological instrumentation has provided researchers with a wealth of data for analysis, and the use of data mining for data-driven modeling is therefore warranted. In fact, this relatively new field of hydroinformatics makes use of the vast data collection and communication networks that are prevalent in the field of hydrology. In this thesis, a data-driven approach for analyzing water quality is introduced. Improvements in data collection and information systems allow large volumes of data to be gathered. Although improvements in data collection systems have given researchers ample information about various systems, these data must be used in conjunction with novel data-mining algorithms to build models and recognize patterns in large data sets. Since the mid-1990s, data mining has been successfully used for model extraction and for describing various phenomena of interest.
|
5 |
Efficient Solution Of Optimization Problems With Constraints And/or Cost Functions Expensive To Evaluate. Kurtdere, Ahmet Gokhan. 01 January 2010.
There are many optimization problems motivated by engineering applications whose constraints and/or cost functions are computationally expensive to evaluate. What is more, derivative information is usually not available, or only available at considerable cost. For that reason, classical optimization methods based on derivatives are not applicable. This study presents a framework, based on methods available in the literature, to overcome this important problem. First, a penalized model is constructed in which the violation of the constraints is added to the cost function. The model is optimized with the help of stochastic approximation algorithms until a point satisfying the constraints is obtained. Then, a set of sample points satisfying the constraints is obtained by taking advantage of sampling strategies based on direct search algorithms. In this context, two search direction estimation methods are applicable: one based on the convex hull of the sample point set and one based on its estimated radius of curvature. The point set is used to create a barrier that imposes a large cost on points near the boundary. The aim is to obtain convergence to local optima along the most promising direction with the help of direct search methods. As regards the evaluation of the cost function, there are two directions to follow: (a) gradient-based methods and (b) non-gradient methods. In gradient-based methods, the gradient is approximated using so-called stochastic approximation algorithms. In the latter case, a sampling strategy based on direct search algorithms is used. The study concludes by applying all these ideas to the solution of complicated test problems in which the cost and the constraint functions are costly to evaluate.
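A minimal sketch of the penalty-plus-stochastic-approximation step is given below; the cost function, constraint and gain sequences are invented placeholders, and the direct-search and barrier stages of the framework are not shown. The gradient is estimated in the simultaneous-perturbation (SPSA) manner, one possible stochastic approximation scheme, using only two evaluations of the penalized model per iteration.

```python
# Minimal sketch of the penalty-plus-stochastic-approximation step (illustrative only;
# the cost function, constraint and gain sequences below are invented placeholders).
import random

def cost(x):
    # expensive black-box cost function (placeholder)
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def constraint_violation(x):
    # placeholder constraint g(x) = x0 + x1 - 1 <= 0; returns the amount of violation
    return max(0.0, x[0] + x[1] - 1.0)

def penalized(x, rho=100.0):
    # penalized model: the constraint violation is added to the cost function
    return cost(x) + rho * constraint_violation(x) ** 2

def spsa(x, iters=200, a=0.1, c=0.1):
    # simultaneous-perturbation gradient estimate: two evaluations of the penalized
    # model per iteration regardless of dimension, which matters when evaluations are expensive
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101
        delta = [random.choice((-1.0, 1.0)) for _ in x]
        plus = [xi + ck * di for xi, di in zip(x, delta)]
        minus = [xi - ck * di for xi, di in zip(x, delta)]
        diff = penalized(plus) - penalized(minus)
        x = [xi - ak * diff / (2.0 * ck * di) for xi, di in zip(x, delta)]
    return x

print(spsa([0.0, 0.0]))
```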
|
6 |
The most appropriate process scheduling for Semiconductor back-end Assemblies--Application for Tabu Search. Tsai, Yu-min. 25 July 2003.
Wire bonders and molding machines are the most costly pieces of equipment in the investment for IC packaging, and packaging quality, cost and delivery are the main concerns in the assembly processes. An inappropriate process schedule may result in wasted resources and assembly bottlenecks. Managers must allocate resources appropriately to adapt to changing products and production lines.
We introduce several heuristic search methods, especially Tabu search, which is one of the most popular heuristic search methods. A tabu list is used to record the most recent moves and to avoid revisiting earlier paths or cycling. The search starts from an initial solution and keeps moving to the best neighbouring solution that is not blocked by the tabu list. The iterations are repeated until the termination condition is reached.
At the end of the report, an example is designed to find the best wire bonding and molding schedule by Tabu search, and to verify that the output volume is greater than that obtained with FIFO scheduling over the same production period.
Tabu search is thereby confirmed to be effective for the flexible flow shop.
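A bare-bones Tabu search loop of the kind described above might look like the following sketch; the single-machine total-completion-time objective is an invented placeholder rather than the wire-bonding/molding flow-shop model, and the tabu tenure, swap moves and termination criterion are assumptions.

```python
# Bare-bones Tabu search over job permutations (illustration only; the objective
# below is a placeholder, not the thesis's wire-bonding/molding flow-shop model).
import random
from collections import deque

def total_completion_time(schedule, proc_times):
    # placeholder objective: sum of job completion times on a single machine
    t, total = 0, 0
    for job in schedule:
        t += proc_times[job]
        total += t
    return total

def tabu_search(proc_times, iters=200, tenure=7):
    n = len(proc_times)
    current = list(range(n))
    random.shuffle(current)
    best, best_cost = current[:], total_completion_time(current, proc_times)
    tabu = deque(maxlen=tenure)                 # tabu list of the most recent swap moves
    for _ in range(iters):
        candidates = []
        for i in range(n - 1):
            for j in range(i + 1, n):
                move = (current[i], current[j])
                neighbour = current[:]
                neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
                cost = total_completion_time(neighbour, proc_times)
                # aspiration: a tabu move is allowed if it beats the best found so far
                if move not in tabu or cost < best_cost:
                    candidates.append((cost, move, neighbour))
        if not candidates:
            break
        cost, move, current = min(candidates, key=lambda c: c[0])   # best admissible neighbour
        tabu.append(move)
        if cost < best_cost:
            best, best_cost = current[:], cost
    return best, best_cost

print(tabu_search([5, 3, 8, 2, 7, 4]))
```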
|
7 |
Design Validation of RTL Circuits using Binary Particle Swarm Optimization and Symbolic Execution. Puri, Prateek. 05 August 2015.
Over the last two decades, chip design has been conducted at the register transfer level (RTL) using hardware description languages (HDLs) such as VHDL and Verilog. Modeling at the behavioral level not only allows for better representation and understanding of the design, but also allows for encapsulation of the sub-modules, thus increasing productivity. Despite these benefits, validating an RTL design is not necessarily easier. Today, design validation is considered one of the most time- and resource-consuming aspects of hardware design, and the costs associated with late detection of bugs can be enormous. Together with stringent time-to-market factors, the need to guarantee the correct functionality of the design is more critical than ever.
The work done in this thesis tackles the problem of RTL design validation and presents new frameworks for functional test generation. Branch coverage is used as the metric to evaluate the quality of the generated test stimuli. The initial effort for test generation utilized simulation-based techniques because of their scalability with design size and ease of use. However, simulation-based methods work on input spaces rather than the DUT's state space and often fail to traverse very narrow search paths in large input spaces. To counter this problem and enhance the ability of the test generation framework, later work in this thesis statically extracts certain design semantics and mines recurrence relationships between different variables. Information such as relations among variables and loops can be extremely valuable from a test generation point of view. The simulation-based method is hybridized with a Z3-based symbolic backward execution engine, with feedback among the different stages. The hybridized method performs loop abstraction and is able to traverse narrow design paths without performing costly circuit analysis or explicit loop unrolling. Structurally and functionally unreachable branches are also identified during the process of test generation. Experimental results show that the proposed techniques are able to achieve high branch coverage on several ITC'99 benchmark circuits and their modified variants, with significant speedup and reduction in sequence length.
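The binary particle swarm component can be illustrated by the following minimal sketch; the fitness function is a placeholder for the branch-coverage feedback a simulator would return for a generated stimulus vector, and the swarm parameters (w, c1, c2) are conventional assumed values rather than those used in the thesis.

```python
# Minimal binary PSO sketch (illustrative; `fitness` is a placeholder for the
# branch-coverage feedback a simulator would return for the generated stimuli).
import math, random

def fitness(bits):
    # placeholder: stands in for the number of branches covered by this stimulus
    return sum(bits)

def binary_pso(n_bits=32, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # sigmoid maps the velocity to the probability of the bit being 1
                pos[i][d] = 1 if random.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

print(binary_pso())
```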
|
8 |
On Methods for Discrete Topology Optimization of Continuum Structures. Werme, Mats. January 2008.
This thesis consists of an introduction and seven appended papers. The purpose of the introduction is to give an overview of the field of topology optimization of discretized load-carrying continuum structures. It is assumed that the design domain has been discretized by the finite element method and that the design variable vector is a binary vector indicating presence or absence of material in the various finite elements. Common to all papers is the incorporation of von Mises stresses in the problem formulations.
In the first paper the design variables are binary, but it is assumed that the void structure can actually take some load. This is equivalent to adding a small positive value, epsilon, to all design variables, both those that are void and those that are filled with material. With this small positive lower bound the stiffness matrix becomes positive definite for all designs. If only one element is changed (from material to void or from void to material), the new global stiffness matrix is just a low-rank modification of the old one, and thus the Sherman-Morrison-Woodbury formula can be used to compute the displacements in the neighbouring designs efficiently. These efficient sensitivity calculations can then be applied in the context of a neighbourhood search method. Since the computed displacements are exact in the 1-neighbourhood (when one design variable is changed), the neighbourhood search method will find a local optimum with respect to the 1-neighbourhood.
The second paper presents globally optimal zero-one solutions to some small-scale topology optimization problems defined on discretized continuum design domains. The idea is that these solutions can be used as benchmarks when testing new algorithms for finding pure zero-one solutions to topology optimization problems.
In the third paper the results from the first paper are extended to the case where there is no positive lower bound epsilon. In this case the stiffness matrix is no longer positive definite, which means that the Sherman-Morrison-Woodbury formula can no longer be applied. Changing one or two binary design variables to their opposite binary values will still result in a low-rank change, but the size of the reduced stiffness matrix will change with the design. It turns out, however, that it is possible to compute the effect of these low-rank changes efficiently also without the positive lower bound. These efficient sensitivity calculations can then be used in the framework of a neighbourhood search method. In this case the complete 1-neighbourhood and a subset of the 2-neighbourhood are investigated in the search for a locally optimal solution.
In the fourth paper the sensitivity calculations developed in the third paper are used to generate first and partial second order approximations of the nonlinear functions usually present in topology optimization problems. These approximations are then used to generate subproblems in two different sequential integer programming methods (SLIP and SQIP, respectively). Both methods generate a sequence of iteration points that can be proven to converge to a local optimum with respect to the 1-neighbourhood. The methods are tested on some different topology optimization problems.
The fifth paper demonstrates that the SLIP method developed in the previous paper can also be applied to the mechanism design problem with stress constraints. In order to generate the subproblems quickly, small displacements are assumed, which implies that the efficient sensitivity calculations derived in the third paper can be used. The numerical results indicate that the method can be used to lower the stresses and still obtain a functional mechanism.
In the sixth paper the SLIP method developed in the fourth paper is used as a post-processor to obtain locally optimal zero-one solutions, starting from a rounded solution to the corresponding continuous problem. The numerical results indicate that the method can perform well as a post-processor.
The seventh paper is a theoretical paper that investigates the validity of the commonly used positive lower bound epsilon on the design variables when stating and solving topology optimization problems defined on discretized load-carrying continuum structures. The main result presented here is that an optimal "epsilon-1" solution to an "epsilon-perturbed" discrete minimum weight problem with constraints on compliance, von Mises stresses and strain energy densities is optimal, after rounding to zero-one, to the corresponding "unperturbed" discrete problem. This holds if the constraints in the perturbed problem are carefully defined and epsilon>0 is sufficiently small.
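The low-rank update exploited in the first paper can be illustrated with the following small sketch; the stiffness matrix, the element contribution and the dimensions are random stand-ins rather than an assembled finite element model, and the update is shown only for the positive-definite (epsilon-regularised) case where the Sherman-Morrison-Woodbury formula applies.

```python
# Sketch of reusing solves with the old stiffness matrix K after a single-element
# change, via the Sherman-Morrison-Woodbury formula (illustrative: random SPD
# matrices stand in for the assembled stiffness matrix and element contribution).
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 8                               # global DOFs and DOFs of one element (assumed sizes)
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)                 # SPD stand-in for the epsilon-regularised stiffness
f = rng.standard_normal(n)

u_old = np.linalg.solve(K, f)               # factorisation/solve done once for the current design

# Toggling one element adds a rank-m term: K_new = K + U @ U.T
U = np.zeros((n, m))
U[:m, :m] = np.eye(m)                       # toy element contribution acting on the first m DOFs
K_new = K + U @ U.T

# Sherman-Morrison-Woodbury: only solves with K and one small m x m system are needed
K_inv_U = np.linalg.solve(K, U)
small = np.eye(m) + U.T @ K_inv_U
u_new = u_old - K_inv_U @ np.linalg.solve(small, U.T @ u_old)

print(np.allclose(u_new, np.linalg.solve(K_new, f)))    # True: same displacements, cheaper update
```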
|