481

Manual micro-optimizations in C++: An investigation of four micro-optimizations and their usefulness

Ekström, Viktor January 2019
Optimization is essential for utilizing the full potential of the computer. There are several different approaches to optimization, including so-called micro-optimizations. Micro-optimizations are local adjustments that do not change an algorithm. This study investigates four micro-optimizations: loop interchange, loop unrolling, cache loop end value, and iterator incrementation, to see when they provide a performance benefit in C++. This is investigated through an experiment in which the running time of test cases with and without each micro-optimization is measured and then compared. Measurements are made on two compilers. The results show several circumstances where micro-optimizations provide a benefit. However, the benefit can vary greatly between compilers even when the same code is used; a micro-optimization that improves performance under one compiler may degrade performance under another. This shows that understanding the compiler, and measuring performance, remains important.
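Of the four micro-optimizations studied, caching the loop end value is the simplest to show in isolation. A minimal C++ sketch of the idea (illustrative code, not taken from the thesis):

```cpp
#include <cstddef>
#include <string>

// Baseline: s.size() appears in the loop condition, so it may be
// re-evaluated on every iteration if the compiler cannot prove the
// loop body leaves the string unchanged.
std::size_t countSpaces(const std::string& s) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        if (s[i] == ' ') ++count;
    }
    return count;
}

// Micro-optimized: the end value is computed once and cached in a
// local variable, removing the repeated call from the loop condition.
std::size_t countSpacesCachedEnd(const std::string& s) {
    std::size_t count = 0;
    const std::size_t end = s.size();
    for (std::size_t i = 0; i < end; ++i) {
        if (s[i] == ' ') ++count;
    }
    return count;
}
```

Whether the cached version actually wins depends on the compiler: an optimizer that can prove the loop body leaves `s` unchanged will hoist the `size()` call on its own, which is precisely the compiler dependence the study measures.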
482

A structural design methodology based on multiobjective and manufacturing-oriented topology optimization

Sato, Yuki 25 March 2019
Kyoto University / Doctor of Philosophy (Engineering), Graduate School of Engineering, Department of Mechanical Engineering and Science / Examining committee: Prof. Shinji Nishiwaki (chair), Assoc. Prof. Kazuhiro Izui, Prof. Tetsuo Sawaragi, Prof. Atsushi Matsubara / DFAM
483

A Metamodel based Multiple Criteria Optimization via Simulation Method for Polymer Processing

Villarreal-Marroquin, Maria G. January 2012
No description available.
484

Whitespace Exploration

Daniel, Jason Lloyd 01 December 2017
As engineering systems grow in complexity, the design tools we use must also evolve, allowing decision makers to efficiently ask questions of their models and obtain meaningful answers. The process of whitespace exploration has recently been developed to aid in engineering design and to provide insight into a design space where traditional design exploration methods may fail. To further the research and development of whitespace exploration algorithms, a software package called Thalia has been created to allow automated data collection and experimentation with the whitespace exploration methodology. In this work, whitespace exploration is defined and the current state of the art of whitespace exploration algorithms is reviewed. The whitespace exploration library Thalia is described in detail, along with a collection of benchmarking cases. A set of experiments on the benchmark cases is run and analyzed to further understand the behavior of the algorithm and to establish initial performance results that can later be used for comparison to aid in improving the methodology.
485

A Scaled Gradient Descent Method for Unconstrained Optimization Problems With A Priori Estimation of the Minimum Value

D'Alves, Curtis January 2017
A scaled gradient descent method, intended to be competitive with applications of conjugate gradient, using a priori estimation of the minimum value / This research proposes a novel method of improving the gradient descent method in an effort to be competitive with applications of the conjugate gradient method while reducing computation per iteration. Iterative methods for unconstrained optimization have found widespread application in digital signal processing for large inverse problems, such as the use of conjugate gradient for parallel image reconstruction in MR imaging. In these problems, very good estimates of the minimum value of the objective function can be obtained by estimating the noise variance in the signal, or by using additional measurements. The proposed method uses an estimate of the minimum to develop a scaling for gradient descent at each iteration, thus avoiding the need for a computationally expensive line search. A sufficient condition for convergence, with proof, is provided for the method, as well as an analysis of convergence rates for problems of varying condition. The method is compared against the gradient descent and conjugate gradient methods. A method with a computationally inexpensive scaling factor is achieved that converges linearly for well-conditioned problems. The method is tested on difficult non-linear problems against gradient descent, where it proves unsuccessful without augmentation by a line search; with line search augmentation, however, the method still outperforms gradient descent in iteration count. The method is also benchmarked against conjugate gradient for linear problems, where it achieves similar convergence for well-conditioned problems even without a line search. / Thesis / Master of Science (MSc)
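The abstract does not spell out the scaling rule, but the classical Polyak step size, which likewise uses an estimate of the minimum value to fix the step length in closed form without a line search, illustrates the general idea. A hedged sketch (assumed signatures, not the thesis's implementation):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Gradient descent with a Polyak-style step: given an a priori
// estimate fMin of the minimum objective value, the step length
//   alpha_k = (f(x_k) - fMin) / ||grad f(x_k)||^2
// is available in closed form, so no line search is required.
std::vector<double> scaledGradientDescent(
    const std::function<double(const std::vector<double>&)>& f,
    const std::function<std::vector<double>(const std::vector<double>&)>& grad,
    std::vector<double> x, double fMin, int maxIter, double tol) {
    for (int k = 0; k < maxIter; ++k) {
        const std::vector<double> g = grad(x);
        double gNormSq = 0.0;
        for (const double gi : g) gNormSq += gi * gi;
        if (gNormSq < tol * tol) break;  // gradient is small: converged
        const double alpha = (f(x) - fMin) / gNormSq;
        for (std::size_t i = 0; i < x.size(); ++i) x[i] -= alpha * g[i];
    }
    return x;
}
```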
486

Combined Design and Dispatch Optimization for Nuclear-Renewable Hybrid Energy Systems

Hill, Daniel Clyde 08 December 2023
Reliable, affordable access to electrical power is a requirement for almost all aspects of developed societies. Challenges associated with reducing carbon emissions have led to growing interest in nuclear-renewable hybrid energy systems (N-RHES). Much work has already been done in suggesting and analyzing various N-RHES using a variety of optimization techniques and assumptions. This work builds upon previous techniques for simultaneous combined design and dispatch optimization (CDDO) for hybrid energy systems (HES). The first contribution of this work is the development and application of a sensitivity analysis tailored to the combined design and dispatch optimization problem. This sensitivity analysis covers uncertainty in design parameters, time series, and dispatch horizon lengths. The result is deeper insight into which sources of uncertainty are most important to account for and how the uncertainty around these sources can be quantified. The second contribution of this work is a novel multi-scale optimization algorithm for combined HES design and dispatch optimization. This algorithm supports optimization of nonlinear models over very long time horizons. The method is based on a multi-dimensional distribution of the optimal capacities for a system, as determined by a large number of combined design and dispatch optimization problems, each covering a subset of the complete time horizon. The method shows good agreement with the direct solution for multiple example systems and is then used to solve a problem with a dispatch horizon 112.5 times longer than is solvable directly. The third contribution of this work is the application of the novel multi-scale method to three HES. Each of the application systems is used to demonstrate the strengths, validity, and applicability of the developed algorithm across a wide range of possible HES/N-RHES designs.
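As a rough illustration of the multi-scale idea, the sketch below solves many sub-horizon design problems and summarizes the resulting capacity distribution. The solver signature, the component set, and the use of a component-wise median are all assumptions; the thesis works with the full multi-dimensional distribution of optimal capacities.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Optimal component capacities from one design-and-dispatch subproblem
// (hypothetical component set, chosen for illustration).
struct Capacities { double nuclear, wind, storage; };

// Solve a combined design and dispatch problem on each window of the
// horizon, then summarize the distribution of optimal capacities. The
// injected solver stands in for the expensive nonlinear program; the
// component-wise median is one simple summary statistic.
Capacities multiScaleDesign(
    const std::function<Capacities(int, int)>& solveCDDO,
    int totalHours, int windowHours) {
    std::vector<double> n, w, s;
    for (int t = 0; t + windowHours <= totalHours; t += windowHours) {
        const Capacities c = solveCDDO(t, windowHours);
        n.push_back(c.nuclear);
        w.push_back(c.wind);
        s.push_back(c.storage);
    }
    auto median = [](std::vector<double> v) {
        std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
        return v[v.size() / 2];
    };
    return {median(n), median(w), median(s)};
}
```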
487

Analytical and experimental comparison of deterministic and probabilistic optimization

Ponslet, Eric 06 June 2008
The probabilistic approach to design optimization has received increased attention in the last two decades. It is widely recognized that such an approach should lead to designs that make better use of resources than those obtained with the classical deterministic approach, by distributing safety among the different components and/or failure modes of a system in an optimal manner. However, probabilistic models rely on a number of assumptions regarding the magnitude of the uncertainties, their distributions, correlations, etc. In addition, modelling errors and approximate reliability calculations (first order methods, for example) introduce uncertainty in the predicted system reliability. Because of these inaccuracies, it is not clear if a design obtained from probabilistic optimization will really be more reliable than a design based on deterministic optimization. The objective of this work is to provide a partial answer to this question through laboratory experiments; such experimental validation is not currently available in the literature. A cantilevered truss structure is used as a test case. First, the uncertainties in stiffness and mass properties of the truss elements are evaluated from a large number of measurements. The transmitted scatter in the natural frequencies of the truss is computed and compared to experimental estimates obtained from measurements on 6 realizations of the structure. The experimental results are in reasonable agreement with the predictions, although the magnitude of the transmitted scatter is extremely small. The truss is then equipped with passive viscoelastic tuned dampers for vibration control. The controlled structure is optimized by selecting locations for the dampers and for tuning masses added to the truss. The objective is to satisfy upper limits on the acceleration at given points on the truss for a specified excitation. The properties of the dampers are the primary sources of uncertainties. Two optimal designs are obtained from deterministic and probabilistic optimizations; the deterministic approach maximizes safety margins, while the probability of failure (i.e. exceeding the acceleration limit) is minimized in the probabilistic approach. The optimizations are performed with genetic algorithms. The predicted probability of failure of the optimum probabilistic design is less than half that of the deterministic optimum. Finally, optimal deterministic and probabilistic designs are compared in the laboratory. Because small differences in failure rates between two designs are not measurable with a reasonable number of tests, we use anti-optimization to identify a design problem that maximizes the contrast in probability of failure between the two approaches. The anti-optimization is also performed with a genetic algorithm. For the problem identified by the anti-optimization, the probability of failure of the optimum probabilistic design is 25 times smaller than that of the deterministic design. The rates of failure are then measured by testing 29 realizations of each optimum design. The results agree well with the predictions and confirm the greater reliability of the probabilistic design. However, the probabilistic optimum is shown to be very sensitive to modelling errors. This sensitivity can be reduced by including the modelling errors as additional uncertainties in the probabilistic formulation. / Ph. D.
488

Design Optimization of Fuzzy Logic Systems

Dadone, Paolo 29 May 2001
Fuzzy logic systems are widely used for control, system identification, and pattern recognition problems. In order to maximize their performance, it is often necessary to undertake a design optimization process in which the adjustable parameters defining a particular fuzzy system are tuned to maximize a given performance criterion. Some data to approximate are commonly available and yield what is called the supervised learning problem. In this problem we typically wish to minimize the sum of the squares of errors in approximating the data. We first introduce fuzzy logic systems and the supervised learning problem that, in effect, is a nonlinear optimization problem that at times can be non-differentiable. We review the existing approaches and discuss their weaknesses and the issues involved. We then focus on one of these problems, i.e., non-differentiability of the objective function, and show how current approaches that do not account for non-differentiability can diverge. Moreover, we also show that non-differentiability may have an adverse practical impact on algorithmic performance. We reformulate both the supervised learning problem and piecewise linear membership functions in order to obtain a polynomial or factorable optimization problem. We propose the application of a global nonconvex optimization approach, namely, a reformulation and linearization technique. The expanded problem dimensionality makes this approach infeasible at this time, even though the reformulation, along with the proposed technique, still bears theoretical interest. Moreover, some future research directions are identified. We propose a novel approach to step-size selection in batch training. This approach uses a limited memory quadratic fit on past convergence data. Thus, it is similar to response surface methodologies, but it differs from them in the type of data that are used to fit the model; that is, already available data from the history of the algorithm are used instead of data obtained according to an experimental design. The step-size along the update direction (e.g., negative gradient or deflected negative gradient) is chosen according to a criterion of minimum distance from the vertex of the quadratic model. This approach rescales the complexity of the step-size selection from the order of the (large) number of training data, as in the case of exact line searches, to the order of the number of parameters (generally lower than the number of training data). The quadratic fit approach and a reduced variant are tested on some function approximation examples, yielding distributions of the final mean square errors that are improved (i.e., skewed toward lower errors) with respect to those of the commonly used pattern-by-pattern approach. Moreover, the quadratic fit is also competitive with, and sometimes better than, batch training with optimal step-sizes, thus showing an improved performance of this approach. The quadratic fit approach is also tested in conjunction with gradient deflection strategies and memoryless variable metric methods, showing errors smaller by 1 to 7 orders of magnitude. Moreover, the convergence speed using either the negative gradient direction or a deflected direction is higher than that of the pattern-by-pattern approach, although the computational cost of the algorithm per iteration is moderately higher than that of the pattern-by-pattern method. Finally, some directions for future research are identified. / Ph. D.
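The step-size rule can be illustrated with a small sketch: fit a quadratic through three recent (step-size, error) pairs and take the step at the vertex of the fit. This is a hedged reading of the abstract's limited-memory quadratic fit, not the thesis's exact bookkeeping.

```cpp
#include <array>

// Next step size from a quadratic fit through three recent
// (step-size, error) pairs. In Newton form the fit is
//   p(a) = e0 + d1*(a - a0) + d2*(a - a0)*(a - a1),
// whose vertex sits at a* = (a0 + a1)/2 - d1 / (2*d2).
double quadraticFitStep(const std::array<double, 3>& a,
                        const std::array<double, 3>& e,
                        double fallback) {
    const double d1 = (e[1] - e[0]) / (a[1] - a[0]);
    const double d2 = ((e[2] - e[1]) / (a[2] - a[1]) - d1) / (a[2] - a[0]);
    if (d2 <= 0.0) return fallback;  // fit not convex: keep previous step
    return 0.5 * (a[0] + a[1]) - d1 / (2.0 * d2);
}
```

Because the three samples come from the algorithm's own history, the fit costs almost nothing per iteration, which is the point of the approach relative to an exact line search.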
489

Topology and Toolpath Optimization via Layer-Less Multi-Axis Material Extrusion

Kubalak, Joseph Riley 28 January 2021
Although additive manufacturing technologies are often referred to as "3D printing," the family of technologies typically deposits material on a layer-by-layer basis. For material extrusion (ME) in particular, the deposition process results in weak inter- and intra-layer bonds that reduce mechanical performance in those directions. Despite this shortcoming, ME offers the opportunity to specifically and preferentially align the reinforcement of a composite material throughout a part by customizing the toolpath. Recent developments in multi-axis deposition have demonstrated the ability to place material outside of the XY-plane, enabling depositions to align to any 3D (i.e., non-planar) vector. Although mechanical property improvements have been demonstrated, toolpath planning capabilities are limited; the geometries and load paths are restricted to surface-based structures, rather than fully 3D load paths. By specifically planning deposition paths (roads) where the composite reinforcement is aligned to the load paths within a structure, there is an opportunity for a step-change in the mechanical properties of ME parts. To achieve this goal for arbitrary geometries and load paths, the author presents a design and process planning workflow that concurrently optimizes the topology of the part and the toolpath used to fabricate it. The workflow i) identifies the optimal structure and road directions using topology optimization (TO), ii) plans roads aligned to those optimal directions, iii) orders those roads for collision-free deposition, and iv) translates that ordered set of roads to a robot-interpretable toolpath. A TO algorithm, capable of optimizing 3D material orientations, is presented and demonstrated in the context of 2D and 3D load cases. The algorithm achieved a 38% improvement in final solution compliance for a 3D Wheel problem relative to existing TO algorithms with planar orientation optimization considerations. Optimized geometries and their associated orientation fields were then propagated with the presented alignment-focused deposition path planner and with conventional toolpath planners. The presented method resulted in a 97% correlation between the road directions and the orientation field, while the conventional methods achieved only 77%. A planar multi-load case was then fabricated using each of these methods and tested in both tension and bending; the presented alignment-focused method resulted in improvements of 108.24% and 29.25% in the two load cases, respectively. To evaluate the workflow in a multi-axis context, an inverted Wheel problem was optimized and processed by the workflow. The resulting toolpaths were then fabricated on a multi-axis deposition platform and mechanically evaluated relative to geometrically similar structures using a conventional toolpath planner. While the alignment in the multi-axis specimen was improved over the conventional method, the mechanical properties were reduced due to limitations of the multi-axis deposition platform. / Doctor of Philosophy / The material extrusion additive manufacturing process is widely used by hobbyists and industry professionals to produce demonstration parts, but the process is often overlooked for end-use, load bearing parts. This is due to the layer-by-layer fabrication method used to create the desired geometries; the bonding between layers is weaker than the bonding along the direction of deposition. If load paths acting on the printed structure travel across those layer interfaces, the part performance will decrease.
Whereas gantry-based systems are forced into this layer-by-layer strategy, robotic arms allow the deposition head to rotate, which enables depositions to be placed outside of the XY-plane (i.e., the typical layer). If depositions are appropriately planned using this flexibility, the layer interfaces can be oriented away from the load paths such that the load acts on the (stronger) depositions. Although this benefit has been demonstrated in the literature, existing methods for planning robotic toolpaths have limits on printability; certain load paths and geometries cannot be printed due to concerns that the robotic system will collide with the part being printed. This work focuses on increasing the generality of these toolpath planning methods by enabling any geometry and set of load paths to be printed. This is achieved through three objectives: i) identify the load paths within the structure, ii) plan roads aligned to those load paths, and iii) order those roads such that collisions will not occur. The author presents and evaluates a design workflow that addresses each of these three objectives by simultaneously optimizing the geometry of the part and the toolpath used to fabricate it. Planar and 3D load cases are optimized, processed using the presented workflow, and then fabricated on a multi-axis deposition platform. The resulting specimens are then mechanically tested and compared to specimens fabricated using conventional toolpath planning methods.
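The reported alignment figures (97% versus 77%) suggest a per-road comparison between deposition directions and the optimized orientation field. One plausible metric is sketched below, under the assumption that alignment is scored as the magnitude of the cosine between paired vectors; the thesis's exact correlation measure is not given in the abstract.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// Mean alignment between planned road directions and the optimized
// orientation field, scored as |cos(theta)| per paired segment; the
// sign is irrelevant because a deposition rotated 180 degrees is
// still aligned with the reinforcement direction.
double meanAlignment(const std::vector<Vec3>& roads,
                     const std::vector<Vec3>& field) {
    double total = 0.0;
    for (std::size_t i = 0; i < roads.size(); ++i) {
        const Vec3& r = roads[i];
        const Vec3& f = field[i];
        const double dot = r.x * f.x + r.y * f.y + r.z * f.z;
        const double nr = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
        const double nf = std::sqrt(f.x * f.x + f.y * f.y + f.z * f.z);
        total += std::abs(dot) / (nr * nf);
    }
    return total / static_cast<double>(roads.size());
}
```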
490

A reliability-based method for optimization programming problems

Esteban, Jaime 30 March 2010
In this study, a method is developed to solve general stochastic programming problems. The method is applicable to both linear and nonlinear optimization. Based on a proper linearization, a set of probabilistic constraints (performance functions) can be transformed into a corresponding set of deterministic constraints. This is accomplished by expanding all the constraints about the most probable failure point. The proposed method allows any stochastic programming problem to be reduced to a standard linear programming problem. Numerical examples are applied to the area of probability-based optimum structural design. / Master of Science
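For a linear performance function with independent normal variables, the transformation the abstract describes has a well-known closed form, which the sketch below checks; nonlinear constraints are first expanded about the most probable failure point to reach this form. (Illustrative code, not the thesis's notation.)

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// For a linear performance function g(X) = a.X - b with independent
// normals X_i ~ N(mu_i, sigma_i^2), the chance constraint
//   P(g(X) <= 0) <= pf
// is equivalent to the deterministic constraint
//   a.mu - b >= beta * sqrt(sum_i (a_i * sigma_i)^2),
// where beta = Phi^{-1}(1 - pf) is the target reliability index
// (e.g., beta = 3 gives pf of roughly 0.00135).
bool satisfiesChanceConstraint(const std::vector<double>& a,
                               const std::vector<double>& mu,
                               const std::vector<double>& sigma,
                               double b, double beta) {
    double mean = -b;
    double var = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        mean += a[i] * mu[i];
        var += (a[i] * sigma[i]) * (a[i] * sigma[i]);
    }
    return mean >= beta * std::sqrt(var);
}
```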
