  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Optimal control of nonsmooth dynamic systems

Woodford, Patrick Dominic January 1997 (has links)
No description available.
2

Dynamic Memory Optimization using Pool Allocation and Prefetching

Zhao, Qin, Rabbah, Rodric, Wong, Weng Fai 01 1900 (has links)
Heap memory allocation plays an important role in modern applications. Conventional heap allocators, however, generally ignore the underlying memory hierarchy of the system, favoring instead a low runtime overhead and fast response times. Unfortunately, with little concern for the memory hierarchy, the data layout may exhibit poor spatial locality and degrade cache performance. In this paper, we describe a dynamic heap allocation scheme called pool allocation. The strategy aims to improve cache performance by inspecting memory allocation requests and allocating memory from appropriate heap pools as dictated by the requesting context. The advantages are twofold. First, by pooling together data with a common context, we expect to improve spatial locality, as data fetched to the caches will contain fewer items from different contexts. If the allocation patterns are closely matched to the traversal patterns, the end result is faster memory performance. Second, by pooling heap objects, we expect access patterns to exhibit more regularity, thus creating more opportunities for data prefetching. Our dynamic memory optimizer exploits the increased regularity to insert prefetch instructions at runtime. The optimizations are implemented in DynamoRIO, a dynamic optimization framework. We evaluate the work using various benchmarks, and measure a 17% speedup over gcc -O3 on an Athlon MP, and a 13% speedup on a Pentium 4. / Singapore-MIT Alliance (SMA)
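The core mechanism described above, routing each request to a pool keyed by its requesting context so that objects used together sit together in memory, can be sketched as a per-context bump-pointer allocator. This is an illustrative reconstruction under assumed names (pool_alloc, NUM_POOLS, the context id), not the paper's DynamoRIO implementation:

```c
#include <stdlib.h>

#define NUM_POOLS  64
#define POOL_BYTES (1 << 20)   /* 1 MiB chunk per context pool (assumed size) */

typedef struct {
    char  *base;   /* current chunk for this pool */
    size_t used;   /* bytes handed out from the chunk */
} pool_t;

static pool_t pools[NUM_POOLS];

/* Bump-pointer allocation from the pool selected by the requesting
 * context, so same-context objects stay contiguous in memory. */
void *pool_alloc(unsigned ctx, size_t size)
{
    pool_t *p = &pools[ctx % NUM_POOLS];
    size = (size + 15) & ~(size_t)15;      /* round up to 16-byte alignment */
    if (size > POOL_BYTES)
        return malloc(size);               /* oversized requests bypass pools */
    if (p->base == NULL || p->used + size > POOL_BYTES) {
        p->base = malloc(POOL_BYTES);      /* fresh chunk; retired chunks are
                                              deliberately leaked in this sketch */
        if (p->base == NULL)
            return NULL;
        p->used = 0;
    }
    void *obj = p->base + p->used;
    p->used += size;
    return obj;
}
```

Because consecutive same-context allocations land at regular strides, a runtime optimizer can watch the resulting address stream and insert prefetches ahead of it, which is the second effect the abstract describes.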
3

DYNAMIC MODELING AND OPTIMIZATION OF BASIC OXYGEN FURNACE (BOF) OPERATION

Dering de Lima Silva, Daniela January 2019 (has links)
The basic oxygen furnace (BOF) is responsible for over 60% of the steel production worldwide. Despite being an old and intensively studied process, the complex dynamics of the phenomena taking place in the BOF still challenge researchers and fuel debates. Moreover, the severe operational conditions often prevent direct/continuous measurements of the states, making the process operation largely dependent on past experience and operators' knowledge. In this work, a dynamic model and optimization framework that can aid operators with the decision-making process are developed. Through several case studies, it is shown that the developed framework can potentially be used to reduce operational costs and increase productivity. / Thesis / Master of Chemical Engineering (MChE)
4

Modeling the interaction between flooding events and economic growth

Grames, Johanna, Prskawetz, Alexia, Grass, Dieter, Viglione, Alberto, Blöschl, Günter January 2016 (has links) (PDF)
Recently, socio-hydrology models have been proposed to analyze the interplay of community risk-coping culture, flooding damage and economic growth. These models descriptively explain the feedbacks between socio-economic development and natural disasters such as floods. Complementary to these descriptive models, we develop a dynamic optimization model in which the inter-temporal decision of an economic agent interacts with the hydrological system. We assume a standard macro-economic growth model where agents derive utility from consumption and output depends on physical capital that can be accumulated through investment. To this framework we add the occurrence of flooding events, which destroy part of the capital. We identify two specific periodic long-term solutions and denote them rich and poor economies. Whereas rich economies can afford to invest in flood defense and therefore avoid flood damage and develop high living standards, poor economies prefer consumption over investment in flood defense capital and end up facing flood damage every time the water level rises, as in e.g. the Mekong delta. Nevertheless, they manage to sustain at least a low level of physical capital. We identify optimal investment strategies and compare simulations with more frequent, more intense and stochastic high water level events.
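A hedged formalization of the setup just described, assuming a standard discounted-utility growth model with generic functional forms (the paper's exact specification may differ):

```latex
\max_{c(\cdot),\, i_F(\cdot)} \int_0^{\infty} e^{-\rho t}\, u\big(c(t)\big)\, dt
\quad \text{s.t.} \quad
\dot{k} = f(k) - c - i_F - D\big(W(t), F\big)\,k,
\qquad
\dot{F} = i_F - \delta_F F,
```

where $k$ is physical capital, $c$ consumption, $F$ flood-defense capital built by investment $i_F$, $W(t)$ the water level, and $D$ the fraction of capital destroyed when $W$ overtops the defenses. The rich/poor dichotomy then corresponds to whether the optimal path keeps $i_F$ positive or sets it to zero and absorbs recurring damages.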
5

Dynamic optimization of classification systems for adaptive incremental learning.

Kapp, Marcelo Nepomoceno 25 May 2016 (has links) (PDF)
Doctoral thesis, defended at the Université du Québec, Canada, 2010. / An incremental learning system updates itself in response to incoming data without reexamining all the old data. Since classification systems capable of incrementally storing, filtering, and classifying data are economical in terms of both space and time, and thus immensely useful for industrial, military, and commercial purposes, interest in designing them is growing. However, the challenge with incremental learning is that classification tasks can no longer be seen as unvarying, since they can actually change with the evolution of the data. These changes in turn cause dynamic changes in the classification system's parameters. If such variations are neglected, the overall performance of these systems will be compromised in the future. In this thesis, toward the development of a system capable of incrementally accommodating new data and dynamically tracking new optimum system parameters for self-adaptation, we first address the optimum selection of classifiers over time. We propose a framework which combines the power of Swarm Intelligence Theory and the conventional grid-search method to progressively identify potential solutions for gradually updating training datasets. The key here is to consider the adjustment of classifier parameters as a dynamic optimization problem that depends on the data available. Specifically, it has been shown that, if the intention is to build efficient Support Vector Machine (SVM) classifiers from sources that provide data gradually and serially, then the best way to do this is to consider model selection as a dynamic process which can evolve and change over time. This means that a number of solutions are required, depending on the knowledge available about the problem and uncertainties in the data. We also investigate measures for evaluating and selecting classifier ensembles composed of SVM classifiers. The measures employed are based on two different theories (diversity and margin) commonly used to understand the success of ensembles. This study has given us valuable insights and helped us to establish confidence-based measures as a tool for the selection of classifier ensembles. The main contribution of this thesis is a dynamic optimization approach that performs incremental learning in an adaptive fashion by tracking, evolving, and combining optimum hypotheses over time. The approach incorporates various theories, such as dynamic Particle Swarm Optimization, incremental Support Vector Machine classifiers, change detection, and dynamic ensemble selection based on classifier confidence levels. Experiments carried out on synthetic and real-world databases demonstrate that the proposed approach outperforms the classification methods often used in incremental learning scenarios.
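The dynamic-optimization view of model selection can be sketched with the canonical particle swarm update; this is the generic form, not necessarily the thesis's exact dynamic PSO variant:

```latex
v_i \leftarrow \omega\, v_i + c_1 r_1\, (p_i - x_i) + c_2 r_2\, (g - x_i),
\qquad
x_i \leftarrow x_i + v_i,
```

where each particle position $x_i$ encodes SVM hyperparameters (e.g. the regularization constant and a kernel width), $p_i$ and $g$ are the personal and global best positions, and $r_1, r_2$ are uniform random draws. When a new block of data arrives, the fitness landscape shifts, so stored bests must be re-evaluated and part of the swarm re-diversified to track the moving optimum; this tracking is what distinguishes dynamic PSO from the static version.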
6

Computational studies of some static and dynamic optimisation problems.

Lee, Wei R. January 1999 (has links)
In this thesis we shall investigate the numerical solutions to several important practical static and dynamic optimization problems in engineering and physics. The thesis is organized as follows.

In Chapter 1 a general literature review is presented, including the motivation and development of the problems, and existing results. Furthermore, some existing computational methods for optimal control problems are also discussed.

In Chapter 2 the design of a semiconductor device is posed as an optimization problem: given an ideal voltage-current (V-I) characteristic, find one or more physical and geometrical parameters so that the V-I characteristic of the device matches the ideal one optimally with respect to a prescribed performance criterion. The voltage-current characteristic of a semiconductor device is governed by a set of nonlinear partial differential equations (PDEs), and thus a black-box approach is taken for the numerical solution of the PDEs. Various existing numerical methods are proposed for the solution of the nonlinear optimization problem. The Jacobian of the cost function is ill-conditioned, and a scaling technique is thus proposed to stabilize the resulting linear system. Numerical experiments, performed to show the usefulness of this approach, demonstrate that the approach always gives optimal or near-optimal solutions to the test problems in both two and three dimensions.

In Chapter 3 we propose an efficient approach to numerical integration in one and two dimensions, where a grid set with a fixed number of vertices is to be chosen so that the error between the numerical integral and the exact integral is minimized. For the one-dimensional problem, two schemes are developed for sufficiently smooth functions, based on the mid-point rectangular quadrature rule and the trapezoidal rule respectively, and another method is developed for integrands which are not sufficiently smooth. For two-dimensional problems, two schemes are first developed for sufficiently smooth functions: one is based on the barycenter rule on a rectangular partition, while the other is based on a triangular partition. A scheme for insufficiently smooth functions is also developed. For illustration, several examples are solved using the proposed schemes, and the numerical results show the effectiveness of the approach.

Chapter 4 deals with optimal recharge and driving plans for a battery-powered electric vehicle. A major problem facing battery-powered electric vehicles lies in their batteries: weight and charge capacity. Thus a battery-powered electric vehicle has only a short driving range, and to travel a longer distance the batteries must be recharged frequently. In this chapter we construct a model for a battery-powered electric vehicle, in which a driving strategy is to be obtained so that the total traveling time between two locations is minimized. The problem is formulated as an unconventional optimization problem. However, by using the control parameterization enhancing transformation (CPET) (see [100]), it is shown that this unconventional optimization problem is equivalent to a conventional optimal parameter selection problem. Numerical examples are solved using the proposed method.

In Chapter 5 we consider the numerical solution of a class of optimal control problems involving variable time points in their cost functions. The CPET is first used to convert the optimal control problem with variable time points into an equivalent optimal control problem with fixed multiple characteristic times (MCT). Using the control parameterization technique, the time horizon is partitioned into several subintervals, with the partition points also taken as decision variables. The control functions are approximated by piecewise constant or piecewise linear functions in accordance with these variable partition points. We thus obtain a finite-dimensional optimization problem. The CPET transform is again used to convert the approximate optimal control problems with variable partition points into equivalent standard optimal control problems with MCT, where the control functions are piecewise constant or piecewise linear functions with pre-fixed partition points. The transformed problems are essentially optimal parameter selection problems with MCT. Gradient formulae are obtained for the objective function as well as the constraint functions with respect to the relevant decision variables. Numerical examples are solved using the proposed method.

In Chapter 6 a numerical approach is proposed for constructing an approximate optimal feedback control law for a class of nonlinear optimal control problems. In this approach, the state space is partitioned into subdivisions, and the controllers are approximated by a linear combination of third-order B-spline basis functions. Furthermore, the partition points are also taken as decision variables in this formulation. To show the effectiveness of the proposed approach, a two-dimensional and a three-dimensional example are solved by the approach. The numerical results demonstrate that the method is superior to existing methods with fixed partition points.
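As a concrete instance of the Chapter 3 problem, the one-dimensional mid-point scheme amounts to taking the grid itself as the decision variable (a schematic statement; the thesis's precise formulation may differ):

```latex
\min_{a = x_0 < x_1 < \cdots < x_n = b}\;
\left|\, \int_a^b f(x)\, dx \;-\; \sum_{i=1}^{n} f\!\left(\frac{x_{i-1} + x_i}{2}\right)(x_i - x_{i-1}) \right|,
```

i.e. with the number of vertices $n+1$ fixed, the partition points are optimized so the quadrature error is as small as possible, concentrating grid points where $f$ varies most.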
7

Ameliorating the Overhead of Dynamic Optimization

Zhao, Qin, Wong, Weng Fai 01 1900 (has links)
Dynamic optimization has several key advantages. These include the ability to work on binary code in the absence of sources and to perform optimization across module boundaries. However, it has a significant disadvantage vis-à-vis traditional static optimization: a substantial runtime overhead. There can be a performance gain only if the overhead can be amortized. In this paper, we quantitatively analyze the runtime overhead introduced by a dynamic optimizer, DynamoRIO. We found that the major overhead does not come from the optimizer's operation. Instead, it comes from the extra code in the code cache added by DynamoRIO. After a detailed analysis, we propose a method of trace construction that ameliorates the overhead introduced by the dynamic optimizer, thereby reducing the runtime overhead of DynamoRIO. We believe that the results of the study, as well as the proposed solution, are applicable to other scenarios such as dynamic code translation and managed execution that utilize a framework similar to that of dynamic optimization. / Singapore-MIT Alliance (SMA)
8

none

Cheng, Shu-Shuo 25 June 2008 (has links)
In this study, the Gordon-Schaefer model is used to evaluate the optimal conditions of open access and of dynamic optimization at equilibrium. The results of the models are further analyzed by the method of comparative static analysis. Following Schnute's method, the intrinsic growth rate, the catchability coefficient and the environmental carrying capacity are estimated in order to evaluate the equilibrium values of the resource stock and the fishing effort for yellowfin tuna. The sensitivity analysis is based on the assumption that all parameters vary within reasonable ranges. The results of the comparative static analysis are consistent with those of the sensitivity analysis, suggesting that the fishery coheres comparatively well with the dynamic optimization model. This study aims to provide a useful reference for policy making in the sustainable development of the offshore fishery resources in Taiwan.
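For reference, the surplus-production dynamics underlying the Gordon-Schaefer analysis, written in terms of exactly the parameters the abstract names (the study's estimated values are not reproduced here):

```latex
\dot{x} = r x \left(1 - \frac{x}{K}\right) - qEx,
\qquad h = qEx,
```

where $x$ is the resource stock, $r$ the intrinsic growth rate, $K$ the environmental carrying capacity, $q$ the catchability coefficient, $E$ the fishing effort, and $h$ the harvest rate. Open-access and dynamically optimal outcomes are then compared at the equilibria where $\dot{x}=0$.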
9

Essays in Market Design

Leshno, Jacob January 2012 (has links)
This dissertation consists of three essays in market design. The first essay studies a dynamic allocation problem. The second presents a new model for many-to-one matching markets where colleges are matched to a large number of students. The third analyzes the effect of the minimum wage on training in internships. In many assignment problems items arrive stochastically over time, and must therefore be assigned dynamically. The first essay studies the social planner's ability to dynamically match agents with heterogeneous preferences to their preferred items. Impatient agents may misreport their preferences to receive an earlier assignment, causing welfare loss. The first essay presents a tractable model of the problem and mechanisms that minimize the welfare loss. The second essay, which is joint work with Eduardo Azevedo, considers the classical many-to-one matching problem when many students are assigned to a few large colleges. We show that stable matchings have a simple characterization: any stable matching is equivalent to market clearing cutoffs, admission thresholds for each college. The essay presents a model where a continuum of students is to be matched to a finite number of schools. Using the cutoff representation we show that under broad conditions there is a unique stable matching, and that it varies continuously with respect to the underlying economy. The third essay, which is joint work with Michael Schwarz, looks at on-the-job training in firms. The firm recovers the cost of training by gradually training the worker over time, paying a wage below the worker's marginal product and providing the remaining compensation in the form of training. When the worker's productivity is close to the minimum wage, the firm finds it profitable to front-load training, making the worker more productive and the training faster. A decrease in the minimum wage reduces the firm's incentive to front-load training, and can make training less efficient.
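Schematically, the cutoff characterization can be stated as follows; the notation is an illustrative paraphrase rather than a quotation from the essay. Given a cutoff $P_c$ for each college $c$, a student $\theta$ with scores $e_{\theta c}$ attends her most preferred college among those whose cutoffs she clears, and cutoffs are market clearing when every college's demand equals its capacity:

```latex
\mu(\theta) = \max_{\succ_\theta} \{\, c : e_{\theta c} \ge P_c \,\},
\qquad
D_c(P) = S_c \quad \text{for every college } c,
```

where $D_c(P)$ is the mass of students choosing $c$ at cutoffs $P$ and $S_c$ is $c$'s capacity.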
10

YETI: a GraduallY Extensible Trace Interpreter

Zaleski, Mathew 01 August 2008 (has links)
The implementation of new programming languages benefits from interpretation because it is simple, flexible and portable. The only downside is speed of execution, as there remains a large performance gap between even efficient interpreters and systems that include a just-in-time (JIT) compiler. Augmenting an interpreter with a JIT, however, is not a small task. Today, Java JITs are typically method-based. To compile whole methods, the JIT must re-implement much functionality already provided by the interpreter, leading to a "big bang" development effort before the JIT can be deployed. Adding a JIT to an interpreter would be easier if we could shift more gradually from dispatching virtual instruction bodies implemented for the interpreter to running instructions compiled into native code by the JIT. We show that virtual instructions implemented as lightweight callable routines can form the basis for a very efficient interpreter. Our new technique, interpreted traces, identifies hot paths, or traces, as a virtual program is interpreted. By exploiting the way traces predict branch destinations, our technique markedly reduces the branch mispredictions caused by dispatch. Interpreted traces are a high-performance technique, running about 25% faster than direct threading. We show that interpreted traces are a good starting point for a trace-based JIT. We extend our interpreter so traces may contain a mixture of compiled code for some virtual instructions and calls to virtual instruction bodies for others. By compiling about 50 integer and object virtual instructions to machine code we improve performance by about 30% over interpreted traces, running about twice as fast as the direct-threaded system with which we started.
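The enabling design, virtual instruction bodies as lightweight callable routines that a trace dispatches in sequence, might be sketched as below; the names are illustrative, not YETI's actual source:

```c
#include <stddef.h>

typedef struct VMState VMState;
typedef void (*body_fn)(VMState *vm);   /* a callable instruction body */

struct VMState {
    int stack[256];
    int sp;                             /* operand stack pointer */
};

/* Two example bodies: push the constant 1, and integer add. */
static void op_iconst_1(VMState *vm) { vm->stack[vm->sp++] = 1; }
static void op_iadd(VMState *vm)
{
    vm->sp--;
    vm->stack[vm->sp - 1] += vm->stack[vm->sp];
}

/* An interpreted trace is a sequence of body pointers along a hot path.
 * Paired call/return dispatch predicts far better than one shared
 * indirect jump, and a JIT can later swap individual entries for
 * pointers to compiled stubs without changing this loop. */
static void run_trace(body_fn *trace, size_t len, VMState *vm)
{
    for (size_t i = 0; i < len; i++)
        trace[i](vm);
}

int main(void)
{
    VMState vm = { .sp = 0 };
    body_fn trace[] = { op_iconst_1, op_iconst_1, op_iadd };
    run_trace(trace, sizeof trace / sizeof trace[0], &vm);
    return vm.stack[0] == 2 ? 0 : 1;    /* the trace computes 1 + 1 */
}
```

Mixing compiled and interpreted instructions then reduces to replacing some trace entries with functions emitted by the JIT, which is the gradual migration path the abstract argues for.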
