11

YETI: a GraduallY Extensible Trace Interpreter

Zaleski, Mathew 01 August 2008 (has links)
The implementation of new programming languages benefits from interpretation because it is simple, flexible and portable. The only downside is speed of execution, as there remains a large performance gap between even efficient interpreters and systems that include a just-in-time (JIT) compiler. Augmenting an interpreter with a JIT, however, is not a small task. Today, Java JITs are typically method-based. To compile whole methods, the JIT must re-implement much functionality already provided by the interpreter, leading to a "big bang" development effort before the JIT can be deployed. Adding a JIT to an interpreter would be easier if we could more gradually shift from dispatching virtual instruction bodies implemented for the interpreter to running instructions compiled into native code by the JIT. We show that virtual instructions implemented as lightweight callable routines can form the basis for a very efficient interpreter. Our new technique, interpreted traces, identifies hot paths, or traces, as a virtual program is interpreted. By exploiting the way traces predict branch destinations, our technique markedly reduces branch mispredictions caused by dispatch. Interpreted traces are a high-performance technique, running about 25% faster than direct threading. We show that interpreted traces are a good starting point for a trace-based JIT. We extend our interpreter so traces may contain a mixture of compiled code for some virtual instructions and calls to virtual instruction bodies for others. By compiling about 50 integer and object virtual instructions to machine code we improve performance by about 30% over interpreted traces, running about twice as fast as the direct-threaded system with which we started.
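As a rough illustration of the dispatch ideas above, the following Python toy (not the thesis's actual implementation, which targets real bytecode and native code) treats each virtual instruction body as a callable routine and replays a recorded hot path as a straight sequence of calls, which is the essence of interpreted traces; all names here are illustrative:

```python
# Toy sketch: virtual instruction bodies as callable routines, and a recorded
# hot path ("trace") replayed as straight-line calls without dispatch decisions.

def op_push(state, arg):
    state["stack"].append(arg)

def op_add(state, _):
    b, a = state["stack"].pop(), state["stack"].pop()
    state["stack"].append(a + b)

def op_print(state, _):
    print(state["stack"].pop())

# A "virtual program": (body, operand) pairs, normally dispatched one by one.
program = [(op_push, 2), (op_push, 3), (op_add, None), (op_print, None)]

def interpret(program):
    state = {"stack": []}
    trace = []                      # record the hot path as it is interpreted
    for body, operand in program:
        body(state, operand)        # dispatch by calling the body
        trace.append((body, operand))
    return trace

def run_trace(trace):
    # Replaying a recorded trace is a fixed sequence of calls, which is the
    # property that makes interpreted traces friendly to branch predictors.
    state = {"stack": []}
    for body, operand in trace:
        body(state, operand)

hot_trace = interpret(program)      # prints 5 while recording the trace
run_trace(hot_trace)                # prints 5 again by replaying the trace
```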
12

Integration of Scheduling and Dynamic Optimization: Computational Strategies and Industrial Applications

Nie, Yisu 01 July 2014 (has links)
This thesis focuses on the development of model-based optimization strategies for the integration of process scheduling and dynamic optimization, and on applications of the integrated approaches to industrial polymerization processes. The integrated decision-making approaches seek to exploit the synergy between production schedule design and process unit control to improve process performance. The integration problem has received much attention from both academia and industry over the past decade. For scheduling, we adopt two formulation approaches based on the state equipment network and the resource task network, respectively. For dynamic optimization, we rely on the simultaneous collocation strategy to discretize the differential-algebraic equations. Two integrated formulations are proposed that result in mixed discrete/dynamic models, and solution methods based on decomposition approaches are addressed. A class of ring-opening polymerization processes is used for our industrial case studies. We develop rigorous dynamic reactor models for both semi-batch homopolymerization and copolymerization operations. The reactor models are based on first principles such as mass and heat balances, reaction kinetics and vapor-liquid equilibria. We derive reactor models with both the population balance method and the method of moments. The obtained reactor models are validated against historical plant data. Polymerization recipes are optimized with dynamic optimization algorithms to reduce polymerization times by modifying operating conditions, such as the reactor temperature and monomer feed rates, over time. Next, we study scheduling methods that involve multiple process units and products. The resource task network scheduling model is reformulated into state space form, which offers a good platform for incorporating dynamic models. Lastly, for the integration study, we investigate a process with two parallel polymerization reactors and downstream storage and purification units. The dynamic behaviors of the two reactors are coupled through shared cooling resources. We formulate the integration problem by combining the state space resource task network model with the moment-based reactor model. The case study results indicate promising improvements in process performance from applying dynamic optimization and scheduling optimization separately and, more importantly, from integrating the two.
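As a hedged sketch of the simultaneous collocation strategy mentioned above, the following Pyomo model discretizes a small dynamic optimization problem by orthogonal collocation on finite elements and hands the resulting NLP to IPOPT. The toy ODE, bounds, and objective are illustrative only and are not the thesis's reactor models:

```python
# Minimal simultaneous-collocation sketch with Pyomo.DAE (illustrative model).
import pyomo.environ as pyo
from pyomo.dae import ContinuousSet, DerivativeVar

m = pyo.ConcreteModel()
m.t = ContinuousSet(bounds=(0.0, 1.0))
m.x = pyo.Var(m.t)                          # state
m.u = pyo.Var(m.t, bounds=(-2.0, 2.0))      # control
m.dxdt = DerivativeVar(m.x, wrt=m.t)

# Toy dynamics: dx/dt = -x + u, x(0) = 1 (stand-in for a reactor model).
m.ode = pyo.Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t] + m.u[t])
m.ic = pyo.Constraint(expr=m.x[m.t.first()] == 1.0)

# Simultaneous approach: fully discretize by orthogonal collocation on finite
# elements, turning the dynamic problem into an algebraic NLP.
pyo.TransformationFactory('dae.collocation').apply_to(
    m, nfe=20, ncp=3, scheme='LAGRANGE-RADAU')

# Objective declared after discretization so it spans all collocation points.
m.obj = pyo.Objective(
    expr=sum(m.x[t]**2 + 0.01 * m.u[t]**2 for t in m.t), sense=pyo.minimize)

pyo.SolverFactory('ipopt').solve(m)
```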
13

Advances in Newton-based Barrier Methods for Nonlinear Programming

Wan, Wei 01 August 2017 (has links)
Nonlinear programming (NLP) is a very important tool for optimizing many systems in science and engineering. The interior point solver IPOPT has become one of the most popular solvers for NLP because of its high performance. However, certain types of problems are still challenging for IPOPT. This dissertation considers three improvements or extensions to IPOPT to improve performance on several practical classes of problems. Compared to active set solvers, which treat inequalities by identifying active constraints and transforming them to equalities, the interior point method is less robust in the presence of degenerate constraints. Interior point methods require certain regularity conditions on the constraint set for the solution path to exist. Dependent constraints, which commonly appear in applications such as chemical process models, violate these regularity conditions. IPOPT introduces regularization terms to attempt to correct this, but in some cases the required regularization terms are either too large or too small and the solver fails. To deal with these challenges, we present a new structured regularization algorithm, which is able to numerically delete dependent equalities in the KKT matrix. Numerical experiments on hundreds of modified example problems show the effectiveness of this approach, with an average reduction of more than 50% in iterations. In some contexts, such as online optimization, very fast solution of an NLP is essential. To improve the performance of IPOPT, it is best to take advantage of problem structure. Dynamic optimization problems are often solved online in control or state-estimation applications. These problems are very large and have a particular sparse structure. This work investigates the use of parallelization to speed up the NLP solution. Because the KKT factorization is the most expensive step in IPOPT, it is the most important step to parallelize. Several cyclic reduction algorithms are compared for their performance on generic test matrices as well as matrices of the form found in dynamic optimization. The results show that for very large problems, the KKT matrix factorization time can be improved by a factor of four when using eight processors. Mathematical programs with complementarity constraints (MPCCs) are another challenging class of problems for IPOPT. Several algorithmic modifications are examined to specially handle the difficult complementarity constraints. First, two automatic penalty adjustment approaches are implemented and compared. Next, the use of our structured regularization is tested in combination with the equality reformulation of MPCCs. Then, we propose an altered equality reformulation of MPCCs which effectively removes the degenerate equality or inequality constraints. Using the MacMPEC test library and two applications, we compare the efficiency of our approaches to previous NLP reformulation strategies.
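To make the degeneracy issue concrete, the following numpy sketch shows the generic regularized KKT system that interior-point methods factorize: a dual regularization term keeps the matrix nonsingular when constraint rows are dependent. This is only the standard regularization idea that the thesis's structured algorithm refines, and the matrices and values are purely illustrative:

```python
# Sketch of a regularized primal-dual KKT step (generic idea, illustrative data).
import numpy as np

def regularized_kkt_step(W, A, grad_L, c, delta_w=0.0, delta_c=1e-8):
    """Solve [[W + delta_w*I, A^T], [A, -delta_c*I]] [dx; dlam] = -[grad_L; c].

    W       : Hessian of the Lagrangian (n x n)
    A       : equality-constraint Jacobian (m x n), possibly rank-deficient
    delta_c : dual regularization that keeps the KKT matrix nonsingular even
              when constraints are dependent, at the cost of a perturbed step.
    """
    n, m = W.shape[0], A.shape[0]
    K = np.block([[W + delta_w * np.eye(n), A.T],
                  [A, -delta_c * np.eye(m)]])
    rhs = -np.concatenate([grad_L, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # primal step dx, multiplier step dlam

# Tiny example with two dependent equalities (second row is twice the first),
# which would make the unregularized KKT matrix singular.
W = np.eye(3)
A = np.array([[1.0, 1.0, 0.0],
              [2.0, 2.0, 0.0]])
dx, dlam = regularized_kkt_step(W, A, grad_L=np.ones(3), c=np.zeros(2))
```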
14

Large-Scale Dynamic Optimization Under Uncertainty using Parallel Computing

Washington, Ian D. January 2016 (has links)
This research focuses on the development of a solution strategy for the optimization of large-scale dynamic systems under uncertainty. Uncertainty resides naturally within the external inputs imposed on the system or within the system itself. For example, in chemical process systems, external inputs include flow rates, temperatures or compositions, while internal sources include kinetic or mass transport parameters and empirical parameters used within thermodynamic correlations and expressions. The goal in devising a dynamic optimization approach that explicitly accounts for uncertainty is to do so in a manner that is computationally tractable and general enough to handle various types and sources of uncertainty. The approach developed in this thesis follows a so-called multiperiod technique whereby the infinite-dimensional uncertainty space is discretized at numerous points (known as periods or scenarios), which creates different possible realizations of the uncertain parameters. The resulting optimization formulation encompasses an approximated expected value of a chosen objective functional subject to a dynamic model for all the generated realizations of the uncertain parameters. The dynamic model can be solved, using an appropriate numerical method, in an embedded manner in which the solution is used to construct the optimization constraints; alternatively, the model can be completely discretized over the temporal domain and posed directly as part of the optimization formulation. Our approach in this thesis has mainly focused on the embedded model technique for dynamic optimization, which can follow either a single- or multiple-shooting solution method. The first contribution of the thesis investigates a combined multiperiod multiple-shooting dynamic optimization approach for the design of dynamic systems using ordinary differential equation (ODE) or differential-algebraic equation (DAE) process models. A major aspect of this approach is the analysis of the parallel solution of the embedded model within the optimization formulation. As part of this analysis, we further consider the application of the dynamic optimization approach to several design and operation applications. Another major contribution of the thesis is the development of a nonlinear programming (NLP) solver based on an approach that combines sequential quadratic programming (SQP) with an interior-point method (IPM) for the quadratic programming subproblem. A unique aspect of the approach is that the inherent structure (and parallelism) of the multiperiod formulation is exploited at the linear algebra level within the SQP-IPM nonlinear programming algorithm using an explicit Schur-complement decomposition. Our NLP solution approach is further assessed using several static and dynamic optimization benchmark examples.
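The explicit Schur-complement decomposition mentioned above can be sketched as follows for a block-bordered (arrowhead) linear system of the kind produced by multiperiod formulations: each scenario block is factorized independently, which is where the parallelism lives. The block sizes and data below are random and illustrative, not taken from the thesis:

```python
# Sketch of an explicit Schur-complement solve for an arrowhead system.
import numpy as np

def schur_solve(blocks, borders, K0, rhs_blocks, rhs0):
    """Solve the block-bordered system
       [K_i        B_i ] [x_i]   [r_i]
       [B_i^T ...  K_0 ] [x_0] = [r_0]   over all scenario blocks i."""
    S = K0.copy()
    r = rhs0.copy()
    local = []
    for K_i, B_i, r_i in zip(blocks, borders, rhs_blocks):
        Kinv_B = np.linalg.solve(K_i, B_i)      # independent per block
        Kinv_r = np.linalg.solve(K_i, r_i)      # (the parallelizable step)
        S -= B_i.T @ Kinv_B                     # accumulate Schur complement
        r -= B_i.T @ Kinv_r
        local.append((Kinv_B, Kinv_r))
    x0 = np.linalg.solve(S, r)                  # small coupled system
    xs = [Kinv_r - Kinv_B @ x0 for Kinv_B, Kinv_r in local]
    return xs, x0

rng = np.random.default_rng(0)
n, m, N = 6, 2, 3                               # block size, coupling size, #blocks
blocks = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(N)]
borders = [rng.standard_normal((n, m)) for _ in range(N)]
xs, x0 = schur_solve(blocks, borders, np.eye(m),
                     [rng.standard_normal(n) for _ in range(N)],
                     rng.standard_normal(m))
```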
15

License Buyback Programs in Commercial Fisheries: An Application to the Shrimp Fishery in the Gulf of Mexico

Mamula, Aaron T. 16 January 2010 (has links)
This dissertation provides a thorough analysis of the costs associated with, and efficacy of, sequential license buyback auctions. I use data from the Texas Shrimp License Buyback Program, a sequential license buyback auction, to estimate the effects of a repeated-game setup on bidding behavior. I develop a dynamic econometric model to estimate parameters of the fisherman's optimal bidding function in this auction. The model incorporates the learning that occurs when an agent is able to submit bids for the same asset in multiple rounds, and is capable of distinguishing between the fisherman's underlying valuation of the license and the speculative premium induced by the sequential auction. I show that bidders in the sequential auction do in fact inflate bids above their true license valuation in response to the sequential auction format. The results from the econometric model are used to simulate a hypothetical buyback program for capacity reduction in the offshore shrimp fishery in the Gulf of Mexico using two competing auction formats: the sequential auction and the one-time auction. I use this simulation analysis to compare the cost and effectiveness of sequential license buyback programs relative to one-time license buyback programs. I find that one-time auctions, although they impose a greater up-front cost on the management agency, are capable of retiring more fishing effort per dollar spent than sequential license buyback programs. In particular, I find one-time license buyback auctions to be more cost effective than sequential ones because they remove the possibility for fishermen to learn about the agency's willingness-to-pay function and use this information to extract sale prices in excess of the true license value.
16

The Study of the Bioeconomic Analysis of Grey Mullet in Taiwan

Cheng, Man-chun 29 January 2007 (has links)
This study draws on biological and economic theory to establish open access, dynamic optimization and static optimization fishery models, and to discuss the problem of fishery management. To obtain the equilibrium resource stock and effort, the research data are analyzed mainly by comparative statics. Estimates of the exogenous variables for the grey mullet fishery are collected and analyzed, the Mathematica program is used to calculate the equilibrium resource stock and effort, and sensitivity analysis is performed with respect to changes in the estimated exogenous variables. The analysis shows that the equilibrium resource stock and effort of the three fishery models are consistent. From the perspective of CPUE, however, the economic effect of the open access model is not evident. The fishing policy for grey mullet should therefore be managed more strictly so that fishermen can earn the highest economic benefits. Keywords: open access model, static optimization model, dynamic optimization model.
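A minimal sketch of the kind of open-access versus static-optimization equilibrium and comparative-statics calculation described above, using a standard Gordon-Schaefer surplus-production specification, is shown below; the parameter values are illustrative, not the study's estimates:

```python
# Gordon-Schaefer equilibria: growth rX(1-X/K), harvest qEX, rent pqEX - cE.
import numpy as np

def equilibria(r, K, q, p, c):
    X_oa = c / (p * q)                        # open access: rent dissipated to zero
    E_oa = (r / q) * (1.0 - X_oa / K)         # sustainable effort at that stock
    E_mey = (r / (2 * q)) * (1.0 - c / (p * q * K))   # maximize sustainable rent
    X_mey = K * (1.0 - q * E_mey / r)
    return {"open_access": (X_oa, E_oa), "static_opt": (X_mey, E_mey)}

base = dict(r=0.5, K=1000.0, q=0.01, p=20.0, c=50.0)   # illustrative parameters
print(equilibria(**base))

# Simple sensitivity (comparative statics): perturb each exogenous parameter by +10%.
for name in base:
    perturbed = dict(base, **{name: base[name] * 1.1})
    print(name, equilibria(**perturbed))
```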
17

The Biological and Economic Analysis of the Resource of the Shrimp Acetes intermedius in TungKang, PingTung

Yang, Chung-hao 27 June 2008 (has links)
The fishery of the shrimp Acetes intermedius on the southwestern coast of Taiwan has a long history, and the shrimp is food for many species of fish and larger shrimps. Because of its small harvest and output value in the past, Acetes was largely neglected, and its ecological importance on the southwestern coast was ignored by the academic community. After the TungKang producer organization for the shrimp Acetes intermedius was established in 1994, catch management came into operation; prices rose, output value increased rapidly, and the fishery became an important seasonal fishery. Against this background, the study draws on biological and economic theory to set out open access, static optimization and dynamic optimization fishery models, and to further discuss the problem of fishery management. To obtain the equilibrium resource stock and effort, real data are substituted into the models and analyzed mainly by comparative statics on the exogenous variables. By understanding how sensitive the endogenous variables are to changes in the exogenous variables, we can provide the members of the TungKang producer organization with guidance on controlling harvest and preserving the stock. Through the derivation of the theoretical models and simulation analysis of historical data, the study finds that the management of the TungKang producer organization embodies the notion of sustainable administration, and it is hoped that this management approach can be popularized.
18

none

Wu, Hsiao-wen 27 July 2009 (has links)
In this study, the fundamental fish population dynamics model, the Gordon-Schaefer model, is used to derive the open access and dynamic optimization equilibrium levels for the Pacific Bluefin Tuna fishery, and sensitivity analysis is then performed. By comparing the historical catch records with the equilibrium values under open access and dynamic optimization, we find that the fish stocks and harvests of Pacific Bluefin Tuna are not at the dynamically optimal levels. In order to ensure the sustainable development of the Pacific Bluefin Tuna fishery, effective measures must be taken to preserve and manage the Pacific Bluefin Tuna resources. Finally, this study simulates and analyzes various management scenarios for the Pacific Bluefin Tuna fishery. The simulation results reveal that optimal management of the Pacific Bluefin Tuna fishery would imply a significant reallocation of the fishing gear shares. Furthermore, the net present value could increase substantially by reallocating the fishing gear shares.
19

Exploiting language abstraction to optimize memory efficiency

Sartor, Jennifer Bedke 13 December 2010 (has links)
The programming language and underlying hardware determine application performance, and both are undergoing revolutionary shifts. As applications have become more sophisticated and capable, programmers have chosen managed languages in many domains for ease of development. These languages abstract memory management from the programmer, which can introduce time and space overhead but also provide opportunities for dynamic optimization. Optimizing memory performance is paramount in part because hardware is reaching physical limits. Recent trends towards chip multiprocessor machines exacerbate the memory system bottleneck because they add cores without adding commensurate bandwidth. Both language and architecture trends add stress to the memory system and degrade application performance. This dissertation exploits the language abstraction to analyze and optimize memory efficiency on emerging hardware. We study the sources of memory inefficiency on two levels: heap data and hardware storage traffic. We design and implement optimizations that change the heap layout of arrays, and use program semantics to eliminate useless memory traffic. These techniques improve memory system efficiency and performance. We first quantitatively characterize the problem by comparing many data compression algorithms and their combinations in a limit study of Java benchmarks. We find that arrays are a dominant source of heap inefficiency. We introduce z-rays, a new array layout design, to bridge the gap between fast access, space efficiency and predictability. Z-rays facilitate compression and offer flexibility as well as time and space efficiency. We also find that there is a semantic mismatch between managed languages, with their rapid allocation rates, and current hardware, causing unnecessary and excessive traffic in the memory subsystem. We take advantage of the garbage collector's identification of dead data regions, communicating this information to the caches to eliminate useless traffic to memory. By reducing traffic and bandwidth demand, we improve performance. We show that the memory abstraction in managed languages is not just a cost to be borne, but an opportunity to alleviate the memory bottleneck. This thesis shows how to exploit this abstraction to improve space and time efficiency and overcome the memory wall. We enhance the productivity and performance of ubiquitous managed languages on current and future architectures.
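As a loose illustration of the discontiguous array layouts this line of work builds on, the following Python toy sketches chunked indexing through a spine with a shared zero chunk and copy-on-write. It is purely conceptual and is not the z-ray design or implementation described in the thesis; all names and sizes are illustrative:

```python
# Toy discontiguous array: elements live in fixed-size chunks reached through a
# spine of references, so chunks can be shared (e.g. an all-zero chunk) or
# compressed independently of the rest of the array.
CHUNK = 4
ZERO_CHUNK = (0,) * CHUNK            # one shared chunk backs all-zero regions

class ChunkedArray:
    def __init__(self, length):
        self.length = length
        nchunks = (length + CHUNK - 1) // CHUNK
        self.spine = [ZERO_CHUNK] * nchunks      # lazy: no chunk storage yet

    def __getitem__(self, i):
        return self.spine[i // CHUNK][i % CHUNK]

    def __setitem__(self, i, value):
        c = i // CHUNK
        if self.spine[c] is ZERO_CHUNK:          # copy-on-write of the shared chunk
            self.spine[c] = [0] * CHUNK
        self.spine[c][i % CHUNK] = value

a = ChunkedArray(1_000_000)          # large array, but only the spine is allocated
a[123456] = 7
assert a[123456] == 7 and a[0] == 0
```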
20

Nonlinear Programming Approaches for Efficient Large-Scale Parameter Estimation with Applications in Epidemiology

Word, Daniel Paul 16 December 2013 (has links)
The development of infectious disease models remains important to provide scientists with tools to better understand disease dynamics and develop more effective control strategies. In this work we focus on the estimation of seasonally varying transmission parameters in infectious disease models from real measles case data. We formulate both discrete-time and continuous-time models and discuss the benefits and shortcomings of both types of models. Additionally, this work demonstrates the flexibility inherent in large-scale nonlinear programming techniques and the ability of these techniques to efficiently estimate transmission parameters even in very large-scale problems. This computational efficiency and flexibility open the door to investigating many alternative model formulations and encourage the use of these techniques for estimation of larger, more complex models, such as those with age-dependent dynamics, more complex compartment structures, and spatially distributed data. However, the size of these problems can become excessively large even for these powerful estimation techniques, and parallel estimation strategies must be explored. Two parallel decomposition approaches are presented that exploit scenario-based decomposition and decomposition in time. These approaches show promise for certain types of estimation problems.
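A small-scale analogue of the estimation task described might look like the sketch below: a toy discrete-time SIR model with a seasonal transmission profile fitted to synthetic case data by least squares with scipy. The thesis uses large-scale NLP solvers on real measles data; the model, parameters, and data here are entirely illustrative:

```python
# Toy seasonal transmission estimation: fit 26 biweekly transmission parameters
# of a discrete-time SIR model to synthetic, noisy case counts.
import numpy as np
from scipy.optimize import least_squares

N_POP, GAMMA, STEPS = 1e6, 0.5, 26 * 4      # population, recovery rate, horizon

def simulate(beta_seasonal):
    S, I = 0.5 * N_POP, 50.0
    cases = []
    for t in range(STEPS):
        beta = beta_seasonal[t % 26]
        new_inf = min(beta * S * I / N_POP, S)   # cannot infect more than remain
        S, I = S - new_inf, I + new_inf - GAMMA * I
        cases.append(new_inf)
    return np.array(cases)

rng = np.random.default_rng(1)
true_beta = 1.2 + 0.3 * np.sin(2 * np.pi * np.arange(26) / 26)
data = simulate(true_beta) * rng.lognormal(0.0, 0.05, STEPS)   # noisy "case data"

# Least-squares fit of the seasonal transmission profile to the case data.
fit = least_squares(lambda b: simulate(b) - data, x0=np.ones(26), bounds=(0.0, 5.0))
print(np.round(fit.x, 2))
```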
