1

On Real-Time Optimization using Extremum Seeking Control and Economic Model Predictive Control : With Applications to Bioreactors and Paper Machines

Trollberg, Olle January 2017 (has links)
Process optimization is used to improve the utility and economic performance of industrial processes and is as such central to most automation strategies. In this thesis, two feedback-based methods for online process optimization are considered: extremum seeking control (ESC), a classic model-free method for steady-state optimization that dates back to the early 1900s, and economic model predictive control (EMPC), a more recent method that uses a model to dynamically optimize the closed-loop process economics in real time.

Part I of the thesis concerns ESC. A well-known result by Krstić and Wang shows that the classic ESC loop possesses a stable stationary solution in a neighborhood of the optimum when applied to dynamic plants. However, existence and stability of an optimal solution alone are not sufficient to guarantee that the ESC loop converges to the optimum; uniqueness also has to be considered. In this thesis, it is shown that the near-optimal solution is not necessarily unique, not even in cases where the objective, i.e., the steady-state input-output map, is convex. The stationary solutions of the loop are shown to be characterized by a condition on the local plant phase lag, and for a biochemical reactor it is found that this condition can be satisfied not only locally at the optimum but also at arbitrary points away from the optimum. Bifurcation theory is used to show that the observed solution multiplicity may be explained by the existence of fold bifurcation points, and conditions for the existence of such points are given. The phase-lag condition for stationarity, combined with the result by Krstić and Wang, suggests that the process phase lag is connected to steady-state optimality. It is shown that the steady-state optimum corresponds to a bifurcation of the plant zero dynamics, which is reflected in large local phase-lag variations. This explains why the classical ESC method has a near-optimal stationary solution when applied to dynamic plants, and it also shows that a steady-state optimum may be located using phase information alone. Finally, we introduce greedy ESC, which is applicable to plants whose dynamics can be separated into different time scales. By optimizing only the fast plant dynamics, significant performance improvements may be achieved.

Part II of the thesis concerns EMPC. The method is first evaluated for optimization of a paper-making process by means of simulations. These reveal several important properties of EMPC, e.g., that in the presence of excess degrees of freedom EMPC automatically selects the inputs that are currently most efficient, and that EMPC effectively plans ahead, which leads to significantly improved performance during grade changes. However, it is also observed that EMPC often operates with constraints active, since economic objectives are frequently monotone, and this may lead to issues with robustness. To avoid active constraints, constraint margins are introduced to force the closed loop to operate in the interior of the feasible set. The margins affect the economic performance significantly, and the optimal choice depends on the uncertainty present in the problem. To avoid modeling the uncertainty, it is suggested that the margins be adapted based on feedback from the realized closed-loop economic performance.
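To make the classic ESC loop discussed in Part I concrete, the following is a minimal sketch of perturbation-based extremum seeking on a static quadratic map. The map, dither parameters, and gains are illustrative assumptions and are unrelated to the bioreactor and paper-machine applications treated in the thesis.

```python
import numpy as np

# Minimal sketch of classic perturbation-based extremum seeking on a static
# quadratic map. All numbers below are illustrative assumptions.
def plant(u):
    return 5.0 - (u - 2.0) ** 2               # unknown steady-state map, maximum at u = 2

dt, omega, a, k, h = 0.01, 5.0, 0.1, 1.0, 1.0  # step, dither freq/amplitude, gain, filter pole
u_hat, eta = 0.0, 0.0                          # input estimate and low-pass filter state

for step in range(int(120.0 / dt)):            # simulate 120 s
    t = step * dt
    y = plant(u_hat + a * np.sin(omega * t))   # apply dithered input, measure output
    eta += dt * h * (y - eta)                  # eta tracks the slow part of y
    grad_est = (y - eta) * np.sin(omega * t)   # demodulation: correlate ripple with dither
    u_hat += dt * k * grad_est                 # gradient ascent on the estimated slope

print(f"u_hat = {u_hat:.2f}  (true optimizer is u = 2.0)")
```

The estimate drifts toward the optimum because the correlation between the dither and the output ripple approximates the local slope of the steady-state map.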
2

Ställtidsoptimering / Setup time optimization

Bertilsson, Jimmy, Andersson, Joakim January 2012 (has links)
Emhart Glass AB is a world-leading company in glass-bottle manufacturing. It designs automated machines that form glass bottles. In Sweden there are two factories, one in Örebro and one in Sundsvall. The Örebro site primarily manufactures spare parts and new parts for the machines, while the machines are assembled in Sundsvall. In total there are 15 factories and offices around the world, with the headquarters located in Cham, Switzerland.

Since Emhart Glass Örebro has long setup times on some of its machines, this thesis examines how the changeover work is currently carried out and how it differs between operators. It also investigates whether there are any opportunities for improvement and whether a standardized procedure that operators should follow exists today. A document describing how the setup work should be carried out is also produced.

An excellent tool for shortening setup times in production is the SMED method. The philosophy behind SMED is to analyze and separate internal and external setup activities, i.e., those that can only be performed while the machine is stopped and those that can be performed while the machine is running.

To standardize the changeover work so that all operators work in a similar way, documentation of how the work should be done is required. Checklists have therefore been developed for the operator. "Checklista - Omställning.xls" is a checklist whose purpose is to let the operator tick off which parts of the preparations have been completed before the next changeover. It was designed to make it easy to keep track of what has been done if the operator has to run the machine between preparations, or ends a shift and leaves part of the work to the next operator.

If all of these improvements are implemented, a setup-time reduction of 20.5% can be expected, corresponding to about 35 minutes per changeover. If the run-in times are ignored and only the rigging times are considered, the improvement is 36.4%.
3

Integration of scheduling and control with closed-loop prediction

Dering, Daniela January 2024 (has links)
Deregulation of electricity markets, increased usage of intermittent energy sources, and growing environmental concerns have created a volatile process manufacturing environment. Survival under this new paradigm requires chemical manufacturers to shift from traditional steady-state operation to a more dynamic and flexible operating mode. Under more frequent operating changes, the transition dynamics become increasingly relevant, rendering traditional steady-state-based scheduling decision-making suboptimal. This has motivated calls for the integration of scheduling and control. In an integrated scheduling and control framework, the scheduling decisions are based on a dynamic representation of the process. While various integration paradigms have been proposed in the literature, our study concentrates on the closed-loop integration of scheduling and control. There are two main advantages to this approach: (i) seamless integration with the existing control system (i.e., it does not require a new control system infrastructure), and (ii) the framework is aware of the control system dynamics and hence has knowledge of the closed-loop process dynamics. The latter aspect is particularly important because the control system plays a key role in determining the transition dynamics. The first part of our work is dedicated to developing an integrated scheduling and control framework that computes set-point trajectories, to be tracked by the lower-level linear model predictive control system, that are robust to demand uncertainty. We employ a piecewise-linear representation of the nonlinear process model to obtain a mixed-integer linear programming (MILP) problem, thus alleviating the computational complexity compared to a mixed-integer nonlinear programming formulation. The value of the stochastic solution is used to confirm the superiority of the robust formulation over a nominal one that disregards uncertainty. In the second part of this study, we expand the framework to accommodate additional uncertainty types, including model and cost uncertainty. In the third part of this thesis, a deterministic integrated scheduling and control framework is developed for processes controlled by distributed linear model predictive control. The integrated problem is formulated as an MILP. To reduce the solution time, we introduce strategies to approximate the feedback control action. Through case studies, we demonstrate that knowledge of the control system enables the framework to effectively coordinate the MPC subsystems. The framework performs well even under plant-model mismatch. In the final part of this study, we introduce an integrated scheduling and control formulation for processes controlled by nonlinear model predictive control (NMPC). Here, discrete scheduling decisions are represented using complementarity conditions. Additionally, we use the first-order Karush-Kuhn-Tucker (KKT) conditions of the NMPC controller to compute the input values in the integrated problem. The resulting problem is a mathematical program with complementarity constraints, which we solve using a regularization approach. For all case studies, the complementarity formulation effectively captures discrete scheduling decisions, and the KKT conditions always provide a local optimum of the associated NMPC problem. In summary, this study of the integration of scheduling and control addresses various control systems, forms of uncertainty, and strategies for reducing the solution time.
Furthermore, we assess the performance of the proposed frameworks under conditions of plant-model mismatch, a common scenario in real-life applications. / Thesis / Doctor of Philosophy (PhD)
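The piecewise-linear MILP step mentioned above can be pictured with a generic convex-combination formulation for approximating a nonlinear term f(x) over breakpoints x_0 < ... < x_K. This is a standard textbook construction shown for illustration only; it is an assumption about the modeling style and does not reproduce the thesis's actual scheduling formulation.

```latex
% Generic convex-combination piecewise-linear MILP model (illustrative only).
% Breakpoints x_0 < x_1 < ... < x_K with sampled values f(x_k); y_k = 1 selects segment k.
\begin{align*}
  x &= \sum_{k=0}^{K} \lambda_k \, x_k, &
  \hat{f}(x) &= \sum_{k=0}^{K} \lambda_k \, f(x_k), \\
  \sum_{k=0}^{K} \lambda_k &= 1, &
  \sum_{k=1}^{K} y_k &= 1, \\
  \lambda_0 &\le y_1, \quad \lambda_K \le y_K, &
  \lambda_k &\le y_k + y_{k+1} \quad (1 \le k \le K-1), \\
  \lambda_k &\ge 0, &
  y_k &\in \{0,1\}.
\end{align*}
```

Because the binaries force at most two adjacent weights to be nonzero, the approximation stays on one linear segment at a time, which is what keeps the overall scheduling problem an MILP.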
4

Fault-Tolerant Average Execution Time Optimization for General-Purpose Multi-Processor System-On-Chips

Väyrynen, Mikael January 2009 (has links)
Due to semiconductor technology development, fault tolerance has become important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, instead of guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) no-error probability, we define mathematical formulas for the AET using voting (active replication), rollback-recovery with checkpointing (RRC), and a combination of these (CRV), with bus communication overhead included. Further, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET, including bus communication overhead, by: (1) selecting the number of checkpoints when using RRC or a combination that includes RRC, (2) finding the number of processors and the job-to-processor assignment when using voting or a combination that includes voting, and (3) selecting the fault-tolerance scheme (voting, RRC, or CRV) to use for each job. Experiments demonstrate significant savings in AET.
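As a rough illustration of the AET trade-off that checkpointing creates, the sketch below evaluates a deliberately simplified RRC model. The Poisson error model, full-segment re-execution, and all numbers are assumptions made for illustration; the thesis's formulas additionally account for bus communication overhead, voting, and the combined CRV scheme.

```python
import math

# Simplified AET model for rollback-recovery with checkpointing (RRC), used only
# to show the trade-off: more checkpoints shorten re-execution after an error but
# add fixed checkpointing overhead. Not the thesis's formulas.
def expected_aet_rrc(job_time, n_checkpoints, error_rate, checkpoint_cost):
    seg = job_time / n_checkpoints
    p_ok = math.exp(-error_rate * seg)       # probability a segment attempt is error-free
    # expected attempts per segment = 1 / p_ok, plus one checkpoint per segment
    return n_checkpoints * (seg / p_ok + checkpoint_cost)

job_time, error_rate, c = 100.0, 0.02, 0.5   # illustrative numbers
best_n = min(range(1, 101), key=lambda n: expected_aet_rrc(job_time, n, error_rate, c))
print(best_n, round(expected_aet_rrc(job_time, best_n, error_rate, c), 2))
```

With these made-up numbers the minimum lies at an intermediate number of checkpoints, which is exactly the kind of decision the thesis's ILP models make jointly with processor assignment and scheme selection.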
5

Run-time optimization of adaptive irregular applications

Yu, Hao 15 November 2004 (has links)
Compared to traditional compile-time optimization, run-time optimization can offer significant performance improvements when parallelizing and optimizing adaptive irregular applications, because it performs program analysis and adaptive optimizations during program execution. Run-time techniques can succeed where static techniques fail because they exploit the characteristics of the input data, the program's dynamic behavior, and the underlying execution environment. When optimizing adaptive irregular applications for parallel execution, a common observation is that the effectiveness of the optimizing transformations depends on the program's input data and its dynamic phases. This dissertation presents a set of run-time optimization techniques that match the characteristics of a program's dynamic memory access patterns to the appropriate optimization (parallelization) transformations. First, we present a general adaptive algorithm selection framework that automatically and adaptively selects at run time the best performing, functionally equivalent algorithm for each of its execution instances. The selection process is based on prediction models generated automatically off-line and on characteristics of the algorithm's input data that are collected and analyzed dynamically. In this dissertation, we specialize this framework for the automatic selection of reduction algorithms. We have identified a small set of machine-independent, high-level characterization parameters and deployed an off-line, systematic experiment process to generate prediction models. These models, in turn, match the parameters to the best optimization transformations for a given machine. The technique has been evaluated thoroughly in terms of applications, platforms, and programs' dynamic behaviors. Specifically, for reduction algorithm selection, the selected performance is within 2% of optimal and on average is 60% better than "Replicated Buffer," the default parallel reduction algorithm specified by the OpenMP standard. To reduce the overhead of speculative run-time parallelization, we have developed an adaptive run-time parallelization technique that dynamically chooses efficient shadow structures to record a program's dynamic memory access patterns for parallelization. This technique complements the original speculative run-time parallelization technique, the LRPD test, in parallelizing loops with sparse memory accesses. The techniques presented in this dissertation have been implemented in an optimizing research compiler and can be viewed as effective building blocks for comprehensive run-time optimization systems, e.g., feedback-directed optimization systems and dynamic compilation systems.
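The adaptive algorithm-selection idea can be pictured as a thin dispatch layer: characterize the input cheaply at run time, feed the features to an off-line trained predictor, and call the variant it recommends. The sketch below is a hypothetical illustration; the feature names, thresholds, and variant names are invented for this example and are not taken from the dissertation.

```python
import numpy as np

def characterize(indices, num_bins):
    """Cheap, machine-independent features of a reduction's index stream (illustrative)."""
    uniq = len(np.unique(indices))
    return {
        "density": uniq / num_bins,                  # fraction of bins actually touched
        "contention": len(indices) / max(uniq, 1),   # average updates per touched bin
    }

def predict_variant(features):
    # Stand-in for an off-line trained prediction model: thresholds chosen for illustration.
    if features["density"] < 0.1:
        return "sparse_local_buffers"
    if features["contention"] > 8.0:
        return "replicated_buffer"
    return "atomic_updates"

indices = np.random.randint(0, 1000, size=50_000)    # example reduction index stream
variant = predict_variant(characterize(indices, num_bins=1000))
print("selected reduction variant:", variant)
```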
6

Contribution to the Optimization and Flexible Management of Chemical Processes

Ferrer Nadal, Sergio 19 June 2008 (has links)
The chemical industry has become increasingly competitive over the past decades. Companies are required to adapt to changing market conditions and meet stricter product specifications. While globalization has opened new markets for the chemical industry, it has also increased the competitor pool, giving an advantage to companies with more efficient and highly integrated plants. In this context, the main aim of this thesis is to develop new concepts and computational methods that exploit process flexibility to enhance plant profitability under transient operating conditions, while ensuring that safety and product-quality requirements are consistently met. The thesis contributes to the optimization and management of production in plants ranging from small batch plants to large-capacity continuous processes.

First, this thesis addresses the management of continuous processes, in which similar products are mass produced. Continuous processes can achieve the highest consistency and product quality by taking advantage of economies of scale and reduced manufacturing costs and waste. However, in order to remain competitive, plants must dynamically adapt their processes to continuously changing market and operating conditions. The supervisory control system presented in this part of the thesis decreases the reaction time to incidents in continuous processes and re-optimizes production in real time whenever there is an opportunity for improved performance.

Next, this thesis addresses the management of semicontinuous processes, which allow more customized and flexible operation. Semicontinuous processes run with periodic start-ups and shutdowns to accommodate frequent product transitions. This thesis proposes an optimization model that creates improved production schedules by introducing a new concept of flexible manufacturing that allows production-rate profiles to be programmed within each operation campaign.

The major part of the research work deals with the operational management of batch processes, which are mainly used for the production of high-value-added chemicals. Batch processing offers the advantage of increased flexibility in product variety, production volume, and the assortment of operations that can be processed by a particular piece of equipment. The trade-off is that production scheduling is significantly complicated by the large number of batches involved and their different production paths. To avoid the complexity of managing transfer operations, the assumption of negligible transfer times is generally accepted in batch scheduling. In contrast, this thesis highlights the critical role that transfer operations play in the synchronization of tasks and in determining the feasibility of production schedules.

Continuing with batch plant operation, this thesis demonstrates that the use of flexible recipes enhances the operation of batch plants in an uncertain environment. Recipe flexibility is considered an additional opportunity for both reactive and proactive scheduling, reducing the risk of reaching economically unfavorable outcomes.

Finally, this thesis examines pipeless plants as an alternative to classical batch plants. In the search for more competitive and effective ways of production, the flexibility of batch plants to produce a large number of products is limited by the need for fixed equipment connected by piping and by frequent cleaning tasks. Pipeless plants offer enhanced flexibility because material is moved along its production path in movable vessels. This part of the thesis contributes to the optimization of the management of pipeless plants by proposing a formulation for short-term scheduling that is more efficient than those found in the literature.

In summary, this thesis provides novel modeling approaches and solution methods that exploit the existing flexibility of chemical processes to support decision-making in plant production scheduling. The main advantages of each contribution are demonstrated through case studies.
7

Discrete Search Optimization for Real-Time Path Planning in Satellites

Mays, Millie 06 September 2012 (has links)
This study develops a discrete search-based optimization method for path planning in a highly nonlinear dynamical system. The method enables real-time trajectory improvement and singular configuration avoidance for satellite rotation using Control Moment Gyroscopes. By streamlining a legacy optimization method and combining it with a local singularity management scheme, this optimization method reduces the computational burden and advances the capability of satellites to make autonomous look-ahead decisions in real time. Current optimization methods plan trajectories offline before uploading them to the satellite and are highly sensitive to disturbances. Local methods confer autonomy to the satellite but use only blind decision-making to avoid singularities. The method in this thesis seeks near-optimal trajectories that strike a balance between the optimal trajectories found by computationally intensive offline solvers and the minimal computational burden of non-optimal local solvers. The new method enables autonomous guidance for satellites, using discretization and stage division to minimize the computational burden of real-time optimization.
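The following toy sketch illustrates the general idea of discretization and stage division in a receding-horizon discrete search. The one-dimensional dynamics, control set, and penalized "singular" region are invented assumptions standing in for the far more complex CMG dynamics and singularity measure of the thesis.

```python
import itertools

# Toy receding-horizon discrete search: enumerate short control sequences from a
# discretized control set, penalize a "singular" region of the state space, apply
# only the first move. Purely illustrative assumptions throughout.
def cost(x, u):
    penalty = 100.0 if abs(x - 0.8) < 0.05 else 0.0    # stay away from a "singular" state
    return (x - 1.0) ** 2 + 0.1 * u ** 2 + penalty      # track x = 1 with small control effort

def lookahead(x0, controls=(-0.2, 0.0, 0.2, 0.4), horizon=4):
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(controls, repeat=horizon):
        x, total = x0, 0.0
        for u in seq:
            x = x + u                                   # toy integrator dynamics
            total += cost(x, u)
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq[0]                                  # receding horizon: apply first move only

x = 0.0
for _ in range(10):
    x += lookahead(x)
print(round(x, 2))   # converges near the target while routing around the penalized region
```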
8

Analysis and Optimization for Testing Using IEEE P1687

Ghani Zadegan, Farrokh January 2010 (has links)
The IEEE P1687 (IJTAG) standard proposal aims at providing a standardized interface between on-chip embedded test, debug and monitoring logic (instruments), such as scan chains and temperature sensors, and the Test Access Port of IEEE Standard 1149.1, which is mainly used for board test. A key feature of P1687 is the inclusion of Segment Insertion Bits (SIBs) in the scan path. SIBs make it possible to construct a multitude of different P1687 networks for the same set of instruments and provide flexibility in test scheduling. The work presented in this thesis consists of two parts. In the first part, the test application time of P1687 networks is analyzed for two test schedule types, namely concurrent and sequential scheduling. Furthermore, formulas and novel algorithms are presented to compute the test time for a given P1687 network and a given schedule type. The algorithms are implemented and employed in extensive experiments on realistic industrial designs. In the second part, the design of IEEE P1687 networks is studied. Designing the P1687 network that results in the least test application time for a given set of instruments is a time-consuming task in the absence of automatic design tools. In this thesis work, novel algorithms are presented for the automated design of P1687 networks that are optimized with respect to test application time and the required number of SIBs. The algorithms are implemented and demonstrated in experiments on industrial SOCs.
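As a very rough picture of why network structure and schedule type interact with test time, the toy model below counts shifted bits for a flat, single-level SIB network under a sequential and a fully concurrent schedule. The path-length rule, the omission of capture/update and other protocol cycles, and the example numbers are all simplifying assumptions; they are not the timing model or algorithms of the thesis.

```python
# Toy test-time model for a flat IEEE P1687 network: each instrument sits behind
# one SIB; the active scan path is (number of SIBs) + (registers of every opened
# instrument); each access is one shift of that path. Illustrative only.
def scan_path_length(opened, reg_lengths):
    return len(reg_lengths) + sum(reg_lengths[i] for i in opened)

def sequential_test_time(reg_lengths, patterns):
    # Open one SIB at a time; apply all of that instrument's accesses, then close it.
    return sum(patterns[i] * scan_path_length([i], reg_lengths)
               for i in range(len(reg_lengths)))

def concurrent_test_time(reg_lengths, patterns):
    # Open every SIB at once; shift until the instrument needing the most accesses is done.
    return max(patterns) * scan_path_length(range(len(reg_lengths)), reg_lengths)

regs = [8, 64, 16, 128]   # instrument register lengths (bits), made up
pats = [10, 3, 40, 5]     # scan accesses each instrument needs, made up
print(sequential_test_time(regs, pats), concurrent_test_time(regs, pats))
```

Even in this toy setting, neither schedule dominates: keeping long, rarely accessed registers in the active path makes the concurrent schedule pay for them on every shift, which is the kind of trade-off the thesis's analysis and design algorithms address.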
9

A Comparison of Waterflood Management Using Arrival Time Optimization and NPV Optimization

Tao, Qing December 2009 (has links)
Waterflooding is currently the most commonly used method to improve oil recovery after primary depletion. Reservoir heterogeneity, such as the permeability distribution, can negatively affect the performance of waterflooding. The presence of high-permeability streaks can lead to early water breakthrough at the producers and thus reduce the sweep efficiency in the field. One approach to counteract the impact of heterogeneity and improve waterflood sweep efficiency is optimal rate allocation to the injectors and producers. Through optimal rate control, we can manage the propagation of the flood front, delay water breakthrough at the producers, and increase the sweep and hence the recovery efficiency. The arrival-time optimization method uses a streamline-based approach to calculate water arrival-time sensitivities with respect to production and injection rates. It can also optimize sweep efficiency over multiple realizations to account for geological uncertainty. To extend the scope of this optimization method to more general conditions, this work uses a finite-difference simulator and streamline-tracing software to conduct the optimization. Apart from sweep efficiency, the other most widely used optimization objective is to maximize the net present value (NPV) within a given time period. Previous efforts on waterflood optimization used optimal control theory to allocate injection/production rates for fixed well configurations. The streamline-based approach produces the optimization result in a much more computationally efficient manner. In the present study, we compare the arrival-time optimization and NPV optimization results to show their strengths and limitations. The NPV optimization uses a perturbation method to calculate the gradients. The comparison is conducted on a 4-spot synthetic case. We then introduce an accelerated arrival-time optimization, which includes an acceleration term in the objective function to speed up oil production in the field. The proposed new approach has the advantage of considering both the sweep efficiency and the net present value in the field.
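For the NPV side of the comparison, the objective is the standard discounted cash-flow sum. The sketch below evaluates it for an assumed five-year rate forecast; the prices, costs, discount rate, and rates are invented for illustration and are not taken from the study.

```python
# Generic discounted NPV of a waterflood cash-flow stream (textbook formula).
# All economic parameters and rates below are illustrative assumptions.
def waterflood_npv(oil_rates, water_inj, water_prod, dt_years,
                   oil_price=70.0, inj_cost=2.0, disposal_cost=1.5, discount=0.10):
    npv = 0.0
    for k, (qo, qwi, qwp) in enumerate(zip(oil_rates, water_inj, water_prod)):
        # yearly cash flow from average daily rates (bbl/day)
        cash = (qo * oil_price - qwi * inj_cost - qwp * disposal_cost) * 365.0 * dt_years
        npv += cash / (1.0 + discount) ** ((k + 1) * dt_years)
    return npv

print(round(waterflood_npv(oil_rates=[900, 700, 500, 350, 250],
                           water_inj=[1000] * 5,
                           water_prod=[200, 400, 600, 700, 800],
                           dt_years=1.0), 0))
```

Because later revenue is discounted, an NPV objective naturally rewards accelerating production, which is why the accelerated arrival-time formulation adds an acceleration term to bridge the two objectives.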