  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1201

Optimisation combinée des approvisionnements et du transport dans une chaîne logistique / Combined optimization of procurement and transport in a supply chain

Rahmouni, Mouna 15 September 2015 (has links)
The proposed joint delivery problem (JDP) is a delivery tour planning problem over a time horizon decomposed into elementary periods, the time horizon being the common delivery period for all products. These parameters yield a linear formulation of the problem with binary decision variables. The model also incorporates constraints for meeting demand from stock and delivered quantities, as well as storage and transport capacity constraints. In order to also solve the problem of choosing the delivery tours, it is necessary to introduce into the model constraints and variables related to the sites visited during each tour. It is proposed to solve the problem in two steps. The first step is the offline computation of the minimum tour cost associated with each subset of sites: for any given subset of sites, the optimal Hamiltonian cycle linking those sites to the warehouse can be computed in advance by a traveling salesman problem (TSP) algorithm. The goal here is not to fully analyze the TSP, but rather to integrate its solution into the formulation of the JRP. In the second step, binary variables are associated with each tour and each period to determine the subset of sites selected in each period and its associated fixed cost.
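As an illustration of the first step (not taken from the thesis), the offline precomputation of an optimal tour cost for every subset of sites can be sketched as follows; the distance matrix and site count are made-up toy data, and brute force over permutations stands in for whatever TSP algorithm the thesis actually uses:

```python
from itertools import combinations, permutations

def tour_cost(route, dist):
    """Cost of the cycle warehouse(0) -> route -> warehouse(0)."""
    stops = (0,) + route + (0,)
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

def precompute_tour_costs(n_sites, dist):
    """Optimal TSP tour cost for every non-empty subset of sites 1..n_sites.

    Brute force over permutations; fine for small site sets
    (a Held-Karp dynamic program scales further).
    """
    costs = {}
    for k in range(1, n_sites + 1):
        for subset in combinations(range(1, n_sites + 1), k):
            costs[subset] = min(tour_cost(p, dist) for p in permutations(subset))
    return costs

# node 0 is the warehouse; nodes 1..3 are customer sites (toy data)
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
costs = precompute_tour_costs(3, dist)
print(costs[(1,)], costs[(1, 3)], costs[(1, 2, 3)])  # 4 16 23
```

Each precomputed value then becomes the fixed cost of the binary "tour selected in period t" variable of the second-step model.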
1202

Dvourozměrný elektronový plyn v kvantových jamách CdTe: studie ve vysokých magnetických polích / High mobility two-dimensional electron gas in CdTe quantum wells: High magnetic field studies

Kunc, Jan January 2011 (has links)
Experimental studies of two-dimensional electron gases confined in CdTe and CdMnTe quantum wells are presented. The data analysis is supported by numerical calculations of the band structure of confined states, using the local density and envelope function approximations. Four-by-four k.p calculations have been performed to justify the parabolic approximation of the valence bands. Samples were characterized by Raman scattering spectroscopy and far-infrared cyclotron resonance absorption measurements. Low-field magneto-transport shows the dominant contribution of the semi-classical Drude conductivity and three orders of magnitude weaker contributions of weak localization, electron-electron interaction and Shubnikov-de Haas oscillations. The contribution of electron-electron interactions is explained within a semi-classical model of circling electrons. The shape of Landau levels, their broadening, the transport and quantum lifetimes and the dominant long-range scattering mechanism have been determined. High-field magneto-transport displays fractional quantum Hall states at Landau levels N = 0 and N = 1. The ground states 5/3 and 4/3 have been determined to be fully spin polarized, in agreement with the approach of composite...
1203

[en] ON THE COMPARISON OF COMPUTATIONALLY EFFICIENT QUOTA-SHARING METHODOLOGIES FOR LARGE-SCALE RENEWABLE GENERATION PORTFOLIOS / [pt] COMPARAÇÃO DE METODOLOGIAS COMPUTACIONALMENTE EFICIENTES PARA RATEIO DE QUOTAS DE PORTFOLIOS DE GERAÇÃO DE ENERGIA RENOVÁVEL DE LARGA ESCALA

LUCAS FREIRE 17 July 2017 (has links)
Portfolios of renewable electricity sources are interesting risk-management mechanisms for trading in electricity contract markets. When they are formed by players belonging to different companies, their stability relies on the way the risk-mitigation benefit generated by the optimal portfolio is allocated among the individual participants. The problem of reaching a stable allocation can be mathematically formulated as finding a quota-sharing vector belonging to the core of a cooperative game, which can be expressed as a set of linear constraints that grows exponentially with the number of participants. Moreover, the right-hand side of each constraint defining the core is the value of a given coalition which, in the present work, is obtained by a two-stage stochastic optimization model. This work presents and compares computationally efficient methodologies, mainly based on mixed integer linear programming and Benders decomposition, to find quota allocation vectors that belong to the core of large-scale renewable energy portfolios. Case studies are presented with realistic data from the Brazilian power system.
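To make the core conditions concrete (a generic sketch, not the thesis's large-scale method), here is a direct check of whether an allocation vector lies in the core of a small profit game; the 3-player characteristic function is invented for illustration, and the exponential loop over coalitions is exactly why constraint-generation methods like Benders decomposition are needed at scale:

```python
from itertools import combinations

def in_core(allocation, v):
    """Check whether an allocation lies in the core of a profit game.

    v maps frozenset coalitions to their value; players are 0..n-1.
    """
    players = range(len(allocation))
    # efficiency: the grand coalition's value is fully distributed
    if abs(sum(allocation) - v[frozenset(players)]) > 1e-9:
        return False
    # coalitional rationality: one constraint per proper non-empty coalition
    for k in range(1, len(allocation)):
        for coal in combinations(players, k):
            if sum(allocation[i] for i in coal) < v[frozenset(coal)] - 1e-9:
                return False
    return True

# toy 3-player game where cooperation adds value
v = {frozenset(s): w for s, w in [
    ((0,), 1), ((1,), 1), ((2,), 1),
    ((0, 1), 3), ((0, 2), 3), ((1, 2), 3),
    ((0, 1, 2), 6),
]}
print(in_core([2, 2, 2], v))  # True
print(in_core([4, 1, 1], v))  # False: coalition {1, 2} gets 2 < v = 3
```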
1204

Recourse policies in the vehicle routing problem with stochastic demands

Salavati-Khoshghalb, Majid 09 1900 (has links)
No description available.
1205

Modelagem matemática para a localização ótima de usinas de incineração com recuperação energética de resíduos sólidos domiciliares: uma aplicação para a Região Metropolitana da Baixada Santista e Litoral Norte / Mathematical modeling for the optimal location of incineration plants with energy recovery from municipal solid waste: an application to the Santos Metropolitan Region and North Coast

Nadja Nara Lima Heiderich 27 January 2012 (has links)
This study proposes a mathematical modeling framework for the optimal location of thermal waste treatment units with energy recovery. The tool used was mixed integer programming, and the developed model was applied to the Santos Metropolitan Region and North Coast. The basic premise was that the incineration process would be operated by the government; every municipality generating household solid waste was considered a potential location for the thermal waste treatment unit, and every landfill serving the municipalities studied was considered for receiving the slag and ash from the incineration process. Four scenarios were specified for the analysis, addressing competitiveness with respect to the use of landfills and the efficiency of selective waste collection in the municipalities. The results indicate that the thermal waste treatment unit should be located in the vicinity of the cities of Santos, Praia Grande and São Vicente. Even with the option of using landfills, deploying the thermal waste treatment unit proved the more favorable alternative, with both environmental and economic aspects taken into account in the proposed model.
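A heavily simplified stand-in for the siting decision (not the thesis's MILP, which also covers landfill assignment and scenario analysis): with one plant to place and toy cost data, the location minimizing fixed plus waste-hauling cost can be picked by direct enumeration:

```python
def best_plant_location(waste, dist, fixed_cost, haul_cost):
    """Pick the candidate site with minimum fixed + transport cost.

    waste[i]: tonnes generated by municipality i
    dist[j][i]: km from candidate site j to municipality i
    haul_cost: cost per tonne-km
    """
    return min(
        range(len(dist)),
        key=lambda j: fixed_cost[j]
        + haul_cost * sum(w * dist[j][i] for i, w in enumerate(waste)),
    )

# toy data: three municipalities, two candidate sites
waste = [100, 50, 80]                 # tonnes per municipality
dist = [[0, 10, 20], [10, 0, 5]]      # site x municipality distances (km)
fixed = [1000, 1200]                  # installation cost per site
print(best_plant_location(waste, dist, fixed, haul_cost=1.0))  # 1
```

Site 1 wins despite its higher fixed cost because it sits closer to the heavy generators; the full MILP generalizes this trade-off across multiple facilities and constraints.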
1206

Planejamento da expansão de sistemas de transmissão usando técnicas especializadas de programação inteira mista / Transmission network expansion planning via efficient mixed-integer linear programming techniques

Vanderlinde, Jeferson Back [UNESP] 06 September 2017 (has links)
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).

In this research, the theoretical analysis and computational implementation of the specialized primal simplex algorithm (PSA) and dual simplex algorithm (DSA) for bounded variables are considered. These algorithms were incorporated into a Branch-and-Bound (B&B) algorithm to solve the Transmission Network Expansion Planning (TNEP) problem. The TNEP problem is modeled using the transportation model and the linear disjunctive model (DM), which produces a mixed-integer linear programming (MILP) problem. After relaxing the integrality of the investment variables of the original MILP problem, the PSA is used to solve the initial linear programming (LP) problem. A strategy was also implemented in the PSA to reduce the number of artificial variables added to the LP problem, consequently reducing the number of PSA iterations. Starting from the optimal basis of the initial LP, the DSA efficiently reoptimizes the subproblems generated by the B&B algorithm, avoiding the complete resolution of each subproblem and thus reducing CPU time and memory consumption. The proposed approach was implemented in the FORTRAN programming language and operates independently of any commercial solver.
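The B&B-with-LP-relaxation pattern described above can be illustrated on a much smaller problem (a generic sketch, not the thesis's TNEP code): a 0/1 knapsack where the fractional relaxation, solvable greedily, plays the role the simplex-solved LP plays at each node:

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound for 0/1 knapsack, pruning with the LP relaxation.

    Items are branched on in decreasing value/weight ratio; the bound at
    each node is the fractional-knapsack optimum over the remaining items.
    Assumes all weights are positive.
    """
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = [0]

    def bound(i, cap):
        # optimal value of the LP relaxation over items i..n-1
        total = 0.0
        for j in range(i, n):
            if w[j] <= cap:
                cap -= w[j]
                total += v[j]
            else:
                return total + v[j] * cap / w[j]  # fractional last item
        return total

    def branch(i, cap, val):
        if val > best[0]:
            best[0] = val  # new incumbent
        if i == n or val + bound(i, cap) <= best[0]:
            return  # prune: the LP bound cannot beat the incumbent
        if w[i] <= cap:
            branch(i + 1, cap - w[i], val + v[i])  # take item i
        branch(i + 1, cap, val)                    # skip item i

    branch(0, capacity, 0)
    return best[0]

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

In the thesis the analogue of `bound` is a full LP solved by the simplex method, and the gain from the specialized DSA is that each node's LP is reoptimized from the parent's optimal basis rather than from scratch.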
1207

Flow-shop with time delays, linear modeling and exact solution approaches / Flow-shop avec temps de transport, modélisation linéaire et approches de résolution exacte

Mkadem, Mohamed Amine 07 December 2017 (has links)
In this thesis, we study the two-machine flow-shop problem with time delays, with the objective of minimizing the makespan. First, we propose a set of mixed integer programming (MIP) formulations for the problem. In particular, we introduce a new compact mathematical formulation for the case where operations are identical per machine. The proposed mathematical formulations are then used to develop lower bounds and a branch-and-cut method. A set of valid inequalities is proposed in order to improve the linear relaxation of the MIPs. These inequalities are based on new dominance rules and on computing optimal solutions of polynomial-time-solvable sub-instances; these sub-instances are extracted by computing all maximal cliques of a particular interval graph. In addition to the valid inequalities, the branch-and-cut method includes a heuristic method and a node-pruning procedure. Finally, we propose a branch-and-bound method, for which we introduce a local-search-based heuristic and dominance rules. Experiments were conducted on a variety of classes of instances, including both literature instances and newly proposed ones on which the literature algorithms proved inefficient. These experiments show the efficiency of our approaches, which outperform the leading methods published in the research literature.
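The maximal-clique extraction step is tractable precisely because the graph is an interval graph. As a self-contained sketch (toy intervals, not the thesis's construction): by the Helly property the intervals of any clique share a common point, which for a maximal clique can be taken to be the largest left endpoint among its members, so it suffices to scan left endpoints and discard dominated cliques:

```python
def maximal_cliques(intervals):
    """Maximal cliques of an interval graph, as sets of interval indices.

    intervals: list of (start, end) closed intervals with start <= end.
    Every maximal clique equals the set of intervals covering some
    interval's left endpoint, so scan those points and keep the cliques
    not strictly contained in another.
    """
    candidates = set()
    for start, _ in intervals:
        candidates.add(frozenset(
            i for i, (s, e) in enumerate(intervals) if s <= start <= e))
    return [c for c in candidates
            if not any(c < other for other in candidates)]

# toy intervals: 0 and 1 overlap, 1 and 2 overlap, 2 and 3 overlap
intervals = [(1, 4), (2, 6), (5, 8), (7, 9)]
print(sorted(sorted(c) for c in maximal_cliques(intervals)))
# [[0, 1], [1, 2], [2, 3]]
```

An interval graph on n intervals has at most n maximal cliques, which is what keeps the number of generated valid inequalities manageable.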
1208

Proposta de um modelo de planejamento agregado da produção numa usina de açúcar e álcool vinculado à flutuação de preços em mercados à vista e no mercado futuro. / A proposal for an aggregate production planning model for a sugar and alcohol mill linked to price fluctuations in spot and futures markets.

Marcelo Dias Carvalho 09 November 2009 (has links)
The objective of this dissertation is to develop an aggregate production planning model that supports management- and board-level decisions of sugar and alcohol mills regarding the cane varieties harvested each week, purchases of sugarcane from third parties, the type of transport (own or outsourced) to use each week, the total cane crushed per week to meet demand, and the (industrial and commercial) processes chosen to produce and sell sugar and alcohol. Decisions depend on prices in the domestic, export and futures markets, the company's cash flow, the mill's capacity to store sugar and alcohol, and the possibility of using third-party storage. Decisions on cane purchases, process choice and product sales are made weekly over a rolling 52-week planning horizon, which covers the harvest season in central-southern Brazil (mid-March to mid-December, approximately 36 weeks) plus the off-season (approximately 16 weeks, from mid-December to mid-March). The search for better marketing strategies to support decision making is a constant need for entrepreneurs in the sector, who are often surprised by variations in sugar and alcohol prices in the domestic, export and futures markets. On the commercial side, this work uses the Delphi method to forecast sugar and alcohol prices, which guide decision making in the production planning and control of sugar and alcohol mills. Hedging is defined as the financial operation of protecting a company's assets against unexpected price variations. This work uses a product-mix selection model for hedging, linked to profitability and risk minimization: a semi-variance model with Markowitz scenario analysis. Decisions involving the agricultural, industrial and commercial parts are handled by a mixed integer linear programming model, solved with the mathematical programming software LINGO and its interfaces with Excel spreadsheets. Decisions on the optimal weekly hedge mix use a quadratic programming model, also solved by LINGO with its Excel interfaces. A case study was conducted at a sugar and alcohol mill in Junqueirópolis (SP) to validate the proposed model.
1209

Meta-heurísticas Iterated Local Search, GRASP e Artificial Bee Colony aplicadas ao Job Shop Flexível para minimização do atraso total. / Meta-heuristics Iterated Local Search, GRASP and Artificial Bee Colony applied to Flexible Job Shop minimizing total tardiness.

Everton Luiz de Melo 07 February 2014 (has links)
The production environment addressed herein is the Flexible Job Shop (FJS), a generalization of the Job Shop (JS). The job scheduling problem in the JS environment is classified by Garey, Johnson and Sethi (1976) as NP-hard, and the FJS is at least as difficult as the JS. The FJS is composed of a set of jobs, each consisting of operations. Each operation must be processed individually, without interruption, on a single machine from a subset of enabled machines. The main performance criterion is minimizing job tardiness. Mixed Integer Linear Programming (MILP) models are presented that minimize the total tardiness and the completion time of the last operation, the makespan. New job priority rules are proposed, as well as adaptations of rules from the literature. These rules are used by constructive heuristics and are combined with strategies aimed at exploiting specific characteristics of the FJS. To improve the solutions initially obtained, local searches and other improvement mechanisms are proposed and used in the development of metaheuristics of three different categories: Iterated Local Search (ILS), a trajectory metaheuristic; Greedy Randomized Adaptive Search Procedure (GRASP), a constructive metaheuristic; and Artificial Bee Colony (ABC), a recently proposed population metaheuristic. These methods were selected owing to their good results on various optimization problems in the literature. Computational experiments using 600 FJS instances were carried out to compare the resolution methods. The results show that exploiting the characteristics of the problem allows one of the proposed priority rules to outperform the best literature rule on 81% of the instances. The metaheuristics ILS, GRASP and ABC achieve more than 31% improvement over the initial solutions and obtain tardiness values on average only 2.24% above those of the optimal solutions. Modifications to the metaheuristics are also proposed that yield even more significant improvements without increasing execution time. Additionally, a version of the FJS with assembly and disassembly operations is studied, and the experiments performed on a set of 150 instances also indicate good performance of the developed methods.
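Of the three metaheuristics, ILS has the simplest skeleton: locally optimize, perturb, re-optimize, keep improvements. The sketch below (a generic illustration, not the thesis's FJS implementation) applies it to a toy single-machine total-tardiness problem, with adjacent-swap descent as the local search and a random swap as the perturbation:

```python
import random

def ils(initial, cost, local_search, perturb, iters=200, seed=0):
    """Iterated Local Search skeleton: perturb the incumbent,
    re-optimize with local search, keep the result if it improves."""
    rng = random.Random(seed)
    best = local_search(initial)
    for _ in range(iters):
        candidate = local_search(perturb(best, rng))
        if cost(candidate) < cost(best):
            best = candidate
    return best

# toy problem: single-machine total tardiness (made-up jobs)
jobs = [(3, 3), (2, 8), (4, 5), (1, 4)]  # (processing time, due date)

def tardiness(seq):
    t = total = 0
    for j in seq:
        p, d = jobs[j]
        t += p
        total += max(0, t - d)
    return total

def swap_descent(seq):
    """First-improvement descent over adjacent swaps."""
    seq, best = list(seq), tardiness(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            c = tardiness(seq)
            if c < best:
                best, improved = c, True
            else:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]  # undo
    return seq

def random_swap(seq, rng):
    """Perturbation: swap two random positions."""
    seq = list(seq)
    i, j = rng.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]
    return seq

best = ils(list(range(len(jobs))), tardiness, swap_descent, random_swap)
print(best, tardiness(best))  # an optimal order; total tardiness 5
```

GRASP replaces the perturbation with randomized greedy construction, and ABC maintains a population of such solutions; the local-search component is shared by all three.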
1210

Investigations on CPI Centric Worst Case Execution Time Analysis

Ravindar, Archana January 2013 (has links) (PDF)
Estimating program worst-case execution time (WCET) is an important problem in the domain of real-time and embedded systems that are deadline-centric. If the WCET of a program is found to exceed the deadline, the program is either recoded or the target architecture is modified to meet the deadline. Predominantly, there exist three broad approaches to estimating WCET: static WCET analysis, hybrid measurement-based analysis, and statistical WCET analysis. Though measurement-based analyzers benefit from knowledge of run-time behavior, the amount of instrumentation remains a concern. This thesis proposes a CPI-centric WCET analyzer that estimates WCET as the product of the worst-case instruction count (IC), estimated using static analysis, and the worst-case cycles per instruction (CPI), computed as a function of measured CPI. In many programs, IC and CPI values are observed to be correlated, and five different kinds of correlation are found. This correlation enables optimizing WCET from the product of worst-case IC and worst-case CPI to the product of worst-case IC and the corresponding CPI. A prime advantage of viewing time in terms of CPI is that it enables exploiting program phase behavior. In many programs, CPI varies in phases during execution. Within each phase the variation is homogeneous and lies within a few percent of the mean, while the coefficient of variation of CPI across phases is much greater than within a phase. Using this observation, we estimate program WCET in terms of its phases. Owing to the nature of CPI variation within a phase in such programs, a simple probabilistic inequality, the Chebyshev inequality, can be used to bound CPI within a desired probability. In some programs that execute many paths depending on if-conditions, CPI variation is observed to be high. The thesis proposes a PC signature, a low-cost way of profiling path information, which is used to isolate points of high CPI variation and divide a phase into smaller sub-phases of lower CPI variation. Applying the Chebyshev inequality to the sub-phases results in much tighter bounds. A provision to divide a phase into smaller sub-phases based on the allowable variance of CPI within a sub-phase also exists. The proposed technique is implemented on simulators and on a native platform. Other advantages of phases in the context of timing analysis are also presented, including parallelized WCET analysis and estimation of the remaining worst-case execution time for a particular program run.
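The Chebyshev step above reduces to a few lines. As an illustrative sketch (invented CPI samples, not the thesis's measurements): since P(|X − μ| ≥ kσ) ≤ 1/k² for any distribution, choosing k = 1/√(1 − confidence) yields a distribution-free upper bound on CPI, and the WCET bound in cycles is the worst-case IC times that CPI bound:

```python
import math
import statistics

def wcet_bound(worst_ic, cpi_samples, confidence=0.99):
    """WCET bound (in cycles) = worst-case IC x Chebyshev upper bound on CPI.

    Chebyshev's inequality holds for any distribution:
    P(CPI >= mu + k*sigma) <= 1/k**2, so k = 1/sqrt(1 - confidence)
    bounds CPI at the desired probability level.
    """
    mu = statistics.mean(cpi_samples)
    sigma = statistics.stdev(cpi_samples)
    k = 1.0 / math.sqrt(1.0 - confidence)
    return worst_ic * (mu + k * sigma)

# measured CPI samples of one homogeneous phase (variation within a few %)
cpi = [1.10, 1.12, 1.08, 1.11, 1.09, 1.10, 1.13, 1.07]
print(wcet_bound(1_000_000, cpi))  # ~1.3e6 cycles (mu=1.10, sigma=0.02, k=10)
```

The bound tightens as the within-phase σ shrinks, which is why splitting high-variation phases into sub-phases of lower CPI variation, as the thesis does, pays off.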