181

On the theory and modeling of dynamic programming with applications in reservoir operation

Sniedovich, Moshe, 1945-. January 1976.
This dissertation contains a discussion concerning the validity of the principle of optimality and the dynamic programming algorithm in the context of discrete time and state multistage decision processes. The multistage decision model developed for the purpose of the investigation is of a general structure, especially as far as the reward function is concerned. The validity of the dynamic programming algorithm as a solution method is investigated and results are obtained for a rather wide class of decision processes. The intimate relationship between the principle and the algorithm is investigated and certain important conclusions are derived. In addition to the theoretical considerations involved in the implementation of the dynamic programming algorithm, some modeling and computational aspects are also investigated. It is demonstrated that the multistage decision model and the dynamic programming algorithm as defined in this study provide a solid framework for handling a wide class of multistage decision processes. The flexibility of the dynamic programming algorithm as a solution procedure for nonroutine reservoir control problems is demonstrated by two examples, one of which is a reliability problem. To the best of the author's knowledge, many of the theoretical derivations presented in this study, especially those concerning the relation between the principle of optimality and the dynamic programming algorithm, are novel.
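The dynamic programming algorithm referred to here is backward induction over the stages of the decision process. A minimal sketch, assuming an illustrative discrete reservoir model (the storage levels, releases, inflows, and reward function are invented for illustration and are not the dissertation's model), shows how the principle of optimality turns the multistage problem into a sequence of single-stage maximizations:

```python
# Minimal backward-induction sketch for a discrete multistage decision process.
# States, decisions, rewards, and dynamics are illustrative assumptions,
# not the model used in the dissertation.

STAGES = 4                      # decision stages t = 0..STAGES-1
STATES = range(0, 5)            # storage levels 0..4
DECISIONS = range(0, 3)         # releases 0..2
INFLOW = [1, 2, 1, 0]           # deterministic inflow per stage (assumed)

def transition(s, d, t):
    """Next storage level, clipped to the admissible range."""
    return max(0, min(4, s - d + INFLOW[t]))

def reward(s, d, t):
    """Per-stage reward: benefit of release minus a small spill penalty (assumed)."""
    return 3 * d - 0.5 * max(0, s - d + INFLOW[t] - 4)

# Value function at the horizon: no terminal value (assumption).
V = {s: 0.0 for s in STATES}
policy = {}

for t in reversed(range(STAGES)):
    V_new, policy_t = {}, {}
    for s in STATES:
        # Principle of optimality: an optimal policy's tail is optimal for the tail
        # problem, so V_t(s) = max_d [ r(s, d, t) + V_{t+1}(f(s, d, t)) ].
        best_d, best_v = None, float("-inf")
        for d in DECISIONS:
            if d > s + INFLOW[t]:          # cannot release more water than is available
                continue
            v = reward(s, d, t) + V[transition(s, d, t)]
            if v > best_v:
                best_d, best_v = d, v
        V_new[s], policy_t[s] = best_v, best_d
    V, policy[t] = V_new, policy_t

print("Optimal first-stage releases by storage level:", policy[0])
```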
182

Decision making under uncertainty in dynamic multi-stage attacker-defender games

Luo, Yi. January 2011.
This dissertation presents efficient, on-line, convergent methods for finding defense strategies against attacks in dynamic multi-stage attacker-defender games with adaptive learning. This effort culminated in four papers, submitted to high-quality journals and a book, which are partially published. The first paper presents a novel fictitious play approach to describe the interactions between the attackers and the network administrator over the course of a dynamic game. A multi-objective optimization methodology is used to predict the attacker's best actions at each decision node. The administrator also keeps track of the attacker's actions, updates his knowledge of the attacker's behavior and objectives after each detected attack, and uses this information to update the prediction of the attacker's future actions and find his best-response strategies. The second paper proposes a Dynamic game tree based Fictitious Play (DFP) approach to describe the repeated interactive decision processes of the players. Each player considers all possible future interactions and their uncertainties, based on what has been learned about the opponent's decision process (including risk attitude and objectives). Instead of searching the entire game tree, appropriate future time horizons are dynamically selected for both players. The administrator keeps track of the opponent's actions, predicts the probabilities of possible future attacks, and then chooses its best moves. The third paper introduces an optimization model to maximize the deterministic equivalent of the random payoff function of a computer network administrator defending the system against random attacks. By introducing new variables, the objective function is transformed into a concave one, and a special optimization algorithm is developed which requires the computation of the unique solution of a single-variable monotonic equation. The fourth paper, which is an invited book chapter, proposes a discrete-time stochastic control model to capture the process of finding the defender's best current move. The defender's payoffs at each stage of the game depend on the attacker's and the defender's cumulative efforts and are treated as random variables due to their uncertainty. Their certainty equivalents can be approximated from their first and second moments, which are chosen as the cost functions of the dynamic system. An on-line, convergent, Scenarios based Proactive Defense (SPD) algorithm is developed based on Differential Dynamic Programming (DDP) to solve the associated optimal control problem.
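For the fictitious play idea in the first two papers, a minimal sketch on a static zero-sum matrix game (the payoff matrix and iteration count are illustrative assumptions; the dissertation's DFP approach works on dynamic game trees with learned opponent models) shows the basic best-response-to-empirical-frequencies loop:

```python
import numpy as np

# Classical fictitious play on a small zero-sum attacker-defender matrix game.
# The payoff matrix is an illustrative assumption; the dissertation's DFP approach
# works on dynamic game trees with learned opponent models, which is far richer.

# payoff[a, d]: loss inflicted on the defender when the attacker plays a, the defender d
payoff = np.array([[3.0, 1.0, 0.5],
                   [1.0, 2.5, 1.0],
                   [0.5, 1.0, 2.0]])

att_counts = np.ones(payoff.shape[0])   # empirical counts of attacker actions
def_counts = np.ones(payoff.shape[1])   # empirical counts of defender actions

for _ in range(5000):
    att_freq = att_counts / att_counts.sum()
    def_freq = def_counts / def_counts.sum()
    # Each player best-responds to the opponent's empirical mixed strategy.
    a_best = int(np.argmax(payoff @ def_freq))      # attacker maximizes expected loss
    d_best = int(np.argmin(att_freq @ payoff))      # defender minimizes expected loss
    att_counts[a_best] += 1
    def_counts[d_best] += 1

print("Attacker empirical strategy:", np.round(att_counts / att_counts.sum(), 3))
print("Defender empirical strategy:", np.round(def_counts / def_counts.sum(), 3))
```

On a zero-sum game like this, the empirical strategies settle near the game's mixed equilibrium; DFP replaces the fixed matrix with values computed along a dynamically truncated game tree.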
183

Holdout transshipment policy in two-location inventory systems

Zhang, Jiaqi. January 2009.
In two-location inventory systems, unidirectional transshipment policies are considered when an item is not routinely stocked at one location in the system. Unlike past research in this area, which has concentrated on the simple transshipment policies of complete pooling or no pooling, the research presented in this thesis endeavors to develop an understanding of a more general class of transshipment policy. The research considers two major approaches: a decomposition approach, in which the two-location system is decomposed into a system with independent locations, and a Markov decision process approach. For the decomposition approach, the transshipment policy is restricted to the class of holdout transshipment policies. The first attempt to develop a decomposition approach assumes that transshipment between the locations occurs at a constant rate, in order to decompose the system into two independent locations with constant demand rates. The second attempt modifies the assumption of a constant rate of transshipment to take account of local inventory levels, decomposing the system into two independent locations with non-constant demand rates. In the final attempt, the assumption of a constant rate of transshipment is further modified to model more closely the location providing transshipments; again the system is decomposed into two independent locations with non-constant demand rates. For each attempt, standard techniques are applied to derive explicit expressions for the average cost rate, and an iterative solution method is developed to find an optimal holdout transshipment policy. Computational results show that these approaches can provide some insight into the performance of the original system. A semi-Markov decision model of the system is developed under the assumption of exponential rather than fixed lead time. This model is later extended to the case of a phase-type distribution for the lead time. The semi-Markov decision process allows more general transshipment policies, but is computationally more demanding. Implicit expressions for the average cost rate are derived from the optimality equation for dynamic programming models. Computational results illustrate insights into the management of the two-location system that can be gained from this approach.
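A holdout transshipment policy can be illustrated with a small simulation sketch in which location 1 serves location 2 by transshipment only while its own inventory stays above a holdout level; the demand streams, replenishment rule, and threshold below are illustrative assumptions rather than the thesis's model:

```python
import random

# A minimal simulation sketch of a unidirectional holdout transshipment rule:
# location 1 satisfies location 2's demand by transshipment only while its own
# inventory stays strictly above a holdout level. All parameters are illustrative
# assumptions, not the thesis's model (which also involves replenishment lead times).

random.seed(1)
inv1, inv2 = 20, 5          # starting inventories
HOLDOUT = 8                 # location 1 never transships below this level
lost_sales = transships = 0

for day in range(200):
    d1 = random.randint(0, 2)            # demand at location 1
    d2 = random.randint(0, 2)            # demand at location 2 (not routinely stocked)
    inv1 = max(0, inv1 - d1)
    for _ in range(d2):
        if inv2 > 0:
            inv2 -= 1
        elif inv1 > HOLDOUT:             # holdout rule: transship only above the threshold
            inv1 -= 1
            transships += 1
        else:
            lost_sales += 1
    if day % 20 == 0:                    # periodic replenishment (assumed)
        inv1, inv2 = 20, 5

print(f"transshipments={transships}, lost sales={lost_sales}")
```

Setting HOLDOUT to 0 recovers complete pooling and setting it above the base stock recovers no pooling, which is the sense in which the holdout class generalizes both.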
184

Approximate dynamic programming and aerial refueling

Panos, Dennis C. 06 1900.
Aerial refueling is an integral part of the United States military's ability to strike targets around the world with an overwhelming and continuous projection of force. However, with an aging fleet of refueling tankers and an indefinite replacement schedule, the optimization of tanker usage is vital to national security. Optimizing tanker and receiver refueling operations is a complicated endeavor, as it can involve over a thousand missions during a 24-hour period, as in Operation Iraqi Freedom and Operation Enduring Freedom. Therefore, a planning model which increases receiver mission capability while reducing demands on tankers can be used by the military to extend the capabilities of the current tanker fleet. Aerial refueling optimization software, created in CASTLE Laboratory, solves the aerial refueling problem through a multi-period approximate dynamic programming approach. The multi-period approach is built around sequential linear programs, which incorporate value functions, to find the optimal refueling tracks for receivers and tankers. The use of value functions allows for a solution which optimizes over the entire horizon of the planning period. This approach varies greatly from the myopic optimization currently in use by the Air Force and produces superior results. The aerial refueling model produces fast, consistent, robust results which require fewer tankers than current planning methods. The results are flexible enough to incorporate stochastic inputs, such as varying refueling times and receiver mission loads, while still meeting all receiver refueling requirements. The model's ability to handle real-world uncertainties while optimizing better than current methods provides a great leap forward in aerial refueling optimization. The aerial refueling model, created in CASTLE Lab, can extend the capabilities of the current tanker fleet. Contract number: N00244-99-G-0019. US Navy (USN) author.
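The value-function idea behind this approximate dynamic programming approach can be sketched with a toy receiver-to-track assignment in which each candidate assignment is scored as immediate benefit plus an approximate value of the fuel left for future periods; the numbers and the linear value approximation are assumptions for illustration, not CASTLE Lab's sequential linear programming model:

```python
# A toy sketch of the value-function idea behind approximate dynamic programming:
# instead of assigning receivers to tanker tracks myopically, each candidate
# assignment is scored as (immediate benefit) + (approximate value of the fuel
# left on the track for future periods). All numbers and the linear value
# approximation are illustrative assumptions, not CASTLE Lab's model.

tracks = {"A": 120.0, "B": 80.0}          # offload fuel available per tanker track (klb)
receivers = [("r1", 30.0), ("r2", 50.0), ("r3", 40.0)]   # (receiver, fuel demand)

V_SLOPE = 0.4   # assumed marginal value of one klb of fuel kept for future periods

def score(track_fuel, demand):
    if track_fuel < demand:
        return float("-inf")              # infeasible assignment
    immediate = 10.0                      # reward for covering the mission (assumed)
    future = V_SLOPE * (track_fuel - demand)   # approximate value of remaining fuel
    return immediate + future

for name, demand in receivers:
    best = max(tracks, key=lambda t: score(tracks[t], demand))
    if score(tracks[best], demand) == float("-inf"):
        print(f"{name}: no feasible track")
        continue
    tracks[best] -= demand
    print(f"{name} ({demand} klb) -> track {best}, remaining fuel {tracks[best]:.0f} klb")
```

With V_SLOPE set to zero the choices become myopic; the nonzero slope is what steers later receivers away from a track that a purely greedy plan would have drained.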
185

Adaptation Timing in Self-Adaptive Systems

Moreno, Gabriel A. 01 April 2017.
Software-intensive systems are increasingly expected to operate under changing and uncertain conditions, including not only varying user needs and workloads, but also fluctuating resource capacity. Self-adaptation is an approach that aims to address this problem, giving systems the ability to change their behavior and structure to adapt to changes in themselves and their operating environment without human intervention. Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, some adaptation tactics—atomic adaptation actions that leave the system in a consistent state—have latency and take some time to produce their effect. In that case, reactive adaptation causes the system to lag behind environment changes. What is worse, a long-running adaptation action may prevent the system from performing other adaptations until it completes, further limiting its ability to effectively deal with the environment changes. To address these limitations and improve the effectiveness of self-adaptation, we present proactive latency-aware adaptation, an approach that considers the timing of adaptation (i) leveraging predictions of the near future state of the environment to adapt proactively; (ii) considering the latency of adaptation tactics when deciding how to adapt; and (iii) executing tactics concurrently. We have developed three different solution approaches embodying these principles. One is based on probabilistic model checking, making it inherently able to deal with the stochastic behavior of the environment, and guaranteeing optimal adaptation choices over a finite decision horizon. The second approach uses stochastic dynamic programming to make adaptation decisions, and thanks to performing part of the computations required to make those decisions off-line, it achieves a speedup of an order of magnitude over the first solution approach without compromising optimality. A third solution approach makes adaptation decisions based on repertoires of adaptation strategies—predefined compositions of adaptation tactics. This approach is more scalable than the other two because the solution space is smaller, allowing an adaptive system to reap some of the benefits of proactive latency-aware adaptation even if the number of ways in which it could adapt is too large for the other approaches to consider all these possibilities. We evaluate the approach using two different classes of systems with different adaptation goals, and different repertoires of adaptation strategies. One of them is a web system, with the adaptation goal of utility maximization. The other is a cyber-physical system operating in a hostile environment. In that system, self-adaptation must not only maximize the reward gained, but also keep the probability of surviving a mission above a threshold. In both cases, our results show that proactive latency-aware adaptation improves the effectiveness of self-adaptation with respect to reactive time-agnostic adaptation.
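The role of tactic latency in such decisions can be sketched with a small backward-induction example in which the only tactic is adding a server that takes two steps to become effective, and the controller plans against a predicted load trace; the utility model, forecast, and single-tactic repertoire are illustrative assumptions, much simpler than the thesis's model-checking and stochastic dynamic programming solutions:

```python
from functools import lru_cache

# A minimal backward-induction sketch of proactive latency-aware adaptation:
# the only tactic is "add a server", it takes LATENCY steps to take effect, and the
# controller uses a predicted load trace to decide when to start it. The utility
# model, load forecast, and single-tactic repertoire are illustrative assumptions.

LOAD = [40, 40, 90, 90, 90, 50]    # predicted requests/s per step (assumed forecast)
CAP_PER_SERVER = 50                # capacity of one server (assumed)
SERVER_COST = 10                   # cost of running one server for one step (assumed)
LATENCY = 2                        # steps before an added server becomes effective
H = len(LOAD)

@lru_cache(maxsize=None)
def V(t, servers, pending):
    """Max total utility from step t on; pending = steps until an in-flight add completes."""
    if t == H:
        return 0.0
    utility = min(LOAD[t], servers * CAP_PER_SERVER) - SERVER_COST * servers
    # option 1: do not start a new tactic this step (an in-flight add may complete)
    nxt_pending = max(0, pending - 1)
    nxt_servers = servers + (1 if pending == 1 else 0)
    best = utility + V(t + 1, nxt_servers, nxt_pending)
    # option 2: start adding a server now (only if none is already in flight)
    if pending == 0:
        best = max(best, utility + V(t + 1, servers, LATENCY - 1))
    return best

print("Maximum total utility over the horizon:", V(0, 1, 0))
```

Because the added capacity only arrives LATENCY steps after the tactic starts, the optimal plan begins the adaptation before the predicted load spike rather than reacting to it.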
186

Resilient dynamic state estimation in the presence of false information injection attacks

Lu, Jingyang. 01 January 2016.
The impact of false information injection is investigated for linear dynamic systems with multiple sensors. First, it is assumed that the system is unaware of the existence of false information and that the adversary is trying to maximize the negative effect of the false information on the Kalman filter's estimation performance under a power constraint. The false information attack under different conditions is mathematically characterized. For the adversary, many closed-form results for the optimal attack strategies that maximize the Kalman filter's estimation error are theoretically derived. It is shown that by choosing the optimal correlation coefficients among the false information and allocating power optimally among sensors, the adversary can significantly increase the Kalman filter's estimation errors. In order to detect the false information injected by an adversary, we investigate strategies for the Bayesian estimator to detect the false information and defend itself from such attacks. We assume that the adversary attacks the system with a certain probability, and that he/she adopts the worst possible strategy that maximizes the mean squared error (MSE) if the attack is undetected. An optimal Bayesian detector is designed which minimizes the average system estimation error instead of minimizing the probability of detection error, as a conventional Bayesian detector typically does. The case in which the adversary attacks the system continuously is also studied. In this case, sparse attack strategies in multi-sensor dynamic systems are investigated from the adversary's point of view. It is assumed that the defender can perfectly detect and remove the sensors once they are corrupted by false information injected by an adversary. The adversary's goal is to maximize the covariance matrix of the system state estimate by the end of the attack period, under the constraint that the adversary can attack the system only a limited number of times across sensors and over time, which leads to an integer programming problem. In order to overcome the prohibitive complexity of exhaustive search, polynomial-time algorithms, such as greedy search and dynamic programming, are proposed to find suboptimal attack strategies. Greedy search starts with an empty set; at each iteration, the sensor whose elimination leads to the maximum system estimation error is added, and the process terminates when the cardinality of the active set reaches the sparsity constraint. Greedy search based approaches such as sequential forward selection (SFS), sequential backward selection (SBS), and simplex improved sequential forward selection (SFS-SS) are discussed and corresponding attack strategies are provided. Dynamic programming is also used to obtain a suboptimal attack strategy. The validity of dynamic programming rests on a straightforward but important property of dynamic state estimation systems: the credibility of the state estimate at the current step is consistent with that at the previous step. The problem of false information attacks on, and the Kalman filter's defense of, state estimation in dynamic multi-sensor systems is also investigated from a game-theoretic perspective. The relationship between the Kalman filter and the adversary can be regarded as a two-person zero-sum game. The condition under which both sides of the game reach a Nash equilibrium is investigated.
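The greedy attack-selection idea can be sketched as follows: given a sparsity budget, repeatedly remove the sensor whose loss most increases the estimation error of the filter built from the remaining sensors. The system matrices, sensor models, and budget below are illustrative assumptions, not the dissertation's setup:

```python
import numpy as np

# A sketch of the greedy (SFS-style) attack-selection idea: with a sparsity budget k,
# repeatedly pick the sensor whose removal causes the largest increase in the
# estimation error (trace of the filtered covariance). The system, sensor models,
# and budget below are illustrative assumptions, not the dissertation's setup.

A = np.array([[1.0, 0.1], [0.0, 1.0]])        # state transition (assumed)
Q = 0.05 * np.eye(2)                           # process noise covariance (assumed)
H_all = [np.array([[1.0, 0.0]]),               # three scalar sensors (assumed)
         np.array([[0.0, 1.0]]),
         np.array([[1.0, 1.0]])]
R_all = [0.2, 0.3, 0.1]                        # measurement noise variances (assumed)

def error_trace(active):
    """Trace of the filtered covariance after iterating the Riccati recursion
    with only the active sensors."""
    P = np.eye(2)
    for _ in range(500):
        info = np.linalg.inv(P)                # information-form measurement update
        for i in active:
            info = info + H_all[i].T @ H_all[i] / R_all[i]
        P_filt = np.linalg.inv(info)
        P = A @ P_filt @ A.T + Q               # time update
    return float(np.trace(P_filt))

budget, removed = 2, set()
for _ in range(budget):
    candidates = [i for i in range(len(H_all)) if i not in removed]
    # pick the sensor whose removal hurts the estimator the most
    worst = max(candidates, key=lambda i: error_trace(set(candidates) - {i}))
    removed.add(worst)

print("Sensors attacked (removed):", sorted(removed))
print("Resulting estimation error (trace):",
      round(error_trace(set(range(len(H_all))) - removed), 4))
```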
187

Implementace rezoluce řízení toku v dynamickém jazyce / Implementing control flow resolution in dynamic language

Šindelář, Štěpán. January 2014.
Dynamic programming languages allow us to write code without type information, and the types of variables can change during execution. Although easier to use and suitable for fast prototyping, dynamic typing can lead to error-prone code and is challenging for compilers and interpreters. Programmers often use documentation comments to provide the type information, but the correspondence between the documentation and the actual code is usually not checked by tools. In this thesis, we focus on one of the most popular dynamic programming languages: PHP. We have developed a framework for static analysis of PHP code as part of the Phalanger project, the PHP-to-.NET compiler. The framework supports any kind of analysis; in particular, we implemented a type inference analysis with emphasis on discovering possible type-related errors and mismatches between the documentation and the actual code. The implementation was evaluated on real PHP applications and discovered several real errors and documentation mismatches with a low rate of false positives.
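The core of such a type-inference analysis can be sketched, in Python rather than on Phalanger's .NET infrastructure, as a flow-insensitive propagation of type sets through assignments followed by a comparison against documented types; the toy program and documented types below are assumptions for illustration only:

```python
# A toy, flow-insensitive sketch of the idea behind such a type-inference analysis:
# propagate sets of possible types through assignments and compare the result with
# documented types. Written in Python for brevity; Phalanger's framework is built on
# the PHP-to-.NET compilation pipeline and is far more sophisticated.

# a tiny "program" as (target, expression) pairs; expressions are literals or variable names
program = [
    ("x", 1),            # $x = 1;
    ("y", "hello"),      # $y = "hello";
    ("x", "oops"),       # $x = "oops";   -- $x may now be int or string
    ("z", "x"),          # $z = $x;
]

documented = {"x": {"int"}, "y": {"string"}, "z": {"int"}}   # from doc comments (assumed)

def literal_type(value):
    return {int: "int", float: "float", str: "string", bool: "bool"}[type(value)]

types = {}
for target, expr in program:
    if isinstance(expr, str) and expr in types:        # variable-to-variable copy
        inferred = set(types[expr])
    else:                                              # literal assignment
        inferred = {literal_type(expr)}
    types.setdefault(target, set()).update(inferred)   # flow-insensitive union

for var, doc_types in documented.items():
    if not types.get(var, set()) <= doc_types:
        print(f"${var}: documented as {doc_types}, inferred {types[var]}")
```

Running the sketch flags $x and $z, whose inferred type sets exceed what the documentation promises, which is exactly the kind of documentation mismatch the thesis targets.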
188

Distributed Execution of Recursive Irregular Applications

Hegde, Nikhil. 13 August 2019.
Massive computing power and applications running on this power, primarily confined to expensive supercomputers a decade ago, have now become mainstream through the availability of clusters with commodity computers and high-speed interconnects running big-data era applications. The challenges associated with programming such systems, for effectively utilizing the computing power, have led to the creation of intuitive abstractions and implementations targeting average users, domain experts, and savvy (parallel) programmers. There is often a trade-off between ease of programming and performance when using these abstractions. This thesis develops tools to bridge the gap between ease of programming and performance of irregular programs—programs that involve one or more of irregular data structures, control structures, and communication patterns—on distributed-memory systems.

Irregular programs feature heavily in domains ranging from data mining to bioinformatics to scientific computing. In contrast to regular applications such as stencil codes and dense matrix-matrix multiplications, which have a predictable pattern of data access and control flow, typical irregular applications operate over graphs, trees, and sparse matrices and involve input-dependent data access patterns and control flow. This makes it difficult to apply optimizations such as those targeting locality and parallelism to programs implementing irregular applications. Moreover, irregular programs are often used with large data sets that prohibit single-node execution due to memory limitations on the node. Hence, distributed solutions are necessary in order to process all the data.

In this thesis, we introduce SPIRIT, a framework consisting of an abstraction and a space-adaptive runtime system for simplifying the creation of distributed implementations of recursive irregular programs based on spatial acceleration structures. SPIRIT addresses the insufficiency of traditional data-parallel approaches and existing systems in effectively parallelizing computations involving repeated tree traversals. SPIRIT employs locality optimizations applied in a shared-memory context, introduces a novel pipeline-parallel approach to execute distributed traversals, and trades off performance against memory usage to create a space-adaptive system that achieves scalable performance and outperforms implementations done in contemporary distributed graph processing frameworks.

We next introduce Treelogy to understand the connection between optimizations and tree algorithms. Treelogy provides an ontology and a benchmark suite of a broader class of tree algorithms to help answer: (i) is there any existing optimization that is applicable or effective for a new tree algorithm? (ii) can a new optimization developed for a tree algorithm be applied to existing tree algorithms from other domains? We show that a categorization (ontology) based on structural properties of tree algorithms is useful for both developers of new optimizations and new tree algorithm creators. With the help of a suite of tree traversal kernels spanning the ontology, we show that GPU, shared-, and distributed-memory implementations are scalable, and that the two-point correlation algorithm with vptree performs better than the standard kdtree implementation.

In the final part of the thesis, we explore the possibility of automatically generating efficient distributed-memory implementations of irregular programs. As manually creating distributed-memory implementations is challenging due to the explicit need for managing tasks, parallelism, communication, and load balancing, we introduce a framework, D2P, to automatically generate efficient distributed implementations of recursive divide-and-conquer algorithms. D2P automatically generates a distributed implementation of a recursive divide-and-conquer algorithm from its specification, which is a high-level outline of a recursive formulation. We evaluate D2P with recursive dynamic programming (DP) algorithms. The computation in DP algorithms is not irregular per se; however, when distributed, the computation in efficient recursive formulations of DP algorithms requires irregular communication. User-configurable knobs in D2P allow for tuning the amount of available parallelism. Results show that D2P programs scale well, are significantly better than those produced using a state-of-the-art framework for parallelizing iterative DP algorithms, and outperform even hand-written distributed-memory implementations in most cases.
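The recursive formulations that such a framework starts from can be illustrated, on a single node and without any of the distribution machinery, by a memoized recursive specification of a classic DP; the matrix-chain example below is only a stand-in for the shape of such a formulation and is not D2P's specification format:

```python
from functools import lru_cache

# A single-node sketch of the kind of recursive DP formulation such frameworks start
# from: the matrix-chain ordering problem written as a recursive specification with
# memoization. This is only an illustration of the recursive shape; it is not D2P's
# specification format, and the distribution, task management, and communication
# that D2P generates are not shown here.

dims = [30, 35, 15, 5, 10, 20, 25]   # matrix i has shape dims[i] x dims[i+1] (assumed input)

@lru_cache(maxsize=None)
def cost(i, j):
    """Minimum scalar multiplications needed to compute the product of matrices i..j."""
    if i == j:
        return 0
    # divide: try every split point k, conquer the two halves, combine with the join cost
    return min(cost(i, k) + cost(k + 1, j) + dims[i] * dims[k + 1] * dims[j + 1]
               for k in range(i, j))

print("Optimal multiplication cost:", cost(0, len(dims) - 2))
```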
189

Optimisation d'un système hybride de génération d'énergie électrique permettant de minimiser la consommation et l'empreinte environnementale. / Optimization of a hybrid electrical power generation system to minimize fuel consumption and environmental footprint

Kravtzoff, Ivan. 02 July 2015.
Growing environmental issues and concerns have led to efforts to reduce CO2 and greenhouse-effect pollutant emissions in the field of electric power generation. This has led Leroy Somer to investigate systems based on hybrid technologies to reduce genset fuel consumption and operating costs. A methodology is developed in this thesis to address the issues of sizing hardware resources and of their optimal use. The optimal energy management strategy is based on Bellman's dynamic programming algorithm. It is combined with a differential evolution optimization algorithm to optimize the design of the hybrid structure. The objective functions are obtained by developing energetic and economic models. Through this method, we show that the benefits of a hybrid generator strongly depend on how it is used. In cases where the generator is used on profiles with low load factors, the benefits can be significant. It is therefore very important to have good knowledge of the application's load profiles before sizing the whole structure of the hybrid generator. A prototype of this system has been developed and has confirmed the simulation results.
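The Bellman recursion at the heart of such an energy-management strategy can be sketched as backward induction over a discretized battery state of charge; the load profile, fuel curve, and battery limits below are illustrative assumptions, not Leroy Somer's models:

```python
# A minimal backward-induction sketch of the energy-management idea: at each time step
# the controller chooses the genset power, the battery absorbs the difference with the
# load, and Bellman's recursion finds the fuel-minimal schedule over a discretized
# state of charge. Load profile, fuel curve, and battery size are illustrative assumptions.

LOAD = [5, 20, 35, 10, 25, 5]          # kW demanded at each step (assumed profile)
GENSET_LEVELS = [0, 10, 20, 30, 40]    # admissible genset power settings (kW)
SOC_STATES = range(0, 21)              # battery energy 0..20 kWh, 1 kWh resolution
H = len(LOAD)

def fuel(p):
    """Assumed fuel cost per step: idle losses plus a term penalizing low load factors."""
    return 0.0 if p == 0 else 2.0 + 0.3 * p

V = {soc: 0.0 for soc in SOC_STATES}   # terminal value: leftover charge worth nothing (assumed)
for t in reversed(range(H)):
    V_new = {}
    for soc in SOC_STATES:
        best = float("inf")
        for p in GENSET_LEVELS:
            next_soc = soc + p - LOAD[t]            # surplus charges, deficit discharges
            if 0 <= next_soc <= 20:                 # battery limits must be respected
                best = min(best, fuel(p) + V[next_soc])
        V_new[soc] = best
    V = V_new

print("Minimum fuel cost starting half-charged:", V[10])
```

Because the assumed fuel curve is expensive at low load factors, the optimal schedule runs the genset at higher settings for fewer steps and lets the battery cover the light-load periods, which is the mechanism behind the gains described above.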
190

Analyse du potentiel de développement des ressources d’hydrocarbure non conventionnelles / Economic analysis of non conventional crude oil supply

Bouchonneau, Déborah. 02 December 2011.
International energy outlooks emphasize increasing energy demand over the next decades. Crude oil should represent about 35% of primary energy supply by 2030 according to the IEA. Among supply sources, non-conventional crude oil should contribute significantly to the supply of petroleum products, being strategic in terms of reserves and energy independence. This thesis aims to evaluate the development potential of non-conventional crude oil under different scenarios regarding the economic and environmental context. Oil sands, essentially located in Canada, constitute our application case. The first part of this thesis highlights two development phases: the first, from 1980 to 2005, corresponding to the emergence of the oil sands sector through regulatory, economic, and geographical levers; the second, started in 2005 with the deterioration of the economic climate, during which oil sands development slowed down significantly. The second part of this thesis focuses on a prospective analysis to the 2050 horizon. First, a supply model based on linear programming is developed to quantify the non-conventional oil trend supply under deterministic price and environmental regulation scenarios; in particular, the investment decision is significantly affected by the introduction of a CO2 tax. Second, another supply model, based on dynamic programming, is developed to evaluate future non-conventional crude oil supply under uncertainty. A negative impact of price uncertainty and volatility on the investment decision is highlighted, with or without environmental regulation, and this negative impact is strengthened when additional uncertainty about the environmental regulatory framework is introduced.
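The dynamic programming logic behind the investment analysis, namely the option to defer investment when prices are uncertain, can be sketched as backward induction on a binomial price lattice with a per-barrel CO2 tax; all parameters are illustrative assumptions, not the thesis's calibrated model:

```python
# A compact sketch of the dynamic-programming logic behind investment under price
# uncertainty: backward induction on a binomial price lattice, comparing the value of
# investing now with the value of waiting. A CO2 tax enters as a per-barrel cost.
# All parameters (prices, costs, volatility, tax) are illustrative assumptions.

UP, DOWN, PROB_UP = 1.25, 0.80, 0.5   # price moves per period and their probability
DISCOUNT = 0.9
HORIZON = 10                           # periods in which investment may be deferred
CAPEX = 400.0                          # investment cost (per unit of capacity)
OPEX = 30.0                            # operating cost per barrel
CO2_TAX = 10.0                         # tax per barrel (set to 0.0 to see its impact)
BARRELS = 12.0                         # barrels produced per period once invested
LIFE = 15                              # producing life after investment, in periods

def project_value(price):
    """Present value of investing now at the given price (price held flat, an assumption)."""
    margin = max(0.0, price - OPEX - CO2_TAX) * BARRELS
    return sum(margin * DISCOUNT ** k for k in range(1, LIFE + 1)) - CAPEX

def option_value(t, price):
    """Value of holding the option to invest, by backward induction on the lattice."""
    invest_now = project_value(price)
    if t == HORIZON:
        return max(0.0, invest_now)
    wait = DISCOUNT * (PROB_UP * option_value(t + 1, price * UP)
                       + (1 - PROB_UP) * option_value(t + 1, price * DOWN))
    return max(invest_now, wait)

p0 = 60.0
print("Value with the option to defer:", round(option_value(0, p0), 1))
print("Value of immediate investment: ", round(project_value(p0), 1))
```

Re-running the sketch with CO2_TAX set to 0.0 raises both values, mirroring the deterministic finding that a CO2 tax weighs on the investment decision, while the gap between the two printed values illustrates how uncertainty rewards deferral.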
