31.
Fast Optimization Methods for Model Predictive Control via Parallelization and Sparsity Exploitation / 並列化とスパース性の活用によるモデル予測制御の高速最適化手法
Deng, Haoyang 23 September 2020 (has links)
Kyoto University / 0048 / New-system course doctorate / Doctor of Informatics / Degree No. Ko-22808 / Joho-haku No. 738 / 新制||情||126 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Toshiyuki Ohtsuka, Professor Manabu Kano, Professor Yoshito Ohta / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
32.
The simultaneous prediction of equilibrium on large-scale networks: a unified consistent methodology for transportation planning
Safwat, Kamal Nabil Ali January 1982 (has links)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1982. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaves 202-205. / by Kamal Nabil Ali Safwat. / Ph.D.
33.
Market driven elastic secure infrastructure
Tikale, Sahil 30 May 2023 (has links)
In today’s Data Centers, a combination of factors leads to the static allocation of physical servers and switches into dedicated clusters, making it difficult to add or remove hardware from these clusters for short periods of time. This silofication of the hardware leads to inefficient use of clusters. This dissertation proposes a novel architecture for improving the efficiency of clusters by enabling them to add or remove bare-metal servers for short periods of time. By implementing a working prototype of the architecture, we demonstrate that such silos can be broken and that it is possible to share servers between clusters that are managed by different tools, have different security requirements, and are operated by tenants of the Data Center who may not trust each other.
Physical servers and switches in a Data Center are grouped for a combination of reasons. They are used for different purposes (staging, production, research, etc.); host applications required for servicing specific workloads (HPC, Cloud, Big Data, etc.); and/or are configured to meet stringent security and compliance requirements. Additionally, provisioning systems and tools such as OpenStack Ironic, MaaS, and Foreman take control of the servers they manage, making it difficult to add or remove hardware from their control. Moreover, these clusters are typically stood up with sufficient capacity to meet the anticipated peak workload.
This leads to inefficient usage of the clusters. They are under-utilized during off-peak hours, and when demand exceeds capacity the clusters suffer from degraded quality of service (QoS) or may violate service level objectives (SLOs). Although today’s clouds offer huge benefits in terms of on-demand elasticity, economies of scale, and a pay-as-you-go model, many organizations are reluctant to move their workloads to the cloud. Organizations that (i) need total control of their hardware, (ii) have custom deployment practices, (iii) must meet stringent security and compliance requirements, or (iv) do not want to pay the high costs of running workloads in the cloud prefer to own their hardware and host it in a data center. This includes a large section of the economy, including financial companies, medical institutions, and government agencies, that continues to host its own clusters outside of the public cloud. Since all clusters are unlikely to experience peak demand at the same time, there is an opportunity to improve the efficiency of clusters by sharing resources between them.
The dissertation describes the design and implementation of the Market Driven Elastic Secure Infrastructure (MESI), both as an alternative to the public cloud and as an architecture for the lowest layer of the public cloud that improves its efficiency. It allows mutually non-trusting physically deployed services to share the physical servers of a data center efficiently. The approach proposed here is to build a system composed of a set of services, each fulfilling a specific functionality. A tenant of MESI has to trust only a minimal functionality of the tenant that offers the hardware resources; the rest of the services can be deployed by each tenant itself.
MESI is based on the idea of enabling tenants to share hardware they own with tenants they may not trust, and between clusters with different security requirements. The architecture gives tenants control and freedom of choice over whether to deploy and manage these services themselves or use them from a trusted third party. MESI services fit into three layers that build on each other to provide: 1) Elastic Infrastructure, 2) Elastic Secure Infrastructure, and 3) Market-driven Elastic Secure Infrastructure.
(1) The Hardware Isolation Layer (HIL), the bottommost layer of MESI, is designed for moving nodes between the multiple tools and schedulers used for managing clusters. HIL controls the layer-2 switches and bare-metal servers so that tenants can elastically adjust the size of their clusters in response to the changing demand of the workload. It enables the movement of nodes between clusters with minimal to no modification of the tools and workflows used to manage these clusters. (2) The Elastic Secure Infrastructure (ESI) builds on HIL to enable sharing of servers between clusters with different security requirements and mutually non-trusting tenants of the Data Center. ESI enables the borrowing tenant to minimize its trust in the node provider and to take control of the trade-offs between cost, performance, and security. This enables sharing of nodes between tenants that are not only part of the same organization but may be different organizations co-located in the same Data Center. (3) The Bare-metal Marketplace is an incentive-based system that uses economic principles of the marketplace to encourage tenants to share their servers with others, not just when they do not need them but also when others need them more. It lets tenants define their own cluster objectives and sharing constraints, and gives them the freedom to decide how many nodes they wish to share with others.
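As an illustration of the node-moving layer just described, here is a minimal sketch of a HIL-style allocation interface. All names (Hil, allocate, release) are hypothetical and greatly simplified; the actual HIL drives real layer-2 switches and bare-metal servers rather than an in-memory table.

```python
# A minimal sketch of a HIL-style isolation API; all names are hypothetical,
# not the dissertation's actual interface.

class Hil:
    """Tracks which bare-metal nodes belong to which tenant project."""

    def __init__(self, nodes):
        self.free = set(nodes)   # nodes not allocated to any project
        self.owner = {}          # node -> project name

    def allocate(self, project, node):
        # Move a free node into a project's isolated layer-2 network.
        if node not in self.free:
            raise ValueError(f"{node} is not free")
        self.free.remove(node)
        self.owner[node] = project

    def release(self, project, node):
        # Return a node to the free pool so another cluster can grab it.
        if self.owner.get(node) != project:
            raise ValueError(f"{node} is not owned by {project}")
        del self.owner[node]
        self.free.add(node)


hil = Hil(["node1", "node2", "node3"])
hil.allocate("hpc-cluster", "node1")   # the HPC cluster grows by one server
hil.release("hpc-cluster", "node1")    # ...and shrinks again off-peak
```

The point of the design is that the provisioning tools managing each cluster never need to know about each other; only node ownership changes.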
MESI is evaluated using prototype implementations at each layer of the architecture. (i) The HIL prototype, implemented in only 3000 lines of code (LOC), is able to support many provisioning tools and schedulers with little to no modification, adds no overhead to the performance of the clusters, and is in active production use at the MOC managing over 150 servers and 11 switches. (ii) The ESI prototype builds on the HIL prototype and adds an attestation service, a provisioning service, and a deterministically built open-source firmware. Results demonstrate that it is possible to build a cluster that is secure, elastic, and fairly quick to set up, with the tenant requiring only minimal trust in the provider for the availability of the node. (iii) The MESI prototype demonstrates the feasibility of a one-of-a-kind multi-provider marketplace for trading bare-metal servers in which the providers also use the nodes. It uses agents that trade bare-metal servers in the marketplace to meet the requirements of their clusters. The evaluation shows that all clusters benefit from participating in the marketplace: compared to operating as silos, individual clusters see a 50% improvement in total work done, up to a 75% reduction in queue waiting times, and up to a 60% improvement in the aggregate utilization of the testbed.
This dissertation makes the following contributions: (i) It defines the MESI architecture, which allows mutually non-trusting tenants of the data center to share resources between clusters with different security requirements. (ii) It demonstrates that it is possible to design a service that breaks the silos of static cluster allocation yet has a small Trusted Computing Base (TCB) and adds no overhead to the performance of the clusters. (iii) It provides a unique architecture that puts the tenant in control of its own security and minimizes the trust needed in the provider for sharing nodes. (iv) It presents a working prototype of a multi-provider marketplace for bare-metal servers, a first proof-of-concept demonstrating that real bare-metal nodes can be traded at practical time scales, with node movement between clusters fast enough that useful work still gets done. (v) Finally, results show that it is possible to encourage even mutually non-trusting tenants to share their nodes with each other without any central authority making allocation decisions. Many smart, dedicated engineers and researchers have contributed to this work over the years. I have jointly led the efforts to design the HIL and ESI layers, and led the design and implementation of the bare-metal marketplace and the overall MESI architecture.
34.
Stochastic Modeling and Decentralized Control Policies for Large-Scale Vehicle Sharing Systems via Closed Queueing Networks
George, David K. 26 June 2012 (has links)
No description available.
35.
PC-ICICLE: an interactive color integrated circuit layout editor for personal computers
Harimoto, Seiyu 17 November 2012 (links)
An interactive color graphics layout editor for VLSI has been implemented on the IBM PC. The software, PC-ICICLE, is written in Microsoft PASCAL and 8086/88 assembly language under the DOS 2.0 environment. The basic hardware requirement is the standard configuration of the IBM PC with 256K bytes of memory and a color graphics monitor and adapter. By requiring no special hardware, PC-ICICLE makes layout editors more readily available to VLSI chip designers. PC-ICICLE has also been executed on the IBM PC-XT, IBM PC-AT, and Zenith's IBM-compatible PC without any modifications. / Master of Science
36.
Analysis of a nonhierarchical decomposition algorithm
Shankar, Jayashree 19 September 2009 (links)
Large-scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two-dimensional quadratic programs.
Here, the algorithm is carefully analyzed by testing it on simple quadratic programs, thereby exposing its weaknesses. Different modifications are made to improve its robustness, and the best version is tested on a higher-dimensional example. Some of the changes are fundamental, affecting the updating of the various tuning parameters present in the original algorithm.
The algorithm solves a given problem by dividing it into subproblems followed by a final coordination phase. The results indicate good success with small problems. On testing with a higher-dimensional example, a basic flaw was discovered in the coordination phase that needs to be rectified. / Master of Science
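The subproblem/coordination split described above is common to many decomposition schemes. Purely as an illustration of that pattern, here is a minimal dual-decomposition loop on a toy quadratic program; this is not Sobieszczanski-Sobieski's algorithm, just a related sketch.

```python
# Illustrative only: a dual-decomposition loop showing the generic
# subproblem/coordination split. Problem:
#   min (x-1)^2 + (y-2)^2   subject to the coupling constraint x + y = 2.

alpha, lam = 0.5, 0.0          # step size and dual (coordination) variable

for _ in range(100):
    # Subproblem phase: each block minimizes its own term plus the dual price.
    x = 1.0 - lam / 2.0        # argmin_x (x-1)^2 + lam*x
    y = 2.0 - lam / 2.0        # argmin_y (y-2)^2 + lam*y
    # Coordination phase: adjust the price until the coupling constraint holds.
    lam += alpha * (x + y - 2.0)

print(x, y, lam)               # -> 0.5 1.5 1.0, the constrained optimum
```

The coordination phase is exactly where the analyzed algorithm was found to be flawed on larger examples; in this toy version it is a single dual-price update.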
37.
Theories of Optimal Control and Transport with Entropy Regularization / エントロピー正則化を伴う最適制御・輸送理論
Ito, Kaito 26 September 2022 (links)
Kyoto University / New-system course doctorate / Doctor of Informatics / Degree No. Ko-24263 / Joho-haku No. 807 / 新制||情||136 (University Library) / Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University / (Chief examiner) Associate Professor Kenji Kashima, Professor Yoshito Ohta, Professor Nobuo Yamashita / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
38.
Nonlinear dynamical systems and control for large-scale, hybrid, and network systems
Hui, Qing 08 July 2008 (links)
In this dissertation, we present several main research thrusts involving thermodynamic stabilization via energy-dissipating hybrid controllers and nonlinear control of network systems. Specifically, a novel class of fixed-order, energy-based hybrid controllers is presented as a means for achieving enhanced energy dissipation in Euler-Lagrange, lossless, and dissipative dynamical systems. These dynamic controllers combine a logical switching architecture with continuous dynamics to guarantee that the system plant energy is strictly decreasing across switchings. In addition, we construct hybrid dynamic controllers that guarantee that the closed-loop system is consistent with basic thermodynamic principles. In particular, the existence of an entropy function for the closed-loop system is established that satisfies a hybrid Clausius-type inequality. Special cases of energy-based hybrid controllers involving state-dependent switching are described, and the framework is applied to aerospace system models. The overall framework demonstrates that energy-based hybrid resetting controllers provide an extremely efficient mechanism for dissipating energy in nonlinear dynamical systems.
Next, we present finite-time coordination controllers for multiagent network systems. Recent technological advances in communications and computation have spurred a broad interest in autonomous, adaptable vehicle formations. Distributed decision-making for coordination of networks of dynamic agents addresses a broad area of applications, including cooperative control of unmanned air vehicles, microsatellite clusters, mobile robotics, and congestion control in communication networks. In this part of the dissertation we focus on finite-time consensus protocols for networks of dynamic agents with undirected information flow. The proposed controller architectures are predicated on the recently developed notion of system thermodynamics, resulting in thermodynamically consistent continuous controller architectures involving the exchange of information between agents that guarantee that the closed-loop dynamical network is consistent with basic thermodynamic principles.
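For context, a commonly studied finite-time consensus protocol for undirected networks drives each agent by a fractional power of its neighbors' state differences. The dissertation's thermodynamics-based controllers differ in detail, so the simulation below is only an illustrative sketch of the finite-time consensus idea, not the proposed architecture.

```python
# Sketch of a standard finite-time consensus protocol on an undirected graph:
# dx_i/dt = sum over neighbors j of sign(x_j - x_i) * |x_j - x_i|^alpha,
# with alpha in (0, 1). Illustrative only.
import math

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # undirected ring of four agents
x = [4.0, 1.0, -2.0, 5.0]                  # initial agent states
alpha, dt = 0.5, 0.01                      # fractional exponent, Euler step

for _ in range(2000):
    dx = [0.0] * len(x)
    for i, j in edges:
        # Each edge pulls the two endpoint states together; the antisymmetric
        # contributions preserve the state average.
        d = x[j] - x[i]
        f = math.copysign(abs(d) ** alpha, d)
        dx[i] += f
        dx[j] -= f
    x = [xi + dt * dxi for xi, dxi in zip(x, dx)]

print(x)   # all states close to the preserved average, 2.0
```

The fractional power is what distinguishes finite-time convergence from the asymptotic convergence of the usual linear consensus protocol.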
39.
Adaptive Fault Tolerance Strategies for Large Scale Systems
George, Cijo January 2012 (links) (PDF)
Exascale systems of the future are predicted to have a mean time between node failures (MTBF) of less than one hour. At such a low MTBF, the number of processors available for execution of a long-running application can vary widely throughout its execution. Employing traditional fault tolerance strategies like periodic checkpointing in these highly dynamic environments may not be effective because of the high number of application failures, resulting in a large amount of work lost to rollbacks in addition to the increased recovery overheads. In this context, it is essential to have fault tolerance strategies that can adapt to changing node availability and also help avoid a significant number of application failures. In this thesis, we present two adaptive fault tolerance strategies that make use of node failure prediction mechanisms to provide proactive fault tolerance for long-running parallel applications on large-scale systems.
The first part of the thesis deals with an adaptive fault tolerance strategy for malleable applications. We present ADFT, an adaptive fault tolerance framework for long-running malleable applications that maximizes application performance in the presence of failures. We first develop cost models that consider different factors, such as the accuracy of node failure predictions and application scalability, for evaluating the benefits of various fault tolerance actions including checkpointing, live migration, and rescheduling. Our adaptive framework then uses the cost models to make runtime decisions, dynamically selecting fault tolerance actions at different points of application execution to minimize application failures and maximize performance. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to 23% improvement in work done by the application in the presence of failures, and is effective even for petascale and exascale systems.
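The abstract does not reproduce the cost models themselves. As a rough point of reference for the periodic-checkpointing baseline that such models compare against, the sketch below uses the classical Young/Daly approximation for the optimal checkpoint interval; the numbers are illustrative, not results from the thesis.

```python
# Classical Young/Daly first-order approximation for the optimal periodic
# checkpoint interval: tau ~ sqrt(2 * delta * MTBF), where delta is the cost
# of writing one checkpoint. Illustrative baseline only.
import math

def optimal_checkpoint_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young/Daly optimum for the checkpoint period, in seconds."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Exascale-like regime from the abstract: system MTBF under one hour.
tau = optimal_checkpoint_interval(checkpoint_cost_s=60.0, mtbf_s=3600.0)
print(f"checkpoint every {tau:.0f} s")   # ~657 s, i.e. roughly every 11 minutes
```

At such short intervals a large fraction of machine time goes to checkpoint I/O, which is the inefficiency that prediction-driven proactive actions like live migration aim to reduce.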
In the second part of the thesis, we present a fault tolerance strategy using adaptive process replication that can provide fault tolerance for applications using partial replication of a set of application processes. This fault tolerance framework adaptively changes the set of replicated processes (replicated set) periodically based on node failure predictions to avoid application failures. We have developed an MPI prototype implementation, PAREP-MPI that allows dynamically changing the replicated set of processes for MPI applications. Experiments with real scientific applications on real systems have shown that the overhead of PAREP-MPI is minimal. We have shown using simulations with real and synthetic failure traces that our strategy involving adaptive process replication significantly outperforms existing mechanisms providing up to 20% improvement in application efficiency even for exascale systems. Significant observations are also made which can drive future research efforts in fault tolerance for large and very large scale systems.
40.
Résolution de grands problèmes en optimisation stochastique dynamique et synthèse de lois de commande / Solving large-scale dynamic stochastic optimization problems
Girardeau, Pierre 17 December 2010 (links)
This work provides resolution methods for Stochastic Optimal Control (SOC) problems. We consider a dynamical system on a discrete and finite horizon, influenced by exogenous noises and by the actions of a decision maker. The aim is to control this system so as to minimize a given function of its behaviour over the whole time horizon. We suppose that, at every instant, observations are made on the system, and possibly kept in memory. Since it is generally profitable for the decision maker to take these observations into account when choosing future actions, we seek strategies, or decision rules, rather than simple decisions: functions that map every instant and every possible observation of the system to a decision to make. This manuscript presents three contributions. The first concerns the convergence of scenario-based numerical methods. We compare the use of the so-called scenario-tree technique to the particle method. The former has been widely studied in the Stochastic Programming community and has been popular in applications, but recent developments, both theoretical and numerical, show that this methodology behaves poorly as the number of time steps grows. We explain in detail where this flaw comes from and show that it is not to be attributed to the use of scenarios as such, but rather to the tree structure. Indeed, we show on numerical examples how the particle method, a more recently developed variational technique also based on scenarios, behaves better even with a large number of time steps. The second contribution starts from the observation that, even with particle methods, we still face what is commonly called, in optimal control, the curse of dimensionality: decision rules intrinsically suffer from the dimension of their domain, that is, the observations (or the state, in the Dynamic Programming framework). For a certain class of systems, called decomposable systems, we adapt results on the decomposition of large-scale systems, well known in the deterministic case, to the stochastic case. The application is not straightforward and notably requires sophisticated statistical tools in order to handle the dual variable, which in our setting is a stochastic process. We propose an original algorithm called Dual Approximate Dynamic Programming (DADP), study its convergence, and apply it to a realistic power management problem over a multi-year horizon. The third contribution concerns a structural property of SOC problems: the dynamic consistency of a sequence of decision-making problems over time. Our aim is to establish a link between the notion of dynamic consistency, which we define informally in the last chapter, and the concept of state variable, which is central in the optimal control context. This contribution is original in the following sense: whereas many works in the literature seek optimization models that preserve the "natural" dynamic-consistency property of the sequence of decision-making problems, we show, for a broad class of stochastic optimization models that are not a priori dynamically consistent, that dynamic consistency can be regained by suitably extending the state structure of the system.
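To make the notions of state and decision rule concrete, here is a toy backward dynamic-programming recursion for a scalar stochastic system. It illustrates the generic state recursion underlying the abstract's discussion, not the DADP algorithm or the particle method themselves.

```python
# Toy backward induction for a scalar stochastic control problem:
# dynamics x' = clip(x + u + w), quadratic stage cost x^2 + u^2,
# exogenous noise w with a fixed distribution. Illustrative only.

states = range(-3, 4)                        # discretized state grid
actions = (-1, 0, 1)                         # admissible controls
noises = ((-1, 0.25), (0, 0.5), (1, 0.25))   # (w, probability) pairs
T = 5                                        # horizon length

V = {x: float(x * x) for x in states}        # terminal cost x^2
for t in reversed(range(T)):
    V_new = {}
    for x in states:
        best = float("inf")
        for u in actions:
            # Expected cost-to-go over the exogenous noise w.
            q = x * x + u * u
            for w, p in noises:
                xn = max(-3, min(3, x + u + w))   # clip next state to the grid
                q += p * V[xn]
            best = min(best, q)
        V_new[x] = best
    V = V_new

print(V[2])   # optimal expected cost starting from x = 2 at time 0
```

The dictionary V is exactly the "state structure" the abstract refers to: once the problem is summarized by x, the optimal decision rule at each time is a function of x alone, and the curse of dimensionality appears as soon as x has many components.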