1 |
Performance Optimization of a Service in Virtual and Non-Virtual Environment. Tamanampudi, Monica; Sannareddy, Mohith Kumar Reddy. January 2019.
In recent times, cloud computing has become an accessible technology that makes it possible to provide online services to end users through a network of remote servers. As the number of remote servers and the resources allocated to them grow, the performance of a service can degrade. In such cases, the environment in which the service runs plays a significant role in delivering better performance and contributes to the Quality of Service (QoS). This paper focuses on bare-metal and Linux container environments, with request response time as one of the performance metrics used to determine QoS. To improve request response time, the platforms are customized with a real-time kernel and compiler optimization flags. UDP packets are sent to the service running in these customized environments. The experiments conclude that bare metal with a real-time kernel and the level 3 compiler optimization flag gives the best performance for the service.
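As a point of reference, the following is a minimal sketch of how UDP request-response time can be measured against a service. It is not the measurement tool used in the thesis; the service address, payload size, and sample count are illustrative assumptions.

```python
# Minimal sketch: measure UDP request-response time against a service.
# Address, payload, and sample count are illustrative assumptions,
# not the setup used in the thesis.
import socket
import statistics
import time

SERVICE_ADDR = ("127.0.0.1", 9000)   # assumed address of the service under test
PAYLOAD = b"x" * 64                  # assumed request payload
SAMPLES = 1000

def measure_rtts(samples=SAMPLES):
    rtts = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(1.0)                 # skip samples that time out
        for _ in range(samples):
            start = time.perf_counter()
            sock.sendto(PAYLOAD, SERVICE_ADDR)
            try:
                sock.recvfrom(4096)          # wait for the echoed response
            except socket.timeout:
                continue
            rtts.append((time.perf_counter() - start) * 1e3)  # milliseconds
    return rtts

if __name__ == "__main__":
    rtts = measure_rtts()
    if not rtts:
        raise SystemExit("no responses received")
    print(f"samples:  {len(rtts)}")
    print(f"mean RTT: {statistics.mean(rtts):.3f} ms")
    print(f"p99 RTT:  {sorted(rtts)[int(0.99 * len(rtts)) - 1]:.3f} ms")
```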
2 |
Design and evaluation of a public resource computing framework. Baldassari, James D. January 2006.
Thesis (M.S.)--Worcester Polytechnic Institute. Keywords: distributed systems, network computing, volunteer computing, public resource computing. Includes bibliographical references (p. 108-109).
3 |
A Designer-Augmenting Framework for Self-Adaptive Control Systems. Haoguang Yang (19747588). 02 October 2024.
<p dir="ltr">Robotic software design and implementation have traditionally relied on human engineers to fine-tune parameters, optimize hardware utilization, and mitigate unprecedented situations. As we face more demanding and complex applications, such as distributed robotic fleets and autonomous driving, explicit fine-tuning of autonomous systems yields diminishing returns. To make autonomous systems smarter, a design-time and run-time framework is required to extract constraints from high-level human decisions, and self-adapt on-the-fly to maintain desired specifications. Specifically, for controllers that govern cyber-physical interactions, making them self-adaptive involves two challenges. Firstly, controller design methods have historically neglected computing hardware constraints that realize real-time execution. Hence, intensive manual tuning is required to materialize a controller prototype with balanced control performance and computing resource consumption. Secondly, precisely modeling the physical system dynamics at edge cases is difficult and costly. However, with modeling discrepancies, controllers fine-tuned at design time may fail at run time, causing safety concerns. While humans are inherently adept at reacting and getting used to unknown system dynamics, how to transfer this knowledge to robots is still unresolved.</p><p dir="ltr">To address the two challenges, we propose a designer-augmenting framework for self-adaptive control systems. Our framework includes a resource/performance co-design tool and a model-free controller self-adaptation method for real-time control systems. Our resource/performance co-design tool automatically exploits the Pareto front of controllers, between real-time computing resource utilization and achievable control performance. The co-design tool simplifies the iterative partitioning and verification of controller performance and distributed resource budget, enabling human engineers to directly interface with high-level design decisions between quality and cost. Our controller self-adaptation method extracts objectives and tolerances from human demonstrations and applies them to real-time controller switching, allowing human experts to design fault mitigation behaviors directly through coaching. The objective extraction and real-time adaptation do not rely on prior knowledge of the plant, making them inherently robust against mismatch between the design reference model and the physical system.</p><p dir="ltr">Only with the prerequisite of real-time schedulability under Worst-Case Execution Time (WCET), will the digital controller deliver the designed dynamics. To determine the real-time schedulability of controllers during the design-time iteration and run-time self-adaptation, we propose a novel estimate of WCET based on the Mixed Weibull distribution of profiling statistics and a linear composition model. Our hybrid approach applies to design-time estimation of arbitrary-scaled controllers, yielding results as accurate as a state-of-the-art method while being more robust under small profiling sample sizes. Finally, we propose a resource consolidator that accounts for real-time schedulable bounds to utilize available computing resources while preventing deadline misses efficiently. Our consolidator, formulated as a vector packing problem, exploits different parallelization techniques on a CPU/FPGA hybrid architecture to obtain the most compact allocation plan for a given controller complexity and throughput. 
</p><p dir="ltr">By jointly considering all four aspects, our framework automates the co-optimization of controller performance and computing hardware requirements throughout the life cycle of a control system. As a result, the engineering time required to design and deploy a controller is significantly reduced, while the adaptivity of human engineers is extended to fault mitigation at run-time.</p>
4 |
Performance Modelling for Optimized Resource Management and Application Deployment in Cloud Environments. Ullrich, Markus. 25 August 2022.
Cloud computing is an exciting concept that propels the development of technologies, the creation and expansion of businesses, and the rapid prototyping of new ideas. Utilizing the advantages the cloud offers to their fullest potential is not a simple task, and thus users often struggle with the technological aspects, lose revenue, or do not attempt to benefit from the cloud at all.
In this dissertation, we identify the lack of standards for performance descriptions, as well as the steep learning curve to become familiar with the cloud, which is further amplified by the abundance of available services, as the most prevalent issues that individuals and companies encounter. We further show the relevance of solving these issues by outlining the expected impact, which includes reduced time and financial detriments for individuals and companies as well as a reduced negative effect on the environment.
To solve the identified problems, we propose the development of a cloud broker with three key components that utilize a performance-oriented resource and application model to
1) compare arbitrary resources and applications in a fair manner, based on general information collected with standard benchmark tools,
2) select the optimal infrastructure for any application by estimating its resource consumption and execution time, and
3) automatically create and manage the selected infrastructure as well as the application deployment.
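As a toy illustration of the selection idea in component 2, and not the performance model developed in the dissertation, the sketch below scales a measured reference runtime by a benchmark score ratio and picks the cheapest candidate instance type that meets a deadline; the instance names, scores, and prices are invented.

```python
# Toy sketch of broker-style instance selection (illustrative only).
# Instance names, benchmark scores, and hourly prices are invented; the
# scaling model (time ~ reference_time * ref_score / score) is a deliberately
# simple stand-in for the dissertation's resource and application model.
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    benchmark_score: float   # higher = faster (e.g., from a standard CPU benchmark)
    price_per_hour: float    # USD

CANDIDATES = [
    InstanceType("small",  benchmark_score=10.0, price_per_hour=0.05),
    InstanceType("medium", benchmark_score=22.0, price_per_hour=0.12),
    InstanceType("large",  benchmark_score=45.0, price_per_hour=0.25),
]

def estimate_runtime_h(ref_runtime_h: float, ref_score: float, target: InstanceType) -> float:
    """Scale a measured reference runtime by the benchmark score ratio."""
    return ref_runtime_h * ref_score / target.benchmark_score

def select_instance(ref_runtime_h: float, ref_score: float, deadline_h: float):
    """Return the cheapest candidate whose estimated runtime meets the deadline."""
    feasible = []
    for inst in CANDIDATES:
        runtime = estimate_runtime_h(ref_runtime_h, ref_score, inst)
        if runtime <= deadline_h:
            feasible.append((runtime * inst.price_per_hour, runtime, inst))
    if not feasible:
        return None
    cost, runtime, inst = min(feasible, key=lambda t: t[0])
    return inst, runtime, cost

if __name__ == "__main__":
    # Reference run: 2 h on an instance with benchmark score 10, deadline 1.5 h.
    choice = select_instance(ref_runtime_h=2.0, ref_score=10.0, deadline_h=1.5)
    if choice:
        inst, runtime, cost = choice
        print(f"{inst.name}: est. {runtime:.2f} h, est. cost ${cost:.3f}")
```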
Our contributions to this proposal include the development and testing of prototypical proof-of-concept implementations of the three components, the design of the underlying resource and application performance model, and the selection of appropriate, generic benchmark solutions, which we deployed on two major public clouds using our prototypes.
In an extensive objective-based evaluation, we assess that we have contributed towards solving all of the major issues we identified, increasing the usability and efficiency of cloud computing by enabling a better comprehension of resource and application performance in cloud environments and by reducing the time and effort necessary to deploy arbitrary applications in the cloud. We conclude by interpreting the evaluation results and providing an outlook on future work.

1 Introduction
2 Challenges
3 Improve Resource Selection and Management in Cloud Environments
4 Cloud Resource Comparison
5 Resource Estimation
6 Cloud Application Execution
7 Overall Evaluation
8 Conclusion
A LFA Artifacts
B Analysis and Results
C PoC Platform

The dissertation addresses the efficient use of cloud resources to accelerate the development of new technologies and business models as well as the rapid prototyping of new ideas. Due to the complexity of cloud platforms, using them often poses a major hurdle, especially for small and medium-sized enterprises, which is why resources are frequently wasted, processes take more time than necessary, or no attempt is made to use this technology at all.
The thesis identifies and addresses three core problems: gaps with respect to standards for describing the performance of cloud resources, the abundance of existing cloud services, and the steep learning curve in using these services.
To solve the identified problems, the thesis proposes the development of a cloud broker application with three core components that use a performance-oriented resource and application model, which makes it possible to:
1) compare arbitrary resources and applications from a wide range of providers with the help of freely available, standardized benchmark tools,
2) select the appropriate infrastructure for each application to be executed by estimating its resource requirements and execution time, and
3) automatically create the selected infrastructure in the cloud and run the application autonomously.
As part of the dissertation, all three core components were implemented as prototypes, the underlying resource and application model was designed, and suitable benchmark solutions were selected; extensive benchmarks were then carried out on two large public cloud platforms using the developed prototypes. In a comprehensive objective-oriented evaluation, the contribution towards solving the previously identified problems is assessed, and it is established that the developed components can increase both the usability and the efficiency of cloud computing overall. This is enabled by a better understanding of resource and application performance and by reducing the time and effort necessary to execute an application in the cloud. The talk concludes with an outlook on future work.