201 |
Cloud computing in a South African Bank / Van der Merwe, Arno, 30 June 2014 (has links)
This research examined cloud computing in a South African bank. Interviews were conducted in the information technology division of a major bank in South Africa, as part of a deductive research method, to establish how cloud computing should be understood, what its specific benefits, obstacles and risks are, and whether the benefits outweigh the obstacles and risks.
The research demonstrated that cloud computing is a fairly new concept in South African banks, especially when it comes to the public cloud. Private clouds already exist, particularly in the form of data centres and virtualised services. The research also indicated that the benefits outweigh the obstacles and risks, with cost seen as the most important benefit and privacy and security as the most important obstacles to consider.
It would be difficult for a bank in South Africa to move into the public cloud; the focus would be to move non-core services into a public cloud and to keep the core services within the bank.
It should be noted that the research sample was limited to only one of the major banks in South Africa and that it would be inaccurate to present the results as a complete view of banks in South Africa. / Dissertation (MBA)--University of Pretoria, 2013. / pagibs2014 / Gordon Institute of Business Science (GIBS) / MBA / Unrestricted
|
202 |
Towards a framework for enhancing user trust in cloud computing / Nyoni, Tamsanqa B, January 2014 (has links)
Cloud computing is one of the latest appealing technological trends to emerge in the Information Technology (IT) industry. Despite the surge in activity and interest, however, there are significant and persistent concerns about cloud computing, particularly with regard to trusting the platform in terms of confidentiality, integrity and availability of user data stored through these applications. These factors are significant in determining trust in cloud computing and thus provide the foundation for this study. The significant role that trust plays in the use of cloud computing was considered in relation to various trust models, theories and frameworks. Because cloud computing is still considered a new technology in the business world, minimal academic research has been done on enhancing trust in cloud computing, and on building user trust during adoption in particular. The available trust models, frameworks and cloud computing adoption strategies mainly focus on cost reduction and the various benefits associated with migrating to a cloud computing platform; they do not provide clear guidelines for establishing user trust in a cloud computing application, and the issue of establishing a reliable trust context for data and security within cloud computing is, up to this point, not well defined. This study investigates the impact that a lack of user trust has on the use of cloud computing. Strategies for enhancing user trust in cloud computing are required to overcome the data security concerns. This study focused on establishing methods to enhance user trust in cloud computing applications through the theoretical contributions of the Proposed Trust Model by Mayer, Davis, and Schoorman (1995) and the Confidentiality, Integrity, Availability (CIA) Triad by Steichen (2010).
A questionnaire was used as a means of gathering data on trust-related perceptions of the use of cloud computing. The findings of this questionnaire administered to users and potential users of cloud computing applications are reported in this study. The questionnaire primarily investigates key concerns which result in self-moderation of cloud computing use and factors which would improve trust in cloud computing. Additionally, results relating to user awareness of potential confidentiality, integrity and availability risks are described. An initial cloud computing adoption model was proposed based on a content analysis of existing cloud computing literature. This initial model, empirically tested through the questionnaire, was an important foundation for the establishment of the Critical Success Factors (CSFs) and therefore the framework to enhance user trust in cloud computing applications. The framework proposed by this study aims to assist new cloud computing users to determine the appropriateness of a cloud computing service, thereby enhancing their trust in cloud computing applications.
|
203 |
Adopce Cloud computing ve firemním sektoru / Adoption of Cloud computing in the corporate sector / Malík, Tomáš, January 2011 (has links)
This work focuses on the newly emerging field of Cloud computing and on the adoption of this technology across industrial sectors. The first section explains the concept of cloud computing, its characteristics and individual models. The following part briefly analyses the size and development of the current market for cloud services. The next section creates industry categories according to the area of their business, together with their descriptions. These categories are examined in part four, along with their expenditures on IT, the state of Cloud adoption, and their specific advantages and obstacles. The results of the previous section are summarized in part five. In conclusion, the hypothesis is verified and the main findings summarized.
|
204 |
Aerosol-Cloud-Radiation Interactions in Regimes of Liquid Water Clouds / Block, Karoline, 17 October 2018 (has links)
Despite large efforts and decades of research, the scientific understanding of how aerosols impact climate by modulating microphysical cloud properties is still low, and the associated radiative forcing estimates (RFaci) vary over a wide range. But since anthropogenically forced aerosol-cloud interactions (ACI) are considered to offset part of the global warming, it is crucial to know their contribution to the total radiative forcing in order to improve climate predictions.
To obtain a better understanding and quantification of ACI and the associated radiative effect, it has been suggested to use concurrent measurements and observationally constrained model simulations. In this dissertation a joint satellite-reanalysis approach is introduced, bridging the gap between climate models and satellite observations in a bottom-up approach. This methodology involves an observationally constrained aerosol model, refined and concurrent multi-component satellite retrievals, a state-of-the-art aerosol activation parameterization, as well as a radiative transfer model. The methodology is shown here to be useful for a quantitative as well as qualitative analysis of ACI and for estimating RFaci. As a result, a 10-year climatology of cloud condensation nuclei (CCN), the particles from which cloud droplets form, is produced and evaluated. It is the first of its kind, providing 3-D CCN concentrations of global coverage for various supersaturations and aerosol species and offering the opportunity to be used for evaluation in models and in ACI studies. Further, the distribution and variability of the resulting cloud droplet numbers and their susceptibility to changes in aerosols is explored and compared to previous estimates. In this context, an analysis by cloud regime has proven useful. Last but not least, the computation and analysis of the present-day regime-based RFaci represents the final conclusion of the bottom-up methodology. Overall, this thesis provides a comprehensive assessment of interactions and uncertainties related to aerosols, clouds and radiation in regimes of liquid water clouds and helps to improve the level of scientific understanding.
|
205 |
Evaluation of Cloud Native Solutions for Trading Activity Analysis / Evaluering av cloud native lösningar för analys av transaktionsbaserad börshandel / Johansson, Jonas, January 2021 (has links)
Cloud computing has become increasingly popular over recent years, allowing computing resources to be scaled on demand. Cloud native applications are specifically created to run on the cloud service model. Currently, there is a research gap regarding the design and implementation of cloud native applications, especially regarding how design decisions affect metrics such as execution time and scalability of systems. The problem investigated in this thesis is whether the execution time and quality scalability, ηt, of cloud native solutions are affected when housing the functionality of multiple use cases within the same cloud native application. In this work, a cloud native application for trading data analysis is presented, in which the functionality of three use cases is implemented: (1) creating reports of trade prices, (2) anomaly detection, and (3) analysis of relation diagrams of trades. The execution time and scalability of the application are evaluated and compared to readily available solutions, which serve as a baseline for the evaluation. The results of use cases 1 and 2 are compared to Amazon Athena, while use case 3 is compared to Amazon Neptune. The results suggest that combining functionalities into the same application can improve both the execution time and the scalability of the system. The impact depends on the use case and hardware configuration. When executing the use cases in a sequence, the mean execution time of the implemented system decreased by up to 17.2%, while the quality scalability score improved by 10.3% for use case 2. The implemented application had a significantly lower execution time than Amazon Neptune but did not surpass Amazon Athena for the respective use cases. The scalability of the systems varied depending on the use case.
While not surpassing the baseline in all use cases, the results show that the execution time of a cloud native system can be improved by housing the functionality of multiple use cases within one system. However, the potential performance gains differ depending on the use case and might be smaller than the performance gains of choosing another solution. / Cloud computing has become increasingly popular in recent years and makes it possible to scale computing capacity and resources on demand. Cloud native applications are created specifically to run on distributed infrastructure. Currently there are gaps in the research on the design and implementation of cloud native applications, particularly regarding how design decisions affect measurable quantities such as execution time and scalability. The problem examined in this thesis is whether the execution time and the quality scalability measure, ηt, are affected when the functionality of several use cases is integrated into the same cloud native application. In this work, a cloud native application was created that combines several use cases for analysing exchange trading data. The functionality of three use cases is implemented in the application: (1) generating reports of trade prices, (2) detection of anomalies, and (3) analysis of relation graphs. The execution time and scalability of the application are evaluated and compared with commercial cloud services, which serve as a baseline for the evaluation. The results from use cases 1 and 2 are compared with Amazon Athena, while use case 3 is compared with Amazon Neptune. The results suggest that the execution time and scalability of the system can be improved by implementing the functionality of several use cases in the same system. The effect varies depending on the use case and hardware configuration.
When all use cases are run in a sequence, the mean execution time of the implemented application decreases by up to 17.2%, while the quality scalability ηt improved by 10.3% for use case 2. The implemented application has a significantly shorter execution time than Amazon Neptune but does not surpass Amazon Athena for the respective use cases. The scalability of the systems varied depending on the use case. Although it does not surpass the baseline in all use cases, the results show that the execution time of a cloud native application can be improved by combining the functionality of several use cases within one system. However, the potential performance gains vary depending on the use case and may be smaller than the gains of choosing another solution.
|
206 |
Methods and Benchmarks for Auto-Scaling Mechanisms in Elastic Cloud Environments / Methoden und Messverfahren für Mechanismen des automatischen Skalierens in elastischen Cloudumgebungen / Herbst, Nikolas Roman, January 2018 (has links) (PDF)
A key functionality of cloud systems is automated resource management at the infrastructure level. As part of this, elastic scaling of the allocated resources is realized by so-called auto-scalers, which are supposed to match the current demand in a way that keeps performance stable while using resources efficiently.
The process of rating cloud infrastructure offerings in terms of the quality of their elastic scaling remains undefined, and clear guidance for the selection and configuration of an auto-scaler for a given context is not available. Thus, existing operational solutions are optimized in a highly application-specific way and are usually kept undisclosed.
The common state of practice is the use of simplistic threshold-based approaches, which, due to their reactive nature, incur performance degradation during provisioning delays of several minutes. In the literature, a large number of auto-scalers have been proposed that try to overcome the limitations of reactive mechanisms by employing proactive prediction methods.
In this thesis, we identify potentials in automated cloud system resource management and its evaluation methodology. Specifically, we make the following contributions:
We propose a descriptive load profile modeling framework together with automated model extraction from recorded traces to enable reproducible workload generation with realistic load intensity variations. The proposed Descartes Load Intensity Model (DLIM) with its Limbo framework provides key functionality to stress and benchmark resource management approaches in a representative and fair manner.
We propose a set of intuitive metrics for quantifying timing, stability and accuracy aspects of elasticity. Based on these metrics, we propose a novel approach for benchmarking the elasticity of Infrastructure-as-a-Service (IaaS) cloud platforms independent of the performance exhibited by the provisioned underlying resources.
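The metric definitions themselves are not given in this abstract; as an illustration of the kind of accuracy aspect an elasticity benchmark can quantify, the following sketch computes average under- and over-provisioning from paired demand and supply time series. The names and formulas here are simplified assumptions, not the thesis's actual metrics.

```python
# Sketch: accuracy-style elasticity metrics from demand/supply time series.
# The definitions below are simplified assumptions for illustration; the
# thesis's actual metric formulas are not stated in the abstract.

def provisioning_accuracy(demand, supply):
    """Return (avg_under, avg_over): mean resource units missing or in
    excess per time step, given equal-length per-step samples."""
    assert len(demand) == len(supply) and demand
    under = sum(max(d - s, 0) for d, s in zip(demand, supply))
    over = sum(max(s - d, 0) for d, s in zip(demand, supply))
    n = len(demand)
    return under / n, over / n

# A purely reactive auto-scaler typically lags the demand curve:
demand = [2, 4, 6, 6, 4, 2]
supply = [2, 2, 4, 6, 6, 4]  # one step behind
print(provisioning_accuracy(demand, supply))
# → (0.6666666666666666, 0.6666666666666666)
```

A perfect scaler would score (0.0, 0.0); the benchmark idea is that such scores stay comparable across platforms regardless of how fast the underlying resources are.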
We tackle the challenge of reducing the risk of relying on a single proactive auto-scaler by proposing a new self-aware auto-scaling mechanism, called Chameleon, which combines multiple different proactive methods with a reactive fallback mechanism.
Chameleon employs on-demand, automated time-series-based forecasting methods to predict the arriving load intensity, in combination with run-time service demand estimation techniques to calculate the required resource consumption per work unit without the need for detailed application instrumentation. It can also leverage application knowledge by solving product-form queueing networks to derive optimized scaling actions. The Chameleon approach is the first to resolve conflicts between reactive and proactive scaling decisions in an intelligent way.
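The hybrid reactive/proactive idea can be illustrated with a small sketch: trust a combined forecast when the individual forecasters agree, and fall back to a reactive rule when they do not. The disagreement test, thresholds, and forecast combination below are invented for illustration and are not Chameleon's actual mechanism.

```python
# Sketch of a hybrid scaling decision: proactive forecasts with a reactive
# fallback. All thresholds and the disagreement rule are illustrative
# assumptions, not Chameleon's actual logic.
from statistics import mean, pstdev

def scale_decision(forecasts, current_load, capacity_per_instance,
                   disagreement_limit=0.25):
    """Return the target instance count for the next interval."""
    spread = pstdev(forecasts) / mean(forecasts) if mean(forecasts) else 1.0
    if spread <= disagreement_limit:
        expected = mean(forecasts)   # proactive: trust the forecasts
    else:
        expected = current_load      # reactive fallback: forecasts disagree
    needed = -(-int(expected) // capacity_per_instance)  # ceiling division
    return max(needed, 1)

# Forecasters agree -> proactive: provision for mean(118, 122, 120) = 120.
print(scale_decision([118, 122, 120], current_load=60,
                     capacity_per_instance=50))   # → 3
# Forecasters disagree -> reactive: provision for the current load of 60.
print(scale_decision([40, 200, 90], current_load=60,
                     capacity_per_instance=50))   # → 2
```

The point of the fallback is risk reduction: a single bad forecaster cannot drive scaling decisions on its own.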
We are confident that the contributions of this thesis will have a long-term impact on the way cloud resource management approaches are assessed. While this could result in an improved quality of autonomic management algorithms, we see and discuss arising challenges for future research in cloud resource management and its assessment methods: The adoption of containerization on top of virtual machine instances introduces another level of indirection. As a result, the nesting of virtual resources increases resource fragmentation and causes unreliable provisioning delays. Furthermore, virtualized compute resources tend to become more and more inhomogeneous, associated with various priorities and trade-offs. Due to DevOps practices, cloud-hosted service updates are released with a higher frequency, which impacts the dynamics of user behavior. / A key functionality of cloud systems is automated resource management at the infrastructure level. As part of this, elastic scaling of the allocated resources is realized by dedicated mechanisms, which are responsible for matching the dynamic resource allocation to the current demand in a way that keeps performance stable while utilizing resources efficiently.
Processes that allow the quality of elastic scaling behavior to be assessed in practice are currently not comprehensively defined. Consequently, there is a lack of guidelines and decision criteria for the selection and configuration of auto-scaling mechanisms. Solutions used in practice are optimized for their specific application scenario and are kept undisclosed in almost all cases.
For the most part, simple threshold-based control approaches are used. Due to their inherently reactive character, these accept degraded performance during provisioning delays in the range of minutes. In the literature, a large number of auto-scaling mechanisms have been proposed that try to avoid this limitation by employing estimation methods. These can be classified into approaches from queueing theory, control theory, time series analysis, and machine learning. However, due to the high risk associated with relying on individual estimation methods, predictive auto-scaling mechanisms have so far not seen broad adoption in practice.
This dissertation identifies potentials in the automated resource management of cloud environments and in its evaluation methodology. Concretely, the contributions are the following:
A language for the descriptive modeling of load intensity profiles, together with their automated extraction from recordings, is developed to enable reproducible generation of realistic workloads with varying intensity. The proposed Descartes Load Intensity Model (DLIM), together with the Limbo software tool, provides key functionality for representative workload generation and fair evaluation of resource management approaches.
A set of intuitive metrics for quantifying the timing-, accuracy- and stability-related quality aspects of elastic behavior is proposed. Based on these metrics, which have since been endorsed by the research group of the Standard Performance Evaluation Corporation (SPEC), a novel elasticity measurement methodology is developed for the fair evaluation of infrastructure cloud services, independent of the performance of the underlying resources.
By developing a novel hybrid auto-scaling approach, called Chameleon, the risk arising from the use of individual proactive auto-scaling methods is reduced. Chameleon combines several different proactive methods and is coupled with a reactive fallback level. To this end, Chameleon employs automated time series forecasts on demand to estimate arriving workloads. In addition, service demand estimation techniques are applied at system run time to approximate the resource consumption of individual work units without requiring fine-grained instrumentation of the application. Beyond that, Chameleon exploits application knowledge by solving product-form queueing networks to derive optimized scaling actions. The Chameleon approach is the first of its kind to resolve conflicts between reactive and proactive scaling actions in an intelligent way.
In summary, the contributions of this dissertation are likely to influence, in the long term, the way resource management approaches in cloud environments are evaluated. One outcome would be, among other things, an improved quality of algorithms for automated resource management.
As a basis for future research, emerging challenges are identified and discussed: The introduction of containerization within virtual machine instances adds a further level of indirection. As a consequence of this nesting of virtual resources, fragmentation increases and unreliable provisioning delays are caused. Moreover, due to prioritization and trade-offs, virtualized compute resources increasingly tend toward inhomogeneous system landscapes. Due to DevOps practices, software updates of services in cloud environments are released at a higher frequency, which can make user behavior more dynamic.
|
207 |
Performance Engineering of Serverless Applications and Platforms / Performanz Engineering von Serverless Anwendungen und Plattformen / Eismann, Simon, January 2023 (links) (PDF)
Serverless computing is an emerging cloud computing paradigm that offers a high-level application programming model with utilization-based billing. It enables the deployment of cloud applications without managing the underlying resources or worrying about other operational aspects. Function-as-a-Service (FaaS) platforms implement serverless computing by allowing developers to execute code on demand in response to events, with continuous scaling, while having to pay only for the time used with sub-second metering. Cloud providers have further introduced many fully managed services for databases, messaging buses, and storage that also implement a serverless computing model. Applications composed of these fully managed services and FaaS functions are quickly gaining popularity both in industry and in academia.
However, due to this rapid adoption, much information surrounding serverless computing is inconsistent and often outdated as the serverless paradigm evolves. This makes the performance engineering of serverless applications and platforms challenging, as there are many open questions, such as: What types of applications is serverless computing well suited for, and what are its limitations? How should serverless applications be designed, configured, and implemented? Which design decisions impact the performance properties of serverless platforms, and how can they be optimized? These and many other open questions can be traced back to an inconsistent understanding of serverless applications and platforms, which could present a major roadblock in the adoption of serverless computing.
In this thesis, we address the lack of performance knowledge surrounding serverless applications and platforms from multiple angles: we conduct empirical studies to further the understanding of serverless applications and platforms, we introduce automated optimization methods that simplify the operation of serverless applications, and we enable the analysis of design tradeoffs of serverless platforms by extending white-box performance modeling. / Serverless computing is a new cloud computing paradigm that offers a high-level application programming model with usage-based billing. It enables the deployment of cloud applications without having to manage the underlying resources or worry about other operational aspects. FaaS platforms implement serverless computing by giving developers the ability to execute code on demand in response to events with continuous scaling, while paying only for the time used with per-second metering. Cloud providers have also introduced many fully managed services for databases, messaging buses, and orchestration that likewise implement a serverless computing model. Applications composed of these fully managed services and FaaS functions are becoming increasingly popular in both industry and academia.
Due to this rapid adoption, however, much information on serverless computing is inconsistent and often outdated as the serverless paradigm evolves. This makes the performance engineering of serverless applications and platforms a challenge, as there are many open questions, such as: What types of applications is serverless computing well suited for, and where are its limits? How should serverless applications be designed, configured, and implemented? Which design decisions affect the performance properties of serverless platforms, and how can they be optimized? These and many other open questions can be traced back to an inconsistent understanding of serverless applications and platforms, which could present a major obstacle to the adoption of serverless computing.
In this thesis, we address the lack of performance knowledge on serverless applications and platforms from several angles: we conduct empirical studies to promote the understanding of serverless applications and platforms, we present automated optimization methods that reduce the knowledge required to operate serverless applications, and we extend white-box performance modeling for the analysis of design trade-offs of serverless platforms.
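The utilization-based billing model described above can be made concrete with a small sketch; the rates and the GB-second formula below are assumptions modeled loosely on common FaaS price structures, not any provider's actual pricing.

```python
# Sketch of utilization-based FaaS billing with sub-second metering.
# The rates below are illustrative assumptions, not a provider's prices.
GB_SECOND_RATE = 0.0000166667   # currency units per GB-second (assumed)
REQUEST_RATE = 0.0000002        # per invocation (assumed)

def invoice(invocations):
    """invocations: list of (duration_seconds, memory_mb) per call.
    Billing is per call duration (sub-second metering), not per
    provisioned server, so idle time between events costs nothing."""
    compute = sum(dur * (mem / 1024.0) * GB_SECOND_RATE
                  for dur, mem in invocations)
    requests = len(invocations) * REQUEST_RATE
    return compute + requests

calls = [(0.120, 512)] * 1_000_000   # a million 120 ms calls at 512 MB
print(f"{invoice(calls):.2f}")       # → 1.20
```

The contrast with server-based billing is the point: a million short bursts cost the same whether they arrive in one hour or spread over a month.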
|
208 |
Evaluating the quality of ground surfaces generated from Terrestrial Laser Scanning (TLS) data / Sun, Yanshen, 24 June 2019 (has links)
Researchers and GIS analysts have used Aerial Laser Scanning (ALS) data to generate Digital Terrain Models (DTMs) since the 1990s, and various ground-point extraction algorithms have been proposed based on the characteristics of ALS data. However, Terrestrial Laser Scanning (TLS) data, which might capture ground morphological features under dense tree canopies better and be more accessible for small areas, have long been ignored. In this research, the aim was to evaluate whether TLS data are as suitable as ALS data to serve as the source of a DTM. To achieve this goal, there were three steps: acquiring and aligning ALS and TLS data of the same region, applying ground filters to both data sets, and comparing the results.
Our research area was a 100m by 140m region of grass, weeds and small trees along Strouble's Creek on the Virginia Tech campus. Four popular ground filter tools (ArcGIS, LASTools, PDAL, MCC) were applied to both ALS and TLS data. The output ground point clouds were then compared with a DTM generated from ALS data of the same region. Among the four ground filter tools employed in this research, the distances from TLS ground points to the ALS ground surface were no more than 0.06m, with standard deviations less than 0.3m. The results indicated that the differences between the ground extracted from TLS and that extracted from ALS were subtle. The conclusion is that Digital Terrain Models (DTMs) generated from TLS data are valid. / Master of Science / Elevation is one of the most basic kinds of data for research such as flood prediction and land planning in the fields of geography, agriculture, forestry, etc. The most common elevation data that can be downloaded from the internet are acquired from field measurements or satellites. However, the finest grain of that kind of data is 1/3m, and errors can be introduced by ground objects such as trees and buildings. To acquire more accurate and pure-ground elevation data (also called Digital Terrain Models, DTMs), researchers and GIS analysts introduced laser scanners for small-area geographical research. For land surface data collection, researchers usually fly a drone with a laser scanner (ALS) to derive the data underneath, which can be blocked by ground objects. An alternative is to place the laser scanner on a tripod on the ground (TLS), which provides more data for ground morphological features under dense tree canopies and better precision. As ALS and TLS collect data from different perspectives, the coverage of a ground area can differ. Since most ground extraction algorithms were designed for ALS data, their performance on TLS data has not been fully tested yet.
Our research area was a 100m by 140m region of grass, weeds and small trees along Strouble’s Creek on the Virginia Tech campus. Four popular ground filter tools (ArcGIS, LASTools, PDAL, MCC) were applied to both ALS and TLS data. The output ground point clouds were then compared with a ground surface generated from ALS data of the same region. Among the four ground filter tools employed in this research, the distances from TLS ground points to the ALS ground surface were no more than 0.06m with standard deviations less than 0.3m. The results indicated that the differences between the ground extracted from TLS and that extracted from ALS were subtle. The conclusion is that Digital Terrain Models (DTM) generated from TLS data are valid.
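The core comparison in this study, distances from TLS-derived ground points to an ALS-derived ground surface, can be sketched as follows. The raster origin convention and the nearest-cell elevation lookup are simplifying assumptions for illustration; the study's actual processing used ArcGIS, LASTools, PDAL and MCC.

```python
# Sketch: vertical distance from filtered TLS ground points to a DTM raster
# derived from ALS data. Nearest-cell lookup is a simplifying assumption;
# a production workflow would interpolate (e.g. bilinearly).
import numpy as np

def dtm_residuals(points_xyz, dtm, x0, y0, cell):
    """points_xyz: (N, 3) array of TLS ground points; dtm: 2-D elevation
    grid whose cell (row 0, col 0) has its corner at (x0, y0), with
    square cells of size `cell`. Returns per-point vertical residuals
    (point z minus DTM z); their mean/std summarize agreement."""
    cols = np.clip(((points_xyz[:, 0] - x0) / cell).astype(int),
                   0, dtm.shape[1] - 1)
    rows = np.clip(((points_xyz[:, 1] - y0) / cell).astype(int),
                   0, dtm.shape[0] - 1)
    return points_xyz[:, 2] - dtm[rows, cols]

dtm = np.array([[10.0, 10.5], [11.0, 11.5]])   # tiny 2x2 DTM, 1 m cells
pts = np.array([[0.2, 0.3, 10.04], [1.7, 1.6, 11.46]])
res = dtm_residuals(pts, dtm, x0=0.0, y0=0.0, cell=1.0)
print(np.round(res, 2), round(float(np.abs(res).mean()), 2))
```

Summary statistics of these residuals (mean distance, standard deviation) are exactly the kind of figures the abstract reports (≤ 0.06m, σ < 0.3m).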
|
209 |
Channel and Server Scheduling for Energy-Fair Mobile Computation Offloading / Moscardini, Jonathan A., January 2016 (has links)
This thesis investigates energy fairness in an environment where multiple mobile cloud computing users attempt to use both a shared channel and a shared server to offload jobs to remote computation resources, a technique known as mobile computation offloading. Offloading is done in an effort to reduce energy consumption at the mobile device, which previous work has demonstrated to be highly effective. However, due to constraints at the shared channel and server, insufficient resources are available for all mobile devices to offload all generated jobs. In addition, certain mobile devices are at a disadvantage relative to others in their achievable offloading rate. Hence, the shared resources are not necessarily shared fairly, and an effort must be made to share them fairly.
A method for improving offloading fairness in terms of total energy is derived, in which the state of the queue of jobs waiting for offloading is evaluated online, at each job arrival, in order to inform an offloading decision for that newest arrival; no prior state or future predictions are used to determine the optimal decision. The algorithm is evaluated by comparing it on several criteria to standard scheduling methods, as well as to an optimal offline (i.e., non-causal) schedule derived from the solution of a min-max energy integer linear program. Various simulation results demonstrate the improvements in energy fairness achieved. / Thesis / Master of Applied Science (MASc)
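The online decision rule described above can be illustrated with a greedy min-max sketch: on each arrival, offload only if doing so lowers the worst cumulative energy across devices. The saturation threshold, the energy figures, and the rule itself are illustrative assumptions, not the thesis's derived method.

```python
# Sketch of an energy-fairness-driven offloading decision: at each job
# arrival, offload only if that lowers the maximum cumulative energy
# across devices (a greedy min-max rule; an illustrative assumption).

def decide_offload(cum_energy, device, e_local, e_offload, queue_len,
                   max_queue=5):
    """cum_energy: dict device -> energy spent so far (updated in place).
    e_local / e_offload: energy this job would cost the device locally
    vs. over the shared channel. Returns True to offload the job."""
    if queue_len >= max_queue:        # shared channel/server saturated
        offload = False
    else:
        others = max((e for d, e in cum_energy.items() if d != device),
                     default=0.0)
        peak_local = max(others, cum_energy[device] + e_local)
        peak_offl = max(others, cum_energy[device] + e_offload)
        offload = peak_offl < peak_local   # min-max fairness criterion
    cum_energy[device] += e_offload if offload else e_local
    return offload

# Device "b" has spent the most energy, so its job is offloaded:
energy = {"a": 5.0, "b": 9.0}
print(decide_offload(energy, "b", e_local=3.0, e_offload=1.0,
                     queue_len=2))   # → True
print(energy)                        # → {'a': 5.0, 'b': 10.0}
```

Note the rule is causal, matching the abstract: each decision uses only the current queue length and energy totals, never future arrivals.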
|
210 |
Data Parallel Application Development and Performance with Azure / Zhang, Dean, 08 September 2011 (links)
No description available.
|