881 |
<b>THE APPLICATION OF QUANTITATIVE METHODS IN THE ADOPTION OF CLOUD COMPUTING WITHIN A FRAMEWORK OF UNIFIED TECHNOLOGY ACCEPTANCE THEORY: A COMPARATIVE ANALYSIS OF U.S. HOSPITALS</b>Negussie Tilahun (17563476) 08 December 2023 (has links)
<p dir="ltr">This study aims to predict the environmental, organizational, and managerial factors that determine the adoption of cloud computing in U.S. healthcare delivery systems. The premise of the analysis is that several internal and external factors determine a health provider’s transition to cloud computing. The U.S. government has funded healthcare providers through HITECH (Health Information Technology for Economic and Clinical Health) to implement electronic health records (EHRs), which is considered an important first step in transitioning to cloud computing. This study investigated whether there is a significant difference between hospitals and providers that received HITECH funding to enhance their EHR infrastructure and those that did not in terms of their external environmental complexities, internal organizational structure, and the quality of healthcare services they provide. A stratified random sample was applied to select a cohort of 3,385 hospitals from the American Hospital Association (AHA) 2022 roster for the period 2018-2021 to test the study hypothesis. The sampled hospitals were linked with claim, administrative, cost, and ICD-10 clinical data files to capture variables of interest repeatedly over the study period. The analysis modeled for selected external (location, market concentration as measured by the Herfindahl Index), internal (number and composition of staff: physicians, nurses, technicians, etc.), demographic, clinical, and financial factors. Quantitative methods such as generalized estimating equations (GEE), logistic regression, and the generalized linear mixed model (GLMM) were applied within the framework of unified technology acceptance theory (UTAT), accounting for both discrete and continuous response variables while modeling for possible between-subject heterogeneity and within-subject correlations.
The analysis is based on publicly available data sources that are systematically linked to address the research questions. The portion of the HITECH funding that is applied to cloud computing is calculated from the hospital’s EHR funding. This is one of the very few longitudinal time series studies of cloud computing in healthcare, since almost all previous studies on American hospitals are cross-sectional. The findings of this study show statistically significant differences between hospitals that received government funding and those that did not in terms of internal organizational structure, environmental complexity, and quality of healthcare provided. The analysis identified management and quality metrics that help to gauge continuously changing organizational needs and identify emerging trends. This study proposes specific topics that future researchers can consider to promote a successful implementation of cloud computing.</p>
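The Herfindahl Index used above as the market-concentration measure is straightforward to compute. A minimal sketch follows; the hospital market shares are invented for illustration, not taken from the study's data:

```python
def herfindahl_index(market_shares):
    """Herfindahl-Hirschman Index: sum of squared market shares.

    Shares may be given as raw volumes (they are normalized here);
    the index ranges from near 0 (fragmented market) to 1 (monopoly).
    """
    total = sum(market_shares)
    if total <= 0:
        raise ValueError("shares must sum to a positive value")
    return sum((s / total) ** 2 for s in market_shares)

# Hypothetical example: four hospitals in one referral region,
# measured by annual discharges.
shares = [5000, 3000, 1500, 500]
hhi = herfindahl_index(shares)
print(round(hhi, 3))  # 0.365
```

A value this high (above the 0.25 threshold commonly used by regulators) would mark the region as highly concentrated.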
|
882 |
Comparing performance and developer experience for a serverless application integrated with a serverless database. Stiernborg, Leonora, January 2023 (has links)
Cloud computing has introduced a paradigm shift in the information technology sector by enabling the user to access computing resources over the internet. Serverless is a cloud computing technology that has gained significant popularity for the deployment of services and applications. Serverless applications are often integrated with other services such as serverless databases. Existing work on the performance evaluation of serverless applications mainly focuses on applications that are not integrated with a serverless database. Additionally, there is a lack of evaluation of the developer experience of implementing a serverless application on the different cloud providers. This thesis aims to evaluate the performance and developer experience of serverless applications integrated with a serverless database on the three leading cloud platforms: Amazon Web Services, Google Cloud Platform, and Microsoft Azure. This was done by implementing an experimental serverless application on each platform and running tests against it. Furthermore, the thesis compares the performance difference between the two programming languages Python and JavaScript (Node.js). This thesis indicates that AWS has the overall best performance and developer experience of the three platforms.
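A comparison of this kind rests on summarizing per-invocation timings. The sketch below computes the mean and an approximate 95th percentile from a list of measured latencies; the timing values are invented and the function name is an assumption, not code from the thesis:

```python
import statistics

def summarize_latencies(samples_ms):
    """Mean and approximate 95th-percentile latency from a list of
    per-invocation timings in milliseconds (nearest-rank p95)."""
    ordered = sorted(samples_ms)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "mean_ms": statistics.fmean(ordered),
        "p95_ms": ordered[p95_index],
        "n": len(ordered),
    }

# Hypothetical timings for one platform's function + database round trip;
# the 120.3 ms outlier stands in for a cold start.
timings = [42.0, 45.5, 44.1, 120.3, 43.8, 46.2, 44.9, 43.1, 44.0, 45.0]
stats = summarize_latencies(timings)
print(stats["p95_ms"])  # 120.3
```

Tail percentiles such as p95 matter here precisely because cold starts skew serverless latency distributions far more than the mean suggests.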
|
883 |
Cost-Effective Large-Scale Digital Twins Notification System with Prioritization Consideration. Vrbaski, Mira, 19 December 2023 (has links)
A Large-Scale Digital Twins Notification System (LSDTNS) monitors a Digital Twin (DT) cluster for a predefined critical state, and once it detects such a state, it sends a Notification Event (NE) to a predefined recipient. Additionally, the time from producing the DT's Complex Event (CE) to sending an alarm has to be less than a predefined deadline. However, addressing scalability and multiple objectives, such as deployment cost, resource utilization, and meeting the deadline, on top of process scheduling, presents a complex challenge. Therefore, this thesis presents a methodology comprising three contributions that address system scalability, multi-objectivity, and the scheduling of CE processes using Reinforcement Learning (RL).
The first contribution proposes the IoT Notification System Architecture based on a micro-service-based notification methodology that allows for running and seamlessly switching between various CE reasoning algorithms. Our proposed IoT Notification System architecture addresses the scalability issue in state-of-the-art CE Recognition systems.
The second contribution proposes a novel methodology for multi-objective optimization for cloud provisioning (MOOP). MOOP is the first work dealing with multi-optimization objectives for microservice notification applications, where the notification load is variable and depends on the results of previous microservices subtasks. MOOP provides a multi-objective mathematical cloud resource deployment model and demonstrates effectiveness through the case study.
Finally, the thesis presents a Scheduler for large-scale Critical Notification applications based on a Deep Reinforcement Learning (SCN-DRL) scheduling approach for LSDTNS using RL. SCN-DRL is the first work dealing with multi-objective optimization for critical microservice notification applications using RL. In the performance evaluation, SCN-DRL demonstrates better performance than state-of-the-art heuristics. SCN-DRL shows steady performance when the notification workload increases from 10% to 90%. In addition, SCN-DRL, tested with three neural networks, is shown to be resilient to a sudden 10% drop in container resources. Such resilience to resource container failures is an important attribute of a distributed system.
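A multi-objective RL scheduler of this kind typically scalarizes its objectives into a single reward signal. The sketch below is a hypothetical reward combining deployment cost, resource utilization, and a deadline-miss penalty; the weights and functional form are illustrative assumptions, not SCN-DRL's actual reward:

```python
def scheduling_reward(cost, utilization, latency_ms, deadline_ms,
                      w_cost=0.4, w_util=0.3, w_deadline=0.3):
    """Hypothetical scalarized reward for a notification scheduler.

    Higher is better: cost is penalized, utilization is rewarded,
    and a fixed penalty applies when the notification misses its
    deadline (the hard constraint in the thesis).
    """
    deadline_penalty = 1.0 if latency_ms > deadline_ms else 0.0
    return (-w_cost * cost) + (w_util * utilization) - (w_deadline * deadline_penalty)

# An action that meets the deadline outranks one that misses it,
# all else being equal.
print(scheduling_reward(1.0, 0.8, 50, 100) > scheduling_reward(1.0, 0.8, 150, 100))  # True
```

In a real agent the weights encode the trade-off between the objectives, which is why, as the thesis observes for autoscaling as well, no single setting satisfies every objective at once.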
|
884 |
Compliance Regulatory and Security Challenges in Cloud & IP Telephony: A comparison study between India and Sweden. Manayathil Chackochan, Thomas, Gonsalvez, Ronit, January 2023 (has links)
Cloud computing has evolved from cutting-edge technology to a best practice for businesses across industries. However, compliance with regulatory mandates and addressing security challenges in the cloud environment remain significant concerns. This thesis aims to explore the compliance, regulatory, and security challenges associated with cloud computing, with a particular focus on the differences in regulatory frameworks between an Asian country (India) and a European country (Sweden). Additionally, the study delves into the forensic investigation challenges in terms of evidence collection in the cloud environment. The research methodology involves studying the available literature on regulatory rules and cloud forensics, conducting surveys with cloud customers, experts, and cloud service provider (CSP) professionals, and proposing possible solutions and recommendations to overcome the identified challenges. By addressing these issues, this research contributes to a comprehensive understanding of the impacts of compliance regulations on cloud and IP Telephony services and the security and forensic investigation challenges in cloud platforms.
|
885 |
Auto-scaling Prediction using Machine Learning Algorithms: Analysing Performance and Feature Correlation. Ahmed, Syed Saif, Arepalli, Harshini Devi, January 2023 (has links)
Despite Covid-19’s drawbacks, it has recently contributed to highlighting the significance of cloud computing. The great majority of enterprises and organisations have shifted to a hybrid mode that enables users or workers to access their work environment from any location. This made it possible for businesses to save on-premises costs by moving their operations to the cloud. It has become essential to allocate resources effectively, especially through predictive auto-scaling. Although many algorithms have been studied regarding predictive auto-scaling, further analysis and validation need to be done. The objectives of this thesis are to implement machine-learning algorithms for predicting auto-scaling and to compare their performance on common grounds. The secondary objective is to find connections amongst features within the dataset and evaluate their correlation coefficients. The methodology adopted for this thesis is experimentation. Experimentation was selected so that the auto-scaling algorithms could be tested in practical situations and their results compared to identify the best algorithm using the selected metrics. This experiment can assist in determining whether the algorithms operate as predicted. Metrics such as Accuracy, F1-Score, Precision, Recall, Training Time, and Root Mean Square Error (RMSE) are calculated for the chosen algorithms: Random Forest (RF), Logistic Regression, Support Vector Machine, and Naive Bayes Classifier. The correlation coefficients of the features in the data are also measured, which helped in increasing the accuracy of the machine learning model. In conclusion, the features related to our target variable (CPU usage, p95_scaling) often had high correlation coefficients compared to other features. The relationships between these variables could potentially be influenced by other variables that are unrelated to the target variable.
Also, from the experimentation, it can be seen that the optimal algorithm for determining how cloud resources should be scaled is the Random Forest Classifier.
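The metrics named above are simple to compute directly from predictions. A self-contained sketch, using invented toy labels rather than the thesis's dataset:

```python
import math

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": (tp + tn) / len(y_true),
            "precision": precision, "recall": recall, "f1": f1}

def pearson(xs, ys):
    """Pearson correlation coefficient between two feature columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: one false negative out of five scale-out decisions.
m = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
print(round(m["f1"], 3))  # 0.8
print(pearson([1, 2, 3], [2, 4, 6]))  # perfectly correlated features -> 1.0
```

The same `pearson` helper, applied column-by-column against the target variable, is the kind of screening that surfaces the high-correlation features the conclusion refers to.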
|
886 |
Active Assurance in Kubernetes. Wennerström, William, January 2021 (has links)
No description available.
|
887 |
Improving the performance of a stream processing pipeline for vehicle data. Gu, Wenyu, January 2020 (has links)
The growing amount of position-dependent data (containing both geo-position data, i.e. latitude and longitude, and vehicle/driver-related information) collected from sensors on vehicles poses a challenge to the programs that must process the aggregate data from many vehicles. While handling this growing amount of data, these programs need to exhibit low latency and high throughput, as otherwise the value of the results of this processing is reduced. As a solution, big data and cloud computing technologies have been widely adopted by industry. This thesis examines a cloud-based pipeline that processes vehicle location data. The system receives real-time vehicle data and processes it in a streaming fashion. The goal is to improve the performance of this streaming pipeline, mainly with respect to latency and cost. The work began by examining the current solution, which uses AWS Kinesis and AWS Lambda. A benchmarking environment was created and used to measure the current system’s performance. Additionally, a literature study was conducted to find a processing framework that best meets both industrial and academic requirements. After a comparison, Flink was chosen as the new framework, and a new solution was designed around it. The performance of the current solution and the new Flink solution were then compared in the same benchmarking environment. The conclusion is that the new Flink solution has 86.2% lower latency while supporting triple the throughput of the current system at almost the same cost.
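A latency comparison of this kind reduces to timing each pipeline over identical inputs and computing the relative improvement. A minimal harness follows; the pipeline function and event data are placeholders, not the thesis's Kinesis/Lambda or Flink code:

```python
import time

def benchmark(pipeline, events, runs=3):
    """Best-of-N wall-clock time for processing a batch of events.

    Taking the minimum over several runs reduces noise from
    warm-up and background load.
    """
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        pipeline(events)
        best = min(best, time.perf_counter() - start)
    return best

def relative_improvement(old_latency, new_latency):
    """Percent latency reduction of the new system over the old."""
    return 100.0 * (old_latency - new_latency) / old_latency

# Placeholder pipeline: parse and transform each event.
elapsed = benchmark(lambda evs: [e * 2 for e in evs], list(range(10_000)))
print(elapsed > 0)  # True

# With the thesis's headline numbers (old 1.000 s, new 0.138 s):
print(round(relative_improvement(1.0, 0.138), 1))  # 86.2
```

The same harness, pointed at both deployments with the same event batch, yields the latency figures behind the 86.2% claim.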
|
888 |
Managing Microservices with a Service Mesh: An implementation of a service mesh with Kubernetes and Istio. Mara Jösch, Ronja, January 2020 (has links)
The adoption of microservices facilitates extending computer systems in size, complexity, and distribution. Alongside their benefits, they introduce the possibility of partial failures. Besides focusing on the business logic, developers have to tackle the cross-cutting concerns of service-to-service communication, which now define the application's reliability and performance. Currently, developers use libraries embedded in the application code to address these concerns. However, this increases the complexity of the code and requires the maintenance and management of various libraries. The service mesh is a relatively new technology that may enable developers to stay focused on their business logic. This thesis investigates one of the available service meshes, Istio, to identify its benefits and limitations. The main benefits found are that Istio adds resilience and security, allows features that are currently difficult to implement, and enables a cleaner structure and a standard implementation of features within and across teams. The drawbacks are that it degrades performance by adding CPU usage, memory usage, and latency. Furthermore, Istio's main disadvantage is its limited testing tools. Based on these findings, the company's Webcore Infra team can make a more informed decision on whether Istio should be introduced.
|
889 |
Autoscaling through Self-Adaptation Approach in Cloud Infrastructure: A Hybrid Elasticity Management Framework Based Upon MAPE (Monitoring-Analysis-Planning-Execution) Loop, to Ensure Desired Service Level Objectives (SLOs). Butt, Sarfraz S., January 2019 (has links)
The project aims to propose a MAPE-based hybrid elasticity management framework on the basis of insights accrued during a systematic analysis of the relevant literature. In the proposed framework, each stage of the MAPE process acts independently as a black box while dealing with its neighbouring stages. The framework is thus modular in nature: the underlying algorithms in any stage can be replaced with more suitable ones without affecting any other stage.

The hybrid framework enables proactive and reactive autoscaling approaches to be implemented simultaneously within the same system. The proactive approach is incorporated as the core decision-making logic on the basis of forecast data, while the reactive approach, based on actual data, acts as a damage-control measure activated only in case of a problem with the proactive approach. The benefits of both worlds, pre-emption as well as reliability, can thus be achieved through the proposed framework. It uses time series analysis (moving average method / exponential smoothing) and threshold-based static rules (with multiple monitoring intervals and dual threshold settings) during the analysis and planning phases of the MAPE loop, respectively. The mathematical illustration of the framework incorporates multiple parameters, namely VM initiation delay / release criterion, network latency, system oscillations, threshold values, smart kill, etc.

The research concludes that recommended parameter settings primarily depend upon the particular autoscaling objective and are often conflicting in nature. Thus, no single autoscaling system with fixed values can possibly meet all objectives simultaneously, irrespective of the reliability of the underlying framework. The project successfully implements a complete cloud infrastructure and autoscaling environment over the experimental platforms OpenStack and CloudSim Plus.

In a nutshell, the research provides a solid understanding of the autoscaling phenomenon, devises a MAPE-based hybrid elasticity management framework, and explores its implementation potential over OpenStack and CloudSim Plus.
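The proactive/reactive split described above can be sketched in a few lines: a moving-average forecast (one of the two time-series methods the framework names) drives the proactive decision, while a static threshold on the latest actual observation acts as the reactive damage-control rule. The thresholds and window size below are illustrative assumptions, not the thesis's recommended settings:

```python
from collections import deque

class HybridAutoscaler:
    """Minimal sketch of a hybrid MAPE loop: proactive scaling from a
    moving-average forecast, overridden by a reactive threshold rule
    when actual load far exceeds anything the forecast anticipated."""

    def __init__(self, window=5, upper=0.8, lower=0.3, reactive_upper=0.95):
        self.history = deque(maxlen=window)   # Monitoring buffer
        self.upper, self.lower = upper, lower
        self.reactive_upper = reactive_upper

    def forecast(self):
        # Analysis: moving-average forecast over the recent window.
        return sum(self.history) / len(self.history)

    def decide(self, utilization):
        # Monitoring: record the latest observation.
        self.history.append(utilization)
        # Reactive damage control on actual data.
        if utilization >= self.reactive_upper:
            return "scale-out (reactive)"
        # Planning: proactive decision on forecast data.
        f = self.forecast()
        if f > self.upper:
            return "scale-out (proactive)"
        if f < self.lower:
            return "scale-in (proactive)"
        return "no-op"

scaler = HybridAutoscaler()
for u in [0.55, 0.62, 0.70, 0.96]:
    print(u, "->", scaler.decide(u))  # only 0.96 triggers the reactive rule
```

Because the reactive rule fires on the raw observation rather than the smoothed forecast, it catches spikes the moving average would dampen, which is exactly the damage-control role the framework assigns it.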
|
890 |
Supporting Data-Intensive Scientific Computing on Bandwidth and Space Constrained Environments. Bicer, Tekin, 18 August 2014 (has links)
No description available.
|