181 |
MPSF: cloud scheduling framework for distributed workflow execution / MPSF: um arcabouço para escalonamento em computação em nuvem para execução distribuída de fluxos de trabalho. Nelson Mimura Gonzalez. 16 December 2016 (has links)
Cloud computing is a distributed computing paradigm that gained notoriety due to its on-demand, elastic, and dynamic resource provisioning. These characteristics are highly desirable for the execution of workflows, in particular scientific workflows that require a great amount of computing resources and handle large-scale data. One of the main questions in this context is how to manage the resources of one or more cloud infrastructures to execute workflows while optimizing resource utilization and minimizing the total duration of task execution (makespan). The more complex the infrastructure and the tasks to be executed, the higher the risk of incorrectly estimating the amount of resources to assign to each task, which incurs both performance and monetary costs. Inherently more complex scenarios, such as hybrid and multiclouds, are rarely considered by existing resource management solutions. Moreover, a thorough review of relevant related work revealed that most solutions do not address data-intensive workflows, a characteristic that is increasingly evident in modern scientific workflows. In this context, this thesis presents MPSF, the Multiphase Proactive Scheduling Framework, a cloud resource management solution based on multiple scheduling phases that continuously assess the system to optimize resource utilization and task distribution. MPSF defines models to describe and characterize workflows and resources, supporting both simple and complex scenarios such as hybrid and integrated clouds. MPSF also defines performance and reliability models to improve load distribution among nodes and to mitigate the effects of performance fluctuations and potential failures in the system. Finally, MPSF defines a framework and an architecture that integrate all these components into a solution that can be implemented and tested in real applications. Experimental results show that MPSF predicts the duration of workflows and workflow phases with much better accuracy and provides performance gains over greedy approaches, especially for tasks that demand high computational power and large amounts of data.
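The abstract contrasts MPSF with greedy scheduling but does not reproduce either algorithm. For orientation, below is a minimal sketch of the kind of greedy earliest-finish-time baseline such frameworks are compared against; the task costs and node speeds are illustrative assumptions, not values from the thesis.

```python
# Greedy baseline: assign each task to the node with the earliest estimated
# finish time, processing longer tasks first (LPT). Illustrative only.

def greedy_schedule(task_costs, node_speeds):
    """Return (makespan, assignment) for a simple earliest-finish-time policy."""
    finish = [0.0] * len(node_speeds)  # running finish time per node
    assignment = []
    for cost in sorted(task_costs, reverse=True):
        # Estimated finish time of this task on each node
        eft, best = min((finish[n] + cost / s, n)
                        for n, s in enumerate(node_speeds))
        finish[best] = eft
        assignment.append((cost, best))
    return max(finish), assignment

makespan, plan = greedy_schedule([8, 3, 5, 2, 7, 1], node_speeds=[1.0, 2.0])
print(f"makespan: {makespan:.1f}, plan: {plan}")
```

A proactive scheduler in the spirit of MPSF would re-evaluate such estimates continuously as measured performance drifts, rather than committing once at submission time.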
|
182 |
Automated feature synthesis on big data using cloud computing resources. Saker, Vanessa. January 2020 (has links)
The data analytics process has many time-consuming steps. Combining data that sits in a relational database warehouse into a single relation, while aggregating important information in a meaningful way and preserving relationships across relations, is complex and time-consuming. This step is exceptionally important, as many machine learning algorithms require a single file format as input (e.g., for supervised and unsupervised learning, feature representation, and feature learning). An analyst is required to manually combine relations while generating new, more impactful information points from the data during the feature synthesis phase of the feature engineering process that precedes machine learning. Furthermore, the entire process is complicated by Big Data factors such as processing power and distributed data storage. There is an open-source package, Featuretools, that uses an innovative algorithm called Deep Feature Synthesis to accelerate the feature engineering step. However, when working with Big Data, there are two major limitations. The first is the curse of modularity: Featuretools stores data in memory to process it, and thus, if the data is large, it requires a processing unit with a large memory. Secondly, the package depends on data stored in a Pandas DataFrame, which makes using Featuretools with Big Data tools such as Apache Spark a challenge. This dissertation aims to examine the viability and effectiveness of using Featuretools for feature synthesis with Big Data on the cloud computing platform AWS. Exploring the impact of generated features is a critical first step in solving any data analytics problem. If this can be automated in a distributed Big Data environment with a reasonable investment of time and funds, data analytics exercises will benefit considerably. In this dissertation, a framework for automated feature synthesis with Big Data is proposed and an experiment conducted to examine its viability. Using this framework, an infrastructure was built to support the process of feature synthesis on AWS that made use of S3 storage buckets, Elastic Compute Cloud (EC2) services, and an Elastic MapReduce cluster. A dataset of 95 million customers, 34 thousand fraud cases, and 5.5 million transactions across three different relations was then loaded into the distributed relational database on the platform. The infrastructure was used to show how the dataset could be prepared to represent a business problem, and Featuretools used to generate a single feature matrix suitable for inclusion in a machine learning pipeline. The results show that the approach was viable. The feature matrix produced 75 features from 12 input variables and was time-efficient, with a total end-to-end run time of 3.5 hours and a cost of approximately R 814 (approximately $52). The framework can be applied to a different set of data and allows analysts to experiment on a small section of the data until a final feature set is decided; they are then able to easily scale the feature matrix to the full dataset. This ability to automate feature synthesis, iterate, and scale up will save time in the analytics process while providing a richer feature set for better machine learning results.
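For readers unfamiliar with the package, the core Featuretools workflow the dissertation builds on fits in a few lines. This is a generic sketch against the Featuretools 1.x API with made-up relations and column names, not the dissertation's actual AWS pipeline:

```python
import pandas as pd
import featuretools as ft

# Toy stand-ins for the customers/transactions relations described above.
customers = pd.DataFrame({
    "customer_id": [1, 2],
    "join_date": pd.to_datetime(["2019-01-01", "2019-06-01"]),
})
transactions = pd.DataFrame({
    "transaction_id": [10, 11, 12],
    "customer_id": [1, 1, 2],
    "amount": [25.0, 40.0, 15.0],
    "time": pd.to_datetime(["2019-02-01", "2019-03-01", "2019-07-01"]),
})

# Register the relations and the one-to-many link between them.
es = ft.EntitySet(id="fraud_demo")
es = es.add_dataframe(dataframe_name="customers", dataframe=customers,
                      index="customer_id", time_index="join_date")
es = es.add_dataframe(dataframe_name="transactions", dataframe=transactions,
                      index="transaction_id", time_index="time")
es = es.add_relationship("customers", "customer_id",
                         "transactions", "customer_id")

# Deep Feature Synthesis: one row per customer, with stacked
# aggregation/transform features (e.g. SUM(transactions.amount)).
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers",
                                      max_depth=2)
print(feature_matrix.head())
```

The dissertation's contribution is running this kind of synthesis against data too large for one machine's memory, which is where the EC2/EMR infrastructure described above comes in.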
|
183 |
Modelo de implementación de soluciones tecnológicas al 2020 / A model for implementing technology solutions by 2020. Romero La Rosa, Max Ronald; Zúñiga Alemán, Luis Fernando Alonso. 01 May 2016
Over the past ten years, various technologies have been developed that remain largely unknown, whether through lack of interest, lack of information about them, their high implementation cost, or a number of other factors. However, as the years pass, these technologies are being adopted at an accelerating pace. The four main macro-technologies that encompass all of these products are Information, Social, Mobile, and Cloud. These in turn include smartphones and smart homes, which involve a great deal of information transfer and have a presence in all four quadrants. Technology evolves up to a certain point at which it seems to advance no further. Likewise, applications seem to have reached their full potential on smartphones; yet new devices keep revolutionizing the market, such as the products of the company Nest, which combine information with hardware while substantially reducing the size of the devices. Because of this accelerated progress, many sectors tied to technology, above all the home and people in general, do not take the time to research these technological advances thoroughly, which in the future leads to a lack of knowledge of the tools oriented to their line of technological research. It is therefore extremely important to know these new technologies in order to stay at the forefront of the latest developments and advances, since they appear ever more frequently in everyday life, at every moment of the day.
This project proposes a model of a technological solution oriented to the kitchen environment and the user experience, one that provides a better understanding of how these technologies can be used as opportunities for improvement and how to obtain the greatest benefit from them in favor of the user's comfort and safety. / Thesis
|
184 |
A REFERENCE ARCHITECTURE FOR NETWORK FUNCTION VIRTUALIZATION. Unknown Date (has links)
Cloud computing has provided many services to potential consumers, one of these services being the provision of network functions using virtualization. Network Function Virtualization (NFV) is a new technology that aims to improve the way we consume network services. Legacy networking solutions differ in that consumers must buy and install dedicated hardware equipment. In NFV, networks are provided to users as software as a service (SaaS). Implementing NFV comes with many benefits, including faster module development for network functions, more rapid deployment, enhancement of the network on cloud infrastructures, and a lower overall cost of operating a network system. All these benefits can be achieved in NFV by turning physical network functions into Virtual Network Functions (VNFs). However, since this technology is still a new network paradigm, integrating the virtual environment into a legacy environment, or even moving entirely to NFV, adds to the complexity of adopting an NFV system. Also, a network service can be composed of several components provided by different service providers, which further increases the complexity and heterogeneity of the system. We apply abstract architectural modeling to describe and analyze the NFV architecture, using architectural patterns to build a flexible Reference Architecture (RA) for NFV that describes the system and how it works. RAs have proven to be a powerful way to abstract complex systems that lack semantics. Having an RA for NFV helps us understand the system and how it functions; it also helps us expose possible vulnerabilities that may lead to threats against the system. In the future, this RA could be enhanced into a security reference architecture (SRA) by adding misuse and security patterns so that it covers potential threats and vulnerabilities in the system. Our audience is system designers, system architects, and security professionals who are interested in building a secure NFV system. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
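To make the "network functions as composable software" idea concrete, here is a minimal sketch of a VNF service chain in code. The class and function names are my illustration of the concept, not elements of the dissertation's reference architecture:

```python
from abc import ABC, abstractmethod

class VNF(ABC):
    """A virtual network function: pure software, deployable on cloud nodes."""
    @abstractmethod
    def process(self, packet: dict) -> dict | None:
        """Return the (possibly rewritten) packet, or None to drop it."""

class Firewall(VNF):
    def __init__(self, blocked_ports: set[int]):
        self.blocked_ports = blocked_ports
    def process(self, packet):
        return None if packet["dst_port"] in self.blocked_ports else packet

class NAT(VNF):
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
    def process(self, packet):
        return {**packet, "src_ip": self.public_ip}

class ServiceChain:
    """An ordered composition of VNFs, possibly from different providers."""
    def __init__(self, vnfs: list[VNF]):
        self.vnfs = vnfs
    def process(self, packet):
        for vnf in self.vnfs:
            packet = vnf.process(packet)
            if packet is None:
                return None  # dropped somewhere in the chain
        return packet

chain = ServiceChain([Firewall(blocked_ports={23}), NAT("203.0.113.7")])
print(chain.process({"src_ip": "10.0.0.5", "dst_port": 443}))  # forwarded
print(chain.process({"src_ip": "10.0.0.5", "dst_port": 23}))   # None (dropped)
```

The reference architecture operates at a higher level of abstraction than this, but the composition across providers, and the trust boundaries it must secure, are the ones this toy chain exhibits.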
|
185 |
Vytvoření monitorovacího řešení pro službu PowerBI / Monitoring Solution for the PowerBI Service. Trifanov, Filip. January 2021 (has links)
This master's thesis deals with the design of a monitoring solution for the Power BI service. The thesis is divided into theoretical, analytical, and design sections. The theoretical section describes the theoretical fundamentals, the technologies used, and the analytical tools. The analytical section analyzes the company Intelligent Technologies, competing solutions, and the data sources for the design section. The design section proposes its own solution for monitoring the Power BI service, including the costs and benefits of the proposed solution.
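The thesis does not reproduce its implementation here; as an illustration of the kind of telemetry a Power BI monitoring solution draws on, the sketch below pulls audit events from the Power BI Admin REST API's Get Activity Events endpoint. The endpoint and response fields are from the public API, but the token handling and time window are placeholder assumptions:

```python
import datetime as dt
import requests

# Placeholder: in practice the bearer token comes from Azure AD (e.g. via
# MSAL) for a principal with Power BI admin permissions.
ACCESS_TOKEN = "<aad-access-token>"

def get_activity_events(day: dt.date) -> list[dict]:
    """Fetch one day of Power BI audit events, following continuation pages."""
    url = ("https://api.powerbi.com/v1.0/myorg/admin/activityevents"
           f"?startDateTime='{day}T00:00:00'&endDateTime='{day}T23:59:59'")
    events = []
    while url:
        resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
        resp.raise_for_status()
        body = resp.json()
        events.extend(body.get("activityEventEntities", []))
        url = body.get("continuationUri")  # None once the last page is read
    return events

for event in get_activity_events(dt.date(2021, 1, 15))[:5]:
    print(event.get("Activity"), event.get("UserId"), event.get("ItemName"))
```

A monitoring solution of the kind proposed would land these events in a dataset of their own, so that usage of the Power BI tenant can itself be analyzed in Power BI.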
|
186 |
Analyses, Mitigation and Applications of Secure Hash Algorithms. Al-Odat, Zeyad Abdel-Hameed. January 2020
Cryptographic hash functions are among the most widely used cryptographic primitives; their purpose is to ensure the integrity of systems and data. Hash functions are also utilized in conjunction with digital signatures to provide authentication and non-repudiation services. Secure Hash Algorithms have been developed over time by the National Institute of Standards and Technology (NIST) for security, optimal performance, and robustness. The best-known hash standards are SHA-1, SHA-2, and SHA-3.
A secure hash algorithm is considered weak if its security requirements have been broken. The main security attacks that threaten the secure hash standards are collision and length extension attacks. A collision attack works by finding two different messages that lead to the same hash. A length extension attack appends data to a message and produces a valid digest for the extended message without knowing the original input. Both attacks have already broken hash standards that follow the Merkle–Damgård construction. This dissertation proposes methodologies to improve and strengthen weak hash standards against collision and length extension attacks. We propose collision-detection approaches that help to detect a collision attack before it takes place, along with a suitable replacement supported by a stronger construction. The collision detection methodology protects weak primitives from possible collision attacks using two approaches: the first employs the near-collision detection mechanism proposed by Marc Stevens; the second is our own proposal. Moreover, this dissertation proposes a model that protects secure hash functions from both collision and length extension attacks. The model employs the sponge structure to construct a hash function; the resulting function is strong against collision and length extension attacks. Furthermore, to keep the general structure of Merkle–Damgård functions, we propose a model that replaces the SHA-1 and SHA-2 hash standards while retaining the Merkle–Damgård construction. This model employs the compression function of SHA-1, the function manipulators of SHA-2, and the 10*1 padding method. For the case of big data over the cloud, this dissertation presents several schemes to ensure data security and authenticity, including secure storage, anonymous privacy preservation, and auditing of big data over the cloud.
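The practical difference between the two constructions can be demonstrated with Python's standard library. This sketch is my illustration, not the dissertation's code: it shows why a naive hash(secret || message) tag over a Merkle–Damgård hash such as SHA-256 invites length extension, and two standard mitigations, HMAC and a sponge-based hash (SHA-3):

```python
import hashlib
import hmac

secret = b"server-side-key"
message = b"amount=100&to=alice"

# Naive MAC over a Merkle-Damgard hash: the digest equals the hash's full
# internal state, so an attacker who knows len(secret) can resume the
# computation and forge a valid tag for message + padding + suffix.
naive_tag = hashlib.sha256(secret + message).hexdigest()

# Mitigation 1: HMAC wraps the hash in two keyed passes; no resumable
# state is ever exposed, even though the underlying hash is SHA-256.
hmac_tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Mitigation 2: a sponge construction (SHA-3) outputs only part of its
# internal state, so the digest cannot be used to continue hashing.
sha3_tag = hashlib.sha3_256(secret + message).hexdigest()

print("naive :", naive_tag)
print("hmac  :", hmac_tag)
print("sha3  :", sha3_tag)
```

The dissertation's sponge-based model relies on exactly this property of the sponge: the capacity portion of the state never leaves the function, which blocks length extension by construction.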
|
187 |
Modelo Tecnológico de Reconocimiento Facial para la Identificación de Pacientes en el Sector Salud / A Facial Recognition Technology Model for Patient Identification in the Healthcare Sector. La Madrid Arroyo, Diego Alonso; Barriga Rivera, Martín Humberto. 01 December 2019
Medical fraud and cyberattacks in the healthcare sector are growing phenomena. Identity theft is a form of fraud whose purpose is to assume another person's identity at a medical institution in order to obtain medical goods and services, presenting false claims to insurers for financial gain. It therefore harms the insured population, since it involves invested money, time, and services rendered. Today it is often enough to present an identity document to be treated, a verification and validation measure that puts the patient at high risk in the event of fraud: when someone uses a victim's medical identity to obtain medical services or prescription drugs, that information is incorporated into the victim's electronic health record and can complicate their medical care in the future. Identifying the patient safely and unequivocally is therefore of vital importance, preventing anyone from impersonating them.
This project details the development of a technological model that identifies patients through a cognitive facial recognition service running on cloud computing, addressing the healthcare sector's need to prevent identity theft. In addition, in case of emergency, the identified patient's relatives are alerted to the patient's state of health by text message. The model is expected to allow patients to be treated without the need for an identity document when they arrive in a state of emergency, and to prevent fraud such as identity theft. Finally, a continuity plan will be defined containing real-time backup mechanisms for availability and reliability, along with software-level resources, which will be detailed in terms of features, specifications, and use. / Thesis
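The abstract does not name a cloud provider or SDK; as one possible realization, the sketch below uses Azure's Face service to detect a face and match it against a pre-enrolled patient group. The endpoint, key, group identifier, and confidence threshold are all illustrative assumptions:

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholder credentials; any comparable cloud cognitive service would do.
client = FaceClient("https://<region>.api.cognitive.microsoft.com",
                    CognitiveServicesCredentials("<subscription-key>"))

def identify_patient(image_path: str, patient_group_id: str):
    """Detect the face in a captured image and match it to an enrolled patient."""
    with open(image_path, "rb") as image:
        detected = client.face.detect_with_stream(image)
    if not detected:
        return None  # no face found in the image
    results = client.face.identify(face_ids=[detected[0].face_id],
                                   person_group_id=patient_group_id)
    candidates = results[0].candidates if results else []
    # Accept the match only above an assumed confidence threshold.
    if candidates and candidates[0].confidence >= 0.75:
        return candidates[0].person_id  # key for the patient's clinical record
    return None

patient = identify_patient("admission_photo.jpg", "enrolled-patients")
if patient is not None:
    print("Patient identified:", patient)  # then notify relatives by SMS
```

In the model described above, a successful match is what triggers both record retrieval and the SMS alert to the patient's relatives.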
|
188 |
Next Generation Cloud Computing Architectures: Performance and Pricing. Mahajan, Kunal. January 2021
Cloud providers need to optimize container deployments to efficiently utilize their network, compute, and storage resources. In addition, they require an attractive pricing strategy for compute services such as containers, virtual machines, and serverless computing in order to attract users, maximize their profits, and achieve a desired utilization of their resources. This thesis aims to tackle the twofold challenge of achieving high performance in container deployments and identifying the pricing for compute services.
For performance, the thesis presents a transport-adaptive network architecture (D-TAIL) that improves tail latencies. Existing transport protocols such as Homa and pFabric [1, 2] utilize the Shortest Remaining Processing Time (SRPT) scheduling policy, which is known to starve long flows because it prioritizes short flows. D-TAIL addresses this limitation by taking the age of a flow into consideration when deciding its priority. D-TAIL shows maximum reductions of 72%, 29.66%, and 28.39% in 99th-percentile flow completion time (FCT) for the transport protocols DCTCP, pFabric, and Homa, respectively. In addition, the thesis presents a container deployment design that utilizes a peer-to-peer network and a virtual file system with content-addressable storage to address the problem of cold starts in existing container deployment systems. The proposed design increases compute availability, reduces storage requirements, and prevents network bottlenecks.
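The abstract only states the idea of blending remaining work with age; as a rough illustration of why that prevents starvation (my sketch, not D-TAIL's actual priority function), consider an SRPT variant whose priority improves as a flow waits:

```python
import heapq

# Age-aware variant of SRPT: priority blends remaining bytes with how long
# the flow has waited, so long flows cannot starve forever. AGING_WEIGHT
# is an assumed constant, not a value from the thesis.
AGING_WEIGHT = 0.1

def priority(remaining_bytes: float, age: float) -> float:
    """Lower value = served first. Pure SRPT would return remaining_bytes."""
    return remaining_bytes - AGING_WEIGHT * age

flows = [  # (flow id, remaining bytes, age in scheduler ticks)
    ("short-new", 10_000, 0),
    ("long-old", 1_000_000, 12_000_000),  # has aged enough to overtake
    ("long-new", 1_000_000, 0),
]
queue = [(priority(size, age), fid) for fid, size, age in flows]
heapq.heapify(queue)
while queue:
    _, fid = heapq.heappop(queue)
    print("serve:", fid)  # long-old, short-new, long-new
```

Under pure SRPT the long-old flow would be served last despite its wait; the age term is what lets it overtake newly arriving short flows once it has waited long enough.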
For pricing, the thesis studies the tradeoffs between serverless computing (SC) and traditional cloud computing (virtual machines, VMs) using realistic cost models, queueing-theoretic performance models, and a game-theoretic formulation. For customers, we identify the workload split between SC and VMs that minimizes their cost while maintaining a given performance constraint. For the cloud provider, we identify the SC and VM prices that maximize its profit. The main result is the identification and characterization of three optimal operational regimes for both customers and the provider, which leverage either SC only, VMs only, or both in a hybrid configuration.
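A toy version of the customer's side of this tradeoff can be written down directly. The prices, arrival rate, and M/M/1-style delay model below are illustrative assumptions, not the thesis's calibrated models; the point is only that the cost-minimizing split falls out of a one-dimensional search:

```python
# Route a fraction f of the workload to a provisioned VM pool (flat hourly
# price, M/M/1 delay constraint) and the rest to serverless (per-request
# price). All constants are assumptions for illustration.
LAMBDA = 100.0      # arrivals per second
MU_VM = 120.0       # VM pool service rate, requests per second
PRICE_VM = 5.0      # $/hour for the VM pool while in use
PRICE_SC = 0.00002  # $/request for serverless
DELAY_CAP = 0.5     # max tolerable mean delay in the VM queue, seconds

def hourly_cost(f: float):
    vm_rate = f * LAMBDA
    if vm_rate > 0:
        if vm_rate >= MU_VM or 1.0 / (MU_VM - vm_rate) > DELAY_CAP:
            return None  # infeasible: VM queue unstable or too slow
        vm_cost = PRICE_VM
    else:
        vm_cost = 0.0
    return vm_cost + (1 - f) * LAMBDA * 3600 * PRICE_SC

feasible = [(c, f / 100) for f in range(101)
            if (c := hourly_cost(f / 100)) is not None]
cost, frac = min(feasible)
print(f"optimal VM fraction: {frac:.2f}, cost: ${cost:.2f}/hour")
```

With these numbers the VM pool wins outright; raising the arrival rate toward the pool's feasible capacity, or tightening the delay cap, pushes the optimum into the hybrid regime the thesis characterizes.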
|
189 |
Modelo de una solución ECM Open Source basado en cloud computing para una PYME del sector manufactura / Cloud-based Open-Source Enterprise Content Management Model at an SME operating in the Manufacturing sector. Montesinos Rosales, Andrea Yadira; Salas Villacorta, Rolando Sebastian. 27 January 2020 (has links)
Today, companies need to keep all their information resources structured and organized. They also want a channel that unifies everything in a single repository, giving the information structure and making it quickly available to their employees, customers, and suppliers, thereby optimizing their processes and gaining a competitive advantage, since the information is always organized and available when needed. To achieve this, many companies implement enterprise content management (ECM) software. With these systems, companies not only keep their information organized and structured, but also create a collaborative environment that helps manage and improve teamwork within the organization, while opening a channel for exchanging information with their surroundings and responding to the needs of customers, suppliers, and government entities.
With the passage of time and technological advances, ECMs have evolved. They are now being enhanced by cloud computing, and thanks to it, organizing information and collaborating is no longer tied to the workplace and an assigned device; it can be done from anywhere, on any device with internet access. Thanks to cloud technology, ECMs are also coming within reach of all types of companies, including SMEs.
This project presents a model for implementing a cloud-powered ECM solution in an SME. / Thesis
|
190 |
Scalable Scheduling Policies with Performance Guarantees for Cloud Applications. Psychas, Konstantinos. January 2020
We study three models of job scheduling in a distributed server system.
For each of them, we suggest scheduling algorithms that are computationally efficient and provably achieve a performance objective related to the model. Throughout, we consider jobs to be abstractions of executable programs that request specific resources, e.g., memory and CPU. Resources need to be reserved on one of the servers for the duration the program runs, which is unknown.
The first model considers queue-based scheduling algorithms, in which jobs belong to a finite set of types and each type has a separate queue. The scheduling objective under this formulation is to keep the size of all queues bounded, which translates into bounded queueing delay. The two families of algorithms we propose for this model achieve the objective up to the maximum theoretical workload. Most importantly, they follow vastly different paradigms, and both are viable alternatives depending on what other trade-offs the scheduler has to achieve.
The second model considers resource requirements of jobs that come from an unknown distribution. Jobs are queued, and the objective is again to keep the number of jobs in the queue, and consequently the queueing delay, bounded. In this harder formulation there was no previous characterization of the maximum workload under which the objective is achievable. We provide such a characterization, together with algorithms that achieve at least 2/3 of that maximum.
Lastly, we consider a model without queues, in which jobs are admitted or rejected on arrival, with the goal of maximizing the total utility of the jobs that run. Algorithms for this model had been proven to achieve at least 1/2 of the maximum utility, but further analysis suggests that this bound can be as high as 1 − 1/e. In all models we make simplifying assumptions that allow us to prove the desired properties of the system, but despite the theoretical nature of this work, we also discuss how the algorithms can be applied and tailored to the needs of different cloud applications. We hope they will eventually inspire improvements to existing cloud infrastructure management deployments.
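As a concrete illustration of the admit-or-reject setting (my sketch, not one of the thesis's algorithms), a simple greedy policy admits an arriving job only if some server can still fit its resource vector, placing it on the server that leaves the least slack:

```python
# Greedy best-fit admission for the queueless model. Server capacities and
# job resource vectors [cpu cores, memory GB] are illustrative assumptions.
SERVERS = [[16, 64], [16, 64], [32, 128]]  # remaining free capacity per server

def admit(job):
    """Admit a job on the feasible server with least leftover slack, or reject."""
    fits = [i for i, free in enumerate(SERVERS)
            if all(f >= r for f, r in zip(free, job))]
    if not fits:
        return None  # reject: no server can host this job right now
    best = min(fits, key=lambda i: sum(f - r for f, r in zip(SERVERS[i], job)))
    SERVERS[best] = [f - r for f, r in zip(SERVERS[best], job)]
    return best

for job in [[8, 32], [8, 16], [24, 100], [16, 64]]:
    print(job, "-> server", admit(job))
```

The 1/2 and 1 − 1/e guarantees quoted above apply to online policies of roughly this flavor, which must decide irrevocably at arrival time without knowing how long each admitted job will run.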
|