481 |
Využití databázových systémů v sw balících typu ERP/ERP II / Usage of database systems in robust ERP/ERP II sw packages. Vašek, Martin January 2010 (has links)
This thesis deals with the use of database systems in ERP/ERP II software packages. Its goal is to define the position of ERP/ERP II systems in the information-system market. Related to this topic are the characteristics of database systems and the definition of their specific position with respect to ERP/ERP II solutions. Besides classical solutions, where the whole information system is situated "inside" a company, new approaches are also analyzed that rely on an external provider of ERP/ERP II and database services, particularly the SaaS and Cloud Computing models. The thesis also evaluates the benefits and threats of these new business models with respect to different sizes of ERP/ERP II solutions. After the introductory theoretical chapters, respondents were chosen from groups of producers and distributors of ERP/ERP II products and a questionnaire survey was carried out. Its goal is to clarify the main reasons for choosing specific database platforms with different types of ERP/ERP II solutions. Afterwards, with the aid of defined hypotheses, the thesis attempts to explain the degree of independence of robust ERP/ERP II software packages from the underlying database platform. The closing parts of the thesis compare the individual database platforms with each other with respect to their suitability for use in ERP/ERP II systems. Database systems are closely analyzed from several points of view. On the basis of the ascertained theoretical and empirical frequency of particular database solutions, the dominant market players are identified, and a multi-criteria comparison is used to explain the reasons for their success over other competitors. Finally, the thesis outlines the direction in which the market for database systems used in ERP/ERP II products is expected to grow.
|
482 |
[en] DISTRIBUTED RDF GRAPH KEYWORD SEARCH / [pt] BUSCA DISTRIBUÍDA EM GRAFO RDF POR PALAVRA-CHAVE. DANILO MORET RODRIGUES 26 December 2014 (has links)
[en] The goal of this dissertation is to improve RDF keyword search. We propose a scalable approach, based on a tensor representation that allows for distributed storage, and thus the use of parallel techniques to speed up the search over large linked data sets, in particular those published as Linked Data. An unprecedented amount of information is becoming available following the principles of Linked Data, forming what is called the Web of Data. This information, typically codified as RDF subject-predicate-object triples, is commonly abstracted as a graph in which subjects and objects are nodes and predicates are edges connecting them. As a consequence of the widespread adoption of search engines on the World Wide Web, users are familiar with keyword search. For RDF graphs, however, extracting a coherent subset of data graphs to enrich search results is a time-consuming and expensive task, yet one that users expect to be executed on the fly. The dissertation's goal is to handle this problem. A recent proposal indexes RDF graphs as a sparse matrix holding the pre-computed information necessary for faster retrieval of sub-graphs, and runs tensor-based queries over that sparse matrix. The tensor approach can leverage modern distributed computing techniques, e.g., non-relational database sharding and the MapReduce model. In this dissertation, we propose a design and explore the viability of the tensor-based approach to build a distributed datastore and speed up keyword search with a parallel approach.
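The indexing idea summarized above lends itself to a small illustration. The following Python sketch is a hypothetical toy, not the dissertation's implementation: it stores a few RDF triples in a sparse adjacency matrix (a fuller design would keep one slice per predicate, forming the tensor the abstract mentions) and expands keyword hits into a surrounding subgraph through sparse matrix-vector products, the operation that distributed storage and MapReduce-style execution can parallelize.

```python
# Toy sketch: RDF triples indexed as a sparse adjacency matrix, with keyword
# hits expanded into a subgraph via repeated sparse matrix-vector products.
import numpy as np
from scipy.sparse import lil_matrix

triples = [
    ("ex:Alice", "ex:knows", "ex:Bob"),
    ("ex:Bob", "ex:worksAt", "ex:AcmeCorp"),
    ("ex:AcmeCorp", "ex:locatedIn", "ex:Berlin"),
]

# Map every subject/object to a matrix index.
nodes = sorted({t[0] for t in triples} | {t[2] for t in triples})
idx = {n: i for i, n in enumerate(nodes)}

# Undirected adjacency matrix; keeping one such slice per predicate would
# yield the tensor representation described in the abstract.
adj = lil_matrix((len(nodes), len(nodes)), dtype=np.int8)
for s, _, o in triples:
    adj[idx[s], idx[o]] = 1
    adj[idx[o], idx[s]] = 1
adj = adj.tocsr()

def keyword_subgraph(keyword, hops=2):
    """Seed with nodes matching the keyword, then expand `hops` steps."""
    frontier = np.array([keyword.lower() in n.lower() for n in nodes],
                        dtype=np.int8)
    reached = frontier.copy()
    for _ in range(hops):
        frontier = (adj @ frontier > 0).astype(np.int8)
        reached |= frontier
    return [n for n, r in zip(nodes, reached) if r]

print(keyword_subgraph("acme"))
# ['ex:AcmeCorp', 'ex:Alice', 'ex:Berlin', 'ex:Bob']
```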
|
483 |
What Are the Security Challenges Concerning Maintenance Data in the Railway Industry? Khan, Hiba January 2019 (has links)
Recently, technological advancement has brought improvement to all sectors, including the railway sector. Internet of Things (IoT) based railway systems have immense potential to improve quality and to enable a more efficient, environmentally friendly railway system. Much research has brought innovations that offer enormous benefits for rail travel. The current research focuses on the railway industry, which wants to reap the benefits of IT concepts such as Cloud Computing, Information Security, and the Internet of Things (IoT). Railway industries generate a large volume of data every day from different sources. In addition, machine and human interactions are rapidly increasing along with the development of technologies. These data need to be properly gathered, analysed and shared in a way that keeps them safe from different types of cyberattacks and calamities. To overcome the limitations of smart devices and the Cloud, a new paradigm known as Fog computing has appeared, in which an additional layer processes the data and sends the results to the Cloud. Despite the numerous benefits Fog computing brings to IoT-based environments, privacy and security issues remain the main challenge for its implementation. Hence, the primary purpose of this research is to investigate the potential challenges, consequences, threats, vulnerabilities, and risk management of data security in the railway infrastructure in the context of eMaintenance.
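As a loose, hypothetical illustration of the Fog pattern the abstract describes (an intermediate layer that processes raw data and forwards only results to the Cloud), the Python sketch below aggregates sensor readings at a fog node before upload; the metric names and threshold are invented for the example, and the upload stub marks the hop whose security the thesis investigates.

```python
# Hypothetical fog-layer sketch: aggregate raw maintenance sensor data locally
# and send only the summary to the cloud.
from statistics import mean

def fog_process(readings, threshold=80.0):
    """Aggregate raw axle-temperature readings at the fog node."""
    return {
        "mean_temp": round(mean(readings), 1),
        "alerts": [r for r in readings if r > threshold],
    }

def send_to_cloud(summary):
    # Stand-in for an authenticated, encrypted upload; securing this hop
    # is part of what the thesis investigates.
    print("uploading:", summary)

send_to_cloud(fog_process([62.0, 64.0, 86.0, 64.0]))
# uploading: {'mean_temp': 69.0, 'alerts': [86.0]}
```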
|
484 |
Determinants influencing adoption of cloud computing by small medium enterprises in South Africa. Matandela, Wanda January 2017 (has links)
Submitted in partial fulfillment of the requirements for the degree of Master of Commerce in Information Systems (Coursework) at the School of Economic and Business Sciences, University of the Witwatersrand, June 2017 / Small Medium Enterprises (SMEs) are now recognized as the driving force behind most thriving economies. This is mainly attributed to the role they play in most economies in reducing unemployment and their contribution towards Gross Domestic Product. This means that SMEs should have the right resources to enable them to enhance performance. Choosing the right technology to support their businesses is one of the important decisions that SMEs should make. Understanding the benefits and challenges of different technologies is often a problem for most SMEs.
One of the new technologies that has gained prominence in recent years is cloud computing. Even though the value associated with this technology has been widely researched, especially for large enterprises, the rate at which SMEs adopt cloud computing still remains low. This research sought to explore and describe the determinants influencing the adoption of cloud computing by SMEs in South Africa. The study used the Technology-Organization-Environment (TOE) framework as the theoretical lens for understanding the cloud computing adoption phenomenon.
Further, this qualitative exploratory and descriptive study used semi-structured interviews to collect data from five SMEs based in Johannesburg, Gauteng Province, operating in different industries and belonging to the National Small Business Chamber.
The main factors identified as playing an important role in the adoption of cloud computing by SMEs are relative advantage, complexity, compatibility, awareness, trialability, culture, top management support, size, regulation and trade partner relationships. It is worth noting that there was not enough evidence that competitive pressure played a significant role in SME cloud adoption. / XL2018
|
485 |
A collaborative architecture against DDOS attacks for cloud computing systems. / Uma arquitetura colaborativa contra ataques distribuídos de negação de serviço para sistemas de computação em nuvem. Almeida, Thiago Rodrigues Meira de 14 December 2018 (has links)
Distributed attacks, such as Distributed Denial of Service (DDoS) attacks, require not only the deployment of standalone security mechanisms responsible for monitoring a limited portion of the network, but also distributed mechanisms able to jointly detect and mitigate the attack before the complete exhaustion of network resources. This need has led to the proposal of several collaborative security mechanisms, covering different phases of attack mitigation: from its detection to the relief of the system after the attack subsides. Such mechanisms are expected to enable collaboration among security nodes through the distributed enforcement of security policies, either by installing security rules (e.g., for packet filtering) or by provisioning new specialized security nodes on the network, or both. Albeit promising, existing proposals that distribute security tasks among collaborative nodes usually do not consider an optimal allocation of computational resources. As a result, their operation may lead to poor Quality of Service for legitimate packet flows during the mitigation of a DDoS attack. Aiming to tackle this issue, this work proposes a collaborative solution against DDoS attacks with two main goals: (1) to ensure an optimal use of resources already available in the attack's datapath in a proactive way, and (2) to optimize the placement of security tasks among the collaborating security nodes. Regardless of the characteristics of each main goal, legitimate traffic must be preserved, with packet loss reduced as much as possible. / No abstract.
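The resource-placement goal can be pictured with a toy sketch. The Python fragment below greedily assigns filtering load to security nodes along the attack's datapath, upstream first, so attack traffic is dropped before it consumes downstream resources; it illustrates the optimization objective only and is not the thesis's actual algorithm, whose node capacities and ordering would come from the network itself.

```python
# Toy sketch of distributed filtering-task placement along an attack datapath.
datapath = [  # ordered from attack source toward the victim
    {"node": "edge-1", "spare_capacity_mbps": 400},
    {"node": "core-2", "spare_capacity_mbps": 250},
    {"node": "dc-gw", "spare_capacity_mbps": 100},
]

def place_filtering(attack_mbps):
    """Greedily assign filtering load upstream-first."""
    plan, remaining = [], attack_mbps
    for hop in datapath:
        if remaining <= 0:
            break
        share = min(hop["spare_capacity_mbps"], remaining)
        plan.append((hop["node"], share))
        remaining -= share
    # remaining > 0 would mean new security nodes must be provisioned
    return plan, remaining

print(place_filtering(600))  # ([('edge-1', 400), ('core-2', 200)], 0)
```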
|
486 |
Modeling and analysis of security. Unknown Date (has links)
Cloud Computing is a new computing model consisting of a large pool of hardware and software resources on remote datacenters that are accessed through the Internet. Cloud Computing faces significant obstacles to its acceptance, such as security, virtualization, and lack of standardization. For Cloud standards, there is a long debate about their role, and more demands for Cloud standards are being put on the table. The Cloud standardization landscape remains ambiguous. To model and analyze security standards for Cloud Computing and web services, we surveyed Cloud standards, focusing on the standards for security, and classified them by groups of interest. Cloud Computing leverages a number of technologies such as Web 2.0, virtualization, and Service-Oriented Architecture (SOA). SOA uses web services to facilitate the creation of SOA systems by adopting different technologies despite their differences in formats and protocols. Several committees such as W3C and OASIS are developing standards for web services; their standards are rather complex and verbose. We have expressed web services security standards as patterns to make it easy for designers and users to understand their key points. We have written two patterns for two web services standards: WS-SecureConversation and WS-Federation. This completes earlier work we have done on web services standards. We showed relationships between web services security standards and used them to solve major Cloud security issues, such as authorization and access control, trust, and identity management. Close to web services, we investigated the Business Process Execution Language (BPEL) and addressed security considerations in BPEL and how to enforce them. To see how Cloud vendors look at web services standards, we took Amazon Web Services (AWS) as a case study. Our review of the AWS documentation showed that web services security standards are barely mentioned. We highlighted some areas where web services security standards could address some AWS limitations and improve the AWS security process. Finally, we studied the security guidance of two major Cloud-developing organizations, CSA and NIST. Both miss the qualities offered by web services security standards. We expanded their work and added the benefits of adopting web services security standards in securing the Cloud. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2013.
|
487 |
Predictive modeling for chronic conditions. Unknown Date (has links)
Chronic diseases are the major cause of mortality around the world, accounting for 7 out of 10 deaths each year in the United States. Because of their adverse effect on quality of life, they have become a major problem globally. The health care costs involved in managing these diseases are also very high. In this thesis, we focus on two major chronic diseases, Asthma and Diabetes, which are among the leading causes of mortality around the globe. The work involves the design and development of a predictive-analytics-based decision support system that uses five supervised machine learning algorithms to predict the occurrence of Asthma and Diabetes. This system helps in controlling the disease well in advance by selecting its best indicators and providing the necessary feedback. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
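A minimal sketch of such a five-algorithm comparison, written with scikit-learn, might look as follows; the abstract does not name the algorithms or features used, so the classifiers and the synthetic data here are stand-ins, not the thesis's actual pipeline.

```python
# Sketch: compare five supervised classifiers for disease-occurrence prediction.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for patient records labeled with disease occurrence.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
}
for name, model in models.items():
    # 5-fold cross-validated accuracy as a simple comparison metric.
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```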
|
488 |
Projeto de um broker de gerenciamento adaptativo de recursos em computação em nuvem baseado em técnicas de controle realimentado / Design of an adaptive resource management broker for cloud computing based on feedback control techniques. Nobile, Pedro Northon 25 February 2013 (has links)
Cloud computing refers to a computer resource deployment model in which software and hardware infrastructure are offered as a service. Cloud computing has become a successful paradigm due to the versatility and cost-effectiveness involved in this business model, making it possible to share a cluster of physical resources among several users and applications. With the advent of cloud computing and elastic virtualized computing resources, dynamic allocation of resources is becoming more prominent, and along with it the issues concerning the establishment of contracts and quality-of-service parameters. Historically, research on QoS has focused on solutions for problems involving two entities: users and servers. However, in cloud environments a third party becomes part of this interaction: the cloud consumer, who uses the infrastructure to provide some kind of service to end users, and who has received less attention so far, especially regarding the development of autonomic mechanisms for dynamic resource allocation under time-varying demand. This work aims at the development of an architecture for adaptive resource management involving three entities, focused on consumer revenue. Drawing on feedback control techniques to find adaptive solutions to dynamic resource allocation problems, the research outcome is a consumer broker architecture, a corresponding prototype, and a feedback-control design method for computer systems of this nature.
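The feedback-control idea behind such a broker can be sketched as a proportional-integral loop that resizes the consumer's virtual machine pool to track a response-time target; the gains, names, and figures below are illustrative assumptions rather than the controller designed in the thesis.

```python
# Sketch of a PI feedback loop for adaptive cloud resource allocation.
class PIAllocator:
    def __init__(self, target_latency_ms, kp=0.05, ki=0.01):
        self.target = target_latency_ms
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def vms_to_add(self, measured_latency_ms):
        """How many VMs to add (or remove, if negative) this cycle."""
        error = measured_latency_ms - self.target  # positive when too slow
        self.integral += error
        return round(self.kp * error + self.ki * self.integral)

allocator = PIAllocator(target_latency_ms=200)
for latency in [350, 320, 240, 210, 190]:  # successive monitoring samples
    delta = allocator.vms_to_add(latency)
    print(f"latency={latency}ms -> adjust pool by {delta:+d} VM(s)")
```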
|
489 |
Orquestração de migração massiva de máquinas virtuais baseada em análise cíclica para ambientes de computação na nuvem. / Massive virtual machine live migration orchestration for cloud computing environment based on cyclic analyses. Baruchi, Artur 15 May 2015 (has links)
A key feature of virtualization technology is Live Migration, which allows a virtual machine to be moved from one physical host to another without interrupting execution. This feature enables the implementation of more sophisticated policies inside a cloud environment, such as energy and computational resource optimization. However, live migration can impose severe performance degradation on the virtual machine's applications and cause multiple impacts on the service provider's infrastructure, such as network congestion and performance degradation of co-located virtual machines. Unlike several other studies, this work considers the virtual machine workload an important factor and argues that carefully choosing a proper moment to migrate can reduce the live migration penalties. This work introduces the Application-aware Live Migration Architecture (ALMA), which intercepts live migration submissions and, based on the application workload, postpones the migration to a more propitious moment. The experiments conducted in this work demonstrated that the architecture reduced live migration time by up to 74% for benchmarks and by up to 67% for real application workloads. Network data transfer during live migration was reduced by up to 62%. The present work also introduces a model to predict the live migration cost for an application, and a migration algorithm that is not sensitive to the virtual machine's memory usage.
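ALMA's postponement decision can be sketched roughly as follows; the dirty-page-rate heuristic and the threshold are illustrative assumptions standing in for ALMA's cyclic workload analysis, not its actual model.

```python
# Sketch: intercept a live migration request and defer it to a calmer moment.
import time

LINK_MBPS = 1000  # assumed migration link bandwidth

def favorable(dirty_rate_mbps):
    # Pre-copy live migration converges only if pages are dirtied more slowly
    # than they can be transferred; demand a comfortable margin.
    return dirty_rate_mbps < 0.5 * LINK_MBPS

def migrate_when_favorable(sample_dirty_rate, do_migration,
                           max_wait_s=600, poll_s=10):
    """Postpone migration until the workload is favorable or a deadline hits."""
    waited = 0
    while waited < max_wait_s:
        if favorable(sample_dirty_rate()):
            return do_migration()
        time.sleep(poll_s)        # re-evaluate on the next monitoring cycle
        waited += poll_s
    return do_migration()         # deadline reached: migrate anyway

# Example with a monitoring stub whose dirty rate decays over time.
rates = iter([900, 700, 300])
migrate_when_favorable(lambda: next(rates),
                       lambda: print("migrating"), poll_s=1)
```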
|
490 |
Informações de suporte ao escalonamento de workflows científicos para a execução em plataformas de computação em nuvem / Support information to scientific workflow scheduling for execution in cloud computing platforms. Teixeira, Eduardo Cotrin 26 April 2016 (has links)
Science has been using computing resources to perform scientific processes and experiments that can be modeled as workflows handling large data volumes and performing actions such as selection, analysis and visualization of these data according to a specific procedure. Scientific workflows have been used by scientists from many areas, such as astronomy and bioinformatics, and tend to be computationally intensive and heavily focused on handling large data volumes, which requires the use of high-performance computing platforms such as grids or clouds. For workflow execution on these platforms it is necessary to assign the workflow activities to the available computational resources, a process known as scheduling. Cloud computing platforms have proved to be a viable alternative for scientific workflow execution, but scheduling in the cloud must take into account specific constraints such as a limited budget or the type of computing resources to be used in execution. In this context, information such as the estimated duration of execution, or time and cost limits (here generally referred to as scheduling support information), becomes important for efficient scheduling and execution, aiming to achieve the expected results. This work identifies support information that can be added to scientific workflow models to support efficient scheduling and execution on cloud computing platforms. We propose and analyze a classification of such information and its use in Scientific Workflow Management Systems (SWMS). To assess the impact of support information on scheduling, experiments were conducted with scientific workflow models carrying different support information, scheduled with algorithms that were adapted to consider the added information. The experiments showed a reduction of up to 59% in the financial cost of workflow execution in the cloud, and a reduction of up to 8.6% in the makespan, compared to the execution of the same workflows scheduled without any support information available.
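How such support information can drive a scheduling decision may be sketched as follows; the VM types, prices, and greedy rule below are illustrative assumptions, not the adapted algorithms evaluated in the work.

```python
# Sketch: pick the cheapest VM type that satisfies the deadline and budget
# recorded as scheduling support information in the workflow model.
vm_types = [  # (name, relative speed, $ per hour) - illustrative figures
    ("small", 1.0, 0.05),
    ("medium", 2.0, 0.12),
    ("large", 4.0, 0.30),
]

def pick_vm(estimated_hours_on_small, deadline_hours, budget):
    """Cheapest VM type whose runtime and cost fit the support information."""
    feasible = []
    for name, speed, price in vm_types:
        runtime = estimated_hours_on_small / speed
        cost = runtime * price
        if runtime <= deadline_hours and cost <= budget:
            feasible.append((cost, name, runtime))
    if not feasible:
        raise ValueError("no VM type satisfies the deadline and budget")
    return min(feasible)  # minimize financial cost among feasible choices

print(pick_vm(estimated_hours_on_small=8, deadline_hours=5, budget=1.0))
# (0.48, 'medium', 4.0): medium meets both limits at the lowest cost
```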
|