41

Avaliação da aplicabilidade de recursos na área de TI para a continuidade dos sistemas críticos ao negócio / Evaluation of applicability in IT resources for business critical systems continuity

Becker, Mauricio 03 December 2012 (has links)
Made available in DSpace on 2016-04-04T18:31:33Z (GMT). No. of bitstreams: 1 Mauricio Becker.pdf: 1680814 bytes, checksum: e32eecc9e8f4e602513a2049aa488491 (MD5) Previous issue date: 2012-12-03 / In the current globalized and connected context, where companies and organizations increasingly rely on their information technology infrastructure for business continuity, business-critical systems and applications need to be available and operational for end users and customers nearly twenty-four hours a day, three hundred and sixty-five days a year. In view of this, this research sought to identify the state of the art in technologies for high-availability computing environments and how best-practice guides for IT governance and management can help keep business-critical systems available and operational. As a complement, a survey was applied to understand existing IT practices in one of the three economically most important states of the Brazilian Northeast. The study was conducted between December 2011 and March 2012 and aimed to evaluate current investments in IT resources, identify the main causes of unplanned downtime, and measure its impact. From the data analysis it was possible to identify improvement opportunities regarding infrastructure, people, processes, and services in order to minimize the unavailability of business-critical systems for companies and organizations.
42

Alta disponibilidade: uma abordagem com DNS e Proxy Reverso em Multi-Cloud / High availability: an approach with DNS and reverse proxy in multi-cloud

Pires, Luis Paulo Gonçalves 15 December 2016 (has links)
Made available in DSpace on 2017-02-01T13:15:39Z (GMT). No. of bitstreams: 1 LUIS PAULO GONCALVES PIRES.pdf: 3166033 bytes, checksum: 043d546bf3a8212c07798369bfcc2f7f (MD5) Previous issue date: 2016-12-15 / Pontifícia Universidade Católica de Campinas – PUC Campinas / While there is considerable enthusiasm for migrating on-premise data centers to cloud computing services, there is still some concern about the availability of those same services. This is due, for example, to historical incidents such as the 2011 outage in which a failure on Amazon's servers took the sites of several of its customers down for almost 36 hours. In view of this, it becomes necessary to develop strategies to guarantee the availability offered by the providers. The present work proposes a solution that implements high availability in multi-cloud environments through the distribution of DNS access and the use of a reverse proxy. A financial analysis, based on market prices for cloud computing services, showed that the proposed solution can even be advantageous compared with the traditional one. Specifically, a multi-cloud system consisting of two clouds with 99.90% availability each provides a total availability of 99.999% and costs 34% less than a single cloud with 99.95% availability. Simulation results, obtained in a virtualized environment using two clouds with availabilities of 99.49% and 99.43%, showed a system availability of 99.9971%. Multi-cloud systems thus make it possible to build high-availability systems from lower-availability clouds, according to the user's needs, while also saving on provider service costs.
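The availability figures quoted above follow from the standard parallel-redundancy arithmetic: assuming the clouds fail independently, the combined system is unavailable only when every cloud is down at the same time. A minimal sketch of that calculation (an illustration of the arithmetic, not code from the thesis):

```python
def multi_cloud_availability(*availabilities: float) -> float:
    """Availability of clouds operated in parallel, assuming independent
    failures: the system is down only when all clouds are down at once."""
    joint_downtime = 1.0
    for a in availabilities:
        joint_downtime *= 1.0 - a
    return 1.0 - joint_downtime

# Two clouds at 99.49% and 99.43% combine to ~99.9971%, matching the
# simulated system availability reported in the abstract.
print(f"{multi_cloud_availability(0.9949, 0.9943):.4%}")  # 99.9971%
```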
43

Study of mechanisms ensuring service continuity for IKEv2 and IPsec protocols

Palomares Velasquez, Daniel 14 November 2013 (has links) (PDF)
During 2012, global mobile traffic was 70% higher than in 2011. The arrival of 4G technology introduced 19 times more traffic than non-4G sessions, and in 2013 the number of mobile devices connected to the Internet exceeded the number of human beings on earth. This scenario puts great pressure on Internet service providers (ISPs), which must ensure access to the network and maintain its QoS. In the short to medium term, operators will rely on alternative access networks in order to maintain the same performance characteristics; the traffic of clients might thus be offloaded from RANs to other available access networks. However, those wireless access networks do not ensure the same security level. Femtocells, WiFi or WiMAX (among other wireless technologies) must rely on some mechanism to secure communications and avoid untrusted environments. Operators mainly use IPsec to extend a security domain over untrusted networks, which introduces new challenges for IPsec in terms of performance and connectivity. This thesis studies mechanisms for improving the IPsec protocol in terms of continuity of service. Continuity of service, also known as resilience, becomes crucial when offloading traffic from RANs to other access networks. We therefore first define the protocols that secure an IP communication, IKEv2 and IPsec, then present a detailed study of the parameters needed to keep a VPN session alive and demonstrate that it is possible to dynamically manage a VPN session between different gateways. Among the reasons that justify managing VPN sessions are high availability, load sharing, and load balancing for IPsec connections; these mechanisms increase the continuity of service of IPsec-based communications. For example, if a security gateway fails, the ISP should be able to overcome the situation and provide mechanisms that ensure continuity of service to its clients. Some new mechanisms have recently been implemented to provide high availability over IPsec: the open-source VPN project strongSwan implemented a mechanism called ClusterIP in order to create a cluster of IPsec gateways. We merged ClusterIP with our own developments to define two architectures, High Availability and Context Management, over Mono-LAN and Multi-LAN environments. We call Mono-LAN those architectures where the cluster of security gateways is configured under a single IP address, whereas Multi-LAN concerns architectures where the security gateways are configured with different IP addresses. Performance measurements throughout the thesis show that transferring a VPN session between different gateways avoids re-authentication delays and reduces both CPU consumption and the recomputation of cryptographic material. From an ISP's point of view, this could be used to relieve overloaded gateways, redistribute load, and improve network performance and QoS. The idea is to allow a peer to enjoy continuity of service while maintaining the security level that was initially negotiated.
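Transferring a live VPN session, as described above, amounts to replicating the IKE_SA/CHILD_SA state on the target gateway so the peer never re-runs authentication. A hypothetical sketch of that idea follows; the field names are illustrative assumptions, not the parameter set the thesis actually identifies:

```python
import pickle
from dataclasses import dataclass, field

@dataclass
class IpsecSessionContext:
    """Illustrative subset of the state a gateway would replicate so a
    peer's VPN session survives a gateway switch (field names assumed)."""
    ike_spi_initiator: bytes          # IKE_SA identifiers
    ike_spi_responder: bytes
    child_sa_spis: list[int]          # ESP SPIs of the CHILD_SAs
    sk_d: bytes                       # keying material used for rekeying
    crypto_suite: str                 # negotiated algorithms
    replay_counters: dict[int, int] = field(default_factory=dict)

def export_context(ctx: IpsecSessionContext) -> bytes:
    # Serialize for transfer; a real deployment would ship this over
    # an authenticated, encrypted channel between cluster members.
    return pickle.dumps(ctx)

def import_context(blob: bytes) -> IpsecSessionContext:
    # The target gateway restores the session without forcing the
    # peer through a new (costly) IKEv2 authentication exchange.
    return pickle.loads(blob)

ctx = IpsecSessionContext(b"\x01" * 8, b"\x02" * 8, [0x1001],
                          b"secret-prf-output", "aes256gcm16-prfsha384",
                          {0x1001: 42})
assert import_context(export_context(ctx)) == ctx
```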
44

Risk-based proactive availability management - attaining high performance and resilience with dynamic self-management in Enterprise Distributed Systems

Cai, Zhongtang 10 January 2008 (has links)
Complex distributed systems play an increasingly serious role in industry and society. Examples are distributed information-flow systems, which continuously acquire, manipulate, and disseminate information across an enterprise's distributed sites and machines, and distributed server applications co-deployed in one or more shared data centers, each with different performance/availability requirements that vary over time and compete for the shared resources. Consequently, it becomes ever more important for enterprise-scale IT infrastructure to provide timely and reliable delivery and processing of service requests. Despite more than 30 years of progress in distributed-computer connectivity, availability, and reliability, this has not become easier, and has arguably become harder [ReliableDistributedSys], for many reasons: the increasing complexity of enterprise-scale computing infrastructure; the distributed nature of these systems, which makes them prone to failures, e.g., because of inevitable Heisenbugs; the need to consider diverse and complex business objectives and policies, including risk preferences and attitudes; conflicts between performance and availability, with sub-systems of varying importance competing for resources in the typical shared environment; and the best-effort nature of resources such as networks, which makes resource availability itself an issue. This thesis proposes a novel business-policy-driven, risk-based, automated availability management that uses an automated decision engine to make availability decisions and meet business policies while optimizing overall system utility, and uses utility theory to capture users' risk attitudes and reconcile potentially conflicting business goals and resource demands in enterprise-scale distributed systems. For critical and complex enterprise applications, since a key contributor to application utility is the time taken to recover from failures, we develop a novel proactive fault-tolerance approach that uses online failure prediction to dynamically determine the acceptable amounts of additional processing and communication resources (i.e., costs) to be spent to attain given levels of utility and acceptable failure-recovery delays. Since resource availability itself is often not guaranteed in typical shared enterprise IT environments, this thesis provides IQ-Paths, with probabilistic service guarantees, to address dynamic network behavior in realistic enterprise computing environments. The risk-based formulation links the operational guarantees expressed by utility and enforced by the PGOS algorithm with the higher-level business objectives sought by end users. Together, this thesis proposes a novel availability-management framework and methods for large-scale enterprise applications and systems, with the goal of providing different levels of performance/availability guarantees for multiple applications and sub-systems in a complex shared distributed computing infrastructure. More specifically, it addresses the following problems.
For data-center environments: (1) how to provide availability management for applications and systems that vary in both resource requirements and importance to the enterprise, based both on operational-level quantities and on business-level objectives; (2) how to deal with managerial policies such as risk attitude; and (3) how to handle the tradeoff between performance and availability, given the limited resources of a typical data center. Since realistic business settings extend beyond single data centers, a second set of problems concerns predictable and reliable operation in wide-area settings: (4) how to provide high availability in widely distributed operational systems with low-cost fault-tolerance mechanisms, and (5) how to provide probabilistic service guarantees given best-effort network resources.
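The pairing of utility theory with risk attitudes described above implies a concrete decision rule: score each fault-tolerance option by expected utility under a risk-averse (here exponential) utility function and pick the maximizer. The following is a hypothetical sketch only; the utility form and all numbers are invented for illustration, not taken from the thesis:

```python
import math

def exp_utility(value: float, risk_tolerance: float) -> float:
    """Exponential utility: concave, so a risk-averse user penalizes
    large losses (long recoveries) more than proportionally."""
    return 1.0 - math.exp(-value / risk_tolerance)

# Each option: (extra resource cost, failure probability,
#               recovery delay in seconds if a failure happens).
options = {
    "reactive":  (0.0, 0.10, 60.0),  # recover only after a failure
    "proactive": (5.0, 0.10, 5.0),   # pre-replicate state, fast recovery
}

def expected_utility(cost: float, p_fail: float, delay: float,
                     revenue: float = 100.0, loss_per_sec: float = 1.0,
                     risk_tol: float = 50.0) -> float:
    good = exp_utility(revenue - cost, risk_tol)
    bad = exp_utility(revenue - cost - loss_per_sec * delay, risk_tol)
    return (1.0 - p_fail) * good + p_fail * bad

best = max(options, key=lambda name: expected_utility(*options[name]))
print(best)  # proactive: the recovery-time saving outweighs its cost
```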
45

Crash recovery with partial amnesia failure model issues

De Juan Marín, Rubén 30 September 2008 (has links)
Replicated systems are a kind of distributed system whose main goal is to ensure that computer systems are highly available, fault tolerant, and high performing. One of the latest trends in replication techniques managed by replication protocols is the use of Group Communication Systems, and more specifically of the atomic broadcast communication primitive, to develop more efficient replication protocols. An important aspect of these systems is how they manage the disconnection of nodes (which degrades their service) and the connection/reconnection of nodes to restore their original support; in replicated systems this task is delegated to recovery protocols, whose operation depends especially on the failure model adopted. A model commonly used for systems managing large state is crash-recovery with partial amnesia, because it implies short recovery periods. Assuming it, however, raises several problems. Most of them have already been solved in the literature: view management, the abort of local transactions started in crashed nodes (in transactional environments), or the re-inclusion of new nodes into the replicated system. But one problem related to this failure model has not been fully considered: the amnesia phenomenon, which can lead to inconsistencies if it is not correctly managed. This work presents and formalizes this amnesia-induced inconsistency problem, defining the properties that must be fulfilled to avoid it and defining possible solutions. It also presents and formalizes an amnesia-induced inconsistency problem that appears under a specific sequence of events allowed by the majority-partition progress condition and forces the system to stop, proposing properties and solutions for overcoming it and, as a consequence, a new majority-partition progress condition. / De Juan Marín, R. (2008). Crash recovery with partial amnesia failure model issues [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3302
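The amnesia phenomenon the thesis formalizes can be shown with a toy example (not from the thesis): replicas of a counter acknowledge an atomically broadcast update in memory, but one crashes before persisting it and recovers from its last checkpoint, silently diverging from the group.

```python
class Node:
    """Toy replica under crash-recovery with partial amnesia:
    updates are applied in memory and persisted only at checkpoints."""
    def __init__(self) -> None:
        self.state = 0          # in-memory value, acknowledged to the group
        self.stable_state = 0   # what actually survives a crash

    def deliver(self, delta: int) -> None:
        self.state += delta     # atomic-broadcast delivery, acked here

    def checkpoint(self) -> None:
        self.stable_state = self.state

    def crash_and_recover(self) -> None:
        # Partial amnesia: everything since the last checkpoint is
        # forgotten, even though the rest of the group saw it acked.
        self.state = self.stable_state

a, b = Node(), Node()
for node in (a, b):
    node.deliver(10)
a.checkpoint()                  # only node a persisted the update
b.crash_and_recover()
print(a.state, b.state)         # 10 0 -> the replicas have diverged
```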
46

Optimalizace distribuovaného kolektoru síťových toků / Optimization of Distributed Network Flow Collector

Wrona, Jan January 2016 (has links)
This thesis is focused on the optimization of a distributed IP flow information collector. Nowadays the centralized collector is a frequently used solution, but it is already reaching its performance limits in large-scale and high-speed networks. The implementation of the distributed collector is in its early phase, and it is necessary to look for solutions that use it to its full potential. This thesis therefore proposes a shared-nothing architecture without a single point of failure; using it, the distributed collector tolerates the failure of at least one node. Distributed flow-data analysis software, whose performance scales linearly with the number of nodes, is also part of this thesis.
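A common way to realize such a shared-nothing design is to shard flow records across collector nodes by hashing the flow key, so each node owns a disjoint subset with no coordinator. The sketch below is an assumption about that pattern, not code from the thesis (a fault-tolerant deployment would add replication or consistent hashing so a node failure does not remap all shards):

```python
import hashlib

NODES = ["collector-1", "collector-2", "collector-3"]  # hypothetical names

def owner(flow_key: tuple, nodes: list[str] = NODES) -> str:
    """Map a flow to the node that stores it by hashing its 5-tuple;
    every node owns a disjoint shard and no shared state exists."""
    digest = hashlib.sha1(repr(flow_key).encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

# 5-tuple: source IP, destination IP, ports, transport protocol.
flow = ("10.0.0.1", "10.0.0.2", 51234, 443, "tcp")
print(owner(flow))  # e.g. "collector-2"
```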
47

Dostupná řešení pro clustrování serverů / Available Solutions for Server Clustering

Bílek, Václav January 2008 (has links)
The goal of this master's thesis is to analyze open-source resources for load balancing and high availability, with a focus on the areas of their typical usage. These areas are, in particular, network-infrastructure solutions (routers, load balancers), network and Internet services in general, and parallel filesystems. The next part of the thesis analyzes the design, implementation, and planned advancement of a fast-growing Internet project, whose growth makes it necessary to solve scalability at all levels. The last part is a performance analysis of individual load-balancing methods in the Linux Virtual Server project.
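Two of the scheduling methods compared in such an analysis, round-robin and least-connections (both offered by the Linux Virtual Server project), differ in whether the balancer consults current load. A toy sketch of the two selection rules, for illustration only (this is not LVS code):

```python
from itertools import cycle

servers = {"rs1": 0, "rs2": 0, "rs3": 0}  # real server -> active connections

rr = cycle(servers)
def pick_round_robin() -> str:
    return next(rr)             # rotate through servers regardless of load

def pick_least_connections() -> str:
    return min(servers, key=servers.get)   # follow the actual load

for _ in range(6):              # simulate six new long-lived connections
    servers[pick_least_connections()] += 1
print(servers)                  # {'rs1': 2, 'rs2': 2, 'rs3': 2}
```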
