71

Modelo para avaliação da qualidade de projetos de planos de continuidade de negócios aplicados a sistemas computacionais. / Business continuity plans projects applied to computer systems quality evaluation model.

Wagner Ludescher 26 May 2011 (has links)
Given the constant need for uninterrupted operation of computer systems in the most diverse organizations, it is imperative that business continuity and disaster recovery arrangements be in place, tested, and ready to be invoked. It is therefore essential to have a way of assessing whether the information, the procedures, and the level of knowledge of the organization's employees are adequate to face an unexpected and devastating event in the organization's computing environment. This thesis proposes a hierarchical model to represent and evaluate the quality of Business Continuity Plan Projects (BCPPs) applied to computer systems. The model maps the main characteristics these plans should have, according to the main standards on the subject (BS 25999, ABNT NBR ISO/IEC 27001, and ABNT NBR ISO/IEC 27002), the experience of specialists in the field, and real data from BCPP users obtained through questionnaires. The thesis also proposes a Quality Index (QI) for BCPPs that allows an existing BCPP to be compared against an ideal one, identifying its weak points and providing the organization with information for seeking solutions that will improve the current plan.
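The quality-index idea can be illustrated with a small sketch. The following is a hypothetical weighted-average scoring scheme, not the thesis's actual hierarchical model or formula; the attribute names, weights, and 0-1 scores are assumptions chosen purely to show how an existing BCPP might be compared against an ideal one.

```python
# Illustrative sketch of a weighted Quality Index (QI) for a BCPP.
# Attributes, weights, and the 0-1 scoring scale are assumptions for
# illustration; the thesis's actual hierarchical model is not reproduced here.

IDEAL = {  # an ideal plan scores every attribute at 1.0
    "documented_procedures": 1.0,
    "staff_training_level": 1.0,
    "recovery_site_readiness": 1.0,
    "test_frequency": 1.0,
}

WEIGHTS = {  # relative importance of each attribute (sums to 1.0)
    "documented_procedures": 0.30,
    "staff_training_level": 0.25,
    "recovery_site_readiness": 0.25,
    "test_frequency": 0.20,
}

def quality_index(plan_scores: dict) -> float:
    """Weighted score of an existing plan relative to the ideal plan."""
    achieved = sum(WEIGHTS[a] * plan_scores.get(a, 0.0) for a in WEIGHTS)
    ideal = sum(WEIGHTS[a] * IDEAL[a] for a in WEIGHTS)
    return achieved / ideal

current_plan = {
    "documented_procedures": 0.8,
    "staff_training_level": 0.5,
    "recovery_site_readiness": 0.9,
    "test_frequency": 0.4,
}
print(f"QI = {quality_index(current_plan):.2f}")  # 0.67 -> weak points in training and testing
```

In a scheme like this, a QI below 1.0 points to the weighted attributes where the existing plan falls short of the ideal, which is the kind of gap information the thesis argues the organization needs.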
72

Managing Effective Communication After a Crisis

Thompson, Enid Alane 01 January 2016 (has links)
Despite the effects of natural disasters on small business owners, the owners' communication strategies for alleviating losses to their companies' profitability remain problematic. The purpose of this qualitative descriptive multiunit case study was to explore what communication strategies some small business owners developed and implemented to facilitate resuming their business operations after a natural disaster. The targeted population consisted of 2 small business owners located along the Belmar Boardwalk in Belmar, New Jersey. The conceptual framework for this study was Coombs' situational crisis communication theory. Case data were collected from semistructured interviews and company documents. Employing member checking and methodological triangulation increased the assurance of the study's credibility and trustworthiness. The data analysis consisted of separating the data into groupings, identifying major groupings, assessing the information within the major groups, and developing thematic interpretations. The 4 validated themes that emerged were communication, community, disaster recovery, and stakeholders (employees). The findings from this study may contribute to social change by providing communication strategies that small business owners can use to mitigate losses from disasters and facilitate businesses' and communities' recovery, reducing further losses.
73

A framework for availability, performance and survivability evaluation of disaster tolerant cloud computing systems

SILVA, Bruno 26 February 2016 (has links)
Cloud Computing Systems (CCSs) allow the use of application services by users around the world. An important challenge for CCS providers is to supply a high-quality service even in the presence of failures, overloads, and disasters. A Service Level Agreement (SLA) is often established between providers and clients to define the availability, performance, and security requirements of such services, and fines may be imposed on providers if the SLA's quality parameters are not met. A widely adopted strategy to increase CCS availability and mitigate the effects of disasters is the use of redundant subsystems and geographically distributed data centers; with this approach, the services of an affected data center can be transferred to operational data centers of the same CCS. However, data center synchronization time increases with distance, which may affect system performance, and over-provisioning of resources may affect service profitability, given the high cost of redundant subsystems. Therefore, an assessment that includes performance, availability, the possibility of disasters, and data center allocation is of utmost importance for CCS projects. This work presents a framework for evaluating geographically distributed CCSs that estimates metrics related to performance, availability, and disaster recovery (for man-made or natural disasters). The proposed framework comprises an evaluation process, a set of models, an evaluation tool, and a fault injection tool. The evaluation process helps designers represent CCSs and obtain the desired metrics; it adopts a formal hybrid modeling approach that combines high-level CCS models, stochastic Petri nets (SPNs), and reliability block diagrams (RBDs) for representing and evaluating CCS subsystems. An evaluation tool (GeoClouds Modcs) is proposed to allow easy representation and evaluation of cloud computing systems. Finally, a fault injection tool for CCSs (Eucabomber 2.0) is presented to estimate availability metrics and validate the proposed models. Several case studies are presented, analyzing survivability, performance, and availability metrics across multiple data center allocation scenarios.
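As a rough illustration of the kind of calculation an RBD supports in this setting, the sketch below estimates the steady-state availability of two geographically redundant data centers arranged in parallel. The MTTF/MTTR figures are illustrative assumptions, not values from the thesis, and the actual framework combines SPN and RBD models rather than this closed-form shortcut.

```python
# Illustrative RBD-style availability estimate for two geographically
# redundant data centers in parallel. MTTF/MTTR values (in hours) are
# illustrative assumptions, not figures from the thesis.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single subsystem."""
    return mttf_hours / (mttf_hours + mttr_hours)

a_dc1 = availability(mttf_hours=4000.0, mttr_hours=8.0)
a_dc2 = availability(mttf_hours=3500.0, mttr_hours=12.0)

# Parallel (redundant) composition: the service is unavailable only if
# both data centers are down at the same time.
a_system = 1.0 - (1.0 - a_dc1) * (1.0 - a_dc2)

downtime_hours_per_year = (1.0 - a_system) * 365 * 24
print(f"A_dc1={a_dc1:.5f}  A_dc2={a_dc2:.5f}  A_system={a_system:.7f}")
print(f"Expected downtime: {downtime_hours_per_year * 60:.1f} minutes/year")
```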
74

Service Availability in Cloud Computing : Threats and Best Practices

Adegoke, Adekunle, Osimosu, Emmanuel January 2013 (has links)
Cloud computing provides access to on-demand computing resources and storage space, whereby applications and data are hosted in data centers managed by third parties, on a pay-per-use price model. This allows organizations to focus on core business goals instead of managing in-house IT infrastructure. However, as more business-critical applications and data are moved to the cloud, service availability is becoming a growing concern. A number of recent cloud service disruptions have called into question the reliability of cloud environments for hosting business-critical applications and data. The impact of these disruptions varies, but in most cases there are financial losses and damaged reputation among consumers. This thesis aims to investigate the threats to service availability in cloud computing and to provide best practices to mitigate some of these threats. As a result, we identified eight categories of threats. They include, in no particular order: power outage, hardware failure, cyber-attack, configuration error, software bug, human error, administrative or legal dispute, and network dependency. A number of systematic mitigation techniques that cloud providers can use to ensure constant availability of service were identified. In addition, practices that cloud customers and users of cloud services can apply to improve service availability are presented.
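To make the availability concern concrete, the sketch below converts an availability target of the kind found in SLAs into the yearly downtime it permits. The "nines" targets shown are common industry figures used only for illustration and are not taken from the thesis.

```python
# Convert an SLA availability target into the downtime it permits per year.
# The listed targets are common "nines" figures used for illustration only.

MINUTES_PER_YEAR = 365 * 24 * 60

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Yearly downtime budget implied by an availability percentage."""
    return (1.0 - availability_pct / 100.0) * MINUTES_PER_YEAR

for target in (99.0, 99.9, 99.99, 99.999):
    print(f"{target:>7.3f}% availability -> "
          f"{allowed_downtime_minutes(target):8.1f} minutes of downtime/year")
```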
75

Problematika RTO a RPO v riadení kontinuity / The issue of RTO and RPO in business continuity management

Salai, Viktor January 2011 (has links)
This thesis is concerned with Business Continuity Management, with a focus on determining the RTO and RPO parameters. The work is divided into two main parts. The first part briefly describes the theoretical issues of Business Continuity Management in conjunction with IT Service Continuity Management. It then describes various technologies currently used to ensure the continuity and recovery of IT systems. The theoretical part concludes with the process of Business Impact Analysis. The thesis thus offers a comprehensive view of the main principles and benefits these concepts bring to a company and of how they should be integrated with each other. It provides an overview of various technologies, their link to the RTO and RPO parameters, and the steps of a disaster recovery procedure. The work also briefly defines the basic procedure for determining the RTO and RPO parameters and the subsequent steps needed to design recovery solutions, and analyzes various aspects that need to be considered when defining the continuity parameters. The second, practical part builds on the previously defined procedure; its aim is to analyze data acquired through a questionnaire survey in a chosen company, determine the continuity parameters for the applications in use, and then suggest appropriate ways of addressing their backup and recovery. The result of this thesis is therefore a description of the analysis and practical application of methods for setting continuity parameters for information systems and for the subsequent establishment of the necessary measures.
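A minimal sketch of the kind of check this procedure leads to: comparing candidate backup and recovery technologies against the RTO and RPO that a Business Impact Analysis assigned to an application. The technology names and timing figures below are hypothetical, not data from the thesis.

```python
# Illustrative check of candidate recovery technologies against the RTO/RPO
# that a Business Impact Analysis assigned to an application. All names and
# timing figures (in hours) are assumptions for illustration.

candidates = {
    # technology: (worst-case data loss achieved, worst-case recovery time achieved)
    "nightly tape backup":      (24.0, 48.0),
    "disk snapshot every 4h":   (4.0,  8.0),
    "asynchronous replication": (0.25, 2.0),
    "synchronous replication":  (0.0,  0.5),
}

def meets_requirements(required_rpo: float, required_rto: float) -> list[str]:
    """Return the technologies whose achieved RPO/RTO fall within the targets."""
    return [name for name, (rpo, rto) in candidates.items()
            if rpo <= required_rpo and rto <= required_rto]

# Example: a critical application whose BIA tolerates at most 1h of data loss
# and 4h of outage.
print(meets_requirements(required_rpo=1.0, required_rto=4.0))
# -> ['asynchronous replication', 'synchronous replication']
```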
76

A review of the implementation of disaster risk assessments in the city of Cape Town: challenges and prospects

White, Deon Robin January 2013 (has links)
Masters in Public Administration - MPA / The problem question of this study is how the City of Cape Town, as a metro municipality, went about implementing Disaster Risk Assessments, while the National Disaster Management Centre acknowledges that municipalities are battling to perform them. Understanding what was done, by whom, and when will aid the understanding of how Disaster Risk Assessments are implemented. Uncovering the prospects and challenges faced will help shed light on the guidance required by other municipalities, although the inferences of this study are limited by its methodology. The relatively new Disaster Management Act requires a shift from old civil defence legislation to a proactive disaster risk reduction mode, with new institutional arrangements. The shift to a proactive disaster risk reduction approach required by the new legislation cannot be achieved without first implementing these new institutional and policy arrangements and, second, carrying out the first and vital step in the disaster risk reduction process, namely Disaster Risk Assessments. The study also seeks to understand whether the community was involved. This is a qualitative study, i.e. it contains descriptive statistics and narratives. It used questionnaires to provide numerical and descriptive data to measure compliance with the Disaster Management Act in terms of the institutional arrangements implemented by the City of Cape Town. Secondly, qualitative data were collected through semi-structured interviews to understand the challenges and prospects encountered in performing Disaster Risk Assessments. A literature review was also undertaken to highlight the current debates in Disaster Risk Reduction. The stratified sample was drawn from the officials employed at the City's Disaster Management Centre, Area Managers, NGOs, Ward Councillors, and Consultants. The data were collated and then analysed. The objective is primarily to understand what was done, by whom, and when, and secondly to understand the prospects and challenges faced. The findings, recommendations, and areas of future study are captured in this research report.
77

Risk- och krishantering utifrån Business Continuity Management : - En studie under en rådande kris / Risk and crisis management based on the strategy of Business Continuity Management

Olsson, Isabell, Hedberg, Marcus, Eriksson, Martin January 2020 (has links)
Background: As crises occur more and more frequently around the world, it is vital for organizations to have an implemented strategy to cope with these disruptions. One way to manage risks and crises is Business Continuity Management, which involves the entire process of planning actions in advance, crisis management, and post-crisis evaluation. Purpose: The purpose of this thesis is to identify and analyse the theories related to Business Continuity Management and whether these theories are applied in practice in large Swedish manufacturing companies. Furthermore, the study aims to create a deeper understanding, through comparison and review, of how the studied companies act during an ongoing crisis. Method: The thesis is based on a qualitative research method in which the empirical material was collected via semi-structured interviews; additional empirical material comes from the studied companies' annual and quarterly reports. A comparative research design has been applied, which allows comparisons between the studied companies' use of BCM. Conclusion: The study finds that none of the studied companies fully applies Business Continuity Management; however, models and strategies in line with the theoretical frame of reference for BCM are used.
78

Virtualizace a optimalizace IT v produkční společnosti / Virtualisation and Optimalisation of IT in a Production Company

Popelka, David January 2013 (has links)
This master's thesis deals with the implementation of server virtualization at Entity production, s.r.o., which is a small company by size but needs modern IT technologies for its normal operations. The thesis shows how to reuse server virtualization technologies from large corporations that were previously unavailable to small businesses. It also shows how to optimize the current IT environment.
79

Supporting mega-collaboration: a framework for the dynamic development of team culture

Newlon, Christine Mae 19 October 2011 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This research project, inspired by the nationwide crisis following Hurricane Katrina, identifies mega-collaboration as an emergent social phenomenon enabled by the Internet. The substantial, original contribution of this research is a mega-collaboration tool (MCT) to enable grassroots individuals and organizations to rapidly form teams, negotiate problem definitions, allocate resources, organize interventions, and mediate their efforts with those of official response organizations. The project demonstrated that a tool that facilitates the exploration of a team’s problem space can support online collaboration. It also determined the basic building blocks required to construct a mega-collaboration tool. In addition, the project demonstrated that it is possible to dynamically build the team data structure through use of the proposed interface, a finding that validates the database design at the core of the MCT. This project has made a unique contribution by proposing a new operational vision of how disaster response, and potentially many other problems, should be managed in the future.
80

The Effects of Anti-price Gouging Legislation on Supply Chain Dynamics

Maynard, Jason Edward 01 January 2011 (has links) (PDF)
The purpose of this thesis is to model the effects of anti-price gouging (APG) legislation on the costs to businesses during the recovery period after a disaster. A system dynamics model of a business's replenishment procedures is used to simulate the effects of APG legislation on business performance. Economists have published expansive research on the effects of price ceilings on supply and demand, but there is little research evidence on the operational consequences of price ceiling legislation on business costs. APG legislation increases consumers' forward buying and shortage gaming after a disaster by removing price incentives to be frugal. Forward buying and shortage gaming are two key drivers of demand variation and the bullwhip effect, which leads to increased inventory costs, misguided capacity expansion, and reduced service levels. These costs have a negative impact on local businesses that are critical to a community's economic health and recovery from a disaster. The simulation results from this thesis show that APG legislation is not an effective regulatory response for decreasing the impact of disasters on affected communities.
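The replenishment dynamics described above can be sketched with a simple order-up-to inventory loop hit by a post-disaster demand spike; the result is order variance well above demand variance, i.e. the bullwhip effect. The parameters and smoothing policy below are illustrative assumptions and do not reproduce the thesis's system dynamics model.

```python
# Illustrative order-up-to replenishment loop with a post-disaster demand
# spike, showing order variance exceeding demand variance (bullwhip effect).
# All parameters are assumptions; this is not the thesis's actual model.

from statistics import pvariance

LEAD_TIME = 2          # periods between placing and receiving an order
SAFETY_FACTOR = 1.5    # crude stand-in for a safety-stock policy

demand = [100] * 10 + [300] * 5 + [100] * 15   # demand spike after a disaster
inventory, pipeline, orders = 300, [100] * LEAD_TIME, []
forecast = 100.0

for d in demand:
    inventory += pipeline.pop(0)          # receive the order placed LEAD_TIME ago
    inventory -= d                        # serve (or backlog) current demand
    forecast = 0.7 * forecast + 0.3 * d   # exponential smoothing of demand
    target = SAFETY_FACTOR * forecast * (LEAD_TIME + 1)
    order = max(0.0, target - inventory - sum(pipeline))
    pipeline.append(order)
    orders.append(order)

print(f"demand variance: {pvariance(demand):8.1f}")
print(f"order variance:  {pvariance(orders):8.1f}   (amplification = bullwhip)")
```

Under these assumptions the spike in orders overshoots the spike in demand and is followed by a stretch of zero orders, which is the cost-driving behavior the abstract attributes to forward buying and shortage gaming.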
