11

Increasing the robustness of a service in a complex information flow

Johansson, Albin January 2019 (has links)
In complex information flows where large volumes of varied data are transmitted across many companies and divisions, incidents will occur. Visma Spcs had an incident in which invoices sent from Visma to Visma's customers were duplicated, and the service meant to receive the transactions did not handle the duplicates properly. They decided to upgrade the receiver service to prevent this incident from recurring, and to fix some other long-standing issues in the service. Incidents like this one must be investigated, and a solution must be implemented to decrease the likelihood of similar incidents. This report gives examples of how this can be handled and of the benefits of tackling technical debt, along with how much more complicated the solutions can become if the service cannot be taken offline.
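The duplicate-invoice failure described above is commonly mitigated by making the receiver idempotent. A minimal sketch of that idea follows, assuming each transaction carries a unique identifier; the names (`process_invoice`, `seen_ids`, `apply_to_ledger`) are illustrative, not taken from Visma's actual service.

```python
# Minimal sketch of an idempotent receiver: a duplicate of an
# already-processed transaction is acknowledged but not re-applied.
# In production the seen-ID set would live in durable storage
# (e.g. a database table with a unique constraint), not in memory.

seen_ids: set[str] = set()

def apply_to_ledger(transaction: dict) -> None:
    # Stand-in for the actual business effect.
    print(f"booking invoice {transaction['id']} for {transaction['amount']}")

def process_invoice(transaction: dict) -> str:
    tx_id = transaction["id"]          # unique ID assigned by the sender
    if tx_id in seen_ids:
        return "duplicate-ignored"     # safe to ack: already applied once
    apply_to_ledger(transaction)
    seen_ids.add(tx_id)                # record only after a successful apply
    return "processed"

if __name__ == "__main__":
    invoice = {"id": "inv-42", "amount": 100.0}
    print(process_invoice(invoice))    # processed
    print(process_invoice(invoice))    # duplicate-ignored
```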
12

Downtime cost and Reduction analysis: Survey results

Tabikh, Mohamad January 2014 (has links)
The purpose of this paper is to present a sample of how Swedish manufacturing companies deal with equipment downtime cost, and how they analyze its reduction. The study was performed through a web-based survey of Swedish firms with at least 200 employees. The main results show that the estimated downtime cost constitutes about 23.9 % of the total manufacturing cost and 13.3 % of planned production time. Additionally, the hourly cost of downtime, whether planned or unplanned, is relatively high. However, there is a shortage of systematic models capable of tracing the individual costs imposed by downtime events; this gap was apparent in that 83 % of the surveyed companies do not have any complete model for quantifying their downtime costs. Moreover, only a few companies develop their cost accounting methods, such as activity-based costing (ABC) and resource consumption accounting (RCA), to capture and reveal the real costs associated with planned and unplanned stoppages. The general pattern is still to allocate downtime cost to direct labor and lost capacity. Attempts to decrease downtime events, and thus costs, were based on scheduled maintenance tactics supported by the overall equipment effectiveness (OEE) tool as an indicator for confirming improvements. Nonetheless, the analysis indicates a need for optimized maintenance tactics that incorporate reliability-centered maintenance (RCM) and total productive maintenance (TPM) into companies' maintenance systems; the role of maintenance in reducing downtime impacts is not highly recognized. The same analysis shows that better results from performance measurement systems require implementing the total equipment effectiveness performance (TEEP) tool, whose advantage is that it indexes the impact of planned stoppages in the equipment utilization factor. Finally, the lack of fully integrated models for assessing downtime costs, and of frameworks for distinguishing planned from unplanned stoppages, is the main reason costs keep rising: improvements end up emphasizing areas with fewer cost-saving opportunities, which affects production efficiency and effectiveness and, in turn, costs and profit margins.
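For readers unfamiliar with the two metrics the abstract leans on, here is a minimal sketch of the standard OEE and TEEP definitions; the sample inputs are illustrative defaults, not figures from the survey.

```python
# Standard definitions: OEE measures effectiveness against planned
# production time, while TEEP measures it against all calendar time,
# so TEEP also exposes the impact of planned stoppages.

def oee(availability: float, performance: float, quality: float) -> float:
    return availability * performance * quality

def teep(oee_value: float, utilization: float) -> float:
    # utilization = planned production time / total calendar time
    return oee_value * utilization

if __name__ == "__main__":
    o = oee(availability=0.90, performance=0.95, quality=0.99)
    t = teep(o, utilization=0.60)
    print(f"OEE  = {o:.1%}")   # effectiveness during planned time
    print(f"TEEP = {t:.1%}")   # effectiveness over calendar time
```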
13

Zero-Downtime Deployment in a High Availability Architecture : Controlled experiment of deployment automation in a high availability architecture

Nilsson, Axel January 2018 (has links)
Computer applications are no longer local installations on our computers. Many modern web applications and services rely on an internet connection to a centralized server to access the full functionality of the application. High availability architectures can be used to provide redundancy in case of failure, ensuring customers always have access to the server. Due to the complexity of such systems and the need for stability, deployments are often avoided, and new features and bug fixes cannot be delivered to the end user quickly. In this project, an automation system is proposed to allow deployments to a high availability architecture while preserving its availability. The proposed automation system is then tested in a controlled experiment to see if it can deliver what it promises. During low amounts of traffic, the deployment system showed it could make a deployment with a statistically insignificant change in error rate compared to normal operations. Similar results were found during medium to high levels of traffic for successful deployments, but if the system had to recover from a failed deployment there was an increase in errors. However, the experiment also showed that the deployment system had a significant effect on the response time of the web application, compromising availability in certain situations.
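The experiment's core comparison, whether the error rate during a deployment differs detectably from the baseline error rate, can be sketched as a two-proportion z-test. A minimal sketch follows; the request and error counts are invented, and the thesis does not specify that this exact test was used.

```python
# Two-proportion z-test: is the error rate during a deployment
# significantly different from the error rate during normal operation?
from math import sqrt
from statistics import NormalDist

def two_proportion_z(errors_a: int, total_a: int,
                     errors_b: int, total_b: int) -> tuple[float, float]:
    p_a, p_b = errors_a / total_a, errors_b / total_b
    p_pool = (errors_a + errors_b) / (total_a + total_b)   # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))           # two-sided test
    return z, p_value

if __name__ == "__main__":
    # Hypothetical request counts: baseline vs. during a deployment.
    z, p = two_proportion_z(errors_a=12, total_a=10_000,
                            errors_b=18, total_b=10_000)
    print(f"z = {z:.2f}, p = {p:.3f}")
    print("no significant change" if p > 0.05 else "significant change")
```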
14

Návrh eliminace prostojů elektroúdržby / The Proposal of Electrical Maintenance Downtime Elimination

Novotný, František January 2010 (has links)
This thesis focuses on the description, analysis, and minimization of downtime periods related to electrical maintenance activities. The theoretical part describes the various kinds of waste that can occur during an electrician's work. In the Ventily plant, the most failure-prone machines were identified from an analysis of maintenance records, and appropriate solutions were proposed. In addition, the resulting benefits to the company were evaluated.
15

Studie efektivnosti využití strojů ve vybraném provozu / The Study Efficiency Utilization of Machines in the Selected Operation

Kolesa, Jan January 2016 (has links)
The thesis focuses on studying the effectiveness of selected machines in a company through overall equipment effectiveness indicators. The aim of this work is to find bottlenecks in production and to provide solutions that increase the productivity of the machinery by monitoring its operation. An analysis of a selected machine is carried out to detect the downtime that limits its running, and countermeasures against the individual downtime causes are drafted. Based on the lessons learned, the company can then implement the countermeasures to avoid productivity losses in production.
16

Kaffepaus : Analys av stopporsaker på paketeringslina hos Arvid Nordquist HAB / Coffee break : Analysis of stoppages at a packaging line at Arvid Nordquist HAB

Isacson, Mimmi, Jonviken, Caroline January 2019 (has links)
Arvid Nordquist HAB roasts, grinds and packages coffee in its factory in Solna, Stockholm. Following the installation of software for production and operations monitoring, RS Production, it has come to light that the share of short stops ("Mikrostopp", lasting up to two minutes) is sometimes very high. The main goal of this project was therefore to examine the most frequent causes of stops in the Mikrostopp category and to propose solutions and action plans. Another goal was to examine whether the stop categories available in RS Production are relevant. To reach these goals, the following questions were asked: Can the operators describe reality with the categories available in RS Production today? What are the main causes of stops in the category Mikrostopp? What measures can be taken to reduce the downtime related to stop causes in the categories Mikrostopp, Felorsak okänd (cause unknown) and Orsak Övrigt (cause: other)? Data was collected from RS Production, from observations, and through a survey and discussions with operators and technicians. The three most common causes of stops are silo changes, packets getting stuck, and packets tipping over. The analysis shows that stops are fairly evenly distributed over the working shifts: the weekend shifts had fewer stops in total, but also fewer total hours, and the night shifts had the most stops per worked hour. Eco, Kok and Festivita had the most stops per tonne of packaged coffee.
The downtime related to micro stops is most easily reduced with a buffer. In addition to the Mikrostopp category, the categories Felorsak okänd and Orsak övrigt were also studied. These two categories were used in a similar way, and the survey showed that operators had differing views of how and when Felorsak okänd should be used. Both categories were used even though more fitting categories existed in RS Production. One reason for this could be that the operator did not categorize the stop right away; another could be that the operator did not know where to find, or did not care to look for, a more suitable category. The recommendation is to merge these two categories into one.
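The buffer recommendation above can be illustrated with a toy discrete-time model: an upstream packer with random micro-stops feeding a downstream station through a buffer. All rates, probabilities, and stop lengths below are assumptions for illustration, not measurements from the Arvid Nordquist line.

```python
# Discrete-time sketch: an upstream machine that suffers random
# micro-stops feeds a downstream station through a buffer. Upstream
# runs 20% faster than downstream, so a large enough buffer lets the
# line ride out short upstream stops without starving downstream.
import random

def starved_seconds(capacity: float, seconds: int = 100_000,
                    stop_prob: float = 0.002, stop_len: int = 60,
                    rate: float = 1.2, seed: int = 1) -> int:
    rng = random.Random(seed)
    buffer = capacity            # start with a full buffer
    stopped_for = 0              # seconds left in the current micro-stop
    starved = 0
    for _ in range(seconds):
        if stopped_for > 0:
            stopped_for -= 1                  # upstream is mid micro-stop
        elif rng.random() < stop_prob:
            stopped_for = stop_len - 1        # a new micro-stop begins
        else:
            buffer += rate                    # upstream produces
        if buffer >= 1:
            buffer -= 1                       # downstream consumes one unit
        else:
            starved += 1                      # downstream starves: lost time
        buffer = min(buffer, capacity)        # buffer cannot exceed capacity
    return starved

if __name__ == "__main__":
    for cap in (0, 30, 120):
        print(f"buffer {cap:>3}: downstream starved "
              f"{starved_seconds(cap):>6} s of 100,000 s")
```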
17

An approach to failure prediction in a cloud based environment

Adamu, Hussaini, Bashir, Mohammed, Bukar, Ali M., Cullen, Andrea J., Awan, Irfan U. January 2017 (has links)
Failure in a cloud system is defined as an event that occurs when the delivered service deviates from the correct intended behavior. As cloud computing systems continue to grow in scale and complexity, there is an urgent need for cloud service providers (CSPs) to guarantee reliable on-demand resources to their customers in the presence of faults, thereby fulfilling their service level agreements (SLAs). Component failures in cloud systems are very familiar phenomena. However, large cloud service providers' data centers should be designed to provide a certain level of availability to the business system. The Infrastructure-as-a-Service (IaaS) cloud delivery model presents computational resources (CPU and memory), storage resources, and networking capacity that ensure high availability in the presence of such failures. In-production fault data recorded over a two-year period at the National Energy Research Scientific Computing Center (NERSC) has been studied and analyzed. Using the real-time data collected from the Computer Failure Data Repository (CFDR), this paper presents the performance of two machine learning (ML) algorithms, a Linear Regression (LR) model and a Support Vector Machine (SVM) with a linear Gaussian kernel, for predicting hardware failures in a real-time cloud environment to improve system availability. The performance of the two algorithms has been rigorously evaluated using the K-fold cross-validation technique. Furthermore, steps and procedures for future studies have been presented. This research will aid computer hardware companies and cloud service providers (CSPs) in designing reliable fault-tolerant systems by enabling better device selection, thereby improving system availability and minimizing unscheduled system downtime.
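As an illustration of the evaluation the abstract describes, here is a minimal scikit-learn sketch comparing a linear regression model and a Gaussian-kernel SVM under K-fold cross-validation. The features, the regression target, and the synthetic data stand in for the CFDR/NERSC records and are entirely hypothetical.

```python
# K-fold cross-validation comparing a linear regression model and an
# SVM regressor (RBF, i.e. Gaussian, kernel) on synthetic data standing
# in for the hardware-failure records used in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))        # hypothetical features, e.g. load,
                                     # temperature, age, error counts
y = X @ [0.5, -0.2, 0.8, 0.1] + rng.normal(scale=0.3, size=500)  # e.g. time to failure

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("LR", LinearRegression()),
                    ("SVM", SVR(kernel="rbf"))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```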
18

DISCRETION OR DIRECTION?: AN ANALYSIS OF PATROL OFFICER DOWNTIME

FAMEGA, CHRISTINE NATALIE 02 September 2003 (has links)
No description available.
19

Resilire: Achieving High Availability Through Virtual Machine Live Migration

Lu, Peng 16 October 2013 (has links)
High availability is a critical feature of data centers, cloud, and cluster computing environments. Replication is a classical approach to increase service availability by providing redundancy. However, traditional replication methods are increasingly unattractive for deployment due to several limitations such as application-level non-transparency, non-isolation of applications (causing security vulnerabilities), complex system management, and high cost. Virtualization overcomes these limitations through another layer of abstraction, and provides high availability through virtual machine (VM) live migration: a guest VM image running on a primary host is transparently checkpointed and migrated, usually at a high frequency, to a backup host, without pausing the VM; the VM is resumed from the latest checkpoint on the backup when a failure occurs. A virtual cluster (VC) generalizes the VM concept for distributed applications and systems: a VC is a set of multiple VMs deployed on different physical machines connected by a virtual network. This dissertation presents a set of VM live migration techniques, their implementations in the Xen hypervisor and Linux operating system kernel, and experimental studies conducted using benchmarks (e.g., SPEC, NPB, Sysbench) and production applications (e.g., Apache webserver, SPECweb). We first present a technique for reducing VM migration downtimes called FGBI. FGBI reduces the dirty memory updates that must be migrated during each migration epoch by tracking memory at block granularity. Additionally, it determines memory blocks with identical content and shares them to reduce the increased memory overheads due to block-level tracking granularity, and uses a hybrid compression mechanism on the dirty blocks to reduce the migration traffic. We implement FGBI in the Xen hypervisor and conduct experimental studies, which reveal that the technique reduces the downtime by 77% and 45% over competitors including LLM and Remus, respectively, with a performance overhead of 13%. We then present a lightweight, globally consistent checkpointing mechanism for virtual clusters, called VPC, which checkpoints the VC for immediate restoration after (one or more) VM failures. VPC predicts the checkpoint-caused page faults during each checkpointing interval, in order to implement a lightweight checkpointing approach for the entire VC. Additionally, it uses a globally consistent checkpointing algorithm, which preserves the global consistency of the VMs' execution and communication states, and only saves the updated memory pages during each checkpointing interval. Our Xen-based implementation and experimental studies reveal that VPC reduces the solo VM downtime by as much as 45% and reduces the entire VC downtime by as much as 50% over competitors including VNsnap, with a memory overhead of 9% and performance overhead of 16%. The dissertation's third contribution is a VM resumption mechanism, called VMresume, which restores a VM from a (potentially large) checkpoint on slow-access storage in a fast and efficient way. VMresume predicts and preloads the memory pages that are most likely to be accessed after the VM's resumption, minimizing otherwise potential performance degradation due to cascading page faults that may occur on VM resumption. Our experimental studies reveal that VM resumption time is reduced by an average of 57% and the VM's unusable time is reduced by 73.8% over native Xen's resumption mechanism. Traditional VM live migration mechanisms are based on hypervisors. However, hypervisors are increasingly becoming the source of several major security attacks and flaws. We present a mechanism called HSG-LM that does not involve the hypervisor during live migration. HSG-LM is implemented in the guest OS kernel so that the hypervisor is completely bypassed throughout the entire migration process. The mechanism exploits a hybrid strategy that reaps the benefits of both pre-copy and post-copy migration mechanisms, and uses a speculation mechanism that improves the efficiency of handling post-copy page faults. We modify the Linux kernel and develop a new page fault handler inside the guest OS to implement HSG-LM. Our experimental studies reveal that the technique reduces the downtime by as much as 55%, and reduces the total migration time by as much as 27% over competitors including Xen-based pre-copy, post-copy, and self-migration mechanisms. In a virtual cluster environment, one of the main challenges is to ensure equal utilization of all the available resources while avoiding overloading a subset of machines. We propose an efficient load balancing strategy using VM live migration, called DCbalance. Unlike previous work, DCbalance records the history of mappings to inform future placement decisions, and uses a workload-adaptive live migration algorithm to minimize VM downtime. We improve Xen's original live migration mechanism, implement the DCbalance technique, and conduct experimental studies. Our results reveal that DCbalance reduces the decision generating time by 79%, the downtime by 73%, and the total migration time by 38%, over competitors including the OSVD virtual machine load balancing mechanism and the DLB (Xen-based) dynamic load balancing algorithm. The dissertation's final contribution is a technique for VM live migration in Wide Area Networks (WANs), called FDM. In contrast to live migration in Local Area Networks (LANs), VM migration in WANs involves migrating disk data in addition to memory state, because the source and target machines do not share the same disk service. FDM is a fast and storage-adaptive migration mechanism that transmits both memory state and disk data with short downtime and total migration time. FDM uses the page cache to identify data that is duplicated between memory and disk, so as to avoid transmitting the same data unnecessarily. We implement FDM in Xen, targeting different disk formats including raw and Qcow2. Our experimental studies reveal that FDM reduces the downtime by as much as 87%, and reduces the total migration time by as much as 58% over competitors including pre-copy and post-copy disk migration mechanisms and the disk migration mechanism implemented in BlobSeer, a widely used large-scale distributed storage service. / Ph. D.
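As background for the pre-copy/post-copy trade-off several of these techniques build on, the following toy model sketches the classic pre-copy live-migration loop: re-send the pages dirtied during the previous round until the remaining set is small, then stop and copy. All parameters are invented, and this is the generic algorithm, not code from the dissertation or from Xen.

```python
# Toy model of pre-copy live migration: keep re-sending the pages the
# guest dirtied during the previous send round. Each round is shorter
# than the last (fewer pages to send), so the dirty set shrinks
# geometrically as long as dirty rate < send bandwidth, and the final
# stop-and-copy pause (the downtime) becomes short.

PAGES = 10_000        # guest memory size, in pages
BANDWIDTH = 2_000     # pages transferable per second
DIRTY_PER_SEC = 600   # pages the running guest dirties per second
THRESHOLD = 500       # stop-and-copy once this few pages remain
MAX_ROUNDS = 30       # cap in case the dirty rate is too high to converge

def precopy_migrate() -> None:
    dirty = PAGES                       # round 1 sends all of memory
    elapsed = 0.0
    for round_no in range(1, MAX_ROUNDS + 1):
        send_time = dirty / BANDWIDTH   # time spent sending this round
        elapsed += send_time
        # While sending, the still-running guest dirtied more pages:
        dirty = min(PAGES, int(DIRTY_PER_SEC * send_time))
        if dirty <= THRESHOLD:
            break
    downtime = dirty / BANDWIDTH        # guest paused for the final copy
    print(f"rounds: {round_no}, live phase: {elapsed:.2f}s, "
          f"downtime: {downtime * 1000:.0f}ms ({dirty} pages)")

if __name__ == "__main__":
    precopy_migrate()
```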
20

Macroergonomics to Understand Factors Impacting Patient Care During Electronic Health Record Downtime

Larsen, Ethan 18 September 2018 (has links)
Through significant federal investment and incentives, Electronic Health Records have become ubiquitous in modern hospitals. Over the past decade, these computer support systems have provided healthcare operations with new safety nets and efficiency increases, but they also introduce new problems when they suddenly go offline. These downtime events are chaotic and dangerous for patients: with the safety systems clinicians have become accustomed to offline, patients are at risk from errors and delays. This work applies the Macroergonomic methodology to an exploratory study of the issues related to patient care during downtime events. It uses data from existing sources within the hospital, such as the electronic health record itself; data collection mechanisms included interviews, downtime paper reviews, and workplace observations, and the triangulation of these mechanisms facilitated a thorough exploration of the issues of downtime. The Macroergonomic Analysis and Design (MEAD) methodology was used to guide the analysis of the data and to identify variances and shifts in responsibility due to downtime. The analysis of the data supports and informs the development of potential intervention strategies to enable hospitals to better cope with downtime events. Within MEAD, the assembled data informs the creation of a simulation model that was used to test the efficacy of the intervention strategies; the results of the simulation testing determine the specific parameters of the intervention suggestions as they relate to the target hospitals. The primary contributions of this work are an exploratory study of electronic health record downtime and its impacts on patient safety, and an adaptation of the Macroergonomic Analysis and Design methodology employing multiple data collection methods and a high-fidelity simulation model. The methodology is intended to guide future research into the downtime issue, and the direct findings can inform the creation of better downtime contingency strategies for the target hospitals, with some generalizability to all hospitals. / Ph. D. / Hospitals experience periodic outages of their computerized work support systems from a variety of causes, ranging from partial communication or access restrictions to a total shutdown of all computer systems. A hospital operating during a computerized outage, or downtime, is potentially unable to access computerized records and procedures or to conduct patient care activities that are facilitated by computerized systems. Hospitals need a means of coping with the complications of downtime and the loss of computerized support systems without risking patient care. This dissertation assesses downtime preparedness and planning through the application of Macroergonomics incorporating discrete event simulation. The results provide a further understanding of downtime risks and of deficiencies in current planning approaches. The study demonstrates the value of discrete event simulation as a tool to aid in Macroergonomic evaluations. Based on the Macroergonomic Analysis and Design method, downtime improvement strategies are developed and tested, demonstrating their potential efficacy over baseline. Through this dissertation, the deficiencies in current contingency plans are examined and exposed, furthering the application of Macroergonomics in healthcare.
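To make the discrete-event-simulation component concrete, here is a minimal sketch in the same spirit, using the SimPy library: registration takes longer under paper-based downtime procedures, so queues and waits grow while the EHR is offline. The single-desk model and every number below are invented for illustration, not drawn from the dissertation's target hospitals.

```python
# Minimal discrete-event sketch (SimPy) of the kind of model the
# dissertation describes: patient registration is slower on paper
# during an EHR outage, so time spent in registration rises.
import random
import simpy

EHR_UP_SERVICE = 5        # minutes to register a patient with the EHR
EHR_DOWN_SERVICE = 12     # minutes with paper-based downtime procedures
DOWNTIME = (240, 360)     # EHR offline between t=240 and t=360 minutes
ARRIVAL_MEAN = 7          # mean minutes between patient arrivals

def patient(env, desk, waits):
    arrived = env.now
    with desk.request() as req:
        yield req                           # wait for the registration desk
        service = (EHR_DOWN_SERVICE if DOWNTIME[0] <= env.now < DOWNTIME[1]
                   else EHR_UP_SERVICE)
        yield env.timeout(service)          # registration itself
    waits.append(env.now - arrived)

def arrivals(env, desk, waits, rng):
    while True:
        yield env.timeout(rng.expovariate(1 / ARRIVAL_MEAN))
        env.process(patient(env, desk, waits))

rng = random.Random(7)
env = simpy.Environment()
desk = simpy.Resource(env, capacity=1)
waits: list[float] = []
env.process(arrivals(env, desk, waits, rng))
env.run(until=720)                          # simulate a 12-hour shift
print(f"patients: {len(waits)}, mean time in registration: "
      f"{sum(waits) / len(waits):.1f} min")
```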
