1 |
Att styra säkerhet med siffror: En essä om (att se) gränser (Managing Safety by Numbers: An Essay on (Seeing) Limits). Engström, Diana. January 2015 (has links)
Work, especially in complex, dynamic workplaces, often requires subtle, local judgment about the timing of subtasks, relevance, importance, prioritization, and so forth. Still, people in the nuclear industry seem to believe that safety results from workers simply following procedures. In the wake of failure it can be tempting to introduce new procedures and an even stricter "rule-following culture", while little or no attention is given to tacit knowledge and individual skill. I aim to show the inadequacy of placing too much trust in formalization, and of assuming that the reporting and trending of events will yield increased learning, improved nuclear safety, and efficient use of operational experience. The ability to interpret a concrete situation depends on proven experience of similar situations, analogical thinking, and tacit knowledge. In this essay I problematize the introduction and use of so-called Corrective Action Programs (CAP) and the computerized reporting systems linked to them in the nuclear industry. What I found is that the whole industry, from regulators to licensees, seems stuck in the idea that the scientific perspective on knowledge is the only "true" perspective. This leads to an exaggerated belief that technology and formalized work processes and routines will create a safer business. The computerized reporting system will not, as originally intended, contribute to increased nuclear safety, since the reports are based on the triggering event rather than on underlying causes and in-depth analysis. Managing safety by numbers (incidents, error counts, safety threats, and safety-culture indicators) is very practical but has its limitations: error counts uphold an illusion of rationality and control, yet may offer neither real insight nor productive routes to progress on safety. Why, then, have CAP, error counts, and computerized reporting systems had such an impact in the nuclear industry? They rest, after all, on too weak a foundation. The answer is that the scientific perspective on knowledge is the dominant one. What people fail to see is that excessive use of computerized systems and increased formalization actually creates new risks, as people lose their skills and their ability to reflect, and place more trust in the system than in themselves.
|
2 |
Determining Organisational Readiness for the Future-Fit for Business Benchmark. Abela, Paul; Roquet, Omar; Zeaiter, Ali Armand. January 2016 (has links)
No description available.
|
3 |
Towards Scalable Performance Analysis of MPI Parallel Applications. Aguilar, Xavier. January 2015 (has links)
A considerable fraction of scientific discovery nowadays relies on computer simulations. High Performance Computing (HPC) provides scientists with the means to simulate processes ranging from climate modeling to protein folding. However, achieving good application performance and making optimal use of HPC resources is a heroic task due to the complexity of parallel software. Therefore, performance tools and runtime systems that help users execute applications in the most optimal way are of utmost importance in the HPC landscape. In this thesis, we explore different techniques to tackle the challenges of collecting, storing, and using fine-grained performance data. First, we investigate the automatic use of real-time performance data in order to run applications in an optimal way. To that end, we present a prototype of an adaptive task-based runtime system that uses real-time performance data for task scheduling. This runtime system has a performance monitoring component that provides real-time access to the performance behavior of an application while it runs. The implementation of this monitoring component is presented and evaluated within this thesis. Second, we explore lossless compression approaches for MPI monitoring. One of the main problems performance tools face is the huge amount of fine-grained data that can be generated from an instrumented application. Collecting fine-grained data from a program is the best way to uncover the root causes of performance bottlenecks, but it is infeasible for extremely parallel applications or applications with long execution times. On the other hand, collecting coarse-grained data is scalable but sometimes insufficient to discern the root cause of a performance problem. Thus, we propose a new method for performance monitoring of MPI programs using event flow graphs. Event flow graphs incur very low overhead in execution time and storage size, and can be used to reconstruct fine-grained trace files of application events ordered in time. / QC 20150508
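The compression idea behind event flow graphs can be illustrated with a minimal sketch (Python, all names invented; not the thesis implementation): nodes are event types and weighted edges count observed transitions. Because a loop-heavy MPI trace revisits the same transitions many times, the graph stays small while the time-ordered event sequence can still be reconstructed, at least for traces whose transition order is unambiguous.

```python
from collections import defaultdict

class EventFlowGraph:
    """Toy event flow graph: nodes are event types, edges count
    observed transitions between consecutive events."""

    def __init__(self):
        self.edges = defaultdict(int)  # (src, dst) -> transition count
        self.first = None              # first event ever recorded
        self.prev = None               # last event recorded

    def record(self, event):
        """Fold one event into the graph instead of appending it to a trace."""
        if self.first is None:
            self.first = event
        else:
            self.edges[(self.prev, event)] += 1
        self.prev = event

    def replay(self):
        """Walk the graph from the first event, consuming transition
        counts, to rebuild the event sequence. Unique only when each
        step has a single remaining outgoing edge (e.g. simple loops)."""
        remaining = dict(self.edges)
        seq, cur = [self.first], self.first
        while True:
            nxt = next((d for (s, d), c in remaining.items()
                        if s == cur and c > 0), None)
            if nxt is None:
                return seq
            remaining[(cur, nxt)] -= 1
            seq.append(nxt)
            cur = nxt

# A communication loop executed three times: six events, but only
# two distinct edges in the graph.
trace = ["MPI_Send", "MPI_Recv"] * 3
g = EventFlowGraph()
for e in trace:
    g.record(e)
```

Here six trace entries compress to two weighted edges, and the original order is still recoverable; real traces additionally carry timestamps and call-site data, which this sketch omits.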
|
4 |
Parallel implementation and application of particle scale heat transfer in the Discrete Element Method. Amritkar, Amit Ravindra. 25 July 2013
Dense fluid-particulate systems are widely encountered in the pharmaceutical, energy, environmental and chemical processing industries. Prediction of the heat transfer characteristics of these systems is challenging. Use of a high-fidelity Discrete Element Method (DEM) for particle-scale simulations coupled to Computational Fluid Dynamics (CFD) requires long simulation times and limits application to small particulate systems. The overall goal of this research is to develop and implement parallelization techniques that can be applied to large systems with O(10^5-10^6) particles to investigate particle-scale heat transfer in rotary kiln and fluidized bed environments.
The strongly coupled CFD and DEM calculations are parallelized using the OpenMP paradigm, which provides the flexibility needed for the multimodal parallelism encountered in fluid-particulate systems. The fluid calculation is parallelized using domain decomposition, whereas N-body decomposition is used for DEM. It is shown that OpenMP-CFD with the first-touch policy, appropriate thread affinity and careful tuning scales as well as MPI up to 256 processors on a shared-memory SGI Altix. To implement DEM in the OpenMP framework, ghost particle transfers between grid blocks, which consume a substantial amount of time in DEM, are eliminated by a suitable global mapping of the multi-block data structure. The global mapping, together with enforcing perfect particle load balance across OpenMP threads, results in computation 2-5 times faster than an equivalent MPI implementation.
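The load-balancing side of the global mapping can be caricatured in a few lines (Python, hypothetical helper; the actual implementation lives in compiled OpenMP code): in shared memory every thread can read any particle directly, so particles can be dealt out to threads purely by global index, giving each thread an equal share regardless of which spatial grid block a particle occupies and removing the need for ghost-particle copies between blocks.

```python
def balance_particles(num_particles, num_threads):
    """Assign particles to threads block-cyclically by global index.
    Each thread gets floor(n/t) or ceil(n/t) particles, independent of
    the particles' spatial distribution -- a toy analogue of the
    perfect per-thread load balance described in the abstract."""
    return {t: list(range(t, num_particles, num_threads))
            for t in range(num_threads)}

# 10 particles across 3 threads: shares of size 4, 3, 3.
shares = balance_particles(10, 3)
```

The contrast with an MPI version is that here no particle data ever has to be packed and sent: a thread simply indexes the shared global arrays for any neighbor particle it needs.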
Heat transfer studies are conducted in a rotary kiln as well as in a fluidized bed equipped with a single horizontal-tube heat exchanger. Two cases, one with mono-disperse 2 mm particles rotating at 20 RPM and another with a poly-disperse distribution ranging from 1-2.8 mm rotating at 1 RPM, are investigated. It is shown that heat transfer to the mono-disperse 2 mm particles is dominated by convective heat transfer from the thermal boundary layer that forms on the heated surface of the kiln. In the second case, during the first 24 seconds, heat transfer to the particles is dominated by conduction to the larger particles that settle at the bottom of the kiln. The results compare reasonably well with experiments. In the fluidized bed, the highly energetic transitional flow and thermal field in the vicinity of the tube surface, and the limits placed on the grid size by the volume-averaged nature of the governing equations, result in gross under-prediction of the heat transfer coefficient at the tube surface. It is shown that the inclusion of a subgrid stress model and the application of an LES wall function (WMLES) at the tube surface improves the prediction to within ± 20% of the experimental measurements. / Ph. D.
|
5 |
Detection and interpretation of weak signals. Wiik, Richard. January 2016 (has links)
Managing safety at a nuclear power plant means managing a complex system with demanding technology under time pressure, where the cost of failure is exceptionally high. Over the last few years, Swedish nuclear power plants have introduced Pre-job Briefing and other so-called Human Performance Tools to avert errors and strengthen control. Using the Systemic Resilience Model (SyRes), different views of safety are taken to understand the origin of the signals that lead to a Pre-job Briefing, and how such a signal is interpreted, re-interpreted, and presented. The study took place at a Swedish nuclear power plant and included four days of observations and 20 interviewees. The thematic analysis shows a similarity between the mentioned origins of Pre-job Briefings and the intended use of Pre-job Briefing. Characteristics of a High Reliability Organisation are shown in practice: a culture in which one person's wish to have a Pre-job Briefing is reason enough to hold one, sharp-end workers being used as a valuable resource for safety, and systematic support for screening jobs over time without being influenced by non-job-related factors. The signals acted upon matched the intended ones well, and personnel get several opportunities to evaluate the signals together, striving for the best possible circumstances. The Systemic Resilience Model was successfully applied together with a thematic analysis, which strengthens its validity as a holistic model combining different views of safety in one coherent model. SyRes also made it possible to present additional themes, leaving open the question of at what stage SyRes is optimally introduced into a thematic analysis.
|
6 |
Extending the Functionality of Score-P through Plugins: Interfaces and Use Cases. Schöne, Robert; Tschüter, Ronny; Ilsche, Thomas; Schuchart, Joseph; Hackenberg, Daniel; Nagel, Wolfgang E. 18 October 2017 (links) (PDF)
Performance measurement and runtime tuning tools are both vital in the HPC software ecosystem and use similar techniques: the analyzed application is interrupted at specific events and information on the current system state is gathered to be either recorded or used for tuning. One of the established performance measurement tools is Score-P. It supports numerous HPC platforms and parallel programming paradigms. To extend Score-P with support for different back-ends, create a common framework for measurement and tuning of HPC applications, and to enable the re-use of common software components such as implemented instrumentation techniques, this paper makes the following contributions: (I) We describe the Score-P metric plugin interface, which enables programmers to augment the event stream with metric data from supplementary data sources that are otherwise not accessible for Score-P. (II) We introduce the flexible Score-P substrate plugin interface that can be used for custom processing of the event stream according to the specific requirements of either measurement, analysis, or runtime tuning tasks. (III) We provide examples for both interfaces that extend Score-P’s functionality for monitoring and tuning purposes.
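As a loose, language-independent analogue of the two interfaces described above (Python, all names invented; not the real Score-P C plugin API): a metric plugin supplies extra samples that augment each event in the stream, while a substrate plugin consumes the enriched events for recording, analysis, or tuning.

```python
import itertools

class MetricPlugin:
    """Supplies metric samples from a source the core cannot see itself."""
    def sample(self):
        raise NotImplementedError

class Measurement:
    """Toy measurement core: on every event it queries all registered
    metric plugins and hands the enriched event to every substrate."""
    def __init__(self):
        self.metrics = {}     # metric name -> MetricPlugin
        self.substrates = []  # callables consuming enriched events

    def register_metric(self, name, plugin):
        self.metrics[name] = plugin

    def register_substrate(self, consumer):
        self.substrates.append(consumer)

    def on_event(self, event):
        enriched = dict(event)  # don't mutate the caller's event
        for name, plugin in self.metrics.items():
            enriched[name] = plugin.sample()
        for consumer in self.substrates:
            consumer(enriched)

class CycleCounter(MetricPlugin):
    """Fake external counter, standing in for an out-of-band data source."""
    def __init__(self):
        self._c = itertools.count(100)
    def sample(self):
        return next(self._c)

core = Measurement()
core.register_metric("fake_cycles", CycleCounter())
recorded = []                             # trivial "trace writer" substrate
core.register_substrate(recorded.append)
core.on_event({"name": "enter", "region": "solve"})
core.on_event({"name": "exit", "region": "solve"})
```

The design point this mirrors is the decoupling the paper argues for: the core owns instrumentation and event dispatch, while both data producers (metrics) and data consumers (substrates) are swappable behind small interfaces.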
|