51

Análise de confiabilidade de peças de madeira fletidas dimensionadas segundo a NBR 7190/97 / Reliability analysis of bending timber members designed according NBR 7190/97

Daniel Veiga Adolfs 21 November 2011 (has links)
The transition from the allowable-stress method to the limit-states method in NBR 7190 - Projeto de Estruturas de Madeira, which took place in 1997, was carried out through a deterministic calibration centred on the strength of wood in compression parallel to the grain. One of the aspects modified was the design of members in bending, specifically the verification of the normal stresses due to the bending moment, for which the compression strength parallel to the grain is used. To assess the safety level of the design model for this case, reliability analyses were performed for timber beams in bending. Failure data were collected from 549 bending tests on beams, together with statistical information, such as the mean and standard deviation of the compression strength parallel to the grain, for the same species used in the beams. The reliability analyses used 5 random variables with 5 different types of combinations, analysed with and without model error for each of the 16 groups of results collected, totalling 2752 reliability analyses. The analyses without the model error show that the standard does not reach sufficient values of the reliability index; with the introduction of the model error, the results are more adequate. It was also found that the model adopted by the standard is very conservative in the case of graded Pinus sp. members.
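As an illustration of the kind of calculation described above, the following sketch estimates a failure probability and the corresponding reliability index β for a bending limit state by Monte Carlo simulation. The distributions, parameter values and the lognormal model-error term are illustrative assumptions, not the thesis' calibrated data.

```python
# Minimal Monte Carlo sketch of a bending reliability index (assumed values).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000

# Assumed random variables (MPa for strength, kN*m for moment, m^3 for W)
f_m   = rng.lognormal(mean=np.log(48.0), sigma=0.25, size=n)  # bending strength
M     = rng.normal(loc=12.0, scale=2.4, size=n)               # bending moment
W     = 6.0e-4                                                # section modulus (deterministic)
theta = rng.lognormal(mean=0.0, sigma=0.10, size=n)           # assumed model-error term

# Limit state: g > 0 means the beam survives
g = theta * f_m - (M * 1e-3) / W      # convert kN*m to MN*m so the stress is in MPa
p_f = np.mean(g <= 0)
beta = -norm.ppf(p_f) if p_f > 0 else np.inf
print(f"Pf ≈ {p_f:.2e}, reliability index β ≈ {beta:.2f}")
```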
52

Wireless vital signs monitoring system for ubiquitous healthcare with practical tests and reliability analysis

Lee, Y.-D. (Young-Dong) 30 November 2010 (has links)
Abstract The main objective of this thesis project is to implement a wireless vital signs monitoring system for measuring the ECG of a patient in the home environment. The research focuses on two specific objectives: 1) the development of a distributed healthcare system for vital signs monitoring using wireless sensor network devices and 2) practical tests and a performance evaluation of the reliability of such low-rate wireless technology in ubiquitous health monitoring applications. The first section of the thesis describes the design and implementation of a ubiquitous healthcare system constructed from tiny components for the home healthcare of elderly persons. The system comprises a smart shirt with ECG electrodes and acceleration sensors, a wireless sensor network node, a base station and a server computer for the continuous monitoring of ECG signals. ECG data is a commonly used vital sign in clinical and trauma care. The ECG data is displayed on a graphical user interface (GUI) by transferring it to a PDA or a terminal PC. The smart shirt is a wearable T-shirt designed to collect ECG and acceleration signals from the human body in the course of daily life. In the second section, a performance evaluation of the reliability of IEEE 802.15.4 low-rate wireless ubiquitous health monitoring is presented. Three performance scenarios are examined through practical tests: 1) the effect of the distance between sensor nodes and the base station, 2) the number of sensor nodes deployed in the network and 3) data transmission at different time intervals. These factors were measured to analyse the reliability of the developed technology in low-rate wireless ubiquitous health monitoring applications. The results showed how the relationship between bit error rate (BER) and signal-to-noise ratio (SNR) was affected by varying the distance between sensor node and base station, by the number of sensor nodes deployed in the network and by the data transmission interval.
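The BER-versus-SNR relationship mentioned above can be illustrated with a small Monte Carlo sketch. A plain BPSK-over-AWGN link is used here as a stand-in for the actual IEEE 802.15.4 radio; the modulation and channel model are simplifying assumptions rather than the hardware tested in the thesis.

```python
# Minimal BER-vs-SNR sketch over an AWGN channel (assumed BPSK modulation).
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000

for snr_db in (0, 2, 4, 6, 8):
    snr = 10 ** (snr_db / 10)                       # Eb/N0 as a linear ratio
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                          # map {0, 1} -> {-1, +1}
    noise = rng.normal(scale=np.sqrt(1 / (2 * snr)), size=n_bits)
    received = symbols + noise
    ber = np.mean((received > 0).astype(int) != bits)
    print(f"SNR = {snr_db:2d} dB -> BER ≈ {ber:.4f}")
```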
53

Methods for estimating reliability of water treatment processes : an application to conventional and membrane technologies

Beauchamp, Nicolas 11 1900 (has links)
Water supply systems aim, among other objectives, to protect public health by reducing the concentration of, and potentially eliminating, microorganisms pathogenic to human beings. Yet, because water supply systems are engineered systems facing variable conditions, such as raw water quality or treatment process performance, the quality of the drinking water produced also exhibits variability. The reliability of a treatment system is defined in this context as the probability of producing drinking water that complies with existing microbial quality standards. This thesis examines the concept of reliability for two physicochemical treatment technologies, conventional rapid granular filtration and ultrafiltration, used to remove the protozoan pathogen Cryptosporidium parvum from drinking water. First, fault tree analysis is used as a method of identifying technical hazards related to the operation of these two technologies and to propose ways of minimizing the probability of failure of the systems. This method is used to compile operators’ knowledge into a single logical diagram and allows the identification of important processes which require efficient monitoring and maintenance practices. Second, an existing quantitative microbial risk assessment model is extended to be used in a reliability analysis. The extended model is used to quantify the reliability of the ultrafiltration system, for which performance is based on full-scale operational data, and to compare it with the reliability of rapid granular filtration systems, for which performance is based on previously published data. This method allows for a sound comparison of the reliability of the two technologies. Several issues remain to be addressed regarding the approaches used to quantify the different input variables of the model. The approaches proposed herein can be applied to other water treatment technologies, to aid in prioritizing interventions to improve system reliability at the operational level, and to determine the data needs for further refinements of the estimates of important variables. / Applied Science, Faculty of / Civil Engineering, Department of / Graduate
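A minimal sketch of how a quantitative microbial risk assessment model can be turned into a reliability estimate in the sense used above, i.e. the probability that the treated-water risk stays below a target. The source-water concentration, log-removal performance, dose-response parameter and risk target are illustrative assumptions, not values from the thesis.

```python
# Minimal QMRA-style reliability sketch (all parameter values are assumed).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

conc_raw = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=n)   # oocysts/L in raw water
log_removal = rng.normal(loc=3.0, scale=0.5, size=n)            # filtration performance
conc_treated = conc_raw * 10.0 ** (-log_removal)

volume = 1.0          # litres ingested per day (assumed)
r = 0.004             # exponential dose-response parameter (assumed)
daily_risk = 1.0 - np.exp(-r * conc_treated * volume)

target = 1e-6         # assumed daily infection-risk target
reliability = np.mean(daily_risk <= target)
print(f"P(daily infection risk <= {target:g}) ≈ {reliability:.3f}")
```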
54

Formal models for safety analysis of a Data Center system / Modèles formels pour l’analyse de la sûreté de fonctionnement d’un Data center

Bennaceur, Mokhtar Walid 21 November 2019 (has links)
A Data Center (DC) is a building whose purpose is to host IT devices that provide different internet services. To ensure constant operation of these devices, energy is provided by the electrical system, and to keep them at a constant temperature, a cooling system is necessary. Each of these needs must be ensured continuously, because the failure of any one of them leads to unavailability of the whole DC system, which can be fatal for a company. To our knowledge, no safety and performance study takes into account the whole DC system with the different interactions between its sub-systems; the existing analyses are partial and focus on only one sub-system, sometimes two. The main objective of this thesis is to contribute to the safety analysis of a DC system. To achieve this purpose, we first study each DC sub-system (electrical, thermal and network) separately in order to define its characteristics. Each DC sub-system is a production system consisting of combinations of components that transform input supplies (energy for the electrical system, air flow for the thermal one, and packets for the network one) into outputs, which can be internet services. The existing safety analysis methods for these kinds of systems are inadequate, because the safety analysis must take into account not only the internal state of each component, but also the different production flows circulating between components. In this thesis, we consider a new modeling methodology called Production Trees (PT), which models the relationships between the components of a system with particular attention to the flows circulating between them. The PT modeling technique deals with one kind of flow at a time. Its application to the electrical sub-system is therefore suitable, because there is only one kind of flow (the electric current). However, when there are dependencies between sub-systems, as in the thermal and network sub-systems, different kinds of flows need to be taken into account, making the original PT technique inadequate. We therefore extend this technique to deal with dependencies between the different kinds of flows in the DC. As a result, it becomes straightforward to assess the different safety indicators of the global DC system, taking into account the interactions between its sub-systems, and we also derive some performance statistics. We validate the results of our approach by comparing them with those obtained by a simulation tool, based on queueing network theory, that we implemented. So far, Production Tree models have had no tool support. We therefore propose a solution method based on the Probability Distribution of Capacity (PDC) of the flows circulating in the DC system. We also implement the PT model in the AltaRica 3.0 modeling language and use its dedicated stochastic simulator to estimate the reliability indices of the system, which is important for comparing and validating the results obtained with our assessment method. In parallel, we develop a tool that implements the PT solution algorithm with an interactive graphical interface for creating, editing and analysing PT models. The tool also displays the results and generates AltaRica code, which can subsequently be analysed using the stochastic simulator of the AltaRica 3.0 tool.
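The Probability Distribution of Capacity idea can be sketched on a toy system: two redundant feeders in parallel supplying one distribution unit in series. The component capacities and availabilities below are illustrative assumptions, not the thesis' Data Center model.

```python
# Minimal PDC sketch: enumerate component states of a tiny electrical chain
# and accumulate the probability of each delivered-capacity level (assumed values).
from itertools import product
from collections import defaultdict

# (name, capacity when up, probability of being up)
feeders = [("feeder_A", 100.0, 0.98), ("feeder_B", 100.0, 0.98)]
pdu = ("pdu", 150.0, 0.995)

pdc = defaultdict(float)
for states in product([0, 1], repeat=3):            # states of feeder_A, feeder_B, pdu
    s_a, s_b, s_p = states
    prob = 1.0
    for s, (_, _, p_up) in zip(states, feeders + [pdu]):
        prob *= p_up if s else (1.0 - p_up)
    parallel_cap = s_a * feeders[0][1] + s_b * feeders[1][1]   # parallel feeders add up
    delivered = min(parallel_cap, s_p * pdu[1])                # the series element limits the flow
    pdc[delivered] += prob

for cap in sorted(pdc):
    print(f"capacity {cap:6.1f} kW : probability {pdc[cap]:.6f}")
```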
55

Development of an Educational Tool for Deterministic and Probabilistic Slope Stability Analysis

Thiago Fernandes Leao (8098877) 10 December 2019 (has links)
This research consists of the development of a new educational tool for 2D slope stability calculations, named PNW-SLOPE. Slope stability has been considered one of the most important topics in geotechnical engineering for many years, so it is a subject in which students should build a solid background at university. The program was created in Microsoft Excel with the aid of VBA (Visual Basic for Applications). The use of VBA allowed the creation of a good user interface, so users of the program can easily follow the instructions to create and analyze the model and check the results. Even though there are many commercial programs with the same application, this research presents a new alternative that is more focused on educational purposes. PNW-SLOPE is divided into several modules. The first consists of the geometry definition of the slope. The second module consists of a deterministic slope stability analysis using the limit equilibrium method and the method of slices. The third module consists of a probabilistic analysis based on Monte Carlo simulation. With these two options, users can compare both analyses and understand how important probabilistic analysis is in geotechnical engineering. This is a pertinent topic nowadays, since reliability analysis is increasingly being incorporated in standards and design codes throughout the world. An additional module was created for rock slope stability problems in which failure results from sliding on a single planar surface dipping into the excavation. Several examples are presented to demonstrate some of the features of PNW-SLOPE, and the results are verified with commercial programs such as Geostudio Slope/w and Rocscience Slide 2018.
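A minimal sketch of the deterministic-versus-probabilistic comparison described above, using the ordinary (Fellenius) method of slices on a hard-coded slip surface and a Monte Carlo loop over soil strength. The slice geometry and soil statistics are illustrative assumptions, not the program's examples.

```python
# Minimal method-of-slices sketch: deterministic FS, then P(FS < 1) by Monte Carlo.
import numpy as np

rng = np.random.default_rng(7)

# Per-slice data for an assumed circular slip surface:
# weight (kN/m), base inclination (deg), base length (m)
W     = np.array([ 50., 120., 180., 200., 170., 110.,  45.])
alpha = np.radians([-8.,  2.,  12.,  22.,  32.,  42.,  52.])
l     = np.array([ 2.1,  2.0,  2.0,  2.1,  2.3,  2.6,  3.1])
driving = np.sum(W * np.sin(alpha))                  # sum of W*sin(alpha)

def factor_of_safety(c, phi_deg):
    """Ordinary (Fellenius) method of slices, no pore pressure."""
    phi = np.radians(phi_deg)
    return np.sum(c * l + W * np.cos(alpha) * np.tan(phi)) / driving

# Deterministic analysis at mean strength parameters
print(f"FS (deterministic) ≈ {factor_of_safety(c=5.0, phi_deg=20.0):.3f}")

# Probabilistic analysis: Monte Carlo over cohesion c' and friction angle phi'
n = 100_000
c_s   = rng.normal(5.0, 1.5, n).clip(min=0.1)                  # kPa
phi_s = np.radians(rng.normal(20.0, 3.0, n).clip(min=1.0))
resisting = (c_s[:, None] * l + W * np.cos(alpha) * np.tan(phi_s)[:, None]).sum(axis=1)
fs = resisting / driving
print(f"mean FS ≈ {fs.mean():.3f}, P(FS < 1) ≈ {np.mean(fs < 1.0):.3f}")
```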
56

Reliability Analysis of Low Earth Orbit Broadband Satellite Communication Constellations

Islam Aly Sadek Nazmy (9192482) 31 July 2020 (has links)
Large space-based communication networks have been growing in numbers of satellites, with plans to launch more than 10,000 satellites into Low Earth Orbit (LEO). While these constellations offer many advantages over ground-based communication systems, they pose a significant threat when they fail and generate space debris. Given the reliability of current satellites, engineers can use failure modeling to design satellite constellations that are more resilient to satellite failures. Several authors have analyzed the reliability of geostationary satellites, but few have extended the work to multiple-satellite systems.

To address this gap, we constructed a simulation model to show the performance of satellite constellations with different satellite reliability functions over time. The simulation model is broken down into four key parts: a satellite constellation model, a network model, a failure model, and a performance metric. We use a Walker star constellation, which is the most common constellation for LEO broadband satellite constellations. The network consists of satellite-to-satellite connections and satellite-to-groundstation connections and routes data using a shortest-path algorithm. The failure model views satellites as either operational or failed (no partial failures), accounts for the groundstation operator's knowledge (or lack thereof) of the satellites' operational status, and uses satellite reliability to estimate the expected data throughput of the system. We also created a performance metric that measures how well the entire network is operating and helps us compare candidate constellations.

We used the model to estimate performance for a range of satellite reliabilities and for groundstations with different numbers of communication dishes (effectively, satellite-ground links). Satellite reliability is a significant contributing factor to long-term constellation performance. Using the reliability of small-LEO satellites, we found that a constellation of 1,200 small-LEO satellites completely fails after less than 30 days, given that we do not consider partial failures. Satellite constellations with higher satellite reliability, such as large geostationary satellites, last less than 50 days. We expect the constellations in our model to perform worse than real satellite systems, since we are only modeling complete failures; however, these findings provide a useful worst-case baseline for designing sustainable satellite constellations. We also found that the number of groundstation-to-satellite communication links at each groundstation is not a significant factor beyond five communication links, meaning that adding more communication antennas to existing satellite groundstations would not improve constellation performance significantly.
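A minimal sketch of the failure-model ingredient described above: satellite lifetimes are drawn from an assumed reliability function and the simulation reports how long the constellation keeps a target fraction of satellites operational. The Weibull parameters and the 50 % threshold are illustrative assumptions, not the thesis' values.

```python
# Minimal constellation-failure sketch (assumed lifetime model and threshold).
import numpy as np

rng = np.random.default_rng(3)
n_sats = 1200          # satellites in the constellation
n_runs = 500           # Monte Carlo repetitions
threshold = 0.5        # constellation "fails" below this operational fraction

# Assumed Weibull lifetime model (time unit: days)
shape, scale = 0.8, 400.0

days_to_failure = []
for _ in range(n_runs):
    lifetimes = scale * rng.weibull(shape, size=n_sats)
    k = int(np.ceil(threshold * n_sats))          # satellites that must remain operational
    # the constellation fails at the (n_sats - k + 1)-th satellite failure
    days_to_failure.append(np.sort(lifetimes)[n_sats - k])

print(f"mean time to constellation failure ≈ {np.mean(days_to_failure):.1f} days")
```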
57

Psychometric properties of the Copenhagen Burnout Inventory in a South African context

Smit, Anna Maria 15 May 2012 (has links)
Burnout is a prevalent problem in South Africa, affecting individuals and organisations in various industries. The study of burnout in South Africa is important in order to solve the burnout problem. Valid and reliable measurement instruments are necessary to conduct studies on burnout. The Copenhagen Burnout Inventory was developed as a result of criticism against the most popular burnout measure, namely the Maslach Burnout Inventory. The Copenhagen Burnout Inventory measures burnout in terms of three factors, namely personal burnout, work-related burnout and client-related burnout. Although the Copenhagen Burnout Inventory is a unique tool for the measurement of burnout, very little attention has been paid to determining the psychometric properties of this instrument. The purpose of the study was to determine whether the Copenhagen Burnout Inventory can be used as a valid and reliable measure for burnout in South Africa. The research methodology followed a quantitative survey research approach. A non-probability snowball sample of 215 respondents completed the Copenhagen Burnout Inventory. Data obtained was used to conduct an exploratory factor analysis and internal reliability analysis. The study proved that the Copenhagen Burnout Inventory can be used in South Africa to measure two factors with high internal reliabilities, namely exhaustion (α = 0.935) and client-related burnout (α = 0.913). It is recommended that additional items based on withdrawal should be added to the work-related burnout scale of the Copenhagen Burnout Inventory. Such additional items might possibly lead to confirmation of the original three-factor model in a South African context. / Dissertation (MCom)--University of Pretoria, 2011. / Human Resource Management / unrestricted
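For reference, the internal-reliability figures quoted above are Cronbach's alpha values; a minimal sketch of the computation on fabricated responses (not the study's data) is shown below.

```python
# Minimal Cronbach's alpha sketch on simulated 1-5 item scores (fabricated data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(5)
latent = rng.normal(size=(215, 1))                    # common "burnout" factor
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(215, 6))), 1, 5)
print(f"alpha ≈ {cronbach_alpha(scores):.3f}")
```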
58

A Critical Review of the Observational Method

Spross, Johan January 2014 (has links)
Building a sustainable structure in soil or rock that satisfies all predefined technical requirements implies choosing a rational and effective construction method. An important aspect is how the performance of the structure is verified. For cases when the geotechnical behaviour is hard to predict, the existing design code for geotechnical structures, Eurocode 7, suggests the so-called "observational method" to verify that the performance is acceptable. The basic principle of the method is to accept predefined changes in the design during construction, in order to accommodate the actual ground conditions, if the current design is found unsuitable. Even though this in theory should ensure an effective design solution, formal application of the observational method is rare. It is therefore not clear which prerequisites and circumstances must be present for the observational method to be applicable and to be the more suitable method. This licentiate thesis gives a critical review of the observational method, based on, and therefore limited by, the outcome of the performed case studies. The aim is to identify and highlight the crucial aspects that make the observational method difficult to apply, thereby providing a basis for research towards a more applicable definition of the method. The main topics of discussion are (1) the apparent contradiction between the preference for advanced probabilistic calculation methods to solve complex design problems and sound, qualitative engineering judgement, (2) the limitations of measurement data in assessing the safety of a structure, (3) the fact that, currently, no safety margin is required for the completed structure when the observational method is applied, and (4) the rigidity of the current definition of the observational method and the implications of deviations from its principles. Based on the review, it is argued that the observational method can be improved by linking it to a probabilistic framework. To be applicable, the method should be supported by guidelines that explain and exemplify how to make the best use of it. Engineering judgement is, however, not lost; no matter how elaborate the probabilistic methods used, sound judgement is still needed to define the problem correctly. How to define such a probabilistic framework is an urgent topic for future research, because it also addresses the concerns regarding safety that are raised in the other topics of discussion.
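One way to picture the probabilistic framework argued for above is a simple Bayesian update: a prior estimate of a governing ground parameter is revised with a monitoring observation, and the probability of falling outside an acceptable limit is recomputed. The normal-normal model and all numbers are illustrative assumptions, not an example from the thesis.

```python
# Minimal Bayesian-updating sketch for an observational-method setting (assumed values).
import numpy as np
from scipy.stats import norm

# Prior on the governing ground parameter (arbitrary units), assumed normal
mu0, sd0 = 10.0, 2.0

# One monitoring observation during construction, with measurement noise
obs, sd_obs = 11.5, 1.0

# Conjugate normal-normal update
prec = 1 / sd0**2 + 1 / sd_obs**2
mu_post = (mu0 / sd0**2 + obs / sd_obs**2) / prec
sd_post = np.sqrt(1 / prec)

# "Failure" defined as the parameter falling below an assumed acceptable limit
limit = 6.0
pf_prior = norm.cdf(limit, mu0, sd0)
pf_post = norm.cdf(limit, mu_post, sd_post)
print(f"P(failure) prior ≈ {pf_prior:.3e}, after observation ≈ {pf_post:.3e}")
```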
59

Impact of Usability for Particle Accelerator Software Tools Analyzing Availability and Reliability

Motyka, Mikael January 2017 (has links)
The importance of considering usability when developing software is widely recognized in the literature. This non-functional system aspect focuses on the ease, effectiveness and efficiency of handling a system. However, usability cannot be defined as a single fixed system aspect, since it depends on the field of application. In this work, the impact of usability for accelerator tools targeting availability and reliability analysis is investigated by further developing the already existing software tool Availsim. The tool, although proven to be unique in accounting for special accelerator complexities that cannot be modelled with commercial software, is not used across facilities due to constraints caused by previous modifications. The study was conducted in collaboration with the European Spallation Source ERIC, a multidisciplinary research centre based on the world's most powerful neutron source, currently being built in Lund, Sweden. The work was conducted in the safety group within the accelerator division, where the availability and reliability studies were performed. Design Science Research was used as the research methodology to answer how the proposed tool can help improve usability in this analysis domain and to identify existing usability issues in the field. To obtain an overview of the current field, three questionnaires were sent out and one interview was conducted, listing important properties to consider for the tool to be developed, along with how usability is perceived in the accelerator analysis field. The developed software tool was evaluated with the After Scenario Questionnaire and the System Usability Scale, two standardized ways of measuring usability, along with custom-made statements explicitly targeting important attributes identified when questioning the researchers. The results highlighted issues in the current field, listing the multiple tools used for the analysis along with their positive and negative aspects, and indicating a lengthy and tedious process for obtaining the required analysis results. It was also found that the adapted Availsim version improves on the usability of the previous versions, with specific attributes identified as correlating with the improved usability, fulfilling the purpose of the study. However, the results indicate that existing commercial tools obtained higher scores on the standardized usability tests than the new Availsim version, pointing towards room for improvement.
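For reference, the System Usability Scale score mentioned above is computed from ten 1-5 item responses as in the sketch below; the example response vector is fabricated, not data from the study.

```python
# Minimal SUS scoring sketch (standard Brooke scoring; fabricated responses).
def sus_score(responses):
    """responses: list of ten answers on a 1-5 scale, items alternating
    positively (odd) and negatively (even) worded."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)   # odd items: r-1, even items: 5-r
    return total * 2.5                                # scale 0-40 up to 0-100

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))      # -> 80.0
```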
60

Efficient Uncertainty quantification with high dimensionality

Jianhua Yin (12456819) 25 April 2022 (has links)
Uncertainty exists everywhere in scientific and engineering applications. To avoid potential risk, it is critical to understand the impact of uncertainty on a system by performing uncertainty quantification (UQ) and reliability analysis (RA). However, the computational cost may be unaffordable with current UQ methods when the input is high dimensional. Moreover, current UQ methods are not applicable when numerical data and image data coexist.

To decrease the computational cost to an affordable level and to enable UQ with special high-dimensional data (e.g. images), this dissertation develops three UQ methodologies for high-dimensional input. The first two methods focus on high-dimensional numerical input. The core strategy of Methodology 1 is to fix the unimportant variables at their first-step most probable point (MPP) so that the dimensionality is reduced; an accurate RA method is then used in the reduced space, and the final reliability is obtained by accounting for the contributions of both important and unimportant variables. Methodology 2 addresses the issue that the dimensionality cannot be reduced when most of the variables are important or when the variables contribute almost equally to the system. It develops an efficient surrogate modeling method for high-dimensional UQ using Generalized Sliced Inverse Regression (GSIR), Gaussian Process (GP)-based active learning, and importance sampling. A cost-efficient GP model is built in the latent space after dimension reduction by GSIR, and the failure boundary is identified through active learning that adds optimal training points iteratively. In Methodology 3, a Convolutional Neural Network (CNN)-based surrogate model (CNN-GP) is constructed for dealing with mixed numerical and image data. The numerical data are first converted into images, and the converted images are merged with the existing image data. The merged images are fed to the CNN for training. The latent variables of the CNN model are then used to integrate the CNN with a GP, so that the model error can be quantified as epistemic uncertainty. Both epistemic and aleatory uncertainty are considered in uncertainty propagation.

The simulation results indicate that the first two methodologies not only improve efficiency but also maintain adequate accuracy for problems with high-dimensional numerical input. GSIR with active learning can handle situations in which the dimensionality cannot be reduced because most of the variables are important or contribute almost equally. The two methodologies can be combined as a two-stage dimension reduction for high-dimensional numerical input. The third method, CNN-GP, is capable of dealing with special high-dimensional input, mixed numerical and image data, with satisfactory regression accuracy, and provides an estimate of the model error. Uncertainty propagation that considers both epistemic and aleatory uncertainty provides better accuracy. The proposed methods could potentially be applied to engineering design and decision making.
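A minimal sketch of the GP-based active-learning loop described for Methodology 2 (in the spirit of AK-MCS-type methods), applied to a cheap analytical limit state instead of the dissertation's high-dimensional problems. The limit state, sample sizes, learning function and fixed budget are illustrative assumptions.

```python
# Minimal GP active-learning sketch for a failure probability (assumed toy limit state).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(11)

def limit_state(x):                     # g <= 0 means failure
    return 3.0 - x[:, 0] ** 2 - x[:, 1]

pool = rng.normal(size=(20_000, 2))     # Monte Carlo population of inputs
X = rng.normal(size=(12, 2))            # small initial design of experiments
y = limit_state(X)

for _ in range(30):                     # fixed budget instead of a convergence test
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X, y)
    mu, sd = gp.predict(pool, return_std=True)
    u = np.abs(mu) / np.maximum(sd, 1e-12)     # learning function: low U = uncertain sign
    best = np.argmin(u)                        # add the most informative point
    X = np.vstack([X, pool[best]])
    y = np.append(y, limit_state(pool[best:best + 1]))

gp.fit(X, y)                            # final surrogate on all evaluated points
mu = gp.predict(pool)
print(f"estimated Pf ≈ {np.mean(mu <= 0):.4f}")
```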
