21

Applying the cognitive reliability and error analysis method to reduce catheter associated urinary tract infections

Griebel, MaryLynn January 1900
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Malgorzata Rys / Catheter-associated urinary tract infections (CAUTIs) are a source of concern in the healthcare industry because they occur more frequently than other healthcare-associated infections and CAUTI rates have not improved in recent years. The use of urinary catheters is common among patients; between 15 and 25 percent of all hospital patients will use a urinary catheter at some point during their hospitalization (CDC, 2016). The prevalence of urinary catheters in hospitalized patients and high CAUTI occurrence rates led to the application of human factors engineering to develop a tool to help hospitals reduce CAUTI rates. Human reliability analysis techniques are methods used by human factors engineers to quantify the probability of human error in a system. A human error during a catheter insertion can introduce bacteria into the patient’s system and cause a CAUTI; therefore, human reliability analysis techniques can be applied to catheter insertions to determine the likelihood of a human error. A comparison of three human reliability analysis techniques led to the selection of the Cognitive Reliability and Error Analysis Method (CREAM). To predict a patient’s probability of developing a CAUTI, the human error probability found from CREAM is combined with several health factors that affect the patient’s risk of developing CAUTI. These health factors include gender, catheterization duration, diabetes, and a patient’s use of antibiotics, and were combined with the probability of human error using fuzzy logic. Membership functions were developed for each of the health factors and for the probability of human error, and the centroid defuzzification method is used to find a crisp value for the probability of a patient developing CAUTI. Hospitals that implement this tool can choose risk levels for CAUTI that place the patient into one of three zones: green, yellow, or red. The placement into the zones depends on the probability of developing a CAUTI. The tool also provides specific best-practice interventions for each of the zones.
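The centroid defuzzification step described in this abstract can be illustrated with a short sketch. The membership functions, rule firing strengths, and risk-zone shapes below are invented for illustration and are not the ones developed in the thesis.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Universe of discourse for the output: probability of developing a CAUTI.
x = np.linspace(0.0, 1.0, 1001)

# Hypothetical output fuzzy sets for the three risk zones (green, yellow, red).
green = tri(x, -0.4, 0.0, 0.4)
yellow = tri(x, 0.2, 0.5, 0.8)
red = tri(x, 0.6, 1.0, 1.4)

# Hypothetical rule firing strengths, e.g. obtained after evaluating membership of the
# CREAM human-error probability and the patient's health factors (gender,
# catheterization duration, diabetes, antibiotic use).
w_green, w_yellow, w_red = 0.2, 0.7, 0.3

# Mamdani-style aggregation: clip each output set by its rule strength, then take the max.
aggregated = np.maximum.reduce([
    np.minimum(green, w_green),
    np.minimum(yellow, w_yellow),
    np.minimum(red, w_red),
])

# Centroid defuzzification: a single crisp probability of developing a CAUTI.
p_cauti = np.trapz(aggregated * x, x) / np.trapz(aggregated, x)
print(f"Crisp CAUTI probability: {p_cauti:.3f}")
```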
22

The development of a human factors tool for the successful implementation of industrial human-robot collaboration

Charalambous, George January 2014
Manufacturing organisations have paid significant attention to the potential of industrial human-robot collaboration (HRC) as a means of enhancing productivity and product quality. This concept has predominantly been viewed from an engineering and safety perspective, while the human-related issues tend to be disregarded. As the key human factors relevant to industrial HRC have not yet been fully investigated, the research presented in this thesis sought to develop a human factors tool to enable the successful implementation of industrial HRC. First, a theoretical framework was developed which collected the key organisational and individual level human factors by reviewing contexts comparable to HRC. The human factors at each level were investigated separately. To identify whether the organisational human factors outlined in the theoretical framework were enablers or barriers, an industrial exploratory case study was conducted where traditional manual work was being automated. The implications provided an initial roadmap of the key organisational human factors that need to be considered, as well as the critical inter-relations between them. From the list of individual level human factors identified in the theoretical framework, the focus was placed on exploring the development of trust between human workers and industrial robots. A psychometric scale that measures trust specifically in industrial HRC was developed. The scale offers system designers the opportunity to identify the key system aspects that can be manipulated to optimise trust in industrial HRC. Finally, the results were gathered together to address the overall aim of the research. A human factors guidance tool was developed which provides practitioners with propositions to enable the successful implementation of industrial HRC.
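When a psychometric scale such as the trust scale mentioned above is developed, its internal consistency is commonly checked with Cronbach's alpha. The sketch below uses invented Likert-style responses and an invented number of items; it illustrates the standard calculation only, not the thesis's actual scale or data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented Likert responses: rows are participants, columns are scale items (1-5).
true_trust = rng.normal(0.0, 1.0, size=(40, 1))
responses = np.clip(np.rint(3 + true_trust + rng.normal(0.0, 0.7, size=(40, 10))), 1, 5)

# Cronbach's alpha: compares the sum of item variances with the variance of the total score.
k = responses.shape[1]
item_var_sum = responses.var(axis=0, ddof=1).sum()
total_var = responses.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var_sum / total_var)

# A participant's trust score is then typically the mean (or sum) of the item responses.
trust_scores = responses.mean(axis=1)
print(f"Cronbach's alpha = {alpha:.2f}")
```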
23

Análise da confiabilidade humana na evacuação de emergência de uma aeronave. / Human reliability analysis in the emergency evacuation from aircraft.

Bayma, Alaide Aparecida de Camargo 27 February 2019
Grandes avanços têm sido alcançados com as técnicas de análise de segurança dos sistemas essenciais de navegação e performance das aeronaves resultando na diminuição das taxas de acidentes ao longo dos últimos anos. O Relatório de Segurança de 2017 da EASA (European Agency Safety Aviation) apresenta um relevante aumento do número de acidentes não fatais. Este resultado positivo leva ao aumento das evacuações de emergência. O Relatório de Segurança de 2016 da IATA (International Air Transport Association) mostra que em 35% dos acidentes com sobreviventes em Jatos e 55% dos acidentes com sobreviventes em turbo hélice ocorreram com evacuação de emergência. Diante deste cenário, a confiabilidade humana torna-se relevante na interface destes passageiros com o projeto de segurança da cabine durante o procedimento de evacuação de emergência. Para avaliar as características e a contribuição desta interface no sucesso do procedimento de evacuação, é proposta uma metodologia para a análise da interação humana com este sistema estabelecendo um diagrama causal genérico com o objetivo de estudar o mecanismo do erro humano nesta interface. A metodologia proposta utiliza a abordagem das Redes Bayesianas apoiada pela lógica Fuzzy para modelar os Fatores de Desempenho Humano e para verificar, através da diagnose e inferência causal, quais fatores mais influenciam o desempenho humano na execução das tarefas neste ambiente de emergência. Esta pesquisa apresenta uma aplicação da metodologia proposta para analisar as tarefas do ensaio de evacuação de emergência de uma aeronave, focando na quantificação do erro humano na interface com o projeto de segurança da cabine da aeronave. Os resultados da aplicação identificaram o fator situacional: cartão de segurança, marcas na asa e escorregadores, e os fatores individuais: conhecimento e habilidades: interpretação e percepção como aqueles que mais influenciaram no teste do procedimento de evacuação de emergência de uma aeronave. / Great advances have been achieved with the safety assessment techniques for essential aircraft navigation and performance systems, resulting in decreasing accident rates over recent years. The EASA (European Aviation Safety Agency) Annual Safety Report 2017 reports a relevant increase in the number of non-fatal accidents. This positive result leads to an increase in emergency evacuations. The IATA (International Air Transport Association) Safety Report 2016 shows that 35% of survivable jet accidents and 55% of survivable turboprop accidents involved an emergency evacuation. In view of this scenario, human reliability becomes relevant at the interface between these passengers and the cabin safety design during the emergency evacuation procedure. To evaluate the features of this interface and its contribution to the success of the evacuation procedure, a methodology is proposed for analyzing human interaction with this system, establishing a generic causal diagram in order to study the human error mechanism at this interface. The proposed methodology uses the Bayesian Networks approach supported by fuzzy logic to model Human Performance Factors and to verify, through diagnosis and causal inference, which factors most influence human performance in the execution of tasks in this emergency environment. This research presents an application of the proposed methodology to analyze the tasks of an aircraft emergency evacuation test, focusing on the quantification of human error at the interface with the aircraft cabin safety design. The results of the application identified the situational factor (safety card, wing markings, and escape slides) and the individual factors (knowledge and abilities: interpretation and perception) as those that most influenced the aircraft emergency evacuation test procedure.
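A minimal sketch of the kind of Bayesian-network reasoning described in this abstract, hand-rolled here by full enumeration over four binary nodes. The structure (safety card quality influencing interpretation and perception, which in turn influence a task error) and all probability values are illustrative assumptions, not the network or data from the thesis.

```python
import itertools

# Hypothetical conditional probabilities (all values invented for illustration).
p_card_poor = 0.3                                   # P(safety card hard to use)
p_interp_poor = {True: 0.6, False: 0.2}             # P(poor interpretation | card poor?)
p_percep_poor = {True: 0.5, False: 0.25}            # P(poor perception | card poor?)
p_error = {                                         # P(error | interpretation, perception)
    (True, True): 0.40, (True, False): 0.20,
    (False, True): 0.15, (False, False): 0.02,
}

def joint(card, interp, percep, error):
    """Joint probability of one full assignment of the four binary nodes."""
    p = p_card_poor if card else 1 - p_card_poor
    p *= p_interp_poor[card] if interp else 1 - p_interp_poor[card]
    p *= p_percep_poor[card] if percep else 1 - p_percep_poor[card]
    p *= p_error[(interp, percep)] if error else 1 - p_error[(interp, percep)]
    return p

states = list(itertools.product([True, False], repeat=4))

# Causal inference: marginal probability of a human error in the evacuation task.
p_err = sum(joint(*s) for s in states if s[3])

# Diagnosis: probability the safety card was poor, given that an error occurred.
p_card_given_err = sum(joint(*s) for s in states if s[0] and s[3]) / p_err

print(f"P(error) = {p_err:.3f}")
print(f"P(poor safety card | error) = {p_card_given_err:.3f}")
```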
24

Improving the representation of the fragility of coastal structures

Jane, Robert January 2018
Robust Flood Risk Analysis (FRA) is essential for effective flood risk management. The performance of any flood defence assets will heavily influence the estimate of an area’s flood risk. It is therefore critical that the probability of a coastal flood defence asset incurring a structural failure when subjected to a particular loading, i.e. its fragility, is accurately quantified. The fragility representations of coastal defence assets presently adopted in UK National FRA (NaFRA) suffer three pertinent limitations. Firstly, assumptions relating to the modelling of the dependence structure of the variables that comprise the hydraulic load, including the water level, wave height and period, are restricted to a single loading variable. Consequently, due to the "system wide" nature of the analysis, a defence’s conditional failure probability must also be expressed in terms of a single loading in the form of a fragility curve. For coastal defences the single loading is the overtopping discharge, an amalgamation of these basic loading variables. The prevalence of other failure initiation mechanisms may vary considerably for combinations of the basic loadings which give rise to equal overtopping discharges. Hence the univariate nature of the existing representations potentially restricts their ability to accurately assess an asset’s structural vulnerability. Secondly, they only consider failure at least partially initiated through overtopping and thus neglect other pertinent initiation mechanisms acting in its absence. Thirdly, fragility representations have been derived for 61 generic assets (idealised forms of the defences found around the UK coast) each in five possible states of repair. The fragility representation associated with the generic asset and its state of repair deemed to most closely resemble a particular defence is adopted to describe its fragility. Any disparity between the parameters which influence the defence’s structural vulnerability in the generic form of the asset and those observed in the field is also likely to further reduce the robustness of the existing fragility representations. In NaFRA coastal flood defence assets are broadly classified as vertical walls, beaches and embankments. The latter are typically found in sheltered locations where failure is water level driven and hence expressing failure probability conditionally on overtopping is admissible. Therefore new fragility representations for vertical wall and gravel beach assets which address the limitations of those presently adopted in NaFRA are derived. To achieve this aim it was necessary to propose new procedures for extracting information on the site and structural parameters characterising a defence’s structural vulnerability from relevant resources (predominantly beach profiles). In addition, novel statistical approaches were put forward for capturing the uncertainties in the parameters on the basis of the site specific data obtained after implementation of the aforementioned procedures. A preliminary validation demonstrated the apparent reliability of these approaches. The pertinent initiation mechanisms behind the structural failure of each asset type were then identified before the state-of-the-art models for predicting the prevalence of these mechanisms during an event were evaluated. The Obhrai et al.
(2008) re-formulation of the Bradbury (2000) barrier inertia model, which encapsulates all of the initiating mechanisms behind the structural failure of a beach, was reasoned as a more appropriate model for predicting the breach of a beach than that adopted in NaFRA. Failure initiated exclusively at the toe of a seawall was explicitly accounted for in the new formulations of the fragility representations using the predictors for sand and shingle beaches derived by Sutherland et al. (2007) and Powell & Lowe (1994). In order to assess whether the new formulations warrant a place in future FRAs they were derived for the relevant assets in Lyme Bay (UK). The inclusion of site specific information in the derivation of fragility representations resulted in a several orders of magnitude change in the Annual Failure Probabilities (AFPs) of the vertical wall assets. The assets deemed most vulnerable were amongst those assigned the lowest AFPs in the existing analysis. The site specific data indicated that the crest elevations assumed in NaFRA are reliable. Hence it appears the more accurate specification of asset geometry and in particular the inclusion of the beach elevation in the immediate vicinity of the structure in the overtopping calculation is responsible for the changes. The AFP was zero for many of the walls (≈ 77%) indicating other mechanism(s) occurring in the absence of any overtopping are likely to be responsible for failure. Toe scour was found to be the dominant failure mechanism at all of the assets at which it was considered a plausible cause of breach. Increases of at least an order of magnitude upon the AFP after the inclusion of site specific information in the fragility representations were observed at ≈ 86% of the walls. The AFPs assigned by the new site specific multivariate fragility representations to the beach assets were positively correlated with those prescribed by the existing representations. However, once the new representations were adopted there was substantially more variability in AFPs of the beach assets which had previously been deemed to be in identical states of repair. As part of the work, the new and existing fragility representations were validated at assets which had experienced failure or near-failure in the recent past, using the hydraulic loading conditions recorded during the event. No appraisal of the reliability of the new representations for beaches was possible due to an absence of any such events within Lyme Bay. Their AFPs suggest that armed with more information about an asset's geometry the new formulations are able to provide a more robust description of a beach's structural vulnerability. The results of the validation as well as the magnitude of the AFPs assigned by the new representations on the basis of field data suggest that the newly proposed representations provide the more realistic description of the structural vulnerability of seawalls. Any final conclusions regarding the robustness of the representations must be deferred until more failure data becomes available. The trade-off for the potentially more robust description of an asset's structural vulnerability was a substantial increase in the time required for the newly derived fragility representations to compute the failure probability associated with a hydraulic loading event. To combat this increase, (multivariate) generic versions of the new representations were derived using the structural specific data from the assets within Lyme Bay. 
Although there was generally good agreement in the failure probabilities assigned to the individual hydraulic loading events by the new generic representations there was evidence of systematic error. This error has the potential to bias flood risk estimates and thus requires investigation before the new generic representations are included in future FRAs. Given the disparity in the estimated structural vulnerability of the assets according to the existing fragility curves and the site-specific multivariate representations the new generic representations are likely to be more reliable than the existing fragility curves.
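The abstract above revolves around fragility curves and annual failure probabilities (AFPs). The sketch below shows the standard univariate construction that the thesis critiques and extends: a conditional failure probability expressed against a single loading (here an overtopping discharge) integrated against an annual-maximum loading distribution. The distributions and parameter values are assumptions made for illustration only.

```python
import numpy as np
from scipy import stats

# Illustrative univariate fragility curve for a seawall: P(failure | load q) as a
# lognormal CDF of the overtopping discharge, plus an annual-maximum loading
# distribution. Parameter values are invented for the sketch.
fragility = stats.lognorm(s=0.5, scale=20.0)       # conditional failure probability
annual_load = stats.gumbel_r(loc=5.0, scale=3.0)   # annual-maximum overtopping discharge

# Annual failure probability: integrate P(failure | q) against the loading density.
q = np.linspace(0.0, 100.0, 2001)
afp = np.trapz(fragility.cdf(q) * annual_load.pdf(q), q)
print(f"Annual failure probability ≈ {afp:.4f}")
```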
25

Analysis and development of a live load model for Brazilian concrete bridges based on WIM data. / Análise e desenvolvimento de um modelo de carga móvel para pontes brasileiras usando dados de pesagem em movimento.

Portela, Enson de Lima 03 August 2018
This thesis presents an approach to evaluate and develop a live load model. Although the main concern of this work is the impact of truck traffic on bridges, the data presented here can be used in many engineering fields concerned with truck geometry and weight characteristics. Data from two different WIM stations were considered. One is on the Fernão Dias highway in the State of São Paulo, a road with two adjacent same-direction lanes, and its sample covers 20 months (September 2015 to August 2017). The second station, in the State of Rio Grande do Sul, is on a road with three adjacent same-direction lanes, and its sample covers 78 days (March 2014 to June 2014). In order to evaluate and develop a new live load model, an approach to compute load effects in terms of bending moments and shear forces is proposed. It makes use of single and multiple truck presence to evaluate the live load effects for different bridge spans. Three cases of multiple presence are considered: following, side-by-side and staggered. The proposed approach to evaluate the multiple truck presence effects is compared with the approach used by AASHTO LRFD. The approach for estimating the bias factors shows that considering only fully correlated trucks is too conservative, mostly for short spans where occurrences, especially following events, are scarce. On the other hand, assuming no correlation at all yields very low bias factors. Finally, a more rational live load model was developed based on the WIM data. Another purpose of this thesis is to use existing Brazilian bridges to calibrate the live load model described in NBR7188:2013. Reliability analysis is performed for sixty existing Brazilian bridges taken from different states of Brazil. Of the sixty bridges, 39 are prestressed and 21 are reinforced concrete bridges. They are located in five states: Pernambuco, Ceará, Bahia, São Paulo and Minas Gerais. The probability of failure was estimated in terms of moment and shear for interior girders and box girders. Only the ultimate limit state was considered. It was found that reliability indices are higher in prestressed bridges than in reinforced concrete bridges. Also, the reliability indices tend to decrease as the span length increases, meaning that the probability of failure is higher for longer spans than for shorter ones. / Este trabalho apresenta uma abordagem para avaliação e desenvolvimento de modelo de carga móvel. Embora o principal objetivo desta tese seja averiguar o impacto de caminhões nas pontes, os dados apresentados aqui podem ser usados em qualquer aplicação de engenharia que dependa das características do tráfego de caminhão. Dados de duas estações WIM foram utilizados. Uma estação fica na Autoestrada Fernão Dias no Estado de São Paulo e possui 20 meses (Setembro de 2015 a Agosto de 2017) de dados coletados. A outra estação fica no estado do Rio Grande do Sul. Esta amostra tem 78 dias e foi coletada de Março de 2014 a Junho de 2014. Com o objetivo de avaliar e desenvolver um novo modelo de carga móvel, uma abordagem para estimar o efeito de carga em termos de momento fletor e esforço cortante é proposta. Este método faz uso de estatísticas de caminhões em múltiplas presenças e isolados. Três casos de múltiplas presenças são considerados: "Following", "Side-by-side" e "Staggered". A abordagem proposta é comparada com o método usado pela AASHTO LRFD. A abordagem para estimar os "bias factors" mostra que considerar apenas caminhões totalmente correlacionados é muito conservador, principalmente para períodos curtos onde há uma falta de ocorrências, especialmente para eventos "Following". Por outro lado, não considerar a correlação de peso dos caminhões resulta em valores muito baixos de "bias factors". Por fim, um modelo de carga móvel mais racional foi desenvolvido com base nos dados WIM. Outro objetivo desta tese foi usar pontes brasileiras existentes para calibrar o modelo de carga móvel descrito na NBR7188:2013. Análises de confiabilidade foram realizadas em uma amostra de sessenta pontes brasileiras, sendo que destas 39 são protendidas e 21 armadas. Elas estão localizadas em cinco diferentes estados: Pernambuco, Ceará, Bahia, São Paulo e Minas Gerais. As probabilidades de falha foram estimadas em termos de momento fletor e cisalhamento para vigas internas e vigas caixão. Apenas o estado limite último foi considerado. Verificou-se que os índices de confiabilidade são maiores nas pontes protendidas quando comparadas às pontes armadas. Além disso, os índices de confiabilidade tendem a diminuir à medida que o comprimento do vão aumenta.
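The abstract above reports results in terms of reliability indices and probabilities of failure. The sketch below shows the textbook relation between the two for a simple flexural limit state g = R - Q with independent normal resistance and load effect; the numbers are illustrative assumptions, not values calibrated from the WIM data or the sixty bridges.

```python
from scipy.stats import norm

# Illustrative resistance R (girder moment capacity) and load effect Q (dead + live
# load moment from the traffic model), both treated as independent normal variables.
mu_R, sigma_R = 12000.0, 1200.0   # kN*m
mu_Q, sigma_Q = 8000.0, 1500.0    # kN*m

beta = (mu_R - mu_Q) / (sigma_R**2 + sigma_Q**2) ** 0.5   # Cornell reliability index
pf = norm.cdf(-beta)                                      # corresponding failure probability

print(f"beta = {beta:.2f},  Pf = {pf:.2e}")
```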
26

Análise de confiabilidade de peças de madeira fletidas dimensionadas segundo a NBR 7190/97 / Reliability analysis of bending timber members designed according to NBR 7190/97

Adolfs, Daniel Veiga 21 November 2011
A passagem do método das tensões admissíveis para o método dos estados limites na NBR 7190 - Projeto de Estruturas de Madeira, ocorrida em 1997, foi feita por meio de calibração determinística que teve como ponto central a resistência da madeira na compressão paralela às fibras. Um dos aspectos modificados foi o dimensionamento de peças fletidas no tocante à verificação das tensões normais devidas ao momento fletor, em que é utilizada a resistência à compressão paralela às fibras. Com a intenção de averiguar o grau de segurança do modelo de cálculo para esse caso, foram realizadas análises de confiabilidade para vigas fletidas de madeira. Foram coletados dados de 549 testes de flexão de vigas, obtendo-se valores relativos à ruptura, e também foram obtidas informações estatísticas a respeito como a média e o desvio-padrão da resistência à compressão paralela às fibras, das mesmas espécies usadas nas vigas. Nas análises de confiabilidade foram utilizadas 5 variáveis aleatórias, com 5 tipos de combinações diferentes, analisadas com e sem erro de modelo para cada um dos 16 grupos de resultados levantados, totalizando 2752 análises de confiabilidade. Os resultados das análises sem o erro de modelo mostram que a norma não atinge valores suficientes para o índice de confiabilidade e que, com a introdução do erro de modelo, os resultados são mais adequados. Também se verificou que o modelo adotado pela norma é muito conservador, no caso de peças de madeira de Pinus SP classificadas. / The transition from the allowable stress method to the limit states design method in "NBR 7190 - Projeto de Estruturas de Madeira", in 1997, was made through a deterministic calibration whose central point was the strength of wood in compression parallel to the grain. One of the modified aspects was the design of bending members with regard to the verification of the normal stresses due to the bending moment, in which the compressive strength parallel to the grain is used. To assess the safety level of the design model for this case, reliability analyses were carried out for timber beams in bending. Failure data were collected from 549 bending tests on beams, together with statistical information such as the mean and standard deviation of the compressive strength parallel to the grain of the same species used in the beams. The reliability analyses used 5 random variables, with 5 different types of combinations, analyzed with and without model error for each of the 16 groups of results collected, totaling 2752 reliability analyses. The results of the analyses without model error show that the standard does not achieve sufficient values for the reliability index, whereas with the introduction of the model error the results are more adequate. It was also verified that the model adopted by the standard is very conservative in the case of graded Pinus sp. members.
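As a companion to the abstract above, the sketch below shows a crude Monte Carlo reliability analysis of a bending limit state that includes a model-error variable, in the spirit of the analyses described; the limit-state function, distributions and parameter values are invented for illustration and are not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative limit state g = ME * fm * W - (Mg + Mq), where ME is a model-error
# variable. All distributions and parameter values are assumptions for this sketch.
fm = rng.lognormal(mean=np.log(45.0), sigma=0.25, size=n)  # bending strength, MPa
W = 2.0e6                                                  # section modulus, mm^3 (deterministic)
Mg = rng.normal(20.0e6, 2.0e6, size=n)                     # permanent-load moment, N*mm
Mq = rng.gumbel(30.0e6, 5.0e6, size=n)                     # variable-load moment, N*mm
ME = rng.lognormal(mean=0.0, sigma=0.10, size=n)           # model error

g = ME * fm * W - (Mg + Mq)    # failure when g < 0
pf = np.mean(g < 0)
print(f"Estimated probability of failure: {pf:.2e}")
```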
27

Use of Human Reliability Analysis to evaluate surgical technique for rectal cancer

Wilson, Peter John January 2012
Outcomes from surgery are dependent upon technical performance, as demonstrated by the variability that exists in outcomes achieved by different surgeons following surgery for rectal cancer. It is possible to improve such outcomes through focused training and the adoption of specific surgical techniques, such as the total mesorectal excision (TME) training programme in Stockholm, which reduced local recurrence rates of cancer by 50%. It is generally accepted that good surgical technique is the enactment of a series of positive surgical actions combined with the avoidance of errors. However, the constituents of good surgical technique for rectal cancer have not yet been studied in sufficient detail to identify the specific associations between individual steps and their consequences. In this study the ergonomic principles of human reliability analysis (HRA) were applied to video recordings of rectal cancer surgery. A system of error definition and identification was developed, utilising a bespoke software solution designed for the project. Optimal camera angles and positions were determined in a virtual operating theatre. Analysis of synchronised footage from multiple camera views was performed, through which over 6,000 errors were identified across 14 procedural tasks. The sequences of events contributing to these errors are reported, and a series of error reduction mechanisms formulated for rectal cancer surgery.
28

Analysis of How Mobile Robots Fail in the Field

Carlson, Jennifer 03 March 2004
The considerable risk to human life associated with modern military operations in urban terrain (MOUT) and urban search and rescue (USAR) has led professionals in these domains to explore the use of robots to improve safety. Recent studies on mobile robot use in the field have shown a noticeable lack of reliability in real field conditions. Improving mobile robot reliability for applications such as USAR and MOUT requires an understanding of how mobile robots fail in field environments. This paper provides a detailed investigation of how ground-based mobile robots fail in the field. Forty-four representative examples of failures from 13 studies of mobile robot reliability in the USAR and MOUT domains are gathered, examined, and classified. A novel taxonomy sufficient to cover any failure a ground-based mobile robot may encounter in the field is presented. This classification scheme draws from established standards in the dependable computing [30] and human-computer interaction [40] communities, as well as recent work [6] in the robotics domain. Both physical failures (failures within the robotic system) and human failures are considered. Overall robot reliability in field environments is low, with a mean time between failures (MTBF) of between 6 and 20 hours, depending on the criteria used to determine whether a failure has occurred. Common issues with existing platforms appear to be the following: unstable control systems, chassis and effectors designed and tested for a narrow range of environmental conditions, limited wireless communication range in urban environments, and insufficient wireless bandwidth. Effectors and the control system are the most common sources of physical failures. Of the human failures examined, slips are more common than mistakes. Two-thirds of the failures examined in [6] and [7] could be repaired in the field. Failures that suspended the robot’s task until the repair was completed are the more common case, accounting for 94% of the failures reported in [13].
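To make the reliability figures above concrete, the sketch below tallies a hypothetical failure log into an MTBF estimate, a breakdown by failure source, and a field-repairability rate. The log entries and operating hours are invented; the study's data and taxonomy are far richer.

```python
from collections import Counter

# Hypothetical failure log for a small deployment: (failure source, repairable in field?).
failures = [
    ("effector", True), ("control_system", False), ("human_slip", True),
    ("communications", True), ("effector", True), ("human_mistake", False),
    ("power", True),
]
operating_hours = 95.0

mtbf = operating_hours / len(failures)                      # mean time between failures
by_source = Counter(source for source, _ in failures)       # physical vs human breakdown
field_repairable = sum(ok for _, ok in failures) / len(failures)

print(f"MTBF = {mtbf:.1f} h")
print(f"Failures by source: {dict(by_source)}")
print(f"Repairable in the field: {field_repairable:.0%}")
```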
29

Sequential Design of Experiments to Estimate a Probability of Failure.

Li, Ling 16 May 2012
This thesis deals with the problem of estimating the probability of failure of a system from computer simulations. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited, which is incompatible with the use of classical Monte Carlo methods. In fact, estimating a small probability of failure with very few simulations, as required in some complex industrial problems, is a particularly difficult problem. A classical approach consists in replacing the expensive-to-simulate model with a surrogate model that requires few computing resources. Using such a surrogate model, two operations can be achieved. The first operation consists in choosing a number, as small as possible, of simulations to learn the regions in the parameter space of the system that lead to a failure of the system. The second operation consists in constructing good estimators of the probability of failure. The contributions of this thesis consist of two parts. First, we derive SUR (stepwise uncertainty reduction) strategies from a Bayesian-theoretic formulation of the problem of estimating a probability of failure. Second, we propose a new algorithm, called Bayesian Subset Simulation, that combines the strengths of the Subset Simulation algorithm and of sequential Bayesian methods based on Gaussian process modeling. The new strategies are supported by numerical results from several benchmark examples in reliability analysis. The proposed methods show good performance compared to methods from the literature.
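The surrogate-model idea at the heart of this abstract can be illustrated in a few lines: fit a Gaussian process to a small number of "expensive" evaluations of a limit-state function, then estimate the probability of failure by cheap Monte Carlo on the surrogate. This shows only the basic ingredient, not the SUR strategies or the Bayesian Subset Simulation algorithm themselves; the limit-state function, design size and input distribution are invented, and scikit-learn's Gaussian process is assumed as a stand-in for the thesis's own modeling.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def g(x):
    """Stand-in for an expensive limit-state function (failure when g < 0)."""
    return 4.0 - x[:, 0] ** 2 - x[:, 1]

# A small design of "expensive" simulations.
X_train = rng.uniform(-3, 3, size=(30, 2))
y_train = g(X_train)

# Gaussian-process surrogate of the limit-state function.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Cheap Monte Carlo on the surrogate under a standard-normal input distribution.
X_mc = rng.standard_normal((200_000, 2))
pf_surrogate = np.mean(gp.predict(X_mc) < 0)
pf_direct = np.mean(g(X_mc) < 0)   # available here only because this toy g is cheap
print(f"Pf (surrogate) ≈ {pf_surrogate:.4f}   Pf (direct MC) ≈ {pf_direct:.4f}")
```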
30

Weather-related geo-hazard assessment model for railway embankment stability

Gitirana Jr., Gilson 01 June 2005
The primary objective of this thesis is to develop a model for the quantification of weather-related railway embankment hazards. The model for quantification of embankment hazards constitutes an essential component of a decision support system that is required for the management of railway embankment hazards. A model for the deterministic and probabilistic assessment of weather-related geo-hazards (W-GHA model) is proposed based on concepts of unsaturated soil mechanics and hydrology. The model combines a system of two-dimensional partial differential equations governing the thermo-hydro-mechanical behaviour of saturated/unsaturated soils with soil-atmosphere coupling equations. A Dynamic Programming algorithm for slope stability analysis (Safe-DP) was developed and incorporated into the W-GHA model. Finally, an efficient probabilistic and sensitivity analysis framework based on an alternative point estimate method was proposed. According to the W-GHA model framework, railway embankment hazards are assessed based on factors of safety and probabilities of failure computed using soil property variability and case scenarios.

A comprehensive study of unsaturated soil property variability is presented. A methodology for the characterization and assessment of unsaturated soil property variability is proposed. Appropriate fitting equations and parameters were selected. Probability density functions adequate for representing the unsaturated soil parameters studied were determined. Typical central tendency measures, variability measures, and correlation coefficients were established for the unsaturated soil parameters. The inherent variability of the unsaturated soil properties can be addressed using the probabilistic analysis framework proposed herein.

A large number of hypothetical railway embankments were analysed using the proposed model. The embankment analyses were undertaken in order to demonstrate the application of the proposed model and to determine the sensitivity of the factor of safety to the uncertainty in several input variables. The conclusions drawn from the sensitivity analysis study resulted in important simplifications of the W-GHA model. It was shown how unsaturated soil mechanics can be applied to the assessment of near-ground-surface stability hazards. The approach proposed in this thesis forms a protocol for the application of unsaturated soil mechanics in geotechnical engineering practice. This protocol is based on predicted unsaturated soil properties and on the use of case scenarios for addressing soil property uncertainty. Other classes of unsaturated soil problems will benefit from the protocol presented in this thesis.
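The probabilistic framework described above rests on a point estimate method for propagating soil property variability through a factor-of-safety calculation. The sketch below uses the classic Rosenblueth two-point scheme and a toy infinite-slope factor-of-safety function as a stand-in for the W-GHA stability model; the thesis uses an alternative point estimate method, and all parameter values here are illustrative assumptions.

```python
import itertools
import numpy as np

def factor_of_safety(c, phi_deg, u):
    """Toy infinite-slope factor of safety; stands in for the W-GHA stability model."""
    gamma, depth, slope_deg = 18.0, 3.0, 25.0          # kN/m^3, m, degrees (fixed here)
    beta = np.radians(slope_deg)
    sigma_n = gamma * depth * np.cos(beta) ** 2        # normal stress on the slip plane
    return (c + (sigma_n - u) * np.tan(np.radians(phi_deg))) / (
        gamma * depth * np.sin(beta) * np.cos(beta)
    )

# Means and standard deviations of the uncertain soil inputs (illustrative values):
# effective cohesion (kPa), friction angle (deg), pore-water pressure (kPa).
means = np.array([8.0, 30.0, 5.0])
stds = np.array([2.0, 3.0, 3.0])

# Rosenblueth's 2^n point estimate method: evaluate FoS at every mean +/- one sigma
# combination and weight the points equally (uncorrelated, symmetric inputs assumed).
points = [factor_of_safety(*(means + np.array(signs) * stds))
          for signs in itertools.product([-1, 1], repeat=3)]

print(f"E[FoS] ≈ {np.mean(points):.2f},  std[FoS] ≈ {np.std(points):.2f}")
```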
