471

Estudo sobre a determinação de antimônio em amostras ambientais pelo método de análise por ativação com nêutrons. Validação da metodologia e determinação da incerteza da medição / A study on antimony determination in environmental samples by neutron activation analysis. Validation of the methodology and determination of the measurement uncertainty

Tassiane Cristina Martins Matsubara, 09 September 2011
Antimony is an element found at low concentrations in the environment. However, its determination has attracted great interest because of its known toxicity and its increasing application in industry. Determining antimony has been a challenge for researchers, since the element occurs at low concentrations, which makes its analysis a difficult task. Although neutron activation analysis (NAA) is an appropriate method for the determination of various elements in different types of matrices, in the case of Sb the analysis presents some difficulties, mainly due to spectral interferences. The objective of this research was to validate the NAA method for Sb determination in environmental samples. To establish appropriate conditions for Sb determination, preliminary assays were carried out prior to the analysis of certified reference materials (CRMs). The experimental procedure was to irradiate samples together with a synthetic Sb standard for periods of 8 or 16 hours in the IEA-R1 nuclear research reactor, followed by gamma-ray spectrometry. Sb was quantified by measuring the radioisotopes 122Sb and 124Sb. Preliminary assays indicated the presence of Sb in the Whatman No. 40 filter paper used to prepare the synthetic standard, but at very low levels that could be considered negligible. The plastic material used as an envelope for sample irradiation must be chosen carefully, since, depending on the plastic, it may contain Sb. Analysis of the stability of the diluted Sb standard solution showed no significant change in the Sb concentration within eight months of its preparation. Results obtained for the certified reference materials indicated the formation of the radioisotopes 76As, 134Cs, and 152Eu, which can interfere with Sb determination via 122Sb because of the proximity of their gamma-ray energies. In addition, the high activity of 24Na can mask the 122Sb peak and hinder its detection. The CRM analyses indicated that the accuracy and precision of the Sb results depend mainly on the type and composition of the matrix, the Sb concentration in the sample, the radioisotope measured, and the decay time used for the measurements. Evaluation of the components contributing to the uncertainty of the Sb concentration showed that the largest contribution comes from the counting statistics of the sample. The uncertainty evaluation also indicated that the combined standard uncertainty depends on the radioisotope measured and on the decay time used for counting. This study showed that NAA is a very suitable method for Sb determination in environmental samples, furnishing results with low uncertainty values; being a purely instrumental technique, it also allows the analysis of a large number of samples.
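As a rough illustration of the comparator method and of counting statistics dominating the uncertainty budget, the Python sketch below computes an Sb concentration from sample and standard count rates and combines the main uncertainty components in quadrature. All numerical values (counts, masses, relative uncertainties) are invented for illustration and are not taken from the thesis.

```python
import math

HALF_LIFE_122SB = 2.7238 * 24 * 3600  # 122Sb half-life in seconds

def decay_corrected_rate(net_counts, live_time_s, t_decay_s, half_life_s):
    """Net peak count rate corrected back to a common reference time."""
    lam = math.log(2) / half_life_s
    return (net_counts / live_time_s) * math.exp(lam * t_decay_s)

# Sample and standard irradiated together, both counted 4 days after irradiation.
r_sample = decay_corrected_rate(net_counts=5200, live_time_s=3600,
                                t_decay_s=4 * 24 * 3600, half_life_s=HALF_LIFE_122SB)
r_std = decay_corrected_rate(net_counts=48000, live_time_s=3600,
                             t_decay_s=4 * 24 * 3600, half_life_s=HALF_LIFE_122SB)

m_sb_std_ug = 1.50   # Sb mass in the synthetic standard (assumed)
m_sample_g = 0.2500  # sample mass (assumed)

# Comparator method: concentration scales with the ratio of count rates.
conc_ug_per_g = (r_sample / r_std) * m_sb_std_ug / m_sample_g

# Relative uncertainty components combined in quadrature; counting
# statistics of the sample dominate, as the abstract reports.
u_rel = math.sqrt(
    (math.sqrt(5200) / 5200) ** 2 +    # sample counting statistics
    (math.sqrt(48000) / 48000) ** 2 +  # standard counting statistics
    0.005 ** 2 +                       # standard preparation (assumed)
    0.001 ** 2                         # sample mass (assumed)
)
print(f"Sb = {conc_ug_per_g:.3f} ug/g +/- {100 * u_rel:.1f}% (k=1)")
```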
472

A geometrical framework for forecasting cost uncertainty in innovative high value manufacturing

Schwabe, Oliver January 2018
Increasing competition and regulation are raising the pressure on manufacturing organisations to innovate their products. Innovation is fraught with significant uncertainty in whole-product life cycle costs, which can lead to hesitance in investing and, in turn, a loss of competitive advantage. A product is innovative precisely when the minimum information required by contemporary forecasting methods for creating accurate cost models does not exist. The scientific research challenge is that no forecasting methods are available for which cost data from only one time period suffices. The aim of this research study was to develop a framework for forecasting cost uncertainty using cost data from only one time period. The developed framework consists of components that prepare minimum information for conversion into a future uncertainty range, forecast that range, and propagate it over time. The uncertainty range is represented as a vector space describing the state space of actual cost variance for 3 to n reasons; the dimensionality of that space is reduced through vector addition, and a series of basic operators is applied to the aggregated vector to create a future state space of probable cost variance. The framework was validated through three case studies drawn from the United States Department of Defense. The novelty of the framework lies in the use of geometry to increase the insight drawn from cost data from only one time period, and in the propagation of cost uncertainty based on the geometric shape of uncertainty ranges. To demonstrate its benefits to industry, the framework was implemented at an aerospace manufacturing company to identify potentially inaccurate cost estimates in early stages of the whole product life cycle.
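The geometric core of the framework (variance drivers as vectors, aggregation by vector addition, propagation by simple operators) can be pictured with the minimal sketch below. The driver names, magnitudes, and growth operator are assumptions for illustration, not the thesis's actual data or operator series.

```python
import numpy as np

# Each reason for cost variance becomes a 2-D vector; values are illustrative.
drivers = {
    "labour rate change":  np.array([0.04, 0.02]),
    "material escalation": np.array([0.07, -0.01]),
    "rework":              np.array([0.02, 0.05]),
}

# Dimensionality reduction by vector addition: n driver vectors -> 1 aggregate.
aggregate = sum(drivers.values())

def propagate(v: np.ndarray, periods: int, growth: float = 1.1) -> np.ndarray:
    """Apply a basic scaling operator repeatedly to project the uncertainty
    range forward in time (a stand-in for the thesis's operator series)."""
    return v * growth ** periods

future = propagate(aggregate, periods=4)
print("aggregated cost-variance vector:", aggregate)
print("propagated 4 periods ahead:    ", future)
```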
473

The improvement of vehicle noise variability through the understanding of phase angle and NVH analysis methods

Dowsett, Amy January 2018
Noise, vibration and harshness (NVH) levels in the luxury automotive industry are used by customers as a subjective measure of vehicle quality. NVH behaviour can be tuned by adjusting the vehicle design, with simulations used to predict it. Changes made after the design stage has been completed are expensive and time consuming, so it is important to produce accurate simulations of the product. Variability exists to some extent in all products, even those just off the production line; if the level of variability is high, only a small portion of products will meet the predicted behaviour. The aim of the project is to provide information that may lead to the reduction of variability in an automotive vehicle. This is achieved by quantifying the statistical spread of frequency response functions (FRFs) in a set of nominally identical vehicles. Once overall levels have been calculated, the locations of the most variable sources can be identified. The project also seeks to develop new methods of analysing the system phase response, to determine whether further information may be extracted from it compared with the magnitude response. Three main themes run through this thesis. The first is the quantification of variability due to the measurement-taking process, covered in chapter 3: a novel application of a method to separate the measurement variability from the overall system uncertainty was achieved, along with the quantification of the vehicle-to-vehicle variability. The second theme concerns the identification of variability sources. This is realised in chapter 4 and chapter 6 as a set of structural and acoustic tests on a luxury sedan door. The trim was found to be held to the door panel by a series of 11 polymer clips and 4 metal screws. The variability introduced by small changes to a significant boundary condition at the door trim was quantified, showing that removing rigid clips had a more significant effect on the overall variability than removing a loose clip, and that clips at the corners were the most sensitive to change. The final theme outlines and tests new analysis methods on the phase and compares the statistical spread of the phase with the equivalent spread of the magnitude. Using data from the same tests, the two results were found to be approximately the same in most cases.
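One standard way to separate repeat-measurement variability from vehicle-to-vehicle variability, as chapter 3 of the thesis sets out to do, is a one-way variance decomposition over repeated FRF measurements. The sketch below applies this to synthetic data; the decomposition and all magnitudes are assumptions, not the thesis's actual method or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic FRF magnitudes (dB) at one frequency line:
# 10 nominally identical vehicles, 5 repeat measurements each.
n_vehicles, n_repeats = 10, 5
vehicle_means = rng.normal(60.0, 2.0, size=n_vehicles)  # vehicle-to-vehicle spread
frf = vehicle_means[:, None] + rng.normal(0.0, 0.5, size=(n_vehicles, n_repeats))

# Within-vehicle variance estimates the measurement variability; subtracting
# its share from the variance of the per-vehicle means leaves the
# vehicle-to-vehicle component (one-way ANOVA decomposition).
var_meas = frf.var(axis=1, ddof=1).mean()
var_total_means = frf.mean(axis=1).var(ddof=1)
var_vehicle = var_total_means - var_meas / n_repeats

print(f"measurement std:        {np.sqrt(var_meas):.2f} dB")
print(f"vehicle-to-vehicle std: {np.sqrt(max(var_vehicle, 0.0)):.2f} dB")
```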
474

Essays on financial frictions

Yi, Mingzi 05 December 2018
This dissertation investigates agents' behavior in a world with financial frictions such as financial regulations and information asymmetries. The three chapters of the dissertation are devoted to answering the following questions: Does financial regulation slow credit supply growth by imposing higher lending standards on banks? How does business volatility contribute to the declining firm entry rate of recent decades through the credit channel? How does a financially distressed firm respond to risks when it is deemed "too big to fail"? Although widely acknowledged for enhancing financial stability, the Dodd-Frank Act (DFA) has continued to attract criticism arguing that it contracts credit supply and, as a consequence, reduces GDP and creates pressure on unemployment. In chapter I, I provide empirical and theoretical evidence of the DFA's negative impacts on credit supply. Based on a structural banking model, I find that the DFA has reduced credit supply by at least 3.1% of the current volume of bank credit. This sizable loss partially validates the concern that the Wall Street reform put a strain on the economy and prevented it from fully recovering through credit channels. In chapter II, I present empirical and theoretical evidence suggesting that unexpected surges in economic uncertainty hurt startups through the credit channel: rising default rates accompanying heightened economic turbulence drive up credit spreads. With startups facing increasing funding costs, entry barriers go up and entry rates decline. Through simulations of an industry model incorporating dynamic entry and exit, I show that unexpected uncertainty shocks can generate larger and more persistent impacts on economic output in a world with financial frictions than in one without. In chapter III, I argue that the risk-taking behavior of a financially distressed firm is exacerbated when equity holders have greater bargaining power over debt holders. Using a firm valuation model that permits endogenous default on debt, I show that the threshold value triggering risk-taking behavior is positively related to the equity holders' bargaining power in debt renegotiations. Firms anticipating a final bailout therefore intentionally undertake riskier investments.
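The risk-shifting logic of chapter III can be illustrated with a toy two-state example (not the dissertation's valuation model): under limited liability, equity holders of a distressed firm keep the upside of a risky project while the downside falls on debt holders. All payoffs below are assumed.

```python
# Toy risk-shifting example: the firm owes D = 100. Equity is worth
# max(V - D, 0); debt holders receive min(V, D).

D = 100.0

def equity(v): return max(v - D, 0.0)
def debt(v):   return min(v, D)

def expected(payoff, outcomes):
    return sum(p * payoff(v) for v, p in outcomes)

# Safe project: firm value 95 for sure, so the firm is distressed (V < D).
safe = [(95.0, 1.0)]
# Risky project: same expected firm value, but half the time it pays 140.
risky = [(140.0, 0.5), (50.0, 0.5)]

for name, project in [("safe", safe), ("risky", risky)]:
    print(f"{name:5s}  firm={expected(lambda v: v, project):6.1f}"
          f"  equity={expected(equity, project):6.1f}"
          f"  debt={expected(debt, project):6.1f}")
# Equity prefers the risky project (20 > 0) even though total firm value is
# unchanged; the loss falls on debt holders (75 < 95).
```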
475

A new method of threshold and gradient optimization using class uncertainty theory and its quantitative analysis

Liu, Yinxiao 01 May 2009
The knowledge of thresholds and gradients at different tissue interfaces is of paramount interest in image segmentation and other imaging methods and applications. Most thresholding and gradient selection methods focus primarily on image histograms and therefore fail to harness the information generated by intensity patterns in an image. We present a new thresholding and gradient optimization method which accounts for the spatial arrangement of the intensities forming different objects in an image. Specifically, we use object class uncertainty, a histogram-based feature, and formulate an energy function based on its correlation with image gradients that characterizes the objects and shapes in a given image. Finally, this energy function is used to determine optimum thresholds and gradients for various tissue interfaces. The underlying theory is that objects manifest themselves with fuzzy boundaries in an acquired image and that, in a probabilistic sense, intensities with high class uncertainty are associated with high image gradients, generally indicating object/tissue interfaces. The new method simultaneously determines optimum values for both threshold and gradient parameters at different object/tissue interfaces. The method has been applied to several 2D and 3D medical image data sets and has successfully determined both thresholds and gradients for different tissue interfaces, even when some of the thresholds are almost impossible to locate in the histograms. The accuracy and reproducibility of the method were examined using 3D multi-row detector computed tomography images of two cadaveric ankles, each scanned three times with the specimen repositioned between scans.
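A minimal sketch of the idea, assuming a simplified two-class Gaussian model for the histogram (the dissertation's formulation is more general): compute each intensity's class-membership uncertainty as a posterior entropy, then score candidate thresholds by how strongly that uncertainty correlates with gradient magnitude.

```python
import numpy as np

def class_uncertainty(intensities, theta):
    """Entropy of posterior class membership for each intensity, under a
    two-class Gaussian model split at threshold theta (simplified)."""
    lo, hi = intensities[intensities <= theta], intensities[intensities > theta]
    if len(lo) < 2 or len(hi) < 2:
        return np.zeros_like(intensities)
    p_lo, p_hi = len(lo) / len(intensities), len(hi) / len(intensities)
    def pdf(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    f_lo = p_lo * pdf(intensities, lo.mean(), lo.std() + 1e-9)
    f_hi = p_hi * pdf(intensities, hi.mean(), hi.std() + 1e-9)
    post = np.clip(f_lo / (f_lo + f_hi + 1e-12), 1e-12, 1 - 1e-12)
    return -(post * np.log2(post) + (1 - post) * np.log2(1 - post))

def energy(image, theta):
    """Correlation between per-pixel class uncertainty and gradient magnitude."""
    gy, gx = np.gradient(image)
    grad = np.hypot(gx, gy).ravel()
    unc = class_uncertainty(image.ravel(), theta)
    return np.corrcoef(unc, grad)[0, 1]

# Synthetic two-region image: pick the threshold that best aligns
# high class uncertainty with the strong gradients at the interface.
rng = np.random.default_rng(1)
img = np.full((64, 64), 40.0)
img[:, 32:] = 120.0
img += rng.normal(0, 8, img.shape)
best = max(np.arange(50, 111, 5), key=lambda t: energy(img, t))
print("optimal threshold ~", best)
```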
476

Understanding travelers' route choice behavior under uncertainty

Sikka, Nikhil 01 May 2012
The overall goal of this research is to measure drivers' attitudes towards uncertain and unreliable routes. The route choice modeling is done within the discrete choice framework and involves the use of stated preference data. The first set of analyses elicits travelers' attitudes towards unreliable routes. The results provide useful information on how commuters value the chances of experiencing delay days on their routes. The frequency of days with unexpected delays also measures travel time reliability in a way that is easy for day-to-day commuters to understand. As such, behaviorally more realistic values are obtained from this analysis to capture travelers' attitudes towards reliability. We then model attitudes toward travel time uncertainty using non-expected utility theories within the random utility framework. Unlike previous studies that only include risk attitudes, we also incorporate attitudes toward ambiguity, where drivers are assumed to have imperfect knowledge of travel times. To this end, we formulated non-linear logit models capable of embedding probability weighting and risk/ambiguity attitudes. A more realistic willingness-to-pay structure is then derived which takes into account travel time uncertainty and behavioral attitudes. Finally, we present a conceptual framework for using a descriptive utility theory, i.e., cumulative prospect theory, in forecasting the demand for a variable tolled lane, and we highlight the issues that arise when a prescriptive model of behavior is applied to forecast demand for a tolled lane.
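As a sketch of the cumulative prospect theory machinery referred to above, the following uses the standard Tversky-Kahneman (1992) value and probability-weighting functions with textbook parameters to evaluate a risky tolled route against a reliable one. The functional forms, parameters, and travel-time numbers are illustrative assumptions; the dissertation's specification may differ.

```python
# Cumulative prospect theory sketch for a two-outcome route choice.
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61  # Tversky & Kahneman (1992) values

def value(x):
    """Value of a gain/loss in minutes relative to the reference trip time."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p, gamma=GAMMA):
    """Inverse-S probability weighting: small probabilities are overweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_two_outcome(gain, p_gain, loss, p_loss):
    """CPT value of a prospect with one gain and one loss outcome
    (same weighting curve used for both domains, a simplification)."""
    return weight(p_gain) * value(gain) + weight(p_loss) * value(loss)

# Reference point: the usual 30-minute trip. Tolled lane: saves 10 minutes
# 80% of the time, loses 5 minutes to an incident 20% of the time.
tolled = cpt_two_outcome(gain=10, p_gain=0.8, loss=-5, p_loss=0.2)
reliable = value(0)  # staying on the usual route: no gain, no loss
print(f"CPT value of tolled lane: {tolled:.2f} vs reliable route: {reliable:.2f}")
```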
477

Evaluation of methodologies for continuous discharge monitoring in unsteady open-channel flows

Lee, Kyutae 01 December 2013
Rating curves are the conventional means of providing continuous estimates of discharge in rivers. Among the most often adopted assumptions in building these curves are steady and uniform open-channel flow conditions, which in turn provide a one-to-one relationship between the variables involved in discharge estimation. The steady flow assumption is not applicable during the propagation of storm-generated waves, hence the question of the validity of steady rating curves during unsteady flow is of both scientific and practical interest. Scarce experimental evidence and analytical inference substantiate that during unsteady flows the relationship between some of the variables is not unique, leading to looped rating curves (also labeled hysteresis). Neglecting the unsteadiness of the flow when it is large can significantly affect the accuracy of flow estimation. Currently, the literature offers no criteria for a comprehensive evaluation of methods for estimating the departure of looped rating curves from steady ones, nor for identifying the most appropriate means of dynamically capturing hysteresis under different possible river flow conditions. Therefore, the overarching goal of this study was to explore the uncertainty of the conventional approaches for constructing stage-discharge rating curves (hQRCs) and to evaluate methodologies for accurate and continuous discharge monitoring in unsteady open-channel flows using analytical inference, index velocity rating curves (VQRCs), and the continuous slope-area method (CSA), with consideration of discharge measurement uncertainty. The study demonstrates conceptual and experimental evidence illustrating some of the impacts of unsteady flow on rating curves and suggests the development of a uniform end-to-end methodology to enhance the accuracy of current protocols for continuous streamflow estimation under both steady and unsteady river conditions. Moreover, a hysteresis diagnostic method is presented to provide a convenient way to evaluate when and where hysteresis becomes significant as a function of site and storm event characteristics. The measurement techniques and analysis methodologies proposed herein allow dynamic tracking of both flood wave propagation and the associated uncertainty in the conventional RCs.
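A classical way to quantify the looped-rating effect described above is the Jones formula, which corrects the steady rating for unsteadiness using the rate of change of stage. The sketch below assumes a power-law steady rating and invented values for channel slope and wave celerity.

```python
import math

# Jones correction for looped (hysteretic) rating curves:
#   Q = Q_steady * sqrt(1 + (dh/dt) / (S0 * c))
# All numbers are assumed for illustration.

def steady_rating(h, a=25.0, b=1.6):
    """Assumed power-law rating curve Q = a * h**b (h in m, Q in m^3/s)."""
    return a * h ** b

def jones_discharge(h, dh_dt, S0, c):
    """Unsteady discharge from stage h and rate of stage change dh/dt."""
    return steady_rating(h) * math.sqrt(1.0 + dh_dt / (S0 * c))

S0, c = 2e-4, 2.0  # channel slope (-) and kinematic wave celerity (m/s)
h = 2.0            # stage (m)
for dh_dt in (+1e-4, 0.0, -1e-4):  # rising limb, steady, falling limb (m/s)
    print(f"dh/dt={dh_dt:+.0e} m/s -> Q={jones_discharge(h, dh_dt, S0, c):7.1f} m^3/s")
# The same stage yields a higher discharge on the rising limb than on the
# falling limb: the loop that a steady rating curve cannot capture.
```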
478

Towards a better representation of radar-rainfall: filling gaps in understanding uncertainties

Seo, Bong Chul 01 December 2010
Radar-rainfall uncertainty quantification has been recognized as an intricate problem due to the complexity of the multi-dimensional error structure, which is also associated with space and time scales. The error structure is usually characterized by two moments of the error distribution: bias and error variance. Despite numerous efforts to investigate radar-rainfall uncertainties, many questions remain unanswered. This dissertation uses two statistical descriptions (mean and variance) of the error distribution to highlight and describe some of the remaining gaps in representing radar-rainfall uncertainties. The four central issues addressed are: (1) investigation of the radar relative bias caused by radar calibration; (2) statistical modeling of the range-dependent error arising from the radar beam geometry; (3) scale-dependent variability of the radar-rainfall and rain gauge error covariance; and (4) scale-dependence of the radar-rainfall error variance. The first two issues describe systematic features of the main error sources of radar-rainfall. The other two are associated with quantifying the radar error variance using the error variance separation (EVS) method, which accounts for the spatial sampling mismatch between radar and rain gauge data. This study captures the main systematic features of radar measurements (systematic bias arising from radar calibration and range-dependent errors) without using ground reference data, as well as the error variance structure with respect to the spatio-temporal transformation of the measurements, for further application to hydrologic fields. Such consideration of radar-rainfall uncertainties, represented by error mean and variance, can enhance the characterization of the uncertainty structure and yield a better understanding of the physical process of precipitation.
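The EVS idea can be sketched in a few lines: if radar and gauge errors are independent, the variance of radar-gauge differences is the sum of the two error variances, so the radar component follows by subtraction once the gauge sampling error variance is known. The data below are synthetic and all magnitudes are assumed.

```python
import numpy as np

# Error variance separation (EVS) sketch: Var(R - G) = var_radar + var_gauge
# when the two errors are independent, so the radar error variance is
# recovered by subtracting the gauge (point-to-area sampling) component.

rng = np.random.default_rng(42)
n = 500
true_rain = rng.gamma(2.0, 2.0, n)         # areal rainfall (mm)
radar = true_rain + rng.normal(0, 1.0, n)  # radar error std = 1.0 mm
gauge = true_rain + rng.normal(0, 0.6, n)  # gauge sampling error std = 0.6 mm

var_diff = np.var(radar - gauge, ddof=1)   # ~ 1.0**2 + 0.6**2
var_gauge_err = 0.6 ** 2                   # assumed known from the gauge network
var_radar_err = var_diff - var_gauge_err

print(f"Var(R-G) = {var_diff:.2f}, recovered radar error variance = "
      f"{var_radar_err:.2f} (true 1.00)")
```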
479

A multiscale investigation of the role of variability in cross-sectional properties and side tributaries on flood routing

Barr, Jared Wendell 01 July 2012
A multi-scale Monte Carlo simulation was performed on nine streams of increasing Horton order to investigate the role that variability in hydraulic geometry and resistance plays in modifying a flood hydrograph. The study assesses the potential to replace the actual cross-sections along a stream reach with a prismatic channel that has mean cross-sectional properties. The primary finding is that the flood routing model becomes less sensitive to variability in the channel geometry as the Horton order of the stream increases. It was also established that even though smaller streams are more sensitive to variability in hydraulic geometry and resistance, replacing the cross-sections along the channel with a characteristic reach-wise average cross-section is still a suitable approximation. Finally, a case study applying this methodology to a natural river was performed, with promising results.
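A minimal sketch of this kind of Monte Carlo experiment, with the channel's geometric and resistance variability collapsed into perturbed Muskingum routing parameters (an assumption; the study routed through actual surveyed cross-sections):

```python
import numpy as np

rng = np.random.default_rng(7)

def muskingum(inflow, K, X, dt=1.0):
    """Muskingum channel routing; K (hr) and X (-) stand in for the
    reach's geometry and resistance."""
    d = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / d
    c1 = (dt + 2 * K * X) / d
    c2 = (2 * K * (1 - X) - dt) / d
    out = np.empty_like(inflow)
    out[0] = inflow[0]
    for t in range(1, len(inflow)):
        out[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[t - 1]
    return out

# Triangular inflow hydrograph (m^3/s), hourly steps.
inflow = np.concatenate([np.linspace(10, 100, 12), np.linspace(100, 10, 24)])

# Monte Carlo: perturb K and X to mimic variability in cross-sectional
# geometry and resistance along the reach (magnitudes assumed).
peaks = []
for _ in range(1000):
    K = max(rng.normal(6.0, 1.0), 1.0)          # travel time, hours
    X = np.clip(rng.normal(0.2, 0.05), 0.0, 0.5)
    peaks.append(muskingum(inflow, K, X).max())

mean_peak = muskingum(inflow, 6.0, 0.2).max()   # the "mean prismatic channel"
print(f"peak with mean parameters: {mean_peak:.1f} m^3/s")
print(f"Monte Carlo peaks: {np.mean(peaks):.1f} +/- {np.std(peaks):.1f} m^3/s")
```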
480

Impacts of Distributions and Trajectories on Navigation Uncertainty Using Line-of-Sight Measurements to Known Landmarks in GPS-Denied Environments

Lamoreaux, Ryan D. 01 December 2017
Unmanned vehicles are increasingly common in our world today. Self-driving ground vehicles and unmanned aerial vehicles (UAVs) such as quadcopters have become the fastest growing area of automated vehicles research. These systems use three main processes to travel autonomously from one location to another: guidance, navigation, and control (GNC). Guidance refers to the process of determining a desired path of travel or trajectory, affecting velocities and orientations; examples of guidance activities include path planning and obstacle avoidance. Effective guidance decisions require knowledge of one's current location. Navigation systems typically answer questions such as: "Where am I? What is my orientation? How fast am I going?" Finally, the process is tied together when controls are implemented. Controls use navigation estimates (e.g., "Where am I now?") and the desired trajectory from guidance processes (e.g., "Where do I want to be?") to drive the moving parts of the system toward the relevant goals. Navigation in autonomous vehicles involves intelligently combining information from several sensors to produce accurate state estimates. To date, global positioning systems (GPS) occupy a crucial place in most navigation systems. However, GPS is not universally reliable; even when available, it can be easily spoofed or jammed, rendering it useless. Thus, navigation within GPS-denied environments is an area of deep interest in both military and civilian applications. Image-aided inertial navigation is an alternative navigation solution in GPS-denied environments. One form of image-aided navigation measures the bearing from the vehicle to a feature or landmark of known location using a single-lens imager, such as a camera, to deduce information about the vehicle's position and attitude. This work uncovers and explores several of the impacts of trajectories and landmark distributions on the navigation information gained from this type of aiding measurement. To do so, a modular system model and extended Kalman filter (EKF) are described and implemented. A quadrotor system model is first presented and used to produce sensor data for several trajectories of varying shape, altitude, and landmark density. Next, navigation data is produced by running the sensor data through the EKF. The data is plotted and examined to determine the effects of each variable, and these effects are then explained. Finally, an equation describing the quantity of information in each measurement is derived, related to the patterns seen in the data, and used to explain selected patterns. Other uses of this equation are presented, including applications to path planning and landmark placement.
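A stripped-down planar version of the aiding measurement described above: an EKF position update from a single bearing to a landmark of known location. The thesis's quadrotor filter carries a much larger state; the two-element state, noise levels, and positions here are assumptions for illustration.

```python
import numpy as np

# Minimal 2-D EKF update using one bearing measurement to a known landmark.
# State: vehicle position [x, y] only.

landmark = np.array([50.0, 20.0])  # known landmark position (m)
x = np.array([10.0, 5.0])          # prior position estimate
P = np.diag([4.0, 4.0])            # prior covariance (m^2)
R = np.radians(1.0) ** 2           # bearing noise variance (rad^2)

def bearing(pos):
    d = landmark - pos
    return np.arctan2(d[1], d[0])

# Jacobian of the bearing w.r.t. position: with d = landmark - pos and
# q = |d|^2, dtheta/dx = d_y/q and dtheta/dy = -d_x/q.
d = landmark - x
q = d @ d
H = np.array([d[1] / q, -d[0] / q])

# Simulated measurement from an assumed true position, plus a small offset.
z = bearing(np.array([12.0, 4.0])) + 0.005
innov = np.arctan2(np.sin(z - bearing(x)), np.cos(z - bearing(x)))  # wrap angle

S = H @ P @ H + R           # innovation variance (scalar)
K = (P @ H) / S             # Kalman gain
x = x + K * innov
P = P - np.outer(K, H @ P)  # (I - K H) P

print("updated position:", x.round(2))
```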
